Re: [Foundation-l] Controversial content software status - the image filter disguised under a new label

2012-03-13 Thread Tobias Oelgarte

On 13.03.2012 03:39, Andreas Kolbe wrote:


It's not me who's uploading hundreds of pornographic media onto Wikimedia
sites. There are places for porn online, just like there are places for
online poker, and amateur digital art. I have no problem with any of them.
But listen to yourself – you are accusing me of prudery because I say that
as a tax-exempt educational website we should be handling porn and other
explicit content as responsibly – no more, no less – as Google, YouTube or
Flickr.

Are the adult media sharing groups in Flickr populated by prudes? I don't
think so. But are they in favour of abandoning the Flickr rating system?
No. Are Google right-wingers? No, and they happen to be among our biggest
donors and benefactors.

Your "porn must be free" stance puts you in a fringe corner, from
whose perspective the entirety of mainstream society looks like a
bunch of dastardly right-wing prudes.

Andreas

No. I'm not accusing you of prudery, but of making wrongly cited
statements. Your assumption is that we have to sacrifice neutrality to
please an audience that doesn't want to see what it is looking for...
Great start!


No one said that ""porn must be free"" (double quotes, because it is a
quote of a quote that never was a quote to begin with). All we said
was: all content has to be treated equally.


What you are doing is anti-porn lobbying and nothing else. It is not for
the benefit of the project. Your current aim is to change/sacrifice the
original goal of the project while arguing that it would help us reach
more users. But what is the price of a book that, in the context of
education, only contains what you already know or want to see? It isn't
worth a cent. It's a failed mission.


nya~ (said the cat as it faced a palm)




Re: [Foundation-l] Controversial content software status - the image filter disguised under a new label

2012-03-13 Thread Tobias Oelgarte

On 13.03.2012 10:39, Andreas Kolbe wrote:

On Tue, Mar 13, 2012 at 9:20 AM, Tobias Oelgarte
tobias.oelga...@googlemail.com  wrote:


On 13.03.2012 03:39, Andreas Kolbe wrote:


No. I'm not accusing you of prudery, but of making wrongly cited
statements. Your assumption is that we have to sacrifice neutrality to
please an audience that doesn't want to see what it is looking for... Great
start!



Neutrality is following what our sources do.


¹ Depends on:
   * the definition of "sources"
   * the neutrality of the sources themselves
   * the context of "do", with respect to "could do"/"might do"/"supposed to do"/...

   * the target audience ("to entertain" vs. "to educate oneself")



All we said was: all content has to be treated equally.



That is a fringe position in the real world.


That is the encyclopedic view of the world. Even if it might not be
achievable, it is the aim.


nya~ (said the cat leaning at the window)



Re: [Foundation-l] Controversial content software status - the image filter disguised under a new label

2012-03-12 Thread Tobias Oelgarte
I'm tired of replying to this kind of comment, since I have said everything
important multiple times already. So I will leave it at that and only
write the following:


Sorry, but your comments are total bullshit¹ and you know it.

 ¹ including strong language, overly repeated selective examples,
bending of words, bending of facts, and accusations that aren't true.


nya~ (said the lobby cat and repeated itself again)


On 12.03.2012 20:22, Andreas Kolbe wrote:

On Mon, Mar 12, 2012 at 5:05 PM, Fae fae...@gmail.com wrote:


Strangely enough, searching Commons for "Male figure" rather than
"Male human" shows me artwork from the National Museum of African Art
and a Michelangelo Buonarroti sketch from the Louvre in the top matches.
No problem with wading through 100 dicks and arseholes. In fact,
carefully checking through the first 100 matches of that search gave
me no explicit photographs of naked people or their private parts at
all.



Well, if you just search for "male", you still get lots of penises and
sphincters.

http://en.wikipedia.org/w/index.php?title=Special%3ASearch&profile=images&search=male&fulltext=Search


Bear in mind that this is what students get in schools, too.




Having a better optimized search engine is the issue here, not
filtering all images of body parts.



I agree that a better search engine is part of the answer. Niabot made an
excellent proposal (clustered search) a week ago, which is written up here:

http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming#Clustering_for_search_results_on_Commons


But I don't think it obviates the need for a filter, which
is frankly standard even in mainstream *Western* sites that contain adult
material.




Commons has over 10,000,000
images; having several hundred images of human genitals is not
unexpected, nor a reason to give up on collaboration and turn to
extremes of lobbying multiple authorities and newspapers with claims
that the WMF is promoting paedophilia, with the side effect of fuelling
well-known internet stalkers to harass staff and users.



We have had a consistent problem with pedophilia advocates in Commons
becoming involved in curating sexual images. It is a problem when an editor
with a child pornography conviction that was prominent enough to hit the
press, who did several years in jail and was deported from the US, is so
involved in our projects.

It is a problem when that editor's block is promptly endorsed by the
arbitration committee on English Wikipedia, but is equally quickly
overturned in Commons.

It is a problem if a Commons admin says, when being made aware of Sue
Gardner's statement about Wikimedia's zero-tolerance policy towards
pedophilia advocacy, that

"You can quote Sue if you want - but Sue is Sue and not us. Sue also tried
to install a image filter and was bashed by us."

http://commons.wikimedia.org/w/index.php?title=Commons:Administrators%27_noticeboard/User_problems&diff=prev&oldid=68051777


By the way, that statement of Sue's has now been removed from the Meta page
on pedophilia:

http://meta.wikimedia.org/w/index.php?title=Pedophilia&diff=3557747&oldid=3546718


Now, English Wikipedia has for some time had a well-defined process for
such cases. They are not to be discussed on-wiki, but are a matter for
private arbcom communication. That is sensible. However, Commons has lacked
both an arbitration committee, and any equivalent policy. (There are
efforts underway now to write one:
http://commons.wikimedia.org/wiki/Commons:Child_protection)

This being so, there has been no other way to address this in Commons than
to discuss it on-wiki, and it is a problem if an editor who posts evidence
on Commons proving that the person in question has continued to advocate
pedophilia online quite recently, and well after their release from prison,
is blocked for harassment, while the editor in question remains free to
help curate pornographic material. But that is Commons for you.

I am afraid that to most people out there in the real world, it will seem
absolutely extraordinary that an educational charity lets someone with a
child pornography conviction curate adult material, while its
administrators block an editor who points out that the person has continued
to be an open and public "childlove" advocate online.

Andreas






Re: [Foundation-l] Image filter

2012-03-12 Thread Tobias Oelgarte

On 12.03.2012 23:14, Andrew Gray wrote:

On 11 March 2012 00:23, David Gerard dger...@gmail.com wrote:

On 10 March 2012 22:15, Andrew Gray andrew.g...@dunelm.org.uk wrote:


The image filter may not be a good solution, but too much of the
response involves saying "we're fine, we're neutral, we don't need to
do anything" and leaving it there; this isn't the case, and we do need
to think seriously about these issues without yelling "censorship!"
any time someone tries to discuss the problem.

There are theoretical objections, and then there are the actual objectors:

https://en.wikipedia.org/wiki/Talk:Main_Page#Gay_pornography

The objector here earnestly and repeatedly compares the words "gay
pornographic" in *text* on the page to images of child pornography.

Well, yes, and everyone else involved in that discussion is (at some
length) telling them they're wrong.

There are *other* actual objections, and ones with some sense behind
them; the unexpected Commons search results discussed ad nauseam, for
example. I don't think one quixotic and mistaken complaint somehow
nullifies any other objection people can make about entirely different
material...

At the same time we have a huge number of search terms that give the
expected results, while we only see the examples where things go wrong. I
remember that Andreas picked "drawing style" as an example.[1] Was this
just a coincidence? No, it wasn't. He actually knew about an image that
I uploaded some time ago; he attacked it later on and has now used its file
description to construct an example.[2] That's how these examples are
created.


Additionally, I proposed a solution for the search a while ago that
would avoid the problems on both sides entirely.[3] If we, the board
or the foundation put some heart into it, then we would have one
less problem, even though I don't see it as a problem as it currently
stands. But I would also benefit from this kind of improved search. (No
tagging, no rating, no extra work for users, and still better.)


[1] 
http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming#Buttons_to_switch_images_off_and_on
[2] 
http://commons.wikimedia.org/wiki/File:On_the_edge_-_free_world_version.jpg
[3] 
http://commons.wikimedia.org/wiki/Commons:Requests_for_comment/improving_search#A_little_bit_of_intelligence
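
As a purely hypothetical sketch of the clustering idea (the proposal's
actual write-up is at the links above): group search hits by a category
they share and show one representative per group, so no single category
can dominate the first page of results. The SearchHit shape and its
field names below are assumptions for illustration, not an existing
MediaWiki API.

interface SearchHit {
  title: string;        // e.g. "File:Example.jpg"
  categories: string[]; // categories the file belongs to
}

function clusterByCategory(hits: SearchHit[]): Map<string, SearchHit[]> {
  const clusters = new Map<string, SearchHit[]>();
  for (const hit of hits) {
    // Crude cluster key: the first category. A real implementation would
    // pick the most specific category shared by several hits.
    const key = hit.categories[0] ?? "(uncategorized)";
    const bucket = clusters.get(key) ?? [];
    bucket.push(hit);
    clusters.set(key, bucket);
  }
  return clusters;
}

// Render one representative per cluster; the rest stay one click away.
function representatives(clusters: Map<string, SearchHit[]>): SearchHit[] {
  return [...clusters.values()].map((bucket) => bucket[0]);
}

The appeal of the design is that it needs no new tags or ratings: the
existing category data already carries enough structure.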






Re: [Foundation-l] Image filter

2012-03-09 Thread Tobias Oelgarte

On 09.03.2012 15:34, Gerard Meijssen wrote:

The question you have to ask yourself is: where is the value in Commons if
we do not optimise it as much as possible, so that it becomes the repository
of choice for freely licensed imagery?
Thanks,
  GerardM
That's right. But why do the current approaches have only one goal in
mind - the exclusion/hiding of controversial media?


nya~



Re: [Foundation-l] Image filter

2012-03-09 Thread Tobias Oelgarte

On 09.03.2012 18:15, Andreas Kolbe wrote:

On Fri, Mar 9, 2012 at 2:06 PM, Neil Babbage n...@thebabbages.com wrote:



If you ran a charity store committed to providing educational products
free to all who needed them you wouldn't get many children as customers if
you put hardcore sex products right by the entrance.



^^^ This. ^^^
The little difference is that we aren't a store and have no front or
back room. We are a skyscraper with an elevator and hundreds of buttons
for every floor, and kids tend to press every button at once.




Re: [Foundation-l] Controversial content software status

2012-03-07 Thread Tobias Oelgarte

On 07.03.2012 23:41, Andreas Kolbe wrote:

Juliana,

You simply don't understand where I am coming from.

I have nothing against Wikimedia websites hosting adult content, just like
I have nothing against the far greater amounts of explicit adult material
on Flickr for example. What saddens me though is that Wikimedia is unable
to grow up, and simply can't get it together to host such material
responsibly, like Flickr and YouTube do, behind an age-related filter.
Because that is far and away the mainstream position in society about adult
material.
Sorry to interrupt you. But as far as I can see, you constantly rage against
sexuality in any form. I came to this little conclusion because I have
never seen an example from your side concerning other topics. What I see is
the constant lobbying for a "safepedia", misusing children and crying
mothers as the main argument, while praising Flickr, YouTube and co. as
the ideal that we all should follow. I'm absolutely not convinced that
this is the right way for knowledge. Not a single website that offers this
kind of service is dedicated to spreading education or knowledge. It's
quite the opposite.

And I am saddened that at least some members of the Wikimedia Foundation
Board lack the balls and vision to make Wikimedia a mainstream operator,
and instead want to wimp out and give in to extremists.
I hope that they have the balls to follow the good examples. What are
good examples?
* Equal treatment of content and readers (including children), as most
libraries in the world practise.
* The internet: a place for the free mind and for everyone who wants to
share knowledge and spread the word.

* Diversity in viewpoints, combined with respect and tolerance.


Now, I am aware of your work in German Wikipedia, and I think that German
Wikipedia generally curates controversial content well. German Wikipedia
would never have an illustration like the "Donkey punch" animation in
mainspace:

http://www.junkland.net/2011/11/donkey-punch-or-how-i-tried-to-fight.html

So to an extent I can understand German editors saying, "There is no
problem." But only to an extent. Commons and parts of English Wikipedia are
a joke. Even some people in German Wikipedia have understood this. In my
view, the editors who cluster around these topic areas in Commons and
English Wikipedia simply lack the ability to curate such material
responsibly. The internal culture is completely inappropriate.

The other day, for example, I noticed that Wikimedia Commons administrators
prominently involved in the curation of adult materials were giving or
being given something called the "Hot Sex Barnstar" (NSFW) for their
efforts:

http://www.webcitation.org/65yLm9XpJ
http://commons.wikimedia.org/wiki/File:Hot_sex_barnstar.png
http://commons.wikimedia.org/w/index.php?title=User_talk:Cirt&oldid=67901160#Hot_sex_barnstar

http://commons.wikimedia.org/w/index.php?title=User_talk:Saibo&oldid=67973190#The_Hot_Sex_Barnstar

http://commons.wikimedia.org/w/index.php?title=User_talk%3AMattbuck&diff=67910238&oldid=67910067
http://commons.wikimedia.org/w/index.php?title=User_talk:Stefan4&oldid=67980777#The_Hot_Sex_Barnstar

The editor who designed this barnstar has just been blocked on Commons and
English Wikipedia by Geni, who (because of the Wikipedia Review discussion
thread, I guess) believes him to be the person reported to have been jailed
for possessing and distributing child pornography in the United States in
this article:

http://sptimes.ru/index.php?action_id=2&story_id=13283
He said himself that he isn't the same person, and Geni has no
evidence to the contrary. To me it looks like a witch hunt, and I would
create and give you a barnstar for that. The reason this barnstar
("hot 'n' sexy") exists is also very simple. It exists because people like
you only rage against sexual topics, again, again, again, zZzZz, again and
again. It is boring and a nuisance for the active community that wants
to curate Commons.

The editor has since been unblocked in Commons, while his unblock request
in English Wikipedia has been denied by the arbitration committee.

Now, this chap has contributed to Wikimedia projects for almost eight
years. He has been one of the most active contributors to Wikimedia Commons
in the adult media area, part of a small group of self-selected editors who
decide what kind of adult educational media Wikimedia Commons should host
to support its tax-exempt educational brief. In the real world, he
represents a fringe political position and a worldview that is aggressively
opposed to mainstream society. In Wikimedia Commons, he is mainstream. That
is a problem.

WMF is looking to work together with lots of mainstream organisations, from
the British Museum to the Smithsonian. But this kind of curation of adult
content is an embarrassment for the Wikimedia Foundation, and a potential
embarrassment for all the institutions collaborating with Wikimedia. And
the German community, happy with its largely well curated content in German

Re: [Foundation-l] Controversial content software status

2012-03-07 Thread Tobias Oelgarte

On 08.03.2012 01:53, Andreas Kolbe wrote:

On Wed, Mar 7, 2012 at 11:46 PM, Tobias Oelgarte
tobias.oelga...@googlemail.com  wrote:


On 07.03.2012 23:41, Andreas Kolbe wrote:
Sorry to interrupt you. But as far as I can see, you constantly rage against
sexuality in any form. I came to this little conclusion because I have never
seen an example from your side concerning other topics.



You not seeing it doesn't mean it ain't happening. :) It's just that these
are the discussions where you choose to hang out.
This is very unconvincing, because it's very easy to keep track of the
steps of other users. ;-)




He said himself that he isn't the same person, and Geni has no evidence

to the contrary.



The English Wikipedia's arbitration committee has looked into it and upheld
the block – re-issued it in fact, under its own authority.

http://en.wikipedia.org/wiki/Special:Contributions/Beta_M
And of course there is not a single clue as to why it happened or what he did
wrong. That's like putting someone in jail after a trial held closed
to the public, with the prosecutor and the judge being the same
person(s). Reminds me of the Middle Ages.


You were simply gratified that I thought you had come up with a great idea,
which you have. :) You know what annoys me? That we still have not had one
developer commenting on your proposal at

http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming#Clustering_for_search_results_on_Commons

It's a good proposal, and would go some way towards alleviating a Wikimedia
problem that's been discussed on the Internet for half a year now.
I don't see it as solving a problem. I see it as a way to improve Commons
while not having the anti-porn lobby raining down useless and stupid
deletion requests on Commons, or proposing and pushing even more idiocy
in resolutions - like requiring sexuality-related images to be hidden in
special categories and forbidding them from showing up in more general
categories, even if they depict the subject.


The most useful part of a comment I found in the search discussion on
Commons was:

"Category:Photographs of non-kosher mammals standing on the hind legs
with the visible genitalia made in Germany with a digital camera during
Rammadon at night"

http://tch516087.tch.www.quora.com/Why-is-the-second-image-returned-on-Wikimedia-Commons-when-one-searches-for-electric-toothbrush-an-image-of-a-female-masturbating

Perhaps you would like to complain, along with me, that your proposal is
not getting the attention it deserves.

Andreas
I don't complain. I made a proposal. Someone might pick it up and make
something out of it. If no one does, then I won't cry. But if someone
comes up with such stupid tagging, rating or hiding approaches and
implements them, then I will leave the project alone, since it would
already be dead at that point.


nya~



Re: [Foundation-l] Controversial content software status

2012-03-05 Thread Tobias Oelgarte

On 05.03.2012 19:21, Andreas Kolbe wrote:

I agree you're damned if you do, damned if you don't, and you have my
sympathy.

However, I would like you to consider what our users get when they do a
Multimedia search for male human in Wikipedia:

http://en.wikipedia.org/w/index.php?title=Special:Search&limit=500&offset=0&redirs=0&profile=images&search=male+human

Or try just "human":

http://en.wikipedia.org/w/index.php?title=Special:Search&limit=500&offset=0&redirs=0&profile=images&search=human

Is this the Wikimedia view of what humanity is about?

There are people in this movement who are happy with this status quo, and
who say they will fork if anything changes.

Let them.

Andreas

Sometimes you're a little bit too persistent. I know that these results
give a wrong image, but you have brought them up in at least 20 discussions
by now. This won't solve anything. How about some active work to
come up with possible solutions? No, I don't mean solutions that would
perfectly fit your own demands. It is far more productive to search for
solutions that the opposition could agree with, while also achieving your
own goals at the same time.


You saw my search proposal and you were in favour of it. But it wasn't
only you who could agree with this proposal. The opposition would be
happy with it as well. That is the way to go. But to find such solutions
you will need to respect other opinions as well.


Back to your "human" examples, I have a simple explanation. These images,
however controversial they are, get good treatment by the community. Yes,
even a deletion request is good treatment in this case. There are many
more people involved with these files than with many other files. This
leads to very precise descriptions, better categorization and so on. So
we must not wonder that the search is so happy to surface the current
results. Such actions make the images even more popular and give them a high
rank inside the results.


You also stated in another discussion that the sexuality-related
categories and images are very popular among our readers and that
the current practices would make us a porn site. Not that we are such a
great porn site, we aren't, but we know where all these people come from.
Take a look at the popular search terms at Google, Bing, Yahoo, etc. One
thing to notice: sexuality-related search requests are very popular.
Since Wikipedia is ranked highly, and Commons as well, it is no wonder that
so many people visit these galleries, even if they are disappointed within a
very short time of browsing through our content. But concluding from this
that we are a porn website is a false conclusion, as is using it as an
argument.


nya~



Re: [Foundation-l] Image filter brainstorming: Personal filter lists

2011-12-02 Thread Tobias Oelgarte
On 01.12.2011 10:53, John Vandenberg wrote:
 On Thu, Dec 1, 2011 at 8:11 PM, Jussi-Ville Heiskanen
 cimonav...@gmail.com  wrote:
 ... The downstream
 use objection
 was *never* about downstream use of _content_ but downstream use of _labels_ 
 and
 the structuring of the semantic data. That is a real horse of a
 different colour, and not
 of straw.
 Tom thinks that this horse is real, but it has bolted.  I agree with
 Tom that it is very simple for a commercial filter provider, or anyone
 else who is sufficiently motivated, to find most naughty content on WP
 and filter it.  Risker said she had experienced something like this.
 Universities and schools have this too.

 I would prefer that we do build good metadata/labels, but that we
 (wikimedia) do not incorporate any general purpose use of them for
 filtering from readers.  Hiding content is the easy way out.  The
 inappropriate content on our projects is of one of two types:

 1. inappropriate content that is quickly addressed, but it is seen by
 some people as it works its way through our processes.  Sometimes it
 is the public that sees the content; sometimes it is only the
 community members who *choose* to patrol new pages/files while on the
 train.

 2. content which is appropriate for certain contexts, is known to be
 problematic, but consensus is that the content stays; however, readers
 stumble on it unawares.

 The former can't be solved.

 The latter can be solved by labelling but not filtering. If you are
 on the train and a link is annotated with a tag "nsfw", you can choose
 not to click it, or be wary about the destination page.

 --
 John Vandenberg

 ___
 foundation-l mailing list
 foundation-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l

That's exactly the kind of prejudicial labeling the ALA speaks about
[1], and it can be misused by third parties (ISPs in the general sense).
This kind of labeling has nothing to do with an encyclopedia. Either we
include such content or we don't. If we include it, then we don't label
it. Labeling would be prejudicial, and someone would have to do it for
others. This someone would break with NPOV, since it is _his_ opinion and
not only that of the reader. I thought category-based filtering ('nsfw' is a
category) was off the table?

[1]
http://www.ala.org/Template.cfm?Section=interpretations&Template=/ContentManagement/ContentDisplay.cfm&ContentID=8657

nya~



Re: [Foundation-l] Image filter brainstorming: Personal filter lists

2011-12-02 Thread Tobias Oelgarte
On 01.12.2011 20:06, Tom Morris wrote:
 On Thu, Dec 1, 2011 at 09:11, Jussi-Ville Heiskanen
 cimonav...@gmail.com  wrote:
 This is not a theoretical risk. This has happened. Most famously in
 the case of Virgin using pictures of persons that were licenced under
 a free licence in their advertising campaign. I hesitate to call this
 argument fatuous, but its relevance is certainly highly
 questionable. Nobody has raised this as a serious argument, except
 you assume it
 has been. This is the bit that truly is a straw horse. The downstream
 use objection
 was *never* about downstream use of _content_ but downstream use of _labels_
 and
 the structuring of the semantic data. That is a real horse of a
 different colour, and not
 of straw.

 I was drawing an analogy: the point I was making is very simple - the
 general principle of "we shouldn't do X because someone else might
 reuse it for bad thing Y" is a pretty lousy argument, given that we do
 quite a lot of things in the free culture/open source software world
 that have the same problem. Should the developers of Hadoop worry that
 (your repressive regime of choice) might use their tools to more
 efficiently sort through surveillance data of their citizens?
If they provide a piece of software that can be used for evil things,
then it is OK, as long as they don't support the use of the software for
such purposes. Otherwise we would have to stop the development of
Windows, Linux and Mac OS in the first place. What we do is different. We
provide a weak tool, but we provide strong support for the evil detail.
I called it weak, since everyone should be able to disable it at any
point (if it is even enabled). But I also called it strong, because
we provide the actual data for misuse through our effort to label
content as inappropriate to some.

 I'm not at all sure how you concluded that I was suggesting filtering
 groups would be reusing the content? Net Nanny doesn't generally need
 to include copies of Autofellatio6.jpg in their software. The reuse of
 the filtering category tree, or even the unstructured user data, is
 something anti-filter folk have been concerned about. But for the most
 part, if a category tree were built for filtering, it wouldn't require
 much more than identifying clusters of categories within Commons. That
 is the point of my post. If you want to find adult content to filter,
 it's pretty damn easy to do: you can co-opt the existing extremely
 detailed category system on Commons ("Nude images including Muppets",
 anybody?).
I had a nice conversation with Jimbo about these categories, and I guess
we came to the conclusion that it would not work the way you used it
in your argument. At some point we will have to provide the user with
some kind of interface in which he can easily select what should be
filtered and what not. Giving users a choice from a list containing
hundreds of categories wouldn't work, because even Jimbo rejects that as
too complicated and unsuited to actual use. What would need to be done is
to group these close-to-neutral (existing) category clusters up into more
general terms to reduce the number of choices. But those clusters can then
easily be misused. That essentially means, for a category/label-based filter:

The more user-friendly it is, the more likely it is to be abused.

 Worrying that filtering companies will co-opt a new system when the
 existing system gets them 99% of the way anyway seems just a little
 overblown.
Adapting a new source of inexpensive filter data was never a problem
and is usually quickly done. It costs a lot of work time (money) to
maintain filter lists, but it is really cheap to set up automated
filtering. That's why many filters based on Google's filtering tools
exist, even though Google makes a lot of mistakes.

 It isn't one incident, it isn't a class of incidents. Take it on board that
 the community is against the *principle* of censorship. Please.
 As I said in the post, there may still be good arguments against
 filtering. The issue of principle may be very strong - and Kim Bruning
 made the point about the ALA definition, for instance, which is a
 principled rather than consequentialist objection.

 Generally, though, I don't particularly care *what* people think, I
 care *why* they think it. This is why the debate over this has been so
 unenlightening, because the arguments haven't actually flowed, just
 lots of emotion and anger.

nya~



Re: [Foundation-l] Image filter brainstorming: Personal filter lists

2011-11-29 Thread Tobias Oelgarte
On 29.11.2011 10:32, Tom Morris wrote:
 On Tue, Nov 29, 2011 at 08:09, Möller, Carsten c.moel...@wmco.de wrote:
 No, we need to harden the wall against all attacks by hammers, screwdrivers
 and drills.
 We have consensus: Wikipedia should not be censored.

 You hold strong on that principle. Wikipedia should not be censored!

 Even if that censorship is something the user initiates, desires, and
 can turn off at any time, like AdBlock.

 Glad to see that Sue Gardner's warnings earlier in the debate - that
 people not get entrenched and fundamentalist but try to honestly and
 charitably see other people's points of view - have been so well heeded.

There is a simple thing to know in order to see that this wording is
actually correct. There is not a single filter that can meet personal
preferences, is easy to use, and is not in violation of NPOV, besides two
extremes: the "all" and "nothing" options. We already discussed that in
detail on the discussion page of the referendum.

If the filter is user-initiated, then it will meet the personal
preference and is not in violation of NPOV. But it isn't easy to use. The
user will have to do all the work himself. That is good, but practically
impossible.

If the filter is predefined, then it might meet the personal preference
and can be easy to use. But it will be a violation of NPOV, since
someone else (a group of readers/users) would have to define it. That
isn't user-initiated censorship anymore.

The comparison with AdBlock sucks, because you didn't look at the goals
of the two tools. AdBlock and its predefined lists try to hide
_any_ advertisement, while the filter is meant to hide _only_
controversial content. This comes down to the two extremes noted above,
which are the only two neutral options.

nya~







Re: [Foundation-l] Image filter brainstorming: Personal filter lists

2011-11-29 Thread Tobias Oelgarte
The problem starts at the point where the user does not choose the
image(s) for himself and instead uses a predefined set of what should not
be shown. Someone will have to create these sets, and that will be
unavoidably a violation of NPOV in the first place. If the user chose
for himself the images that shouldn't be shown, or even (existing)
categories of images that he wants to hide, then it would be his
personal preference. But do we want to exchange these lists or make them
public? I guess not, since such lists would be predefined sets themselves.

What I found to be the best solution so far was the "blurred images"
filter. You can opt in to enable it, and all images will be blurred by
default. Since they are only blurred, you will get a rough impression
of what to expect (something a hidden image can't do), and a
blurred image can be viewed by just hovering the mouse cursor over it.
While you browse, not a single click is needed. On top of that it is
awfully easy to implement, we already have a running version of it (see
the brainstorming page), it doesn't feed any information to actual censors,
and it is in no way a violation of NPOV. So far I haven't heard any
constructive criticism of why this wouldn't be a very good solution.
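
As a minimal sketch of how little machinery that needs - assuming the
gadget runs on a rendered page whose content container has the id
"content" (an assumption here, not a confirmed detail of the running
version from the brainstorming page):

const STYLE_ID = "blur-filter-style"; // hypothetical element id

function enableBlurFilter(): void {
  if (document.getElementById(STYLE_ID)) return; // already active
  const style = document.createElement("style");
  style.id = STYLE_ID;
  // Blur every content image; the image under the cursor is shown sharp.
  style.textContent = `
    #content img { filter: blur(12px); transition: filter 0.2s; }
    #content img:hover { filter: none; }
  `;
  document.head.appendChild(style);
}

function disableBlurFilter(): void {
  document.getElementById(STYLE_ID)?.remove(); // opting out is one line
}

Nothing per-image is stored or published - the rule applies uniformly to
every image - which is exactly why it hands no list of "objectionable"
files to third-party censors.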

nya~

On 29.11.2011 12:08, Alasdair wrote:
 I agree that the main obstacle at the moment is that any form of filter
 list proposal is very controversial, as many editors feel that this would be
 a way of enabling POV censorship that users may not want.

 One thing I would like to know, which has not been clear to me in
 discussions, is whether there is such a strong objection to any form of
 filter which includes in its core design the requirement that it can be
 trivially overridden on a particular image by asynchronous loading (i.e.
 images are not shown according to a predefined criterion - rather, the image
 is blocked and replaced by a grey square with the image description and a
 "show this image" button), so that a user who thinks that they might want to
 see an image that has been blocked by their filter can do so very easily.

 If the feeling is that such a weak filter would (regardless of how the
 pre-populated filter lists are created) still attract significant
 opposition on many projects, then I personally don't see how there can be
 any filter created that is likely to gain consensus support and still be
 useful - except for one that gives users the option to hide all images by
 default and then click on the greyed-out images to load them if they want
 to see them.


 --
 Alasdair (User:ajbpearce)


 On Tuesday, 29 November 2011 at 11:37, Tobias Oelgarte wrote:

 On 29.11.2011 10:32, Tom Morris wrote:
 On Tue, Nov 29, 2011 at 08:09, Möller, Carsten c.moel...@wmco.de wrote:
 No, we need to harden the wall against all attacks by hammers, screwdrivers
 and drills.
 We have consensus: Wikipedia should not be censored.


 You hold strong on that principle. Wikipedia should not be censored!

 Even if that censorship is something the user initiates, desires, and
 can turn off at any time, like AdBlock.

 Glad to see that Sue Gardner's warnings earlier in the debate - that
 people not get entrenched and fundamentalist but try to honestly and
 charitably see other people's points of view - have been so well heeded.


 There is a simple thing to know in order to see that this wording is
 actually correct. There is not a single filter that can meet personal
 preferences, is easy to use, and is not in violation of NPOV, besides two
 extremes: the "all" and "nothing" options. We already discussed that in
 detail on the discussion page of the referendum.

 If the filter is user-initiated, then it will meet the personal
 preference and is not in violation of NPOV. But it isn't easy to use. The
 user will have to do all the work himself. That is good, but practically
 impossible.

 If the filter is predefined, then it might meet the personal preference
 and can be easy to use. But it will be a violation of NPOV, since
 someone else (a group of readers/users) would have to define it. That
 isn't user-initiated censorship anymore.

 The comparison with AdBlock sucks, because you didn't look at the goals
 of the two tools. AdBlock and its predefined lists try to hide
 _any_ advertisement, while the filter is meant to hide _only_
 controversial content. This comes down to the two extremes noted above,
 which are the only two neutral options.

 nya~











Re: [Foundation-l] Image filter brainstorming: Personal filter lists

2011-11-29 Thread Tobias Oelgarte
On 29.11.2011 12:09, Andre Engels wrote:
 On Tue, Nov 29, 2011 at 11:37 AM, Tobias Oelgarte
 tobias.oelga...@googlemail.com  wrote:

 If the filter is predefined, then it might meet the personal preference
 and can be easy to use. But it will be a violation of NPOV, since
 someone else (a group of readers/users) would have to define it. That
 isn't user-initiated censorship anymore.
 It is still the user who chooses whether or not to remove images, and
 if so, which list, although of course their choice is restricted. I
 guess that's not "user-initiated", but it is still user-chosen.
With the tiny (actually big) problem that such lists are public and can
be fed directly into the filters of not-so-people-loving or overly
"caring" ISPs. This removes the freedom of choice from the users. Not
from those who want this feature, but from those who don't want it, or
don't want it every time. In this case you trade a convenience for
some of our readers against the ability to access all the knowledge that
we could provide.
 The comparison with AdBlock sucks, because you didn't look at the goals
 of the two tools. AdBlock and its predefined lists try to hide
 _any_ advertisement, while the filter is meant to hide _only_
 controversial content. This comes down to the two extremes noted above,
 which are the only two neutral options.
 I don't agree. We are not deciding which content is controversial and
 which not, we are giving users the option to decide not to see
 such-and-such content if they don't want to. That's not necessarily
 labeling them as controversial; it is even less labeling other content
 as non-controversial.
I don't agree either. We decide what belongs to which preset (or who will
do it?), and it is meant to filter out controversial content. Therefore we
define what controversial content is - or at least we tell people
what we think might be controversial, while we also tell them
(by exclusion) that other things aren't controversial.
 Even more importantly, your options are not neutral at all, in my
 opinion. "Either everything is controversial or nothing is" - that's
 not a neutral statement. "It's controversial to you if you consider it
 controversial to you" - that's much closer to being NPOV, and that's
 what the proposal is trying to do.
No. These options are meant to say that you have to define for yourself
what is controversial. They take the extreme stances of equal judgment:
either everything is "guilty" or nothing is "guilty", and both stances
provide no information at all. Neither gives a definition. It is not the
answer to the question "What is controversial?" under the assumption that
neither nothing nor everything is controversial. If you agree that neither
nothing nor everything is controversial, then this simple rule has to
apply, since both extremes are untrue. That is very simple logic, and it
forces you to define it for yourself.

Back to the statement: "It's controversial to you if you consider it
controversial to you." That's right. But it's not related to the initial
problem. In this case you will only find a "you" and a "you". There is
no "we", "them" or anything like that. You could have written: "If my
leg hurts, then my leg hurts." Always true, but useless when applied to
something that involves anything not done by you in the first part of
the sentence.

   NPOV is not about treating every
 _subject_ as equal, but about treating every _opinion_ as equal.
This is a nice sentence. I hope that you will remember it. I also hope
that you remember that images are subjects and not opinions.

   If I
 have a set of images I consider controversial, and you have a
 different, perhaps non-intersecting set that you consider
 controversial, the NPOV method is to consider both distinctions as
 valid, not to say that it means that everything is controversial, or
 nothing is.
A filter with presets considers only one opinion as valid: it shows an
image or it hides it. Stating different opinions inside an article
is a very different thing. You represent both opinions, but you don't
apply them. On top of that, they are the opinions of people who don't
write the article.

   And -surprise- that seems to be exactly what this proposal
 is trying to achieve. It is probably not ideal, there might even be
 reasons to drop it completely, but NPOV is much better served by this
 proposal than it is by yours.

Actually, you misused or misunderstood the core of NPOV in combination
with these two stances. That's why I can't agree with or follow your
conclusion.

NPOV means that we don't say what is right or wrong. We represent the
opinions and we let the reader decide what to do with them.
Additionally, NPOV implies that we don't write down our own opinions.
Instead we cite them.



Re: [Foundation-l] Image filter brainstorming: Personal filter lists

2011-11-29 Thread Tobias Oelgarte
On 29.11.2011 13:03, MZMcBride wrote:
 Alasdair wrote:
 If the feeling is that such a weak filter would (regardless of how the
 pre-populated filter lists are created) still attract significant
 opposition on many projects, then I personally don't see how there can be
 any filter created that is likely to gain consensus support and still be
 useful - except for one that gives users the option to hide all images by
 default and then click on the greyed-out images to load them if they want
 to see them.
 You're confusing the opinions of a few extremists on foundation-l with
 general consensus. It's unclear what percent of users actually want this
 feature, particularly as the feature's implementation hasn't been fully
 developed. A few people on this list have been trying very hard to make it
 seem as though they're capable of accepting some magical invisible pink
 unicorn-equivalent media filter, but the truth is that they're realistically
 and pragmatically opposed to any media filter, full stop. This is an
 extremist opinion (it's not as though extremist opinions are particularly
 uncommon around here).

 Personally, I want to believe that if the Wikimedia Board is making such a
 strong push for this feature to be implemented, there are very good reasons
 for doing so. Whether or not that's the case, I wouldn't look (closely or
 broadly) at the comments on this mailing list and try to divine
 community-wide views.

 MZMcBride

... And I still want to see the good reason for doing so. So far I
could not find one single reason that made it worth implementing such a
filter, considering all the drawbacks it causes. That doesn't mean that
I'm opposed to any kind of filter. It's just that we currently have three
models:

* The very simple, clean solutions (all/nothing/blurred/...), which the
filter advocates don't find intuitive.
* The category/labeling-based solutions, which require an immense
(and constant) effort and provide data for censors.
* The user-based solutions, which are most likely unusable, since they
require a lot of work from the user himself.

What I'm missing is option four. But as long as option four isn't present,
I'm strongly in favor of options 0 and 1. 0 would be: do nothing.



Re: [Foundation-l] Image filter brainstorming: Personal filter lists

2011-11-29 Thread Tobias Oelgarte
On 29.11.2011 13:45, David Gerard wrote:
 On 29 November 2011 12:03, Tobias Oelgarte
 tobias.oelga...@googlemail.com  wrote:

 What I found to be the best solution so far was the "blurred images"
 filter. You can opt in to enable it, and all images will be blurred by
 default. Since they are only blurred, you will get a rough impression
 of what to expect (something a hidden image can't do), and a
 blurred image can be viewed by just hovering the mouse cursor over it.
 While you browse, not a single click is needed. On top of that it is
 awfully easy to implement, we already have a running version of it (see
 the brainstorming page), it doesn't feed any information to actual censors,
 and it is in no way a violation of NPOV. So far I haven't heard any
 constructive criticism of why this wouldn't be a very good solution.

 I gave one before:

  From the far side of the office, a blurred penis on your screen looks
 like a blurred penis on your screen.

 For this reason, I suggest a blank grey square instead.


 - d.


Just use another image-processing filter and it will not look like a
blurred penis, but maybe like a distorted penis or an arm.
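
For illustration only, as hypothetical variants of the blur sketch earlier
in the thread: a flat grey square (David Gerard's suggestion) falls out of
a one-line CSS change, and a coarser, block-distorted look can be produced
with a canvas downscale/upscale.

// contrast(0) collapses every pixel to mid-grey: a blank grey square,
// still revealed on hover.
const GREY_SQUARE_CSS = `
  #content img { filter: contrast(0); }
  #content img:hover { filter: none; }
`;

// Pixelation: draw the image small, then scale it back up with smoothing
// disabled, so it becomes coarse blocks rather than a smooth blur.
function pixelate(img: HTMLImageElement, blockSize = 16): HTMLCanvasElement {
  const canvas = document.createElement("canvas");
  canvas.width = img.width;
  canvas.height = img.height;
  const ctx = canvas.getContext("2d")!;
  const w = Math.max(1, Math.floor(img.width / blockSize));
  const h = Math.max(1, Math.floor(img.height / blockSize));
  ctx.drawImage(img, 0, 0, w, h); // downscale
  ctx.imageSmoothingEnabled = false;
  ctx.drawImage(canvas, 0, 0, w, h, 0, 0, img.width, img.height); // blocky upscale
  return canvas;
}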



Re: [Foundation-l] Image filter brainstorming: Personal filter lists

2011-11-29 Thread Tobias Oelgarte
On 29.11.2011 14:28, Alasdair wrote:
 On Tuesday, 29 November 2011 at 13:42, Tobias Oelgarte wrote:

 With the tiny (actually big) problem that such lists are public and can
 be fed directly into the filters of not-so-people-loving or overly
 "caring" ISPs.


  I think this is a point that I was missing about the objections to the
 filter system.

 So a big objection to any set of filter lists is not so much the weak
 filtering on Wikipedia itself, but that such sets would enable other
 censors to more easily impose a form of strong censorship of Wikipedia,
 where some images were not available (at all) to readers - regardless of
 whether or not they want to see them?

 I am not sure I agree with this concern as a practical matter, but I can
 understand it as a theoretical concern. Has the board or WMF talked about /
 addressed this issue anywhere in regard to set-based filter systems?

 --
 Alasdair (User:Ajbpearce)

So far this thought has been widely ignored. I can't remember a board
member, aside from Arne Klempert, talking about it. Instead I heard the
argument that some censors would unban Wikipedia if we implemented such a
feature as preemptive obedience. But who is really so naive as to believe
that? Censors aren't happy with an opt-in solution. They prefer the
unable-to-opt-out solutions, and they are interested in textual content
as well.

nya~



Re: [Foundation-l] Image filter brainstorming: Personal filter lists

2011-11-29 Thread Tobias Oelgarte
On 29.11.2011 14:40, Andre Engels wrote:
 On Tue, Nov 29, 2011 at 1:03 PM, Tobias Oelgarte
 tobias.oelga...@googlemail.com  wrote:

 The problem starts at the point where the user does not choose the
 image(s) for himself and instead uses a predefined set of what should not
 be shown. Someone will have to create these sets, and that will be
 unavoidably a violation of NPOV in the first place.
 No, why would it? What does it say if someone created such a set?
 "These are pictures of such-and-so, and there might be people who do
 not want to see pictures of such-and-so." I don't see the NPOV violation
 here. Nobody is saying "These pictures should not be seen." They are
 saying, "Some people would not like to see these pictures." That's not POV.

You missed the preceding question: why would some people not like to see
these pictures? The answer to this question is the motivation for creating
such a list and spreading it. But this answer is in any case not NPOV.



Re: [Foundation-l] Image filter brainstorming: Personal filter lists

2011-11-29 Thread Tobias Oelgarte
On 29.11.2011 14:48, Andre Engels wrote:
 On Tue, Nov 29, 2011 at 1:42 PM, Tobias Oelgarte
 tobias.oelga...@googlemail.com  wrote:

 I don't agree either. We decide what belongs to which preset (or who will
 do it?), and it is meant to filter out controversial content. Therefore we
 define what controversial content is - or at least we tell people
 what we think might be controversial, while we also tell them
 (by exclusion) that other things aren't controversial.
 No, we don't say that other things aren't controversial. I consider
 that a ridiculous conclusion to draw. It's just that we have not yet
 found it to be under one of the categories we specified as
 blockable. There are other categories that might be specified, but,
 alas, we don't have them yet.
Do you remember your last mail, in which you said that my viewpoints are
extreme? I wrote that considering everything controversial, or nothing,
are the only neutral positions to take. You opposed that strongly. Now you
start your claim with the presupposition that we will eventually find
categories such that anything could be seen as controversial? That's a
180° turn from one mail to the next. Just to find new arguments?

I will read the rest of your answers later on. For now I have some work
to do. Maybe you want to enlighten me as to how that is possible.

nya~



Re: [Foundation-l] Image filter brainstorming: Personal filter lists

2011-11-29 Thread Tobias Oelgarte
On 29.11.2011 23:47, Kim Bruning wrote:
 On Tue, Nov 29, 2011 at 09:09:04AM +0100, Möller, Carsten wrote:
 ... but -if we want to reach consensus[1]- what we really need to be
 discussing is: screwdrivers.

 sincerely,
 Kim Bruning

 No, we need to harden the wall against all attacks by hammers, screwdrivers
 and drills.
 We have consensus: Wikipedia should not be censored.
 Right, hammering ourselves on the thumb is a bad idea :-P

 However, there's nothing wrong with making sure that people
 don't get odd images when they don't expect it (something
 wikipedia is good at, but commons admittedly perhaps slightly
 less so). This is the screw.
That is more or less a search and time issue. If you search for a
cucumber and a sexuality-related image ranks first instead of an actual
cucumber, then it is time to improve the search function. If we don't
have enough people categorizing images the right way, we might start
to recruit more helpers.

If we are careful enough, we might be able to recycle the hammer to
construct two or more small screwdrivers - an argument against the image
filter that reads like this: put more effort into ideas for how to
improve search functionality and to help with categorizing. It would
actually help everyone and would get clear referendum results. ;P
 I don't think a filter (the hammer) will be very successful at
 doing so, because filters have simply never been very good at
 keeping away unexpected content, and can easily lead to
 censorship and other unwanted side effects (hitting ourselves
 on the thumb).  However, perhaps some other tool might be
 useful for fixing the screw. Some people have come up with some
 interesting proposals.

 But shouting at each other about filters is probably
 counter-productive at this point. ;-)

 sincerely,
   Kim Bruning





Re: [Foundation-l] Wikimania 2011 videos - mission complete!

2011-11-29 Thread Tobias Oelgarte
On 30.11.2011 00:04, Kim Bruning wrote:
 On Tue, Nov 29, 2011 at 01:48:24PM +0200, Itzik Edri wrote:
 Hi,

 *I am happy to announce that all the videos from Wikimania 2011 in Haifa are
 now available on our channel on YouTube!: http://www.youtube.com/WikimediaIL
 .*
 * http://www.youtube.com/watch?v=emli8S2_trs
 * http://www.youtube.com/watch?v=5c2Vb7CqTdc
 * http://www.youtube.com/watch?v=2iDMLkC_pRg


 O:-)

 sincerely,
   Kim Bruning
That actually gave me a headache. But never mind.

* http://www.youtube.com/watch?v=Z8bODUWy3Ks

:-P

nya~



Re: [Foundation-l] Image filter brainstorming: Personal filter lists

2011-11-26 Thread Tobias Oelgarte
Yes, it is an analogy to "KnowledgeBlock", with predefinable lists,
encouraged to be created by censors' best friends and shared by the
local ISPs, to give a good understanding of what shouldn't be known.


Putting the sarcasm aside and switching to irony, I see a complicated 
system with very few potential users:

Problems for users/readers:
* The average reader doesn't even find the talk page, yet he is expected
to manage self-maintained filter lists?
* He needs to be shocked first before he learns that such a
feature exists. Or he will have to trust lists created by someone he
doesn't even know.

Problems for the infrastructure:
* Every account stores an additional list of what to block. Doing this
for IPs via cookies as well will create a huge amount of information that
needs to be managed. (Assuming the feature is actually used as
massively as Andreas Kolbe/Jayen466 describes the demand.)
* Every use of the filter will circumvent the caching, since every page
requested by a user/reader who uses the filter will have to be created
from scratch.

Problems in general:
* If we use public lists, then the approach is basically the same as with
categorized filtering. The only difference is that it is stored in
another format. Today we serve the same eggs, sunny side down.
* Who creates the lists? The user, for himself? Considering millions of
images and articles, it isn't an option to do it alone. Someone who has a
lot of free time? Sure - bearing in mind that he doesn't actually want to
see the very pictures he would have to look at...


Putting the irony aside and switching to realism:

Every approach aside from the "hide everything" feature that I have seen
so far is either on the borderline of censorship, practically impossible
to maintain, or generally unusable by the average reader. The only thing I
have noticed is that every approach takes it as given that some kind of
filter must be introduced. If option A is no good, then let's try option B,
and if B is also not the right way, then let's try C... Currently we are at
option Z II, and it doesn't look very different from option B, but, very
importantly, it is better in the wording and it sounds nicer - like an old
bike with a foxtail attached is much better than just an old bike.

I'm very curious what we are trying to achieve with this filter. Is it
really to get more readers, or is it just to introduce a filter that is in
some way predefinable? Where is the opposition to the simple "hide
everything" feature? It is simple, can quickly be implemented, doesn't
cost much money, and serves 99% of the stated purposes of filtering.
But why the hell isn't it an option for our filter fan-boys and
filter fan-girls?

nya~

On 26.11.2011 15:41, Tom Morris wrote:
 On Thu, Nov 24, 2011 at 14:59, Tobias Oelgarte
 tobias.oelga...@googlemail.com  wrote:
 I'm a little bit confused by this approach. On the one side it is good
 to have this information stored privately and personally; on the other
 side we are encouraging the development of filter lists and the tagging of
 possibly objectionable articles. The latter wouldn't be private at all,
 and would be even worse than tagging single images. In fact it would be a
 kind of additional pressure to ban images from articles just to keep them
 in the "clean" section.

 Overall I see little to no advantage over the previously proposed
 solutions. It is much more complicated, harder to implement, more
 resource-intensive, and not a very friendly interface for readers.

 Err, think of it with an analogy to AdBlock. You can have lists stored
 privately (in Adblock: in your browser settings files, in an image
 filter: on the WMF servers but in a secret file that they'll never
 ever ever ever release promise hand-on-heart*) and you can have lists
 stored publicly (in Adblock: the various public block lists that are
 community-maintained so that you don't actually see any ads, in an
 image filter: on the web somewhere). And you can put an instruction in
 the former list to transclude everything on a public list and keep it
 up-to-date.

 Given it works pretty well in Adblock, I don't quite see how that's a
 big deal for Wikimedia either. Performance-wise, you just have it so
 the logged-in user has a list of images they don't want to see, and
 you have a script that every hour or so downloads and caches the
 public list; then, when they call to retrieve the list for the purposes
 of seeing what's on it, it simply concatenates the two. This seems
 pretty straightforward.
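
A sketch of that concatenation, with an invented URL and in-memory 
storage (nothing here is an existing WMF service or API):

    // Private filter list merged with a periodically re-downloaded public one.
    const PUBLIC_LIST_URL = "https://example.org/filter-lists/arachnophobia.txt";
    const REFRESH_MS = 60 * 60 * 1000; // refresh the public list hourly

    let cachedPublicList: string[] = [];
    let lastFetch = 0;

    async function getPublicList(): Promise<string[]> {
      if (Date.now() - lastFetch > REFRESH_MS) {
        const res = await fetch(PUBLIC_LIST_URL);
        cachedPublicList = (await res.text()).split("\n").filter((l) => l.trim());
        lastFetch = Date.now();
      }
      return cachedPublicList;
    }

    // The user's private list; here just an in-memory stand-in.
    const privateList = ["Example_private_entry.jpg"];

    async function getEffectiveFilterList(): Promise<string[]> {
      // Concatenate private and public entries, deduplicated.
      return [...new Set([...privateList, ...(await getPublicList())])];
    }

    getEffectiveFilterList().then((list) => console.log(list));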

 And if the WMF doesn't do it - perhaps because people are whinging
 that me being given the option to opt-in and *not* see My
 micropenis.jpg is somehow evil and tyrannical and contrary to
 NOTCENSORED - it could possibly be done as a service by an outside
 group and then implemented on Wikipedia using userscripts. The
 difference is that the WMF may do it in a slightly more user-friendly
 way given that they have access to the servers.

 * That's less sarcastic than it sounds

Re: [Foundation-l] Image filter brainstorming: Personal filter lists

2011-11-24 Thread Tobias Oelgarte
Am 24.11.2011 15:09, schrieb MZMcBride:
 Andreas K. wrote:
 The way this would work is that each project page would have an Enable
 image filtering entry in the side bar. Clicking on this would add a Hide
 button to each image displayed on the page. Clicking on Hide would then
 grey the image, and automatically add it to the user's personal filter list.
 I think this sounds pretty good. Is there any indication how German
 Wikipedians generally view an implementation like this? I can't imagine
 English Wikipedians caring about an additional sidebar link/opt-in feature
 like this.

 Apart from enabling users to hide images and add them to their PFL as they
 encounter them in surfing our projects, users would also be able to edit
 the PFL manually, just as it is possible to edit one's watchlist manually.
 In this way, they could add any image file or category they want to their
 PFL. They could also add filter lists precompiled for them by a third
 party. Such lists could be crowdsourced by people interested in filtering,
 according to whatever cultural criteria they choose.
 Some sort of subscription service would work well here, right? Where the
 list can auto-update from a central list on a regular basis. I think that's
 roughly how in-browser ad block lists work. Seems like it could work well.
 Keep who pulls what lists private, though, I suppose.

 For unregistered users, their PFL could be stored in a cookie.
 I'm not sure you'd want to put it in a cookie, but that's an implementation
 detail.
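
For what it's worth, the cookie variant is a one-screen sketch ("pfl" is 
an invented cookie name; a real deployment might prefer localStorage, 
since cookies are capped at roughly 4 KB and travel with every request):

    // Client-side sketch: an unregistered reader's filter list in a cookie.
    function savePFL(files: string[]): void {
      const value = encodeURIComponent(files.join("|"));
      // Keep the list for a year.
      document.cookie = `pfl=${value}; path=/; max-age=${60 * 60 * 24 * 365}`;
    }

    function loadPFL(): string[] {
      const match = document.cookie.match(/(?:^|;\s*)pfl=([^;]*)/);
      return match ? decodeURIComponent(match[1]).split("|").filter(Boolean) : [];
    }

    savePFL(["Example.jpg"]);
    console.log(loadPFL()); // ["Example.jpg"]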

 Watchlist editing is generally based on looking at titles. I don't suppose
 you'd want a gallery of hidden images, but it would make filter-list editing
 easier, heh.

 MZMcBride



 ___
 foundation-l mailing list
 foundation-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l

I'm a little bit confused by this approach. On the one side it is good 
to have this information stored privately and personally; on the other 
side we are encouraging the development of filter lists and the tagging 
of possibly objectionable articles. The latter wouldn't be private at 
all and even worse than tagging single images. In fact it would be some 
kind of additional pressure to ban images from articles just to keep 
them in the clean section.

Overall I see little to no advantage over the previously proposed 
solutions. It is much more complicated, harder to implement, more 
resource-intensive and not a very friendly interface for readers.

My proposal would be: Just give it up and find other ways to improve 
Wikipedia and to make it more attractive.

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Show community consensus for Wikilove

2011-10-30 Thread Tobias Oelgarte
That shouldn't be the issue. The question is the effect. What would make 
you more pleased: a standard message/template saying that you did good, 
or a personal message from someone who, as you know yourself, watched 
over your work? Personally, I doubt that a simple template machine could 
lead to an increase. It simplifies the process of leaving such a 
message. But it is also a double-edged sword. While it is more likely 
that you will get a friendly message, the messages themselves are 
weakened, since they look like standard templates.

PS: As I wrote some months ago: Damn. More kittens smashed onto the 
ground of the talk page, buried by the annoyed user. Great and important 
feature we haz now!

nya~


Am 31.10.2011 01:57, schrieb Mateus Nobre:
 Totally disagree with you, Yaroslav.

 Do you really think a traditional (you know, traditional in Wikipedia is 
 equivalent to bureaucratic), friendship-free communication and social system 
 at wikis is better for efficiency? Why should friendship and camaraderie in 
 editing and talk reduce efficiency or quality? Why would working in a 
 pleasant environment produce worse results? I think economists and 
 businessmen would disagree with you.

 From your e-mail I gather that you are probably Russian. You have probably read 
 Tolstoy's Anna Karenina. Using a literary example: Lievin, the landowner, 
 greatly increased his profit by changing the way his moujiks worked. 
 The moujiks used to work grudgingly and bad-tempered when just following 
 orders in a bad environment. When Lievin adopted a collaborative approach, 
 and the moujiks could work without the many rules in an amicable 
 environment, profits rose.
 For wikis it is the same thing. The ideals alone are not enough. We have to have 
 a friendly, a pleasant, a nice environment. We have to make our editing time 
 a good time for us. We have to smile while editing Wikipedia, and know that our 
 work is important to the community - moral support. Wikilove makes Wikipedia 
 less an obligation and more a thing which we need every single day. This is 
 the point.


___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] News from Germany: White Bags and thinking about a fork

2011-10-23 Thread Tobias Oelgarte
Am 23.10.2011 08:30, schrieb Nikola Smolenski:
 On Sat, 2011-10-22 at 22:56 +0100, David Gerard wrote:
 And, in detail, why is a hide/show all solution inadequate? What is
 the use case this does not serve?
 Are you even trying to pretend to be serious? Use case: me reading an
 article.

 It is my impression that you are pushing for this hide/show all solution
 because you know it will be useless and thus no one will be using it.
That isn't the case. It was claimed multiple times that reading 
Wikipedia in front of bystanders can be a problem, since some disturbing 
image might show up unexpectedly. If that is the case, then you can 
hide the images by default and enable them while you read. There were 
also thoughts of not hiding the images entirely, but blurring them. So 
you would have a glimpse of what an image is about and could view it 
(remove the blurring) by just hovering over it.
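
As a rough sketch of how little that needs (a user script with CSS 
blurring; the selector and blur radius are arbitrary choices of mine, 
not an existing feature):

    // Sketch: blur every image by default, reveal on hover. Pure client
    // side; no categories and no server-side tagging involved.
    let blurEnabled = false;

    function setBlurMode(on: boolean): void {
      blurEnabled = on;
      document.querySelectorAll<HTMLImageElement>("img").forEach((img) => {
        img.style.filter = on ? "blur(12px)" : "none";
        img.style.transition = "filter 0.2s";
      });
    }

    // Reveal an image on hover only while blur mode is active.
    document.addEventListener("mouseover", (e) => {
      if (blurEnabled && e.target instanceof HTMLImageElement) {
        e.target.style.filter = "none";
      }
    });
    document.addEventListener("mouseout", (e) => {
      if (blurEnabled && e.target instanceof HTMLImageElement) {
        e.target.style.filter = "blur(12px)";
      }
    });

    setBlurMode(true); // the reader opts in while bystanders are around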

This would satisfy many typical needs, and it isn't an idea meant to 
make the proposed feature useless. It is the result of trying to address 
this problem without the need for categories and without Wikipedians 
having to play the censor for others.

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] News from Germany: White Bags and thinking about a fork

2011-10-23 Thread Tobias Oelgarte
Am 23.10.2011 08:49, schrieb Nikola Smolenski:
 On Sat, 2011-10-22 at 23:35 +0200, Tobias Oelgarte wrote:
 Why? Because it is against the basic rules of the project. It is
 intended to discriminate against content - to judge it and to present you
 No, it is intended to let people discriminate content themselves if they
 want, which is a huge difference.

 If I feel that this judgment is inadequate, I will turn the filter off.
 Either way, it is My Problem. Not Your Problem.
It is not the user of the filter who decides *what* is hidden or not. 
That isn't his decision. If the filter does not meet his expectations 
and he does not use it, then we gained nothing, despite the massive 
effort taken by us to flag all the images. You should know that we 
already have a massive categorization backlog on Commons.
 easily exploited by your local provider to hide labeled content, so that
 you don't have any way to view it, even if you want to.
 Depending on the way it is implemented, it may be somewhat difficult for
 a provider to do that. Such systems probably already exist on some
 websites, and I am not aware of my provider using them to hide labelled
 content. And even if my provider would start doing that, I could simply
 use Wikipedia over https.
If your provider is a bit clever, he would block https and filter the 
rest - a relatively easy job to do. Additionally, most people would not 
know the difference between https and http, using the default http version.
 And if providers across the world start abusing the filter, perhaps then
 the filter could be turned off. I just don't see this as a reasonable
 possibility.
Well, we don't have to agree on this point. I think that this is 
possible with very little effort, especially since images aren't 
embedded inside the same document and are not served using https.
 If you want a filter so badly, then install parental software, close
 It is my understanding that parental software is often too overarching
 or otherwise inadequate.
The same would go for a category/preset-based filter. You and I 
mentioned above that it isn't necessarily better from the perspective of 
the user, leading to few users while wasting our time on it.
 your eyes or don't visit the page. That is up to you. That is your
 If I close my eyes or don't visit the page, I won't be able to read the
 content of the page.
That is the point where a hide all/nothing filter would jump in. It 
would let you read the page without any worries. No wrongly categorized 
image would show up, and you would still have the option to show images 
in which you are interested.
 But feel free to read the arguments:
 http://de.wikipedia.org/wiki/Wikipedia:Meinungsbilder/Einf%C3%BChrung_pers%C3%B6nlicher_Bildfilter/en#Arguments_for_the_proposal
 It seems to me that the arguments are mostly about a filter that would
 be turned on by default. Most of them seem to evaporate when applied to
 an opt-in filter.

None of the arguments is based on a filter that would be enabled by 
default. They are about any filter that uses categorization to 
distinguish good from evil. It's about the damage such an approach 
would do to the project and even to users who don't want or need the 
feature.

The German poll made clear that no category-based filter will be 
allowed, since category-based filtering is unavoidably non-neutral and a 
censorship tool.

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] category free image filtering

2011-10-23 Thread Tobias Oelgarte
Am 23.10.2011 15:46, schrieb WereSpielChequers:
 --

 Message: 3
 Date: Sun, 23 Oct 2011 02:57:51 +0200
 From: Tobias Oelgarte tobias.oelga...@googlemail.com
 Subject: Re: [Foundation-l] category free image filtering
 To: foundation-l@lists.wikimedia.org
 Message-ID: 4ea3668f.5010...@googlemail.com
 Content-Type: text/plain; charset=ISO-8859-1; format=flowed

 Am 23.10.2011 01:49, schrieb WereSpielChequers:
 Hi Tobias,

 Do you have any problems with this category-free proposal?
 http://meta.wikimedia.org/wiki/User:WereSpielChequers/filter

 WereSpielChequers
 The idea isn't bad. But it is based on the premise that there are enough
 users of the filter to build such correlations. It requires enough input
 to work properly, and therefore enough users of the feature who have
 longer lists. But how often does an average logged-in user find such an
 image and act accordingly? Relatively seldom, resulting in a very short
 personal list, kept by relatively few users, which makes it hard to
 start the system (warm-up time).

 Since I love to find ways to exploit systems, there is one simple
 thing on my mind. Just log in to put a picture of penis/bondage/... on
 the list and then add another one of the football team you don't like.
 Repeat this step often enough and the system will believe that all users
 who don't like to see a penis would also not like to see images of that
 football team.

 Another way would be: I find everything offensive. This would hurt the
 system, since correlations would be much harder to find.

 If we assume good faith, then it would probably work. But as soon as we
 have spammers of this kind, it will lie in ruins, considering the number
 of users and the correspondingly short lists (on average).

 Just my thoughts on this idea.

 Greetings
 nya~
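
To make the correlation idea and the exploit concrete, a hypothetical 
sketch (invented data and names; similarity here is the Jaccard index of 
two hide-lists, which is only one of many ways such a system could weigh 
preferences):

    // Sketch of list-correlation filtering. Entirely hypothetical.
    type HideList = Set<string>;

    // Similarity of two readers = size of intersection / size of union.
    function jaccard(a: HideList, b: HideList): number {
      const inter = [...a].filter((x) => b.has(x)).length;
      const union = new Set([...a, ...b]).size;
      return union === 0 ? 0 : inter / union;
    }

    // Suggest hiding what sufficiently similar readers have hidden.
    function suggestions(me: HideList, others: HideList[], threshold = 0.3): Set<string> {
      const out = new Set<string>();
      for (const other of others) {
        if (jaccard(me, other) >= threshold) {
          for (const img of other) if (!me.has(img)) out.add(img);
        }
      }
      return out;
    }

    // The exploit: a vandal pairs an explicit image with a football team...
    const vandal: HideList = new Set(["Penis.jpg", "Football_team_X.jpg"]);
    // ...so a reader who merely hides the explicit image is nudged toward
    // hiding the team as well (similarity 0.5 in this toy example).
    const reader: HideList = new Set(["Penis.jpg"]);
    console.log(suggestions(reader, [vandal])); // Set { "Football_team_X.jpg" }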


 Hi Tobias,

 Yes if it turned out that almost no-one used this then only the Hide all
 image - recommended for users with slow internet connections and the Never
 show me this image again options would be effective. My suspicion is that
 even if globally there were only a few thousand users then it would start to
 be effective on the most contentious images in popular articles in the most
 widely read versions of wikipedia (and I suspect that many of the same image
 will be used on other language versions). The more people using it the more
 effective it would be, and the more varied phobias and cultural taboos it
 could cater for.  We have hundreds of millions of readers, if we offer them
 a free image filter then I suspect that lots will sign up, but in a sense it
 doesn't matter how many do so - one of the advantages to this system is that
 when people complain about images they find offensive we will simply be able
 to respond with instructions as to how they can enable the image filter on
 their account.

 I'm pretty confident that huge numbers, perhaps millions with slow internet
 connections would use the hide all images option, and that enabling them to
 do so would be an uncontentious way to further our mission by making our
 various products much more available in certain parts of the global south.
 As far as I'm concerned this is by far the most important part of the
 feature and the one that I'm most confident will be used, though it may
 cease to be of use in the future when and if the rest of the world has North
 American Internet speeds.

 I'm not sure how spammers would try to use this,  but I accept that vandals
 will try various techniques from liking penises to finding pigs and
 particular politicians equally objectionable. Those who simply use this to
 like picture of Mohammed would not be a problem, the system should easily
 be able to work out that things they liked would be disliked by another
 group of users. The much more clever approach of disliking both a particular
 type of porn and members of a particular football team is harder to cater
 for, but I'm hoping that it could be coded to recognise not just where
 preferences were completely unrelated, as in the people with either
 arachnophobia or vertigo, or partially related as in one person having both
 arachnophobia and vertigo. Those who find everything objectionable and tag
 thousands of images as such would easily be identified as having dissimilar
 preferences to others, as their preferences would be no more relevant to
 another filterer than those of an arachnophobe would be to a sufferer of
 vertigo.

 Of course it's possible that there are people out there who are keen to tag
 images for others not to see. In this system there is room for them, if your
 preferences are similar to some such users then the system would pick that
 up. If your preferences are dissimilar or you don't opt in to the filter
 then they would have no effect on you. The system would work without such
 self-appointed censors, but why not make use of them? I used to live with an
 arachnophobe; if I were still doing so I'd have no problem creating

Re: [Foundation-l] News from Germany: White Bags and thinking about a fork

2011-10-23 Thread Tobias Oelgarte
Am 23.10.2011 17:19, schrieb Nikola Smolenski:
 On Sun, 2011-10-23 at 10:31 +0200, Tobias Oelgarte wrote:
 Am 23.10.2011 08:49, schrieb Nikola Smolenski:
 On Sat, 2011-10-22 at 23:35 +0200, Tobias Oelgarte wrote:
 Why? Because it is against the basic rules of the project. It is
 intended to discriminate against content - to judge it and to present you
 No, it is intended to let people discriminate content themselves if they
 want, which is a huge difference.

 If I feel that this judgment is inadequate, I will turn the filter off.
 Either way, it is My Problem. Not Your Problem.
 It is not the user of the filter who decides *what* is hidden or not.
 That isn't his decision. If the filter does not meet his expectations
 and he does not use it, then we gained nothing, despite the massive
 effort taken by us to flag all the images. You should know
 Who is this we you are talking about? No one is going to force anyone
 to categorize images. If some people want to categorize images, and if
 their effort turns out to be in vain, again that is Their Problem and
 not Your Problem.
It is wasted time for them as well as for us, since they are most likely 
editors and thus part of us. If they waste their time on categorization, 
then it is lost time that could be spent on article improvement or 
invested in better alternatives that are illustrative as well as not 
offensive.
 easily exploited by your local provider to hide labeled content, so that
 you don't have any way to view it, even if you want to.
 Depending on the way it is implemented, it may be somewhat difficult for
 a provider to do that. Such systems probably already exist on some
 websites, and I am not aware of my provider using them to hide labelled
 content. And even if my provider would start doing that, I could simply
 use Wikipedia over https.
 If your provider is a bit clever, he would block https and filter the
 rest - a relatively easy job to do. Additionally, most people would not
 know the difference between https and http, using the default http version.
 If my provider ever blocks https, I am changing my provider. If in some
 country all providers block https, these people have bigger problems
 than images on Wikipedia (that would likely be forbidden anyway).
You can do that. But there are many regions in the world that depend 
on one local provider that is even regulated by the local 
government/regime/... . Since the filter was proposed as a tool to help 
expand Wikipedia in these weak regions, it could just as well be 
counterproductive - for the weak regions as well as for stronger ones. 
Are you willing to implement such a feature without thinking about the 
possible outcome?
 And if providers across the world start abusing the filter, perhaps then
 the filter could be turned off. I just don't see this as a reasonable
 possibility.
 Well, we don't have to agree on this point. I think that this is
 possible with very little effort, especially since images aren't
 embedded inside the same document and are not served using https.
 Images should be served using https anyway.
It isn't done for performance reasons. It is much more expensive to 
handle encrypted content, since caching isn't possible, and Wikipedia 
strongly depends on caching. It would cost a lot of money to do so. 
(Effort vs result.)
 If you want a filter so badly, then install parental software, close
 It is my understanding that parental software is often too overarching
 or otherwise inadequate.
 The same would go for a category/preset-based filter. You and I mentioned
 above that it isn't necessarily better from the perspective of the user,
 leading to few users while wasting our time on it.
 I believe a filter that is adjusted specifically to Wikimedia projects
 would work much better than parental software that has to work across
 the entire Internet. Anyway, I don't see why would anyone have to waste
 time over it.
That is a curious point. People who are so offended by Wikipedia 
content that they don't want to read it visit the WWW, with all its 
much darker corners, without personal filter software? Why does it 
sound so one-sided?
 your eyes or don't visit the page. That is up to you. That is your
 If I close my eyes or don't visit the page, I won't be able to read the
 content of the page.
 That is the point where a hide all/nothing filter would jump in. It
 would let you read the page without any worries. No wrongly categorized
 image would show up, and you would still have the option to show images
 in which you are interested.
 If I would use a hide all/nothing filter, I wouldn't be able to see
 non-offensive relevant images by default. No one is going to use that.
It is meant as a tool that you activate as soon as you want to read 
about controversial content. If you have arachnophobia and want to 
inform yourself about spiders, then you would activate it. If you have 
no problem with other topics (e.g. physics, landscapes, ...) then you could

Re: [Foundation-l] News from Germany: White Bags and thinking about a fork

2011-10-23 Thread Tobias Oelgarte
Am 23.10.2011 17:24, schrieb Andrew Garrett:
 On Sun, Oct 23, 2011 at 8:27 AM, David Gerard dger...@gmail.com  wrote:
 A neutral all-or-nothing image filter would not have such side effects
 (and would also neatly help low bandwidth usage).
 It would also make the project useless. I don't want to see the 0.01%
 (yes, rhetorical statistics again) images of medical procedures, and
 I'd avoid seeing the (much higher) X% of images that are NSFW while in
 public. That does not mean that I want to throw the baby out with the
 bathwater and not see any images whatsoever.

 Given the choice, I would not use such a filter.

 We have the technology and the capacity to allow users to make nuanced
 decisions about what they do and don't want to see. Why is this a
 problem?

At some point I should set up a record player looping the same thing 
over and over again, or set up a FAQ.

We don't have the technology to do this. It comes down to the personal 
preferences of the editors who do the categorization. Some might agree 
with their choices, others won't. But who are we to judge content, or 
other people and their personal preferences and taste? That's what we 
start to do as soon as we introduce 
controversial/offensive-category-based filtering. That was never the 
mission of the project, and hopefully it never will be.

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] News from Germany: White Bags and thinking about a fork

2011-10-23 Thread Tobias Oelgarte
Am 23.10.2011 19:32, schrieb Ilario Valdelli:
 On 23.10.2011 19:05, Tobias Oelgarte wrote:
 The German poll made clear that no category-based filter will be
 allowed, since category-based filtering is unavoidably non-neutral and a
 censorship tool.
 Who the hell are you to forbid me or allow me to use a piece of
 software? I want to use this category based filter, even if it is
 unavoidably non-neutral and a censorship tool. And now what?

 We are the majority of the contributors that make up the community. We
 decided that it won't be good for the project and its goals. We don't
 forbid you to use your *own* filter. But we don't want a filter to be
 imposed on the project, because we think that it is not for the benefit
 of the project. Period.

 nya~

 Which project? de.wikipedia or Commons?

 If the filter will be applied to Commons, I assume that de.wikipedia
 must be conform with the decision of the other communities.

 Ilario
That does not mean that the German community is willing to show a button 
on its pages to enable it. It will just be disabled, and all 
flagged/marked/categorized/discriminated/... images will be copied from 
Commons to the local project to remove the flagging, if necessary.

Alternatively the project could think about forking, which would spare 
the German Verein the yearly hassle of calculating the spending and 
handing the corresponding money over to the Foundation...

But it's nice to see that the per-project results of the filter 
referendum are released. They are as expected. The average for 
importance ranges from 3.34 to 8.17 on a scale from 0 to 10. That means 
that individual projects have very different viewpoints on this topic 
and very different needs.

http://meta.wikimedia.org/wiki/Image_filter_referendum/Results/en#Appendix_2

There is no way that this result could justify the approach of imposing 
a global image filter on all projects. We also have to ask the question: 
What will happen to Commons, which is shared by all projects?

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] News from Germany: White Bags and thinking about a fork

2011-10-22 Thread Tobias Oelgarte
Whether something is useful or not shouldn't be the question. At least 
the WMF seems to see it that way, because it is very doubtful that the 
image filter is useful for the project, its goals, growth and development.

I would invite the Board to view the movie Schoolbreak Special: The Day 
They Came To Arrest The Book. Well - I know it is old and I know it 
isn't that deep. But in some way it wraps up all the ill logic behind 
the current discussions. If you have a copy, maybe at your local 
library, then you should watch it. For everyone else there is still YouTube:

http://youtu.be/Pt_n3cBYCVA
http://youtu.be/Z7qoo4kbcV4
http://youtu.be/5pguP16g5NM
http://youtu.be/4EtKZbEDKl0

nya~

Am 22.10.2011 20:52, schrieb emijrp:
 So, we are going to have virtually two cloned German Wikipedias, one
 with the image filter extension enabled and the other disabled. Not very
 useful, but it is your choice.

 I hope you enable the Semantic MediaWiki extension in the new fork.

 Good luck.

 2011/10/22 Dirk Frankedirkingofra...@googlemail.com

 Dear Mailinglists,

 the culturally homogeneous group of Germans tends to discuss in German.
 So to give you a short update on what is happening:

 A White Bag protest movement against the image filter is forming.

 And people who talked privately about a fork for some time, start to think
 and say it loud.

 In longer:

 http://www.iberty.net/2011/10/news-from-german-wikipedia-white-bag.html

 regards,

 Dirk Franke/Southpark
 ___
 foundation-l mailing list
 foundation-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l

 ___
 foundation-l mailing list
 foundation-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l



___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] News from Germany: White Bags and thinking about a fork

2011-10-22 Thread Tobias Oelgarte
Am 22.10.2011 22:16, schrieb David Gerard:
 Unless nuances of the translation are inaccurate - is this the case?
 Do you see wiggle room in the original German phrasing?
There is no room for interpretation. It clearly says that no 
category-based filtering of any illustrative media will be accepted.

   Filters, for illustrative media based on categories that can
be enabled or disabled by the readers, ...

   Filter, die illustrierende Dateien anhand von Kategorien der
Wikipedia verbergen und vom Leser an- und abgeschaltet werden
können, ...

This also includes that there will be no filter-categorization of any
media stored inside the local project.

   ... and there shall not be any filter categories for files/media stored
localy on this Wikipedia.

   ... und es sollen auch keine Filterkategorien für auf dieser
Wikipedia lokal gespeicherte Dateien angelegt werden.

 I suspect (I have no direct evidence) that the glaring lack of the
 should we actually have this at all? question on the referendum
 generated a backlash. It's not clear to me how to correct this mistake
 - I fully accept and understand the process by which the referendum
 questions were generated (quickly dashed-off by three people without
 running them past anyone else), and that there was no intent
 whatsoever to spin the result - but from the outside view, having
 people take them as intended in bad faith is, unfortunately, entirely
 natural.
Correct. The referendum itself was criticized for its manipulative 
wording. This does not only apply to the DE community. Here are some examples:

http://es.wikipedia.org/wiki/Wikipedia:Caf%C3%A9/Portal/Archivo/Noticias/2011/08#Referendo_sobre_filtro_de_im.C3.A1genes

http://it.wikipedia.org/wiki/Wikipedia:Bar/Discussioni/Image_filter_referendum#Riassunto_delle_puntate_precedenti

 I also have to note that Sue's blog post was profoundly ill-considered
 at best - it has left a lot of people feeling highly insulted, and
 reads like an official staff stance to ignore opposition to the
 filter. Using the tone argument was, I think, the fatal element - when
 the powerful side of a dispute pulls out the tone argument, it may not
 actually neatly divide the powerless side; instead, the claimed
 non-targets may get just as offended by it as the claimed targets (and
 this is what happened), and take it as the nuclear option it is (and
 this is what has happened).

 It is not clear in what world any of this was ever a good idea.


 - d.

It was clearly insulting to everyone who participated in the opposition 
to simply be ignored, despite the arguments and project policies.

It would be even more insulting to ask the German community to work out 
a filter proposal. All you can expect is a white bag or an empty page. 
The decision is clear: No filter at all!

(filter = selective display of content)

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] News from Germany: White Bags and thinking about a fork

2011-10-22 Thread Tobias Oelgarte
Am 22.10.2011 22:21, schrieb Erik Moeller:
 On Sat, Oct 22, 2011 at 1:16 PM, David Gerard dger...@gmail.com  wrote:
 This would appear to indicate the opposition is to *any* personal
 image filter per the Board resolution, and the category-based proposal
 additionally as an example of such rather than as the main topic of
 the vote. I think that says should be scrapped pretty blindingly
 clearly.
 The literal translation of what was being voted on:

 Persönliche Bildfilter (Filter, die illustrierende Dateien anhand von
 Kategorien der Wikipedia verbergen und vom Leser an- und abgeschaltet
 werden können, vgl. den vorläufigen [[Entwurf]] der Wikimedia
 Foundation) sollen entgegen dem Beschluss des Kuratoriums der
 Wikimedia Foundation in der deutschsprachigen Wikipedia nicht
 eingeführt werden und es sollen auch keine Filterkategorien für auf
 dieser Wikipedia lokal gespeicherte Dateien angelegt werden.

 Personal image filters (filters, which hide illustrating files based
 on categories and which can be turned on and off by the reader, see
 the preliminary [[draft]] by the Wikimedia Foundation) should,
 contrary to the Board's decision, not be introduced in the German
 Wikipedia, and no filter categories should be created for locally
 uploaded content.

 The [[draft]] link pointed to
 http://www.mediawiki.org/wiki/Personal_image_filter

 So it was pretty closely tied to the mock-ups, just like the referendum was.

 Erik
It is strongly worded against any filtering based on categories. The 
referendum proposals were only mentioned as an illustrative example. 
Please refrain from weakening the point the poll made. Otherwise we will 
have to set up another poll with very strong wording like: Es soll 
verboten werden Inhalte jeglicher Art in irgendeiner Weise zu Filtern, 
wenn dabei nicht alle Inhalte gleich behandelt werden.

It shall be forbidden to filter content of any kind by any method, if 
it does not treat all content as equal.

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] News from Germany: White Bags and thinking about a fork

2011-10-22 Thread Tobias Oelgarte
Am 22.10.2011 22:31, schrieb Erik Moeller:

 What am I proposing, Jussi-Ville? So far, the only material proposal
 I've made as part of this debate is here:

 http://lists.wikimedia.org/pipermail/foundation-l/2011-September/069077.html

 And, I don't think you're being accurate, historically or otherwise.
 Arabic and Hebrew Wikipedia have implemented their own personal image
 hiding feature (http://ur1.ca/5g81t and http://ur1.ca/5g81w), and
 even paintings like The Origin of the World are hidden by default
 (!) e.g. in Hebrew Wikipedia ( http://ur1.ca/5g81c ) , or images of
 the founder of the Bahai faith in Arabic Wikipedia (
 http://ur1.ca/5g81s ).

 Do you think that the Hebrew and Arabic Wikipedians who implemented
 these templates are evil?

 Do you think that it is evil to leave it up to editors whether they
 want to implement similar collapsing on a per-article basis (and to
 leave it up to communities to agree on policies around that)? Because
 that's what I'm proposing. And I don't think it's particularly evil,
 nor inconsistent with our traditions.

 Erik
No one said it would be evil. But since we already have working 
solutions for these projects, why do we need another, now global, 
solution based on categories? That's when it becomes hairy.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] News from Germany: White Bags and thinking about a fork

2011-10-22 Thread Tobias Oelgarte
Am 22.10.2011 23:23, schrieb Nikola Smolenski:
 On Sat, 2011-10-22 at 21:16 +0100, David Gerard wrote:
 Both the opinion poll itself and its proposal were accepted. In
 contrary to the decision of the Board of Trustees of the Wikimedia
 Foundation, personal image filters should not be introduced in
 German-speaking wikipedia and categories for these filters may not be
 created for files locally stored on this wikipedia. 260 of 306 users
 (84.97 percent) accepted the poll as to be formally valid. 357 of 414
 users (86.23 percent) do not agree to the introduction of a personal
 image filter and categories for filtering in German wikipedia.
 I wanted to say this for a long time, and now seems like a good
 opportunity. I see this as a tyranny of the majority. I understand that
 a large majority of German Wikipedia editors are against the filter. But
 even if 99.99% of editors are against the filter, well, it is opt-in and
 they don't have to use it. But why would they prevent me from using it,
 if I want to use it?

Why? Because it is against the basic rules of the project. It is 
intended to discriminate against content - to judge it and to present 
you with this judgment before you have even looked at it. Additionally 
it can be easily exploited by your local provider to hide labeled 
content, so that you don't have any way to view it, even if you want to.

If you want a filter so badly, then install parental software, close 
your eyes or don't visit the page. That is up to you. That is your 
freedom, your judgment, and not the judgment of others.

PS: If it weren't in this place, I would call your contribution 
trolling. But feel free to read the arguments: 
http://de.wikipedia.org/wiki/Wikipedia:Meinungsbilder/Einf%C3%BChrung_pers%C3%B6nlicher_Bildfilter/en#Arguments_for_the_proposal

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] News from Germany: White Bags and thinking about a fork

2011-10-22 Thread Tobias Oelgarte
Am 22.10.2011 23:44, schrieb Erik Moeller:
 On Sat, Oct 22, 2011 at 1:50 PM, Tobias Oelgarte
 tobias.oelga...@googlemail.com  wrote:
 No one said it would be evil. But since we already have working
 solutions for these projects, why do we need another, now global,
 solution based on categories? That's when it becomes hairy.
 The Board of Trustees didn't pass a resolution asking for the
 implementation of a filter based on categories.

 The Board asked Sue in consultation with the community, to develop
 and implement a personal image hiding feature that will enable readers
 to easily hide images hosted on the projects that they do not wish to
 view, either when first viewing the image or ahead of time through
 preference settings.

 Based on the consultation and discussion that's taken place so far, I
 think it's pretty safe to say that a uniform approach based on
 categories has about a snowball's chance in hell of actually being
 widely adopted, used and embraced by the community, if not triggering
 strong opposition and antagonism that's completely against our goals
 and our mission.

 With that in mind, I would humbly propose that we kill with fire at
 this point the idea of a category-based image filtering system.

 There are, however, approaches to empowering both editors and readers
 that do not necessarily suffer from the same problems.

 Erik
I gladly agree that category-based filtering should be off the table. It 
has way too many problems for us to justify it in any way.

What approaches do you have in mind that would empower the editors and 
the readers, aside from a hide/show-all solution?

nya~


___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] News from Germany: White Bags and thinking about a fork

2011-10-22 Thread Tobias Oelgarte
Am 23.10.2011 00:13, schrieb Erik Moeller:
 On Sat, Oct 22, 2011 at 2:51 PM, Tobias Oelgarte
 tobias.oelga...@googlemail.com  wrote:
 What approaches do you have in mind that would empower the editors and
 the readers, aside from a hide/show-all solution?
 1) Add a collapsible [*] parameter to the File: syntax, e.g.
 [[File:Lemonparty.jpg|collapsible]].
 2) When present, add a notice [*] to the top of the page enabling the
 reader to collapse collapsible images (and to make that the default
 setting for all pages if desired).
 3) When absent, do nothing.

 [*] The exact UI language here could be discussed at great length, but
 is irrelevant to the basic operating principles.

 Advantages:
 * Communities without consensus to use collapsible media don't have to
 until/unless such a consensus emerges. It can be governed by normal
 community policy.
 * One community's judgments do not affect another community's.
 Standards can evolve and change over time and in the cultural context.
 * Readers of projects like Hebrew and Arabic Wikipedia (which are
 already collapsing images) who are currently not empowered to choose
 between collapsed by default vs. expanded by default would be
 enabled to do so.
 * Readers only encounter the notice on pages that actually have
 content where it's likely to be of any use.
 * Respects the editorial judgment of the community, as opposed to
 introducing a parallel track of controversial content assessment.
 Doesn't pretend that a technical solution alone can solve social and
 editorial challenges.
 * Easy to implement, easy to iterate on, easy to disable if there are issues.

 Disadvantages:
 * Doesn't help with the specific issues of Wikimedia Commons (what's
 educational scope) and with issues like sorting images of masturbation
 with electric toothbrushes into the toothbrush category. Those are
 arguably separate issues that should be discussed separately.
 * Without further information about what our readers want and don't
 want, we're reinforcing pre-existing biases (whichever they may be) of
 each editorial community, so we should also consider ways to
 continually better understand our audience.
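
To picture the reader-facing half of this, a hedged sketch (the 
collapsible-image class name is invented; the parser work in step 1 
would happen server-side and is not shown):

    // Sketch of reader-side handling for editor-marked collapsible images.
    // Assumes the parser emitted class="collapsible-image" for
    // [[File:...|collapsible]]; that class name is an invention.
    function collapseMarkedImages(collapsed: boolean): void {
      document
        .querySelectorAll<HTMLImageElement>("img.collapsible-image")
        .forEach((img) => {
          img.style.display = collapsed ? "none" : "";
        });
    }

    // Step 2: show the notice only on pages that contain such images.
    const marked = document.querySelectorAll("img.collapsible-image");
    if (marked.length > 0) {
      const notice = document.createElement("div");
      notice.textContent = "This page contains collapsible images. ";
      const btn = document.createElement("button");
      let collapsed = false;
      btn.textContent = "Collapse them";
      btn.addEventListener("click", () => {
        collapsed = !collapsed;
        collapseMarkedImages(collapsed);
        btn.textContent = collapsed ? "Show them" : "Collapse them";
      });
      notice.appendChild(btn);
      document.body.prepend(notice);
    }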

 Erik

 ___
 foundation-l mailing list
 foundation-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l

Isn't that the same as putting some images inside the category 
inappropriate content? Will it not leave the impression with the 
reader that we think this is something not everybody should see? Can it 
be easily used by providers to filter out these images?

I would add the answers to these questions to the disadvantages.

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] News from Germany: White Bags and thinking about a fork

2011-10-22 Thread Tobias Oelgarte
Am 23.10.2011 01:57, schrieb Billinghurst:
 On 22 Oct 2011 at 15:36, Erik Moeller wrote:

 On Sat, Oct 22, 2011 at 2:56 PM, David Gerard dger...@gmail.com  wrote:
 On 22 October 2011 22:51, Tobias Oelgarte
 And, in detail, why is a hide/show all solution inadequate? What is
 the use case this does not serve?
 A show/hide all images function is likely too drastic to serve some of
 these use cases well. So for example, if you're at work, you might not
 want to have autofellatio on your screen by accident, but you'd be
 annoyed at having to un-hide a fabulous screenshot of a wonderful
 piece of open source software in order to mitigate that risk.

 Plus for the occasions that some kind vandal adds similar images to your user
 talk page, so that you don't even know or have control over what is being
 displayed, let alone have an ability to stop it.  An unfortunate eye opener
 in the workplace, or similarly at home when working with the family.  :-/

 I do wish that this discussion could just move to implementation. This is
 about what I get to filter, what I get to see, and when I get to see it. I
 have had enough of other people believing that they get to make these
 choices for me.

 Regards, Andrew
The idea isn't bad. But it is based on the premise that there are enough 
users of the filter to build such correlations. It requires enough input 
to work properly, and therefore enough users of the feature who have 
longer lists. But how often does an average logged-in user find such an 
image and act accordingly? Relatively seldom, resulting in a very short 
personal list, kept by relatively few users, which makes it hard to 
start the system (warm-up time).

Since I love to find ways to exploit systems, there is one simple 
thing on my mind. Just log in to put a picture of penis/bondage/... on 
the list and then add another one of the football team you don't like. 
Repeat this step often enough and the system will believe that all users 
who don't like to see a penis would also not like to see images of that 
football team.

Another way would be: I find everything offensive. This would hurt the 
system, since correlations would be much harder to find.

If we assume good faith, then it would probably work. But as soon as we 
have spammers of this kind, it will lie in ruins, considering the number 
of users and the correspondingly short lists (on average).

Just my thoughts on this idea.

Greetings
nya~




___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] category free image filtering

2011-10-22 Thread Tobias Oelgarte
Am 23.10.2011 01:49, schrieb WereSpielChequers:
 Hi Tobias,

 Do you have any problems with this category-free proposal?
 http://meta.wikimedia.org/wiki/User:WereSpielChequers/filter

 WereSpielChequers
The idea isn't bad. But it is based on the premise that there are enough 
users of the filter to build such correlations. It requires enough input 
to work properly, and therefore enough users of the feature who have 
longer lists. But how often does an average logged-in user find such an 
image and act accordingly? Relatively seldom, resulting in a very short 
personal list, kept by relatively few users, which makes it hard to 
start the system (warm-up time).

Since I love to find ways to exploit systems, there is one simple 
thing on my mind. Just log in to put a picture of penis/bondage/... on 
the list and then add another one of the football team you don't like. 
Repeat this step often enough and the system will believe that all users 
who don't like to see a penis would also not like to see images of that 
football team.

Another way would be: I find everything offensive. This would hurt the 
system, since correlations would be much harder to find.

If we assume good faith, then it would probably work. But as soon as we 
have spammers of this kind, it will lie in ruins, considering the number 
of users and the correspondingly short lists (on average).

Just my thoughts on this idea.

Greetings
nya~

PS: Sorry for my mispost. I sent exactly this to someone else, quoting 
the wrong text :P

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-20 Thread Tobias Oelgarte
Am 19.10.2011 23:19, schrieb Philippe Beaudette:
 On Wed, Oct 19, 2011 at 5:07 AM, Tobias Oelgarte
 tobias.oelga...@googlemail.com  wrote:

 I ask Sue and Philippe again: WHERE ARE THE PROMISED RESULTS - BY PROJECT?!


 First, there's a bit of a framing difference here.  We did not initially
 promise results by project.  Even now, I've never promised that. What I've
 said is that we would attempt to do so.  But it's not solely in the WMF's
 purview - the election had a team of folks in charge of it who came from the
 community and it's not the WMF's role to dictate to them how to do their
 job.

 I (finally) have the full results parsed in such a way as to make it *
 potentially* possible to release them for discussion by project.  However,
 I'm still waiting for the committee to approve that release.  I'll re-ping
 on that, because, frankly, it's been a week or so.  That will be my next
 email. :)

 pb

Don't get me wrong, but this should have been part of the results in 
the first place. The first calls for such results go back to before the 
referendum even started. [1] That leaves a very bad impression, and so 
far the WMF has done nothing to regain any trust. Instead you started to 
lose even more. [2]

[1] 
http://meta.wikimedia.org/wiki/Talk:Image_filter_referendum/Archive1#Quantification_of_representation_of_the_world-wide_populace
[2] 
http://meta.wikimedia.org/wiki/User_talk:WereSpielChequers/filter#Thanks_for_this_proposal.2C_WereSpielCheqrs

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-19 Thread Tobias Oelgarte
Am 19.10.2011 11:07, schrieb Andrew Garrett:
 On Wed, Oct 19, 2011 at 7:59 PM, Jussi-Ville Heiskanen
 cimonav...@gmail.com  wrote:
 Yes, but that is not proof of what we as a community understand the
 principle to mean, it means the board is on crack.
 That's not a helpful contribution to this discussion.

But if I look at the current reactions, some might agree with this point 
of view. So far I have not seen any reaction providing sufficient 
information that would strengthen the argumentation of the WMF or the 
Board. All we are presented with are assumptions about what the problem 
might be and whether it exists at all. There was not a single study 
directed at the readers, particularly not a single one directed at a 
diverse, multicultural audience. All we got is the worthless result of 
the referendum.

I ask Sue and Philippe again: WHERE ARE THE PROMISED RESULTS - BY PROJECT?!

I asked for this shit multiple months ago. I repeated my request on a 
daily/weekly basis. All I got wasn't even a T-shirt; it was nothing. 
That makes people like me very angry and lets me believe that the WMF is 
either trying to hide the facts to push their own point of view, or that 
they are entirely incompetent. Alternatively they are just busy counting 
the money...

I lost all trust in the Foundation, and I believe that they would sell 
out the basic idea of the project whenever possible. Knowledge + 
principle of least astonishment, applied to everything, no matter what 
the facts are? Then you truly did not understand the foundation of 
knowledge. Knowledge is interesting because it is shocking. It destroys 
your own sand-castle world on a daily basis.

Hard words? Yes, these are hard words, based upon the current situation 
and reactions. All we got were messages to calm down, while nothing 
changes. Now we read on some back pages (discussions spread out 
everywhere) that there will be a test run to invite the readers to flag 
images. Another measure to improve acceptance once the filter is 
enabled, another study based on an English-speaking-only 
community/audience to make it the rule of thumb for every project? That 
seems to be the case. But where does all this will to implement a filter 
come from? No one has said it clearly, no one has published a reliable 
source (the Harris report, a true insider joke), and you expect us to 
believe this shit?

The referendum was a farce, and the new approach is again a farce. The 
only way left to assume good faith is to claim that they are on crack. 
Anything else would be worse.

nya~



___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Tobias Oelgarte
Am 18.10.2011 09:57, schrieb Tom Morris:
 On Tuesday, October 18, 2011, Thomas Morton wrote:

 On 17 Oct 2011, at 09:19, Tobias Oelgarte
 tobias.oelga...@googlemail.com  wrote:
 I have no problem with any kind of controversial content. Showing the
 progress of fisting on the main page? No problem for me. Reading your
 comments? No problem for me. Reading your insults? Also no problem. The
 only thing I did was the following: I told you that I will no longer
 react to your comments if they are worded in the manner they currently
 are.

 Figuratively: I feel free to open your book and start to read. If it is
 interesting and constructive I will continue to read it, and I will
 respond to you to share my thoughts. If it is purely meant to insult,
 without any other meaning, then I will get bored and skim over the lines,
 reading only half or less. I also have no intention of sharing my
 thoughts with the author of this book. Why? I have nothing to talk
 about. Should I complain about its content? Which content anyway?

 Give it a try. Make constructive arguments and explain your thoughts.
 There is no need for strong wording if the construction of the words
 itself is strong.

 nya~
 And that is a mature and sensible attitude.

 Some people do not share your view and are unable to ignore what to
 them are rude or offensive things.

 Are they wrong?

 Should they be doing what you (and I) do?


 I share the same attitude. I'm pretty much immune to almost anything you can
 throw at me in terms of potentially offensive content.

 But, despite this enlightenment, I am not an island. I use my computer in
 public places: at the workplace, in the university library, on the train, at
 conferences, and in cafes.

 I may have been inured to 'Autofellatio6.jpg', but I'm not sure the random
 person sitting next to me on the train needs to see it. Being able to read,
 edit and patrol Wikipedia in public without offending the moral
 sensibilities of people who catch a glance at my laptop screen would be a
 feature. Being able to click 'Random page' without the chance of a public
 order offence flowing from it would also be pretty nifty.
But that is exactly the typical scenario that does not need a 
category-based filtering system. There are many other proposed solutions 
that would handle exactly this case without the need for any 
categorization. The hide all images feature would already be a good 
option. An improved version is the blurred/pixelated images feature, 
where you enter the hide/distort/... mode and no image is visible in 
detail as long as you don't hover over it or click on it.

Still, we are discussing filter categories and their necessity. In your 
example no categorization is needed at all to provide a well-working 
solution.

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Tobias Oelgarte
Am 18.10.2011 14:00, schrieb Thomas Morton:
 On 18 October 2011 11:56, Tobias Oelgarte
 tobias.oelga...@googlemail.com  wrote:

 That controversial content is hidden or that we
 provide a button to hide controversial content is prejudicial.

 I disagree on this, though. There is a balance between encouraging people to
 question their views (and, yes, even our own!) and giving them no option but
 to accept our view.

 This problem can be addressed via wording related to the filter and
 avoidance of phrases like controversial, problematic etc.

 I disagree very strongly with the notion that providing a button to hide
 material is prejudicial.
That comes down to the two layers of judgment involved in this proposal. 
First we give them the option to view everything, and we give them the 
option to view not everything. The problem is that we have to define 
what not everything is. This imposes our judgment on the reader. That 
means that even if the reader decides to hide some content, it was our 
(and not his) decision what is hidden.

This comes down to two cases:

1. If he does not use the filter, then - as you say - we impose our 
judgment on the reader.
2. If he does use the filter, then - as I say - we impose our judgment 
on the reader as well.

Both cases seem to be equal. No win or loss with or without the filter. 
But there is a slight difference.

If we treat nothing as objectionable (no filter), then we don't need to 
play the judge. We say: We accept anything; it's up to you to judge.
If we start to add a category-based filter, then we play the judge 
over our own content. We say: We accept anything, but this might not be 
good to look at. Now it is up to you to trust our opinion or not.

The latter imposes our judgment on the reader, while the former makes no 
judgment at all and leaves everything to the free mind of the reader. 
(Free mind means that the reader has to find his own answer to this 
question. He might object or he might agree.)
 It deepens the viewpoint that this content is objectionable and that it is
 generally accepted this way, even if it is not. That means that we would be
 patronizing the readers who have a tendency to enable a filter (not even
 particularly an image filter).

 This is a reasonable objection; and again it goes back to this idea of how
 far do we enforce our world view on readers. I think that there are ways
 that a filter could be enabled that improves Wikipedia for our readers
 (helping neutrality) and equally there are ways that it could be enabled
 that adversely affect this goal.

 So if done; it needs to be done right.
The big question is: Can it be done right?

A filter that only knows a yes or no to questions that are influenced 
by different cultural views seems to fail right away. It draws a sharp 
line through everything, ignoring the fact that even within one culture 
there are a lot of border cases. I did not want to use examples, but I 
will still give one. Say we have a photograph of a young woman at the 
beach. How would we handle the case that her swimsuit shows a lot of 
naked flesh? I'm sure more than 90% of western countries' citizens would 
have no objection against this image if it is inside a corresponding 
article. But as soon as we go to other cultures, let's say Turkey, we 
might find very different viewpoints on whether this should be hidden by 
the filter or not. I remember the question in the referendum whether the 
filter should be culturally neutral. Many agreed on this point. But how 
in God's name should this be done? Especially: How can this be done right?

 ... and that is exactly what makes me curious about this approach. You
 assume that we aren't neutral, and Sue described us as, on average, a little
 bit geeky, which goes in the same direction.

 We are not; over time it is fairly clear that we reflect certain world
 views. To pluck an example out of thin air - in the 9/11 article there is
 extremely strong resistance to adding a see also link to the article on
 9/11 conspiracies. This reflects a certain bias/world view we are imposing.
 That is an obvious example - there are many more.

 The bias is not uniform; we have various biases depending on the subject -
 and over time those biases can swing back and forth depending on
 the prevalent group of editors at that time. Many of our articles have
 distinctly different tone/content/slant to foreign language ones (which is a
 big giveaway IMO).

 Another example: English Wikipedia has a pretty strong policy on BLP
 material that restricts a lot of what we record - other language Wikis do
 not have the same restrictions, and things we would not consider noting (such
 as non-notable children's names) are not considered a problem on other Wikis.


 But if we aren't neutral at
 all, how can we even believe that a controversial-content filter system
 based upon our views would be neutral in judgment or, as proposed in the
 referendum, "culturally neutral"? (Question: Is there even such a thing as
 cultural neutrality?)

 No; this is the 

Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Tobias Oelgarte
Am 18.10.2011 17:23, schrieb Thomas Morton:
 That comes down to the two layers of judgment involved in this proposal.
 First we give them the option to view anything, and we give them the
 option to view "not anything". The problem is that we have to define what
 "not anything" is. This imposes our judgment on the reader. That means
 that even if the reader decides to hide some content, it was our
 (and not his) decision what is hidden.

 No; because the core functionality of a filter should always present the
 choice do you want to see this image or not. Which is specifically not
 imposing our judgement on the reader :) Whether we then place some optional
 preset filters for the readers to use is certainly a matter of discussion -
 but nothing I have seen argues against this core ideas.
Yes; because even the provision of a filter implies that some content is 
seen as objectionable and treated differently from other content. This is 
only unproblematic as long as we don't provide default settings, aka 
categories, which introduce our judgment to the readership. The mere 
fact that our judgment is visible is already enough to influence what the 
reader sees as objectionable or not. This scenario is very much 
comparable to an unknown man sitting behind you, glancing at your 
screen while you try to inform yourself. Just the thought that someone 
else could be upset is already an issue. Having us directly 
show/indicate what we think others see as objectionable is even 
stronger.
 If we treat nothing as objectionable (no filter), then we don't need to
 play the judge. We say: We accept anything, it's up to you to judge.
 If we start to add a category based filter, then we play the judge
 over our own content. We say: We accept anything, but this might not be
 good to look at. Now it is up to you to trust our opinion or not.

 Implementing a graded filter, one which lets you set grades of visibility
 rather than off/on, addresses this concern - because once again it gives the
 reader ultimate control over the question of what they want to see. If they
 are seeing too much for their preference they can tweak up, and vice
 versa.
This would imply that we, the ones that are unable to handle content 
neutrally, would be perfect at categorizing images by fine-grained 
degrees of nudity. But even having multiple steps would not be a 
satisfying solution. There are many cultural regions which differentiate 
strongly between men and women. While they would have no problem seeing 
a man in just his boxer shorts, it would be seen as offensive to show a 
woman with open hair. I wonder what effort it would take to accomplish 
this goal (if it is even possible), compared to the benefits.
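
To illustrate what is actually being proposed here, a graded filter 
reduces to something like the following sketch (TypeScript; the numeric 
grades are hypothetical, and assigning them is exactly the non-neutral 
judgment in question):

// Hypothetical sketch of a "graded" visibility filter. The grade scale
// (0 = innocuous ... 3 = explicit) is an assumption, not any agreed
// standard - and deciding that a given image is a 1 rather than a 3 is
// precisely the cultural judgment under dispute.
interface GradedImage { title: string; grade: number; }

function visibleImages(images: GradedImage[], threshold: number): GradedImage[] {
  // Show an image only if its assigned grade does not exceed the
  // reader's chosen threshold.
  return images.filter((img) => img.grade <= threshold);
}

// Example: who decides these numbers, and for which culture?
const sample: GradedImage[] = [
  { title: "File:Woman_at_beach.jpg", grade: 1 },
  { title: "File:Landscape.jpg", grade: 0 },
];
console.log(visibleImages(sample, 0)); // only the landscape remains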

 The latter imposes our judgment on the reader, while the former makes no
 judgment at all and leaves everything to the free mind of the reader. ("Free
 mind" means that the reader has to find his own answer to this
 question. He might have objections or could agree.)

 And if he objects, we are then just ignoring him?

 I disagree with your argument; both points are imposing our judgement on the
 reader.
If _we_ do the categorization, then we impose our judgment, since it was 
us who made the decision. It is not a customized filter where the user 
decides what is best for himself. Showing everything might not be ideal 
for all readers. Hiding more than preferred might also not be ideal for 
all readers. Hiding less than preferred is just another non-ideal case. 
We can't meet everyone's taste, just as no book can meet everyone's taste. 
While Harry Potter seems to be fine in many cultures, in some there 
might be parts that are seen as offensive. Would you hide/rewrite parts 
of Harry Potter to make them all happy, or would you go after the 
majority of the market and ignore the rest?

There is one simple way to deal with it. If someone does not like our 
content, then he doesn't need to use it. If someone does not like the 
content of a book, he does not need to buy it. He can complain about it. 
That's what Philip Pullman meant with: "No one has the right to live 
without being shocked."
 Agreed; which is why we allow people to filter based on a sliding scale,
 rather than a discrete yes or no. So someone who has no objection to such an
 image, but wants to hide people having sex can do so. And someone who wants
 to hide that image can have a stricter grade on the filter.

 If nothing else the latter case is the more important one to address;
 because sexual images are largely tied to sexual subjects, and any
 reasonable person should expect those images to appear. But if culturally
 you object to seeing people in swimwear then this could be found in almost
 any article.

 We shouldn't judge those cultural objections as invalid.  Equally we
 shouldn't endorse them as valid. There is a balance somewhere between those
 two extremes.
Yes, there is a balance between two extremes. But whoever said that the 
center between two opinions is seen 

Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Tobias Oelgarte
On 18.10.2011 19:04, Andreas Kolbe wrote:

 From: Tobias Oelgartetobias.oelga...@googlemail.com
 On 18.10.2011 11:43, Thomas Morton wrote:
 It is this fallacious logic that underpins our crazy politics of
 neutrality which we attempt to enforce on people (when in practice we lack
 neutrality almost as much as the next man!).
 ... and that is exactly what makes me curious about this approach. You
 assume that we aren't neutral, and Sue described us as, on average, a little
 bit geeky, which goes in the same direction. But if we aren't neutral at
 all, how can we even believe that a controversial-content filter system
 based upon our views would be neutral in judgment or, as proposed in the
 referendum, "culturally neutral"? (Question: Is there even such a thing as
 cultural neutrality?)
 Who said that the personal image filter function should be based on *our* 
 judgment? It shouldn't. 

 As Wikipedians, we are used to working from sources. In deciding what content 
 to include, we look at high-quality, educational sources, and try to reflect 
 them fairly. 

 Now, given that we are a top-10 website, why should it not make sense to look 
 at what other large websites like Google, Bing, and Yahoo allow the user to 
 filter, and what media Flickr and YouTube require opt-ins for? Why should we 
 not take our cues from them? The situation seems quite analogous.

 As the only major website *not* to offer users a filter, we have more in 
 common with 4chan than the mainstream. Any abstract discussion of neutrality 
 that neglects to address this fundamental point misses the mark. Our present 
 approach is not neutral by our own definition of neutrality; it owes more to 
 Internet culture than to the sources we cite.

 Another important point that Thomas made is that any filter set-up should use 
 objective criteria, rather than criteria based on offensiveness. We should 
 not make a value judgment, we should simply offer users the browsing choices 
 they are used to in mainstream sites.  

 Best,
 Andreas
You said that we should learn from Google and other top websites, but at 
the same time you want to introduce objective criteria, which none of 
these websites did? You also compare Wikipedia with an image board like 
4chan? You want the readers to define what they want to see. That means 
they would play the judge, and the majority would win. But this is in 
contrast to the proposal that the filter should work with objective 
criteria.

Could you please crosscheck your own comment and tell me what kind of 
solution you have in mind? Currently it is a mix of very different 
approaches that don't fit together.

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Tobias Oelgarte
On 18.10.2011 23:20, Andreas K. wrote:
 On Tue, Oct 18, 2011 at 8:09 PM, Tobias Oelgarte
 tobias.oelga...@googlemail.com  wrote:

 You said that we should learn from Google and other top websites, but at
 the same time you want to introduce objective criteria, which none of
 these websites did?



 What I mean is that we should not classify media as offensive, but in terms
 such as photographic depictions of real-life sex and masturbation, images
 of Muhammad. If someone feels strongly that they do not want to see these
 by default, they should not have to. In terms of what areas to cover, we can
 look at what people like Google do (e.g. by comparing moderate safe search
 and safe search off results), and at what our readers request.


The problem is that we never asked our readers before the whole thing 
was already running wild. It would really be time to ask how the readers 
feel. That would mean asking readers in very different regions to get a 
good overview of this topic. What Google and other commercial groups do 
shouldn't be a reference for us. They serve their core audience and 
ignore the rest, since their aim is profit, and only profit, no matter 
what good reasons they present. We are quite an exception to them. Not 
in popularity, but in concept. If we return to the example of futanari, 
then we surely agree that quite a lot of people would be surprised, 
especially if safe search is on. But now we have to ask: why is it that 
way? Why does it work so well for other terms more common to a western 
audience?

 You also compare Wikipedia with an image board like
 4chan? You want the readers to define what they want to see. That means
 they would play the judge, and the majority would win. But this is in
 contrast to the proposal that the filter should work with objective
 criteria.



 I do not see this as the majority winning, and a minority losing. I see it
 as everyone winning -- those who do not want to be confronted with whatever
 media don't have to be, and those who want to see them can.

I guess you missed the point that a minority of offended people would 
just be ignored. Looking at the goal and Ting's examples, we would just 
strengthen the current position (western majority and point of view) 
while doing little to nothing in the areas that were the main concern, 
or at least the strong argument for starting the process. If it really 
comes down to the point that a majority does not find Muhammad 
caricatures offensive and it wins, then we have no solution.
 Could you please crosscheck your own comment and tell me what kind of
 solution you have in mind? Currently it is a mix of very different
 approaches that don't fit together.



 My mind is not made up; we are still in a brainstorming phase. Of the
 alternatives presented so far, I like the opt-in version of Neitram's
 proposal best:

 http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming#thumb.2Fhidden

 If something better were proposed, my views might change.


 Best,
 Andreas

I read this proposal and can't see a real difference on second 
thought. It is good that the decision stays related to the 
topic and is not separated from it as in the first proposals. But it 
also leaves a bad taste. We would directly deliver the tags third 
parties (ISPs, local networks, institutions) need to remove content, no 
matter whether the reader chooses to view the image or not, and we would 
still be in charge of declaring what might be or is offensive to others, 
forcing our judgment onto the users of the feature.

Overall it follows a good intention, but I'm very concerned about the 
side effects, which just make me say no way to this proposal as it is.

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-17 Thread Tobias Oelgarte
On 16.10.2011 21:27, ??? wrote:
 On 16/10/2011 19:36, Tobias Oelgarte wrote:
 On 16.10.2011 16:17, ??? wrote:
 On 16/10/2011 14:50, David Gerard wrote:
 On 16 October 2011 14:40, ???wiki-l...@phizz.demon.co.uk wrote:

 Don't be an arsehole you get the same sort of stuff if you search for
 Presumably this is the sort of quality of discourse Sue was
 complaining about from filter advocates: provocateurs lacking in
 empathy.

 Trolling much eh David?


 But thanks for showing once again your incapacity to acknowledge that
 searching for sexual images and seeing such images is somewhat
 different from searching for non-sexual imagery and getting sexual images.

 I have to agree with David. Your behavior is provocative and
 unproductive. I don't feel the need to respond to your arguments at all
 if you write in this tone. You can either apologize for this
 kind of wording, or we are done.


 Now you wouldn't be complaining about seeing content not to your liking,
 would you. What are you going to do, filter out the posts? Bet you're glad
 your email provider added that option for you.

 Yet another censorship hypocrite.
I guess you did not understand my answer. That's why I feel free to 
respond one more time.

I have no problem with any kind of controversial content. Showing the 
process of fisting on the main page? No problem for me. Reading your 
comments? No problem for me. Reading your insults? Also no problem. The 
only thing I did was the following: I told you that I will no longer 
react to your comments if they are worded in the manner they currently 
are.

Figuratively speaking: I feel free to open your book and start to read. 
If it is interesting and constructive, I will continue to read it and I 
will respond to you to share my thoughts. If it is purely meant to 
insult, without any other meaning, then I will get bored and skim the 
lines, reading only half or less. I also have no intention of sharing my 
thoughts with the author of this book. Why? I have nothing to talk 
about. Should I complain about its content? Which content, anyway?

Give it a try. Make constructive arguments and explain your thoughts. 
There is no need for strong wording if the construction of the words 
itself is strong.

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


[Foundation-l] Controversial Content vs Only-Image-Filter

2011-10-16 Thread Tobias Oelgarte
In the last weeks I held myself back and watched the comments in 
multiple places to see the current state of development. First I have 
to point out that I'm very disappointed by the current progress. Sue 
called for a more general discussion. Ting stated again, like in 
Nürnberg, that it is already decided. That is controversial in itself 
and can't lead to a constructive discussion.

That aside, I looked at the various comments and the brainstorming 
pages. It is really boring to look at them, since 99% of the comments 
miss the point. There are a whole lot of comments regarding how the 
image filter should look. Those are all comments/suggestions not 
related to the fundamental questions. They only serve to disrupt the 
thought process, ignoring everything aside from how it should look, and 
even ignoring the basic complaints (non-neutral categorization).

The first question should be: Is controversial content a problem for the 
project?

Some might now say yes or no. But I'm not interested in these 
answers. I'm also not interested in single examples. I'm interested in 
the whole view and in sources that speak in general about this question.

If we come to the conclusion that there is a general (not 
specific) problem, then we can talk about the image filter and whether 
it can be a solution to that problem.

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-16 Thread Tobias Oelgarte
On 16.10.2011 12:53, ??? wrote:
 On 11/10/2011 15:33, Kim Bruning wrote:
  <flame on> Therefore you cannot claim that I am stating nonsense.
  The inverse is true: you do not possess the information to support
  your position, as you now admit. In future, before you set out to
  make claims of bad faith in others, it would be wise to ensure that
  your own information is impeccable first. </flame> sincerely, Kim
  Bruning
 I claim that you are talking total crap. It is not *that* difficult to
 get the
 categories of an image and reject it based on which categories the image
 is in. There are enough people out there busily categorizing all the
 images already that any org that may wish to could block images that
 are in disapproved categories.
I have to throw that kind of wording back at you. It isn't a question of 
difficulty to judge what is offensive and what isn't; it is impossible 
to do this if you want to stay neutral and to respect any, even if only 
any major, opinion out there. Wikipedia and Commons are projects that 
gather knowledge or media. Wikipedia has an editorial system that 
watches over the content to keep it accurate and representative. Commons 
is a media library with a categorization system that guides the reader 
to what he wants to find. The category system in itself is (or should 
be) built upon directional labels. Anything else is contradictory to 
current practice and unacceptable:

* Wikipedia authors do not judge topics. They also do not claim 
for themselves that something is controversial, ugly, bad, ...
* Commons contributors respect these terms as well. They don't judge 
the content. They gather and categorize it. But they will not 
append prejudicial labels.
 The problem, and it is a genuine problem, is that the fucking stupid images
 leak out across commons in unexpected ways. Let's assume that a 6th grade
 class is asked to write a report on Queen Victoria, and a child searches
 commons
 for prince albert:

 http://commons.wikimedia.org/w/index.php?title=Special:Search&search=prince+albert&limit=50&offset=0

 If you are at work you probably do not want to be clicking the above link at all.

Worst case scenarios will always happen. With filter or without filter, 
you will still and always find such examples. They are seldom and might 
happen from time to time, but they aren't the rule. They aren't the 
high-water mark by which you should measure a flood.

To give a simple example of the opposite: enable strict filtering on 
Google and search for images with the term "futanari". Don't say that I 
did not warn you...


A last word: Categorizing content rightfully as good and evil is 
impossible for human beings, which we are. But categorizing content as 
good and evil has always led to destructive consequences when human 
beings are involved, which we are.
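
The technical half of the quoted claim is, unfortunately, true, and that 
is exactly the danger. A sketch of how trivially a third party could 
reject images based on our categories (TypeScript; the API call is the 
public MediaWiki one, the blocklist is a hypothetical example):

// Sketch: look up an image's Commons categories via the public
// MediaWiki API and reject it if any category is on a blocklist.
// The blocklist contents are hypothetical.
async function isBlocked(fileTitle: string, blocklist: Set<string>): Promise<boolean> {
  const url =
    "https://commons.wikimedia.org/w/api.php" +
    "?action=query&prop=categories&cllimit=max&format=json&origin=*" +
    "&titles=" + encodeURIComponent(fileTitle);
  const data: any = await (await fetch(url)).json();
  for (const page of Object.values(data.query.pages) as any[]) {
    for (const cat of page.categories ?? []) {
      if (blocklist.has(cat.title)) return true; // disapproved category found
    }
  }
  return false;
}

// Usage: any proxy or ISP could run exactly this check, e.g.
// isBlocked("File:Example.jpg", new Set(["Category:Nudity"]))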


___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-16 Thread Tobias Oelgarte
On 16.10.2011 16:17, ??? wrote:
 On 16/10/2011 14:50, David Gerard wrote:
 On 16 October 2011 14:40, ???wiki-l...@phizz.demon.co.uk   wrote:

 Don't be an arsehole you get the same sort of stuff if you search for

 Presumably this is the sort of quality of discourse Sue was
 complaining about from filter advocates: provocateurs lacking in
 empathy.


 Trolling much eh David?


 But thanks for showing once again your incapacity to acknowledge that
 searching for sexual images and seeing such images is somewhat
 different from searching for non-sexual imagery and getting sexual images.

I have to agree with David. Your behavior is provocative and 
unproductive. I don't feel the need to respond to your arguments at all 
if you write in this tone. You can either apologize for this 
kind of wording, or we are done.

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-11 Thread Tobias Oelgarte
On 11.10.2011 17:42, Andreas Kolbe wrote:
 From: Faef...@wikimedia.org.uk
 We could also just delete them, unless someone actually uses them in a 
 sensible way in an article. :-)

 sincerely,
 Kim Bruning
 Not on Commons; being objectionable to some viewers and not being
 currently in use does not make a potentially educational image out of
 scope. I have seen many poorly worded deletion requests on Commons on
 the basis of a potentially useable image being orphaned rather than
 it being unrealistic to expect it to ever be used for an educational
 purpose.

 Fae




 Agree with Fae; Commons is a general image repository in its own right, 
 serving a bigger audience than just the other Wikimedia projects.

 So the fact is that Commons will contain controversial images – and that we 
 have to curate them responsibly.

 Someone on Meta has pointed out that Commons seems to list sexual image 
 results for search terms like cucumber, electric toothbrushes or pearl 
 necklace way higher than a corresponding Google search. See 
 http://lists.wikimedia.org/pipermail/commons-l/2011-October/006290.html

 Andreas
This might just be coincidence in special cases. I'm sure if you search 
long enough you will find opposite examples as well. But wouldn't it run 
against the intention of a search engine to rank content down for being 
possibly offensive? If you search for a cucumber you should expect to 
find one. If the description is correct, you should find the most 
suitable images first. But that should be based on the ranking algorithm 
that works on the description, not on the fact that content is/might 
be/could be controversial.

Implementing such a restriction in a search engine (by default) would 
go against every principle and would be discrimination against content. 
We should not do this.

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-09 Thread Tobias Oelgarte
That means it will be pushed through no matter whether it is 
wanted/needed or respects the local communities? I think that will cross 
the line of acceptability.

I also want to remind you that the referendum

1. asked the wrong question(s)
2. did not mention any of the possible issues beforehand (biased 
formulation)
3. left much room for possible implementations

!!! I'M STILL WAITING FOR RESULTS PER PROJECT !!!
I'm very, very disappointed to see that this data is still not released. 
I requested it a dozen times. Every time I was told that it would be 
released later on and that we should stay patient. How many weeks ago 
was this request made? I stopped counting...

Seriously pissed-off greetings from
Tobias Oelgarte / user:niabot

On 09.10.2011 16:12, Ting Chen wrote:
 Hello Tobias,

 the text of the May resolution to this question is ... and that the
 feature be visible, clear and usable on all Wikimedia projects for both
 logged-in and logged-out readers, and at the current board meeting we
 decided not to amend the original resolution.

 Greetings
 Ting

 On 09.10.2011 15:43, church.of.emacs.ml wrote:
 Hi Ting,

 one simple question: Is the Wikimedia Foundation going to enable the
 image filter on _all_ projects, disregarding consensus by local
 communities of rejecting the image filter? (E.g. German Wikipedia)

 We are currently in a very unpleasant situation of uncertainty. Tensions
 in the community are extremely high (too high, if you ask me, but
 Wikimedians are emotional people), speculations and rumors about what
 WMF is going to do prevail.
 A clear statement would help our discussion process.

 Regards,
 Tobias / User:Church of emacs



 ___
 foundation-l mailing list
 foundation-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l



___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Blackout at Italian Wikipedia

2011-10-05 Thread Tobias Oelgarte
On 05.10.2011 10:46, Ray Saintonge wrote:
 On 10/04/11 6:03 AM, Ilario Valdelli wrote:
 The question is that the servers are in the USA, but for the penal law it's
 sufficient to edit from Italy.

 I am in a special situation because I live in Switzerland and I
 publish on USA servers, but for most Italian editors
 the question is not so easy.


 If they are so fearful they can use pseudonyms.  They would then need to
 get a legal order from a US court to identify the users.

 Ray

But what about Italian re-users? If it.wikipedia decides to edit 
anonymously and someone in Italy re-uses their content, then he might be 
in trouble. That means it will end up in additional restrictions, 
hurting the mission of the project, even if the project itself is 
perhaps not directly affected.

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] We need more information (was: Blog from Sue about ...)

2011-09-30 Thread Tobias Oelgarte
On 30.09.2011 17:06, Bishakha Datta wrote:
 ...
 **I am also dismayed at the use of the word 'censorship' in the context of a
 software feature that does not ban or block any images. But somehow there
 doesn't seem to be any other paradigm or language to turn to, and this is
 what is used as default, even though it is not accurate. It's been mentioned
 1127 times in the comments, as per Sue's report to the board, and each time
 it is mentioned, it further perpetuates the belief that this is censorship.
There are two reasons why this word is used.

1. The word is used for actual censorship (restriction of access), and 
it is used in the context of hiding/filtering features. Which of the two 
is really meant is often hard to distinguish.

2. Categorizing content (images, videos, text, events, ...) as 
inappropriate for some (minors, believers, conservatives, liberals, 
extremists, ...) is in turn seen as a censor's tool. That is one of the 
issues with a filter based on categories: it can be exploited by actual 
censors in many different ways. One hard way is to (mis)use the 
categories to restrict access. One soft way would be to influence the 
categorization itself, leaving the reader with the impression that a 
majority shares this view. To understand this issue, you have to think 
about readers who see Wikipedia as a valid source of knowledge. If 
Wikipedia (they don't see or care about the individual decisions; they 
trust us) labels such content as inappropriate (for some), it will 
inevitably lead to the belief that a vast majority sees it the same way, 
which doesn't need to be the case.

Since this risk is real (the Google image filter is already being 
exploited this way), it is also described as censorship. Not a single 
word could be found in the introduction of the referendum that mentioned 
possible issues. That's why many editors think that it was intentionally 
put that way, or that the board/WMF isn't capable of handling this 
situation.

It just left many open questions. For example: What would the WMF do if 
they recognized that the filter, and the good idea behind it, was being 
exploited?

-- Niabot

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Blog from Sue about censorship, editorial judgement, and image filters

2011-09-30 Thread Tobias Oelgarte
On 30.09.2011 17:49, Andreas Kolbe wrote:
 --- On Fri, 30/9/11, Ryan Kaldarirkald...@wikimedia.org  wrote:

 From: Ryan Kaldarirkald...@wikimedia.org
 Subject: Re: [Foundation-l] Blog from Sue about censorship, editorial 
 judgement, and image filters
 To: foundation-l@lists.wikimedia.org
 Date: Friday, 30 September, 2011, 0:28


 On 9/28/11 11:30 PM, David Gerard wrote:
 This post appears mostly to be the tone argument:

 http://geekfeminism.wikia.com/wiki/Tone_argument

 - rather than address those opposed to the WMF (the body perceived to
 be abusing its power), Sue frames their arguments as badly-formed and
 that they should therefore be ignored.
 Well, when every thoughtful comment you have on a topic is met with
 nothing more than chants of WP:NOTCENSORED!, the tone argument seems
 quite valid.

 Ryan Kaldari
 Quite. 
 I have had editors tell me that if there were a freely licensed video of a 
 rape (perhaps a historical one, say), then we would be duty-bound to include 
 it in the article on [[rape]], because Wikipedia is not censored. 
 That if we have a freely licensed video showing a person defecating, it 
 should be included in the article on [[defecation]], because Wikipedia is not 
 censored. 
 That if any of the Iraqi beheading videos are CC-licensed, NOTCENSORED 
 requires us to embed them in the biographies of those who were recently 
 beheaded. 
 That if we have five images of naked women in a bondage article, and none of 
 men having the same bondage technique applied to them, still all the images 
 of naked women have to be kept, because Wikipedia is not censored.
 And so on.
 Andreas

I guess you misunderstood those people. Most likely they meant that 
there should be no rule against such content if it is an appropriate 
illustration of the subject. Would you say the same if this[1] or some 
other documentary film were put under a CC license? Wouldn't it be 
illustrative as well as educational?

[1] http://www.youtube.com/watch?v=EtvuLAZxgOM

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Blog from Sue about censorship, editorial judgement, and image filters

2011-09-30 Thread Tobias Oelgarte
I would prefer to read these comments in context and not in snippets. 
Can you point me to the corresponding discussion(s)?

-- Niabot

On 30.09.2011 19:02, Andreas Kolbe wrote:
 Tobias, you be the judge whether I misunderstood my fellow Wikipedians' 
 comments. Here are some verbatim quotes, from different contributors:

 How exactly would you propose to get an appropriately licensed video of a 
 rape? [...] I suppose, in the unlikely event that we were to get a video that 
 were appropriately licensed, did not raise privacy concerns, and was germane 
 to the subject, we'd use it. Why shouldn't we? The specific role of 
 NOTCENSORED is to say We do not exclude things because people are squeamish 
 about them, and replacing the word censor with editorial judgment is a 
 simple case of euphemism, and does not change what it means. As to the 
 beheading videos, yes, yes, and most certainly yes. We show graphic images of 
 suffering in articles about The Holocaust, even though that may not be the 
 most comfortable thing for some people. Why wouldn't we do so in an article 
 about another horrific act, if the material is under a license we can use it 
 with?
 I would have no issues with videos of animals (including humans) defecating 
 on appropriate articles. I'm sure you were looking for an OMG THAT'S SO 
 GROSS! response, but you won't find it from me.
 [me:] The question is not whether you would be grossed out watching it. The 
 question is, what encyclopedic value would it add? I don't think there is a 
 single human being on the planet who needs to watch a video of a person 
 defecating to understand how defecation works. If that is your real 
 rationale, then why aren't you going to support removal of images from human 
 nose? But your chat about rape and beheading (both subjects for which I'd 
 strongly advocate a video for, if there could be a free, privacy-keeping one) 
 makes me lose WP:AGF a little on this grasping at straws of yours. Let me 
 remember that we, as a culture, had to grow up a lot to accept not being 
 censored. Censoring is the exact opposite of growing up as a culture.
 It sounded to me like they meant it. Doesn't it to you? They were all 
 established users; one of them an admin. I had a long, and perfectly amicable 
 e-mail discussion about it with him afterwards. Their position is entirely 
 logical, but it lacks common sense and, indeed, a little empathy.
 Andreas


 --- On Fri, 30/9/11, Tobias Oelgartetobias.oelga...@googlemail.com  wrote:

 From: Tobias Oelgartetobias.oelga...@googlemail.com
 Subject: Re: [Foundation-l] Blog from Sue about censorship, editorial 
 judgement, and image filters
 To: foundation-l@lists.wikimedia.org
 Date: Friday, 30 September, 2011, 17:06

 On 30.09.2011 17:49, Andreas Kolbe wrote:
 --- On Fri, 30/9/11, Ryan Kaldarirkald...@wikimedia.org   wrote:

 From: Ryan Kaldarirkald...@wikimedia.org
 Subject: Re: [Foundation-l] Blog from Sue about censorship, editorial 
 judgement, and image filters
 To: foundation-l@lists.wikimedia.org
 Date: Friday, 30 September, 2011, 0:28


 On 9/28/11 11:30 PM, David Gerard wrote:
 This post appears mostly to be the tone argument:

 http://geekfeminism.wikia.com/wiki/Tone_argument

 - rather than address those opposed to the WMF (the body perceived to
 be abusing its power), Sue frames their arguments as badly-formed and
 that they should therefore be ignored.
 Well, when every thoughtful comment you have on a topic is met with
 nothing more than chants of WP:NOTCENSORED!, the tone argument seems
 quite valid.

 Ryan Kaldari
 Quite.
 I have had editors tell me that if there were a freely licensed video of a 
 rape (perhaps a historical one, say), then we would be duty-bound to include 
 it in the article on [[rape]], because Wikipedia is not censored.
 That if we have a freely licensed video showing a person defecating, it 
 should be included in the article on [[defecation]], because Wikipedia is 
 not censored.
 That if any of the Iraqi beheading videos are CC-licensed, NOTCENSORED 
 requires us to embed them in the biographies of those who were recently 
 beheaded.
 That if we have five images of naked women in a bondage article, and none of 
 men having the same bondage technique applied to them, still all the images 
 of naked women have to be kept, because Wikipedia is not censored.
 And so on.
 Andreas

 I guess you misunderstood those people. Most likely they meant that
 there should be no rule against such content if it is an appropriate
 illustration of the subject. Would you say the same if this[1] or some
 other documentary film were put under a CC license? Wouldn't it be
 illustrative as well as educational?

 [1] http://www.youtube.com/watch?v=EtvuLAZxgOM

 ___
 foundation-l mailing list
 foundation-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l

Re: [Foundation-l] Blog from Sue about censorship, editorial judgement, and image filters

2011-09-29 Thread Tobias Oelgarte
On 29.09.2011 17:00, Nathan wrote:
 On Thu, Sep 29, 2011 at 2:45 AM, David Gerarddger...@gmail.com  wrote:

 The complete absence of mentioning the de:wp poll that was 85% against
 any imposed filter is just *weird*. Not mentioning it, and not
 acknowledging why someone would do that, doesn't make it go away.

 As you say, this blog post reads like someone forced to defend the
 indefensible, hence the glaringly defective arguments. This will
 convince no-one the post claims to be addressing.


 - d.


 It makes some sense. If you come to the conclusion that your
 constituency for a particularly important decision is a huge and
 diverse array of people (i.e. the readers), and then further conclude
 that opposition to your decision is coming from a very narrow and
 homogenous slice of that array (i.e. contributors)... Ignoring the
 opposition in favor of the larger audience could then be quite
 reasonable.

 Nathan

If it were the case that this is a small minority, then I could 
agree and accept that as consensus, even if reasonable arguments were 
ignored. But what the post does is very simple. It describes 
liberal-thinking people as a minority - as an extremist minority - that 
does not care about the readers or the project. That isn't any better 
than the "we are not censored, we can do it" argument. It's the plain 
opposite, but not better or worse. It's the tale about others who might 
be offended.

What we really need is the discussion of whether an image is 
illustrative for the topic. We want to spread knowledge. This means:
a) not leaving out illustrative material because it is offensive, and
b) not including offensive material if something else has the same 
illustrative value.

The image filter, as a tool, is meant to circumvent this question and 
its answers. Instead of trying to improve the content or to provide 
better alternatives, it just amounts to saying: we don't care, you have 
to choose - ignoring all possible negative side effects.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Image filter

2011-09-24 Thread Tobias Oelgarte
On 24.09.2011 23:40,  wrote:
 On 23/09/2011 17:46, Kim Bruning wrote:
 On Sat, Sep 24, 2011 at 02:43:14AM +1000, Stephen Bain wrote:
 On Fri, Sep 23, 2011 at 11:17 PM, Kim Bruningk...@bruning.xs4all.nl   
 wrote:
 The survey was not a poll or referendum, and did not address the
 fundamental question of whether this feature is wanted.

 The only actual poll I am aware of which asked this question was on
 de.wikipedia.
 My point is that the dewiki poll being worded in a manner that is
 pleasing to people who have critiqued the Foundation-wide survey does
 not render it representative, when it was participated in by at most
 one eightieth of the members of the community whom we know to have an
 opinion on the matter.
 The de poll -however deficient you might consider it- is the only poll
 we have held on the question of whether an implementation will be
 accepted. (In this case, for the de community)

 The last I heard the German people, as expressed through their
 lawmakers, DO NOT want their kids looking at porn or images that are
 excessively violent. They go so far as periodically getting Google to
 filter the search results for Germans.

Where did you hear that? Are there some good sources on this topic that 
we could read?


___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Image filter

2011-09-24 Thread Tobias Oelgarte
On 25.09.2011 00:15,  wrote:
 On 24/09/2011 22:46, David Gerard wrote:
 On 24 September 2011 22:40, wiki-l...@phizz.demon.co.uk   wrote:

 The last I heard the German people, as expressed through their
 lawmakers, DO NOT want their kids looking at porn or images that are
 excessively violent. They go so far as periodically getting Google to
 filter the search results for Germans.

 Analogously, tell me about your personal endorsement of the Digital
 Economy Act and justify each provision.

 Last I heard in the real world Germans did not want their kids looking at
 images of porn or excessive violence online. That sites that were
 targeted at Germans required age filters, that Google was frequently
 asked to remove pages from their index, and that ISPs were instructed to
 disallow access to such sites.

 Under such circumstances the opinions of 300 self selecting Germans is
 unlikely to be indicative of German opinion.
Please provide some valid, notable sources for such claims. Otherwise I 
find them hard to believe.


___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter

2011-09-24 Thread Tobias Oelgarte
On 25.09.2011 00:43, David Gerard wrote:
 On 24 September 2011 23:00, Phil Nashphn...@blueyonder.co.uk  wrote:

 The IWF just did not understand how access to Wikipedia works; a strange
 situation, given their mission. And it wasn't helped by their publicity at
 the time, IIRC. Fortunately, they seem to have shut up since then, and
 possibly got their act together in targeting stuff that really does need
 action by law enforcement. However, if that is the case, I would have
 expected them to have shouted it from the rooftops, but I haven't seen it.

 The IWF situation is analogous to the present one. The government of
 the day called for something to be done! (the requirement spec for
 magical flying unicorn ponies), and the ISPs nodded and smiled and set
 up something called the IWF that pretended to supply magical flying
 unicorn ponies. And everyone was happy.

 That is, the IWF is the sort of organisation that can exist only as
 long as it doesn't affect people's lives, e.g. doesn't hit the
 headlines.

 What happened then was that the IWF had a rush of blood to the head,
 misunderstood their purpose (to pretend to do something impossible)
 and thought they needed to actually do something to implement the
 magical flying unicorn ponies requirement. So they blocked Wikipedia.
 And everyone noticed. And now they are well-known and are widely
 regarded as dangerous cretins. A cautionary tale for those hoping to
 implement filters.


 - d.

That's what makes the filter even worse. Which ISP would think about 
blocking Wikipedia entirely? Many would not take that risk. They learned 
from these stories. But what if they use the filter (categories) to 
block parts of Wikipedia because an IWF-like group puts pressure on 
them? That is more likely to succeed, especially in regions that aren't 
as developed as the so-called western countries.

I don't fear that this could happen in Germany, especially after the 
Pirate Party surprisingly got 8.9% in the last election. But I fear for 
the users in countries like Turkey, Egypt, Taiwan or parts of the 
African Union, especially with the mission in mind to reach these people.

That the filter, as proposed, would not satisfy potential censors should 
be clear. But it provides a tool to quietly censor content with our help 
and effort. A double-edged sword.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Image filter

2011-09-24 Thread Tobias Oelgarte
On 25.09.2011 01:10, Jussi-Ville Heiskanen wrote:
 On Sun, Sep 25, 2011 at 1:39 AM, Phil Nashphn...@blueyonder.co.uk  wrote:
  wrote:
 On 24/09/2011 22:46, David Gerard wrote:
 On 24 September 2011 22:40, wiki-l...@phizz.demon.co.ukwrote:

 The last I heard the German people, as expressed through their
 lawmakers, DO NOT want their kids looking at porn or images that are
 excessively violent. They go so far as periodically getting Google
 to filter the search results for Germans.

 Analogously, tell me about your personal endorsement of the Digital
 Economy Act and justify each provision.

 Last I heard in the real world Germans did not want their kids
 looking at images of porn or excessive violence online. That sites
 that were targeted at Germans required age filters, that Google was
 frequently asked to remove pages from their index, and that ISPs were
 instructed to disallow access to such sites.

 Under such circumstances the opinions of 300 self selecting Germans is
 unlikely to be indicative of German opinion.
 Unless I've missed something of importance, the stance of parents in Germany
 is little different from those in any other country. The USA and UK have
 both tried, and failed, to impose such censorship, even through licensing or
 grading schemes; but the bottom line is that the internet doesn't work that
 way, and in my experience there is no common denominator jurisdiction that
 has the will or the power to impose any restrictions on a global medium.

 Local jurisdictions may attempt to do so, but experience over the last
 thirty years tends to suggest that such restrictions are easily
 circumvented. That's why TOR, to name only one, exists.

 Optimistically, global censorship is just not going to happen.

 Personally my understanding of the German position on censorship
 is that it shouldn't happen, pretty much like in Finland, Sweden,
 Norway, France and the Netherlands. Can't really speak for Austria,
 Belgium, Switcherland or the staunchly mediterranean european
 countries (suspect the mediterraneans are heavily beset by
 cognitive dissonance -- think of the children but when one like
 Berlusconi thinks of the children the wrong way and gets caught,
 it is all just a political witch-hunt; and when it is Carnivale, anything
 goes, it is just a little bit of fun, plenty of time to be offended when
 Carnivale is over.rolls eyes  )
Censorship as such is forbidden by the German constitution, with 
extra rights allowing open (even violent) protest if the constitution is 
in danger of being ignored or abolished. That goes for many other 
European countries as well.

You will really have a hard time offending European people with sexual 
or violent images, especially when they are used in an educational 
context. Just go into a supermarket and you will inevitably stop at the 
checkout counter and look at bare breasts on the title page of the BILD 
newspaper.[1]

The same picture applies to the other countries as well. That Italy 
holds such a grudge against Berlusconi is not based on his 
bunga-bunga parties alone. It's basically about the money he and his 
political party waste while the country itself has its problems. Of 
course it is a political witch-hunt.

If you speak about Carnivale (a mostly German tradition), then it is 
just the combination of satire and a party. When the party stops, satire 
is still a daily element. You will find it on the second page of 
newspapers (mostly about politics), in the daily TV shows or at the 
local theater. If politicians, minorities or majorities were easy to 
offend, then it would really be a big show.

[1] http://en.wikipedia.org/wiki/Bild - The article has a good example 
of what it looks like.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter

2011-09-23 Thread Tobias Oelgarte
Yes, we are aware of such pages. Just search for "google safe version" 
and so on. At first you will find plugins from Google itself for 
browsers that can be used to enable the filter as a default option. If 
you scroll down a bit, you will find other pages that use Google to 
perform so-called safe searches.[1] There is room for such tools.[2] 
Google limited this somewhat by providing the feature through browser 
plugins itself. But you still find many examples of such pages.[3]

There is already a market for such tools. First, someone could check 
them out to see whether we really need to do categorization or whether 
this software is already good enough. Secondly, it's nearly proven that 
we would be making an addition to that market.

[1] For example:
http://www.uk.safesearchlive.com/
http://www.safesearchkids.com/wikipedia-for-kids.html
(Interestingly, it does safe search for Wikipedia through Google's image 
categorization)
[2] 
https://addons.mozilla.org/de/firefox/addon/linkextend-safety-kidsafe-site/versions/ 
Plugin for Firefox that removes even the buttons to disable safe 
search from Google pages.
[3] Much anti-virus software includes Google's safe search 
functionality: http://forum.kaspersky.com/lofiversion/index.php/t145285.html
...
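
The principle behind all of these wrapper pages is trivial. A sketch of 
the core trick (TypeScript; the "safe" parameter reflects Google's 
documented safe-search switch, but treat the exact value as an 
assumption):

// Sketch of what "safe search" wrapper sites do: rewrite the query
// URL so the filter parameter is always pinned on, regardless of
// what the user chose. The parameter value is an assumption.
function safeSearchUrl(query: string): string {
  const u = new URL("https://www.google.com/search");
  u.searchParams.set("q", query);
  u.searchParams.set("safe", "active"); // force strict filtering
  return u.toString();
}

console.log(safeSearchUrl("futanari")); // filtered no matter what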


On 23.09.2011 02:46, Andreas Kolbe wrote:
 Are you aware of any providers that use other sites' category systems in 
 that way? E.g. to disable Google searches with safe search off for all of 
 their subscribers, disable access to adult Flickr material, etc.?


 On 23.09.2011 01:21, Andreas Kolbe wrote:
 And where would the problem be? If a user prefers to go to a Bowdlerised 
 site like that,
 rather than wikipedia.org, where they will see the pictures unless they 
 specifically ask not
 to see them, then that is their choice, and no skin off our noses.
 A.

 The problem would be simple. The people that depend on one provider
 for internet access would have no other choice than to use a censored
 version. They type en.wikipedia.org, the local proxy redirects them
 to filterpedia.org, which provides only the content that is not in one
 of the pre-chosen categories.

 It's as simple as that. They don't choose to use that site, but they will be
 forced to. *We* would make that possible.

 ___
 foundation-l mailing list
 foundation-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
 ___
 foundation-l mailing list
 foundation-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l



___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter

2011-09-23 Thread Tobias Oelgarte
On 23.09.2011 10:27, Fae wrote:
 How odd, checking Tobias' list, I tried
 http://www.safesearchkids.com/wikipedia-for-kids.html to look for
 penis and it recommended [[File:Male erect penis.jpg]] as the second
 match. I was expecting it to restrict me to the more rounded and
 educational encyclopaedia entries, not straight to the most
 challenging images without context.

 If the WMF were to recommend such a solution for schools or
 religious groups, we might run into some immediate complaints.

 Cheers,
 Fae
I did not say that the Google filter is perfect. Additionally, it could 
be that Google does not see that image as offensive or never reviewed 
it. I was talking about the ease with which such tools can be built and 
the fact that there is actually a market for such tools.

If we implement a filter with categories, then we would provide the same 
kind of data source as Google does. But we should keep in mind: the 
better/stricter we are, the better/stricter third-party filters will be. 
We would be investing to provide them the tools and data they need.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter

2011-09-23 Thread Tobias Oelgarte
I gave you a simple example of how easy it would be to use our 
categorization to implement a filter based upon those categories.

Sources showing that this actually happens are not rare if we look at 
China or Iran. The problem is the many local providers about which you 
will seldom find a report. Many third-world Internet users are bound to 
a single local provider, or their access depends on an organization.

You said that we have to consider the point that Wikipedia might be 
blocked entirely if we don't have such a feature.

This argument is weakened by the fact that the filter (as intended) can 
just be ignored by the user. This raises the doubt that the feature 
would be strong enough for censors' needs and therefore might not be a 
reason against blocking Wikipedia completely.

But let's also think the other way around. Many of these potential 
censors aren't blocking Wikipedia entirely, since this would most likely 
result in pressure against the decision to take Wikipedia down. 
Blocking only selected content is the way censors prefer. It is done in 
a much greater number of countries, for example even in Taiwan or South 
Korea.

If we provide the categories, then this is exactly one of the things 
that could be used to extend censorship without the pressure of taking 
down Wikipedia entirely. It is much more acceptable. An option that is 
not present at the moment.

To be fair: we have no numbers on that. It is speculation, and it might 
go one way or the other. But should we take that risk?

Currently we are promoting free access to information and knowledge. If 
a filter like this has a 50:50 chance of improving or worsening things, 
then we might raise the question: Is it worth the effort, or should we 
search for better solutions?

Greetings Tobias

On 23.09.2011 12:38, Andreas Kolbe wrote:
 Tobias,
 That is not quite what I thought we were talking about, because these are 
 set-ups made on an individual computer, rather than restrictions at the 
 internet service provider level.
 For example, I would not have a problem with it if schools figured out a way 
 to prevent access to controversial images on school computers. I might have a 
 problem with it if no one in an entire country were able to view these 
 images; hence my question. I thought that was what you were talking about. 
 If there are countries/Internet service providers that restrict all of their 
 citizens from accessing porn sites, searching for adult images on Flickr, or 
 prevent them from performing Google searches with safe search switched off, 
 then it would be reasonable to assume that they might make an effort to do 
 the same for Wikipedia.
 There was a similar situation in Germany, when Flickr prevented all German 
 users with a yahoo.de address from accessing adult Flickr material, because 
 Germany has unusually strict youth protection and age verification laws. 
 http://en.wikipedia.org/wiki/Flickr#Controversy
 However, that was done by the company itself, because they wanted to avoid 
 legal liability in Germany, and not by German Internet service providers. 
 People in Germany with a yahoo.com (rather than yahoo.de) e-mail address were 
 still perfectly able to access adult Flickr material from within Germany, 
 using German internet service providers.

 I believe Saudi Arabia has sporadically blocked access to Wikipedia, and 
 blocks access to porn sites at the Internet service provider level: 
 http://en.wikipedia.org/wiki/Censorship_in_Saudi_Arabia
 http://www.andrewlih.com/blog/2006/07/27/wikipedia-blocked-in-saudi-arabia/

 Wikipedia was also briefly blocked in Pakistan, because of the Mohammed 
 cartoon controversy. So there might be a scenario where countries like Saudi 
 Arabia and Pakistan figure out how to block access to adult images and images 
 of Mohammed on Wikipedia permanently, using methods like the ones you 
 describe, based on the personal image filter categories. 
 That might be a concern worth talking about. Of course, it has to be balanced 
 against the concern that these countries can block Wikipedia altogether.

 Regards,
 Andreas


 --- On Fri, 23/9/11, Tobias Oelgartetobias.oelga...@googlemail.com  wrote:

 From: Tobias Oelgartetobias.oelga...@googlemail.com
 Subject: Re: [Foundation-l] Possible solution for image filter
 To: foundation-l@lists.wikimedia.org
 Date: Friday, 23 September, 2011, 8:33

 Yes, we are aware of such pages. Just search for "google safe version"
 and so on. At first you will find plugins from Google for browsers
 themselves that can be used to enable the filter as a default option. If
 you scroll down a bit, you will find other pages that use
 Google to perform so-called safe searches.[1] There is room for such
 tools.[2] Google limited this somewhat by providing the feature through
 browser plugins itself, but you can still find many examples of such pages.[3]

 There is already a market for such tools. First someone could check them
 out to see if we really need to do 

Re: [Foundation-l] Image filter

2011-09-23 Thread Tobias Oelgarte
Am 23.09.2011 14:03, schrieb m...@marcusbuck.org:
 After some thinking I come to the conclusion that this whole
 discussion is a social phenomenon.

 You probably know how some topics, when mentioned in newspaper articles
 or blogs, spur wild arguments in the comments sections. When the
 article mentions climate change, commentators contest the validity of
 the collected data; if it mentions religion, commentators argue that
 religion is the root of all evil in the world; if it is about
 immigration, commentators start to rant about how immigrants cause trouble in
 society; if it is about renewable energies, commentators tell us how
 blind society is to believe in their ecological benefits.

 It's always the same pattern: the topic is perceived well in
 general society (most sane people think that climate change is real,
 that renewable energies are the way to go, that religious freedom is
 good and that most immigrants are people like everybody else who do no
 harm), but a small or not so small minority experiences these
 attitudes as a problem and tries to raise awareness of the problems of
 the trend (usually exaggerating them). The sceptics give their
 arguments and the non-sceptics answer them.

 The non-sceptics usually don't have much motivation to present their
 arguments (their position is already the mainstream, so there is not
 much incentive to convince more people, just the wish not to let the
 sceptics' opinions stand uncontested), while the sceptics have
 much motivation to present their arguments (if they don't, society will
 presumably face perdition). This difference in motivation leads to
 a situation where both groups produce a similar content output, creating
 the semblance that both groups represent equal shares of society.

 I think the same is happening here. The majority of people probably
 think that an optional opt-in filter is a thing that does no harm to
 non-users and has advantages for those who choose to use it. (Ask your
 grandma whether "You can hide pictures if you don't want to see them"
 sounds like a threatening thing to her.) But the sceptics voice
 their opinions loudly and point out every single imaginable problem.

 I just want to point out that an idea like a free, community-driven,
 everybody-can-edit-it encyclopedia with no editorial or peer-review
 process would never have been created if a long discussion had
 preceded its creation. The sceptics would have raised so many
 seemingly valid concerns that they'd have buried the idea deep. I feel
 that a group of worst-case scenarioists is leading the discussion to
 a point where the image filter is buried just because everybody is
 bored with the discussion.

 Marcus Buck
 User:Slomox

 PS: Please don't understand this as a longish version of "You guys
 opposing my opinion are trolls!". I don't think that the points raised
 by sceptics should be neglected. But I think that many people
 reject the image filter because of very theoretical concerns,
 completely removed from pragmatic reasons, and that the
 length of the discussion is in no way indicative of how problematic
 the topic really is.



 ___
 foundation-l mailing list
 foundation-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l

I agree with that. But I also have to mention that we see the same 
repeating patterns in the claims that we need a filter because 
there is a huge mass of users demanding it. Actually, I don't see this 
mass of users in any of the samples I have taken over time. Even if we 
assume that there are many more complaints than are actually 
written down on the discussion pages, the rate is still below 1%. That 
makes me think that the arguments for the introduction of a filter are 
based on a loud minority view.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Image filter

2011-09-23 Thread Tobias Oelgarte
Please don't use the rhetorical trick of claiming that a mass of users 
supports some point of view without actual proof. ("You've just posted what 
many of us think and feel.")

The chat was of course dominated by the word "German". It's the one and 
only poll that states the opposite of the view of the board. But you 
could just leave out the comments from Ottava and the word would appear 
half as often.

The main problems/questions remain:
* Is the filter any good?
* Is there a big audience that would enjoy and need a filter?
* How do we decide what will be hidden, considering NPOV?
* ...

None of these questions were pursued before the decision. Actually, the 
questions were only raised after the decision, in combination with the 
referendum. That's one of the things I really wonder about.



Am 23.09.2011 14:19, schrieb Sarah Stierch:
 +1

 You've just posted what many of us think and feel. I read the transcript for 
 office hours with Sue from yesterday and it was the same thing: 45 minutes of 
 image filter skepticism and more. I'm glad I couldn't attend it; it seemed like 
 a painful and unintellectual experience to sit through.

 And if I had a dollar for every mention of "Germans" I'd be rich. And here 
 people are arguing about lack of coverage of other projects and languages. 
 So tired of the Us vs. Them mentality.

 I'd rather talk about GMOs, JFK, Creationism and the end of the world next 
 year... at this point.

 Sarah Stierch
 Who is never bored and is surely not mainstream, but is happy to be called so 
 right now.


 Sent via iPhone - I apologize in advance for my shortness or errors! :)


 On Sep 23, 2011, at 8:03 AM, m...@marcusbuck.org wrote:

 After some thinking I come to the conclusion that this whole
 discussion is a social phenomenon.

 You probably know how some topics, when mentioned in newspaper articles
 or blogs, spur wild arguments in the comments sections. When the
 article mentions climate change, commentators contest the validity of
 the collected data; if it mentions religion, commentators argue that
 religion is the root of all evil in the world; if it is about
 immigration, commentators start to rant about how immigrants cause trouble in
 society; if it is about renewable energies, commentators tell us how
 blind society is to believe in their ecological benefits.

 It's always the same pattern: the topic is perceived well in
 general society (most sane people think that climate change is real,
 that renewable energies are the way to go, that religious freedom is
 good and that most immigrants are people like everybody else who do no
 harm), but a small or not so small minority experiences these
 attitudes as a problem and tries to raise awareness of the problems of
 the trend (usually exaggerating them). The sceptics give their
 arguments and the non-sceptics answer them.

 The non-sceptics usually don't have much motivation to present their
 arguments (their position is already the mainstream, so there is not
 much incentive to convince more people, just the wish not to let the
 sceptics' opinions stand uncontested), while the sceptics have
 much motivation to present their arguments (if they don't, society will
 presumably face perdition). This difference in motivation leads to
 a situation where both groups produce a similar content output, creating
 the semblance that both groups represent equal shares of society.

 I think the same is happening here. The majority of people probably
 think that an optional opt-in filter is a thing that does no harm to
 non-users and has advantages for those who choose to use it. (Ask your
 grandma whether "You can hide pictures if you don't want to see them"
 sounds like a threatening thing to her.) But the sceptics voice
 their opinions loudly and point out every single imaginable problem.

 I just want to point out that an idea like a free, community-driven,
 everybody-can-edit-it encyclopedia with no editorial or peer-review
 process would never have been created if a long discussion had
 preceded its creation. The sceptics would have raised so many
 seemingly valid concerns that they'd have buried the idea deep. I feel
 that a group of worst-case scenarioists is leading the discussion to
 a point where the image filter is buried just because everybody is
 bored with the discussion.

 Marcus Buck
 User:Slomox

 PS: Please don't understand this as a longish version of "You guys
 opposing my opinion are trolls!". I don't think that the points raised
 by sceptics should be neglected. But I think that many people
 reject the image filter because of very theoretical concerns,
 completely removed from pragmatic reasons, and that the
 length of the discussion is in no way indicative of how problematic
 the topic really is.



 ___
 foundation-l mailing list
 foundation-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
 

Re: [Foundation-l] Possible solution for image filter

2011-09-23 Thread Tobias Oelgarte
You may need to add additional points:

5. A country or ISP does not unblock Wikipedia because it doesn't consider 
the filter a usable alternative to a full block, even if it could filter 
the images based on the filter categories. (The block already works, why 
step down...)

6. A country or ISP that only hides certain topics/articles could decide 
to also hide the images marked by the filter.

Am 23.09.2011 14:38, schrieb Andreas Kolbe:
 As I see it, if the personal image filter categories can be exploited by 
 censors to restrict image access permanently and irrevocably, this could 
 result in the following scenarios:
 1. A country or ISP that currently does not censor access to Wikipedia 
 switches to access without the categorised images, removing choice from users 
 (net loss for free access to information; this might extend even to basic 
 anatomical images of vulvas, penises etc.).
 2. A country or ISP that currently blocks access to Wikipedia completely 
 makes Wikipedia available again, but without access to the images covered by 
 the personal image filter categories (net gain for free access to 
 information).
 3. A country or ISP that currently blocks access to all Wikimedia images 
 restores access to all images outside the personal image filter categories 
 (net gain for free access to information, but it would be useful to have 
 confirmation as to how many ISPs currently block all Wikimedia images -- at 
 the moment we only have an unsourced statement in 
 http://en.wikipedia.org/w/index.php?title=List_of_websites_blocked_in_the_People%27s_Republic_of_China&oldid=451338781#Wikipedia
  claiming that some Chinese ISPs do this). 
 4. A country or ISP that currently blocks access to Wikipedia completely, or 
 currently blocks access to Wikimedia images globally, restores access, using 
 the personal image filter as designed, i.e. leaving it at the user's 
 discretion (net gain for free access to information, but I agree with you 
 that this scenario is rather unlikely).
 We clearly should not assume that these net gains or net losses are all equal 
 in magnitude, or that all these scenarios would be equally likely. 
 We should also remember that this only addresses the consequences of 
 countries or providers using the personal image filter categories in the way 
 that you have warned would be possible, i.e. for complete censorship of these 
 images. 
 Such use of the categories for outright censorship is an important part of 
 the picture, but it's not the whole picture, as there is also the perceived 
 benefit of the personal image filter when it works as designed (i.e. giving 
 the user a choice they don't have right now). 

 Still, these are important matters to think about. I like the personal image 
 filter idea as designed, but I'd be uncomfortable with 50 countries, say, 
 using the opportunity to implement scenario 1.

 Andreas

 --- On Fri, 23/9/11, Tobias Oelgartetobias.oelga...@googlemail.com  wrote:

 From: Tobias Oelgartetobias.oelga...@googlemail.com
 Subject: Re: [Foundation-l] Possible solution for image filter
 To: foundation-l@lists.wikimedia.org
 Date: Friday, 23 September, 2011, 12:03

 I gave you a simple example of how easy it would be to use our
 categorization to implement a filter based upon those categories.

 Sources showing that this actually happens are not rare if we look at
 China or Iran. The problem is the many local providers about which you will
 seldom find a report. Many third-world Internet users are bound to a
 single local provider, or their access depends on an organization.

 You said that we have to consider the point that Wikipedia might be
 blocked entirely if we don't have such a feature.

 This argument is weakened by the fact that the filter (as intended) can
 just be ignored by the user. This raises the doubt that the feature would be
 strong enough for censors' needs and therefore might not be a reason
 against blocking Wikipedia completely.

 But let's also think the other way around. Many of these potential
 censors aren't blocking Wikipedia entirely, since doing so would most likely
 result in pressure against the decision to take Wikipedia down.
 Blocking only selected content is the way censors prefer. It is done in
 a much greater number of countries, for example even in Taiwan or South
 Korea.

 If we provide the categories, then this is exactly one of the things that
 could be used to extend censorship without the pressure of taking down
 Wikipedia entirely. It is much more acceptable, an option that is not
 present at the moment.

 To be fair: we have no numbers on that. It is speculation, and it might
 go one way or the other. But should we take that risk?

 Currently we are promoting free access to information and knowledge. If
 a filter like this has a 50:50 chance of improving or worsening things, then
 we might raise the question: is it worth the effort, or should we search
 for better solutions?

 Greetings Tobias

 Am 23.09.2011 12:38, schrieb Andreas Kolbe:
 Tobias,
 That 

Re: [Foundation-l] Image filter

2011-09-23 Thread Tobias Oelgarte
 of 
football. ;-)
 -Sarah (Missvain, SarahStierch)
 Who would move to Berlin in a heartbeat to be an unpaid intern for
 Einstürzende Neubauten. So don't think I don't love my Germans ;-) (and
 Bayern Munich is my favorite team!)

 On Fri, Sep 23, 2011 at 8:41 AM, Tobias Oelgarte
 tobias.oelga...@googlemail.com  wrote:

 Please don't do the rhetorical trick that a mass of users would support
 some point of view without actual proof. (You've just posted what many
 of us think and feel.)

 The chat was of course dominated by the word German. It's the one and
 only poll that states the opposite to the view of the board. But you
 could just leave out the comments from Ottava and it would be the half
 amount of use of this word.

 The main problems/questions remain:
 * Is the filter any good?
 * Is there a big audience that would enjoy and need a filter?
 * How do we decide what will be hidden considering NPOV?
 * ...

 None of this questions where followed before the decision. Actually the
 questions where raised after the decisions in combination with the
 referendum. Thats one of things i really wonder about.



 Am 23.09.2011 14:19, schrieb Sarah Stierch:
 +1

 You've just posted what many of us think and feel. I read the transcript
 for office hours with Sue from yesterday and it was the same thing: 45
 minutes of image filter skepticism and more. I'm glad I couldn't attend it;
 it seemed like a painful and unintellectual experience to sit through.
 And if I had a dollar for every mention of "Germans" I'd be rich. And
 here people are arguing about lack of coverage of other projects and
 languages. So tired of the Us vs. Them mentality.
 I'd rather talk about GMOs, JFK, Creationism and the end of the world
 next year... at this point.
 Sarah Stierch
 Who is never bored and is surely not mainstream, but is happy to be
 called so right now.

 Sent via iPhone - I apologize in advance for my shortness or errors! :)


 On Sep 23, 2011, at 8:03 AM, m...@marcusbuck.org wrote:

 After some thinking I come to the conclusion that this whole
 discussion is a social phenomenon.

 You probably know how some topics, when mentioned in newspaper articles
 or blogs, spur wild arguments in the comments sections. When the
 article mentions climate change, commentators contest the validity of
 the collected data; if it mentions religion, commentators argue that
 religion is the root of all evil in the world; if it is about
 immigration, commentators start to rant about how immigrants cause trouble in
 society; if it is about renewable energies, commentators tell us how
 blind society is to believe in their ecological benefits.

 It's always the same pattern: the topic is perceived well in
 general society (most sane people think that climate change is real,
 that renewable energies are the way to go, that religious freedom is
 good and that most immigrants are people like everybody else who do no
 harm), but a small or not so small minority experiences these
 attitudes as a problem and tries to raise awareness of the problems of
 the trend (usually exaggerating them). The sceptics give their
 arguments and the non-sceptics answer them.

 The non-sceptics usually don't have much motivation to present their
 arguments (their position is already the mainstream, so there is not
 much incentive to convince more people, just the wish not to let the
 sceptics' opinions stand uncontested), while the sceptics have
 much motivation to present their arguments (if they don't, society will
 presumably face perdition). This difference in motivation leads to
 a situation where both groups produce a similar content output, creating
 the semblance that both groups represent equal shares of society.

 I think the same is happening here. The majority of people probably
 think that an optional opt-in filter is a thing that does no harm to
 non-users and has advantages for those who choose to use it. (Ask your
 grandma whether "You can hide pictures if you don't want to see them"
 sounds like a threatening thing to her.) But the sceptics voice
 their opinions loudly and point out every single imaginable problem.

 I just want to point out that an idea like a free, community-driven,
 everybody-can-edit-it encyclopedia with no editorial or peer-review
 process would never have been created if a long discussion had
 preceded its creation. The sceptics would have raised so many
 seemingly valid concerns that they'd have buried the idea deep. I feel
 that a group of worst-case scenarioists is leading the discussion to
 a point where the image filter is buried just because everybody is
 bored with the discussion.

 Marcus Buck
 User:Slomox

 PS: Please don't understand this as a longish version of "You guys
 opposing my opinion are trolls!". I don't think that the points raised
 by sceptics should be neglected. But I think that many people
 reject the image filter because of very theoretical concerns for the
 sake of it, completely removed from

Re: [Foundation-l] Larry Sanger tweets about 13 yo in Wikiproject Pornography

2011-09-23 Thread Tobias Oelgarte
Am 23.09.2011 19:26, schrieb Kim Bruning:
 Dear Press: a self-described 13 YO joined Wikiproject Pornography. 
 Wikipedians support him. webcitation.org/61v0ykxJe
 webcitation.org/61v1FfW3K
   - http://twitter.com/#!/lsanger/status/117299089439334400


 The on-wiki argument is that there are many areas in that project that don't 
 actually involve nudie pics, but rather cover
 areas of law, etc.scratches head

 sincerely,
   Kim Bruning

That's what makes Twitter so wonderful: one short, provocative headline and no 
background knowledge at all. Just another bad attempt to attack 
Wikipedia. I'm sure everyone knows who Larry Sanger is and what comes 
out of his mouth. Best advice: just ignore it.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter - magical flying unicorn pony that s***s rainbows

2011-09-22 Thread Tobias Oelgarte
Am 22.09.2011 05:15, schrieb Bjoern Hoehrmann:
 * David Gerard wrote:
 233 would be a *large* turnout on en:wp. What is a large turnout on de:wp?
 Most Meinungsbilder have between 100 and 300 editors participating and
 the 300s are seen regularily. Participation maxes out at around 500 so
 large probably begins somewhere in the 300s. This largely matches the
 number of participants in admin elections, to offer a comparison.
You should take into account that these are open polls. One issue with 
open polls is participation. If a poll is on the edge (a 50:50 situation), 
you will always get many more votes than in a poll that already looks 
decided after a few days. That's why polls that go strongly in 
one direction usually have fewer participants.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter - magical flying unicorn pony that s***s rainbows

2011-09-22 Thread Tobias Oelgarte
Am 22.09.2011 08:07, schrieb Kanzlei:
 Am 21.09.2011 um 22:37 schrieb David Gerarddger...@gmail.com:

 On 21 September 2011 21:20, Kanzleikanz...@f-t-hofmann.de  wrote:

 This poll was not representative for wikipedia readers, but only for some 
 German wikipedia editors.  Scientifically research found that Germa editors 
 are not representative for German speaking people but far more 
 environmetal-liberal-leftists than avarage Germans. The poll was even not 
 representative for German editors because only a few voted.

 233 would be a *large* turnout on en:wp. What is a large turnout on de:wp?

 Your arguments look to me like fully-general counterarguments against
 *any* on-Wikipedia poll whatsoever, no matter the structure or
 subject. What would you accept as a measure of the de:wp community
 that would actually be feasible to conduct?
 233 is a large number for a poll on de:wp. But it was no democratic poll, 
 because the manner in which the poll was conducted was not democratic. A 
 democratic and representative poll has to be equal, common and private. The 
 poll was not common, because not every user entitled to vote was notified 
 about the poll,

 (example for a more democratic poll was the poll from the foundation in 
 question bildfilter: it was on an anonymous server and I was notified by 
 email that I was entitled to vote),

 it was not private, because everybody can see who chose what. And finally it 
 was not equal, because there was no means to exclude the possibility of sock 
 puppet voting (which is very common and very easy as far as I know - I know 
 of an unpunished such vote).

Every poll is visible at the Autorenportal [1] under "Aktuelles" 
(current issues), so everyone can inform himself and decide if he wants 
to vote. We decided to have public polls since everyone should be able 
to discuss the arguments and to leave comments. We have a policy 
for that. This is our model.

You must be an asshole to claim that we have many sock puppets inside 
these votes. It's an open attack against the community.

* The user must be logged in.
* He must have been active for at least two months (the poll announcement 
and duration time is shorter).
* He must have at least 200 edits inside the article namespace and more 
than 50 edits in the last 12 months.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter

2011-09-22 Thread Tobias Oelgarte
Am 22.09.2011 23:55, schrieb Andrew Gray:
 On 21 September 2011 14:14, Jussi-Ville Heiskanencimonav...@gmail.com  
 wrote:

 The real problem here is that if there was a real market for stupid
 sites like that, they would already be there. And they are not, which
 does seem to point to the conclusion that there isn't a real market
 for such sites. Doesn't it?
 Not really.

 There are basically no major WP-derivative sites of any kind in
 existence - the ones that exist are either plain dumps studded with
 ads, or very small-scale attempts to do something good and innovative.
 As far as I can tell, it's just very hard to get a fork or a
 significantly different derivative site up and running successfully;
 it requires a large investment on fairly speculative predictions.

 Given this, it's hard to say that the absence of a particular kind of
 derivative site is due to there being a lack of demand for that *kind*
 of site - there might be demand, there might not, we just can't tell
 from the available evidence.

 (To steal David's analogy, it's a bit like saying that unicorns can't
 be trained, as there are no trained unicorns. Of course, there are no
 unicorns at all, and their trainability is moot...)

Given the situation that we provide a filter as described in the 
referendum, it would be relatively easy to set up 
something like a live mirror. It could work like a proxy (possibly with 
its own caches) that could enable specific filtering as the default, without 
the option to disable it. One might provide it as a service for 
institutions, which would simply redirect access to Wikipedia over such a 
proxy and therefore enforce the hiding of the images.

Currently you would need to create a live mirror and feed it 
with tagging data. The proxy isn't money-intensive, but the tagging is 
very expensive if you have to do it alone. That's the main reason 
why no such pages/proxies exist.

If *we* provide the tagging, then it would be much easier to do things 
like that.
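
To make this concrete, here is a minimal sketch of such a filtering proxy 
in Python. The upstream URL, the port, and the one-entry file list are my 
own placeholders; in the real scenario the list would be an export of 
whatever ends up in the filter categories.

import re
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://en.wikipedia.org"           # site being mirrored
FILTERED_FILES = {"Example_controversial.jpg"}  # placeholder category export

class FilterProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Fetch the requested page live from the upstream site.
        req = urllib.request.Request(
            UPSTREAM + self.path,
            headers={"User-Agent": "filter-proxy-sketch"})
        with urllib.request.urlopen(req) as resp:
            html = resp.read().decode("utf-8", errors="replace")

        # Drop every <img> tag that references a filtered file name,
        # with no way for the reader to opt out.
        def strip_img(match):
            tag = match.group(0)
            return "" if any(name in tag for name in FILTERED_FILES) else tag
        html = re.sub(r"<img\b[^>]*>", strip_img, html)

        body = html.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # An institution would transparently redirect web traffic here.
    HTTPServer(("", 8080), FilterProxy).serve_forever()

The code is the trivial part. The expensive part is the tagging data, and 
that is exactly what the filter categories would hand to anyone running 
such a proxy.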

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter

2011-09-22 Thread Tobias Oelgarte
Am 22.09.2011 23:49, schrieb Andrew Gray:
 On 21 September 2011 18:20, Tobias Oelgarte
 tobias.oelga...@googlemail.com  wrote:

 Truthfully, I see no different approach for including images and text
 passages. Both are added, discussed, removed and re-added the same way as
 text is. Now I heard some say that text is written by multiple authors
 and images are only created by one. Then I must wonder that we are able
 to decide to include one source and its arguments written by one
 author, while it seems to be a problem to include the image of one
 photographer/artist. There really is no difference in the overall process.
 If we've a choice of several different images, we can pick the one
 which is most neutral - so if we're writing about a war, we can choose
 not to use a photograph of the Glorious Forces of Our Side Marching In
 Victory, and instead pick a less loaded one of some soldiers in a
 field, or a map with arrows.

 But there's a problem when the issue is whether it's appropriate to
 *include an image at all*. If one position says we should include an
 image and the other position says we shouldn't, then whichever way we
 decide, we've taken sides. We can't really be neutral in a yes-or-no
 situation.

That's the same situation as including a fact or a quote from a source 
or not, if the source itself is disputed. That's not a real difference.

The problem with images has another origin. Images aren't left out 
because they might not be illustrative. They are left out because 
of sensibilities. Something we should not do.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter

2011-09-22 Thread Tobias Oelgarte
Am 23.09.2011 01:21, schrieb Andreas Kolbe:
 And where would the problem be? If a user prefers to go to a Bowdlerised site 
 like that,
 rather than wikipedia.org, where they will see the pictures unless they 
 specifically ask not
 to see them, then that is their choice, and no skin off our noses.
 A.

The problem would be simple. The people that depend on one provider 
for internet access would have no other choice than to use a censored 
version. They type en.wikipedia.org, the local proxy redirects them 
to "filterpedia.org", which provides only the content that is not in one 
of the pre-chosen categories.

It's as simple as that. They don't choose to use that site, but they will be 
forced to. *We* would make that possible.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter

2011-09-21 Thread Tobias Oelgarte
Am 21.09.2011 16:43, schrieb Milos Rancic:
 On Wed, Sep 21, 2011 at 15:16, David Gerarddger...@gmail.com  wrote:
 On 21 September 2011 14:14, Jussi-Ville Heiskanencimonav...@gmail.com  
 wrote:
 The real problem here is that if there was a real market for stupid
 sites like that, they would already be there. And they are not, which
 does seem to point to the conclusion that there isn't a real market
 for such sites. Doesn't it?
 Look, the magical flying unicorn pony and the rainbows it shits have
 been specified, and considerable donors' money *will* be spent on the
 task, and that's all there is to it. The volunteers will just have to
 shape up and participate.
 They want that censoring tool and I think that they won't be content
 until they get it. Thus, let them have it and let them leave the rest
 of us alone.
Let them create, manage and pay for it themselves. I don't like the idea 
of spending money on censorship, and of seeing angry/busy admins that have no 
time for the users, just because some guys are waging edit wars in a war 
that no one can win through argumentation.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Board resolutions on controversial content and images of identifiable people

2011-09-21 Thread Tobias Oelgarte
Am 21.09.2011 16:53, schrieb phoebe ayers:
 On Wed, Sep 21, 2011 at 6:31 AM, Jussi-Ville Heiskanen
 cimonav...@gmail.com  wrote:
 On Thu, Aug 25, 2011 at 10:10 PM, phoebe ayersphoebe.w...@gmail.com  
 wrote:
 This seems like an over-hasty statement. There are many possible
 categorization schemes that are neutral; the ALA in fact makes that
 distinction itself, since libraries (obviously) use all kinds of labeling
 and categorization schemes all the time. The ALA and other library
 organizations have taken a stand against censorious and non-neutral
 labeling, not all labeling. If you keep reading the ALA page you linked, it
 says that the kind of labels that are not appropriate are when the
 prejudicial label is used to warn, discourage or prohibit users or certain
 groups of users from accessing the material -- e.g. a label that reads not
 appropriate for children. That does not mean that picture books for kids,
 or mystery novels, or large-print books, aren't labeled as such in every
 public library in the country -- and that is the difference between
 informative and prejudicial labeling.
 Would I be incorrect in pointing out that American public libraries routinely
 exclude the world-famous children's book author Astrid Lindgren's children's
 books, because to puritanical minds a man who can elevate himself
 with a propeller beanie, and look into children's rooms thereby, smacks too
 much of pedophilia?

 Uh... yes, you would be incorrect? I certainly checked out Astrid
 Lindgren books from the public library when I was a kid. I have never
 heard of them getting challenged in the US. Citation needed?

 The ALA maintains a list of books that do get routinely challenged in
 US libraries here:
 http://www.ala.org/ala/issuesadvocacy/banned/frequentlychallenged/index.cfm.
 Note, this just means someone *asked* for the book to be removed from
 the public or school library, not that it actually was; libraries
 generally stand up to such requests.

 Also note that challenges are typically asking for the book to be
 removed from the library altogether -- restricting access to it for
 everyone in the community -- as opposed to simply not looking at it
 yourself or allowing your own kids to check it out. It's the 'removal
 for everyone' part that is the problem; the issue here is freedom of
 choice: people should have the right to read, or not read, a
 particular book as they see fit.

 -- phoebe
As described multiple times earlier:

That is not the main problem. The categorization of the content _by 
ourselves_ is the problem. It is strongly against the basic rules that 
made Wikipedia motivating and big. Your advocacy means more harm than 
benefit for the project. We waste an enormous effort, open new 
battlefields aside from the content/article-related discussions, and we 
open the door to censorship. We would set an example that censorship or 
self-censorship is needed! Is that what you are trying to achieve?

It's your basic philosophy that sucks. It's _not_ the choice of the 
reader to hide images he doesn't like. It's the choice of the reader to 
hide images that others don't like! Now get a cup of tea and think about it.

Tobias

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Board resolutions on controversial content and images of identifiable people

2011-09-21 Thread Tobias Oelgarte
Am 21.09.2011 17:21, schrieb Jussi-Ville Heiskanen:
 On Wed, Sep 21, 2011 at 5:53 PM, phoebe ayersphoebe.w...@gmail.com  wrote:
 On Wed, Sep 21, 2011 at 6:31 AM, Jussi-Ville Heiskanen
 cimonav...@gmail.com  wrote:
 On Thu, Aug 25, 2011 at 10:10 PM, phoebe ayersphoebe.w...@gmail.com  
 wrote:
 This seems like an over-hasty statement. There are many possible
 categorization schemes that are neutral; the ALA in fact makes that
 distinction itself, since libraries (obviously) use all kinds of labeling
 and categorization schemes all the time. The ALA and other library
 organizations have taken a stand against censorious and non-neutral
 labeling, not all labeling. If you keep reading the ALA page you linked, it
 says that the kind of labels that are not appropriate are when the
 prejudicial label is used to warn, discourage or prohibit users or certain
 groups of users from accessing the material -- e.g. a label that reads 
 not
 appropriate for children. That does not mean that picture books for kids,
 or mystery novels, or large-print books, aren't labeled as such in every
 public library in the country -- and that is the difference between
 informative and prejudicial labeling.
 Would I be incorrect in pointing out that American public libraries routinely
 exclude the world-famous children's book author Astrid Lindgren's children's
 books, because to puritanical minds a man who can elevate himself
 with a propeller beanie, and look into children's rooms thereby, smacks too
 much of pedophilia?

 Uh... yes, you would be incorrect? I certainly checked out Astrid
 Lindgren books from the public library when I was a kid. I have never
 heard of them getting challenged in the US. Citation needed?

 The ALA maintains a list of books that do get routinely challenged in
 US libraries here:
 http://www.ala.org/ala/issuesadvocacy/banned/frequentlychallenged/index.cfm.
 Note, this just means someone *asked* for the book to be removed from
 the public or school library, not that it actually was; libraries
 generally stand up to such requests.

 Also note that challenges are typically asking for the book to be
 removed from the library altogether -- restricting access to it for
 everyone in the community -- as opposed to simply not looking at it
 yourself or allowing your own kids to check it out. It's the 'removal
 for everyone' part that is the problem; the issue here is freedom of
 choice: people should have the right to read, or not read, a
 particular book as they see fit.

 The wikipedia article does mention the controversy, but omits the
 fact that several libraries did in fact pull the books from their inventory...

 http://en.wikipedia.org/wiki/Karlsson-on-the-Roof

Most of the very popular books were removed due to other problems. Some 
had a format/case that would not suit (Madonna, for example). 
Some others were bought and immediately sold out. It's simply not 
the job of a library to stock bestsellers for giveaway as soon as they 
come out. That is often misinterpreted as "banned books". It just leads 
to the fact that some books are bought later on, when the hype has settled 
down.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter - magical flying unicorn pony that s***s rainbows

2011-09-21 Thread Tobias Oelgarte
Am 21.09.2011 17:37, schrieb WereSpielChequers:
 I get the idea that there are theoretical reasons why image filters can't
 work, and I share the view that the proposal which was consulted on needs
 some improvement. An individual choice made at the IP level was a circle
 that looked awfully difficult to square.

 But since Flickr has already proven that something like this can work in
 practice, can we agree to classify Image filters as one of those things that
 work in practice but not in theory? Then we can concentrate on the practical
 issue of if we decide to implement this, how do we do it better than Flickr
 has?

 NB I would not want us to implement this the way Flickr has
 http://www.flickr.com/help/filters/#258 And not only because I'm not totally
 convinced that our community would share their view that Germany is the
 country that needs the tightest restrictions.

 Hugs

 WereSpielChequers

 PS My niece absolutely wants that magical flying unicorn pony for the winter
 solstice, especially if it s***s rainbows. Would you mind telling me where I
 can order one
Using Flickr as an example is a bad idea. First, there are thousands 
if not millions of images with false categorization, meaning that the 
filter is ineffective. Just do a quick search on your own and you will 
find the examples. Secondly, Flickr does not advocate knowledge. It has a 
completely different mission.

PS: Just implement the filter and you will see that 
unicorn-rainbow-brick argumentation falling from the sky, where you 
pushed it.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Board resolutions on controversial content and images of identifiable people

2011-09-21 Thread Tobias Oelgarte
Am 21.09.2011 18:31, schrieb Kanzlei:
 Am 21.09.2011 um 17:36 schrieb Tobias 
 Oelgartetobias.oelga...@googlemail.com:

 It's your basic philosophy that sucks. It's _not_ the choice of the
 reader to hide image he don't like. It's the choice of the reader to
 hide image that others don't like! Now get a cup of tea and think about it.
 It's the bad double-think that sucks. In most cases pictures give no 
 necessary information in an article, or they represent no NPOV information at 
 all. They just illustrate. No piece of information would be missing if the 
 pictures were linked instead of shown. Often it is sheer chance which picture 
 is chosen for an article.
For the same reason you could write articles consisting only of 
links, since writing the article would represent no NPOV information at 
all. Do you really believe that nonsense you just wrote down?
 But you are right. The basic conflict is philosophical. The question behind 
 it is: shall we continue as "tough guys with porn pictures, no limits and no 
 rules", as everything started, or shall we include more sensitive people, 
 women and nations?
We already include them. The problem isn't some articles. The problem 
is the knowledge needed to participate in an encyclopedia that forces 
you to understand a complete syntax before you even know what you're 
doing. That makes us geeky, not our content. Additionally, this claim:

"tough guys with porn pictures, no limits and no rules."

Sorry, I won't comment on this. It's just so out of place, complete 
nonsense in strong wording.
 Shall our knowledge come rudely in one step to everybody, or shall we try to 
 reach more people by making steps of least astonishment towards the same 
 truth, but at a pace everybody can live with?
We have no problem with reaching people. We have a problem letting them 
participate. The images aren't the issue. The main issue is the editing 
and overall project climate. Aggressive people, who use one false 
claim after the other or demand a {{citation needed}} after 
every word, are the ones that drive authors away. Just let the people do 
as they please, and don't tell them what they shouldn't look at. That is 
their own decision. The WMF should provide them tools to edit and to 
discuss, but not to hide the actual content.
 For me this discussion is hypocritical. Don't hide yourself behind the choice 
 of the reader. The writers of an article choose alone. They choose words, 
 order and content. The pictures are in most cases the least important of 
 these. So every article hides a lot of information the writers chose not to 
 show. That's normal. And they normally flippantly forget to write in a style 
 the more sensitive can live with, that's all.
Whoever writes articles in "a style the more sensitive can live with" 
should just leave the project. That would be a bending of facts and a 
strict violation of NPOV.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter - magical flying unicorn pony that s***s rainbows

2011-09-21 Thread Tobias Oelgarte
Am 21.09.2011 18:41, schrieb Andrew Gray:
 On 21 September 2011 16:53, David Gerarddger...@gmail.com  wrote:

 They do it by crowdsourcing a mass American bias, don't they?

 An American POV being enforced strikes me as a problematic solution.

 (I know that FAQ says global community. What they mean is people
 all around the world who are Silicon Valley technologists like us -
 you know, normal people. This approach also has a number of fairly
 obvious problems.)
 I mentioned this a couple of weeks ago, I think, but this effect cuts both 
 ways.

 We already know that our community skews to - as you put it - people
 all around the world who are technologists like us. As a result, that
 same community is who decides what images are reasonable and
 appropriate to put in articles.

 People look at images and say - yes, it's appropriate, yes, it's
 encyclopedic, no, it's excessively violent, no, that's gratuitous
 nudity, yes, I like kittens, etc etc etc. You do it, I do it, we try
 to be sensible, but we're not universally representative. The
 community, over time, imposes its own de facto standards on the
 content, and those standards are those of - well, we know what our
 systemic biases are. We've not managed a quick fix to that problem,
 not yet.

 One of the problems with the discussions about the image filter is
 that many of them argue - I paraphrase - that Wikipedia must not be
 censored because it would stop being neutral. But is the existing
 Wikipedian POV *really* the same as neutral, or are we letting our
 aspirations to inclusive global neutrality win out over the real state
 of affairs? It's the great big unexamined assumption in our
 discussions...
You describe us as geeks who can't write in a way that would 
please the readers. Since we are geeks, we are strongly biased and write 
down POV all day. If that is true, why is Wikipedia such a success? Why do 
people read it? Do they like geeky stuff?

Don't you think that we would have thousands of complaints a day if your 
words were true at all? Just have a look at the article [[hentai]] 
and look at the illustration. How many complaints about this image do we 
get a day? None, because it is less than one complaint a month, while 
the article itself is viewed about 8,000 times a day.[1] That makes 
one complainer in 240,000 (0.0004%). Now we could argue that only 
some of the offended readers would comment on the issue. Let's assume 1 in 
100, or even 1 in 1,000, does. Then it is still only 0.04% or 0.4%. Is that 
the big mass of users we want to support to get more contributors?

[1] http://stats.grok.se/en/201109/hentai
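
For anyone who wants to check the arithmetic, a quick back-of-the-envelope 
sketch; the view count and complaint rate are the rough figures cited above, 
and the 1-in-100/1-in-1,000 multipliers are assumptions, nothing more:

views_per_month = 8_000 * 30   # ~240,000 page views per month
complaints_per_month = 1       # roughly one complaint a month

rate = complaints_per_month / views_per_month
print(f"observed complainer share: {rate:.4%}")   # ~0.0004%

# Assume only 1 in 100 or 1 in 1,000 offended readers actually complains:
for silent_factor in (100, 1_000):
    print(f"1 in {silent_factor} complains -> {rate * silent_factor:.2%}")
# -> 0.04% and 0.42%

Even the most generous multiplier stays well below one percent.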

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter

2011-09-21 Thread Tobias Oelgarte
Am 21.09.2011 18:45, schrieb Milos Rancic:
 On Wed, Sep 21, 2011 at 18:00, David Levylifeisunf...@gmail.com  wrote:
 Some people won't be content until Wikipedia's prose conveys their
 cultural/religious/spiritual beliefs as absolute truth.  Should the
 WMF provide en.[insert belief system].wikipedia.org so they can edit
 it and leave the rest of us alone?
 Don't worry! Any implementation of a censorship project would lead to
 endless troll-fests which would be more dumb than YouTube comments.
 The point is just to kick them out of productive projects. Imagine
 a place where Christian, Muslim, religion3, ... religionN
 fundamentalists will have to cooperate! That would be the place for
 epic battles of dumbness. We'll have an in-house circus!
Then why do some people think we could solve this problem with a _global_ 
filter, with rules and judgments that will be defined by a mostly 
English-speaking, Christianity-dominated project?

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Board resolutions on controversial content and images of identifiable people

2011-09-21 Thread Tobias Oelgarte
Am 21.09.2011 18:56, schrieb Michael Snow:
 On 9/21/2011 7:53 AM, phoebe ayers wrote:
 On Wed, Sep 21, 2011 at 6:31 AM, Jussi-Ville Heiskanen
 cimonav...@gmail.com   wrote:
 On Thu, Aug 25, 2011 at 10:10 PM, phoebe ayersphoebe.w...@gmail.com   
 wrote:
 This seems like an over-hasty statement. There are many possible
 categorization schemes that are neutral; the ALA in fact makes that
 distinction itself, since libraries (obviously) use all kinds of labeling
 and categorization schemes all the time. The ALA and other library
 organizations have taken a stand against censorious and non-neutral
 labeling, not all labeling. If you keep reading the ALA page you linked, it
 says that the kind of labels that are not appropriate are when the
 prejudicial label is used to warn, discourage or prohibit users or certain
 groups of users from accessing the material -- e.g. a label that reads 
 not
 appropriate for children. That does not mean that picture books for kids,
 or mystery novels, or large-print books, aren't labeled as such in every
 public library in the country -- and that is the difference between
 informative and prejudicial labeling.
 Would I be incorrect in pointing out that American public libraries routinely
 exclude the world-famous children's book author Astrid Lindgren's children's
 books, because to puritanical minds a man who can elevate himself
 with a propeller beanie, and look into children's rooms thereby, smacks too
 much of pedophilia?

 Uh... yes, you would be incorrect? I certainly checked out Astrid
 Lindgren books from the public library when I was a kid. I have never
 heard of them getting challenged in the US. Citation needed?

 The ALA maintains a list of books that do get routinely challenged in
 US libraries here:
 http://www.ala.org/ala/issuesadvocacy/banned/frequentlychallenged/index.cfm.
 Note, this just means someone *asked* for the book to be removed from
 the public or school library, not that it actually was; libraries
 generally stand up to such requests.

 Also note that challenges are typically asking for the book to be
 removed from the library altogether -- restricting access to it for
 everyone in the community -- as opposed to simply not looking at it
 yourself or allowing your own kids to check it out. It's the 'removal
 for everyone' part that is the problem; the issue here is freedom of
 choice: people should have the right to read, or not read, a
 particular book as they see fit.
 I'm unable to find a source on this that doesn't appear to be relying on
 the Wikipedia article in the first place. The supposed rationale seems
 to be that Karlsson is sort of subversive, if you will, and the books
 might undermine traditional concepts of authority (for people of a
 certain era, maybe it also didn't help that the books were popular in
 the USSR). It's possible that somebody somewhere did question its
 inclusion once, which could be true of just about any book. Even if so,
 nothing suggests that the concern had anything to do with encouraging or
 catering to pedophiles. Were that the issue, I would have thought The
 Brothers Lionheart a more obvious target, seeing as how it has young
 boys bathing nude in a river (the scene is illustrated - child porn!),
 and I've never heard of it being banned either.

 --Michael Snow
There might be a simple reason for that. Some nude boys bathing in a river 
has nothing to do with pornography and therefore nothing to do with child 
pornography. A simple fact that is widely ignored in many discussions 
by fundamentalists. They claim that any depiction of a nude body is 
sexual and porn. Not even the law agrees with this extreme point of view.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter

2011-09-21 Thread Tobias Oelgarte
Am 21.09.2011 19:10, schrieb Thomas Dalton:
 On 21 September 2011 14:06, Milos Rancicmill...@gmail.com  wrote:
 You didn't understand me well. It's not about fork(s), it's about
 wrappers, shells around the existing projects.

 * en.safe.wikipedia.org/wiki/whatever  would point to
 en.wikipedia.org/wiki/whatever
 * When you click on edit from en.safe, you would get the same text
 as on en.wp.
 * When you click on save from en.safe, you would save the text on
 en.wp, as well.
 * The only difference is that images in wikitext won't be shown like
 [[File:something sensible.jpg]], but as
 [[File:fd37dae713526ee2da82f5a6cf6431de.jpg]].
 * safe.wikimedia.org won't be Commons fork, but area for image
 categorization to those who want to work on it. It is not the job of
 Commons community to work on personal wishes of American
 right-wingers.

 (Note: safe is not good option for name, as it has four characters
 and it could be used for language editions of Wikipedia; maybe
 safe.en.wikipedia.org could be better option.)
 What is the advantage of that compared with the feature as it was
 originally proposed? All you've done is made the URL more complicated.
 You'll still need to use user preferences to determine which images
 are getting hidden, so why can't you just have an on/off user
 preference as well rather than determining whether the filter should
 be on or off based on the URL?
I would encourage extending this filter. Add the additional option to 
hide all text, since the words might be offensive.

I still can't see a rational difference between images included in 
articles by the will of the community and text passages included by the 
will of the community. But hiding selected text seems to be a totally 
different issue inside the WMF argumentation (it is called censorship). 
Truthfully, I see no different approach for including images and text 
passages. Both are added, discussed, removed and re-added the same way as 
text is. Now I heard some say that text is written by multiple authors 
and images are only created by one. Then I must wonder that we are able 
to decide to include one source and its arguments written by one 
author, while it seems to be a problem to include the image of one 
photographer/artist. There really is no difference in the overall process.
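
As a side note, the hash-based renaming Milos describes above 
([[File:something sensible.jpg]] becoming an opaque name like 
[[File:fd37dae713526ee2da82f5a6cf6431de.jpg]]) would be trivial to script. 
A minimal sketch, assuming MD5 of the original file name (the original post 
does not specify a hash):

import hashlib
import re

def obfuscate_file_links(wikitext: str) -> str:
    # Rewrite every [[File:Name.ext]] link to a hash-based name.
    def repl(match):
        name = match.group(1)
        ext = name.rsplit(".", 1)[-1] if "." in name else "jpg"
        digest = hashlib.md5(name.encode("utf-8")).hexdigest()
        return "[[File:" + digest + "." + ext + "]]"
    return re.sub(r"\[\[File:([^\]|]+)\]\]", repl, wikitext)

print(obfuscate_file_links("See [[File:Something sensible.jpg]] here."))

Of course the wrapper would also need the reverse mapping to actually serve 
the renamed files, which is the part that costs real infrastructure.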


___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter - magical flying unicorn pony that s***s rainbows

2011-09-21 Thread Tobias Oelgarte
Am 21.09.2011 19:36, schrieb Kanzlei:
 Am 21.09.2011 um 19:04 schrieb Tobias 
 Oelgartetobias.oelga...@googlemail.com:

 Don't you think that we would have thousands of complaints a day if your
 words would be true at all? Just have a look at the article [[hentai]]
 and look at the illustration. How many complaints about this image do we
 get a day? None, because it is less then one complain in a month, while
 the article itself is viewed about 8.000 times a day.[1] That would make
 up one complainer in 240.000 (0,0004%). Now we could argue that only
 some of them would comment on the issue. Lets assume 1 of 100 or even 1
 of 1000. Then it are still only 0,04% or 0,4%. That is the big mass of
 users we want to support get more contributers?

 [1] http://stats.grok.se/en/201109/hentai
 Your assumption is wrong. The 8,000 daily views are neither neutral nor 
 representative of all users. Put the picture on the main page and you get 
 representative results. We had that in Germany.
Yes, we put the vulva on the main page and it got quite some attention. 
We wanted it this way, to test the reaction of the readers and to 
start a discussion about it. The result was as expected: complaints that 
it was offensive, together with praise for showing what neutrality really is. 
After the discussion settled, we opened a Meinungsbild (poll) asking 
whether any article/image would be suitable for the main page 
(actually it asked whether to disallow some topics). The result was very 
clear. 13 supported the approach to leave out some content from the main 
page. 233 (95%) were against the approach to hide some subjects from the 
main page.

You said that my assumption is wrong. We can repeat this for hundreds of 
articles and you would get the same result. Now prove that this 
assumption, which is sourced (just look at it), is wrong, or say what is 
wrong with my assumption (in detail).

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter

2011-09-21 Thread Tobias Oelgarte
Am 21.09.2011 19:37, schrieb Milos Rancic:
 On Wed, Sep 21, 2011 at 19:10, Thomas Daltonthomas.dal...@gmail.com  wrote:
 What is the advantage of that compared with the feature as it was
 originally proposed? All you've done is made the URL more complicated.
 You'll still need to use user preferences to determine which images
 are getting hidden, so why can't you just have an on/off user
 preference as well rather than determining whether the filter should
 be on or off based on the URL?
 * People should have possibility to choose the set of images which
 they don't want to see.
They already have this choice. Just hide images and live without them. 
We have no way to guarantee that our expectations about filtering will meet 
the expectations of the audience. How will they choose the images _they_ 
don't want to see? Hundreds of categories to comply with diverse 
sensibilities?
 * As it's not the main site, but wrapper, it could have turned off
 images offensive to anyone, so everybody would be able to see the site
 without having to log in. It could lead to no images by default, but
 that's not my problem.
That isn't the problem. What is the difference between typing a different URL 
and clicking a button? This changes nothing besides the fact that we 
would have two different URLs. It's a solution for a non-existing 
problem (with or without image filter). Actually it would create one 
additional deficit: the user would have no categories to choose from, 
something you requested in your first point. It makes things even worse.
 * They could experiment, as nobody would care about the site. As
 Tobias mentioned below, if some text is offensive to someone, they
 could add it into the filter.
Currently we can't filter text. This is technically an impossible job
without fixed revisions: the text changes constantly. Some passages
might become offensive over time, others might get milder. The only
reason image filtering is a little different is the technical aspect
that an image, once uploaded, rarely changes its content. Images are
like text modules placed inside the article and are therefore much
easier to handle than the content itself.

You proposed that we could set up a project to play the role of a
censor (not in an evil way), so that we could experiment with it and
find out how people react. I would not support such a project, and I
would refrain from investing time and money into it. It's clear to me
that the benefits would easily be eaten up. If there were truly an
audience that enjoyed preselected content from Wikipedia, then I'm sure
we would already have commercial pages providing that service for
churches, institutions and so on. If the potential audience for such a
version were that big, then I'm sure such projects would already exist.
But it seems to me that such a project would not survive, given the
massive time and effort that would have to go into it while the paying
audience is so minimal. If we implement the image filter, then all of
our donors would also have to accept funding a small but loud minority.
And if we still support such a project, then we make
"wikipedia.censored.net" a possibility, since we are the providers of
the content. Now let churches, institutions, etc. pay money for
"censored.net" and block "wikipedia.org". I would be the first to open
this site. Let the Wikipedia volunteers do the hard job, use their
categories, review them with little effort for some minor mistakes, and
sell it all for money. What an amazing thing to do! Congratulations,
community ;-)
 * Most importantly, that won't affect anything else. Except, probably,
 ~$1M/year of WMF budget for development of censorship software and
 censorship itself, as they will say that they lack people to censor
 images and that they need employees to do that. Although it would be
 more useful to give that ~$1M/year for access to Wikipedia from
 African countries, I think that it's a reasonable price for having
 people who want censored content. Bottom line is that News Corp will
 pay all of that and much more by giving us free access to Fox News.
It would not be so drastic, and I doubt that we would need any content
from foxy newswash. But the belief that they would pay for our issues
makes me laugh so hard that I'm in pain. ;-)

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter

2011-09-21 Thread Tobias Oelgarte
Am 21.09.2011 20:05, schrieb Andre Engels:
 On Wed, Sep 21, 2011 at 7:20 PM, Tobias Oelgarte
 tobias.oelga...@googlemail.com  wrote:


 I still can't see a rational difference between images included in
 articles by the will of the community and text passages included by the
 will of the community.

 It's much easier to note offensive text fragments before reading them than
 to note offensive images before seeing them. But I guess the more
 fundamental issue is: there are, I assume, people who have requested this
 feature for images. There are either no or only very few who have requested
 it for text.

I would doubt that. To me it seems to be only a technical issue. Image
files don't change over time (at least not often), while text is in
constant movement. The images in articles are also in constant
movement: some will be replaced by others, some will be updated, some
might be moved to another sub-article, and so on. That means that
images are, in comparison to text, the only element for which filtering
could feasibly be implemented in a more or less direct way.

That's why no one asks for text. Actually, I think that we have more
potentially offending articles and text passages than images. Just
count the biology/species articles with these enormous info boxes
showing the evolution of species (as explored by Darwin). If we could
filter text, we would have more than enough claims to remove that. I'm
sure about that.

The basic thought process at the WMF must have been:
A: “We need to do something, otherwise we could lose some donors. We
need to look fresh and attractive.”
B: “But what do we do? All we can really do is something technical,
without upsetting a huge number of authors.”
A: “Yeah, wikitext is so hard to parse, and we already have a project
for that. This will take ages...”
B: “Didn't we have some complaints? There was a group that claimed
Wikipedia has too many male authors.”
A: “Ah, you mean that gender-gap project. But just look at our pages.
Who, without having studied informatics, would really participate? It's
way too complicated, and we should present some results now.”
B: “Hey, yesterday I read a comment by Hero from FOX that we have too
much porn. OK, they had nothing else to report, but this could be
something.”
A: “Great idea. Let's delete all pornographic images.”
B: “We can't do that. Look what happened to Jimbo. As soon as we delete
the images it will cause problems.”
A: “Just got an idea. Hiding is not deleting. How about hiding all
these images by default?”
B: “Would that be accepted? Some might ask: why only porn?”
A: “OK, then we need to make it more general.”
B: “Wouldn't they cry that this is despotism and censorship?”
A: “Let's see... How about we let someone write a report, praise him as
neutral, and make sure that the report sees a great need for such a
feature? We could argue that it is important and not our idea.”
B: “That's great. Could we also do that for text?”
A: “Text would be so hard, and it would remind people of blacked-out
pages. I don't think that would be a good idea. But how about giving
them a new tool to decide whether images are hidden or not? I see a lot
of reasons to do so. It could please FOX and some other critics.”
B: “Wouldn't this just move the problem to another project?”
A: “Who cares? Let them handle it. We will just say that the community
will find a solution, as we always do.”
B: “OK. Bye.”

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter

2011-09-21 Thread Tobias Oelgarte
Am 21.09.2011 21:02, schrieb Milos Rancic:
 On Wed, Sep 21, 2011 at 20:47, David Levylifeisunf...@gmail.com  wrote:
 Milos Rancic wrote:
 Don't worry! Any implementation of a censorship project would lead to
 endless troll-fests which would be more dumb than YouTube comments.
 The point is just to kick them out of productive projects. Imagine
 a place where Christian, Muslim, religion3, ..., religionN
 fundamentalists will have to cooperate! That would be the place for
 epic battles of dumbness. We'll have an in-house circus!
 You're comfortable with the Wikimedia Foundation hosting/funding an
 in-house circus?
 Between:
 1) implementation against the majority will on the main project;
 2) prolonged discussion about this issue, which would harm the community;
 3) irrelevant in-house circus

 -- I choose the circus.

You choose discussions about images in a circus, outside the context
they belong to? This won't be a circus, since we have just reduced the
number of arguments from some to zero. If combatants argue about a
topic without having a word left, isn't that called a battlefield?


___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter - magical flying unicorn pony that s***s rainbows

2011-09-21 Thread Tobias Oelgarte
Am 21.09.2011 21:28, schrieb Sue Gardner:
 On 21 September 2011 11:10, Tobias Oelgarte
 tobias.oelga...@googlemail.com  wrote:
 Am 21.09.2011 19:36, schrieb Kanzlei:
 Am 21.09.2011 um 19:04 schrieb Tobias 
 Oelgartetobias.oelga...@googlemail.com:

 Don't you think that we would have thousands of complaints a day if
 your words were true at all? Just have a look at the article [[hentai]]
 and look at the illustration. How many complaints about this image do
 we get a day? None, because it is less than one complaint a month,
 while the article itself is viewed about 8,000 times a day.[1] That
 makes one complainer in 240,000 (0.0004%). Now we could argue that only
 some of them would comment on the issue. Let's assume 1 in 100 or even
 1 in 1,000 does. Then it is still only 0.04% or 0.4%. Is that the big
 mass of users we want to support in order to get more contributors?

 [1] http://stats.grok.se/en/201109/hentai
 Your assumption is wrong. The 8,000 daily views are neither neutral
 nor representative of all users. Put the picture on the main page and
 you get representative results. We had that in Germany.
 Yes, we put the vulva on the main page and it got quite some attention.
 We wanted it this way, to test the reaction of the readers and to
 start a discussion about it. The result was as expected: complaints
 that it was offensive, together with praise for showing what neutrality
 really is. After the discussion settled, we opened a Meinungsbild
 (poll) to ask whether any article/image would be suitable for the main
 page (actually, it asked to disallow certain topics). The result was
 very clear: 13 supported the approach of leaving some content off the
 main page; 233 (95%) were against hiding some subjects from the main
 page.

 Can you point me towards that poll?

 Thanks,
 Sue


Gladly. You will find it under "Restrictions of topics for the article
of the day":
http://de.wikipedia.org/wiki/Wikipedia:Meinungsbilder/Beschr%C3%A4nkung_der_Themen_f%C3%BCr_den_Artikel_des_Tages

It started some time after the vulva was presented on the main page.
After the poll we even presented topics like Futanari [1] on the main
page, on November 10th, 2010 [2]. The reaction can be described as no
reaction at all. It was just as if it were any other article. Some left
praise on the discussion page, others made some corrections, and so on.
There simply wasn't such a thing as an uproar, or any complaints. The
article had 3k views a day, and there has not been one comment about
removing images or anything else since that date. That's one of the
reasons why I'm wondering whether the "offensive image" problem even
exists, at least for the German Wikipedia. And if I look at the
discussion pages on EN, it's basically the same: there are more
complaints, but also at least triple the number of viewers per day.

[1] http://de.wikipedia.org/wiki/Futanari
[2] 
http://de.wikipedia.org/wiki/Wikipedia:Hauptseite/Artikel_des_Tages/Zeittafel#November_2010

Tobias

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter - magical flying unicorn pony that s***s rainbows

2011-09-21 Thread Tobias Oelgarte
Am 21.09.2011 22:20, schrieb Kanzlei:
 Am 21.09.2011 um 20:10 schrieb Tobias 
 Oelgartetobias.oelga...@googlemail.com:

 Am 21.09.2011 19:36, schrieb Kanzlei:
 Am 21.09.2011 um 19:04 schrieb Tobias 
 Oelgartetobias.oelga...@googlemail.com:

 Don't you think that we would have thousands of complaints a day if
 your words were true at all? Just have a look at the article [[hentai]]
 and look at the illustration. How many complaints about this image do
 we get a day? None, because it is less than one complaint a month,
 while the article itself is viewed about 8,000 times a day.[1] That
 makes one complainer in 240,000 (0.0004%). Now we could argue that only
 some of them would comment on the issue. Let's assume 1 in 100 or even
 1 in 1,000 does. Then it is still only 0.04% or 0.4%. Is that the big
 mass of users we want to support in order to get more contributors?

 [1] http://stats.grok.se/en/201109/hentai
 Your assumption is wrong. The 8,000 daily views are neither neutral
 nor representative of all users. Put the picture on the main page and
 you get representative results. We had that in Germany.
 Yes, we put the vulva on the main page and it got quite some attention.
 We wanted it this way, to test the reaction of the readers and to
 start a discussion about it. The result was as expected: complaints
 that it was offensive, together with praise for showing what neutrality
 really is. After the discussion settled, we opened a Meinungsbild
 (poll) to ask whether any article/image would be suitable for the main
 page (actually, it asked to disallow certain topics). The result was
 very clear: 13 supported the approach of leaving some content off the
 main page; 233 (95%) were against hiding some subjects from the main
 page.
 This poll was not representative for Wikipedia readers, but only for
 some German Wikipedia editors. Scientific research has found that
 German editors are not representative of German-speaking people but
 far more environmental-liberal-leftist than average Germans. The poll
 was not even representative for German editors, because only a few
 voted.

This needs a big *CITATION NEEDED*. We have counter-examples like the
article Futanari, which I mentioned before.
 You said that my assumption is wrong. We can repeat this for hundreds
 of articles and you would get the same result. Now prove that this
 assumption, which is sourced (just look at it), is wrong, or say what
 is wrong with my assumption (in detail).
 See above




___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter - magical flying unicorn pony that s***s rainbows

2011-09-21 Thread Tobias Oelgarte
Am 21.09.2011 22:37, schrieb David Gerard:
 On 21 September 2011 21:20, Kanzleikanz...@f-t-hofmann.de  wrote:

 This poll was not representative for Wikipedia readers, but only for
 some German Wikipedia editors. Scientific research has found that
 German editors are not representative of German-speaking people but
 far more environmental-liberal-leftist than average Germans. The poll
 was not even representative for German editors, because only a few
 voted.

 233 would be a *large* turnout on en:wp. What is a large turnout on de:wp?

 Your arguments look to me like fully-general counterarguments against
 *any* on-Wikipedia poll whatsoever, no matter the structure or
 subject. What would you accept as a measure of the de:wp community
 that would actually be feasible to conduct?


 - d.

A so-called Meinungsbild (opinion poll) is the tool of choice for
making basic decisions on the project. Admins and authors are bound by
such decisions. It usually requires 2/3 of the voters to accept the
proposal as formally correct, and 2/3 of the voters to actually vote
for rather than against the proposal. There may be variations depending
on the question.
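
To make those thresholds concrete, here is a minimal sketch of such a
double two-thirds check (an illustrative addition; only the 13/233
votes discussed in this thread come from an actual poll, the
formal-validity tallies below are invented, and real Meinungsbild rules
vary from poll to poll):

  # A double two-thirds rule: the poll must be accepted as formally
  # correct by 2/3 of those voting on formal validity, and the proposal
  # itself must gain 2/3 of the pro/contra votes.
  TWO_THIRDS = 2.0 / 3.0

  def passes(formally_correct, formally_incorrect, pro, contra):
      formal_ok = formally_correct / (formally_correct + formally_incorrect) >= TWO_THIRDS
      content_ok = pro / (pro + contra) >= TWO_THIRDS
      return formal_ok and content_ok

  # Main-page restriction proposal: 13 for, 233 against. The formal
  # tallies here are made up purely for illustration.
  print(passes(formally_correct=200, formally_incorrect=40, pro=13, contra=233))
  # False: about 5.3% support, nowhere near the two-thirds threshold.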


___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter - magical flying unicorn pony that s***s rainbows

2011-09-21 Thread Tobias Oelgarte
Am 21.09.2011 21:52, schrieb Sue Gardner:
 On 21 September 2011 12:37, Bjoern Hoehrmannderhoe...@gmx.net  wrote:
 * Sue Gardner wrote:
 Yes, we put the vulva on the main page and it got quite some attention.
 We wanted it this way, to test the reaction of the readers and to
 start a discussion about it. The result was as expected: complaints
 that it was offensive, together with praise for showing what neutrality
 really is. After the discussion settled, we opened a Meinungsbild
 (poll) to ask whether any article/image would be suitable for the main
 page (actually, it asked to disallow certain topics). The result was
 very clear: 13 supported the approach of leaving some content off the
 main page; 233 (95%) were against hiding some subjects from the main
 page.
 Can you point me towards that poll?
 http://de.wikipedia.org/wiki/Wikipedia:Meinungsbilder/Beschränkung_der_Themen_für_den_Artikel_des_Tages
 Thanks, Björn. That's so interesting: I hadn't known about that poll.

 Can someone help me understand the implications of it?

 Does it mean basically this: deWP put the Vulva article on its front
 page, and then held a poll to decide whether to i) stop putting
 articles like Vulva on its front page, because they might surprise or
 shock some readers, or ii) continue putting articles like Vulva on the
 front page, regardless of whether they surprise or shock some readers.
 And the vote supported the latter.

 If I've got that right, I assume it means that policy on the German
 Wikipedia today would support putting Vulva on the main page. Is there
 an 'element of least surprise' type policy or convention that would be
 considered germane to this, or not?

 I'd be grateful too if anyone would point me towards the page that
 delineates the process for selecting the Article of the Day. I can
 read pages in languages other than English (sort of) using Google
 Translate, but I have a tough time actually finding them :-)

 Thanks,
 Sue

At first we had some basic discussion about which topics might be
suitable for the main page. That was the origin of the idea to put the
excellent article "Vulva", together with a depiction (a photograph), on
the main page to see what the reaction would be. There was quite some
reaction, but not as much as we expected, and the opinions were fairly
balanced. While some other topics with possibly objectionable content
followed in the meantime, the discussion moved forward, leading to the
decision (initiated by a group of users who opposed treating every
topic equally) to create a Meinungsbild (the linked one). The result
was very clear, and one of the main arguments was: how do we draw a
line between objectionable and non-objectionable content without
violating NPOV?

After that we did not present one shocking article after another. We
just let them come, and if an article is well written it has its chance
to be put on the main page (it has to be an "excellent" or "worth
reading" article according to the article quality rating system [1]).
The decision is made in an open process (even though it looks like a
poll, it isn't), found at:
http://de.wikipedia.org/wiki/Wikipedia_Diskussion:Hauptseite/Artikel_des_Tages


[1] 
http://de.wikipedia.org/wiki/Wikipedia:Kandidaturen_von_Artikeln,_Listen_und_Portalen

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter - magical flying unicorn pony that s***s rainbows

2011-09-21 Thread Tobias Oelgarte
Am 21.09.2011 23:53, schrieb Bjoern Hoehrmann:
 * Sue Gardner wrote:
 Does it mean basically this: deWP put the Vulva article on its front
 page, and then held a poll to decide whether to i) stop putting
 articles like Vulva on its front page, because they might surprise or
 shock some readers, or ii) continue putting articles like Vulva on the
 front page, regardless of whether they surprise or shock some readers.
 And the vote supported the latter.
 The poll asked whether there should be formalized restrictions beyond
 the existing ones (only good articles can be proposed). Voters decided
 against that and to keep the status quo instead where it is decided on
 a case-by-case basis which articles to feature on the main page without
 additional formalized selection criteria that would disqualify certain
 articles. Put differently, they decided that if someone disagrees that
 a certain article should not be featured, they cannot point to policy
 to support their argument.

That isn't true. Since the policy states that all topics are treated
equally (NPOV), there is only a discussion of whether the date might be
suitable (topics with a correlation to a certain date get precedence).
Otherwise it is decided whether the quality (currency and so on) is
suitable for AotD, since a lot of time may have passed since the
article's nomination as a good article, and the versions might differ
strongly due to recent changes. Whether a topic is offensive or not
plays no role. Only quality matters. This rule has existed from the
beginning, and it has not changed.
 If I've got that right, I assume it means that policy on the German
 Wikipedia today would support putting Vulva on the main page. Is there
 an 'element of least surprise' type policy or convention that would be
 considered germane to this, or not?
 Among editors who bothered to participate in the process, featuring
 the article at all was not particularly controversial, but there
 was a rather drawn-out discussion about which, if any, image to use.
 I have read much of the feedback at the time and my impression is
 that this was not very different among readers, most complaints
 were about the image they had picked (and possibly some about images
 in the article itself).

 Keep in mind that continental Europe's attitude towards sex is quite
 different from North America's. I read this the other day and found it
 quite illustrative: "While nine out of 10 Dutch parents had allowed
 or would consider sleepovers once the child was 16 or 17, nine out of
 10 American parents were adamant: “not under my roof.”"
That illustrates very well why the German community would not share the
same view. Additionally, it makes clear that a global approach to
filtering cannot be implemented the right way. We are putting something
like ice and fire in the same box and expecting them to come to the
same conclusion. It will simply turn into something like a battle. But
a result, a compromise? Impossible by design.
 I'd be grateful too if anyone would point me towards the page that
 delineates the process for selecting the Article of the Day. I can
 read pages in languages other than English (sort of) using Google
 Translate, but I have a tough time actually finding them :-)
 http://de.wikipedia.org/wiki/WD:Hauptseite/Artikel_des_Tages


___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter - magical flying unicorn pony that s***s rainbows

2011-09-21 Thread Tobias Oelgarte
Am 22.09.2011 00:07, schrieb Andrew Gray:
 On 21 September 2011 18:04, Tobias Oelgarte
 tobias.oelga...@googlemail.com  wrote:

 One of the problems with the discussions about the image filter is
 that many of them argue - I paraphrase - that Wikipedia must not be
 censored because it would stop being neutral. But is the existing
 Wikipedian POV *really* the same as neutral, or are we letting our
 aspirations to inclusive global neutrality win out over the real state
 of affairs? It's the great big unexamined assumption in our
 discussions...
 You describe us as geeks who can't write in a way that would
 please the readers. Since we are geeks, we are strongly biased and write
 down POV all day. If that is true, why is Wikipedia such a success? Why
 do people read it? Do they like geeky stuff?
 ...no, that's really not what I said.

 We've known for ten years that Wikipedia editors have systemic biases,
 and we've tried to avoid them by insisting on NPOV. This is one of the
 reasons we've been successful - it's not the only one, but it's
 helped.

 But being neutral in text is simple. You give both sides of the
 argument, and you do it carefully, and that's it. The method of
 writing is the same whichever side you're on, and so most topics get a
 fair treatment regardless of our bias.

 We can't do that for images. A potentially offensive image is either
 there, or it is not. We can't be neutral by half including it, or by
 including it as well as another image to balance it out - these don't
 make sense. So we go for "reasonable", "acceptable", "appropriate",
 "not shocking", etc. Our editors say "this is acceptable" or "this is
 not acceptable", and almost all the time that's based on *our personal
 opinions* of what is and isn't acceptable.
Assuming that is true: do you expect us to categorize images for the
filter in the right way, so that we are able to define what is
offensive and what is not? Do we now have the option to hide an image
or not while remaining neutral in judgment? Isn't it just the same? Has
anything changed, apart from the fact that we are now making global,
image-based (not article-based) decisions to show or hide an image?
 The end result is that our text is very neutral, but our images
 reflect the biases of our users - you and me. That doesn't seem to be
 a problem to *us*, because everything looks fine to us - the
 acceptable images are in articles, the unacceptable ones aren't.
Whether a statement is included in an article is based on the decision
of the authors. If some authors disagree, they will have to discuss it.
If one author inserts an image that he finds usable and another
disagrees, don't we also discuss that? What is the difference between
the decision to include a fact and the decision to include an image in
an article?
 People are saying we can't have the image filter because it would stop
 us being neutral. If we aren't neutral to begin with, this is a bad
 argument. It doesn't mean we *should* have the image filter, but it
 does mean we need to think some more about the reasons for or against
 it.

I personally choose images based only on whether they illustrate the
topic. That means an offensive image will without doubt take precedence
over an inoffensive alternative if it depicts the subject better.
That's a very simple approach: just leave out moral aspects and use the
images to describe the topic. If two images have the same educational
value, then we could start to discuss whether other aspects (quality,
morals, etc.) might apply. But I'm not willing to exchange a correct
depiction of a subject for an imperfect one on moral grounds. That
means representing the truth, pleasing or not, and not pink Easter
bunnies on soft green with a charming sunset in the background.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter

2011-09-21 Thread Tobias Oelgarte
Am 22.09.2011 00:20, schrieb Robert Rohde:
 On Wed, Sep 21, 2011 at 5:00 AM, David Gerarddger...@gmail.com  wrote:
 The board resolution specifies a magical flying unicorn pony that
 shits rainbows. A wide-ranging survey has been conducted on the
 precise flight patterns and the importance of which way round the
 rainbow spectrum goes. These tiresome people who keep calling this
 impossible just do not understand that the high-level decision for a
 magical flying unicorn pony that shits rainbows has been set in stone.

 I don't have any unicorns, but there are lots of ponies.  I'd be happy
 to stick a horn on one and call her sparkles if that would help?

 User rating / categorization systems are like ponies.  They are a
 familiar and commonplace way of organizing things.  They can be used
 to filter some things and reduce the degree of surprise; however they
 will always have both a large false positive rate and a large false
 negative rate.  No filter is going to fly or shit rainbows.

 The question is not where to find mythical beasts, but whether
 dressing up a horse so that it looks a little like a unicorn would
 actually be useful.  And that depends on whether there is actual
 demand for such filters, and whether having a filter that is
 sort-of-okay some of the time would be helpful to the people who want
 filtering.

 -Robert Rohde
The questions are: how many of the readers would actually
* want such a filter?
* use such a filter?
* see a need for a filter?
* accept a biased filter that doesn't match their opinion?
* think of it as a tool to protect their children?

Given the data I currently have, this will be a very tiny group of
users, but a huge amount of work, a new battlefield, and a tool for
censors.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Possible solution for image filter - magical flying unicorn pony that s***s rainbows

2011-09-21 Thread Tobias Oelgarte
Am 22.09.2011 00:42, schrieb Bjoern Hoehrmann:
 * Tobias Oelgarte wrote:
 The poll asked whether there should be formalized restrictions beyond
 the existing ones (only good articles can be proposed). Voters decided
 against that and to keep the status quo instead where it is decided on
 a case-by-case basis which articles to feature on the main page without
 additional formalized selection criteria that would disqualify certain
 articles. Put differently, they decided that if someone disagrees that
 a certain article should not be featured, they cannot point to policy
 to support their argument.

 That isn't true. Since the policy states that all topics are treated
 equally (NPOV), there is only a discussion of whether the date might be
 suitable (topics with a correlation to a certain date get precedence).
 Otherwise it is decided whether the quality (currency and so on) is
 suitable for AotD, since a lot of time may have passed since the
 article's nomination as a good article, and the versions might differ
 strongly due to recent changes. Whether a topic is offensive or not
 plays no role. Only quality matters. This rule has existed from the
 beginning, and it has not changed.
 What I meant to say is: if someone disagrees with featuring a certain
 article, they cannot point to policy that restricts which subjects can
 be featured to support their argument as there is none and editors
 decided against introducing any.
Now we speak the same language. Sorry if I misunderstood your first
wording. ;-)

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] 86% of german users disagree with the introduction of the personal image filter

2011-09-19 Thread Tobias Oelgarte
Many contributors to the poll mentioned that categorization by
sensitivities is already a big problem in itself. First, as you
mentioned, it can be misused, either by third parties, which could use
it for aggressive filtering (completely hidden/cut-out images), or
directly on the wiki itself. Since we have many images and, in
comparison, few active users, it would be very easy for influential
groups to push their POV. Such minorities can easily gain a local
majority, and there is no way to defend against them with argumentation
or sources. We have no arguments or sources for individual images where
sensitivities are concerned.

The second problem will be the categorization process. We would
categorize the images for others, not for ourselves, and we also have
no sources for argumentation. But there is another problem. We already
discuss the inclusion of images on the related articles' discussion
pages. While an image might not be appropriate for inclusion in one
article, it might be the perfect, valuable, needed-for-understanding,
maybe offensive illustration for another article. Categorization far
away from the article, not visible to users who don't enable the
filter, will not be related to article needs. So we will discuss at the
article first and then again on the new battlefield. It's not hard to
see that this will cost us much more time and effort than anything else
really worthwhile we could do in the meantime. It's a fight against the
symptoms of a cultural problem without actually tackling it. We just
push it away.

Am 19.09.2011 11:42, schrieb Lodewijk:
 I understand that the details (well, quite big and relevant details) of this
 concept were the topic of the survey. So it probably has not been mapped out
 yet (because it was/is unknown), but that would be the next step.

 I also would like to make a side note: if the main argument of the German
 Wikipedians is that this categorization in itself is evil because
 it can be used by governments and ISPs etc., then I have to disappoint you:
 even if only one project wants to make the implementation of a filter
 possible for its readers, categorization will appear.

 Further, categorization of images will likely be happening on Commons (my
 guess) - so even if you opt out as the German Wikipedia (although personally
 I think it would be more interesting to do a survey among the German-language
 readers before deciding on that), it would not help in that specific
 scenario.

 Lodewijk

 Am 19.09.2011 09:47 schrieb David Gerarddger...@gmail.com:

 On 19 September 2011 06:28, David Levylifeisunf...@gmail.com  wrote:

 Additionally, if and when the WMF proudly announces the filters'
 introduction, the news media and general public won't accept bad luck
 to those using the feature as an excuse for its failure.

 Oh, yes. The trouble with a magical category is not just that it's
 impossible to implement well - but that it's fraught as a public
 relations move.

 What is the WMF going to be explicitly - and *implicitly* - promising
 readers? What is the publicity plan? Has this actually been mapped out
 at all?


 - d.




___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] 86% of german users disagree with the introduction of the personal image filter

2011-09-19 Thread Tobias Oelgarte
Am 19.09.2011 15:33, schrieb m...@marcusbuck.org:
 Zitat von Tobias Oelgartetobias.oelga...@googlemail.com:

 The second problem will be the categorization process. We would
 categorize the images for others, not for ourselves, and we also have
 no sources for argumentation. But there is another problem. We already
 discuss the inclusion of images on the related articles' discussion
 pages. While an image might not be appropriate for inclusion in one
 article, it might be the perfect, valuable, needed-for-understanding,
 maybe offensive illustration for another article.
From what I understood, the image filter will not have subjective
 criteria like "a little offensive", "very offensive", "pornography",
 but neutrally decidable criteria like "depicts nude female breasts",
 "depicts the face of Muhammad", "depicts a mutilated dead body". If you
 select these criteria carefully there should be no need for any
 sources for your decision to put a file in the criterion's category.
 Either the image depicts the category topic or it doesn't.

 Marcus Buck
 User:Slomox

We discussed this already and came to the conclusion that you would
need hundreds of these categories to filter out most of the
"objectionable" content. But that is manageable neither from our side
nor by the user. You run into a deadlock: either we end up with some
rather subjective categories, or we have a whole lot of them that we
can't manage (at least not while staying user-friendly, and not without
wasting a whole lot of resources on a tiny group of readers).

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] 86% of german users disagree with the introduction of the personal image filter

2011-09-19 Thread Tobias Oelgarte
Am 19.09.2011 18:08, schrieb Stephen Bain:
 On Tue, Sep 20, 2011 at 12:47 AM, Tobias Oelgarte
 tobias.oelga...@googlemail.com  wrote:
 We discussed this already and came to the conclusion that you would
 need hundreds of these categories to filter out most of the
 "objectionable" content.
 And once again, the labelling doesn't need to be perfect (nothing on a
 wiki is) if an option to hide all images by default is implemented
 (which at present there seems to be broad support for, from most
 quarters).
If we implement a "hide all images" option, then we have already solved
95% of all the possible use cases mentioned before. Why would we then
take on the doubtful work of categorizing for an even lower potential
need? I support the "hide all" option. But if we have this feature,
then, especially then, I see absolutely no need for any categorization.
We would just be creating a hobby project for censors, with even less
support from the community. I definitely can't follow your reasoning in
this case.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] 86% of german users disagree with the introduction of the personal image filter

2011-09-18 Thread Tobias Oelgarte
Am 18.09.2011 09:46, schrieb Andre Engels:
 On Sun, Sep 18, 2011 at 3:49 AM, Jussi-Ville Heiskanencimonav...@gmail.com
 wrote:

 Wikimedia *used* to hold the position that we wouldn't aid China to block
 images of the Tiananmen Massacre, and went to great lengths to ensure
 that Chinese users of Wikipedia could evade blocks on viewing. I am not
 sure you are on a right track with regards to our traditions and values
 here.

 There's a big difference between the two in that the Chinese case was about
 people wanting to decide what _others_ could see, the filter is about people
 wanting to decide what _they themselves_ would see.

And who decides which image belongs to which category: the one who will
use the filter, or the one who tags the image?

Additionally: is the reader able to choose whether China uses the tags
to exclude content before it ever reaches the reader? Wouldn't we be
responsible if the feature were misused this way, since we know how
easily it can be misused?

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] 86% of german users disagree with the introduction of the personal image filter

2011-09-18 Thread Tobias Oelgarte
Am 18.09.2011 13:56, schrieb Andre Engels:
 On Sun, Sep 18, 2011 at 11:45 AM, Tobias Oelgarte
 tobias.oelga...@googlemail.com  wrote:

 Am 18.09.2011 09:46, schrieb Andre Engels:
 On Sun, Sep 18, 2011 at 3:49 AM, Jussi-Ville Heiskanen
 cimonav...@gmail.com
 wrote:
 Wikimedia *used* to hold the position that we wouldn't aid China to
 block
 images of the Tiananmen Massacre, and went to great lengths to ensure
 that Chinese users of Wikipedia could evade blocks on viewing. I am not
 sure you are on a right track with regards to our traditions and values
 here.

 There's a big difference between the two in that the Chinese case was
 about
 people wanting to decide what _others_ could see, the filter is about
 people
 wanting to decide what _they themselves_ would see.

 And who decides which image belongs to which category: the one who will
 use the filter, or the one who tags the image?

 In itself, the one who tags the image, but we happen to have a system for
 that in Wikimedia. It is called discussion and trying to reach consensus.
 Who decides whether a page is in a category? Who decides whether a page has
 an image? Who decides whether something is described on a page? All the same.

There you have a lot of room for compromise. You might shorten one
argument or decide to expand another. Most importantly, you have
arguments that you can quote. The decision to include an image is (or
should be) measured by its value as an illustration.

Filter-tagging is the opposite. You have no room for compromise: an
image either belongs to a category or it does not. What would a
compromise look like?

Which arguments will be used in these discussions? Do we expect to see
quotes/sources that define whether this particular image is
offensive/objectionable or not?

Have a try. I uploaded
http://commons.wikimedia.org/wiki/File:Anime_Girl.svg some time ago.
Now put neutral arguments on the table for whether you would tag it as
"nudity", or why you would not do so. It's a very simple task compared
to others. So let's hear your argumentation, and what the compromise
should look like if we do not come to the same conclusion.

I'll be bold. I state as the first argument that it does not belong to
the category "nudity", because the depicted figure wears clothes.

 Additionally: is the reader able to choose whether China uses the tags
 to exclude content before it ever reaches the reader? Wouldn't we be
 responsible if the feature were misused this way, since we know how
 easily it can be misused?

 I don't think it's that easy, and if it were, the best thing would be to
 make it harder to misuse rather than to throw away the child with the
 bathwater.

Maybe it would be a lot easier to use a contraceptive. Then you won't
have a child that might be thrown away with the bathwater.


___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l

