Please stop calling this an “AI” system; it is not. It is statistical
learning.
This is probably not going to make me popular…
In some jurisdictions you will need a permit to create, manage, and store
biometric identifiers, no matter whether the biometric identifier is for a known
person or not. If
users (adversary and postulated sock) are the same because they have
edited the same page. It is quite unlikely a user will edit the same page
with a sock puppet when it is known that such a system is activated.
Nice idea! The first time I wrote about this being possible was back in
2008-ish.
The problem is quite trivial: you use some observable feature to
fingerprint an adversary. The adversary can then game the system if the
observable feature can somehow be changed or modified. To avoid this the
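To make the fingerprint-and-game point concrete, here is a minimal sketch in
Lua (used for consistency with the Scribunto threads elsewhere in this
archive); every feature name, weight, and the threshold are invented for
illustration:

-- Hypothetical sketch: fingerprint a user from observable features and
-- compare two fingerprints with a simple weighted distance.
local weights = {
  mean_keystroke_interval = 1.0,     -- seconds between keystrokes
  edit_hour_of_day        = 0.1,     -- typical hour the user edits
  mean_edit_size          = 0.0001,  -- characters per edit, scaled down
}

local function distance(fp1, fp2)
  local d = 0
  for feature, w in pairs(weights) do
    local diff = (fp1[feature] or 0) - (fp2[feature] or 0)
    d = d + w * diff * diff
  end
  return math.sqrt(d)
end

local adversary = { mean_keystroke_interval = 0.21, edit_hour_of_day = 23, mean_edit_size = 340 }
local sock      = { mean_keystroke_interval = 0.22, edit_hour_of_day = 22, mean_edit_size = 310 }

-- The threshold is exactly the gameable part: an adversary who knows which
-- observable features are used can deliberately shift them under it.
print(distance(adversary, sock) < 1.0 and "possible match" or "no match")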
Slow process, fast rendering
Imagine someone edits a page, and that edit hits a very slow
tag-function of some kind. You want to respond fast with something
readable, some kind of temporary page, until the slow process has finished.
Do you choose to reuse what you had from the last revision, if
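One reading of this is a stale-while-revalidate pattern; a toy sketch with an
invented cache and render queue standing in for the real machinery:

-- Serve the rendering of the last revision immediately, queue the slow work.
local renderCache = {}   -- pageId -> { revision = n, html = "..." }
local renderQueue = {}   -- pending slow render jobs

local function onEdit(pageId, newRevision)
  -- Queue the slow render instead of blocking the response.
  table.insert(renderQueue, { pageId = pageId, revision = newRevision })
  local cached = renderCache[pageId]
  if cached then
    -- Fast path: reuse the previous revision's rendering, marked as stale.
    return cached.html .. "\n<!-- temporary: rendered from revision " ..
           cached.revision .. ", revision " .. newRevision .. " pending -->"
  end
  -- No previous rendering at all: fall back to a plain placeholder.
  return "<p>Rendering in progress...</p>"
end

print(onEdit("ExamplePage", 42))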
A project that could be really interesting is to make a Lua interface for
some of the new neural nets, especially those based on the Tsetlin engine. Sounds
nifty, but it is nothing more than a slight reformulation of an old
learning algorithm (early 1970s vintage), where the old algorithm has problem
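For reference, the building block behind that old algorithm is a two-action
learning automaton with reward and penalty transitions; a toy Lua version,
purely illustrative (this is the core mechanism, not an actual Tsetlin
machine):

-- States 1..N choose action "no", states N+1..2N choose action "yes";
-- rewards push the state deeper into the current action, penalties push
-- it toward the boundary and eventually flip the action.
local TsetlinAutomaton = {}
TsetlinAutomaton.__index = TsetlinAutomaton

function TsetlinAutomaton.new(n)
  return setmetatable({ n = n, state = n }, TsetlinAutomaton)  -- start at boundary
end

function TsetlinAutomaton:action()
  return self.state > self.n and "yes" or "no"
end

function TsetlinAutomaton:reward()
  if self:action() == "yes" then
    self.state = math.min(self.state + 1, 2 * self.n)
  else
    self.state = math.max(self.state - 1, 1)
  end
end

function TsetlinAutomaton:penalize()
  if self:action() == "yes" then
    self.state = self.state - 1   -- may cross the boundary and flip to "no"
  else
    self.state = self.state + 1
  end
end

local a = TsetlinAutomaton.new(3)
a:penalize(); a:penalize()  -- boundary crossings flip the chosen action
print(a:action())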
> I was under the (possibly mistaken) impression that the attacker was just
> flooding the network with traffic?
There are several papers about how to stop DDoS by using cryptographic
puzzles.[1] The core idea is to give the abuser some algorithmic work he
has to solve, thereby forcing him to waste processing power and slowing
him down to a manageable level.[2] That only works if you are the
target, and
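A hashcash-style sketch of such a puzzle (a generic construction, not any
specific paper's; the toy hash below is NOT cryptographic and stands in for
SHA-256 or similar):

-- The server sends a challenge; the client must find a nonce such that
-- hash(challenge .. nonce) falls under a target.
local function toyHash(s)
  local h = 5381
  for i = 1, #s do
    h = (h * 33 + s:byte(i)) % 2^32   -- djb2-style, illustration only
  end
  return h
end

-- Client: burn CPU until the hash clears the difficulty target.
local function solve(challenge, difficulty)
  local target = 2^32 / 2^difficulty
  local nonce = 0
  while toyHash(challenge .. nonce) >= target do
    nonce = nonce + 1
  end
  return nonce
end

-- Server: verification is one hash, so the cost asymmetry slows the
-- abuser down without slowing the server.
local function verify(challenge, nonce, difficulty)
  return toyHash(challenge .. nonce) < 2^32 / 2^difficulty
end

local nonce = solve("challenge-123", 16)
print(nonce, verify("challenge-123", nonce, 16))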
It is either the limit for unedited automatic translations that is set way
too high, or an admin who blames Google for whatever translated text (s)he
finds. The latter is not uncommon in Norwegian, even if the admins are told
several times ContentTranslation does not use Google for Norwegian
Sorry, but this is not valid. I can't leave this uncommented.
Assume the article is right; then all metrics would be bad, and we
can't find any example that contradicts the statement in the article.
If we pick coverage of automated tests as a metric, then _more_ test
coverage would be bad given
It is a strange discussion, especially as it is now about how some
technical debts are not _real_ technical debts. You have some code,
and you change that code, and breakage emerges both now and in future
projects. That creates technical debt. Some of it has a more
pronounced short-term effect
On Tue, Mar 19, 2019 at 12:53 PM bawolff wrote:
>
> Technical debt is by definition "ickyness felt by devs". It is a thing that
> can be worked on. It is not the only thing to be worked on, nor should it
> be, but it is one aspect of the system to be worked on. If it's ignored it
> makes it really
On Mon, Mar 18, 2019 at 10:52 PM bawolff wrote:
>
> First of all, I want to say that I wholeheartedly agree with everything tgr
> wrote.
>
> Regarding Pine's question on technical debt.
>
> Technical debt is basically a fancy way of saying something is "icky". It
> is an inherently subjective
On Sun, Mar 17, 2019 at 2:38 PM C. Scott Ananian wrote:
>
> A secondary issue is that too much wiki dev is done by WMF/WMFDE employees
> (IMO); I don't think the current percentages lead to an overall healthy
> open source community. But (again in my view) the first step to nurturing
> and
> On Sat, Mar 16, 2019 at 8:23 AM Strainu wrote:
>
> > A large backlog by itself is not alarming. A growing one for
> > components deployed to WMF sites is. It indicates insufficient
> > attention is given to ongoing maintenance of projects after they are
> > no longer "actively developed", which
are only changed when the bugs are
closed for whatever reason, which could take years. Creating
additional manual interventions does not work; the process must be
simpler and more efficient.
On Thu, Mar 14, 2019 at 1:23 PM Andre Klapper wrote:
>
> On Tue, 2019-03-12 at 00:29 +0100, John Erling Blad wrote:
Sorry, but I am trying to point out that the process is broken and to give a
few examples of how to fix it.
On Thu, Mar 14, 2019 at 1:20 PM Andre Klapper wrote:
>
> On Thu, 2019-03-14 at 12:35 +0100, John Erling Blad wrote:
> > Blame games does not fix faulty processes.
annoying bugs.
[1] https://www.youtube.com/watch?v=nyOHJ4GR4iU from 32:20
This is like an enormous sinkhole, with people standing on the edge,
warning about the sinkhole. All around, people are saying "we must do
something"! Still the sinkhole slowly grows larger and larger. People
are placing warning signs: "Sinkhole ahead". Others are notifying neighbors
about the growing
What frustrates me the most are
- bugs found by the editor community that have obvious simple fixes,
which aren't acted upon for several years
- new features that aren't fully tested, where you have to answer to the
community about stuff you would rather throw out
- new features and changes that
On Tue, Mar 12, 2019 at 10:29 PM Bartosz Dziewoński wrote:
>
> I get an impression from this thread that the problem is not really the
> size of the backlog, but rather certain individual tasks that sit in
> said backlog rather than being worked on, and which according to John
> are actually
> charge money for?
> Are our customers successfully subsidizing our free (as in beer) software?
> 2- Everything is open-source and as non-profit, there's always resource
> constraint. If it's really important to you, feel free to make a patch and
> the team would be always more than happy to review.
Wikipedia is the core product, and the users are the primary
customers. When a group of core
It seems like some projects simply put everything coming from external
sources into the deep freezer or add "needs volunteer", if they respond at
all. In some cases it could be that the projects are defunct.
On Mon, Mar 11, 2019 at 9:51 PM Stas Malyshev wrote:
>
> Hi!
>
> > In my experience WMF teams
> Also should be on the list: Sometimes bugs have a known fix that isn't
> being rolled out, in favour of a larger more fundamental restructuring
> (demanding even more resources).
Yes, I've seen a lot of cookie licking. It makes it hard to solve even
simple bugs.
The backlog of bugs is pretty large (that is an understatement),
even for bugs with known fixes and available patches. Is there any real
plan to start fixing them? Shall I keep telling the community the bugs
are "tracked"?
/jeblad
It is extremely easy to detect a bot unless the bot operator chooses to make
it hard. Just make a model of how the user interacts with the input
devices, and do anomaly detection. That implies use of JavaScript, though, but
users not using JS are either very dubious or quite well-known. There are
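A sketch of such a model; in practice the events would be collected
client-side in JavaScript, the statistics are shown in Lua for consistency
with the rest of this archive, and all numbers are invented:

-- Model inter-keystroke timings for human editors, then flag sessions
-- whose timing statistics fall far outside the model.
local function meanAndStddev(xs)
  local sum = 0
  for _, x in ipairs(xs) do sum = sum + x end
  local mean = sum / #xs
  local var = 0
  for _, x in ipairs(xs) do var = var + (x - mean)^2 end
  return mean, math.sqrt(var / #xs)
end

-- "Human" baseline (numbers invented for illustration).
local humanMean, humanSd = 0.25, 0.12

local function looksLikeBot(intervals)
  local mean, sd = meanAndStddev(intervals)
  local z = math.abs(mean - humanMean) / humanSd
  -- Bots tend to be both too fast and too regular: large z-score on the
  -- mean, or implausibly small variance.
  return z > 3 or sd < 0.01
end

print(looksLikeBot({0.050, 0.050, 0.051, 0.050}))  -- metronomic: true
print(looksLikeBot({0.31, 0.18, 0.42, 0.22}))      -- human-ish: false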
Those that break the naming scheme *somehow* are 7 extensions
(ArticlePlaceholder (mixed case), DynamicPageListEngine (not extension
name), JsonConfig (not extension name), LinkedWiki (not ext
structure), SemanticScribunto (not extension name), Wikibase Client
(not ext structure), ZeroPortal (not
There are several extensions that diverge from the naming scheme. Some
of them are even referenced as using the scheme while not providing
Lua libs at all. It is a bit weird.
On Fri, Jan 25, 2019 at 7:09 PM Kunal Mehta wrote:
>
>
> Hi,
>
> On
Half a century? 50 years? You have been working for WMDE since 2014.
Perhaps it would be an idea to discuss the naming scheme instead of
making a questionable appeal to authority?
The interesting point is _what_ is gained by adding unrelated character
sequences to names. If some character sequence doesn't
It is a description of how it should be done, which is not according
to the current page. Yes, it is a call for feedback, if I must spell it
out.
On Fri, Jan 25, 2019 at 8:33 AM Thiemo Kreuz wrote:
>
> Is there a question associated with this long email? Is this a call for
> feedback?
>
> Kind
At the Extension:Scribunto/Lua reference manual, in several places,[1]
it is pointed out that the Lua libs should use the form 'mw.ext.NAME'.
This creates visual noise in the code. Any lib included should have an
extension page, thus it has already been given a unique name. In
addition, only the
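To illustrate the visual-noise point with a hypothetical library (the names
'frob' and 'calculate' are invented; only the mw.ext prefix itself is from
the manual, and this assumes the Scribunto environment):

-- Inside a Scribunto module.
local p = {}

function p.demo()
  -- With the documented convention, every call site carries the prefix:
  local a = mw.ext.frob.calculate('foo')

  -- A short local alias (or a top-level name, as argued above) removes
  -- the noise; the extension page already guarantees uniqueness:
  local frob = mw.ext.frob
  local b = frob.calculate('foo')

  return a, b
end

return p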
Tried a couple of times to rewrite this, but it grows out of bounds
anyhow. Seems like it has a life of its own.
There is a book from 2000 by Robert Dale and Ehud Reiter, Building
Natural Language Generation Systems, ISBN 978-0-521-02451-8.
Wikibase items can be rebuilt as Plans from the type statement
much harder, especially if the text is supposed to be readable.
Jumbling sentences together as is commonly done by various botscripts
does not work very well, or rather, it does not work at all.
Using an abstract language as a basis for translations has been
tried before, and it is almost as hard as translating between two common
languages.
There are two really hard problems: the implied references and
the cultural context. An artificial language can get rid of the
implied
T129778
On Fri, Oct 5, 2018 at 3:59 PM Dan Garry wrote:
> On Thu, 4 Oct 2018 at 23:29, John Erling Blad wrote:
>
> > Usually it comes from user errors while using VE. These kinds of errors are
> > quite common, and I asked (several years ago) whether they could be fixed
> in
Wikipedia#Anwendung_von__in_Bildunterschriften
> Today there is also more than one user indefinitely blocked who only
> removed https://de.wikipedia.org/wiki/Benutzer:Entgr%C3%A4ten40
> On Fri, 5 Oct 2018 at 00:29, John Erling Blad <jeb...@gmail.com> wrote:
We have the same in Norwegian, but linking on part of a composite is almost
always wrong. Either you link the whole composite or no part of the
composite. If you link on a part of a composite, then in nearly all cases I
have seen, the link is placed on the wrong term.
Some examples of what
*very much agree with both Amir and Brion*
I've seen the same thing: something is reported as a more or less general
issue, it is then picked up as a task, it is further discussed in a
specific context, and then closed because it does not fit the given context.
But the new context wasn't part of the
This can be done with the special page "AboutTopic" with some additional
logic. It has been discussed at a few projects, but the necessary logic
isn't available. That means the redlink must be created with the Q-id, and
there is no well-defined process for how to clean it up afterwards.
At nnwiki
I guess this is pretty obvious, but when you create numbers for something
generated by an actor (that is, whatever triggers the activation) within that
area, those numbers should be normalized against the number of actors.
There are a whole lot of articles being read in Norwegian from China; does
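The normalization itself is just a division by the actor count; a trivial
sketch with invented numbers:

-- Raw counts divided by the number of actors able to generate them.
local function perActor(events, actors)
  return events / actors
end

-- 10,000 pageviews from a huge population vs. 500 from a tiny one:
print(perActor(10000, 1000000))  -- 0.01 per actor
print(perActor(500,   1000))     -- 0.5 per actor, 50x the rate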
What is the current state? Will some kind of digest be retained?
On Thu, Sep 21, 2017 at 9:56 PM, Gergo Tisza wrote:
> On Thu, Sep 21, 2017 at 6:10 AM, Daniel Kinzler <
> daniel.kinz...@wikimedia.de
> > wrote:
>
> > Yes, we could put it into a separate table. But that
There are two important use cases: one where you want to identify previous
reverts, and one where you want to identify close matches. There are other
ways to do the first than to use a digest, but the digest opens up for
alternate client-side algorithms. The latter would typically be done by some
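A sketch of the first use case, identifying identity reverts by digest; the
hash here is a toy stand-in for a real SHA-1/SHA-256 digest:

-- If a new revision's digest equals the digest of an earlier revision of
-- the same page, the edit restored that exact text.
local function digest(text)
  local h = 0
  for i = 1, #text do h = (h * 31 + text:byte(i)) % 2^32 end
  return h
end

local seen = {}  -- digest -> revision id, per page

local function classify(revId, text)
  local d = digest(text)
  if seen[d] then
    return "revert to revision " .. seen[d]
  end
  seen[d] = revId
  return "new content"
end

print(classify(1, "alpha"))  -- new content
print(classify(2, "beta"))   -- new content
print(classify(3, "alpha"))  -- revert to revision 1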
This project really needs someone in addition to me who has more knowledge
about how to make an extension production-ready, that is, someone who can
be "maintainer" when the project is done.
Anyone? Free cookies!! :D
Good idea, please add the messages on TranslateWiki.
Preferably after some of the proposed fixes.
Thanks!
On Fri, May 20, 2016 at 6:45 PM, Deborah Tankersley <
dtankers...@wikimedia.org> wrote:
> Hello,
>
> The Discovery Team recently added descriptive text to the Wikipedia.org
> page footer in
Hi all,
I wrote an application for an IEG grant on creating a testing environment for
Lua scripts.[1] Perhaps it is interesting for you. It is mainly a tool for
on-wiki testing of scripts, and I'm not sure if it is that interesting
for off-wiki testing.
When I wrote the application I said
-M is more manageable.
On Sun, Mar 13, 2016 at 9:43 PM, Bartosz Dziewoński <matma@gmail.com>
wrote:
> On 2016-03-13 19:49, John Erling Blad wrote:
>
>> A lot of things are much simpler in VisualEditor, but a lot of stuff is
>> harder to do or simply does not work. It
On Sun, Mar 13, 2016 at 8:36 PM, Amir E. Aharoni <
amir.ahar...@mail.huji.ac.il> wrote:
> 2016-03-13 20:49 GMT+02:00 John Erling Blad <jeb...@gmail.com>:
>
> What feels slow may turn out to be fast when the time is actually measured
> (though the _feeling_ of
that it should be fast to do this "thing". Because
of this we have ended up with a lengthy click-click-click (and some more
clickety-click-click-click) to end up with something we at least half the
time must fix with LessVisualEditingWikiText.
John Erling Blad
/grumpy-jeblad
I tried this on a search for "Sør-Aurdal" (a municipality in Norway),
dropped the dash and wrote "sørau", and got a hit on "Søraust-Svalbard
naturreservat" among other things. The topmost hit was "søraurdøl", which
is a demonym for someone from Sør-Aurdal. It seems to me that a spelling
error is
Use Q-ids and get the links from Wikidata.
On Sun, Dec 6, 2015 at 10:49 PM, Purodha Blissenbach <
puro...@blissenbach.org> wrote:
> How about using the API on the target side?
> Purodha
>
>
> On 06.12.2015 18:04, Alex Monk wrote:
>
>> I don't think there is a way to get a database name from an
You will run into problems with transclusions
http://www.w3.org/standards/techs/xmlsig#w3c_all
On Tue, Sep 15, 2015 at 1:54 AM, Platonides wrote:
> On 13/09/15 18:20, Purodha Blissenbach wrote:
>
>> The idea is that third parties can publish texts, such as their
>>
Can you do a follow-up on the replies to the patches?
Is there an open issue on OATH on enwiki?
On Wed, Jan 28, 2015 at 3:11 AM, Tyler Romeo tylerro...@gmail.com wrote:
I don’t usually email the list for this kind of stuff, but if you have some
spare time to test an extension, please check out this.
manifests. So cache manifests won't
help make any improvements.
~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://danielfriesen.name/]
On 2013-11-07 12:19 AM, John Erling Blad wrote:
Can you explain why you use LocalStorage for this? It seems to me like
this is the wrong solution and you should use cache manifests instead.
LocalStorage is a quite limited area for _data_ storage and it will
create problems if we start wasting that space for _code_ storage.
John
As long as it is a separate extension there is no problem, but if you
bundle it in such a way that it is an integral part of the core then
you might get into trouble.
On Mon, Aug 26, 2013 at 9:35 PM, Derric Atzrott
datzr...@alizeepathology.com wrote:
Well if it's a MediaWiki extension, it has to
We don't use votes... ;)
If we forget about the implementation of badges and discuss the
contributions: there is no single correct way to weight
contributions. Assume some user A writes N characters as a continuous
string, and some user B writes the same number of characters spread
out over a
I think Lydia was referring to the votes on the bugzilla bug page, 14 votes
so far :)
Once in a while my mobile data connection hits the limit and then goes
into a 64 kb/s mode. When that happens, pages can't be delivered (or are
delivered with a very long delay) over the HTTP protocol, but
HTTPS keeps going.
Anyone with an idea what goes on and why HTTPS seems to work while
HTTP
In my opinion the only thing that is going to work in the short term is a
guided rule-based system. We need that to be able to reuse values from
Wikidata in running text. That is, template text must be transformed
according to gender, plurality, etc., but the values must also be
adjusted to
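A minimal sketch of what a guided rule could look like; the rule table,
feature names, and Norwegian forms are all invented for illustration:

-- A template slot is transformed according to grammatical features of the
-- Wikidata value before being spliced into running text.
local rules = {
  occupation = {
    -- feature key -> surface form
    { gender = "male",   form = "skuespiller" },
    { gender = "female", form = "skuespillerinne" },
  },
}

local function render(slot, features)
  for _, rule in ipairs(rules[slot] or {}) do
    local ok = true
    for k, v in pairs(rule) do
      if k ~= "form" and features[k] ~= v then ok = false end
    end
    if ok then return rule.form end
  end
  return "[" .. slot .. "?]"   -- no rule matched; flag for a human
end

print(render("occupation", { gender = "female" }))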
A UI showing both edit links all the time is a much better way to do
it. I had the same discussion 20 years ago, and as far as I know
nothing has changed when it comes to hidden user interactions that
suddenly (and with no explanation) change the interaction and take
the user by surprise.
On Wed, Jul 17, 2013 at 4:42 AM, John Erling Blad jeb...@gmail.com
wrote:
It doesn't matter, because the correct behavior will accumulate over
time. You don't try to fix linkage just because you have one single
observed behavior; you collect and correlate behavior over time and
use several, perhaps
Send out a mw-previous-referrer on the disambiguation page and echo
it back from the browser. It could be done through a cookie. On the next
page it must be removed, either in the server or in the browser. The
server can simply strip off any incoming cookie, but I am not sure if this
will work in the squids
There is a lot of work on why and how, if anyone bothers digging it
up. Short story: it is only a matter of the number of observations.
I don't think global filters should be enabled unless they can be
overridden locally. The only exception would be if all language-specific
tests can be disabled, or are verified to not be in use for the
specific filter.
John
On Tue, Jul 9, 2013 at 2:03 AM, hoo h...@online.de wrote:
Here's a copy
As the dumb ass trying to merge a lot of the code last year at
Wikidata, I would say: stop bitching about whether to make tests or not.
Any tests are better than no tests; without tests, merging code is pure
gambling. Yes, you can create a small piece of code and be fairly sure
that your own code
Test coverage is not a quality metric, it is a quantity metric. That
is, it says something about the amount of tests. The coverage can say
something about the overall code, given that the code under test in fact
reflects the remaining code. As the code under test usually is better
than the remaining
Can you give any examples of real code that became less clear after it
was rewritten for testability, and explain why it is worse after the
rewrite?
On Tue, Jun 4, 2013 at 7:20 PM, Marc A. Pelletier m...@uberbox.org wrote:
On 06/04/2013 12:57 PM, Nikolas Everett wrote:
The thing is quite a few
It's not that difficult to read through the text before you commit, right?
At least try to remove the most obvious spelling errors.
Perhaps I just find too much BS; I should have given a -1 or even a -2
in some cases.
John
On Wed, Jan 2, 2013 at 8:32 AM, MZMcBride z...@mzmcbride.com wrote:
https://www.mediawiki.org/wiki
I'm looking for updated schedules for the rollout of ResourceLoader, Lua
and other extensions/features. Is there any centralized page for such
stuff?
In my opinion, parsing .js and .css as wikitext is a broken idea.
Add some kind of pragmas to the page and strip them off in the ResourceLoader.
John
On Fri, Oct 19, 2012 at 12:44 AM, MZMcBride z...@mzmcbride.com wrote:
Krinkle wrote:
On Oct 18, 2012, at 5:04 AM, Daniel Kinzler
On Thu, Oct 18, 2012 at 10:08 AM, Nikola Smolenski smole...@eunet.rs wrote:
On 18/10/12 09:25, Steven Walling wrote:
On Wed, Oct 17, 2012 at 11:46 PM, Nikola Smolenskismole...@eunet.rs
wrote:
The need for such bots should cease after Wikidata is fully deployed. I
suggest to interested
task.
John
On Thu, Oct 18, 2012 at 11:14 AM, Nikola Smolenski smole...@eunet.rs wrote:
On 18/10/12 11:06, John Erling Blad wrote:
well-formed text automatically. One of the more common problems is
names that use different inflection rules depending on context and how they
are written
For those interested in this type of text synthesis: it can be done by
using finite-state automata and transducers (FSTs). The simplest way
to make them is by cross-compiling into Lua from some other known
form.
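As an illustration, a toy transducer of the kind such a cross-compiler could
emit; the lexicon and the '+def' tag handling are invented:

-- Maps a lemma plus a feature tag to an inflected form, one character
-- transition at a time; "+def" appends a Norwegian definite suffix.
local fst = {
  -- state -> input symbol -> { next = state, out = output string }
  [1] = { ["h"] = { next = 1, out = "h" }, ["u"] = { next = 1, out = "u" },
          ["s"] = { next = 1, out = "s" }, ["+"] = { next = 2, out = "" } },
  [2] = { ["d"] = { next = 3, out = "" } },
  [3] = { ["e"] = { next = 4, out = "" } },
  [4] = { ["f"] = { next = 5, out = "et" } },  -- "+def" -> suffix "-et"
}

local function transduce(input)
  local state, out = 1, {}
  for ch in input:gmatch(".") do
    local t = fst[state] and fst[state][ch]
    if not t then return nil end  -- no path: reject
    table.insert(out, t.out)
    state = t.next
  end
  return table.concat(out)
end

print(transduce("hus+def"))  -- "huset" ("the house")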
John
On Thu, Oct 18, 2012 at 2:10 PM, Denny Vrandečić
denny.vrande...@wikimedia.de
All methods that give you an edit token should work.
John
On Fri, Sep 28, 2012 at 4:12 PM, Carl (CBM) cbm.wikipe...@gmail.com wrote:
I know the bot people love changes, so here is a bit more!
The token handling through use of the special URL argument gettoken
and the special itemtoken has died in flames. Use edittoken from
action=tokens
(http://wikidata-test-repo.wikimedia.de/w/api.php?action=tokens&type=edit&format=jsonfm)
and
of entity is in there.
https://bugzilla.wikimedia.org/show_bug.cgi?id=40407
https://bugzilla.wikimedia.org/show_bug.cgi?id=40408
John Erling Blad
Please note that this is a breaking change for bots!
It has been decided that the module wbsetitem will change from the present
short form in the JSON structure to a long form. Exactly how the
long form will look is still a bit open, but it will be closer to the
JSON output format. The changes also make
Your point (a), implementing a wikiSummarizer widget which will give the
summary of the page being read by the user, could be extremely useful for
hover/help-bubble functionality, where bubbles with small explanations
are created within external articles. Such functionality implies creating an
I like this idea; it solves a lot of problems.
John
On Mon, Mar 26, 2012 at 4:45 PM, Daniel Kinzler dan...@brightbyte.de wrote:
Hi all. I have a bold proposal (read: evil plan).
To put it briefly: I want to remove the assumption that MediaWiki pages
always contain wikitext. Instead, I
Federated login is not about exclusive login systems; it's really the
complete opposite. It's about how to pass credentials in a secure fashion
between systems.
Jeblad on a not-so-smart-phone
On 21. mars 2012 15.45, John Du Hart compwhi...@gmail.com wrote:
On Wed, Mar 21, 2012 at 10:01 AM, Andreas
Use open formats; closed formats and open content don't mix.
John
So, since we're discussing SAML and OAuth and OpenID, and such, I
should mention this:
http://simplesamlphp.org/
It supports SAML, OpenID, OAuth, it's extendable and it supports
multiple backends (LDAP, MySQL, etc). It is also localizable.
- Ryan
That one is interesting for the
Just as an idea, would it be possible for the Wikimedia Foundation to
establish some kind of joint project with the SimpleSAMLphp folks?
They are basically Uninett, which is FEIDE, which is the group that
handles identity federation for lots of the Norwegian schools, colleges
and universities. The
Exporting authentication from MediaWiki by OAuth is probably both
acceptable and interesting, even if OAuth is said to give rather
weak security. It could be that people are a bit confused about OAuth
vs OpenID.
In some of the projects where I've been involved, the problem is not
about exporting
widgets:
http://webaim.org/blog/web-accessibility-preferences-are-for-sissies/
If you want, I can even dig up the full discussion I had with CCA that ended
in them dropping the text resize widget from their wiki's design ;).
On Thu, 01 Mar 2012 15:19:48 -0800, John Erling Blad jeb...@gmail.com
to the user browsing the entire
Internet. Accessibility is not fixed if the user has to change a preference
at every single website they visit.
On Fri, 02 Mar 2012 00:56:27 -0800, John Erling Blad jeb...@gmail.com
wrote:
You can not design for one size fits all when it comes to
accessibility
What about adding a couple of style markers on the body tag? For
example, classes for high-contrast, avoid-red-green,
avoid-green-blue, avoid-red-yellow... or?
Or perhaps as additional styles straight from the MediaWiki space; that
way the accessibility issues can be crowdsourced?
There was also a
In some cases it would be better to link on article ids than on their
names, something like
http://en.wikipedia.org/aid/123456
One example is as a link to an article in Wikipedia from a tweet posted
through the Twitter API.
John
On Sat, Feb 18, 2012 at 1:51 PM, Bináris wikipo...@gmail.com wrote:
http://en.wikipedia.org/wiki?curid=2312711
Although I don't understand what would be the benefit of doing that.
Petr Onderka
[[User:Svick]]
it would be better if there was a simple way to generate
short URLs that also identified Wikipedia as such.
John
On Sun, Feb 19, 2012 at 12:38 AM, [[w:en:User:Madman]]
madman.enw...@gmail.com wrote:
On Sat, Feb 18, 2012 at 8:36 AM, John Erling Blad jeb...@gmail.com wrote:
Yes I know you can do
I have no idea about the schema changes, but to choose a digest for
detection of identity reverts is pretty simple. The really difficult
part is to choose a locality-sensitive hash or fingerprint that works
for very similar revisions with a lot of content.
I would propose that the digest is stored
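For the locality-sensitive part, a simhash-style sketch; the per-word hash
is a toy, and the construction is generic rather than a specific proposal:

-- Each word votes on 32 fingerprint positions; similar revisions differ
-- in few positions (small Hamming distance).
local BITS = 32

local function wordHash(w)
  local h = 5381
  for i = 1, #w do h = (h * 33 + w:byte(i)) % 2^BITS end
  return h
end

local function simhash(text)
  local v = {}
  for i = 1, BITS do v[i] = 0 end
  for w in text:gmatch("%w+") do
    local h = wordHash(w)
    for i = 1, BITS do
      local bit = h % 2
      v[i] = v[i] + (bit == 1 and 1 or -1)
      h = math.floor(h / 2)
    end
  end
  local fp = {}
  for i = 1, BITS do fp[i] = v[i] > 0 and 1 or 0 end
  return fp
end

local function hamming(a, b)
  local d = 0
  for i = 1, BITS do if a[i] ~= b[i] then d = d + 1 end end
  return d
end

local r1 = simhash("the quick brown fox jumps over the lazy dog")
local r2 = simhash("the quick brown fox jumped over the lazy dog")
print(hamming(r1, r2))  -- small: near-identical revisions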
If someone reports something (s)he thinks is an error, even if the
wording seems insulting, there is usually something important in the
report. Don't attack what you think is wrong about the report; try to
figure out what's the root cause behind it, neglecting the insults.
Something happens and
Much of the problem is simply a lack of tools to handle the process in
a timely manner, tools that also assure that something will happen and
that are sufficient to stop any legal actions.
Why not add a magic word __NODOWNLOADLINK__, so a template about
deleting an image due to copyright will sort of
People, stop the flame war and try to design a solution! What would avoid
the problem and still make it possible for the Commons admins to do
their job properly? Everybody knows they screw up from time to time;
it's no help if you keep yelling at them. Been there, done that, didn't
help a bit.
Why
Same behaviour at no.wikipedia. Sometimes it even seems like the
identical article entry is removed from the search result.
A quick fix could be to use word frequency or tf-idf from the
language, or perhaps from the page you're at when you do the search.
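A miniature of that quick fix; the two-document corpus is a stand-in:

-- Score search hits by tf-idf so that rare, specific words ("søraurdøl")
-- outrank fragments of common compounds.
local docs = {
  ["Sør-Aurdal"] = "sør aurdal kommune valdres",
  ["Søraust-Svalbard naturreservat"] = "søraust svalbard naturreservat",
}

local function tf(term, doc)
  local n, total = 0, 0
  for w in doc:gmatch("%S+") do
    total = total + 1
    if w == term then n = n + 1 end
  end
  return total > 0 and n / total or 0
end

local function idf(term)
  local numDocs, df = 0, 0
  for _, doc in pairs(docs) do
    numDocs = numDocs + 1
    if doc:find(term, 1, true) then df = df + 1 end
  end
  if df == 0 then return 0 end
  return math.log(numDocs / df)
end

local function score(term, doc)
  return tf(term, doc) * idf(term)
end

-- "aurdal" is rare and specific, so the exact-topic page wins:
for title, doc in pairs(docs) do
  print(title, score("aurdal", doc))
end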
John
Use jsMsg or the new version of it, and deliver images as data URLs
in the CSS. If you use ordinary images in a scripted HTML thingy, they
will load a long time after the DOM is generated. Most social networks
can be accessed with a single URL, but there are a lot of sites right now.
We have 36 at no.wp