Thanks for your reply.

I'm not entirely sure how the caching would work, though.  Are you 
describing a situation in which one requests exactly the same data 
again (which would be pointless), or one in which one retrieves 
different data using the same parameters (like stepping through the 
articles in a category)?
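If you mean the former, I take it you have in mind something along 
these lines (only a rough sketch in Python 3 using the standard 
library; the endpoint, category and parameter values are placeholders, 
not taken from my bot):

    import urllib.parse
    import urllib.request

    API = "https://en.wikipedia.org/w/api.php"   # placeholder endpoint

    # Put smaxage in the URL and repeat the request with a
    # byte-for-byte identical URL each time, so an intermediate
    # caching proxy can answer the repeat from its cache.
    params = urllib.parse.urlencode({
        "action": "query",
        "list": "categorymembers",
        "cmtitle": "Category:Example",   # placeholder category
        "format": "json",
        "smaxage": "3600",               # cache the result for an hour
    })
    url = API + "?" + params

    with urllib.request.urlopen(url) as response:
        data = response.read()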

Every login and edit request is a POST.  The logout routine can use 
GET but is never repeated.  Download routines can use GET.  I'm a bit 
dubious about whether that's a good enough reason for me to dump 
additional code and a command-line option into my bot, though.
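For concreteness, the two request styles I mean look roughly like this 
(again only a sketch in Python 3 with the standard library; the page 
title, text and edit token are placeholders rather than real values 
from my bot):

    import urllib.parse
    import urllib.request

    API = "https://en.wikipedia.org/w/api.php"   # placeholder endpoint

    # Edit: parameters travel in the POST body, so nothing
    # state-changing ends up in the URL or in proxy logs.
    edit_body = urllib.parse.urlencode({
        "action": "edit",
        "title": "Sandbox",            # placeholder page
        "text": "test edit",           # placeholder content
        "token": "EDIT-TOKEN-HERE",    # placeholder token
        "format": "json",
    }).encode("utf-8")
    urllib.request.urlopen(API, data=edit_body)

    # Download/read: parameters can go in the URL as a plain GET.
    query = urllib.parse.urlencode({
        "action": "query",
        "prop": "revisions",
        "rvprop": "content",
        "titles": "Sandbox",           # placeholder page
        "format": "json",
    })
    urllib.request.urlopen(API + "?" + query)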

Richard

-----Original Message-----
From: Roan Kattouw <roan.katt...@gmail.com>
To: MediaWiki API announcements & discussion 
<mediawiki-api@lists.wikimedia.org>
Cc: richardcav...@mail.com
Sent: Sun, Mar 27, 2011 2:22 am
Subject: Re: [Mediawiki-api] Is there an advantage to putting 
parameters in the URL?

2011/3/26  <richardcav...@mail.com>:
> I can see that there are advantages to putting parameters into POST
> data.  Are there any advantages to a bot putting parameters into the
> URL?
The only advantage I can think of is caching. If you're repeating the
same request a number of times and want any caching proxies between
you and the server (e.g. Wikimedia's Squid servers) to cache the
result for you, you can put &smaxage=3600 (or any number of seconds)
in the URL. This only works for GET requests and only when the URLs
are exactly the same both times.

Roan Kattouw (Catrope)



_______________________________________________
Mediawiki-api mailing list
Mediawiki-api@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-api
