> It seems to me that the -O option has wget touching the file
> which wget then detects.
Close enough. With "-O", Wget opens the output file before it does
any transfers, so when the program gets serious about the transfer, the
file will exist, and that will confuse the "-nc" processing.
Running this command:
rm *.jpg ; wget -O usscole_90.jpg -nc --random-wait
--referer=http://www.pianoladynancy.com/recovery_usscole.htm --
http://www.pianoladynancy.com/images/usscole_90.jpg
generates the error:
File `usscole_90.jpg' already there; not retrieving.
However:
rm *.jp
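A minimal sketch (not Wget's actual code) of the interaction Steven describes: with -O the output file is created before any transfer starts, so the later -nc existence check always fires.

  #include <stdio.h>
  #include <sys/stat.h>

  static int
  file_exists (const char *name)
  {
    struct stat st;
    return stat (name, &st) == 0;
  }

  int
  main (void)
  {
    const char *output = "usscole_90.jpg";
    FILE *fp = fopen (output, "wb");  /* -O: output opened up front */
    if (file_exists (output))         /* -nc: the file is already there... */
      {
        printf ("File `%s' already there; not retrieving.\n", output);
        return 0;                     /* ...so the transfer is skipped */
      }
    if (fp)
      fclose (fp);
    return 0;
  }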
Hello,
Sometimes passwords contain @’s. When they do,
it seems to cause wget problems if the URL has the password encoded in it (for
example, ftp://username:[EMAIL PROTECTED]@/directory).
The same sort of URL encoding works fine in wput.
Thank you for the fine software,
Lar
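The usual workaround is to percent-encode the "@" inside the password as %40, so that only the real user/host separator remains, e.g. (credentials and host illustrative):

  wget ftp://username:p%40ssword@ftp.example.com/directory/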
"Beni Serfaty" <[EMAIL PROTECTED]> writes:
> I Think I found a bug when STANDALONE is defined on hash.c
> I hope I'm not missing something here...
Good catch, thanks. I've applied a slightly different fix, appended
below.
By the way, are you using hash.c in
I think I found a bug when STANDALONE is defined on hash.c. I hope I'm not missing something here... (Please cc me the replies)
@@ -63,7 +63,7 @@ if not enough memory */
 # define xfree free
 # define countof(x) (sizeof (x) / sizeof ((x)[0]))
-# define TOLOWER(x
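For context, a sketch of the usual STANDALONE idiom, assuming the common shape; only the xfree and countof lines appear in the fragment above, the rest is my guess. With -DSTANDALONE, hash.c substitutes plain libc allocators for Wget's wrappers so it can be built as a self-contained test program:

  #ifdef STANDALONE
  # include <stdio.h>
  # include <stdlib.h>
  # define xmalloc malloc    /* assumed: use libc allocators directly */
  # define xrealloc realloc
  # define xfree free
  # define countof(x) (sizeof (x) / sizeof ((x)[0]))
  #endif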
Hi folks,
I think I have found a bug in wget where it fails to change the working
directory when retrying a failed ftp transaction. This is wget 1.10.2 on
FreeBSD-6.0/amd64.
I was trying to use wget to get files from a broken ftp server which
occasionally sends garbled responses, causing
Steven M. Schweda antinode.org> writes:
> > [...] wget version 1.9.1
>
>You might try it with the current version (1.10.2).
>
> http://www.gnu.org/software/wget/wget.html
>
Oh, man - I can't believe I missed that. All better now! Thank you.
Greg
> [...] wget version 1.9.1
You might try it with the current version (1.10.2).
http://www.gnu.org/software/wget/wget.html
Steven M. Schweda (+1) 651-699-9818
382 South Warwick Street[EM
option. However,
when it goes to convert the links, as specified by the -k option, it looks for
the default output filename "processcandquicksearch" rather than the filename
that I specified with the -O option.
This seems to be a bug, though I can work around it with...
wget -k
mv
(Note: I am using wget version 1.9.1)
Best regards,
Greg McCann
hi
I've just posted my comments on the mailing list [1]. Wget doesn't behave
the right way if I use the --output-document option and
--timestamping together. Wget tries to compare the URL-derived file with the
original file instead of with the --output-document file.
Why I got to this problem was be
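If I read the report right, the failure mode looks like this (URL and file names illustrative):

  wget -N -O saved.html http://example.com/page.html
  # the time-stamp comparison is made against the URL-derived name
  # "page.html", never against "saved.html", so the page is fetched
  # again even when saved.html is up to date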
Hello all,
I discovered a buffer overflow bug in the base64_encode() function,
located at line 1905 in file src\utils.c. Note that this bug is in the
latest version of the program (version 1.10.2). The bug appears to be that
the function is assuming that the input data is a size that is an even
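The report is cut off, but presumably ends "even multiple of three". Under that assumption, a minimal sketch of safe output sizing for a base64 encoder: every started 3-byte input group expands to 4 output characters, so the buffer must be sized by rounding the input length up, not down.

  #include <stddef.h>

  /* Bytes needed to base64-encode N input bytes, including the
     terminating NUL: 4 output chars per *started* 3-byte group. */
  static size_t
  base64_size (size_t n)
  {
    return 4 * ((n + 2) / 3) + 1;
  }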
wget -x -O images/logo.gif
http://www.google.co.uk/intl/en_uk/images/logo.gif
It worked for me.
Try it after "rm -rf images".
That was why it worked... I had an images directory already created.
Should have deleted it before I tried.
Frank
>From Frank McCown:
> wget -x -O images/logo.gif
> http://www.google.co.uk/intl/en_uk/images/logo.gif
>
> It worked for me.
Try it after "rm -rf images".
alp $ wget -x http://alp/test.html -O testxxx/test.html
testxxx/test.html: no such file or directory
alp $ wget -x -O testxxx/test.html h
I wouldn't call it a bug. While it may not be well documented (which
would not be unusual), "-x" affects URL-derived directories, not
user-specified directories.
Presumably Wget could be modified to handle this, but my initial
reaction is that it's not unreasonabl
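A workaround consistent with that reading, not from the thread itself: create the user-specified directory yourself before invoking Wget.

  mkdir -p images && wget -O images/logo.gif \
    http://www.google.co.uk/intl/en_uk/images/logo.gif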
Chris,
I think the problem is you don't have the URL last. Try this:
wget -x -O images/logo.gif
http://www.google.co.uk/intl/en_uk/images/logo.gif
It worked for me.
Frank
Chris Hills wrote:
Hi
Using wget-1.10.2.
Example command:-
$ wget -x http://www.google.co.uk/intl/en_uk/images/log
Hi
Using wget-1.10.2.
Example command:-
$ wget -x http://www.google.co.uk/intl/en_uk/images/logo.gif -O
images/logo.gif
images/logo.gif: No such file or directory
wget should create the directory images/.
wget --help shows:-
  -x,  --force-directories   force creation of directories.
From: Hrvoje Niksic
> [...] On Unix-like FTP servers, the two methods would
> be equivalent.
Right. So I resisted temptation, and kept the two-step CWD method in
my code for only a VMS FTP server. My hope was that some one would look
at the method, say "That's a good idea", and change the "
[EMAIL PROTECTED] (Steven M. Schweda) writes:
>> and adding it fixed many problems with FTP servers that log you in
>> a non-/ working directory.
>
> Which of those problems would _not_ be fixed by my two-step CWD for
> a relative path? That is: [...]
That should work too. On Unix-like FTP serv
From: Hrvoje Niksic
> Prepending is already there,
Yes, it certainly is, which is why I had to disable it in my code for
VMS FTP servers.
> and adding it fixed many problems with
> FTP servers that log you in a non-/ working directory.
Which of those problems would _not_ be fixed by my t
Daniel Stenberg <[EMAIL PROTECTED]> writes:
> On Fri, 25 Nov 2005, Steven M. Schweda wrote:
>
>> Or, better yet, _DO_ forget to prepend the trouble-causing $CWD to
>> those paths.
>
> I agree. What good would prepending do?
Prepending is already there, and adding it fixed many problems with
FTP
On Fri, 25 Nov 2005, Steven M. Schweda wrote:
Or, better yet, _DO_ forget to prepend the trouble-causing $CWD to those
paths.
I agree. What good would prepending do? It will most definitely add problems
such as those Steven describes.
--
-=- Daniel Stenberg -=- http://daniel.haxx
From: Hrvoje Niksic
> Also don't [forget to] prepend the necessary [...] $CWD
> to those paths.
Or, better yet, _DO_ forget to prepend the trouble-causing $CWD to
those paths.
As you might recall from my changes for VMS FTP servers (if you had
ever looked at them), this scheme causes no en
Hrvoje Niksic <[EMAIL PROTECTED]> writes:
> That might work. Also don't prepend the necessary prepending of $CWD
> to those paths.
Oops, I meant "don't forget to prepend ...".
Mauro Tortonesi <[EMAIL PROTECTED]> writes:
> Hrvoje Niksic wrote:
>> Arne Caspari <[EMAIL PROTECTED]> writes:
>>
>> I believe that CWD is mandated by the FTP specification, but you're
>> also right that Wget should try both variants.
>
> i agree. perhaps when retrieving file A/B/F.X we should try
Thank you all for your very fast response. As a further note: When this
error occurs, wget bails out with the following error message:
"No such directory foo/bar".
I think it should instead be "Could not access foo/bar: Permission
denied" or similar in such a situation.
/Arne
Mauro Tortones
Hrvoje Niksic wrote:
Arne Caspari <[EMAIL PROTECTED]> writes:
I believe that CWD is mandated by the FTP specification, but you're
also right that Wget should try both variants.
i agree. perhaps when retrieving file A/B/F.X we should try to use:
GET A/B/F.X
first, then:
CWD A/B
GET F.X
if t
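Spelled out as FTP protocol commands, a sketch of the two strategies Mauro describes; the actual retrieval verb is RETR, and server replies are omitted:

  RETR A/B/F.X       one command, path resolved by the server

versus:

  CWD A/B
  RETR F.X           change directory first, then retrieve

The first form never issues the CWD that fails on the protected directories in Arne's report.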
Arne Caspari <[EMAIL PROTECTED]> writes:
> When called like:
> wget user:[EMAIL PROTECTED]/foo/bar/file.tgz
>
> and foo or bar is a read/execute protected directory while file.tgz is
> user-readable, wget fails to retrieve the file because it tries to CWD
> into the directory first.
>
> I think th
Hello,
current wget seems to have the following bug in the ftp retrieval code:
When called like:
wget user:[EMAIL PROTECTED]/foo/bar/file.tgz
and foo or bar is a read/execute protected directory while file.tgz is
user-readable, wget fails to retrieve the file because it tries to CWD
into the
://www.kpn.com/
Looks like a bug?
Cheers,
Mark
--
Pors BV
Internet Projects
"Schatzman, James (Mission Systems)" <[EMAIL PROTECTED]> writes:
> I have double checked the wget documentation. There is no mention of
> the "https_proxy" parameter. The manual and sample wgetrc that are
> provided list http_proxy and ftp_proxy - that is all.
This is indeed the solution.
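For the record, the setting works like the documented proxy variables; proxy host and port here are illustrative:

  # in ~/.wgetrc
  https_proxy = http://proxy.example.com:8080/

  # or in the environment
  export https_proxy=http://proxy.example.com:8080/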
I have double checked the wget documentation. There is no mention of the
"https_proxy" parameter. The manual and sample wgetrc that are provided
list http_proxy and ftp_proxy - that is all.
Apparently, the bug is with the documentation, not the applicat
t appears that the fix reported in the 1.10 release
> did not take. Any suggestions?
The bug referred to in the release notes manifested itself
differently: Wget would connect to the proxy server, and request the
https URL using GET. The proxies (correctly) refused to obey this
order, as it would pretty much defeat the purpose of using SSL.
According to the wget release notes for 1.10
"*** Talking to SSL/TLS servers over proxies now actually works.
Previous versions of Wget erroneously sent GET requests for https
URLs. Wget 1.10 utilizes the CONNECT method designed for this
purpose."
However, I have tried versions 1.10, 1.10.1, and
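For reference, the CONNECT method the release notes refer to establishes a tunnel roughly like this (host and port illustrative; the request goes to the proxy, which then relays raw bytes):

  CONNECT secure.example.com:443 HTTP/1.0

  HTTP/1.0 200 Connection established

  ...the TLS handshake and the real GET then travel through the tunnel...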
"Jean-Marc MOLINA" <[EMAIL PROTECTED]> writes:
> Hrvoje Niksic wrote:
>> More precisely, it doesn't use the file name advertised by the
>> Content-Disposition header. That is because Wget decides on the file
>> name it will use based on the URL used, *before* the headers are
>> downloaded. This
Tony Lewis wrote:
> The --convert-links option changes the website path to a local file
> system path. That is, it changes the directory, not the file name.
Thanks, I didn't understand it that way.
> IMO, your suggestion has merit, but it would require wget to maintain
> a list of MIME types and c
Hrvoje Niksic wrote:
> More precisely, it doesn't use the file name advertised by the
> Content-Disposition header. That is because Wget decides on the file
> name it will use based on the URL used, *before* the headers are
> downloaded. This unfortunate design decision is the cause of all
> thes
Jean-Marc MOLINA wrote:
> For example if a PNG image is generated using a "gen_png_image.php" PHP
> script, I think wget should be able to download it if the option
> "--page-requisites" is used, because it's part of the page and it's not
> an external resource, get its MIME type, "image/png", and
"Jean-Marc MOLINA" <[EMAIL PROTECTED]> writes:
> As I don't know anything about wget's sources, I can't tell how it
> works internally, but I guess it doesn't check the MIME types of resources
> linked from the "src" attribute of an "img" element
ext/MIME mappings. So I removed the ".php
to text/html" and got a nice PNG image instead. I don't really know how to
force it not to rename the script but it doesn't really matter.
As I don't know anything about wget's sources, I can't tell how it works
internally, but I gues
I am running a PC version of wget.
===
C:\> wget --version
GNU Wget 1.9
Copyright (C) 2003 Free Software Foundation, Inc.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANT
I saw that the option "-k, --convert-links" makes the links relative to the root directory, not to the directory where you downloaded the pages. For example: if I download a page whose URL is www.pageexample.com, the pages I download go in there. But if I use that option, in the pages the links will link to the r
Tobias Koeck wrote:
done.
==> PORT ... done.
==> RETR SUSE-10.0-EvalDVD-i386-GM.iso ... done.
[ <=> ] -673,009,664 113,23K/s
Assertion failed: bytes >= 0, file retr.c, line 292
This application has requested the Runtime to terminate it in an unusual
way.
done.
==> PORT ... done.
==> RETR SUSE-10.0-EvalDVD-i386-GM.iso ... done.
[ <=> ] -673,009,664 113,23K/s
Assertion failed: bytes >= 0, file retr.c, line 292
This application has requested the Runtime to terminate it in an unusual
way.
Please contact the
Hi,
The following seems to not be expected behavior:
wget --page-requisites --no-clobber --no-directories --no-host-
directories --convert-links http://www.candidagenome.org/cgi-bin/
locus.pl?locus=HWP1
Two of the images on that page do not get downloaded, and then the
links within the pag
That is, there is HTML like this:
Click the following to go to the <a
href="http://www.something.com/junk.asp?thepageIwant=2">next
page</a>.
What I need is for wget to understand that stuff following an "?" in a URL
indicates that it's a distinctly different page, and it should go
recursively retrieve
Begin forwarded message:
From: [EMAIL PROTECTED]
Date: October 4, 2005 4:36:09 AM GMT+02:00
To: [EMAIL PROTECTED]
Subject: failure notice
Hi. This is the qmail-send program at sunsite.dk.
I'm afraid I wasn't able to deliver your message to the following
addresses.
This is a permanent erro
's probably not what
you're looking for.
If you want to have a go at this, look for opt.timestamping in ftp.c.
Hope that helps!
-Original Message-
From: bob stephens [contr] [mailto:[EMAIL PROTECTED]
Sent: Friday, September 30, 2005 10:06 AM
To: [EMAIL PROTECTED]
Subject: possible
Hi WGet folks,
This isn't really a bug I found in the operation of wget, but I think
it is a functionality problem.
I wonder if you can help me. I would like to use wget to mirror an
ftp site - this step seems easy.
BUT, I would like to set it up so that the files on my end are
un-gzipped
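One way to approximate this with standard tools, since Wget itself has no decompress-on-download option that I know of (paths illustrative):

  wget -m ftp://ftp.example.com/pub/mirror/
  find . -name '*.gz' -exec gunzip {} +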
Hello Hrvoje!
On Tuesday, September 20, 2005 at 12:50:41 AM +0200, Hrvoje Niksic wrote:
> "HonzaCh" <[EMAIL PROTECTED]> writes:
>> the thousand separator (space according to my local settings)
>> displays as "á" (character code 0xA0, see attch.)
> Wget obtains the thousand separator from the ope
"HonzaCh" <[EMAIL PROTECTED]> writes:
>>> My localeconv()->thousands_sep (as well as many other struct
>>> members) turns out to be the empty string ("") (MSVC6.0).
>>
>> How do you know? I mean, what program did you use to check this?
>
> My quick'n'dirty one. See the source below.
Your source neglects
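Presumably the missing piece is a setlocale() call: without it, localeconv() describes the default "C" locale, where thousands_sep is "". A sketch of the corrected quick'n'dirty test:

  #include <stdio.h>
  #include <locale.h>

  int
  main (void)
  {
    setlocale (LC_ALL, "");   /* adopt the user's locale first */
    struct lconv *lc = localeconv ();
    printf ("thousands_sep = \"%s\"\n", lc->thousands_sep);
    return 0;
  }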
"HonzaCh" <[EMAIL PROTECTED]> writes:
> Latest version (1.10.1) turns up a UI bug: the thousand separator
> (space according to my local settings) displays as "á" (character
> code 0xA0, see attch.)
>
> Although it does not affect the primary functio
Latest version (1.10.1) turns up a UI bug: the thousand separator
(space according to my local settings) displays as "á" (character code
0xA0, see attch.)
Although it does not affect the primary function of WGET, it looks quite
ugly.
Env.: Win2k Pro/Czech (CP852 for console apps,
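That byte value fits the symptom: in the Windows ANSI code page the locale supplies 0xA0, a non-breaking space, but the console uses CP852, where the same byte renders as "á".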
Hello!
I'm writing because I've found a bug in the current version of
wget (1.10.1). I've tried to fix it but it has proven to be too much for
me!
The bug is this: If you use -N and -O together, wget does not
behave properly. Wget will always decide to download the r
Daniel Stenberg <[EMAIL PROTECTED]> writes:
> On Fri, 26 Aug 2005, Hrvoje Niksic wrote:
>
>> + /* The OpenSSL library can handle renegotiations automatically, so
>> + tell it to do so. */
>> + SSL_CTX_set_mode (ssl_ctx, SSL_MODE_AUTO_RETRY);
>> +
>
> Just wanted to make sure that you are aw
On Fri, 26 Aug 2005, Hrvoje Niksic wrote:
+ /* The OpenSSL library can handle renegotiations automatically, so
+ tell it to do so. */
+ SSL_CTX_set_mode (ssl_ctx, SSL_MODE_AUTO_RETRY);
+
Just wanted to make sure that you are aware that this option is only available
in OpenSSL 0.9.6 or
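A guard along these lines would cover older libraries (illustrative; OpenSSL encodes version 0.9.6 as 0x00906000L):

  #if OPENSSL_VERSION_NUMBER >= 0x00906000L
    /* SSL_MODE_AUTO_RETRY is available since OpenSSL 0.9.6. */
    SSL_CTX_set_mode (ssl_ctx, SSL_MODE_AUTO_RETRY);
  #endif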
Thanks for the report; I've applied this patch:
2005-08-26 Jeremy Shapiro <[EMAIL PROTECTED]>
* openssl.c (ssl_init): Set SSL_MODE_AUTO_RETRY.
Index: openssl.c
===
--- openssl.c (revision 2063)
+++ openssl.c (working c
I believe I've encountered a bug in wget. When using https, if the
server does a renegotiation handshake wget fails trying to peek for
the application data. This occurs because wget does not set the
openssl context mode SSL_MODE_AUTO_RETRY. When I added the line:
SSL_CTX_set_mode (ss
A few comments about the bug tracker saga...
roundup is a really cool piece of software, but it seems that its developers
don't really give a damn about backward compatibility and painless upgrades:
http://roundup.sourceforge.net/doc-0.8/upgrading.html
(well, I can't really blame
Hi wget list!
Is it intended that
wget -P"d:\goog" "http://www.google.com/";
works, whereas
wget -P"d:\goog\" "http://www.google.com/";
does give the error message
wget: missing URL
?
Running wget 1.10 on Windows XP.
Cheers
Jens
All patches are against wget 1.10.
Please cc me on all responses as I am not subscribed to this list.
FIRST BUG
There is a bug in http.c.
When connecting by way of proxy & https, if digest authentication is
necessary, then the first connection attempt fails and we go to
retry_with_auth.
Hello,
giuseppe wrote a patch for 1.10.1.beta1. Full report can be viewed here:
http://bugs.debian.org/319088
Forwarded message
> From: giuseppe bonacci <[EMAIL PROTECTED]>
> Reply-To: giuseppe bonacci <[EMAIL PROTECTED]>,
> [EMAIL PROTECTED]
>
Using Wget 1.10:
wget -np -p -w 2 -r -l 0 http://www.cs.odu.edu/~mln/lazy/
results in http://www.cs.odu.edu/~mln/lazy/index.html being downloaded
and saved twice. The final number of downloaded files is 1 file too many.
From the wget output:
-
Jogchum Reitsma <[EMAIL PROTECTED]> writes:
> I'm not sure it's a bug, but the behaviour described below seems strange
> to me, so I thought it was wise to report it:
Upgrade to Wget 1.10 and the problem should go away. Earlier versions
don't handle files larger than 2GB properly.
Hello,
I'm not sure it's a bug, but the behaviour described below seems strange to
me, so I thought it was wise to report it:
I'm trying to get a Suse 9.3 ISO from sunsite.informatik.rwth-aachen.de,
a file that is 4383158 KB according to the FTP-listing. wget gets about
2.4 GB, th
> => `SUSE-9.3-Eval-DVD.iso'
> Resolving chuck.ucs.indiana.edu... 156.56.247.193
> Connecting to chuck.ucs.indiana.edu[156.56.247.193]:21... connected.
Please upgrade to Wget 1.10, which has this bug fixed.
[EMAIL PROTECTED]:~/Download/Linux> wget -c ftp://chuck.ucs.indiana.edu/linux/suse/suse/i386/9.3/iso/SUSE-9.3-Eval-DVD.iso
--09:55:03-- ftp://chuck.ucs.indiana.edu/linux/suse/suse/i386/9.3/iso/SUSE-9.3-Eval-DVD.iso
=> `SUSE-9.3-Eval-DVD.iso'
Resolving chuck.ucs.indiana.edu... 156.56
Abdurrahman ÇARKACIOĞLU <[EMAIL PROTECTED]> writes:
>>It's already in the repository.
>
> I think you forgot to put the -DHAVE_SELECT statement
> into makefile.src.mingw at
> http://svn.dotsrc.org/repo/wget/branches/1.10/windows/.
>
> Am I right ?
That was published in a separate patch -- specificall
Title: RE: RE: Mingw bug ?
-Original Message-
From: Hrvoje Niksic [mailto:[EMAIL PROTECTED]]
Sent: Sat 02.07.2005 16:00
To: Abdurrahman ÇARKACIOĞLU
Cc: wget@sunsite.dk
Subject: Re: RE: Mingw bug ?
>> Will you consider the patch for future release of Wget.
>It&
Abdurrahman ÇARKACIOĞLU <[EMAIL PROTECTED]> writes:
> Now, it works. Thanks a lot.
>
> But I want to understand what is going on ? Was it a bug ?
It was a combination of two Wget bugs, one in the actual code and the other
in the MinGW configuration.
Wget 1.9.1 and earlier used to close con
Title: RE: Mingw bug ?
Now it works. Thanks a lot.
But I want to understand what was going on. Was it a bug?
Will you consider the patch for a future release of Wget?
-Original Message-
From: Hrvoje Niksic [mailto:[EMAIL PROTECTED]]
Sent: Sat 02.07.2005 14:06
To
I believe this patch should fix the problem. Could you apply it and
let me know if it fixes things for you?
2005-07-02 Hrvoje Niksic <[EMAIL PROTECTED]>
* http.c (gethttp): Except for head_only, use skip_short_body to
skip the non-20x error message before leaving gethttp.
Ind
Abdurrahman ÇARKACIOĞLU <[EMAIL PROTECTED]> writes:
> Here are the results..
> ---request begin---
> GET /images/spk.ico HTTP/1.0
> Referer: http://www.spk.gov.tr/
> User-Agent: Wget/1.10
> Accept: */*
> Host: www.spk.gov.tr
> Connection: Keep-Alive
>
> ---request end---
> HTTP request sent, await
ndmailto:[EMAIL PROTECTED]
Sent: Saturday, July 02, 2005 1:04 AM
To: Abdurrahman ÇARKACIOĞLU
Cc: wget@sunsite.dk
Subject: Re: Mingw bug ?
"A. Carkaci" <[EMAIL PROTECTED]> writes:
> ---request begin---
> GET /images/spk.ico HTTP/1.0
> Referer: http://www.spk.gov.tr/
Hrvoje Niksic xemacs.org> writes:
>
> "A. Carkaci" spk.gov.tr> writes:
>
> > ---request begin---
> > GET /images/spk.ico HTTP/1.0
> > Referer: http://www.spk.gov.tr/
> > User-Agent: Wget/1.10
> > Accept: */*
> > Host: www.spk.gov.tr
> > Connection: Keep-Alive
> > ---request end---
> > HTTP req
"A. Carkaci" <[EMAIL PROTECTED]> writes:
> ---request begin---
> GET /images/spk.ico HTTP/1.0
> Referer: http://www.spk.gov.tr/
> User-Agent: Wget/1.10
> Accept: */*
> Host: www.spk.gov.tr
> Connection: Keep-Alive
> ---request end---
> HTTP request sent, awaiting response...
> ---response begin--
Abdurrahman ÇARKACIOĞLU spk.gov.tr> writes:
>
> I successfully compiled Wget 1.10 using mingw. Although Heiko Herold's wget
1.10 (original wget.exe I mean)
> (from http://space.tin.it/computer/hherold/) successfully downloads the
following site,
> my compiled wget (produced by mingw32-make) hangs
Abdurrahman ÇARKACIOĞLU <[EMAIL PROTECTED]> writes:
> I successfully compiled Wget 1.10 using mingw. Although Heiko
> Herold's wget 1.10 (original wget.exe I mean) (from
> http://space.tin.it/computer/hherold/) successfully downloads the
> following site, my compiled wget (produced by mingw32-make) h
I successfully compiled Wget 1.10 using mingw. Although Heiko Herold's wget 1.10
(original wget.exe I mean)
(from http://space.tin.it/computer/hherold/) successfully downloads the following
site,
my compiled wget (produced by mingw32-make) hangs immediately, forever. Any
idea?
wget www.spk.gov.tr
Marc Niederwieser <[EMAIL PROTECTED]> writes:
> option --mirror is described as
> shortcut option equivalent to -r -N -l inf -nr.
> but option "-nr" is not implemented.
> I think you mean "--no-remove-listing".
Thanks for the report, I've now fixed the --help text.
2005-07-01 Hrvoje Niksic <
Hi
option --mirror is described as
shortcut option equivalent to -r -N -l inf -nr.
but option "-nr" is not implemented.
I think you mean "--no-remove-listing".
greetings
Marc
Although wget 1.9.1 downloaded the following address, wget 1.10 fails (it hangs
immediately, forever).
Is there a bug?
Address: www.spk.gov.tr
> From: Hrvoje Niksic [mailto:[EMAIL PROTECTED]
> the 64-bit "download sum", doesn't work for you. What does this
> program print?
>
> #include <stdio.h>
> int
> main (void)
> {
>   __int64 n = 10000000000I64; // ten billion, doesn't fit in 32 bits
>   printf("%I64\n", n);
>   return 0;
> }
>
> It shou
David Fritz <[EMAIL PROTECTED]> writes:
> "I64" is a size prefix akin to "ll". One still needs to specify the
> argument type as in "%I64d" as with "%lld".
That makes sense, thanks for the explanation!
"I64" is a size prefix akin to "ll". One still needs to specify the argument
type as in "%I64d" as with "%lld".
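So the fix for the earlier test program is the conversion letter, not the prefix; inside that program the two variants would read:

  __int64 n = 10000000000I64;
  printf ("%I64\n", n);    /* wrong: size prefix with no conversion */
  printf ("%I64d\n", n);   /* right: the analogue of "%lld" */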
Gisle Vanem <[EMAIL PROTECTED]> writes:
> "Hrvoje Niksic" <[EMAIL PROTECTED]> wrote:
>
>> It should print a line containing "10000000000". If it does, it means
>> we're applying the wrong format. If it doesn't, then we must find
>> another way of printing LARGE_INT quantities on Windows.
>
> I d
"Hrvoje Niksic" <[EMAIL PROTECTED]> wrote:
It should print a line containing "10000000000". If it does, it means
we're applying the wrong format. If it doesn't, then we must find
another way of printing LARGE_INT quantities on Windows.
I don't know what compiler OP used, but Wget only uses
"
Hrvoje Niksic <[EMAIL PROTECTED]> writes:
> This would indicate that the "%I64" format, which Wget uses to print
> the 64-bit "download sum", doesn't work for you.
For what it's worth, MSDN documents it: http://tinyurl.com/ysrh/.
Could you be compiling Wget with an older C runtime that doesn't
su
Herold Heiko <[EMAIL PROTECTED]> writes:
> Downloaded: bytes in 2 files
>
> Note missing number of bytes.
This would indicate that the "%I64" format, which Wget uses to print
the 64-bit "download sum", doesn't work for you. What does this
program print?
#include <stdio.h>
int
main (void)
{
__int64 n
<[EMAIL PROTECTED]> writes:
> Sorry for the crosspost, but the wget Web site is a little confusing
> on the point of where to send bug reports/patches.
Sorry about that. In this case, either address is fine, and we don't
mind the crosspost.
> After taking a look at i
"Mark Street" <[EMAIL PROTECTED]> writes:
> Many thanks for the explanation and the patch. Yes, this patch
> successfully resolves the problem for my particular test case.
Thanks for testing it. It has been applied to the code and will be in
Wget 1.10.1 and later.
Hrvoje,
Many thanks for the explanation and the patch.
Yes, this patch successfully resolves the problem for my particular test
case.
Best regards,
Mark Street.
es the problem by:
* Making sure that path consistently gets prepended in all entry
points to cookie code;
* Removing the special logic from path_match.
With that change your test case seems to work, and so do all the other
tests I could think of.
Please let me know if it works for you, and than
Hello folks,
I'm running wget v1.10 compiled from source (tested on HP-UX and Linux).
I am having problems handling session cookies. The idea is to request a
web page which returns an ID number in a session cookie. All subsequent
requests from the site must contain this session cookie.
I'm usi
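The usual recipe for this with Wget 1.10 is the cookie options; URLs are illustrative:

  # first request: capture the session cookie (session cookies are
  # only written out when --keep-session-cookies is given)
  wget --save-cookies cookies.txt --keep-session-cookies \
       http://www.example.com/login

  # subsequent requests: send it back
  wget --load-cookies cookies.txt http://www.example.com/data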
Will Kuhn <[EMAIL PROTECTED]> writes:
> Apparently wget does not handle single quotes or double quotes very well.
> wget with the following arguments gives an error.
>
> wget
> --user-agent='Mozilla/5.0' --cookies=off --header
> 'Cookie: testbounce="testing";
> ih="b'!!!0T#8G(5A!!#c`#8HWs
-to-date wget will not re-download the
page.
Because this behaviour is unexpected and undocumented, I consider it a
bug.
--
Sincerely,
Dennis Kaarsemaker
On Wednesday 15 June 2005 05:14 pm, Ulf Harnhammar wrote:
> On Wed, Jun 15, 2005 at 11:57:42PM +0200, Ulf Harnhammar wrote:
> > * faq.html
> > ** "3.1 [..]
> > Yes, starting from version 1.10, GNU Wget support files larger than 2GB."
> > (should be "supports")
>
> ** "2.0 How I compile GNU Wget?"
>
On Wednesday 15 June 2005 04:57 pm, Ulf Harnhammar wrote:
> On Wed, Jun 15, 2005 at 03:53:40PM -0500, Mauro Tortonesi wrote:
> > the web pages (including the documentation) on gnu.org have just been
> > updated.
>
> Nice! I have found some broken links and strange grammar, though:
>
> * index.html:
On Wed, Jun 15, 2005 at 11:57:42PM +0200, Ulf Harnhammar wrote:
> * faq.html
> ** "3.1 [..]
> Yes, starting from version 1.10, GNU Wget support files larger than 2GB."
> (should be "supports")
** "2.0 How I compile GNU Wget?"
(should be "How do I")
// Ulf
On Wed, Jun 15, 2005 at 03:53:40PM -0500, Mauro Tortonesi wrote:
> the web pages (including the documentation) on gnu.org have just been updated.
Nice! I have found some broken links and strange grammar, though:
* index.html: There are archives of the main GNU Wget list at
** fly.cc.fer.hr
** www