According to the wget release notes for 1.10:
*** Talking to SSL/TLS servers over proxies now actually works.
Previous versions of Wget erroneously sent GET requests for https
URLs. Wget 1.10 utilizes the CONNECT method designed for this
purpose.
However, I have tried versions 1.10, 1.10.1, and
. Any suggestions?
The bug referred to in the release notes manifested itself
differently: Wget would connect to the proxy server, and request the
https URL using GET. The proxies (correctly) refused to obey this
order, as it would pretty much defeat the purpose of using SSL.
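For background, with the CONNECT method the request Wget sends to the proxy
looks roughly like this (a sketch only; host and port are placeholders, and
any Proxy-Authorization header is omitted):
CONNECT www.example.com:443 HTTP/1.0
Once the proxy answers 200, it simply tunnels bytes, so the SSL/TLS handshake
happens end-to-end with the origin server rather than with the proxy.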
This is indeed the solution.
I have double checked the wget documentation. There is no mention of the
https_proxy parameter. The manual and sample wgetrc that are provided
list http_proxy and ftp_proxy - that is all.
Apparently, the bug is with the documentation, not the application
itself.
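For what it's worth, the setting being discussed can be given either in the
wgetrc file or in the environment; a sketch with a placeholder proxy host and URL:
https_proxy = http://proxy.example.com:8080/
in ~/.wgetrc, or equivalently in the shell:
export https_proxy=http://proxy.example.com:8080/
wget https://www.example.com/file.tar.gz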
Schatzman, James (Mission Systems) [EMAIL PROTECTED] writes:
I have double checked the wget documentation. There is no mention of
the https_proxy parameter. The manual and sample wgetrc that are
provided list http_proxy and ftp_proxy - that is all.
Apparently, the bug
Tony Lewis wrote:
The --convert-links option changes the website path to a local file
system path. That is, it changes the directory, not the file name.
Thanks I didn't understand it that way.
IMO, your suggestion has merit, but it would require wget to maintain
a list of MIME types and
done.
==> PORT ... done.    ==> RETR SUSE-10.0-EvalDVD-i386-GM.iso ... done.
[ <=> ] -673,009,664 113,23K/s
Assertion failed: bytes >= 0, file retr.c, line 292
This application has requested the Runtime to terminate it in an unusual
way.
Please contact the
Tobias Koeck wrote:
done.
==> PORT ... done.    ==> RETR SUSE-10.0-EvalDVD-i386-GM.iso ... done.
[ <=> ] -673,009,664 113,23K/s
Assertion failed: bytes >= 0, file retr.c, line 292
This application has requested the Runtime to terminate it in an unusual
way.
I saw that the option "-k, --convert-links" makes the links relative to the root directory, not to the directory where you downloaded the pages. For example: if I download a page whose URL is www.pageexample.com, the pages I download go in there. But if I use that option, in the pages the links will link to the
That is, there is HTML like this:
<p>Click the following to go to the
<a href="http://www.something.com/junk.asp?thepageIwant=2">next
page</a>.</p>
What I need is for wget to understand that stuff following an ? in a URL
indicates that it's a distinctly different page, and it should go
recursively
Begin forwarded message:
From: [EMAIL PROTECTED]
Date: October 4, 2005 4:36:09 AM GMT+02:00
To: [EMAIL PROTECTED]
Subject: failure notice
Hi. This is the qmail-send program at sunsite.dk.
I'm afraid I wasn't able to deliver your message to the following
addresses.
This is a permanent
HonzaCh [EMAIL PROTECTED] writes:
My localeconv()->thousands_sep (as well as many other struct
members) turns out to be the empty string ("") (MSVC 6.0).
How do you know? I mean, what program did you use to check this?
My quick'n'dirty one. See the source below.
Your source neglects to
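For reference, a minimal check along these lines might look like the following
sketch. The setlocale() call is my guess at what the truncated reply is pointing
out: without it the default "C" locale is in effect and thousands_sep is
legitimately empty.
#include <locale.h>
#include <stdio.h>
int
main (void)
{
  struct lconv *lc;
  /* select the user's locale; in the default "C" locale thousands_sep is "" */
  setlocale (LC_ALL, "");
  lc = localeconv ();
  printf ("thousands_sep = \"%s\" (first byte 0x%02x)\n",
          lc->thousands_sep, (unsigned char) lc->thousands_sep[0]);
  return 0;
}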
HonzaCh [EMAIL PROTECTED] writes:
The latest version (1.10.1) turns up a UI bug: the thousands separator
(a space according to my locale settings) displays as á (character
code 0xA0, see attachment).
Although it does not affect the primary function of WGET, it looks
quite ugly.
Env.: Win2k Pro/Czech
Thanks for the report; I've applied this patch:
2005-08-26 Jeremy Shapiro [EMAIL PROTECTED]
* openssl.c (ssl_init): Set SSL_MODE_AUTO_RETRY.
Index: openssl.c
===
--- openssl.c (revision 2063)
+++ openssl.c (working
I believe I've encountered a bug in wget. When using https, if the
server does a renegotiation handshake wget fails trying to peek for
the application data. This occurs because wget does not set the
openssl context mode SSL_MODE_AUTO_RETRY. When I added the line:
SSL_CTX_set_mode (ssl_ctx
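Judging from the ChangeLog entry quoted above, the completed line is presumably:
SSL_CTX_set_mode (ssl_ctx, SSL_MODE_AUTO_RETRY);
which tells OpenSSL to complete a renegotiation transparently and retry the
read, instead of returning control to the caller mid-handshake.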
Hi wget list!
Is it intended that
wget -Pd:\goog http://www.google.com/;
works, whereas
wget -Pd:\goog\ http://www.google.com/;
gives the error message
wget: missing URL
?
Running wget 1.10 on Windows XP.
Cheers
Jens
Hello,
giuseppe wrote a patch for 1.10.1.beta1. Full report can be viewed here:
http://bugs.debian.org/319088
Forwarded message
From: giuseppe bonacci [EMAIL PROTECTED]
Reply-To: giuseppe bonacci [EMAIL PROTECTED],
[EMAIL PROTECTED]
To: Debian Bug Tracking System
Hello,
I'm not sure it's a bug, but the behaviour described below seems strange to
me, so I thought it was wise to report it:
I'm trying to get a Suse 9.3 ISO from sunsite.informatik.rwth-aachen.de,
a file that is 4383158 KB according to the FTP listing. wget gets about
2.4 GB, then quits
Hrvoje Niksic hniksic at xemacs.org writes:
A. Carkaci carkaci at spk.gov.tr writes:
---request begin---
GET /images/spk.ico HTTP/1.0
Referer: http://www.spk.gov.tr/
User-Agent: Wget/1.10
Accept: */*
Host: www.spk.gov.tr
Connection: Keep-Alive
---request end---
HTTP request
, assuming HTTP/0.9
Length: unspecified
-Original Message-
From: Hrvoje Niksic [mailto:[EMAIL PROTECTED]
Sent: Saturday, July 02, 2005 1:04 AM
To: Abdurrahman ÇARKACIOĞLU
Cc: wget@sunsite.dk
Subject: Re: Mingw bug ?
A. Carkaci [EMAIL PROTECTED] writes:
---request begin---
GET /images
Abdurrahman ÇARKACIOĞLU [EMAIL PROTECTED] writes:
Here are the results..
---request begin---
GET /images/spk.ico HTTP/1.0
Referer: http://www.spk.gov.tr/
User-Agent: Wget/1.10
Accept: */*
Host: www.spk.gov.tr
Connection: Keep-Alive
---request end---
HTTP request sent, awaiting
I believe this patch should fix the problem. Could you apply it and
let me know if it fixes things for you?
2005-07-02 Hrvoje Niksic [EMAIL PROTECTED]
* http.c (gethttp): Except for head_only, use skip_short_body to
skip the non-20x error message before leaving gethttp.
Title: YNT: Mingw bug ?
Now, it works. Thanks a lot.
But I want to understand what is going on. Was it a bug?
Will you consider the patch for a future release of Wget?
-Özgün İleti-
Kimden: Hrvoje Niksic [mailto:[EMAIL PROTECTED]]
Gönderilmiş: Cmt 02.07.2005 14:06
Kime
Abdurrahman ÇARKACIOĞLU [EMAIL PROTECTED] writes:
Now, it works. Thanks a lot.
But I want to understand what is going on. Was it a bug?
It was a combination of two Wget bugs, one in the actual code and the other in
the MinGW configuration.
Wget 1.9.1 and earlier used to close connections to the server
Title: YNT: YNT: Mingw bug ?
-Original Message-
From: Hrvoje Niksic [mailto:[EMAIL PROTECTED]]
Sent: Sat 02.07.2005 16:00
To: Abdurrahman ÇARKACIOĞLU
Cc: wget@sunsite.dk
Subject: Re: YNT: Mingw bug ?
Will you consider the patch for a future release of Wget?
It's already
Abdurrahman ÇARKACIOĞLU [EMAIL PROTECTED] writes:
It's already in the repository.
I think you forgot to put the -DHAVE_SELECT statement
into makefile.src.mingw at
http://svn.dotsrc.org/repo/wget/branches/1.10/windows/.
Am I right?
That was published in a separate patch -- specifically,
I successfully compiled Wget 1.10 using MinGW. Although Heiko Herold's wget 1.10
(the original wget.exe, I mean)
(from http://space.tin.it/computer/hherold/) successfully downloads the following
site,
my compiled wget (produced by mingw32-make) hangs immediately, forever. Any idea?
wget www.spk.gov.tr
Abdurrahman ÇARKACIOĞLU [EMAIL PROTECTED] writes:
I successfully compiled Wget 1.10 using MinGW. Although Heiko
Herold's wget 1.10 (the original wget.exe, I mean) (from
http://space.tin.it/computer/hherold/) successfully downloads the
following site, my compiled wget (produced by mingw32-make) hangs
Abdurrahman ÇARKACIOĞLU abdurrahman.carkacioglu at spk.gov.tr writes:
I successfully compiled Wget 1.10 using MinGW. Although Heiko Herold's wget
1.10 (the original wget.exe, I mean)
(from http://space.tin.it/computer/hherold/) successfully downloads the
following site,
my compiled wget (produced
A. Carkaci [EMAIL PROTECTED] writes:
---request begin---
GET /images/spk.ico HTTP/1.0
Referer: http://www.spk.gov.tr/
User-Agent: Wget/1.10
Accept: */*
Host: www.spk.gov.tr
Connection: Keep-Alive
---request end---
HTTP request sent, awaiting response...
---response begin---
From: Hrvoje Niksic [mailto:[EMAIL PROTECTED]
the 64-bit download sum, doesn't work for you. What does this
program print?
#include <stdio.h>
int
main (void)
{
  __int64 n = 10000000000I64; // ten billion, doesn't fit in 32 bits
  printf ("%I64\n", n);
  return 0;
}
It should print a
Herold Heiko [EMAIL PROTECTED] writes:
Downloaded: bytes in 2 files
Note missing number of bytes.
This would indicate that the %I64 format, which Wget uses to print
the 64-bit download sum, doesn't work for you. What does this
program print?
#include <stdio.h>
int
main (void)
{
  __int64 n =
Hrvoje Niksic [EMAIL PROTECTED] writes:
This would indicate that the %I64 format, which Wget uses to print
the 64-bit download sum, doesn't work for you.
For what it's worth, MSDN documents it: http://tinyurl.com/ysrh/.
Could you be compiling Wget with an older C runtime that doesn't
support
Hrvoje Niksic [EMAIL PROTECTED] wrote:
It should print a line containing 10000000000. If it does, it means
we're applying the wrong format. If it doesn't, then we must find
another way of printing LARGE_INT quantities on Windows.
I don't know what compiler OP used, but Wget only uses
%I64
I64 is a size prefix akin to ll. One still needs to specify the argument
type as in %I64d as with %lld.
David Fritz [EMAIL PROTECTED] writes:
I64 is a size prefix akin to ll. One still needs to specify the
argument type as in %I64d as with %lld.
That makes sense, thanks for the explanation!
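To make that concrete, a minimal MSVC/MinGW-style sketch (the variable and
value are just for illustration):
#include <stdio.h>
int
main (void)
{
  __int64 n = 10000000000LL;    /* ten billion, doesn't fit in 32 bits */
  printf ("%I64d\n", n);        /* I64 is only a size prefix, like ll; the
                                   conversion letter (d, u, x, ...) is still needed */
  return 0;
}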
Hello folks,
I'm running wget v1.10 compiled from source (tested on HP-UX and Linux).
I am having problems handling session cookies. The idea is to request a
web page which returns an ID number in a session cookie. All subsequent
requests from the site must contain this session cookie.
I'm
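For context, the usual recipe for carrying a session cookie across separate
wget runs looks roughly like this (a sketch; host and paths are placeholders):
wget --save-cookies cookies.txt --keep-session-cookies http://www.example.com/login
wget --load-cookies cookies.txt http://www.example.com/members/page.html
--keep-session-cookies matters because session cookies carry no expiry time
and would otherwise not be written to the cookie file.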
to cookie code;
* Removing the special logic from path_match.
With that change your test case seems to work, and so do all the other
tests I could think of.
Please let me know if it works for you, and thanks for the detailed
bug report.
2005-06-24 Hrvoje Niksic [EMAIL PROTECTED
Hrvoje,
Many thanks for the explanation and the patch.
Yes, this patch successfully resolves the problem for my particular test
case.
Best regards,
Mark Street.
Mark Street [EMAIL PROTECTED] writes:
Many thanks for the explanation and the patch. Yes, this patch
successfully resolves the problem for my particular test case.
Thanks for testing it. It has been applied to the code and will be in
Wget 1.10.1 and later.
-to-date wget will not re-download the
page.
Because this behaviour is unexpected and undocumented, I consider it a
bug.
--
Sincerely,
Dennis Kaarsemaker
Will Kuhn [EMAIL PROTECTED] writes:
Apparently wget does not handle single quotes or double quotes very well.
wget with the following arguments gives an error.
wget
--user-agent='Mozilla/5.0' --cookies=off --header
'Cookie: testbounce=testing;
On Wednesday 15 June 2005 04:57 pm, Ulf Harnhammar wrote:
On Wed, Jun 15, 2005 at 03:53:40PM -0500, Mauro Tortonesi wrote:
the web pages (including the documentation) on gnu.org have just been
updated.
Nice! I have found some broken links and strange grammar, though:
* index.html: There
On Wednesday 15 June 2005 05:14 pm, Ulf Harnhammar wrote:
On Wed, Jun 15, 2005 at 11:57:42PM +0200, Ulf Harnhammar wrote:
* faq.html
** 3.1 [..]
Yes, starting from version 1.10, GNU Wget support files larger than 2GB.
(should be supports)
** 2.0 How I compile GNU Wget?
(should be How
I have a reproducible report (thanks Igor Andreev) about a little verbose
log problem with ftp with my windows binary; is this reproducible on other
platforms, too?
wget -v ftp://garbo.uwasa.fi/pc/batchutil/buf01.zip
ftp://garbo.uwasa.fi/pc/batchutil/rbatch15.zip
(seems to happen with any
Herold Heiko wrote:
I have a reproducible report (thanks Igor Andreev) about a little verbose
log problem with ftp with my windows binary; is this reproducible on other
platforms, too?
wget -v ftp://garbo.uwasa.fi/pc/batchutil/buf01.zip
ftp://garbo.uwasa.fi/pc/batchutil/rbatch15.zip
Mauro Tortonesi [EMAIL PROTECTED] writes:
this seems to be already fixed in the 1.10 documentation.
Now that 1.10 is released, we should probably update the on-site
documentation.
On Wednesday 15 June 2005 02:05 pm, Hrvoje Niksic wrote:
Mauro Tortonesi [EMAIL PROTECTED] writes:
this seems to be already fixed in the 1.10 documentation.
Now that 1.10 is released, we should probably update the on-site
documentation.
i am doing it right now.
--
Aequam memento rebus in
On Wednesday 15 June 2005 02:16 pm, Mauro Tortonesi wrote:
On Wednesday 15 June 2005 02:05 pm, Hrvoje Niksic wrote:
Mauro Tortonesi [EMAIL PROTECTED] writes:
this seems to be already fixed in the 1.10 documentation.
Now that 1.10 is released, we should probably update the on-site
On Wed, Jun 15, 2005 at 03:53:40PM -0500, Mauro Tortonesi wrote:
the web pages (including the documentation) on gnu.org have just been updated.
Nice! I have found some broken links and strange grammar, though:
* index.html: There are archives of the main GNU Wget list at
** fly.cc.fer.hr
**
On Wed, Jun 15, 2005 at 11:57:42PM +0200, Ulf Harnhammar wrote:
* faq.html
** 3.1 [..]
Yes, starting from version 1.10, GNU Wget support files larger than 2GB.
(should be supports)
** 2.0 How I compile GNU Wget?
(should be How do I)
// Ulf
Sorry for the crosspost, but the wget Web site is a little confusing on the
point of where to send bug reports/patches.
Just installed wget 1.10 on Friday. Over the weekend, my scripts failed with
the
following error (once for each wget run):
Assertion failed: wget_cookie_jar != NULL, file
On Thursday 02 June 2005 09:33 am, Herb Schilling wrote:
Hi,
On http://www.gnu.org/software/wget/manual/wget.html, the section on
protocol-directories has a paragraph that is a duplicate of the
section on no-host-directories. Other than that, the manual is
terrific! Wget is wonderful also.
Title: Small bug in Wget manual page
Hi,
On http://www.gnu.org/software/wget/manual/wget.html, the section on
protocol-directories has a paragraph that is a duplicate of the section on
no-host-directories. Other than that, the manual is terrific!
Wget is wonderful also. I don't know what I
Wget doesn't recognize the image tag,
Aah, thanks.
Should Wget support it to be compatible?
IMHO yes.
Thanks for your help.
Werner
simply doesn't download -- no error message,
no warning. My Mozilla browser displays the page just fine. Since
wget downloads the first thumbnail picture
`../image/ft2-nautilus-thumb.png' without problems I suspect a serious
bug in wget.
I'm running wget on a GNU/Linux box.
BTW
. Since
wget downloads the first thumbnail picture
`../image/ft2-nautilus-thumb.png' without problems I suspect a
serious bug in wget.
ft2-nautilus-thumb.png is referenced using the regular img tag.
BTW, it is not possible for CVS wget to have builddir != srcdir
(after creating the configure
Hi
wget ftp://someuser:[EMAIL PROTECTED]@www.somedomain.com/some_file.tgz
is splitting on the first @, not the second.
Is this a problem with the URL standard or a wget issue?
Regards
Andrew Gargan
Andrew Gargan [EMAIL PROTECTED] writes:
wget ftp://someuser:[EMAIL PROTECTED]@www.somedomain.com/some_file.tgz
is splitting on the first @, not the second.
Encode the '@' as %40 and this will work. For example:
wget ftp://someuser:[EMAIL PROTECTED]/some_file.tgz
Is this a problem
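Purely for illustration, with a made-up password, the encoded form would look
something like:
wget 'ftp://someuser:p%40ssword@www.somedomain.com/some_file.tgz'
The %40 is decoded back to @ before the password is sent to the server.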
on a Solaris 8 box.
Is this a bug, or is my command invoking wget just wrong or something
missing? I couldn't find any other options within the help.
Thank you very much in advance.
Anton
Hi,
I wanted to alert you all to a bug in wget, reported by one of our
(gentoo) users at:
https://bugs.gentoo.org/show_bug.cgi?id=69827
I am the maintainer for the Gentoo ebuild for wget.
If someone would be willing to look at and help us with that bug, it'd
be much appreciated.
Thanks
Seemant Kulleen [EMAIL PROTECTED] writes:
I wanted to alert you all to a bug in wget, reported by one of our
(gentoo) users at:
https://bugs.gentoo.org/show_bug.cgi?id=69827
I am the maintainer for the Gentoo ebuild for wget.
If someone would be willing to look at and help us
if that really worked.
I don't even know if this is a bug in Wget or in the way that the
build is attempted by the Gentoo package mechanism. Providing the
actual build output might shed some light on this.
if use static; then
emake LDFLAGS=--static || die
I now tried `LDFLAGS=--static
The following command
wget --convert-links --backup-converted --html-extension --mirror
http://localhost/index.php
downloads index.php.html and backs it up as index.php.html.orig before
converting the links. When re-mirroring, wget looks for index.php.orig
which doesn't exist and thus re-download
The following command
wget --convert-links --backup-converted --html-extension --mirror
http://localhost/index.php
downloads index.php.html and backs it up as index.php.html.orig before
converting the links. When re-mirroring, wget looks for local index.php.orig
which doesn't exist and thus
I try to do something like
wget http://website.com/ ...
login=username&domain=hotmail%2ecom&_lang=EN
But when wget sends the URL out, the hotmail%2ecom
becomes hotmail.com !!! Is this the supposed
behaviour? I saw this on the sniffer. I suppose the
translation of %2e to . is done by wget. Because
Will Kuhn [EMAIL PROTECTED] writes:
I try to do something like
wget http://website.com/ ...
login=username&domain=hotmail%2ecom&_lang=EN
But when wget sends the URL out, the hotmail%2ecom
becomes hotmail.com !!! Is this the supposed
behaviour ?
Yes.
I saw this on the sniffer. I suppose
Hrvoje Niksic [EMAIL PROTECTED] writes:
Can I have it not do the translation ??!
Unfortunately, only by changing the source code as described in the
previous mail.
BTW I've just changed the CVS code to not decode the % sequences.
Wget 1.10 will contain the fix.
This problem has been fixed for the upcoming 1.10 release. If you
want to try it, it's available at
ftp://ftp.deepspace6.net/pub/ds6/sources/wget/wget-1.10-alpha2.tar.bz2
Arndt Humpert [EMAIL PROTECTED] writes:
wget, win32 rel. crashes with huge files.
Thanks for the report. This problem has been fixed in the latest
version, available at http://xoomer.virgilio.it/hherold/ .
Hello,
wget, win32 rel. crashes with huge files.
regards
[EMAIL PROTECTED]
== Command Line
wget -m
Hi,
When using wget (version 1.9.1 running on Debian Sarge) to download
files over 2 gigs from an ftp server (proftpd), wget reports a negative
length and keeps downloading, but once the file is successfully
downloaded it crashes (and therefore doesn't download the rest of the
files). Here is
Title: WGET Bug?
#
C:\Grabtest\wget.exe -r --tries=3 http://www.xs4all.nl/~npo/ -o C:/Grabtest/Results/log
#
--16:23:02-- http://www.xs4all.nl/%7Enpo
Jens Rösner [EMAIL PROTECTED] writes:
C:\wget>wget --proxy=on -x -r -l 2 -k -x -limit-rate=50k --tries=45
--directory-prefix=AsptDD
As Jens said, Wget 1.5.3 did not yet support bandwidth throttling.
Also please note that the option is named --limit-rate, not
-limit-rate.
Hello, I don't have much experience with wget (I
must say I have been using it for one hour), but it
seemed to me that there is a problem with its "command
interpreter".
So I sent the command.
Here is the command:
**
//bug or no bug
//under Windows XP
Hello, I sent the wrong file.
My problem is that wget does not recognize the option
`--limit-rate=50k'
Here is the file:
G:\Documents and Settings\Hacene\Bureau\wget>wget
--proxy=on -x -r -l 2 -k --lim
it-rate=50k --tries=45 --directory-prefix=AsptDD
http://www.gnu.org/software/wg
et/manual/
Hallo!
I don't speak French (or hardly at all)...
C:\wget>wget --proxy=on -x -r -l 2 -k -x -l
imit-rate=50k --tries=45 --directory-prefix=AsptDD
I think it should be:
C:\wget>wget --proxy=on -x -r -l 2 -k -x -limit-rate=50k --tries=45
--directory-prefix=AsptDD
in one line of
Hi Jorge!
Current wget versions do not support large files > 2GB.
However, the CVS version does and the fix will be introduced
to the normal wget source.
Jens
(just another user)
When downloading a file of 2GB or more, the counter goes crazy; probably
it should use a long instead of an int
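A tiny sketch of why a 32-bit int makes the counter go negative past 2 GB
(values are illustrative; the exact negative number depends on the platform):
#include <stdio.h>
int
main (void)
{
  long long size = 3000000000LL;   /* a file just under 3 GB */
  int counter = (int) size;        /* what a 32-bit counter would hold */
  printf ("%d\n", counter);        /* typically prints -1294967296 */
  return 0;
}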
Is it still useful to mail to [EMAIL PROTECTED]? I don't think
anybody's home. Shall the address be closed?
I don't know why you say that. I see bug reports and discussion of fixes
flowing through here on a fairly regular basis.
Mark Post
-Original Message-
From: Dan Jacobson [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 15, 2005 3:04 PM
To: [EMAIL PROTECTED]
Subject: bug-wget still
Dan Jacobson [EMAIL PROTECTED] writes:
Is it still useful to mail to [EMAIL PROTECTED]? I don't think
anybody's home. Shall the address be closed?
If you're referring to Mauro being busy, I don't see it as a reason to
close the bug reporting address.
P I don't know why you say that. I see bug reports and discussion of fixes
P flowing through here on a fairly regular basis.
All I know is my reports for the last few months didn't get the usual (any!)
cheery replies. However, I saw them on Gmane, yes.
Jesus Legido wrote:
I'm getting a file from https://mfi-assets.ecb.int/dla/EA/ea_all_050303.txt:
The problem is not with wget. The file on the server
starts with 0xFF 0xFE. Put the following into an HTML file (say temp.html) on
your hard drive, open it in your web browser, right click on
OS = Solaris 8
Platform = Sparc
Test command =
/usr/local/bin/wget -r -t0 -m ftp://root:[EMAIL PROTECTED]/usr/openv/var
The directory contains some sub-directories and files to
synchronize.
Example:
# ls -la /usr/openv/
total 68462
drwxr-xr-x 14 root bin 512 set 1 17:52
Downloading a 4.2 gig file (such as from
ftp://movies06.archive.org/2/movies/abe_lincoln_of_the_4th_ave/abe_lincoln_of_the_4th_ave.mpeg )
causes the status text (i.e.
100%[+===] 38,641,328 213.92K/s ETA
00:00) to print invalid things (in this case, that
Quoting Alan Robinson [EMAIL PROTECTED]:
Downloading a 4.2 gig file (such as from
ftp://movies06.archive.org/2/movies/abe_lincoln_of_the_4th_ave/abe_lincoln_of_the_4th_ave.mpeg )
causes the status text (i.e.
100%[+===] 38,641,328 213.92K/s ETA
00:00)
Hello List,
we are currently setting up an SSL-secured domain with SSL on both sides
(client and server). It works fine with any browser, but I do have
problems with wget.
The login/auth seems to work and apache reports a 200 code with a
correct filesize, but wget says Read error (Unknown
Hi,
When trying to mirror an ftp server via an ftp proxy (set in the ftp_proxy
environment variable), recursion breaks.
The following command should in theory download an index.html from the
ftp proxy and parse it, downloading all links. In reality only index.html
is downloaded.
wget -r -N
Hi Jason!
If I understood you correctly, this quote from the manual should help you:
***
Note that these two options [accept and reject based on filenames] do not
affect the downloading of HTML files; Wget must load all the HTMLs to know
where to go at all--recursive retrieval would make no
When the -R option is specified to reject files by name in recursive mode, wget
downloads them anyway then deletes them after downloading. This is a problem
when you are trying to be picky about the files you are downloading to save
bandwidth. Since wget appears to know the name of the file it
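For reference, the options under discussion are typically used like this (a
sketch with a placeholder URL); as the manual excerpt quoted earlier explains,
HTML pages themselves are still fetched during recursion so that links can be
followed:
wget -r -R '*.iso,*.zip' http://www.example.com/
wget -r -A '*.html,*.png' http://www.example.com/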
Hi,
The wget manual page is missing documentation on what return code is returned by
wget in which situation.
Folkert van Heusden
I don't really know if this is a bug or something I am doing wrong; if it's not a bug
then don't bother getting too involved in this and just point me to where
I should be going.
Anyway, the pages I retrieve using wget are not showing me the related pictures
for the page even though
Hello!
I've been trying to get wget to retrieve a file off of our AS/400 using ftp.
I'm using wget 1.8.2 on a RedHat Enterprise Linux 3.0 box. The debug output
is below.
I don't know if you're familiar with AS/400's. They use libraries instead
of a directory structure. All libraries are
Hi
When I try (command in one line, of course):
wget -rH -Dvirtualdub.org --exclude-domains forums.virtualdub.org
http://www.virtualdub.org/
wget still gets other sites, i.e. mikecrash.wz.cz, sourceforge.net,
www.google.com etc. Is it a bug? Does wget work badly with php?
Yours Sincerely
Greg
Quoting Tony O'Hagan [EMAIL PROTECTED]:
Original path: abc def/xyz pqr.gif
After wget mirroring: abc%20def/xyz pqr.gif (broken link)
wget --version is GNU Wget 1.8.2
This was a well-known error in the 1.8 versions of wget, which is already
corrected in the 1.9
Recently I used the following wget command under a hosted linux account:
$ wget -mirror url -o mirror.log
The web site contained files and virtual directories that contained spaces
in the names.
URL encoding translated these spaces to %20.
wget correctly URL decoded the file names (creating
a negative number so it exits.
Of course, this is all speculation on my part about what the code looks like but
none the less, the bug does exist on both linux and cygwin.
Thanks,
Matt
---
BTW:
great job, really...
on wget and all the GNU software in general...
THANKS
Hi,
I have seen a strange bug:
echo test > test
wget -O - http://www.w3c.org >> test
Actually, wget should append to test, right? Well, it does in version
1.9, but it does not do that in 1.8 (tested with bash 2.x and 3.0). In
version 1.8 it overwrites (!) the file.
OK, I see
Quoting Christoph Anton Mitterer [EMAIL PROTECTED]:
It seems that the joecartoon.com server sends the gzip file
intentionally with an appended 0xA (perhaps it is even an error).
Can you check if the additional 0xA byte is included in the Content-Length or
not? Does it increase the C-L by one or
something like gzip --decompress --force --stdout
joebutton.swf > decompressed.swf it works.
I also noticed that gzip --decompress --force joebutton.swf.gz (same
thing without writing to stdout but directly to a file) does not work.
Very strange imho.
So my solution to the bug is: A very big