Hello.
I've tried to download some files from joecartoon.com and perhaps I've
found a bug in wget.
I did the following:
[EMAIL PROTECTED]:~/test$ wget -S http://joecartoon.atomfilms.com/media/swf/0/public/1/joebutton.swf
--22:57:11-- http://joecartoon.atomfilms.com/media/swf
[...]
the correct swf-file.
So I think that's not a bug in wget. But I urgently suggest that in such
a case wget should come up with a very big warning message or something
like this ;)
Regards,
Christoph.
wget cannot download the following
link:
wget --tries=5 http://extremetracking.com/free-2/scripts/reports/display/edit?server=clogin=flashani
I tested it with another downloader and it was
working.
Hi,
I've experienced this bug while retrieving a 0 byte document via an http
proxy.
wget localhost/~antonio/
stalls after displaying the string Length: 0 [text/html]
(gdb said it was on accept() )
wget --use-proxy off localhost/~antonio/
exits correctly
wget is the HEAD version of CVS repository
Hi, Simone,
Santa put a patch for you in http://software.lpetrov.net/wget-LFS/
Unwrap carefully and enjoy. Merry Christmas,
Leonid
24-DEC-2004 21:02:03
Hello.
I was retrieving this iso:
ftp://ftp.slackware.no/pub/linux/ISO-images/Slackware/Current-ISO-build/slackware-10.0-DVD.iso
I killed wget and then I resumed it with wget -c (2285260288 bytes of the
file had been downloaded)
here's the output:
--19:31:47--
tracking this bug
Thanks,
--
Roberto Sebastiano [EMAIL PROTECTED]
hello.
recently i tried to download a link directly to a CD by using
wget -O- -c http://site/file.iso | cdrecord dev=[device] -
everything was fine until the moment the connection died;
wget then started downloading the file _from byte 0_.
it seems -c isn't honored when one uses -O-
of the
development effort behind GNU wget.
Great. I will forward reported bugs from the Debian user
(http://bugs.debian.org/wget) to your bug system.
if i don't find any major problem, i am planning to release wget 1.9.2 with
LFS support and a long list of bugfixes before the end of the year.
hi noèl,
very simple: because i don't have the root password on savannah ;-)
BTW, i also find that roundup is a pretty cool bug tracking system. very
simple to use and to maintain (bugzilla and RT would have been overkill for
wget), extremely flexible (can be used with almost any open source db
if i don't find any major problem, i am planning to release wget 1.9.2 with
LFS support and a long list of bugfixes before the end of the year.
Are you planning to fix session cookies?
In the current release version they don't work. In the tip build they nearly
work, but I got problems
hi to everybody,
thanks to the kind hosting provided by the ferrara linux user group, i have
finally been able to set up a bug tracking system for GNU wget:
http://wget-bugs.ferrara.linux.it
it will definitely be an invaluable tool for the coordination of the
development effort
Hello,
On a few websites, the filename of a file is extracted from the http server
response headers.
I wrote a bash script to get the real filename from a wget -sS command,
because I didn't find that among the options...
I wonder if such an option would be useful directly in wget
to obtain the filename
Hello!
I am very pleased to use wget to crawl pages. It is an excellent tool.
Recently I found a bug while using wget, although I am not sure whether it's
a bug or incorrect usage. I just want to report it here.
When I use wget to mirror or recursively download a web site with the -O
option, I
Hi,
When wget 1.9.1, issued with the --spider option, is given an ftp link (or
an http link redirecting it to an ftp site), it doesn't put a newline char
after issuing the PORT command.
gophi (not subscribed to list).
--
Adam Wysocki * http://www.gophi.apcoh.org/ * GG 1234 * GSM 508878856
the url with ~
and the url with %7E downloaded files differently!
I also added new log outputs, and while testing them with the
problem sites, surprise, there seemed to be no problems.
So the fact that urls are not downloaded could be just some
code bug in wget. But why this problem appears when
Hi,
i'm using wget 1.9.1 and got
a problem:
when using wget -r -d
--referer='http://domain.invalid/login.htm'
'http://user:[EMAIL PROTECTED]://domain.invalid/members/'
the first request is sent properly, but the third and following (the second
is the one for robots.txt) send an incorrect referer:
Hello.
Has the ~ / %7E bug always been in wget? When was it added to wget?
Who wrote the code?
I would like to suggest that the person who introduced this severe bug
should immediately fix it. It does not make sense for us to waste
time trying to fix this bug if that person did not spend any moment
the -k option seems to ignore unquoted href parameters. Bad form though
they are, there are a LOT of pages out there that have these:
<a href=foo.html>foo</a>
-Chris
instead of two folders def\ghi.
Maybe it's not a bug, but I think it could be a feature to convert this char
in the next version.
with kind regards
chris faeh
There is a bug in the -np option (don't ascend to the parent
directory) of wget 1.9.1:
When the URL ends in a slash (/), it works OK, but when the slash is
missing, wget apparently doesn't care about the option and happily
continues above the parent directory.
Compare these two lines (only
wget-1.9.1
in file log.c, in function log_close():
-
  if (logfp)
    fclose (logfp);
-
closes the logfile file descriptor even if it is stderr!
i think we should test like this:
  if (logfp && logfp != stderr)
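For reference, a minimal sketch of the guarded close being proposed (hedged;
the real log.c tracks more state than this):

  #include <stdio.h>

  static FILE *logfp;   /* points at stderr or at an opened log file */

  void
  log_close (void)
  {
    /* Close the log stream only if it is a real log file; closing
       stderr would silence all later diagnostics. */
    if (logfp && logfp != stderr)
      fclose (logfp);
    logfp = NULL;
  }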
When using wget on a large file, e.g.
(wget -c ftp://ftp.tu-chemnitz.de/pub/linux/knoppix-remastered/knoppix-34-dvd-by-iso-top.info.iso)
around 2.5 gigs, I get the following progress bar:
[ <=> ] -1,852,766,472 343.26K/s
and later
[ <=> ] -1,842,437,408 351.40K/s
I noticed
not behave
as documented, it's a bug -- according to the man page -- so I am taking the
liberty to 'file a bug'.
(The expected behavior I'm talking about is this: if I use
--spider, I expect wget to do nothing after finding the server -- like
sending GET to the server and getting HTML back.)
That's my bug
Patrik,
Patch for wget with large file support (2Gb) under Unix can be
found at http://software.lpetrov.net/wget-LFS/
Leonid
hi, i've found the following bug / issue with wget.
due to limitations, wget chokes on files larger than an unsigned long can
hold: it displays an incorrect size and also acts incorrectly when trying to
download one of these files.
//patrik
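The negative byte counts seen in other reports in this archive (e.g.
-1,852,766,472) are consistent with a length past 2^31 - 1 being truncated
into a signed 32-bit integer; a minimal demonstration (illustrative values,
not wget's actual code):

  #include <stdio.h>
  #include <stdint.h>

  int
  main (void)
  {
    /* A byte count past 2^31 - 1 does not fit in a signed 32-bit
       integer; on a two's-complement machine it wraps negative. */
    long long real_pos = 2442200824LL;    /* bytes, > 2 GiB */
    int32_t pos32 = (int32_t) real_pos;   /* what a 32-bit long holds */
    printf ("%d\n", (int) pos32);         /* prints -1852766472 */
    return 0;
  }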
') {
+      /*
+       * I've spotted wget printing CRLF line terminators
+       * while communicating with ftp://ftp.debian.org. This
+       * is a bug: wget should print whatever the platform
+       * line terminator is (CR on Mac
tags 261755 +patch
thanks
On Sun, Aug 22, 2004 at 11:39:07AM +0200, Thomas Hood wrote:
The changes contemplated look very invasive. How quickly can this
bug be fixed?
Here we go: Hacky, non-portable, but pretty slick non-invasive,
whatever that means. Now I'm going to check whether
On Wed, 21 Jan 2004 23:07:30 -0800, you wrote:
Hello,
I think I've come across a little bug in wget when using it to get a file
via ftp.
I did not specify the passive option, yet it appears to have been used
anyway. Here's a short transcript:
Passive FTP can be specified in /etc/wgetrc or /usr
Hello,
here a bugreport:
(http://bugs.debian.org/197916)
-Forwarded Message-
From: Antoni Bella Perez [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Bug#197916: wget: Mutual incompatibility between arguments -k and -O
Date: Wed, 18 Jun 2003 16:49:22 +0200
Package: wget
Hello,
maybe someone can document this (http://bugs.debian.org/182957) in one
or two sentences in wget.texi.
thx.
-Forwarded Message-
From: Daniel B. dsb smart.net
...
The wget manual page doesn't document the format of the comma-separated values
for the --rejlist and
Tristan Miller [EMAIL PROTECTED] writes:
There appears to be a bug in the documentation (man page, etc.) for
wget 1.9.1.
I think this is a bug in the man page generation process.
Hello,
I think wget cannot store more than one cookie at a time.
Is this a bug?
Installed from wget-cvs_1.9.1-20040319_i386.deb
Some log entries following:
Best regards,
Valdas
DEBUG output created by Wget 1.9+cvs-dev on linux-gnu.
Created socket 8.
Releasing 0x8090868 (new
Greetings.
There appears to be a bug in the documentation (man page, etc.) for wget
1.9.1. Specifically, the section about the command-line option for
proxies ends abruptly:
-Y on/off
--proxy=on/off
Turn proxy support on or off. The proxy is on by default
Hi folks!
Sometimes I experience very unpleasant behavior of wget (using some
not-really-recent CVS version of wget 1.9, under W98SE). I have a partially
downloaded file (usually a big one; an interrupted download of a small file
is much less likely), so I want to finish the
Ah, sorry, I have just discovered that it was reported about a week
ago (http://www.mail-archive.com/wget%40sunsite.dk/msg06527.html). I really
did try to search for overwrite, etc. in the archive, honestly. :-)
But that e-mail does not use the word overwrite at all...
Regards,
Yup; 1.9.1 cannot download large files. I hope to fix this by the
next release.
Hi Ben!
Not a bug as far as I can see.
Use -A to accept only certain files.
Furthermore, the pdf and ppt files are located across various servers, so
you need to allow wget to visit servers other than the original one with -H,
and then restrict it to only certain ones with -D.
wget -nc -x -r -l2 -p
Hi,
I use wget on an i386 redhat 9 box to download a 4G DVD from an ftp site.
The process stops at:
$ wget -c --proxy=off
ftp://redhat.com/pub/fedora/linux/core/2/i386/iso/FC2-i386-DVD.iso
--12:47:24--
ftp://redhat.com/pub/fedora/linux/core/2/i386/iso/FC2-i386-DVD.iso
=>
Hi,
How can I download all pdf and ppt
files from the following url
with a command line of:
wget -k -r -l 1 http://devresource.hp.com/drc/topics/utility_comp.jsp
I am on windows 2000 server sp4 with latest update.
E:\Release>wget -V
GNU Wget 1.9.1
Copyright (C) 2003 Free Software Foundation,
Hello!
I just found a feature in an embedded system (no source) with an ftp server.
In the listing, there are two spaces between the file size and the month.
As a consequence, wget always thinks the size is 0.
The procedure ftp_parse_unix_ls just steps back one blank
before cur.size is calculated.
My quick hack is
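The hack itself is cut off above; presumably it skips back over all
whitespace instead of exactly one blank. A generic sketch of that idea
(hypothetical names, not the real ftp-ls.c code):

  #include <ctype.h>

  /* Step back over ALL whitespace between the month name and the
     size field, rather than assuming a single separating blank. */
  static const char *
  skip_back_ws (const char *line, const char *p)
  {
    while (p > line && isspace ((unsigned char) p[-1]))
      --p;
    return p;
  }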
Dear Reader,
some may not really consider it a bug, so it is maybe
more of a nice-to-have.
When I try to mirror the internet pages I develop
http://www.nachttraum.de
http://www.felixfrisch.de
wget complains (actually linux complains) that the file
name is too long. It is not exactly a bug, as I use cgi
Hello.
Problem: When downloading all in
http://udn.epicgames.com/Technical/MyFirstHUD
wget overwrites the downloaded MyFirstHUD file with
the MyFirstHUD directory (which comes later).
GNU Wget 1.9.1
wget -k --proxy=off -e robots=off --passive-ftp -q -r -l 0 -np -U Mozilla $@
Solution: Use of -E
, something which I cannot control. So, again, I say this
is a bug.
I see that frontcmp() is also called by download_child_p (recur.c), which is
an HTTP function, so any possible patch would probably need to just create
a new function in utils.c solely for use in FTP directory matching. It's
only
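A sketch of the kind of dedicated helper the poster describes (hypothetical;
not an actual wget function):

  #include <string.h>

  /* Match a directory prefix component-wise, so "/pub/x" matches
     "/pub/x/y" but not "/pub/xy". */
  static int
  ftp_dir_prefix_match (const char *prefix, const char *dir)
  {
    size_t len = strlen (prefix);
    if (strncmp (dir, prefix, len) != 0)
      return 0;
    return dir[len] == '\0' || dir[len] == '/';
  }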
I sent this message to [EMAIL PROTECTED] as directed in the wget man page, but it
bounced and said to try this email address.
This bug report is for GNU Wget 1.8.2 tested on both RedHat Linux 7.3 and 9
rpm -q wget
wget-1.8.2-9
When I use wget with -S to show the http headers, and I use
Hello. This is a report on some wget bugs. My wgetdir command looks like
the following (wget 1.9.1):
wget -k --proxy=off -e robots=off --passive-ftp -q -r -l 0 -np -U Mozilla $@
Bugs:
Command: wgetdir "http://www.directfb.org".
Problem: In file www.directfb.org/index.html the hrefs of type
Juhana Sadeharju [EMAIL PROTECTED] writes:
Command: wgetdir "http://liarliar.sourceforge.net".
Problem: Files are named as
content.php?content.2
content.php?content.3
content.php?content.4
which are interpreted, e.g., by Nautilus as manual pages and are
displayed as plain texts. Could
Hi,
While downloading a file of about 3,234,550,172 bytes with
wget "http://foo/foo.mpg" I get an error:
HTTP request sent, awaiting response... 200 OK
Length: unspecified [video/mpeg]
[ <=> ] -1,060,417,124 13.10M/s
; the bug only happens when the url is
passed in with:
cat <<EOF | wget -i -
http://...
EOF
But I cannot repeat that, either. As long as the consecutive slashes
are in the query string, they're not stripped.
Using this method is necessary since it is the ONLY secure way I
know of to do
Good day!
I use wget 1.9.1.
By default, wget converts all links to the site root / or somedomain.com/
into /index.html or somedomain.com/index.html.
But some sites don't use index.html as the default page, and if you use
timestamping and continue downloading a site in more than 1 session:
1. wget first downloads index.html
The whole matter of conversion of / to /index.html on the file
system is a hack. But I really don't know how to better represent
empty trailing file name on the file system.
Hrvoje Niksic wrote:
The whole matter of conversion of / to /index.html on the file
system is a hack. But I really don't know how to better represent
empty trailing file name on the file system.
Another, for now rather limited, hack: on file systems which support some
sort of file attributes
. Then I remembered that I was using -i. Wget seems to work
fine with the url on the command line; the bug only happens when the
url is passed in with:
cat <<EOF | wget -i -
http://...
EOF
Using this method is necessary since it is the ONLY secure way I know
of to do a password-protected http request
D Richard Felker III [EMAIL PROTECTED] writes:
The following code in url.c makes it impossible to request urls that
contain multiple slashes in a row in their query string:
[...]
That code is removed in CVS, so multiple slashes now work correctly.
Think of something like
On Mon, Mar 01, 2004 at 03:36:55PM +0100, Hrvoje Niksic wrote:
D Richard Felker III [EMAIL PROTECTED] writes:
The following code in url.c makes it impossible to request urls that
contain multiple slashes in a row in their query string:
[...]
That code is removed in CVS, so multiple
D Richard Felker III [EMAIL PROTECTED] writes:
Think of something like http://foo/bar/redirect.cgi?http://...
wget translates this into: [...]
Which version of Wget are you using? I think even Wget 1.8.2 didn't
collapse multiple slashes in query strings, only in paths.
I was using
The following code in url.c makes it impossible to request urls that
contain multiple slashes in a row in their query string:
else if (*h == '/')
  {
    /* Ignore empty path elements.  Supporting them well is hard
       (where do you save "http://x.com///y.html"?), and
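For contrast, a sketch of slash-collapsing restricted to the path component,
one way to avoid the reported breakage while still ignoring empty path
elements (hypothetical code, not wget's url.c; the actual CVS fix simply
removed the collapsing):

  #include <string.h>

  /* Collapse empty path elements in PATH only; anything from the
     '?' onward is the query string and is left untouched, so URLs
     like redirect.cgi?http://... survive. */
  static void
  collapse_path_slashes (char *path)
  {
    char *q = strchr (path, '?');
    char *end = q ? q : path + strlen (path);
    char *src = path, *dst = path;
    while (src < end)
      {
        if (*src == '/' && dst > path && dst[-1] == '/')
          { src++; continue; }   /* drop the empty element */
        *dst++ = *src++;
      }
    /* Move the untouched query (and its NUL) down after the
       rewritten path. */
    memmove (dst, end, strlen (end) + 1);
  }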
Interesting. Is it really necessary to zero out sockaddr/sockaddr_in
before using it? I see that some sources do it, and some don't. I
was always under the impression that, as long as you fill the relevant
members (sin_family, sin_addr, sin_port), other initialization is not
necessary. Was I
Manfred Schwarb [EMAIL PROTECTED] writes:
Interesting. Is it really necessary to zero out sockaddr/sockaddr_in
before using it? I see that some sources do it, and some don't. I
was always under the impression that, as long as you fill the relevant
members (sin_family, sin_addr, sin_port),
francois eric [EMAIL PROTECTED] writes:
after some tests:
bug is when: ftp, with username and password, with bind address specified
bug is not when: http, ftp without username and password
looks like memory leaks. so i made some modifications before bind:
src/connect.c
) ready.
...
--
after some tests:
bug is when: ftp, with username and password, with bind address specified
bug is not when: http, ftp without username and password
looks like memory leaks. so i made some modifications before bind:
src/connect.c:
--
...
/* Bind the client side
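The modification itself is cut off above; the conventional pattern under
discussion (zeroing the sockaddr before filling it and calling bind) looks
roughly like this (hypothetical address and names, not wget's connect.c):

  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <string.h>
  #include <sys/socket.h>

  static int
  bind_client_side (int sock)
  {
    struct sockaddr_in sa;
    /* Clear the whole struct (sin_zero included) before setting
       the members bind() reads. */
    memset (&sa, 0, sizeof sa);
    sa.sin_family = AF_INET;
    sa.sin_addr.s_addr = inet_addr ("192.0.2.1"); /* example bind address */
    sa.sin_port = htons (0);                      /* any local port */
    return bind (sock, (struct sockaddr *) &sa, sizeof sa);
  }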
** High Priority **
Hi
On my AIX server I use wget with this command:
/usr/local/bin/wget http://www.???.?? -O /exploit/log/test.log
but when I read my file test.log, its date is January 30 2003 ???
that's incredible.
What's the problem, please?
Regards
olivier
don [EMAIL PROTECTED] writes:
I did not specify the passive option, yet it appears to have been used
anyway. Here's a short transcript:
[EMAIL PROTECTED] sim390]$ wget ftp://musicm.mcgill.ca/sim390/sim390dm.zip
--21:05:21-- ftp://musicm.mcgill.ca/sim390/sim390dm.zip
=>
Kairos [EMAIL PROTECTED] writes:
$ cat wget.exe.stackdump
[...]
What were you doing with Wget when it crashed? Which version of Wget
are you running? Was it compiled for Cygwin or natively for Windows?
$ cat wget.exe.stackdump
Exception: STATUS_ACCESS_VIOLATION at eip=77F51BAA
eax= ebx= ecx=0700 edx=610CFE18 esi=610CFE08 edi=
ebp=0022F7C0 esp=0022F74C program=C:\nonspc\cygwin\bin\wget.exe
cs=001B ds=0023 es=0023 fs=0038 gs= ss=0023
Stack trace:
Frame Function
Hi again,
I found something that can be called a bug.
The command line and the output (shortened):
$ wget -k www.seznam.cz
--14:14:28-- http://www.seznam.cz/
=> `index.html'
Resolving www.seznam.cz... done.
Connecting to www.seznam.cz[212.80.76.18]:80... connected.
HTTP request sent
I'm playing around with the wget tool and I ran into a website where I
don't believe -e robots=off works: http://www.quickmba.com/ -- any idea
why?
I've tried a few combinations and I keep on getting this message in the
response.
We're sorry, but the way that you have attempted to
Hi, I've just noticed a weird behavior of wget 1.8.2 while downloading a
partial file with command:
wget http://ardownload.adobe.com/pub/adobe/acrobatreader/unix/5.x/linux-508.tar.gz -c
The connection was very unstable, so it had to reconnect many times. What I
noticed is not a big thing, just
/group/sammydavisjr/message/56
retrieves a standard page (HTTP 200).
Is this a bug (of GET, wget?) or a feature?
I realized this problem when testing two different Java programs to
download pages from a URL. One uses a Java socket, the other uses Java
URLConnection. Well, **even if the request
hi
i tried to download the following:
wget
ftp://ftp.suse.com/pub/suse/i386/7.3/full-names/src/traceroute-nanog_6.1.1-94.src.rpm
this is a symbolic link.
when downloading just this single file, wget should follow the link, but it
only creates a symbolic link.
excerpt from man wget, section
Dan Jacobson [EMAIL PROTECTED] writes:
And stop making me have to confirm each and every mail to this list.
Hrvoje Currently the only way to avoid confirmations is to
Hrvoje subscribe to the list. I'll try to contact the list owners
Hrvoje to see if the mechanism can be improved.
And stop making me have to confirm each and every mail to this list.
Hrvoje Currently the only way to avoid confirmations is to subscribe to the
Hrvoje list. I'll try to contact the list owners to see if the mechanism can
Hrvoje be improved.
subscribe me with the nomail option, if it can't be
Here is debug output
:/FTPD# wget ftp://ftp.dcn-asu.ru/pub/windows/update/winxp/xpsp2-1224.exe -d
DEBUG output created by Wget 1.8.1 on linux-gnu.
--13:25:55--
The problem is that the server replies with "login incorrect", which
normally means that authorization has failed and that further retries
would be pointless. Other than having a natural language parser
built-in, Wget cannot know that the authorization is in fact correct,
but that the server
Kempston [EMAIL PROTECTED] writes:
Yeah, i understand that, but lftp handles it fine even without
specifying any additional option ;)
But then lftp is hammering servers when real unauthorized entry
occurs, no?
I'm sure you can work something out
Well, I'm satisfied with what Wget does now.
Wget doesn't work properly when the URL contains characters which are not
allowed in file names
on the file system which is currently used. These are often '\', '?', '*'
and ':'.
Affected are at least:
- Windows and related OS
- Linux when using FAT or Samba as file system
Possibility to solve:
On
Frank Klemm [EMAIL PROTECTED] writes:
Wget doesn't work properly when the URL contains characters which are
not allowed in file names on the file system which is currently
used. These are often '\', '?', '*' and ':'.
Affected are at least:
- Windows and related OS
- Linux when using FAT or
PROTECTED]
Sent: Friday, October 17, 2003 7:18 PM
To: Tony Lewis
Cc: Wget List
Subject: Re: Wget 1.8.2 bug
Tony Lewis [EMAIL PROTECTED] writes:
Hrvoje Niksic wrote:
Incidentally, Wget is not the only browser that has a problem with
that. For me, Mozilla is simply showing the source
??? ?? [EMAIL PROTECTED] writes:
I've seen pages that do that kind of redirections, but Wget seems
to follow them, for me. Do you have an example I could try?
[EMAIL PROTECTED]:~/ /usr/local/bin/wget -U
All.by -np -r -N -nH --header=Accept-Charset: cp1251, windows-1251, win,
Hrvoje Niksic wrote:
Incidentally, Wget is not the only browser that has a problem with
that. For me, Mozilla is simply showing the source of
http://www.minskshop.by/cgi-bin/shop.cgi?id=1&cookie=set, because
the returned content-type is text/plain.
On the other hand, Internet Explorer will
Tony Lewis [EMAIL PROTECTED] writes:
Hrvoje Niksic wrote:
Incidentally, Wget is not the only browser that has a problem with
that. For me, Mozilla is simply showing the source of
http://www.minskshop.by/cgi-bin/shop.cgi?id=1&cookie=set, because
the returned content-type is text/plain.
On
I use wget 1.8.2.
When I try to recursively download site site.com, where
the first page site.com/ redirects to site.com/xxx.html, which has its first
link in the page pointing back to site.com/,
then Wget downloads only xxx.html and stops.
Other links from xxx.html are not followed!
Sergey Vasilevsky [EMAIL PROTECTED] writes:
I use wget 1.8.2. When I try to recursively download site site.com, where
the first page site.com/ redirects to site.com/xxx.html, which has its
first link pointing back to site.com/, Wget downloads only xxx.html and
stops. Other links from xxx.html are not followed!
Hello,
with this download you will get a segfault.
wget --passive-ftp --limit-rate 32k -r -nc -l 50 \
-X */binary-alpha,*/binary-powerpc,*/source,*/incoming \
-R alpha.deb,powerpc.deb,diff.gz,.dsc,.orig.tar.gz \
ftp://ftp.gwdg.de/pub/x11/kde/stable/3.1.4/Debian
Philip Stadermann [EMAIL
You're right -- that code was broken. Thanks for the patch; I've now
applied it to CVS with the following ChangeLog entry:
2003-10-15 Philip Stadermann [EMAIL PROTECTED]
* ftp.c (ftp_retrieve_glob): Correctly loop through the list whose
elements might have been deleted.
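A generic sketch of the pattern behind that fix (hypothetical list type,
not wget's actual struct): save the next pointer before the current element
can be deleted.

  struct node { struct node *next; /* payload omitted */ };

  static void
  process_list (struct node *head,
                int (*should_delete) (struct node *),
                void (*delete_node) (struct node *))
  {
    struct node *f, *next;
    for (f = head; f; f = next)
      {
        next = f->next;   /* still valid even if f is freed below */
        if (should_delete (f))
          delete_node (f);
      }
  }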
Stephen Hewitt [EMAIL PROTECTED] writes:
Attempting to mirror a particular web site, with wget 1.8.1, I got
many nested directories like .../images/images/images/images etc. For
example, the log file ended like this:
[...]
Thanks for the detailed report and for taking the time to find the
From: Gisle Vanem [mailto:[EMAIL PROTECTED]
Jens Rösner [EMAIL PROTECTED] said:
...
I assume Heiko didn't notice it because he doesn't have that function
in his kernel32.dll. Heiko and Hrvoje, will you correct this ASAP?
--gv
Probably.
Currently I'm compiling and testing on NT 4.0
and the output was exactly the same.
I then tested
wget 1.9 beta 2003/09/18 (earlier build!)
from the same place and it works smoothly.
Can anyone reproduce this bug?
Yes, but the MSVC version crashed on my machine. But I've found
the cause: it was caused by my recent change :(
A simple case of wrong
Gisle Vanem [EMAIL PROTECTED] writes:
--- mswindows.c.org Mon Sep 29 11:46:06 2003
+++ mswindows.c Sun Oct 05 17:34:48 2003
@@ -306,7 +306,7 @@
 DWORD set_sleep_mode (DWORD mode)
 {
   HMODULE mod = LoadLibrary ("kernel32.dll");
-  DWORD (*_SetThreadExecutionState) (DWORD) = NULL;
+
Hi,
doing the following:
# /tmp/wget-1.9-beta3/src/wget -r --timeout=5 --tries=1
http://weather.cod.edu/digatmos/syn/
--11:33:16-- http://weather.cod.edu/digatmos/syn/
=> `weather.cod.edu/digatmos/syn/index.html'
Resolving weather.cod.edu... 192.203.136.228
Connecting to
This problem is not specific to timeouts, but to recursive download (-r).
When downloading recursively, Wget expects some of the specified
downloads to fail and does not propagate that failure to the code that
sets the exit status. This unfortunately includes the first download,
which should
OK, I see.
But I do not agree.
And I don't think it is a good idea to treat the first download special.
In my opinion, exit status 0 means everything during the whole
retrieval went OK.
My preferred solution would be to set the final exit status to the highest
exit status of all individual
on many platforms that Wget supports.
The issue will likely be addressed in 1.10.
Having said that:
I tried the patch from Debian bug report 137989 and it didn't work. Can
anybody explain:
1 - why I have to make two directories for the patch to work: one
wget-1.8.2.orig and one wget-1.8.2?
You don't. Just enter
I tried the patch from Debian bug report 137989 and it didn't work. Can anybody explain:
1 - why I have to make two directories for the patch to work: one wget-1.8.2.orig and one
wget-1.8.2?
2 - why after compilation wget still can't download a file > 2GB?
note: I cut the patch for debian use ( the first
It's probably a bug:
bug: when downloading
wget --mirror ftp://somehost.org/somepath/3acv14~anivcd.mpg,
wget saves it as-is, but when downloading
wget ftp://somehost.org/somepath/3*, wget saves the files as 3acv14%7Eanivcd.mpg
--
The human knowledge belongs to the world
Hi Jack :)
* Jack Pavlovsky [EMAIL PROTECTED] dixit:
It's probably a bug:
bug: when downloading
wget --mirror ftp://somehost.org/somepath/3acv14~anivcd.mpg,
wget saves it as-is, but when downloading
wget ftp://somehost.org/somepath/3*, wget saves the files as
3acv14%7Eanivcd.mpg
Jack Pavlovsky [EMAIL PROTECTED] writes:
It's probably a bug: bug: when downloading wget --mirror
ftp://somehost.org/somepath/3acv14~anivcd.mpg, wget saves it as-is,
but when downloading wget ftp://somehost.org/somepath/3*, wget saves
the files as 3acv14%7Eanivcd.mpg
Thanks for the report
Not sure if this is a bug or not.
i cannot get a file over 2GB (i get a MAX file Exceeded error message).
this is on a redhat 9 box. GNU Wget 1.8.2,
Thanks
Randy
Randy Paries [EMAIL PROTECTED] writes:
Not sure if this is a bug or not.
I guess it could be called a bug, although it's no simple oversight.
Wget currently doesn't support large files.
how do I get off this list? I tried a few times before but
got no response from the server.
thank you-
Matt
-Original Message-
From: Hrvoje Niksic [mailto:[EMAIL PROTECTED]
Sent: Tuesday, September 23, 2003 8:53 PM
To: Randy Paries
Cc: [EMAIL PROTECTED]
Subject: Re: bug maybe