Follow-up Comment #2, bug #23281 (project wget):
This bug was opened back when I was a maintainer, which I'm not now, nor am I
actually involved in wget development in any way these days, so perhaps this
should be reconsidered by the current maintainer.
But yes, improving gnu_getpass would
On Mon, Oct 20, 2014 at 7:02 PM, Yousong Zhou yszhou4t...@gmail.com wrote:
I am not sure here. Do we always assume sizeof(char) to be 1 for
platforms supported by wget?
FWIW, sizeof(char) is always 1 by definition; the C standard
guarantees it. Even on systems with no addressable values
On Tue, Oct 21, 2014 at 8:55 AM, Pär Karlsson feino...@gmail.com wrote:
Thanks!
It didn't even occur to me to check this out. My only excuse is gratuitous
consistency and lack of pure C experience; a malloc() without a
corresponding sizeof() seemed a little arbitrary to me, but it makes sense
On Oct 18, 2014 6:43 AM, Bryan Baas bb...@weycogroup.com wrote:
Hi,
I was wondering about the command output of wget. I used a Java Runtime
exec and, although the wget process ended with a 0 completion code, the
results appeared in the error stream and not the output stream.
Hi!
It should
On Sat, Sep 13, 2014 at 1:30 PM, Micah Cowan mi...@addictivecode.org wrote:
There is currently _one_ textcha still in operation. If that
falls, I'll switch to an unanswerable textcha (no one can edit) until
I can figure out a better long-term solution.
Update: editing is now disabled, spammers
Hey folks,
So the Wget Wgiki is still alive and kicking, but dealing with spam is
getting out of hand.
Moin anti-spam works with a combination of global blocklisting, and
textchas (text questions designed to be easy for humans, hard for
robots).
The problem is that eventually a human gets
Anyone have thoughts on a designated prefix (say, make-style -) that
indicates a line that can be safely ignored if not understood?
Might also work to have a pragma thingie in the .wgetrc, to turn
fail-on-error on and off.
Naturally, the value of such a thing wouldn't be seen until wgets
On Mon, Dec 2, 2013 at 3:15 PM, Fernando Cassia fcas...@gmail.com wrote:
Hi Micah,
You are listed as the current maintainer of wget.
Listed where? See http://www.gnu.org/software/wget/ for official
information about wget; I haven't been maintaining it for a few years now.
That'd be Giuseppe
Hi Andrew.
I no longer have much involvement with Wget, other than in discussions. I'm
copying the Wget mailing list in my reply.
To my knowledge, there are no official builds of Wget for Windows (the
developers only provide the source code releases), and neither of the links
you mention are
On Sat, Aug 03, 2013 at 11:50:48PM +0200, Ángel González wrote:
On 03/08/13 21:07, Micah Cowan wrote:
On Sat, Aug 03, 2013 at 04:11:59PM +0200, Tim Rühsen wrote:
As a second option, we could introduce (now or later)
--name-filter-program=program REGEX
The 'program' answers each line
On Fri, Aug 02, 2013 at 11:53:24AM +0200, Tim Ruehsen wrote:
Hi Dagobert,
All this added complexity seems highly overengineered for a feature
that is not in the core functionality of the tool and that only a
fraction of the users use. Keep in mind: a good tool is one that does
a single
On Thu, Aug 01, 2013 at 01:24:12PM +0200, Giuseppe Scrivano wrote:
Tim Ruehsen tim.rueh...@gmx.de writes:
That is basically a good idea.
Do you have in mind to keep as close to the standard CGI environment
variables
as possible? Or are you thinking of the CGI environment principle?
On Fri, Jul 26, 2013 at 02:30:00PM -0400, Andrew Cady wrote:
Incidentally, the former maintainer of wget, Micah Cowan, actually
started working on a wget competitor (so to speak) based on a plugin
architecture designed around this concept:
Thanks for the mention. :)
Not plugins; it's based
From: Towle, Jonathan J. jonathan.j.to...@chase.com
To: bug-wget@gnu.org bug-wget@gnu.org
Cc:
Date: Fri, 31 May 2013 15:12:40 +
Subject: [Bug-wget] Compatibility
We are running various versions of wget on our servers.
On the server we need to use, we are running the very old version
On Fri, May 10, 2013 at 12:25 AM, Hauke Hoffmann
haukebjoernhoffm...@googlemail.com wrote:
Is that correct or is that a typo in the manpage so that it should be:
/Set number of tries to number. Specify 0 or inf for infinite
retrying. [...]/
Great catch!
-mjc
On Sat, May 4, 2013 at 12:53 AM, Micah Cowan mi...@cowan.name wrote:
I just did mass deletions of pages and users.
I MAY HAVE ACCIDENTALLY DELETED IMPORTANT PAGES AND/OR REAL USERS
I failed to follow this up with the more important point: I have
backups, and can restore individual pages
I believe you want -H -D gnu.org. That's what it's for. Wget doesn't
know which hostnames under a domain should be allowed and which should
not be (do you want images.gnu.org? git.gnu.org? lists.gnu.org?), so
turns 'em all off unless you ask for them explicitly.
HTH,
-mjc
On Thu, May 2, 2013 at
On Thu, May 2, 2013 at 9:00 PM, Tim Ruehsen tim.rueh...@gmx.de wrote:
Darshit, I guess you are talking about redirection.
That is, 'wget -r gnu.org' is redirected to www.gnu.org (via a Location
header). Wget now follows the redirection, but only downloads index.html
since
all included
On Thu, May 2, 2013 at 12:32 PM, Tim Rühsen tim.rueh...@gmx.de wrote:
On Thursday, 2 May 2013, Micah Cowan wrote:
Ah, yeah that's a decent point. I like it, but then, we run into
name-trusting problems along the lines of why --trust-server-names was
introduced, if we just happily
On 04/01/2013 11:35 PM, Noël Köthe wrote:
On Monday, 01.04.2013, at 20:53 -0700, RJ wrote:
Is it possible to get a copy of the Latex source code for the Wget
instruction manual?
I would like to print a few copies for instructional purposes, but
need to first compile in a smaller font.
It
On 03/14/2013 12:15 PM, Darshit Shah wrote:
In fact I wrote this to specifically expand command line options, since
bash did not expand the tilde in the filename I gave through the
command line.
Here is the output I got.
$ wget --post-file=~/vimrc www.example.com
--2013-03-15
On 12/09/2012 04:11 AM, Giuseppe Scrivano wrote:
7382...@gmail.com writes:
I think wget should support HTTP compression (Accept-Encoding: gzip, deflate). It
would put less strain on the servers being downloaded from, and use less of
their bandwidth. Is it okay to add this idea to the
On 12/09/2012 02:45 AM, Tim Rühsen wrote:
On Saturday, 8 December 2012, 7382...@gmail.com wrote:
Hello
I think wget should support HTTP compression (Accept-Encoding: gzip, deflate). It
would put less strain on the servers being downloaded from, and use less of
their bandwidth. Is it okay to add this
On 11/17/2012 02:24 PM, Voytek Eymont wrote:
what's the best way to reduce the log verbosity to a minimum?
Is the -nv option perhaps what you're looking for?
-mjc
On 10/30/2012 01:30 PM, Ángel González wrote:
On 30/10/12 19:37, Alex wrote:
Greetings, Dmitry Bogatov.
Thanks for reply.
Yes, thanks, it may be possible to get the full file list, convert it to a
readable codepage and rename the files. Sorry, inertia of thinking - Far
1.75 ever can't find|open
On 08/30/2012 10:50 PM, Andriansah wrote:
Dear bug tracker
I have finished downloading a file with the command
wget -c --content-disposition 'link_download'
Accidentally I ran that program again and, surprisingly, it downloaded the
file again.
I'm using the latest version of wget, 1.13.4, on Ubuntu 12.04
Hi Patrick. I don't maintain (or do development) on wget currently. I'm
copying to the Wget mailing list. Thanks!
-mjc
On 08/30/2012 09:21 AM, Patrick Castet wrote:
hi,
about:
man wget
wget --help
--spider option
just this humble suggestion:
present man:
On 08/24/2012 08:56 AM, Tim Ruehsen wrote:
Meanwhile I am working on more test routines. So far it's only kind of unit
testing. But after finishing that, I'll write a small test http/https server
(using mget net routines) that could offer as many tests as we need
(timeouts,
authorization,
On 08/22/2012 03:03 PM, David Linn wrote:
Hi,
I was wondering how to maintain a git branch where I can test things out.
Running bootstrap and configure generates a bunch of extra files. Do these
files need to be tracked? Can I just put all of them in .gitignore? I'm
new to git (version
On 08/13/2012 02:01 AM, Tim Ruehsen wrote:
And now back to Micah and Niwt. How can we join forces?
It should make sense to share code / libraries and parts of the test code.
It should be noted that I chose a MIT/2-clause BSD-style license for
Niwt, so any sharing would necessarily be
On 08/09/2012 12:42 AM, ptrk mj wrote:
Greetings everyone,
I'd like to know what is the technical difference between
Connection closed at byte x.
and
Read error at byte x/y (Connection timed out).
AIUI,
Connection closed at byte x means that the remote end closed the
connection while
On 08/07/2012 11:29 PM, Ray Satiro wrote:
From: Alex gnfa...@rambler.ru
[...]
First three with assertion ('ioctl() failed. The socket could not be set as
blocking.').
That's fatal. You shouldn't see 'ioctl() failed. The socket could not be set
as blocking.'. I don't see that on Vista
On Thursday 24 May 2012 17:14:30 Mike Frysinger wrote:
On Thursday 24 May 2012 16:31:04 Giuseppe Scrivano wrote:
Mike Frysinger vap...@gentoo.org writes:
Newer versions of openssl ship with pkg-config files, so if we can
detect it via those, do so. If that fails, fall back to the classic
On 08/07/2012 06:18 PM, illusionoflife wrote:
On Tuesday, August 07, 2012 11:08:40 you wrote:
I think the maintainer is aware that Wget's code quality is poor, and
would welcome sweeping architectural changes; I know I would have, when
I was maintainer.
Of course, but we can have different
On 07/27/2012 07:02 AM, Hongyi Zhao wrote:
But, when I ssh'ed to the remote HPC of my university and then run the
same command as mentioned above, I'll meet the following issue:
-
zhaohongsheng@node32:~ wget -c http://registrationcenter-download.intel.com/akdlm/irc_nas/2699/
Presumably, there are a number of other new HTML5 tags whose attributes
should be getting checked as well.
-mjc
On 07/11/2012 09:37 PM, Timothy Gibbon wrote:
Hello,
wget currently ignores the source tag used in HTML5 video/audio:
http://www.w3.org/wiki/HTML/Elements/video
Attached is a
On 07/09/2012 10:24 PM, Owen Watson wrote:
Would --local-encoding=UTF-8 fix it?
Unlikely. IIRC, that changes how wget behaves in terms of deciding how
to translate non-ascii URLs (IRIs) on the command-line, and I think how
it saves non-ascii file names, but I don't believe it will modify file
Yes, --post-data has been around for quite some time, so you should be
fine, at least as far as form-based data submission is concerned.
-mjc
On 07/06/2012 02:17 AM, Gargiulo Antonio (EURIS) wrote:
Now I’ve another question for you.
On our environment machine, we can upgrade only the wget
On 06/22/2012 08:50 AM, illusionoflife wrote:
On Thursday, June 21, 2012 13:39:07 you wrote:
IIRC, that was to allow the URL-extraction portion of wget to be built
stand-alone, so that it would create a tool that just extract URLs and
spit them out, and not as part of some wget run.
Gah. This
On 06/21/2012 11:12 AM, illusionoflife wrote:
Hello, Free Hackers!
Recently, I got the idea to feed the wget sources to the GNU complexity tool
and to try to simplify some of its extremely long functions. While exploring,
I found that we have two independent implementations of *read_whole_line* in
On 06/21/2012 01:33 PM, Micah Cowan wrote:
On 06/21/2012 11:12 AM, illusionoflife wrote:
Hello, Free Hackers!
Recently, I got the idea to feed the wget sources to the GNU complexity tool
and to try to simplify some of its extremely long functions. While exploring,
I found that we have two independent
On 06/21/2012 04:08 PM, John wrote:
Hello.
After looking at the manual for wget while online here
https://www.gnu.org/software/wget/manual/wget.html, I created the
following command to download it:
wget --secure-protocol=auto --convert-links --page-requisites
On 06/20/2012 11:19 AM, John wrote:
Jochen Roderburg roderb...@uni-koeln.de wrote in message
news:20120614153556.18525j321wre0...@webmail-test.rrz.uni-koeln.de...
Zitat von Jochen Roderburg roderb...@uni-koeln.de:
Search an archive for this mailing list. ;-)
Windows test builds from
On 06/18/2012 03:42 AM, Jan Engelhardt wrote:
On Sunday 2012-06-17 22:33, Giuseppe Scrivano wrote:
Hi,
please report these problems to the translation project[1], translation
files are not maintained by us but we just distribute them.
Thanks,
Giuseppe
1) http://translationproject.org
On 06/16/2012 08:31 AM, Ángel González wrote:
On 16/06/12 12:07, jjDaNiMoTh wrote:
Hi list,
It's not a bug, but I don't find any other ml for this awesome project.
Don't worry. It's the appropriate list.
I want to download files from a Web Server which hasn't the Range
support, so
On 06/09/2012 11:39 AM, Ángel González wrote:
On 08/06/12 18:26, hito...@mpi-inf.mpg.de wrote:
Hi,
I have a problem when using --convert-links (-k) on a utf-8 encoded web page.
How to reproduce is:
wget -k --restrict-file-names=nocontrol
On 06/06/2012 03:08 AM, Fernando Cassia wrote:
I think if you look at the source (log-in form) there' s a session
token there, apparently handled via Javascript.
I don't know whether Javascript may be modifying the form or not, but
there are clearly several input items to the login-form form
On 06/05/2012 12:47 AM, Mr Cracker wrote:
hi all
I want to add option to wget to able it download a file with multi
connections like Axel.
and now I am looking for any help or idea.
thanks.
Thanks for your interest in this feature.
The functionality for doing that is being actively developed
The behavior you described is because you haven't properly quoted the
*. The shell will interpret the * first, and if there is even a single file
in the current directory, the shell will expand it to that file (and any
others), BEFORE it calls wget. Make sure to quote the * properly (say,
-R '*', instead
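The expansion is easy to see without involving wget at all; in this sketch (file names invented for the demo), echo stands in for wget and prints the arguments it actually receives:

```shell
# Work in a scratch directory so the unquoted * has files to match.
dir=$(mktemp -d)
cd "$dir"
touch index.html photo.jpg

echo -R *      # unquoted: prints "-R index.html photo.jpg"
echo -R '*'    # quoted: prints "-R *", the literal pattern

cd / && rm -r "$dir"
```

The same rule applies to any glob character in an accept/reject pattern: quote it, or the shell rewrites it before wget runs.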
On 05/03/2012 03:02 AM, Castet JR wrote:
*** THE JOB SEEMS TO BE STUCKED ON THIS DAMNED TRAP
www.cisco.com/web/fw/c/global_print.css ***
If I had to guess, you're slamming their system too hard, with too many
requests, so they stop sending to your IP for a while. Try your wget
runs with
In both cases, your shell is transforming your arguments before wget
gets a chance to see it.
Don't percent-encode ampersands - they need to be literal ampersands in
order to maintain their function as separating key/value pairs.
Instead, be sure to wrap the URL in double quotes (""), to protect
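This is simple to verify offline; the helper below (invented for the demo, with a placeholder URL) just reports how many arguments actually arrive:

```shell
# count_args stands in for wget: it reports how many arguments it got.
count_args() { echo "$# argument(s): $1"; }

# Quoted: the & is literal and the whole URL is one argument.
count_args "http://example.com/get?file=a.zip&token=123"

# Unquoted, the shell would instead treat & as the background operator
# and cut the command line at that point - which is why the quotes matter.
```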
On 04/27/2012 04:02 PM, z...@telia.com wrote:
Welcome to Windows' infamous DLL hell ;)
Several years ago I read Gordon Letwin's book Inside OS/2. There he speaks
very, very
highly of the concept of Dynamically Linked Libraries.
Well, surely DLLs are a vast improvement over static
Probably in combination with -nd (no directories), -k (convert links)
and -E (adjust filename extensions).
-mjc
On 04/19/2012 08:27 AM, Tony Lewis wrote:
You're looking for:
--page-requisites    get all images, etc. needed to display HTML page.
wget URL --page-requisites
should give
On 04/13/2012 01:44 AM, Tim Ruehsen wrote:
On Thursday 12 April 2012, Micah Cowan wrote:
On 04/12/2012 01:23 AM, TeeRasen wrote:
In main.c we have
opt.progress_type = dot;
In C, a string literal is of type char[] (which automatically transforms
to char*), not const char
On 04/12/2012 01:23 AM, TeeRasen wrote:
In main.c we have
opt.progress_type = dot;
In C, a string literal is of type char[] (which automatically transforms
to char*), not const char[] or const char* (even though one must still
not modify it). You're either compiling with C++ (a bad
On 04/12/2012 03:13 PM, David H. Lipman wrote:
From: Ángel González keis...@gmail.com
On 12/04/12 18:23, David H. Lipman wrote:
From: David H. Lipman dlip...@verizon.net
Is it possible to add; --trust-server-names
To the WGETRC file ?
Nevermind. It wasn't in the version of my
On 04/10/2012 08:52 AM, Tim Ruehsen wrote:
Meanwhile, I wrote a simple proof of concept (parallel dummy downloads using
threads, dummy downloading of chunks, etc.).
I am at the point where I want to implement HTTP-Header metalink (RFC 6249).
I just can't find any servers to test with... maybe
On 04/10/2012 10:34 PM, illusionoflife wrote:
Yes, you are right: I missed that perl module. 68/69 now.
One stupid question: are these tests meant to be run by users
building from source, or by developers?
Well, the more people running them, the better, but the main purpose for
them was for
On 04/04/2012 12:02 PM, Ángel González wrote:
On 04/04/12 20:16, Gijs van Tulder wrote:
1. You can match complete urls, instead of just the directory prefix
or the file name suffix (which you can do with --accept and
--include-directories).
2. You can use regular expressions to do the
On 03/29/2012 11:23 AM, Giuseppe Scrivano wrote:
Tim Ruehsen tim.rueh...@gmx.de writes:
Hi,
the wget man page says a timeout value of 0 means 'forever'.
Even if seldom used, 0 seems to be a legal value.
it can't be a legal value. It means the value you are waiting for is
immediately
On 03/18/2012 03:24 PM, JD wrote:
When using wget with the -c option, it does recover and resume the download
after network failures. However, after it finishes the download (in my case
downloading
Fedora-16-i386-DVD.iso), I run the sha256sum on the downloaded ISO and it is
completely
Binary packages aren't provided on the GNU web site (for Windows, nor
Unixen). Did you download the Wget sources and build them yourself - and
if so, what did you use? Cygwin? Msys?
-mjc
On 03/19/2012 11:29 AM, JD wrote:
The Fedora Distribution does not list MD5 sums. Only sha256 sums.
Also,
On 03/18/2012 11:50 AM, Boris Bobrov wrote:
In a message on Sunday 18 March 2012 at 03:15:01, Micah wrote:
On 03/17/2012 09:45 AM, Boris Bobrov wrote:
Hello!
I've noticed the task with adding concurrency to wget and was really
happy to see that wget will soon get that feature - I needed it a
On 03/19/2012 01:13 PM, JD wrote:
I am sorry -
Range requests??
How can I see that when I run wget -c
You're asking for info I am at a loss as to how to obtain.
Sorry, I was slipping into potential technical explanations. You don't
need to know what ranged requests are.
As long as you
I think you're misunderstanding what was supposed to happen.
The robots.txt file is only followed for links that wget is
automatically following. This means (a) wget has to be in
recursive-descent mode (-r or -m), and (b) it only applies to links that
weren't explicitly requested by the user. In
On 03/17/2012 09:45 AM, Boris Bobrov wrote:
Hello!
I've noticed the task with adding concurrency to wget and was really happy to
see that wget will soon get that feature - I needed it a lot some time ago.
I would also like to implement that feature. But I've got some question
beforehand.
On 03/14/2012 06:03 AM, Borja Ruiz-Castro wrote:
This can lead to race-condition attacks and privilege escalation.
The newly downloaded file must be owned by the user who executed the wget command.
This race condition exists for every program that writes to an existing
file, or any shell redirection.
I believe hh's suggestion is to have the format reflect the way it would look
in a URL; so [ and ] around ipv6, and nothing around ipv4 (since ipv4 format
isn't ambiguous in the way ipv6 is).
(Sent by my Kindle Fire)
-mjc
One likely explanation for this difference would be if samplefile ends
with a newline.
Note that the shell will strip any trailing newlines after expanding a
command substitution like $(cat samplefile).
HTH,
-mjc
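The stripping is easy to observe; this sketch (temporary file created just for the demo) compares the byte count on disk with the string length after command substitution:

```shell
f=$(mktemp)
printf 'hello\n' > "$f"        # 6 bytes on disk, including the newline

body=$(cat "$f")               # $(...) strips ALL trailing newlines
echo "${#body}"                # prints 5: the newline is gone

rm "$f"
```

This is exactly the kind of off-by-one-byte difference that shows up when comparing a downloaded file against a string captured from it.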
(2011-12-22 06:36), Guillaume Baudot wrote:
Hello there.
Using wget version
(2011-12-16 11:19), Ángel González wrote:
Maciej Pilichowski wrote:
Of course I wish for behavior not for the exactly this
wording allow-directories :-).
Can't you do it with --include-directories ?
I believe it doesn't chain the way he wants. --no-parents
--include-directories would
A cwd to where? Normally one wants to download into the current
directory (particularly if you have placed wget's location in the PATH).
If you want it somewhere else, just use the -P option.
-mjc
(2011-11-30 10:31), Henrik Holst wrote:
Could this be solved by having wget do a change to cwd,
Also, I think Tony Lewis's updated NTLM support went in sometime after
1.11.4.
(To my knowledge, which may have gaps, the only Redhat-modified wgets
that were seriously different from GNU's, was the 1.10.2 series, which
included many incomplete/untested bits from 1.11.4, and maybe a couple
of
(10/14/2011 03:24 PM), Vishwanath Reddy Beemidi wrote:
Hi,
I have trouble getting wget to work when downloading a file using http.
curl works fine for the same URL.
It would appear that the site in question requires NTLM authentication,
but you haven't supplied a username/password
On 09/27/2011 10:22 PM, Steven M. Schweda wrote:
It's still early, but here are the initial complaints...
lib/snprintf.c now ignores HAVE_SNPRINTF. In previous wget
versions, I could compile snprintf.c and not get a redundant
snprintf() if HAVE_SNPRINTF was defined (%LINK-W-MULDEF,
On 09/28/2011 06:39 AM, Steven M. Schweda wrote:
From: Micah Cowan mi...@cowan.name
In this case, the logic that does a rename of snprintf seems to be at
the end of vasnprintf.h rather than directly in snprintf.c.
Those aren't the droids you're looking for. Try lib/stdio.in.h
(which I
(09/23/2011 01:14 PM), Kevin Doty wrote:
I am using the short script below using wget to try to download files
like narrmonhr-a_221_19790101_2100_000.grb from the location
http://nomads.ncdc.noaa.gov/thredds/catalog/narrmonthly/197901/19790101/.
The resulting log file is shown below the
On 08/19/2011 12:18 AM, H.Merijn Brand wrote:
With HP-UX 11.00 and HP C-ANSI-C it doesn't even *compile* anymore!
(Re Support of non-linux OS's going down the drain?)
If folks would like to see better support for non-GNU/Linux platforms,
then folks using those platforms might do well to
On 08/12/2011 11:56 AM, phil curb wrote:
I've been looking at downloading a site that's on archive.org
Archive.org's TOS on their website expressly forbids the use of
downloading agents, and names wget explicitly.
All URLs on archive.org always point at the _original_ (either modern,
or
(06/29/2011 04:42 AM), David H. Lipman wrote:
From: Giuseppe Scrivano gscriv...@gnu.org
though these tools have two problems, first of all they are not free.
RoboCopy is free and was even included in the NT Resource Kit.
(By free, Giuseppe was of course speaking of freedom (free software
or
On 06/12/2011 06:34 AM, david chou wrote:
If there is a robots file,
how could I retrieve files from
that site? I've used the -e robots=off instruction,
but it still cannot mirror a web site.
Use the --debug flag to diagnose, and compare the output with
If you read the most recent output of wget that you gave (after quoting
the URL), it _does_ treat the string of characters as a whole URL. The
server redirects it to a shorter URL. If I enter that same URL into a
browser, it does the same redirection there, and results in an HTML
page, just
So... looks like it works, then. Your command shell isn't complaining
about weird command names, wget is clearly requesting the full and
correct URL, it follows redirections, and saves using the final
redirection URL (the latest sources wouldn't follow that last step -
it'd save using the request
(05/25/2011 08:05 AM), Thiago Braga de Souza wrote:
Thanks for the help, but now I have another problem. Why is it not accepting
the username and password that I am giving it?
Does that put the commands in the wrong order?
If it's the newer-version NTLM authentication, there aren't
(05/25/2011 01:37 PM), Micah Cowan wrote:
(05/25/2011 07:28 AM), Mohamed Elsayed wrote:
After I had been downloading Youtube video, I found the video's size was
not correct, then I checked on status code, but I found it equals 1024.
I searched for it via Internet, but I found nothing. Do you
As you've discovered the IRI support doesn't change anything about how
filenames are saved; it only translates between IRIs and URIs (which,
since there are no IRIs involved here, doesn't affect anything).
As a workaround until filename transcoding is supported in wget, you may
find that
(05/24/2011 06:40 AM), Thiago Braga de Souza wrote:
Hello,
My name is Thiago and I'm starting to use the tool WGET. Usually I can fetch
my company's downloads page as HTML. With WGET I cannot. Can you help me?
Following is the error message below.
Thank you.
C:\set
(05/09/2011 12:39 PM), Giuseppe Scrivano wrote:
Yang Zhang yanghates...@gmail.com writes:
I mentioned --include-directories in my original email. I couldn't
figure out how to use it to this effect. Could you demonstrate?
have you already tried the following one?
wget -r -I /host/foo/
On 04/24/2011 05:36 PM, David Skalinder wrote:
On 04/19/2011 06:07 AM, Alexander Moser wrote:
Hi!
I'm missing a command-line switch for a search mask when following a link.
So, if I have more than one ZIP in an HTML file, I can currently only download
all of them but not just one, because the names of the files
On 04/19/2011 06:07 AM, Alexander Moser wrote:
Hi!
I'm missing a command-line switch for a search mask when following a link.
So, if I have more than one ZIP in an HTML file, I can currently only download all
of them but not just one, because the names of the files have revision information in the
filename
On 04/07/2011 05:26 AM, Giuseppe Scrivano wrote:
David Skalinder da...@skalinder.net writes:
I want to mirror part of a website that contains two links pages, each of
which contains links to many root-level directories and also to the other
links page. I want to download recursively all the
On 03/31/2011 03:45 PM, Karl Berry wrote:
Then I thought I would try
to exclude the numerous stats items, but failed. I tried
wget -m -np -nv -R login.php -X stats
http://sourceforge.net/projects/biblatex-biber/files/biblatex-biber/current/
wget -m -np -nv -R login.php,stats
(03/30/2011 02:37 PM), Karl Berry wrote:
The bug (?) -- running
wget -m -np -nv \
http://sourceforge.net/projects/biblatex-biber/files/biblatex-biber/current/
ends up downloading many things above that directory, despite the -np.
Doesn't that seem wrong?
This is with wget 1.12 compiled
Thanks Tony.
I wonder if it's possible that that file is a redirection from a
correct URL. Because wget would expect to download all URLs from a
redirection, and would use the redirected name (but AIUI the current dev
sources wouldn't use that name without --trust-server-name or something).
In
On 03/27/2011 11:36 PM, Cory Sanders wrote:
wget -r ftp://myurl/test.txt --ftp-user=username
--ftp-password=password
The above does not erase the existing file test.txt from my directory. The
download says it has downloaded the new file, which is 139 bits:
On 03/22/2011 07:26 AM, Hrvoje Niksic wrote:
Sebastian Pipping sebast...@pipping.org writes:
I noticed that wget writes data to disk as it comes in.
This is not strictly true; it is up to the OS to write the data to disk.
What Wget does is that it doesn't hold the data in stdio buffers after
(03/21/2011 10:39 AM), Alexander Chernyavsky wrote:
Hello,
I'm doing wget http://tex.imm.uran.ru/tex/beameruserguide.pdf -O
beameruserguide.pdf. Download completes and ls -l shows that file is
dated as 2005-10-23 20:48. Date command shows Mon Mar 21 20:37:56 MSK
2011.
I'm reading in
(03/21/2011 02:41 PM), Sebastian Pipping wrote:
Micah replied to me that he is no longer actively maintaining wget.
The man page of wget 1.12 reads
Currently maintained by Micah Cowan mi...@cowan.name
It has already been updated. At the time wget 1.12 was released, I was
the maintainer
(03/21/2011 02:44 PM), Micah Cowan wrote:
(03/21/2011 02:41 PM), Sebastian Pipping wrote:
Micah replied to me that he is no longer actively maintaining wget.
The man page of wget 1.12 reads
Currently maintained by Micah Cowan mi...@cowan.name
It has already been updated. At the time wget
(03/21/2011 02:47 PM), Sebastian Pipping wrote:
On 03/21/2011 10:44 PM, Micah Cowan wrote:
(03/21/2011 02:41 PM), Sebastian Pipping wrote:
Micah replied to me that he is no longer actively maintaining wget.
The man page of wget 1.12 reads
Currently maintained by Micah Cowan mi
(03/17/2011 06:13 AM), Steven M. Schweda wrote:
I have prepared a new alpha release containing the last changes:
ftp://alpha.gnu.org/gnu/wget/wget-1.12-2460.tar.bz2
[...]
About a year ago (back in the Cowan era?), I was looking at something
like: