Hi wget list!
Is it intended that
wget -Pd:\goog http://www.google.com/;
works, whereas
wget -Pd:\goog\ http://www.google.com/;
gives the error message
wget: missing URL
?
Running wget 1.10 on Windows XP.
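For what it's worth, the following (an untested guess) avoids the trailing-backslash
problem here, since wget on Windows also accepts forward slashes:
wget -Pd:/goog http://www.google.com/;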
Cheers
Jens
Hi Hrvoje,
Thanks for the detailed report!
Thanks for your detailed answer ;-)
Jens Schleusener [EMAIL PROTECTED] writes:
1) Only when using the configure option --disable-nls and the C compiler
gcc 4.0.0 does the wget binary build successfully
I'd be interested in seeing the error log without
Hi Hrvoje,
Jens Schleusener [EMAIL PROTECTED] writes:
--12:36:51-- http://www.example.com/
=> `index.html'
Resolving www.example.com... failed: Invalid flags in hints.
This is really bad. Apparently your version of getaddrinfo is broken
or Wget is using it incorrectly. Can
Jens
--
Dr. Jens Schleusener    T-Systems Solutions for Research GmbH
Tel: +49 551 709-2493   Bunsenstr. 10
Fax: +49 551 709-2169   D-37073 Goettingen
[EMAIL PROTECTED] http://www.t-systems.com/
=\"/usr/local/contrib/share/locale\" -O -c connect.c
host.h, line 52.17: 1506-275 (S) Unexpected text ',' encountered.
Again, removing the ',' on line 52 helps, and I got a working wget
binary.
Greetings
Jens
P.S.: Under Linux (SuSE 9.3) everything compiles and works well.
--
Dr. Jens Schleusener
listing. When mirroring, index.html will
be re-written if/when it has changed on the server since the last mirroring.
and I expect that problem to be
corrected in future wget versions.
You expect??
Jens
. Thus, it created an index.html as
Tony Lewis explained. Now, _you_ uploaded (if I understood correctly) the
copy from your HDD but did not save the index.html. Otherwise it would be
there and it would work.
Jens
(having an index.html), but wget
1.5 creates a working mirror - as it is supposed to do.
CU
Jens
with wget 1.5 was just a simple wget15 -m -np URL, and it worked.
So maybe the convert/rename problem/bug was solved with 1.9.1
This would also explain the missing gif file, I think.
Jens
Hi Alan!
As the URL starts with https, it is a secure server.
You will need to log in to this server in order to download stuff.
See the manual for info on how to do that (I have no experience with it).
Good luck
Jens (just another user)
I am having trouble getting the files I want using
Hi!
Yes, I see now, I misread Alan's original post.
I thought he would not even be able to download the single .pdf.
Don't know why, as he clearly said it works getting a single pdf.
Sorry for the confusion!
Jens
Tony Lewis [EMAIL PROTECTED] writes:
PS) Jens was mistaken when he said
Hi Jerry!
AFAIK, RegExp for (HTML?) file rejection was requested a few times, but is
not implemented at the moment.
CU
Jens (just another user)
The -R option is not working in wget 1.9.1 for anything but
specifically hardcoded filenames.
file[Nn]ames such as [Tt]hese are simply ignored
text. You had a line-break between -l and imit...:
wget: reclevel: Invalid specification `imit-rate=50k'.
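The fix is simply to keep the whole option on one line, e.g.:
wget --limit-rate=50k URL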
Furthermore, your version of wget is very old.
Download a newer version for windows here:
http://xoomer.virgilio.it/hherold/
CU
Jens
creates?
Don't you mean create index.html?
CU
Jens
Hi Jorge!
Current wget versions do not support large files (> 2 GB).
However, the CVS version does and the fix will be introduced
to the normal wget source.
Jens
(just another user)
When downloading a file of 2 GB or more, the counter goes crazy; it
should probably use a long instead of an int
these
directories, regardless of what I put in --exclude-directories, but when
it is done fetching the URL, will it then discard those directories?
As far as I can tell from a log file I just created, wget does not follow
links into these directories. So no files downloaded from them.
CU
Jens
in one dir?
Does -np work for you? No-parent means it will only descend (go deeper) into
the directory tree. Not up.
Or try -I downloadDIR, which means that wget will only accept the directory
of the file/directory it is started with.
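For example (host and directory names here are placeholders):
wget -r -np -I /downloadDIR http://example.com/downloadDIR/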
CU
Jens (just another user)
whether the . in your dir name causes any problem.
Good luck!
Jens (just another user)
that, but
http://mrpip.orcon.net.nz/href/asciichar.html
lists
%2E
as the code. Does this work?
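I.e., try something like this (hypothetical URL, untested):
wget http://example.com/my%2Edir/page.html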
CU
Jens
I've also tried this on my linux box running v1.9.1 as well. Same results.
Any other ideas?
Thanks a lot for your tips, and quick reply!
/vjl/
how handy you are in your OS, but this should be doable with
one or two small batch files.
Maybe one of the pros has a better idea. :)
CU
Jens (just another user)
- not Wordpad
(it works with Wordpad, but Notepad is better)
d) you use -i and not -I (as you wrote in your first line - wget is
case-sensitive)
Does anyone else have this problem?
At least not me.
CU
Jens (just another user)
Hi Mike!
Strange!
I suspect that you have some kind of typo in your test.txt
If you cannot spot one, try
wget -d -o logi.txt -i test.txt
as a command line and send the debug output.
Good luck
Jens (just another user)
a) I've verified that they both exist
b) All of the URLs are purely HTTP
, the 'or' replaces the ',' in this enumeration =)
CU
Jens
is backup all the
html and pictures on the entire site.
use
wget -m -k www.helpusall.com
It should be all you need, from what I have seen.
CU
Jens
Hi Deryck!
As far as I know, wget cannot parse CSS code
(nor JavaScript).
It has been requested often, but so far no one
has tackled this (probably rather huge) task.
CU
Jens
(just another user)
Hello,
I can make wget copy the necessary CSS files referenced from a webpage
no sense otherwise.
***
If you are seeing wget behaviour different from this, please a) update your
wget and b) provide more details where/how it happens.
CU & good luck!
Jens (just another user)
When the -R option is specified to reject files by name in recursive mode,
wget downloads them
and
thought I'd mention it.
I hope I am not missing something!
Jens
refers to the time wget will try an action before it
considers the attempt a failure.
E.g.: when after 2 seconds there is no DNS connection, wget will time out;
if after 2 seconds during a GET there is no data transferred, it will time
out.
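For example, to set a 2-second limit:
wget --timeout=2 URL
or, in your wgetrc:
timeout = 2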
CU
Jens
options don't exist, you are not to blame ;)
Should I get a newer version of wget?
1.9.1 is the latest stable version according to http://wget.sunsite.dk/
CU
Jens (just another user)
dynsrc is Microsoft DHTML for IE, if I am not mistaken.
As wget is -thankfully- not MS IE, it fails.
I just did a quick google and it seems that the use of
dynsrc is not recommended anyway.
What you can do is download
http://www.wideopenwest.com/~nkuzmenko7225/Collision.mpg
Jens
robot rules
You could also add
-k: converts absolute to local links for maximum offline browsability.
CU
Jens
I tried
wget -r -np -nc http://www.vatican.va/archive/DEU0035/_FA.HTM
both with cygwin / Wget 1.9.1 and Linux / Wget 1.8.2.
They return just one single file but none of
http
you have to use wget with cookies.
For info on how to do that, use the manual.
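E.g., assuming you have exported your browser's cookies to cookies.txt:
wget --load-cookies cookies.txt URL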
CU
Jens
hi all:
Some links open normally in IE, but downloading them with wget goes
wrong. I can't find a way to solve it; I think it may be a bug:
example link:
http://www.interwetten.com/webclient/betting
) hosts, otherwise, wget will
only download google pages
And lastly (but you obviously did so) think about restricting the recursion
depth.
Hope that helps a bit
Jens
I have been trying to wget several levels deep from a Google search page
(e.g., http://www.google.com/search?q=deepwater+oil
/fplan_landung_imm.asp?u=2&t=Timetable%20visitors&br=
You should be able to use that one in wget.
CU
Jens
Hello,
I've been using wget for months to save the daily
arrival/departure information of the local airport.
Now they changed the design of the website and started
to use frames. I'm stuck now
sense), it
will not download the info frame.
If one would like to download the complete page,
then a combination of -D and -H must be used
to allow wget to travel to different hosts.
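E.g. (the host names here are placeholders):
wget -r -H -D airport.example.com,frames.example.com URL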
CU
Jens
Hi François!
Well, it seems to work for me. Here's how:
Open the frame in another window (works
for me. It was generated using my gui front-end to wget, so it is not
streamlined ;)
Jens
Hi,
How can I download all pdf and ppt file by the following url with command
line of:
wget -k -r -l 1 http://devresource.hp.com/drc/topics/utility_comp.jsp
I am on windows 2000 server sp4
/ will create a structure of directories
beginning with fly.srk.fer.hr/. This option disables such behavior.
So, I'd recommend trying the -x switch, although I am not sure what your
problem is exactly.
CU
Jens
://cvs.sunsite.dk/viewcvs.cgi/*checkout*/wget/PATCHES?rev=1.5
Thanks, I tried to understand that. Let's see if I understood it.
Sorry if I am not sending this to the patches list, the document above
says that it is ok to evaluate the patch with the general list.
CU
Jens
Patch sum up:
a) Tell users
file.txt
so I cannot have snipped anything. Is my shell (win2000)
doing something wrong, or is the missing bit there now (when using the -u
switch).
Jens
Once more:
Patch sum up:
a) Tell users how to --execute more than one wgetrc command
b) Tell about and link to --execute when listing wgetrc
will save to
.\C%3A\temp\*.*
wget -r -P 'C:\temp\' URL
will save to
.\'C%3A\temp\'\*.*
wget -r -P C:\temp\ URL
does not work at all ('Missing URL' error),
however
wget -r -P ..\temp2\ URL
works like a charm.
CU
Jens
over them. If you need to use more than one wgetrc command in your
command-line, use -e preceding each.
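For example:
wget -e robots=off -e timestamping=on URL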
Hope this is ok
Jens
and/or -S you can print the HTTP headers, if you need to.)
However, I noticed that quite a few servers do not provide a
Last-Modified header.
Did this answer your question?
Jens
I'd love to have an option so that, when mirroring, it
will backup only files that are replaced because they
are newer
use
robots = on/off in your wgetrc
or
wget -e robots=on/off URL on your command line
Jens
PS: One note to the manual editor(s?):
The -e switch could be (briefly?) mentioned
also at the wgetrc commands paragraph.
I think it would make sense to mention it there again
without cluttering
people on slow
connections.
Kind regards
Jens
,
it could be necessary/beneficial to
wget -r -l0 -A "*.pdf,*.htm*" -np URL
Hope that helps (and is correct ;) )
Jens
In the docs I've seen on wget, I see that I can use wildcards to
download multiple files on ftp sites. So using *.pdf would get me all
the pdfs in a directory. It seems
for everything after all.
I agree.
What do you think about adding a latest-ssl-libraries.zip?
Kind regards
Jens
:\wget\ with Windows explorer and
c) double-click on startupdate.bat
d) afterwards, do the CD writing
Thinking about it, you could distribute wget with the
SSL and startupdate.bat file unzipped on a 1.44MB floppy disk.
CU
Jens
http://www.jensroesner.de/wgetgui/
Hi Tommy!
Does this option, first shown in 1.9.1 (I think), help you:
--restrict-file-names=mode
It controls file-name escaping.
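E.g., on Windows:
wget --restrict-file-names=windows URL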
I'll mail the complete extract from the manual to your private mail address.
You can download the current wget version from
http://www.sunsite.dk/wget/
CU
Jens
--relax_html_rules inf
or
--relax_html_rules 0
or
--relax_html_rules another-combination-that-makes-most-sense
should be the default is open for negotiation.
However, I would vote for complete relaxation.
I hope that made a bit of sense
Jens
Hi Jing-Shin!
Thanks for the pointers. Where can I get a version that supports
the --post-data option? My newest version is 1.8.2, but it doesn't
have this option. -JS
Current version is 1.9.1.
The wget site lists download options on
http://wget.sunsite.dk/#downloading
Good luck
Jens
to call wget once in the standard way so the files are
locally available, but that probably wouldn't work correctly if the
benchmarked page were changed.
Any ideas how to do that correctly with wget? Or any pointers to more
appropriate tools?
Greetings
Jens
--
Dr. Jens Schleusener    T-Systems
for the debug output.
The message was:
debug support not compiled in
and wget would continue with normal downloading.
Is this an oversight or does it serve a purpose?
CU
Jens
for a secure server, isn't it?
Does this make sense?
Jens
A slight correction the first wget should read:
wget --save-cookies=cookies.txt
http://customer.website.com/supplyweb/general/default.asp?UserAccount=USER&AccessCode=PASSWORD&Locale=en-us&TimeZone=EST:-300&action-Submit=Login
I tried
you closed and restarted your browser or redialed
your connection. That's what reminded me of Suhas' problem.
Even if it were the case, you could tell Wget to use the same
connection, like this:
wget http://URL1... http://URL2...
Right, I always forget that, thanks!
Cya
Jens
the links to
wget binaries and the SSL binaries.
As you can see, different wget versions need
different SSL versions.
Just download the matching SSL,
everything else should then be easy :)
Jens
This is the command I am using:
wget http://www.website.com --http-user=username
--http-passwd
this could indeed be helpful.
Hopefully someone with more knowledge than me
can elaborate a bit more on this :)
CU
Jens
`--no-clobber' is a very useful option, but I retrieve documents not only with
the .html/.htm suffix.
Make an additional option that, like -A/-R, defines all allowed/rejected rules
for -nc
Hi Stacee,
a quick cut'n'paste into google revealed the following page:
http://curl.haxx.se/mail/archive-2001-06/0017.html
Hope that helps
Jens
Stacee Kinney wrote:
Hello,
I installed Wget.exe on a Windows 2000 system and have set up Wget.exe
to run a maintenance file on an hourly basis
of - hassle without benefit.
You furthermore said:
generally, that leads to the whole Internet
That is wrong, if I understand you correctly.
Wget will always stay at the start-host, except when you
allow different hosts via a smart combination of
-D -H -I
switches.
H2H
Jens
Karl Berry wrote:
I
to HTML, or using the
--base command-line option.
-B URL
--base=URL
When used in conjunction with -F, prepends URL to relative links in the
file specified by -i.
I think that should help, or I am missing your point.
CU
Jens
Thomas Otto wrote:
Hi!
I miss
1.8.1)
I however remember that I once had the same problem,
that -p -np will only get page requisites under or at the current
directory.
I currently run wget 1.9-beta and haven't seen the problem yet.
CU
Jens
Dominic Chambers wrote:
Hi again,
I just noticed that one of the inline images
get all the files (or the wrong ones),
it may be that you should ignore robots via the
wgetrc command
robots = on/off
or you need a special referrer if you want
to start in the middle of the site.
CU
Jens
(Jakub Grosman) wrote:
Hi all,
I have been using wget for a long time and it is really great
Hi Chris!
Using the -k switch (convert links in downloaded files to relative)
should do what you want.
CU
Jens
Christopher Stone wrote:
Hi.
I am new to wget, and although it doesn't seem too
difficult, I am unable to get the desired results that
I am looking for.
I currently have a web
.
However, I think this makes no sense for .exe files
and wanted to ask if this behaviour of wget
could maybe be reconsidered.
Kind regards
Jens
.
I had a look into the wget documentation html file, but could not find
my mistake.
I tried both wget 1.5 and 1.9-beta.
Kind regards
Jens
a certain bot to a bot-specific page was outside my scope.
CU
Jens
definitionem complete, that is a
pleonasm.)
If you are really interested, do
a) a search in Google
b) a search in the wget Mailing list archive
CU
Jens
Joonas Kortesalmi wrote:
Wget seems to report speeds with wrong units. It uses for example KB/s
rather than kB/s, which would be correct. Any
, especially as it is
probably a w-ows box : But I ask: Is this a bad thing?
Whuahaha!
[/rant]
Ok, sorry for my sarcasm, but I think you overestimate the benefits of
robots.txt for mankind.
CU
Jens
IT group will not open up a hole for
me to pull these files.
No problem, use
proxy = on
http_proxy = IP/URL
ftp_proxy = IP/URL
proxy_user = username
proxy_password = proxypass
in your wgetrc.
This is also included in the wget manual,
but I, too, was too dumb to find it. ;)
CU
Jens
, but then it is the user's decision and the story is therefore
quite different (I think).
CU
Jens
online is to have wget write the downloaded
file into a temp file (like *.wg! or something) and rename it only
after completing the download.
Sorry for not paying attention.
It sounds like a good idea :)
But I am no coder...
CU
Jens
...
I hope I did not miss your point.
CU
Jens
its options ignored.
CU
Jens
about that. *sigh*
I can now think about changing my wgetgui in this aspect :)
Thanks again
Jens
Hrvoje Niksic wrote:
Noel Koethe [EMAIL PROTECTED] writes:
Ok, got it. But is it possible to get this option as a switch for
use on the command line?
Yes, like this:
wget
it is apparently not necessary.
Kind regards
Jens
Hi Gérard!
I think you should have a look at the -p option.
It stands for page requisites and should do exactly what you want.
If I am not mistaken, -p was introduced in wget 1.8
and improved for 1.8.1 (the current version).
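A minimal example (just -p, plus -k so the local copy stays browsable):
wget -p -k URL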
CU
Jens
I'd like to download a html file with its embedded
already complained that many old scripts now break and suggested
that entering -nh at the command line would
either be completely ignored or the user would be
informed and wget executed nevertheless.
Apparently this was not regarded as useful.
CU
Jens
The option --no-host-directories
changed
/alexandr.gif
as well as with
wget http://www.cuj.com/images/resource/experts/alexandr.gif
So, I do not know what your problem is, but it is neither wget's nor cuj's
fault, AFAICT.
CU
Jens
This problem is independent of whether a proxy is used or not:
The download hangs, though I can read the content
audistory.de: Everything
audi100-online: Everything
kolaschnik.de: nothing
Independent of the question how the string audi
should be matched within the URL, I think rejected URLs
should not be parsed or be retrieved.
I hope I could articulate what I wanted to say :)
CU
Jens
Hi List!
As a non-wget-programmer I also think that this
option may be very useful.
I'd be happy to see it in wget soon :)
Just thought to drop in some positive feedback :)
CU
Jens
-u, --unfollowed-links=FILE log unfollowed links to FILE.
Nice. It sounds useful.
here)
CU
Jens
Noel Koethe schrieb:
Hello,
I tested pavuk (http://www.pavuk.org/, GPL) and there are some features
I miss in wget:
-supports HTTP POST requests
-can automatically fill forms from HTML documents and make POST or GET
requests based on user input and form content
-you
-01zip</a><br>
<a href=patches/112518readme>112518readme</a><br>
[snip]
look at the file names you want, none of them includes 103*, they all
start with 112*
So, wget works absolutely ok, I think
Or am I missing something here?
CU
Jens
Hello,
for wget I would suggest a switch that allows sending the output directly
to stdout. It would be easier to use it in pipes.
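For example, one could then do something like (hypothetical file name):
wget -q -O - http://example.com/data.txt | grep pattern
(unless -O - already covers this?)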
with best regards
JR
--
Jens Röder, Braunschweig
://host.com/page.html)
and nights... Have a virtual drink on me.
Cheers! :)
CU
Jens
http://www.jensroesner.de/wgetgui/
,
animated gifs and blink tags.
Kind regards
Jens
exists and that it is not
far from those two examples.
Ok, but do I understand you correctly that these two examples (mine was
intended to be equivalent, but without JS) should be on the 'parse and retrieve'
side of this line, not the 'ignore and blame Frontpage' side?
CU
Jens
and dirs you should be able to do this.
Or is the problem to load the lists from an external file?
Then, please ignore my comment, I have no experience in this.
CU
Jens
too little to judge where it belongs :}
CU
Jens
http://www.JensRoesner.de/wgetgui/
Note: tests done on NT4. W9x would probably behave differently (even
worse).
starting from (for example) c:, with d: being another writable disk of
some kind, something like
wget -nd -P d:/dir http
Jens
http://www.JensRoesner.de/wgetgui/
It would be nice to have some way to limit the total size of any job, and
have it exit gracefully upon reaching that size, by completing the -k -K
process upon termination, so that what one has downloaded is useful. A
switch that would set the total
are treated as different)
I am not sure which version first had this problem, but 1.7 did not show
it.
I really would like to have this option back.
Does anyone know where it has gone?
Maybe it is on holiday?
CU
Jens
http://www.jensroesner.de/wgetgui/
versions still work.
This would greatly enhance (forward) compatibility between different
versions,
something I would regard as at least desirable?
CU
Jens
his/her old copy of wgetgui, which now of course produces
invalid 1.8 command lines :(
CU
Jens
http://www.jensroesner.de/wgetgui/
thought that this option is quite important nowadays?!
Any help appreciated.
CU and a Merry Christmas
Jens
access the Netscape internal Cache? (Nahh, can't be...)
I cannot provide you with a debug or -v log file. :(
CU
Jens *still confused*
http://www.jensroesner.de/wgetgui/
Hi guys!
Yes, you all are right.
Proxy is the answer. I feel stupid now.
/me goes to bed, maybe that helps! :|
Thanks anyway! :)
Until the next intelligent question :D
CU
Jens
Man, I really hate ads like the following:
--
GMX - Die Kommunikationsplattform im Internet.
http
--exclude-domains
`-A ACCLIST' `--accept ACCLIST' `accept = ACCLIST'
`-R REJLIST' `--reject REJLIST' `reject = REJLIST'
`-I LIST' `--include LIST' `include_directories = LIST'
`-X LIST' `--exclude LIST' `exclude_directories = LIST'
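For example, combining a few of these (the directory and pattern names are made up):
wget -r -X /ads,/banners -R "*.gif" URL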
CU
Jens
http://www.jensroesner.de/wgetgui
browser.
That normally should work.
Maybe you also have to try both at the same time for your problem?
Right now I am a bit puzzled what you meant by
'I can't get wget 1.7 to react on the following:'
I thought you wanted wGet to ignore robots?!
Correct?
Good luck
Jens
http://www.jensroesner.de/wgetgui
'This is cheating,'
What does cheating mean here?
Now I know the meaning of cheating, but I do not understand it in this
context.
Could someone please elaborate a bit?
To me that sounds like a logical combination of -r -np -p?
Any correction appreciated.
Thanks
Jens
Hi Andreas!
AFAIK wGet has cookie support.
At least the 1.7 I use.
If this does not help you, I did not understand your question.
But I am sure there are smarter guys than me on the list! ;)
CU
Jens
http://www.JensRoesner.de/wGetGUI/
[snip]
Would it make sense to add basic cookie support
Hi Vladi!
If you are using windows, you might try
http://www.jensroesner.de/wgetgui/
it is a GUI for wGet written in VB 6.0.
If you click on the checkbox 'identify as browser', wGetGUI
will create a command line like you want.
I use it and it works for me.
Hope this helps?
CU
Jens
Vladi wrote
and --user-agent !
If I find time to do a user's manual, I will make this clear.
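E.g., the kind of command line wGetGUI creates (the user-agent string is just an example):
wget --user-agent="Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)" URL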
Sorry for the confusion.
@Vladi
Ok, I know Windows sucks ;) But I am tooo lazy!
BTW: I would like that --auto-referer, too ;)
So go ahead! ;D
CU
Jens