Wget does not support NTLM auth (yet).
For those who feel like helping out: I have already
donated (and properly assigned copyright to the FSF, with papers
and everything) fully working NTLM code to the wget project
(mailed privately to Hrvoje) that I'm sure everyone would be
happy if
On platform HP-UX 11.00 PA-RISC
wget version 1.9.1
The -T parameter does not work in one particular condition:
against a dead HTTP server (one that accepts the connection but never sends any data),
wget waits indefinitely.
Regards
Nicolas Varney
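One possible workaround for the hang described above, sketched without access to the affected HP-UX box: wrap the transfer in a shell watchdog that kills it after a hard wall-clock limit, since -T itself is not being honored. In this sketch `sleep 300` stands in for the stalled wget run; the 2-second limit is illustrative.

```shell
#!/bin/sh
# Watchdog sketch for when wget's own -T is not honored: kill the
# transfer if it outlives a hard wall-clock limit. `sleep 300` stands
# in for a wget run against a stalled server; in real use it would be
# something like: wget -T 30 -t 1 "$URL"
LIMIT=2

sleep 300 &                 # the potentially-hanging job
JOB=$!

# Watchdog: wait LIMIT seconds, then kill the job if still running.
( sleep "$LIMIT" && kill "$JOB" 2>/dev/null ) &
WATCHDOG=$!

wait "$JOB" || true         # returns once the job exits or is killed
kill "$WATCHDOG" 2>/dev/null || true
echo "job ended (killed after ${LIMIT}s if it was still hanging)"
```

This bounds total time regardless of what the remote server does, at the cost of not distinguishing a slow transfer from a dead one.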
I am trying to use wget to retrieve a file from an OpenVMS server but have been unable
to make wget process a path with a volume name in it. For example:
disk:[directory.subdirectory]filename
How would I go about entering this type of path in a way that wget can understand?
Sorry I can't help you Benjamin, but could you or anyone on the list
help me?
Hello,
I hope someone can help me. I don't know much about wget, so I need
help with downloading a file on the internet.
I need to download this file every 30 minutes into a directory on my
computer.
Each new
How do you enter the path in your web browser?
- Original Message -
From: Bufford, Benjamin (AGRE) [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, May 26, 2004 7:32 AM
Subject: OpenVMS URL
I am trying to use wget to retrieve a file from an OpenVMS server but have
been unable
If it works once, I would make a shell script that runs wget with the -r option to
replace the file, and use cron or at to run that script at 30-minute
intervals.
It looks like you're running Windows, so as long as it's NT 4.0 or better (i.e. non-DOS-based)
you'll have to use at.
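A minimal sketch of the cron-based suggestion above. The URL, destination directory, and script path are invented placeholders; only the wget flags are real.

```shell
#!/bin/sh
# fetch.sh (sketch): URL and destination directory are placeholders.
# -N makes wget re-download only when the remote copy is newer than
# the local one; -P picks the directory the file is saved into.
URL="http://example.com/data/report.csv"
DEST="/home/user/incoming"
CMD="wget -N -P $DEST $URL"
echo "$CMD"
# A classic crontab entry to run this script every 30 minutes:
#   0,30 * * * * /home/user/fetch.sh
```

On NT-family Windows, note that each `at` entry fires at one clock time, so a 30-minute cycle needs multiple entries (or the Task Scheduler instead).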
On Wed, May 26, 2004 at 08:32:29AM -0600, Bufford, Benjamin (AGRE) wrote:
I am trying to use wget to retrieve a file from an OpenVMS server but have been
unable to make wget process a path with a volume name in it. For example:
disk:[directory.subdirectory]filename
How would I go
I've got Wget up and running fine, but am having trouble with some specific
commands I'm trying to run. Thanks in advance for any help.
I'm trying to retrieve content from 50 or so pages, merge it into one file,
and convert all of the links in that one file. I've created an input
file with
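A sketch of the input-file approach that message seems to be describing, with placeholder URLs (the real list would of course hold the 50 pages):

```shell
#!/bin/sh
# Build a URL list file; these two URLs are placeholders.
printf '%s\n' \
  "http://example.com/page1.html" \
  "http://example.com/page2.html" > urls.txt

# -i reads the URL list from a file; -k rewrites the links in each
# page after download so they work locally.
CMD="wget -k -i urls.txt"
echo "$CMD"
```

One caveat: -k converts links within each saved file individually, so after concatenating the pages into one file (e.g. with cat) the rewritten links may still point at the separate per-page files and need further post-processing.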
Then your problem isn't with wget. Once you figure out how to access the
file in a web browser, use the same URL in wget.
Tony
- Original Message -
From: Bufford, Benjamin (AGRE) [EMAIL PROTECTED]
To: Tony Lewis [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Wednesday, May 26, 2004 8:41 AM
I realize that; this mailing list is just one of the places I'm checking. Wget is
the specific tool I would like to use, and I have seen frequent references to wget being used
on and with VMS. That being the case, I thought it likely that someone on
this list might readily know offhand.
On Wed, 26 May 2004, Bufford, Benjamin (AGRE) wrote:
That's the problem I'm having. With all the looking and reading I've
done, I haven't found a way to specify the type of pathname I used as an
example (disk:[directory.subdirectory]filename) as a URL for a browser
or anything else that
Thanks Maciej,
That is exactly what I was having trouble with. I have been able to specify
directories in the same way that you have mentioned using
[directory.subdirectory]filename, but I have had no success with any attempts to
incorporate the disk: part into the URL. That is frustrating.
On Wed, 26 May 2004, Bufford, Benjamin (AGRE) wrote:
That is exactly what I was having trouble with. I have been able to
specify directories in the same way that you have mentioned using
[directory.subdirectory]filename, but I have had no success with any
attempts to incorporate the disk:
The colon is allowed in directory names and file names as far as I know. When it comes
before a set of square brackets [] containing a path, it is used to separate the disk
name from the path. The filename comes at the end of the specified path, outside
the square brackets, as in the example
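One untested thing worth trying for the disk: problem discussed in this thread: percent-encode the VMS-reserved characters so the path survives URL parsing. The `ftp://host/` prefix here is a placeholder; the real scheme and hostname depend on the server.

```shell
#!/bin/sh
# Untested sketch: percent-encode the VMS-reserved characters
# (':' -> %3A, '[' -> %5B, ']' -> %5D) before handing the path to wget.
# "host" and the ftp:// scheme are placeholders for the real server.
VMS_PATH='disk:[directory.subdirectory]filename'
ENCODED=$(printf '%s' "$VMS_PATH" | sed -e 's/:/%3A/g' -e 's/\[/%5B/g' -e 's/]/%5D/g')
echo "ftp://host/$ENCODED"
```

Whether the OpenVMS server on the other end accepts the decoded form is a separate question, but encoding at least keeps the URL itself unambiguous.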
This isn't really a bug, more a suggestion for how wget might behave
better in a special situation.
I'm using Wget 1.9.1 to recursively retrieve a web site that has
index.cgi files with arguments. Internally, the pages refer to these
with links like:
blah.com/?arg=hello.
When wget fetches
Hello,
I have read all of the man page and I see that there is a -k option to convert links
from absolute to relative, I think, but I don't find any option to convert
Windows paths into Linux ones. I mean those URLs with paths like
http://some.webpage/with\Images
I do -r and it finds some html files that
I'm also trying to automatically login to
https://online.wellsfargo.com/cgi-bin/signon.cgi using
wget but with no luck so far.
Any ideas to get this working are greatly appreciated.
Thanks.
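A hedged sketch of one approach to the automated-login question: wget 1.9 added --post-data for submitting HTML forms, and --save-cookies can capture any session cookie the server sets. The field names below (userid, password) are guesses; the real names have to be read from the sign-on page's form markup, and wget must be built with SSL support for an https:// URL.

```shell
#!/bin/sh
# Sketch only: form field names are guesses, credentials are dummies.
# --post-data submits the form; --save-cookies keeps the session
# cookie for follow-up requests made with --load-cookies.
URL="https://online.wellsfargo.com/cgi-bin/signon.cgi"
CMD="wget --post-data 'userid=USER&password=PASS' --save-cookies cookies.txt $URL"
echo "$CMD"
```

Sites like this often also require hidden form fields or JavaScript on the sign-on page, in which case a plain POST will not be enough.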
* From: Greg Underwood
* Subject: Re: recursive and form posts in wget 1.9.1
* Date: Mon,
Oops. Shame on me. I just needed to RTFM: all I need is the -k
option...
Carl
On Wed, 2004-05-26 at 09:38, Carl McTague wrote:
This isn't really a bug, more a suggestion for how wget might behave
better in a special situation.
I'm using Wget 1.9.1 to recursively retrieve a web site