Small bug in Wget manual page

2005-06-02 Thread Herb Schilling

Hi,

On http://www.gnu.org/software/wget/manual/wget.html, the section on
--protocol-directories has a paragraph that is a duplicate of the
section on --no-host-directories. Other than that, the manual is
terrific! Wget is wonderful also; I don't know what I would do without it.



--protocol-directories
    Use the protocol name as a directory component of local file
    names. For example, with this option, wget -r http://host will
    save to http/host/... rather than just to host/...

    Disable generation of host-prefixed directories. By default,
    invoking Wget with -r http://fly.srk.fer.hr/ will create a
    structure of directories beginning with fly.srk.fer.hr/. This
    option disables such behavior.
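
For anyone comparing the two, here is a quick sketch of the behaviors
the options actually control (host name reused from the excerpt above;
layouts follow the documented defaults):

  # default: directories are prefixed with the host name
  wget -r http://fly.srk.fer.hr/
  #   -> saves under fly.srk.fer.hr/

  # --protocol-directories adds the protocol as a leading component
  wget -r --protocol-directories http://fly.srk.fer.hr/
  #   -> saves under http/fly.srk.fer.hr/

  # --no-host-directories (-nH) drops the host prefix entirely
  wget -r -nH http://fly.srk.fer.hr/
  #   -> saves into the current directory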

-- 

Herb Schilling
NASA Glenn Research Center
Brook Park, OH 44135
[EMAIL PROTECTED]

If all our misfortunes were
laid in one common heap whence everyone must take an equal portion,
most people would be contented to take their own and depart. -Socrates
(469?-399 B.C.)



How to download filenames only?

2005-06-02 Thread wierzbowski
Hi! Perhaps someone can help me craft a wget command that will
accomplish the following. I want to mirror a website but I only want
to download the directory structure and names of each file. I do not
want to download the files themselves, but I need the names. A zero
byte copy of each file would be ideal. I can't seem to figure out how
to do this. Maybe it's not even possible. Any suggestions would be
appreciated. Thanks!


Re: How to download filenames only?

2005-06-02 Thread Tushar Joshi

I presume wget needs to actually download the files; otherwise,
how would it know which other files they link to (if it's an HTML
file)? However, if you don't mind downloading the files and just
want a zero-byte structure afterwards, you could do something like
this:

find . -type f -exec dd count=0 if=/dev/zero of='{}' \;
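
(Note: with count=0, dd copies no data at all; the truncation happens
because dd opens each output file for writing without conv=notrunc,
cutting it to zero bytes.)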

You'll still have to download the files this way, though.

Tushar

On Thu, Jun 02, 2005 at 12:56:02PM -0700, wierzbowski wrote:
> Hi! Perhaps someone can help me craft a wget command that will
> accomplish the following. I want to mirror a website but I only want
> to download the directory structure and names of each file. I do not
> want to download the files themselves, but I need the names. A zero
> byte copy of each file would be ideal. I can't seem to figure out how
> to do this. Maybe it's not even possible. Any suggestions would be
> appreciated. Thanks!

-- 
| Turtle Networks Ltd. |
|  Unit 48, Concord Road, London W3 0TH|
|  Tel: (020) 8896 2600 |  Fax: (020) 8992 7017|
|  www.turtle.net   |  [EMAIL PROTECTED]   |


Fwd: How to download filenames only?

2005-06-02 Thread wierzbowski
Oops, I believe I did not send this to the whole list.

-- Forwarded message --
From: wierzbowski [EMAIL PROTECTED]
Date: Jun 2, 2005 1:28 PM
Subject: Re: How to download filenames only?
To: Tushar Joshi [EMAIL PROTECTED]


Thank you (and Tony) for your suggestions. However, downloading the
files in their entirety and then truncating them is not an option for
me because most of the files in question are massive (approaching 4GB
each). What I'm trying to do is create a skeleton of the structure so
that I can then selectively download just the files that are
necessary.
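
One possible approach (a sketch, untested; www.example.com stands in
for the real site): run wget in spider mode so it walks the links
without saving file bodies, then recreate every URL from the log as an
empty local file. Whether --spider combines with -r, and the exact log
format, depend on your wget version, so the grep pattern below may
need adjusting:

  # crawl without saving file bodies (HTML is still fetched to find links)
  wget -r --spider -nv -o spider.log http://www.example.com/

  # recreate each logged URL as a zero-byte file
  grep -o 'http://[^ ]*' spider.log | sed 's|^http://||' |
  while read path; do
      case "$path" in */) continue ;; esac   # skip directory URLs
      mkdir -p "$(dirname "$path")"
      touch "$path"
  done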

On 6/2/05, Tushar Joshi [EMAIL PROTECTED] wrote:

> I presume wget needs to actually download the files; otherwise,
> how would it know which other files they link to (if it's an HTML
> file)? However, if you don't mind downloading the files and just
> want a zero-byte structure afterwards, you could do something like
> this:
>
> find . -type f -exec dd count=0 if=/dev/zero of='{}' \;
>
> You'll still have to download the files this way, though.
>
> Tushar