How to download filenames only?

2005-06-02 Thread wierzbowski
Hi! Perhaps someone can help me craft a wget command that will
accomplish the following. I want to mirror a website, but I only want
to download the directory structure and the name of each file. I do
not want to download the files themselves, but I need the names; a
zero-byte copy of each file would be ideal. I can't seem to figure out
how to do this. Maybe it's not even possible. Any suggestions would be
appreciated. Thanks!


Re: How to download filenames only?

2005-06-02 Thread Tushar Joshi

I presume wget needs to actually download the files, otherwise
how would it know which other files they link to (if they're HTML
files)? However, if you don't mind downloading everything first and
just want a zero-byte structure afterwards, you could do something
like this from the top of the mirror:

find . -type f -exec dd count=0 if=/dev/zero of='{}' \;

You'll still have to download the files in full this way, though.
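To make that concrete, here is an untested sketch of the whole
mirror-then-truncate round trip (http://example.com/ is just a
placeholder for the real site):

  # Untested sketch -- http://example.com/ stands in for the real URL.
  wget -r -np http://example.com/   # mirror the site (full downloads)
  cd example.com                    # wget writes into a host-named directory
  find . -type f -exec dd count=0 if=/dev/zero of='{}' \;  # truncate all files to 0 bytes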

Tushar

On Thu, Jun 02, 2005 at 12:56:02PM -0700, wierzbowski wrote:
> Hi! Perhaps someone can help me craft a wget command that will
> accomplish the following. I want to mirror a website, but I only want
> to download the directory structure and the name of each file. I do
> not want to download the files themselves, but I need the names; a
> zero-byte copy of each file would be ideal. I can't seem to figure
> out how to do this. Maybe it's not even possible. Any suggestions
> would be appreciated. Thanks!

-- 
| Turtle Networks Ltd. |
|  Unit 48, Concord Road, London W3 0TH|
|  Tel: (020) 8896 2600 |  Fax: (020) 8992 7017|
|  www.turtle.net   |  [EMAIL PROTECTED]   |


Fwd: How to download filenames only?

2005-06-02 Thread wierzbowski
Oops, I believe I did not send this to the whole list.

-- Forwarded message --
From: wierzbowski [EMAIL PROTECTED]
Date: Jun 2, 2005 1:28 PM
Subject: Re: How to download filenames only?
To: Tushar Joshi [EMAIL PROTECTED]


Thank you (and Tony) for your suggestions. However, downloading the
files in their entirety and then truncating them is not an option for
me, because most of the files in question are massive (approaching 4GB
each). What I'm trying to do is create a skeleton of the structure so
that I can then selectively download just the files that are
necessary.
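The closest I've gotten is wget's spider mode, which (as far as I can
tell) only fetches the HTML pages it needs in order to discover links;
other files are just probed, not saved. An untested sketch, with
http://example.com/ standing in for the real site:

  # Untested sketch; assumes a wget build whose --spider mode works with -r.
  wget --spider -r -np -nv -o spider.log http://example.com/
  # Pull the discovered URLs out of the log and rebuild them locally as an
  # empty skeleton: directories for trailing-slash URLs, 0-byte files otherwise.
  grep -o 'http://[^ ]*' spider.log | sed 's|^http://||' | sort -u |
  while read -r p; do
    case "$p" in
      */) mkdir -p "$p" ;;
      *)  mkdir -p "$(dirname "$p")" && touch "$p" ;;
    esac
  done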

On 6/2/05, Tushar Joshi [EMAIL PROTECTED] wrote:

> I presume wget needs to actually download the files, otherwise
> how would it know which other files they link to (if they're HTML
> files)? However, if you don't mind downloading everything first and
> just want a zero-byte structure afterwards, you could do something
> like this from the top of the mirror:
>
> find . -type f -exec dd count=0 if=/dev/zero of='{}' \;
>
> You'll still have to download the files in full this way, though.
>
> Tushar