Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-25 Thread Jason King
On Tue, Jan 19, 2010 at 9:25 PM, Matthew Ahrens wrote: > Michael Schuster wrote: >> >> Mike Gerdts wrote: >>> >>> On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi wrote: Hello, As a result of one badly designed application running loose for some time, we now seem to have

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-20 Thread Miles Nordin
> "ml" == Mikko Lammi writes: ml> "rm -rf" to problematic directory from parent level. Running ml> this command shows directory size decreasing by 10,000 ml> files/hour, but this would still mean close to ten months ml> (over 250 days) to delete everything! interesting. does

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-19 Thread Matthew Ahrens
Michael Schuster wrote: Mike Gerdts wrote: On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi wrote: Hello, As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any i

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread David Dyer-Bennet
On Tue, January 5, 2010 10:25, Richard Elling wrote: > On Jan 5, 2010, at 8:13 AM, David Dyer-Bennet wrote: >> It's interesting how our ability to build larger disks, and our >> software's >> ability to do things like create really large numbers of files, >> comes back >> to bite us on the ass ev

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Richard Elling
On Jan 5, 2010, at 8:52 AM, Daniel Rock wrote: On 2010-01-05 16:22, Mikko Lammi wrote: However when we deleted some other files from the volume and managed to raise free disk space from 4 GB to 10 GB, the "rm -rf directory" method started to perform significantly faster. Now it's deleting

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Fajar A. Nugraha
On Wed, Jan 6, 2010 at 12:44 AM, Michael Schuster wrote: >>> we need to get rid of them (because they eat 80% of disk space) it seems >>> to be quite challenging. >>> >> >> I've been following this thread.  Would it be faster to do the reverse. >>  Copy the 20% of disk then format then move the 20
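
A rough sketch of that reverse approach, assuming the junk directory and the data worth keeping share one dataset and that some other location has room for the ~20% worth keeping; every name below is a placeholder, not taken from the thread:

    # copy everything except the junk directory to a holding area
    cd /tank/data
    find . -name junkdir -prune -o -print | cpio -pdm /backup/keep

    # destroying the dataset drops all 60 million files in one operation,
    # then a fresh dataset is created and the saved data copied back
    zfs destroy -r tank/data
    zfs create tank/data
    cd /backup/keep
    find . -print | cpio -pdm /tank/data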

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Michael Schuster
Paul Gress wrote: On 01/ 5/10 05:34 AM, Mikko Lammi wrote: Hello, As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any issues. Unfortunately now that we need

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Paul Gress
On 01/ 5/10 05:34 AM, Mikko Lammi wrote: Hello, As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any issues. Unfortunately now that we need to get rid of them

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Joe Blount
On 01/ 5/10 10:01 AM, Richard Elling wrote: How are the files named? If you know something about the filename pattern, then you could create subdirs and mv large numbers of files to reduce the overall size of a single directory. Something like: mkdir .A mv A* .A mkdir .B mv B*
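
A sketch of that split-by-prefix idea; note that with tens of millions of matching names a bare "mv A* .A" can overflow the shell's argument list (ARG_MAX), so this variant feeds mv through find instead. The path and prefixes are placeholders:

    cd /tank/data/junkdir              # placeholder path
    for p in A B C D; do
        mkdir ".$p"
        # look only at the top level of the directory, one mv per file
        find . ! -name . -prune -type f -name "${p}*" -exec mv {} ".$p/" \;
    done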

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Daniel Rock
On 2010-01-05 16:22, Mikko Lammi wrote: However when we deleted some other files from the volume and managed to raise free disk space from 4 GB to 10 GB, the "rm -rf directory" method started to perform significantly faster. Now it's deleting around 4,000 files/minute (240,000/h - quite an impr
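
A quick check of what the improved rate means for the total time, assuming roughly 60 million files still remain:

    # ~4,000 files/minute is 240,000 files/hour
    echo $(( 60000000 / 240000 ))        # 250 hours
    echo $(( 60000000 / 240000 / 24 ))   # ~10 days, down from ~250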

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Tim Cook
On Tue, Jan 5, 2010 at 11:25 AM, Richard Elling wrote: > On Jan 5, 2010, at 8:13 AM, David Dyer-Bennet wrote: > >> >> On Tue, January 5, 2010 10:01, Richard Elling wrote: >> >>> OTOH, if you can reboot you can also run the latest >>> b130 livecd which has faster stat(). >>> >> >> How much faster i

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Richard Elling
On Jan 5, 2010, at 8:13 AM, David Dyer-Bennet wrote: On Tue, January 5, 2010 10:01, Richard Elling wrote: OTOH, if you can reboot you can also run the latest b130 livecd which has faster stat(). How much faster is it? He estimated 250 days to rm -rf them; so 10x faster would get that down to

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread David Dyer-Bennet
On Tue, January 5, 2010 10:01, Richard Elling wrote: > OTOH, if you can reboot you can also run the latest > b130 livecd which has faster stat(). How much faster is it? He estimated 250 days to rm -rf them; so 10x faster would get that down to 25 days, 100x would get it down to 2.5 days (assumin

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Casper . Dik
>no - mv doesn't know about zpools, only about posix filesystems. "mv" doesn't care about filesystems, only about the interface provided by POSIX. There is no zfs-specific interface which allows you to move a file from one zfs to the next. Casper

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Joerg Schilling
Michael Schuster wrote: > >> "rm -rf" would be at least as quick. > > > > Normally when you do a move within a 'regular' file system all that's > > usually done is the directory pointer is shuffled around. This is not the > > case with ZFS data sets, even though they're on the same pool? > > no

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Richard Elling
On Jan 5, 2010, at 2:34 AM, Mikko Lammi wrote: Hello, As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any issues. Unfortunately now that we need to get

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread David Magda
On Tue, January 5, 2010 10:50, Michael Schuster wrote: > David Magda wrote: >> Normally when you do a move within a 'regular' file system all that's >> usually done is the directory pointer is shuffled around. This is not >> the case with ZFS data sets, even though they're on the same pool? > > no

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Dennis Clarke
> On Tue, January 5, 2010 10:12, casper@sun.com wrote: > >>>How about creating a new data set, moving the directory into it, and >>> then >>>destroying it? >>> >>>Assuming the directory in question is /opt/MYapp/data: >>> 1. zfs create rpool/junk >>> 2. mv /opt/MYapp/data /rpool/junk/ >>> 3

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Michael Schuster
David Magda wrote: On Tue, January 5, 2010 10:12, casper@sun.com wrote: How about creating a new data set, moving the directory into it, and then destroying it? Assuming the directory in question is /opt/MYapp/data: 1. zfs create rpool/junk 2. mv /opt/MYapp/data /rpool/junk/ 3. zfs dest

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Casper . Dik
>On Tue, January 5, 2010 10:12, casper@sun.com wrote: > >>>How about creating a new data set, moving the directory into it, and then >>>destroying it? >>> >>>Assuming the directory in question is /opt/MYapp/data: >>> 1. zfs create rpool/junk >>> 2. mv /opt/MYapp/data /rpool/junk/ >>> 3. zfs

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread David Magda
On Tue, January 5, 2010 10:12, casper@sun.com wrote: >>How about creating a new data set, moving the directory into it, and then >>destroying it? >> >>Assuming the directory in question is /opt/MYapp/data: >> 1. zfs create rpool/junk >> 2. mv /opt/MYapp/data /rpool/junk/ >> 3. zfs destroy r
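
Spelled out, the suggestion is just the three commands below (paths as given in the mail). The caveat raised elsewhere in the thread applies, though: /opt/MYapp and rpool/junk are different filesystems, so mv cannot rename across them (rename(2) fails with EXDEV) and falls back to copying and unlinking every one of the 60 million files; the final destroy is fast, but the mv itself is not.

    zfs create rpool/junk
    mv /opt/MYapp/data /rpool/junk/
    zfs destroy rpool/junk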

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Mikko Lammi
On Tue, January 5, 2010 17:08, David Magda wrote: > On Tue, January 5, 2010 05:34, Mikko Lammi wrote: > >> As a result of one badly designed application running loose for some >> time, >> we now seem to have over 60 million files in one directory. Good thing >> about ZFS is that it allows it withou

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Casper . Dik
>On Tue, January 5, 2010 05:34, Mikko Lammi wrote: > >> As a result of one badly designed application running loose for some time, >> we now seem to have over 60 million files in one directory. Good thing >> about ZFS is that it allows it without any issues. Unfortunately now that >> we need to g

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread David Magda
On Tue, January 5, 2010 05:34, Mikko Lammi wrote: > As a result of one badly designed application running loose for some time, > we now seem to have over 60 million files in one directory. Good thing > about ZFS is that it allows it without any issues. Unfortunately now that > we need to get rid

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Michael Schuster
Mike Gerdts wrote: On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi wrote: Hello, As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any issues. Unfortunately now t

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Mike Gerdts
On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi wrote: > Hello, > > As a result of one badly designed application running loose for some time, > we now seem to have over 60 million files in one directory. Good thing > about ZFS is that it allows it without any issues. Unfortunately now that > we need

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Markus Kovero
Hello, As a result of one badly designed application running loose for some time, w

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Joerg Schilling
"Mikko Lammi" wrote: > Hello, > > As a result of one badly designed application running loose for some time, > we now seem to have over 60 million files in one directory. Good thing > about ZFS is that it allows it without any issues. Unfortunatelly now that > we need to get rid of them (because

[zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Mikko Lammi
Hello, As a result of one badly designed application running loose for some time, we now seem to have over 60 million files in one directory. Good thing about ZFS is that it allows it without any issues. Unfortunately now that we need to get rid of them (because they eat 80% of disk space) it see