Re: [zfs-discuss] ZSF Solaris

2008-10-01 Thread Bob Friesenhahn
On Tue, 30 Sep 2008, Al Hopper wrote: "I *suspect* that there might be something like a hash table that is degenerating into a singly linked list as the root cause of this issue. But this is only my WAG." That seems to be a reasonable conclusion. BTW, my million-file test directory uses
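
(Editorial sketch, not ZFS source code: a minimal Python illustration of the failure mode Al Hopper is guessing at. The class name and key format are invented for the example. If every key hashes to the same bucket, the table effectively becomes one long list and each lookup degrades from O(1) to O(n).)

# Toy illustration of the "hash table degenerating into a linked list" guess.
class DegenerateTable:
    def __init__(self, buckets=1024):
        self.buckets = [[] for _ in range(buckets)]

    def _bucket(self, key):
        # Deliberately broken hash: every key collides in bucket 0.
        return 0

    def insert(self, key, value):
        self.buckets[self._bucket(key)].append((key, value))

    def lookup(self, key):
        # With all entries in one bucket this walks the list end to end.
        for k, v in self.buckets[self._bucket(key)]:
            if k == key:
                return v
        return None

table = DegenerateTable()
for i in range(100_000):
    table.insert(f"image.dpx[{i:03d}]", i)

# A key inserted late forces a scan over nearly all 100,000 entries.
print(table.lookup("image.dpx[99999]"))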

Re: [zfs-discuss] ZSF Solaris

2008-10-01 Thread Bob Friesenhahn
On Wed, 1 Oct 2008, Ian Collins wrote: "A million files in ZFS is no big deal: But how similar were your file names?" The file names are like: image.dpx[000] image.dpx[001] image.dpx[002] image.dpx[003] image.dpx[004] ... So they will surely trip up Al Hopper's bad
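
(A hedged reproduction sketch: the directory paths, file count, and digit widths below are placeholders, not anyone's original test setup. It populates one directory with plain sequential afileNNNNNNN-style names and another with the bracketed image.dpx[NNN]-style names mentioned in the thread, so the two patterns can be timed against each other.)

# Placeholder paths/counts; approximates the two naming patterns under
# discussion so listing/lookup times can be compared between directories.
import os

def populate(path, pattern, n=1_000_000):
    os.makedirs(path, exist_ok=True)
    for i in range(n):
        # Create an empty file per name; contents are irrelevant to the test.
        open(os.path.join(path, pattern.format(i)), "w").close()

populate("/tank/test-plain",   "afile{:07d}")        # afile0000000, afile0000001, ...
populate("/tank/test-bracket", "image.dpx[{:07d}]")  # image.dpx[0000000], ...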

Re: [zfs-discuss] ZSF Solaris

2008-10-01 Thread Bob Friesenhahn
On Wed, 1 Oct 2008, Ram Sharma wrote: So for storing 1 million MyISAM tables (MyISAM being a good performer when it comes to not very large data), I need to save 3 million data files in a single folder on disk. This is the way MyISAM saves data. I will never need to do an ls on this folder.
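
(A small hedged sketch of what that layout looks like on disk: MyISAM keeps each table as a .frm, .MYD and .MYI file, so the script below creates that triple per "table" in one directory. The dataset path and table count are placeholders for a scaled-down trial, not the real deployment.)

# Placeholder path and count; emulates MyISAM's three-files-per-table
# layout (.frm definition, .MYD data, .MYI index) in a single directory.
import os

base = "/tank/myisam-test"   # hypothetical ZFS dataset mountpoint
tables = 10_000              # scale toward 1_000_000 for the real case
os.makedirs(base, exist_ok=True)

for i in range(tables):
    for ext in (".frm", ".MYD", ".MYI"):
        # 1 million tables x 3 files = ~3 million directory entries.
        open(os.path.join(base, f"user_{i}{ext}"), "w").close()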

Re: [zfs-discuss] ZSF Solaris

2008-10-01 Thread Toby Thain
On 1-Oct-08, at 1:56 AM, Ram Sharma wrote: Hi Guys, Thanks for so many good comments. Perhaps I got even more than what I asked for! I am targeting 1 million users for my application. My DB will be on a Solaris machine. And the reason I am making one table per user is that it will be a

[zfs-discuss] ZSF Solaris

2008-09-30 Thread Ram Sharma
Hi, can anyone please tell me the maximum number of files that can be in one folder in Solaris with the ZFS file system? I am working on an application in which I have to support 1 million users. In my application I am using MySQL MyISAM, and in MyISAM there are 3 files created for 1 table. I

Re: [zfs-discuss] ZSF Solaris

2008-09-30 Thread Mark J Musante
On Tue, 30 Sep 2008, Ram Sharma wrote: "Hi, can anyone please tell me the maximum number of files that can be in one folder in Solaris with the ZFS file system?" By folder, I assume you mean directory and not, say, pool. In any case, the 'limit' is 2^48, but that's effectively no limit.
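
(A quick back-of-the-envelope check of that point: 2^48 entries is roughly 2.8 x 10^14, so the ~3 million files the MyISAM layout needs are a vanishingly small fraction of the theoretical limit.)

# Scale check: ZFS's theoretical per-directory entry limit vs. the
# ~3 million files needed for 1 million MyISAM tables.
limit = 2 ** 48           # 281,474,976,710,656 entries
needed = 1_000_000 * 3    # one million tables, three files each
print(limit // needed)    # ~93,824,992 -- about 94 million times the need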

Re: [zfs-discuss] ZSF Solaris

2008-09-30 Thread Marcelo Leal
ZFS has no limit for snapshots and filesystems either, but try to create a lot of snapshots and filesystems and you will have to wait a long time for your pool to import... ;-) I think you should not think about the limits, but about performance. Any filesystem with *too many* entries per directory will

Re: [zfs-discuss] ZSF Solaris

2008-09-30 Thread Toby Thain
On 30-Sep-08, at 7:50 AM, Ram Sharma wrote: Hi, can anyone please tell me the maximum number of files that can be in one folder in Solaris with the ZFS file system? I am working on an application in which I have to support 1 million users. In my application I am using MySQL MyISAM

Re: [zfs-discuss] ZSF Solaris

2008-09-30 Thread Nathan Kroenert
Actually, the one that'll hurt most is, ironically, the most closely related to bad database schema design... With a zillion files in the one directory, if someone does an 'ls' in that directory, it'll not only take ages but also steal a whole heap of memory and compute power... Provided the only

Re: [zfs-discuss] ZSF Solaris

2008-09-30 Thread Bob Friesenhahn
On Wed, 1 Oct 2008, Nathan Kroenert wrote: "...zillion I/O's you need to deal with each time you list the entire directory." An 'ls -1rt' on a directory with about 1.2 million files with names like afile1202899 takes minutes to complete on my box, and we see 'ls' get to in excess of 700MB RSS...
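
(A hedged timing sketch of why that particular command is expensive: 'ls -1rt' has to stat() every entry to get its mtime and then sort the whole list before printing anything, whereas a raw readdir-style scan can stream entries as they come. The directory path is a placeholder.)

# Placeholder path; compares a streaming directory scan against the
# stat-everything-then-sort work that 'ls -1rt' has to do.
import os, time

path = "/tank/bigdir"   # hypothetical directory with ~1.2 million files

t0 = time.time()
count = sum(1 for _ in os.scandir(path))   # readdir-style streaming pass
t1 = time.time()

names = sorted(os.listdir(path),           # mimic ls -1rt: stat + sort by mtime
               key=lambda n: os.stat(os.path.join(path, n)).st_mtime)
t2 = time.time()

print(f"streamed {count} entries in {t1 - t0:.1f}s")
print(f"stat+sort of {len(names)} entries took {t2 - t1:.1f}s")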

Re: [zfs-discuss] ZSF Solaris

2008-09-30 Thread Bob Friesenhahn
On Wed, 1 Oct 2008, Nathan Kroenert wrote: "That being said, there is a large delta between your results and mine... If I get a chance, I'll look into it... I suspect it's a cached versus I/O issue..." The first time I posted was the first time the directory had been read in well over a month, so

Re: [zfs-discuss] ZSF Solaris

2008-09-30 Thread Al Hopper
On Tue, Sep 30, 2008 at 6:30 PM, Nathan Kroenert [EMAIL PROTECTED] wrote: Actually, the one that'll hurt most is ironically the most closely related to bad database schema design... With a zillion files in the one directory, if someone does an 'ls' in that directory, it'll not only take ages,

Re: [zfs-discuss] ZSF Solaris

2008-09-30 Thread Ian Collins
Bob Friesenhahn wrote: On Wed, 1 Oct 2008, Nathan Kroenert wrote: "...zillion I/O's you need to deal with each time you list the entire directory." An 'ls -1rt' on a directory with about 1.2 million files with names like afile1202899 takes minutes to complete on my box, and we see 'ls' get to

Re: [zfs-discuss] ZSF Solaris

2008-09-30 Thread Jens Elkner
On Tue, Sep 30, 2008 at 09:44:21PM -0500, Al Hopper wrote: This behavior is common to tmpfs and UFS, and I tested it on early ZFS releases. I have no idea why - I have not taken the time to figure it out. What I have observed is that all operations on your (victim) test directory will max out

Re: [zfs-discuss] ZSF Solaris

2008-09-30 Thread Ram Sharma
Hi Guys, Thanks for so many good comments. Perhaps I got even more than what I asked for! I am targeting 1 million users for my application. My DB will be on a Solaris machine. And the reason I am making one table per user is that it will be a simple design as compared to keeping all the data in