Allen Eastwood wrote:
Does DNLC even play a part in ZFS, or are the Docs out of date?
"Defines the number of entries in the directory name look-up cache (DNLC).
This parameter is used by UFS and NFS to cache elements of path names that
have been resolved."
No mention of ZFS. Noticed that when discussing that with a c
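For what it's worth, DNLC activity is easy to check directly on Solaris 10. The dnlcstats kstat and the vmstat summary line below are standard; the numbers shown are purely illustrative, not from any machine in this thread:

# kstat -n dnlcstats | egrep 'hits|misses'    (output trimmed)
        hits                            21847563
        misses                          398122
# vmstat -s | grep 'name lookups'
 22245685 total name lookups (cache hits 98%)

A high hit rate here would suggest the DNLC is still doing useful work, whatever the tunables doc says about ZFS.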
On Sat, Aug 8, 2009 at 20:20, Ed Spencer wrote:
>
> On Sat, 2009-08-08 at 08:14, Mattias Pantzare wrote:
>
>> Your scalability problem may be in your backup solution.
> We've eliminated the backup system as being involved with the
> performance issues.
>
> The servers are Solaris 10 with the OS on
On Sat, 8 Aug 2009, Ed Spencer wrote:
   r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
  11.9   43.0  528.9  1972.8  0.0  2.1    0.0   38.9   0  31 c4t60A98000433469764E4A2D456A644A74d0
  17.0   19.6  496.9  1499.0  0.0  1.4    0.0   38.8   0  39 c4t60A98000433469764E4A2D456A69657
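For reference, columns in that shape come from something like iostat -xn with an arbitrary interval:

# iostat -xn 5

asvc_t is the average service time in milliseconds, so ~39 ms per I/O on both LUNs suggests the delay is on the iSCSI/filer side rather than in queueing on the host (wsvc_t is 0.0).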
On Aug 8, 2009, at 5:02 AM, Ed Spencer wrote:
On Fri, 2009-08-07 at 19:33, Richard Elling wrote:
This is very unlikely to be a "fragmentation problem." It is a
scalability problem
and there may be something you can do about it in the short term.
You could be right.
Our test mail server co
On Sat, 2009-08-08 at 17:25, Mike Gerdts wrote:
> ndd -get /dev/tcp tcp_xmit_hiwat
> ndd -get /dev/tcp tcp_recv_hiwat
> grep tcp-nodelay /kernel/drv/iscsi.conf
# ndd -get /dev/tcp tcp_xmit_hiwat
2097152
# ndd -get /dev/tcp tcp_recv_hiwat
2097152
# grep tcp-nodelay /kernel/drv/iscsi.conf
#
> Whil
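Should those ever need changing, ndd can set them on the live system (the change does not persist across a reboot); the 1 MB values here are only an example:

# ndd -set /dev/tcp tcp_xmit_hiwat 1048576
# ndd -set /dev/tcp tcp_recv_hiwat 1048576

The bare prompt after the grep just means tcp-nodelay has never been set in /kernel/drv/iscsi.conf on this box.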
On Sat, 2009-08-08 at 15:05, Mike Gerdts wrote:
> On Sat, Aug 8, 2009 at 12:51 PM, Ed Spencer wrote:
> >
> > On Sat, 2009-08-08 at 09:17, Bob Friesenhahn wrote:
> >> Many of us here already tested our own systems and found that under
> >> some conditions ZFS was offering up only 30MB/second for bu
On Sat, 2009-08-08 at 16:09, Mike Gerdts wrote:
> Right... but ZFS doesn't understand your application. The reason that
> a file system would put files that are in the same directory in the
> same general area on a disk is to minimize seek time. I would argue
> that seek time doesn't matter a w
On Sat, 2009-08-08 at 15:20, Bob Friesenhahn wrote:
> A SSD slog backed by a SAS 15K JBOD array should perform much better
> than a big iSCSI LUN.
Now...yes. We implemented this pool years ago. I believe that, back
then, the server would crash if a zfs drive failed. We decided to let the
netapp han
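For what it's worth, adding a slog to an existing pool is a one-line operation these days; the pool name and device paths below are hypothetical:

# zpool add mail log c5t0d0
(or, mirrored: zpool add mail log mirror c5t0d0 c6t0d0)

Whether losing a log device can still take the pool down depends on the zpool version shipped with that Solaris 10 release, so the original caution wasn't unreasonable.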
On Sat, Aug 8, 2009 at 3:25 PM, Ed Spencer wrote:
>
> On Sat, 2009-08-08 at 15:12, Mike Gerdts wrote:
>
>> The DBAs that I know use files that are at least hundreds of
>> megabytes in size. Your problem is very different.
> Yes, definitely.
>
> I'm relating records in a table to my small files be
On Sat, 2009-08-08 at 15:12, Mike Gerdts wrote:
> The DBAs that I know use files that are at least hundreds of
> megabytes in size. Your problem is very different.
Yes, definitely.
I'm relating records in a table to my small files because our email
system treats the filesystem as a database.
On Sat, 8 Aug 2009, Ed Spencer wrote:
On Sat, 2009-08-08 at 09:17, Bob Friesenhahn wrote:
Enterprise storage should work fine without needing to run a tool to
optimize data layout or repair the filesystem. Well designed software
uses an approach which does not unravel through use.
Hmm, th
On Sat, Aug 8, 2009 at 3:02 PM, Ed Spencer wrote:
>
> On Sat, 2009-08-08 at 09:17, Bob Friesenhahn wrote:
>
>> Enterprise storage should work fine without needing to run a tool to
>> optimize data layout or repair the filesystem. Well designed software
>> uses an approach which does not unravel th
On Sat, Aug 8, 2009 at 12:51 PM, Ed Spencer wrote:
>
> On Sat, 2009-08-08 at 09:17, Bob Friesenhahn wrote:
>> Many of us here already tested our own systems and found that under
>> some conditions ZFS was offering up only 30MB/second for bulk data
>> reads regardless of how exotic our storage pool
On Sat, 2009-08-08 at 09:17, Bob Friesenhahn wrote:
> Enterprise storage should work fine without needing to run a tool to
> optimize data layout or repair the filesystem. Well designed software
> uses an approach which does not unravel through use.
Hmm, this is counter to my understanding.
On Sat, 2009-08-08 at 08:14, Mattias Pantzare wrote:
> Your scalability problem may be in your backup solution.
We've eliminated the backup system as being involved with the
performance issues.
The servers are Solaris 10 with the OS on UFS filesystems. (In zfs
terms, the pool is old/mature). So
On Sat, 2009-08-08 at 09:17, Bob Friesenhahn wrote:
> Many of us here already tested our own systems and found that under
> some conditions ZFS was offering up only 30MB/second for bulk data
> reads regardless of how exotic our storage pool and hardware was.
Just so we are using the same units
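For the record, since mixing bits and bytes is the usual trap in these comparisons:

30 MB/s x 8 = 240 Mbit/s
1000 Mbit/s / 8 = 125 MB/s (gigabit wire speed, before protocol overhead)

So 30 MB/s is roughly a quarter of what a single gigabit iSCSI link could carry.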
On Sat, 8 Aug 2009, Ed Spencer wrote:
What is the point of a filesystem that can grow to such a huge size and
not have functionality built in to optimize data layout? Real world
implementations of filesystems that are intended to live for
years/decades need this functionality, don't they?
Ente
>> > Adding another pool and copying all/some data over to it would only
>> > be a short-term solution.
>>
>> I'll have to disagree.
>
> What is the point of a filesystem that can grow to such a huge size and
> not have functionality built in to optimize data layout? Real world
> implementations of fi
On Fri, 2009-08-07 at 19:33, Richard Elling wrote:
> This is very unlikely to be a "fragmentation problem." It is a
> scalability problem
> and there may be something you can do about it in the short term.
You could be right.
Our test mail server consists of the exact same design, same hardwa