[Warning: semi-useless information ahead]
On Wed, Sep 03, 2003 at 11:06:15AM +, Geoff Buckingham wrote:
However I just read the newfs man page and am intrigued to know what effect
the -g and -h options have
Somewhere in -STABLE between 4.8-RELEASE and a month or so ago I recreated a
Poul-Henning == Poul-Henning Kamp [EMAIL PROTECTED] writes:
Poul-Henning In message [EMAIL PROTECTED], Petri Helenius writes:
The fsck problem should be gone with fewer inodes and fewer blocks since,
if I read the code correctly, memory is consumed according to used
inodes and blocks, so
In message [EMAIL PROTECTED], David Gilbert writes:
That reminds me... has anyone thought of designing the system to have
more than 8 frags per block? Increasingly, for large file
performance, we're pushing up the block size dramatically. This is
with the assumption that large disks will
David Gilbert wrote:
Poul-Henning == Poul-Henning Kamp [EMAIL PROTECTED] writes:
Poul-Henning I am not sure I would advocate 64k blocks yet.
Poul-Henning I tend to stick with 32k block, 4k fragment myself.
That reminds me... has anyone thought of designing the system to have
more than 8
PK == Poul-Henning Kamp [EMAIL PROTECTED] writes:
PK I am not sure I would advocate 64k blocks yet.
PK I tend to stick with 32k block, 4k fragment myself.
At what file system size do you recommend bumping the block size?
I've got a 226GB RAID array and right now it is using the default newfs
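Translating that 32k/4k preference into a command line looks something like this. This is only a sketch: the device name is hypothetical, and the flags are the standard `-b`/`-f` size options from the newfs man page of the era.

```shell
# 32 KB blocks with 4 KB fragments (keeping newfs's expected
# 8:1 block-to-fragment ratio), with soft updates enabled.
# /dev/da0s1e is a hypothetical device name.
newfs -U -b 32768 -f 4096 /dev/da0s1e
```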
On Wed, Sep 03, 2003 at 11:06:15AM +, Geoff Buckingham wrote:
However I just read the newfs man page and am intrigued to know what effect
the -g and -h options have
-g avgfilesize
The expected average file size for the file system.
-h avgfpdir
The expected average number of files per directory on the file system.
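To make the man page excerpt concrete: both options feed the allocator's layout heuristics rather than changing any on-disk limit. A hedged sketch, with illustrative values and a hypothetical device name:

```shell
# -g: expected average file size, in bytes (~500 MB here)
# -h: expected average number of files per directory
# Values and device name are illustrative, not recommendations.
newfs -U -b 32768 -f 4096 -g 524288000 -h 64 /dev/da0s1e
```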
Geoff Buckingham wrote:
- This is a big problem (no pun intended), my smallest requirement is still
5TB... what would you recommend? The smallest file on the storage will be
500MB.
If your files are all going to be this large, I imagine you should look carefully at
what you do with inodes, block and
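A little arithmetic shows why inode density matters at these sizes. The numbers are the thread's own (5 TB minimum, files of roughly 500 MB); the script is just a sketch:

```shell
# How many files can a 5 TB filesystem of ~500 MB files hold?
fs_bytes=$((5 * 1024 * 1024 * 1024 * 1024))   # 5 TB
avg_file=$((500 * 1024 * 1024))               # ~500 MB per file
files=$((fs_bytes / avg_file))
# Roughly ten thousand files, so the millions of inodes newfs
# creates by default are pure fsck overhead.
echo "$files files"
```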
In message [EMAIL PROTECTED], Petri Helenius writes:
The fsck problem should be gone with fewer inodes and fewer blocks since, if
I read the code correctly, memory is consumed according to used inodes
and blocks, so having like 2 inodes and 64k blocks should allow
you to build a 5-20T filesystem and
Poul-Henning Kamp wrote:
I am not sure I would advocate 64k blocks yet.
Good to know, I have stuck with 16k so far due to the fact that our
database has pagesize of 16k and I found little benefit tuning that.
(but it's a completely different application)
I tend to stick with 32k block, 4k
In message [EMAIL PROTECTED], Petri Helenius writes:
You have any insight into the fsck memory consumption? I remember getting
myself saved quite a long time ago by reducing the number of inodes.
I have not studied it. I always try to avoid having more than an
order of magnitude more inodes
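That rule of thumb (no more than an order of magnitude more inodes than files) maps onto newfs's `-i` option, which sets the bytes of data space per inode. A sketch with hypothetical numbers, not a tested recommendation:

```shell
# -i 67108864 => one inode per 64 MB of data space, i.e. ~80k
# inodes on 5 TB: about 8x the ~10k files expected, well within
# the order-of-magnitude rule. Device name is hypothetical.
newfs -U -i 67108864 /dev/da0s1e
```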
On Tue, Sep 02, 2003 at 03:53:53PM -0700, Max Clark wrote:
Depends on whether you plan on crashing or not :) According to
http://lists.freebsd.org/pipermail/freebsd-fs/2003-July/000181.html,
you may not want to create filesystems over 3TB if you want fsck to
succeed. I don't know if that's
Sorry for the cross post.
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Max Clark
Sent: Tuesday, September 02, 2003 11:00 AM
To: [EMAIL PROTECTED]
Subject: 20TB Storage System
Hi all,
I need to attach 20TB of storage to a network (as low cost
In message [EMAIL PROTECTED], Max Clark writes:
Given the above:
1) What would my expected IO be using vinum to stripe the storage enclosures
detailed above?
That depends a lot on the application's I/O pattern, and I doubt a
precise prediction is possible.
In particular the FibreChannel is hard
[This isn't really a performance issue so I trimmed it.]
On Tue, Sep 02, 2003 at 12:48:29PM -0700, Max Clark wrote:
I need to attach 20TB of storage to a network (as low cost as possible), I
need to sustain 250Mbit/s or 30MByte/s of sustained IO from the storage to
the disk.
I have found
Poul-Henning Kamp wrote:
2) What is the maximum size of a filesystem that I can present to the host
OS using vinum/ccd? Am I limited anywhere that I am not aware of?
Good question, I'm not sure we currently know the exact barrier.
Just make sure you run UFS2, which is the default on
Just make sure you run UFS2, which is the default on -CURRENT, because UFS1 has a 1TB limit.
- What's the limit with UFS2?
Are there major requirements to run FreeBSD 5.x or can I still run stable
with this?
Thanks,
Max
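On the UFS2 question above: UFS2's 64-bit block addressing puts its theoretical size limit far beyond any 2003-era array, so in practice the ceiling is fsck memory and time, as discussed elsewhere in the thread. Requesting the format explicitly on 5.x looks like this (device name hypothetical):

```shell
# -O 2 requests the UFS2 on-disk format (5.x newfs; 4.x-STABLE
# newfs only builds UFS1, which tops out at 1 TB).
newfs -O 2 -U /dev/da0s1e
```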
Depends on whether you plan on crashing or not :) According to
http://lists.freebsd.org/pipermail/freebsd-fs/2003-July/000181.html,
you may not want to create filesystems over 3TB if you want fsck to
succeed. I don't know if that's using the default newfs settings
(which would create an insane
In the last episode (Sep 02), Max Clark said:
[ quoting format manually recovered ]
Dan Nelson wrote
Depends on whether you plan on crashing or not :) According to
http://lists.freebsd.org/pipermail/freebsd-fs/2003-July/000181.html,
you may not want to create filesystems over 3TB if you