Re: Very low disk performance Highpoint 1820a

2005-04-29 Thread Petri Helenius
Eric Anderson wrote:
I'm using Fibre Channel SATA, and I get 2x the write speed that I get on reads, which doesn't make sense to me. What kind of write speeds do you get? My tiny brain tells me that reads should be faster than writes with RAID5.

I'm seeing similar sequential performance on RELENG_5_3 and RELENG_5_4 on dual Xeons using 3ware controllers, so it does not seem to be a driver issue but something elsewhere in the architecture. Depending on the array configuration: 40-60MB/s reads and 100-160MB/s writes. The write performance is as expected, but the read performance should be above the write performance.

Pete
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: 4M page size

2005-02-22 Thread Petri Helenius
Scott wrote:
Petri Helenius wrote:
FreeBSD/i386 uses 4MB pages to hold the kernel text and data, but there
is no way (to my knowledge) to ask the pmap layer for a 4MB page after
that either from the kernel or from userland.  However, it's also my
understanding that most non-Xeon CPUs only have a 4kb TLB, and 4MB pages
are just broken down into 4kb chunks for it.
Does this hold true for amd64 too?
And what happens to the page size if the memory is mapped into userland? Probably 4k? (I assume mmap uses 4k, not 4M, pages.)

Pete


4M page size

2005-02-21 Thread Petri Helenius
Is there currently a way to utilize 4M pages with FreeBSD for large-data-set programs (to reduce TLB misses)?

Pete


Re: 20TB Storage System (fsck????)

2003-09-03 Thread Petri Helenius
Max Clark wrote:

Ohh, that's an interesting snag. I was under the impression that 5.x w/ PAE could address more than 4GB of RAM.

It does. However, as long as a pointer is 32 bits, the address space of a process is maxed out at 4G, which translates to about 2.5G of userland space after the kernel and other things have taken their toll.

If fsck requires 700K for each 1GB of disk, we are talking about 7GB of RAM for 10TB of disk. Is this correct? Will PAE not function correctly to give me 8GB of RAM to check 10TB of disk?

PAE functions correctly, but it does not provide a 7G address space to any single process.

Is there any way to bypass this requirement and split fsck into smaller chunks? Being able to fsck my disk is kinda important.

Yes, you do that by splitting the filesystem up into smaller filesystems. Kind of obvious?

I have zero experience with either Itanium or Opteron. What is the current status of support for these processors in FreeBSD? What would the preferred CPU be? Will there be PCI cards that I would not be able to use in either of these systems?
 

I'm personally biased towards the Opteron, but that's based more on it making overall sense than on technical merits so far (since neither platform has much of a track record yet).

Both CPUs should work fine with 5.2 according to the TODO list. Meanwhile, I suggest you play with the number of inodes on the 10TB filesystem and see how that affects the memory usage.

Pete



Re: 20TB Storage System

2003-09-03 Thread Petri Helenius
Geoff Buckingham wrote:

This is a big problem (no pun intended), my smallest requirement is still 5TB... what would you recommend? The smallest file on the storage will be 500MB.

If your files are all going to be this large, I imagine you should look carefully at what you do with inodes, block and cluster sizes.
 

The fsck problem should be gone with fewer inodes and fewer blocks: if I read the code correctly, memory is consumed according to the number of used inodes and blocks, so having like 2 inodes and 64k blocks should allow you to build 5-20T filesystems and actually fsck them.
Pete



Re: 20TB Storage System

2003-09-03 Thread Petri Helenius
Poul-Henning Kamp wrote:

I am not sure I would advocate 64k blocks yet.
 

Good to know; I have stuck with 16k so far because our database has a pagesize of 16k and I found little benefit in tuning that (but it's a completely different application).
I tend to stick with 32k blocks and 4k fragments myself.

This is a problem which is in the cross-hairs for 6.x
 

Do you have any insight into the fsck memory consumption? I remember saving myself quite a long time ago by reducing the number of inodes.
Pete



Re: FW: 20TB Storage System

2003-09-02 Thread Petri Helenius
Poul-Henning Kamp wrote:

2) What is the maximum size of a filesystem that I can present to the host OS using vinum/ccd? Am I limited anywhere that I am not aware of?

Good question, I'm not sure we currently know the exact barrier.

Just make sure you run UFS2, which is the default on -CURRENT, because UFS1 has a 1TB limit.

3) Could I put all 20TB on one system, or will I need two to sustain the I/O required?

Spreading it will give you more I/O bandwidth.

Can you say why? Usually putting more spindles into one pile gives you more I/O, unless you have very evenly distributed sequential access in a pattern you can predict in advance.

Pete
