Kris Kennaway wrote:
I'm saying that the 7.0-CVS sources, which are graphed, are unlikely
to differ significantly from 6.2-CVS, i.e. they do not show good
scaling on this benchmark because of the problems with filedesc
locking in CVS.
Could you give a link to the 7.0-CVS graph?
Pete
Francisco Reyes wrote:
Petri Helenius writes:
The point in threading is that different threads can execute
simultaneously on multiple CPUs.
What combination of FreeBSD+MySQL will have multiple threads run on
different CPUs?
In the few SMP FreeBSD + MySQL setups (MySQL 4.x) that I ha
Francisco Reyes wrote:
A little confused.
Does this mean FreeBSD will spread the threads across multiple CPUs?
The point in threading is that different threads can execute
simultaneously on multiple CPUs.
Pete
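
To make the point concrete: whether threads actually land on different
CPUs depends on the threading library. FreeBSD 4.x's default libc_r is
a userland scheduler that keeps all of a process's threads on one CPU
(which is why MySQL-on-4.x SMP setups often used the linuxthreads
port); on 5.x and later, libthr (1:1) and libpthread let the kernel
schedule threads across CPUs. A minimal sketch (not from the thread)
that makes the effect visible:

    /*
     * Sketch: N CPU-bound POSIX threads.  With a 1:1 threading library
     * such as FreeBSD's libthr, the kernel scheduler may run each
     * thread on a different CPU; "top -H" should show the load spread
     * across CPUs.  With 4.x's userland libc_r they all share one CPU.
     */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define NTHREADS 4

    static void *
    spin(void *arg)
    {
        volatile unsigned long n = 0;

        for (;;)
            n++;                /* pure CPU load, one busy loop per thread */
        /* NOTREACHED */
        return (NULL);
    }

    int
    main(void)
    {
        pthread_t tid[NTHREADS];
        int i;

        for (i = 0; i < NTHREADS; i++)
            if (pthread_create(&tid[i], NULL, spin, NULL) != 0) {
                perror("pthread_create");
                exit(1);
            }
        pause();                /* let them run; observe with top -H */
        return (0);
    }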
With this great progress, it would be even better if there were a way
to run virtualization (Xen) by the time 7.0 hits the street.
Pete
Kris Kennaway wrote:
On Sun, Feb 25, 2007 at 09:00:32AM +0200, Petri Helenius wrote:
Kris Kennaway wrote:
This shows the graph of MySQL transactions/second performed by a
multi-threaded client workload against a local MySQL database with
varying numbers of client threads, with
Kris Kennaway wrote:
This shows the graph of MySQL transactions/second performed by a
multi-threaded client workload against a local MySQL database with
varying numbers of client threads, with identically configured FreeBSD
and Linux systems on the same machine.
How does that compare to 6.2-
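
For reference, this kind of workload is typically generated with
something like sysbench's OLTP test; the invocation below is only
illustrative, not the exact command used for the graphs:

    sysbench --test=oltp --mysql-user=root --mysql-host=localhost prepare
    sysbench --test=oltp --mysql-user=root --num-threads=16 \
        --max-requests=100000 run

Rerunning with varying --num-threads values gives the
transactions/second-per-client-count curve described above.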
Ivan Voras wrote:
I don't know whose fault this is, VMware's or FreeBSD's, but
virtualization is popular, and since FreeBSD is very much lagging behind
for server-side virtualization (Xen, VMware, etc. - jails and vimage
What is the status of the Xen port to FreeBSD? (haven't heard about it
lately)
Greg 'groggy' Lehey wrote:
Single stream tests aren't very good examples for RAID-5, because it
performs writes in two steps: first it reads the old data, then it
writes the new data.
If it really does it this way, instead of going write-only when writing
sufficiently large blocks, that would e
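
The arithmetic behind the two-step description (a sketch, not vinum
source): a partial-stripe write has to read the old data and the old
parity before it can compute the new parity, so one logical write costs
two reads plus two writes; a full-stripe write can derive parity from
the new data alone and needs no reads at all.

    /*
     * Sketch of the two RAID-5 write paths.
     */
    #include <stddef.h>
    #include <string.h>

    #define BLK 512

    /* Small write: read-modify-write of one data block and parity. */
    static void
    small_write(unsigned char *parity, const unsigned char *old_data,
        const unsigned char *new_data)
    {
        size_t i;

        /* new_parity = old_parity ^ old_data ^ new_data */
        for (i = 0; i < BLK; i++)
            parity[i] ^= old_data[i] ^ new_data[i];
    }

    /* Full-stripe write: parity is the XOR of all new data blocks. */
    static void
    full_stripe_parity(unsigned char *parity,
        unsigned char (*new_data)[BLK], int ndata)
    {
        size_t i;
        int d;

        memset(parity, 0, BLK);
        for (d = 0; d < ndata; d++)
            for (i = 0; i < BLK; i++)
                parity[i] ^= new_data[d][i];
    }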
disk subsystem that supports an aggregate ~100MB/sec transfer raw to
the underlying disks, is it reasonable to expect a ~5MB/sec transfer
rate for a RAID5 hosted on that subsystem -- a 95% overhead.
In my opinion, no.
Pete
Steve Peterson wrote:
At 01:19 PM 10/28/2006, Petri Helenius wrote:
According to my understanding vinum does not overlap requests to
multiple disks when running in a raid5 configuration, so you're not
going to achieve good numbers with just "single stream" tests.
Pete
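
One way to test that claim (a sketch; the device path is an assumed
placeholder) is to run several sequential readers concurrently against
different regions of the volume. If the driver can overlap requests
across the member disks, aggregate throughput with N streams should
beat a single stream:

    /* Sketch: N concurrent sequential readers on one volume. */
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define NSTREAMS  4
    #define CHUNK     (64 * 1024)
    #define PER_CHILD (256LL * 1024 * 1024)   /* bytes per child */

    int
    main(void)
    {
        const char *dev = "/dev/raid5vol";    /* hypothetical volume */
        static char buf[CHUNK];
        long long done;
        int i, fd;

        for (i = 0; i < NSTREAMS; i++) {
            if (fork() == 0) {
                fd = open(dev, O_RDONLY);
                if (fd < 0) {
                    perror(dev);
                    _exit(1);
                }
                /* start each child in its own region of the volume */
                lseek(fd, (off_t)i * PER_CHILD, SEEK_SET);
                for (done = 0; done < PER_CHILD; done += CHUNK)
                    if (read(fd, buf, CHUNK) <= 0)
                        break;
                _exit(0);
            }
        }
        while (wait(NULL) > 0)
            ;   /* time the whole run externally, e.g. with time(1) */
        return (0);
    }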
Steve Peterson wrote:
Eric -- thanks for looking at my issue. Here's a dd reading from one
Mike Tancsa wrote:
At 09:15 PM 02/11/2005, Michael Vince wrote:
I have seen some network-based SMP-related performance problems
vanish in 6.0 tests. Admittedly I haven't done hard-drive-based tests,
but it wouldn't surprise me if the performance drops on HDs under SMP
are gone in 6.0 as well.
Yes,
What are the parameters that need to be tuned on the amd64 platform to
get, say, a 1-1.5 GB buffer cache?
Pete
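
A sketch of an answer, assuming a RELENG_5/6-era system: the names
below are real loader tunables, but the values are only illustrative
and would need to fit the machine's KVA budget.

    # /boot/loader.conf
    kern.maxbcache="1610612736"   # cap buffer cache KVA at ~1.5 GB
    kern.nbuf="65536"             # buffer headers (illustrative value)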
This sounds somewhat similar to the Solaris DTrace stuff?
Pete
Bakul Shah wrote:
This thread makes me wonder if there is value in running
performance tests on a regular basis. This would give an
early warning of any performance loss and can be a useful
forensic tool (one can pinpoint when some performan
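
A sketch of that kind of harness (the paths and the benchmarked
operation are assumptions): time one fixed operation and append a
timestamped line to a log from cron, so a regression shows up as a
step in the history. A real harness would also defeat caching between
runs; that is omitted here for brevity.

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    #define CHUNK (64 * 1024)

    int
    main(void)
    {
        const char *path = "/var/tmp/bench.dat";  /* pre-created test file */
        static char buf[CHUNK];
        struct timespec t0, t1;
        long long bytes = 0;
        ssize_t n;
        double secs;
        FILE *log;
        int fd;

        fd = open(path, O_RDONLY);
        if (fd < 0) {
            perror(path);
            exit(1);
        }
        clock_gettime(CLOCK_MONOTONIC, &t0);
        while ((n = read(fd, buf, CHUNK)) > 0)
            bytes += n;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        close(fd);

        secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        log = fopen("/var/log/bench.log", "a");   /* one line per run */
        if (log != NULL) {
            fprintf(log, "%ld %.1f MB/s\n", (long)time(NULL),
                bytes / secs / 1048576);
            fclose(log);
        }
        return (0);
    }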
Robert Watson wrote:
The next thing that would be quite nice to measure is the rate of I/O
transactions per second we can get to the disk using the disk device
directly, with a minimal transaction size. I have a vague
recollection that you have to be careful in Linux because their
character de
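
Something like the following would do the measurement Robert
describes (the device name and size are assumptions; reads go straight
to the disk device, so offsets and sizes must be sector-aligned):

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    #define BLK   512       /* minimal, sector-sized transactions */
    #define NREQS 2000

    int
    main(void)
    {
        const char *dev = "/dev/ad0";               /* hypothetical disk */
        off_t dev_size = 40LL * 1000 * 1000 * 1000; /* assumed ~40 GB */
        static char buf[BLK];
        struct timespec t0, t1;
        double secs;
        off_t off;
        int fd, i;

        fd = open(dev, O_RDONLY);
        if (fd < 0) {
            perror(dev);
            exit(1);
        }
        srandom(getpid());
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < NREQS; i++) {
            /* random sector-aligned offset somewhere on the disk */
            off = (random() % (dev_size / BLK)) * BLK;
            if (pread(fd, buf, BLK, off) != BLK) {
                perror("pread");
                break;
            }
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%d reads in %.2f s = %.0f transactions/sec\n",
            i, secs, i / secs);
        close(fd);
        return (0);
    }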
Poul-Henning Kamp wrote:
In message <[EMAIL PROTECTED]>, Petri Helenius writes:
My tests were using RAID10 and just striping. (RAID0 might be the right
name for it)
Same thing applies, and it depends on how the request alignment/size
and stripe alignment/size interact.
I
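
A worked example of that interaction (numbers invented for
illustration): with a 5-disk RAID-5 and a 64 kB stripe unit, one full
stripe holds 4 x 64 kB = 256 kB of data. A 256 kB write aligned on a
stripe boundary needs no reads at all: parity comes from the new data,
five writes total. Shift the same request by 32 kB and it straddles
two stripes, so each becomes a partial-stripe read-modify-write and
the five writes turn into reads plus writes on both stripes.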
Poul-Henning Kamp wrote:
In message <[EMAIL PROTECTED]>, Eric Anderson writes:
I'll be honest here, I don't care much if the speed difference between
4.X and 5.X is measurable, or whatever. What I find a little telling
of an issue somewhere is that READS are slower than WRITES!
This is
Poul-Henning Kamp wrote:
In message <[EMAIL PROTECTED]>, Eric Anderson writes:
Don't mean to be terse here, but I'm talking about the same test done on
two different RAID5 configurations, with different disks, and not just
me - other users in this very thread see the same issue..
Uhm, if
kama wrote:
dev-null is not the issue... my own test program that only reads
data into a buffer in memory showed the same results as doing a dd to
dev-null.
And dd from zero to null does:
114541264896 bytes transferred in 27.716454 secs (4132608911 bytes/sec)
Pete
/Bjorn
__
Arne Wörner wrote:
--- Petri Helenius <[EMAIL PROTECTED]> wrote:
Eric Anderson wrote:
I'm seeing similar sequential performance on RELENG_5_3 and
RELENG_5_4
on dual-Xeons using 3ware controllers so it does not seem to be
a driver issue [...]
Why?
I can remember that some
Eric Anderson wrote:
I'm using Fibre Channel SATA, and I get twice the write speed that I
do read, which doesn't make sense to me. What kind of write speeds do
you get? My tiny brain tells me that reads should be faster than
writes with a RAID5.
I'm seeing similar sequential performance on RELENG_5_3 and REL
Eivind Hestnes wrote:
It's correct that the card is plugged into a 32-bit 33 MHz PCI slot.
If I'm not wrong, 33 MHz PCI slots have a peak transfer rate of 133
MByte/s (32 bits x 33 MHz = 133 MByte/s). However, when pulling 180
Mbit/s without polling enabled the system is barely responsive due to
the interrupt load. I'l