Then create a ZVOL, share it over iSCSI, and run some benchmarks from the
initiator host. You'll never get representative results from local tests.
For that sort of load, I'd guess a stripe of mirrors should do well;
RAIDzN will probably be rather bad.
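
Something like this should get a test target up, assuming Solaris 11
Express with COMSTAR (pool name and volume size are just examples):

# create a zvol to export over iSCSI
zfs create -V 100G fooPool0/iscsivol
# register it as a SCSI logical unit and allow initiators to see it
stmfadm create-lu /dev/zvol/rdsk/fooPool0/iscsivol
stmfadm add-view <LU-name-printed-by-create-lu>
# enable the target service and create an iSCSI target
svcadm enable -r svc:/network/iscsi/target:default
itadm create-target

Then discover the target from an initiator and benchmark that LUN rather
than the local pool.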

roy

----- Original Message -----
> This system is for serving VM images over iSCSI to roughly 30
> XenServer hosts. I would like to know what kind of performance I can
> expect in the coming months as we grow this system out. We currently
> have 2 Intel SSDs mirrored for the ZIL and 2 Intel SSDs striped for
> the L2ARC. I am interested mainly in max throughput of the local
> storage at this point in time.
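> 
> For reference, a mirrored log and striped cache like ours would be
> attached along these lines (device names are illustrative):
> 
> # mirrored SLOG devices for the ZIL
> zpool add fooPool0 log mirror c1t0d0 c1t1d0
> # two striped L2ARC cache devices
> zpool add fooPool0 cache c1t2d0 c1t3d0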
> 
> On Wed, Aug 10, 2011 at 12:01 PM, Roy Sigurd Karlsbakk
> <r...@karlsbakk.net> wrote:
> > What sort of load will this server be serving? Sync or async writes?
> > What sort of reads? Random I/O or sequential? If sequential, how
> > many streams/concurrent users? Those are factors you need to
> > evaluate before running a test. A local test will usually be using
> > async I/O, and a dd with only a 4k block size is bound to be slow,
> > probably because of CPU overhead.
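> >
> > For instance, the block-size effect shows up with dd alone; both
> > commands below write 8 GB (paths are just examples):
> >
> > # 4k writes spend most of their time in syscall overhead
> > dd if=/dev/zero of=/fooPool0/t1 bs=4k count=2097152
> > # 1M writes stream far better
> > dd if=/dev/zero of=/fooPool0/t2 bs=1024k count=8192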
> >
> > roy
> >
> > ----- Original Message -----
> >> Hello All,
> >> Sorry for the lack of information. Here are answers to some
> >> questions:
> >> 1) createPool.sh:
> >> The script takes up to 2 params: the first is the number of disks
> >> in the pool; the second is either blank or "mirrored". Blank stripes
> >> all the disks together (i.e. RAID 0); "mirrored" builds the pool
> >> from 2-disk mirrors.
> >>
> >> #!/bin/bash
> >> # bash, not sh: arrays and (( )) arithmetic are bashisms
> >> disks=( $(grep Hitachi diskList | awk '{print $2}') )
> >>
> >> for (( i = 0; i < $1; i++ ))
> >> do
> >>     if [ "$2" = "mirrored" ]
> >>     then
> >>         # start a new 2-disk mirror vdev on every even index
> >>         if [ $((i % 2)) -eq 0 ]
> >>         then
> >>             useDisks="$useDisks mirror ${disks[i]}"
> >>         else
> >>             useDisks="$useDisks ${disks[i]}"
> >>         fi
> >>     else
> >>         useDisks="$useDisks ${disks[i]}"
> >>     fi
> >>
> >>     # report the last two disks as spare candidates; the original
> >>     # test, $(($i - $1)) -le 2, was true on every iteration
> >>     if [ $(($1 - i)) -le 2 ]
> >>     then
> >>         echo "spares are: ${disks[i]}"
> >>     fi
> >> done
> >>
> >> zpool create -f fooPool0 $useDisks
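> >>
> >> Typical invocations (assuming diskList has been generated):
> >>
> >> ./createPool.sh 8            # 8-disk stripe
> >> ./createPool.sh 8 mirrored   # four 2-disk mirrors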
> >>
> >>
> >>
> >> 2) Hardware:
> >> Each server attached to a storage array is a Dell R710 with 32 GB
> >> of memory. To rule out platform-specific issues, the info below is
> >> from a Dell 1950 server with 8 GB of memory; however, I see similar
> >> results from the R710s as well.
> >>
> >>
> >> 3) To defeat caching, I am writing more data to disk than I have
> >> memory for.
> >>
> >> 4) I have tested with bonnie++ as well; here are the results. I
> >> have read that it is best to test with 4x the amount of memory:
> >> /usr/local/sbin/bonnie++ -s 32000 -d /fooPool0/test -u gdurham
> >> Using uid:101, gid:10.
> >> Writing with putc()...done
> >> Writing intelligently...done
> >> Rewriting...done
> >> Reading with getc()...done
> >> Reading intelligently...done
> >> start 'em...done...done...done...
> >> Create files in sequential order...done.
> >> Stat files in sequential order...done.
> >> Delete files in sequential order...done.
> >> Create files in random order...done.
> >> Stat files in random order...done.
> >> Delete files in random order...done.
> >> Version 1.03d    ------Sequential Output------ --Sequential Input- --Random-
> >>                  -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> >> Machine    Size  K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
> >> cm-srfe03 32000M 230482  97 477644  76 223687  44 209868  91 541182  41 1900 5
> >>                  ------Sequential Create------ --------Random Create--------
> >>                  -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
> >>            files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
> >>               16 29126 100 +++++ +++ +++++ +++ 24761 100 +++++ +++ +++++ +++
> >> cm-srfe03,32000M,230482,97,477644,76,223687,44,209868,91,541182,41,1899.7,5,16,29126,100,+++++,+++,+++++,+++,24761,100,+++++,+++,+++++,+++
> >>
> >>
> >> I will run these on the R710 server as well and report the
> >> results.
> >>
> >> Thanks for the help!
> >>
> >> -Greg
> >>
> >>
> >>
> >> On Wed, Aug 10, 2011 at 9:16 AM, phil.har...@gmail.com
> >> <phil.har...@gmail.com> wrote:
> >> > I would generally agree that dd is not a great benchmarking tool,
> >> > but you could use multiple instances writing to multiple files,
> >> > and larger block sizes are more efficient. It's always good to
> >> > check iostat and mpstat for I/O and CPU bottlenecks. Also note
> >> > that an initial run that creates files may be quicker because it
> >> > just allocates blocks, whereas subsequent rewrites require
> >> > copy-on-write.
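> >> >
> >> > A quick sketch of that approach (file names and sizes arbitrary):
> >> >
> >> > # four concurrent writers, 1M blocks, 16 GB each
> >> > for i in 1 2 3 4; do
> >> >     dd if=/dev/zero of=/fooPool0/dd.$i bs=1024k count=16384 &
> >> > done
> >> > # meanwhile, in another shell, watch for disk and CPU saturation:
> >> > #   iostat -xn 5
> >> > #   mpstat 5
> >> > wait    # block until all four dd processes finish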
> >> >
> >> > ----- Reply message -----
> >> > From: "Peter Tribble" <peter.trib...@gmail.com>
> >> > To: "Gregory Durham" <gregory.dur...@gmail.com>
> >> > Cc: <zfs-discuss@opensolaris.org>
> >> > Subject: [zfs-discuss] Issues with supermicro
> >> > Date: Wed, Aug 10, 2011 10:56
> >> >
> >> >
> >> > On Wed, Aug 10, 2011 at 1:45 AM, Gregory Durham
> >> > <gregory.dur...@gmail.com> wrote:
> >> >> Hello,
> >> >> We just purchased two of the SC847E26-RJBOD1 units to be used in
> >> >> a storage environment running Solaris 11 Express.
> >> >>
> >> >> We are using Hitachi HUA723020ALA640 6 Gb/s drives with an LSI
> >> >> SAS 9200-8e HBA. We are not using failover/redundancy: one port
> >> >> of the HBA goes to the primary front backplane interface, and
> >> >> the other goes to the primary rear backplane interface.
> >> >>
> >> >> For testing, we have done the following:
> >> >> Installed 12 disks in the front, 0 in the back.
> >> >> Created stripes of different numbers of disks. After each test, I
> >> >> destroy the underlying storage volume and create a new one. As
> >> >> you can see from the results, adding more disks makes no
> >> >> difference to performance. Going from 4 disks to 8 should make a
> >> >> large difference, yet none is shown.
> >> >>
> >> >> Any help would be greatly appreciated!
> >> >>
> >> >> This is the result:
> >> >>
> >> >> root@cm-srfe03:/home/gdurham~# time dd if=/dev/zero
> >> >> of=/fooPool0/86gb.tst bs=4096 count=20971520
> >> >> ^C3503681+0 records in
> >> >> 3503681+0 records out
> >> >> 14351077376 bytes (14 GB) copied, 39.3747 s, 364 MB/s
> >> >
> >> > So, the problem here is that you're not testing the storage at
> >> > all. You're basically measuring dd.
> >> >
> >> > To get meaningful results, you need to do two things:
> >> >
> >> > First, run it for long enough so you eliminate any write cache
> >> > effects. Writes go to memory and only get sent to disk in the
> >> > background.
> >> >
> >> > Second, use a proper benchmark suite, and one that isn't itself
> >> > a bottleneck. Something like vdbench, although there are others.
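> >> >
> >> > For example, a minimal vdbench sequential-write run could look
> >> > like this (path, size and duration are placeholders, not
> >> > recommendations):
> >> >
> >> > cat > seqwrite.vdb <<'EOF'
> >> > sd=sd1,lun=/fooPool0/vdbench.dat,size=64g
> >> > wd=wd1,sd=sd1,xfersize=1m,rdpct=0,seekpct=0
> >> > rd=run1,wd=wd1,iorate=max,elapsed=600,interval=5
> >> > EOF
> >> > ./vdbench -f seqwrite.vdb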
> >> >
> >> > --
> >> > -Peter Tribble
> >> > http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
> >

-- 
Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented
intelligibly. It is an elementary imperative for all pedagogues to avoid
excessive use of idioms of foreign origin. In most cases, adequate and
relevant synonyms exist in Norwegian.
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
