Re: [zfs-discuss] NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?

2010-07-04 Thread Bohdan Tashchuk
> Where can I find a list of these?

This leads to the more generic question: where are *any* release notes? I saw on Genunix that Community Edition 3.0.3 was replaced by 3.0.3-1. What changed? I went to nexenta.org and looked around, but it wasn't immediately obvious where to find release n

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Ian D
> Is that 38% of one CPU or 38% of all CPUs? How many CPUs does the Linux box have? I don't mean the number of sockets, I mean number of sockets * number of cores * number of threads per core. My

The server has two Intel X5570s; they are quad-core and have hyperthreading. It would say
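For a quick sanity check on the logical CPU count (the X5570 is a quad-core part, and hyperthreading doubles the hardware threads per core):

    2 sockets x 4 cores x 2 threads/core = 16 logical CPUs
    38% of one logical CPU = 0.38/16, i.e. roughly 2.4% of total CPU capacity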

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Mike Gerdts
On Sun, Jul 4, 2010 at 2:08 PM, Ian D wrote:
> Mem:  74098512k total, 73910728k used,   187784k free,    96948k buffers
> Swap:  2104488k total,      208k used,  2104280k free, 63210472k cached
>
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> 17652 mysql     20   0 3553m
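A per-thread view makes it easier to tell whether one mysqld thread is saturating a single logical CPU; a minimal sketch, using the PID from the top output above:

    # -H splits the process into its individual threads, so a single
    # thread pinned near 100% stands out even when the box looks idle
    top -H -p 17652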

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Richard Elling
On Jul 4, 2010, at 8:08 AM, Ian D wrote:
> Ok... so we've rebuilt the pool as 14 pairs of mirrors, each pair having one disk in each of the two JBODs. Now we're getting about 500-1000 IOPS (according to zpool iostat) and 20-30MB/sec in random read on a big database. Does that sound

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Ian D
> In what way is CPU contention being monitored? "prstat" without options is nearly useless for a multithreaded app on a multi-CPU (or multi-core/multi-thread) system. mpstat is only useful if threads never migrate between CPUs. "prstat -mL" gives a nice picture of how busy each LWP (t
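For reference, a minimal sketch of the invocation being described (run on the Solaris side; the 5-second interval is an arbitrary choice):

    # Microstate accounting per LWP (thread), refreshed every 5 seconds.
    # USR/SYS show where each thread spends its time, and LAT shows time
    # spent waiting for a CPU -- a thread near 100% USR+SYS is pinned
    # even when aggregate CPU usage looks low.
    prstat -mL 5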

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Mike Gerdts
On Sun, Jul 4, 2010 at 10:08 AM, Ian D wrote:
> What I don't understand is why, when I run a single query, I get <100 IOPS and <3MB/sec.  The setup can obviously do better, so where is the bottleneck?  I don't see any CPU core on any side being maxed out so it can't be it...

In what way is C

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Mike Gerdts
On Sun, Jul 4, 2010 at 11:28 AM, Bob Friesenhahn wrote:
>> Ok... so we've rebuilt the pool as 14 pairs of mirrors, each pair having one disk in each of the two JBODs.  Now we're getting about 500-1000 IOPS (according to zpool iostat) and 20-30MB/sec in random read on a big database.  D

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Bob Friesenhahn
Ok... so we've rebuilt the pool as 14 pairs of mirrors, each pair having one disk in each of the two JBODs.  Now we're getting about 500-1000 IOPS (according to zpool iostat) and 20-30MB/sec in random read on a big database.  Does that sound right? I am not sure who wrote the above text sin
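For anyone reproducing these numbers, a minimal sketch of how the per-vdev figures can be watched (the pool name is illustrative):

    # -v breaks read/write operations and bandwidth down to each
    # mirror pair and individual disk, sampled every 5 seconds
    zpool iostat -v tank 5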

Re: [zfs-discuss] zpool import hangs indefinitely (retry post in parts; too long?)

2010-07-04 Thread Andrew Jones
> > - Original Message -
> > Victor,
> >
> > The zpool import succeeded on the next attempt following the crash that I reported to you by private e-mail!
> >
> > For completeness, this is the final status of the pool:
> >
> >   pool: tank
> >  state: ONLINE
> >   scan: resilvere
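For completeness, a minimal sketch of the command sequence involved in bringing back a pool like this (the exact flags needed will depend on the state the pool was left in):

    zpool import          # scan attached devices for importable pools
    zpool import tank     # import the pool found above by name
    zpool status tank     # confirm its health and the resilver result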

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Ian D
> Ok... so we've rebuilt the pool as 14 pairs of mirrors, each pair having one disk in each of the two JBODs. Now we're getting about 500-1000 IOPS (according to zpool iostat) and 20-30MB/sec in random read on a big database. Does that sound right?

> Seems right, as Erik said.

Btw, do you

Re: [zfs-discuss] NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?

2010-07-04 Thread Roy Sigurd Karlsbakk
- Original Message -
> Compared to b134? Yes! We have fixed many bugs that still exist in 134.

Where can I find a list of these?

Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essenti

Re: [zfs-discuss] NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?

2010-07-04 Thread Garrett D'Amore
Compared to b134? Yes! We have fixed many bugs that still exist in 134.

"Fajar A. Nugraha" wrote:
> On Sun, Jul 4, 2010 at 12:22 AM, Garrett D'Amore wrote:
>> I am sorry you feel that way.  I will look at your issue as soon as I am able, but I should say that it is almost certain that what

Re: [zfs-discuss] zpool import hangs indefinitely (retry post in parts; too long?)

2010-07-04 Thread Roy Sigurd Karlsbakk
- Original Message -
> Victor,
>
> The zpool import succeeded on the next attempt following the crash that I reported to you by private e-mail!
>
> For completeness, this is the final status of the pool:
>
>   pool: tank
>  state: ONLINE
>   scan: resilvered 1.50K in 165h28m with 0 erro

Re: [zfs-discuss] NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?

2010-07-04 Thread Fajar A. Nugraha
On Sun, Jul 4, 2010 at 12:22 AM, Garrett D'Amore wrote:
> I am sorry you feel that way.  I will look at your issue as soon as I am able, but I should say that it is almost certain that whatever the problem is, it probably is inherited from OpenSolaris and the build of NCP you were testing

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Roy Sigurd Karlsbakk
To summarise, putting 28 disks in a single vdev is nothing you would do if you want performance. You'll end up with as many IOPS as a single drive can do. Split it up into smaller (<10 disk) vdevs and try again. If you need high performance, put them in a striped mirror (aka RAID1+0)
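A minimal sketch of the striped-mirror layout being recommended (device names are hypothetical; with 28 disks the pattern continues out to 14 mirror pairs, one disk from each JBOD per pair):

    # Each "mirror A B" clause is one top-level vdev; ZFS stripes across
    # all top-level vdevs, giving RAID1+0 behaviour
    zpool create tank \
      mirror c1t0d0 c2t0d0 \
      mirror c1t1d0 c2t1d0 \
      mirror c1t2d0 c2t2d0
    # ...continue adding mirror pairs up to 14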