> Where can I find a list of these?
This leads to the more general question: where are *any* release notes?
I saw on Genunix that Community Edition 3.0.3 was replaced by 3.0.3-1. What
changed? I went to nexenta.org and looked around, but it wasn't immediately
obvious where to find release notes.
> Is that 38% of one CPU or 38% of all CPUs? How many CPUs does the
> Linux box have? I don't mean the number of sockets, I mean number of
> sockets * number of cores * number of threads per core. My
The server has two Intel X5570s; they are quad-core and have hyperthreading.
It would say
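For reference, that works out to 2 sockets x 4 cores x 2 threads per core = 16
logical CPUs. In Linux top, 100% means one logical CPU fully busy, so a
multithreaded process like mysqld can show well over 100% (up to 1600% on this
box); 38% is therefore roughly a third of one hardware thread, not a third of
the machine.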
On Sun, Jul 4, 2010 at 2:08 PM, Ian D wrote:
> Mem: 74098512k total, 73910728k used, 187784k free, 96948k buffers
> Swap: 2104488k total, 208k used, 2104280k free, 63210472k cached
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 17652 mysql 20 0 3553m
On Jul 4, 2010, at 8:08 AM, Ian D wrote:
> > Ok... so we've rebuilt the pool as 14 pairs of mirrors, each pair having one
> > disk in each of the two JBODs. Now we're getting about 500-1000 IOPS
> > (according to zpool iostat) and 20-30MB/sec in random read on a big
> > database. Does that sound right?
> In what way is CPU contention being monitored? "prstat" without
> options is nearly useless for a multithreaded app on a multi-CPU (or
> multi-core/multi-thread) system. mpstat is only useful if threads
> never migrate between CPUs. "prstat -mL" gives a nice picture of how
> busy each LWP (thread) is.
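A minimal sketch of that kind of invocation, assuming the database process is
mysqld and treating the PID lookup and interval as placeholders:

  # per-thread (LWP) microstate accounting, refreshed every 5 seconds
  prstat -mL -p `pgrep -d, mysqld` 5

  # per-CPU utilisation for comparison
  mpstat 5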
On Sun, Jul 4, 2010 at 10:08 AM, Ian D wrote:
> What I don't understand is why, when I run a single query, I get <100 IOPS
> and <3MB/sec. The setup can obviously do better, so where is the
> bottleneck? I don't see any CPU core on any side being maxed out so it
> can't be it...
In what way is CPU contention being monitored?
On Sun, Jul 4, 2010 at 11:28 AM, Bob Friesenhahn wrote:
>>
>> Ok... so we've rebuilt the pool as 14 pairs of mirrors, each pair having
>> one disk in each of the two JBODs. Now we're getting about 500-1000 IOPS
>> (according to zpool iostat) and 20-30MB/sec in random read on a big
>> database. Does that sound right?
Ok... so we've rebuilt the pool as 14 pairs of mirrors, each pair
having one disk in each of the two JBODs. Now we're getting about
500-1000 IOPS (according to zpool iostat) and 20-30MB/sec in random
read on a big database. Does that sound right?
I am not sure who wrote the above text since the attribution is missing.
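As a rough sketch (with made-up cXtYdZ device names, one disk from each JBOD
in every pair), a layout like the one described is built and watched along
these lines:

  zpool create tank \
      mirror c0t0d0 c1t0d0 \
      mirror c0t1d0 c1t1d0 \
      mirror c0t2d0 c1t2d0
  # ...and so on for the remaining eleven pairs

  # per-vdev IOPS and bandwidth, sampled every 5 seconds
  zpool iostat -v tank 5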
>
> - Original Message -
> > Victor,
> >
> > The zpool import succeeded on the next attempt
> following the crash
> > that I reported to you by private e-mail!
> >
> > For completeness, this is the final status of the
> pool:
> >
> >
> > pool: tank
> > state: ONLINE
> > scan: resilvered 1.50K in 165h28m with 0 errors
> Ok... so we've rebuilt the pool as 14 pairs of mirrors, each pair having one
> disk in each of the two JBODs. Now we're getting about 500-1000 IOPS
> (according to zpool iostat) and 20-30MB/sec in random read on a big database.
> Does that sound right?
Seems right, as Erik said. Btw, do you
- Original Message -
> Compared to b134? Yes! We have fixed many bugs that still exist in
> 134.
Where can I find a list of these?
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
Compared to b134? Yes! We have fixed many bugs that still exist in 134.
"Fajar A. Nugraha" wrote:
> On Sun, Jul 4, 2010 at 12:22 AM, Garrett D'Amore wrote:
>> I am sorry you feel that way. I will look at your issue as soon as I am
>> able, but I should say that it is almost certain that whatever the problem
>> is, it probably is inherited from OpenSolaris and the build of NCP you were
>> testing.
- Original Message -
> Victor,
>
> The zpool import succeeded on the next attempt following the crash
> that I reported to you by private e-mail!
>
> For completeness, this is the final status of the pool:
>
>
> pool: tank
> state: ONLINE
> scan: resilvered 1.50K in 165h28m with 0 errors
On Sun, Jul 4, 2010 at 12:22 AM, Garrett D'Amore wrote:
> I am sorry you feel that way. I will look at your issue as soon as I am
> able, but I should say that it is almost certain that whatever the problem
> is, it probably is inherited from OpenSolaris and the build of NCP you were
> testing.
> To summarise, putting 28 disks in a single vdev is not something you would do
> if you want performance. You'll end up with as many IOPS as a single drive
> can do. Split it up into smaller (<10 disk) vdevs and try again. If you need
> high performance, put them in a striped mirror (aka RAID1+0).
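A rough illustration of the difference, using eight hypothetical disks just to
show the two layouts (device names are made up):

  # one wide raidz2 vdev: random reads get roughly one disk's worth of IOPS
  zpool create widepool raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 \
      c1t0d0 c1t1d0 c1t2d0 c1t3d0

  # the same eight disks as four mirrored pairs (RAID1+0): random-read IOPS
  # scale with the number of vdevs
  zpool create fastpool mirror c0t0d0 c1t0d0 mirror c0t1d0 c1t1d0 \
      mirror c0t2d0 c1t2d0 mirror c0t3d0 c1t3d0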