On 06/12/2012 07:19 PM, Roch Bourbonnais wrote:
>
> Try with these /etc/system tunings:
>
> set mac:mac_soft_ring_thread_bind=0
> set mac:mac_srs_thread_bind=0
> set zfs:zio_taskq_batch_pct=50
>
Thanks for the recommendations, I'll try and see whether it helps, but
this is going to take me a while…
Try with these /etc/system tunings:
set mac:mac_soft_ring_thread_bind=0
set mac:mac_srs_thread_bind=0
set zfs:zio_taskq_batch_pct=50
On 12 June 2012, at 11:37, Roch Bourbonnais wrote:
>
> So the xcalls are a necessary part of memory reclaiming, when one needs to tear
> down the TLB entry mapping the physical memory (which can from here on be
> repurposed).
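A quick way to confirm the tunings actually took hold (a minimal sketch, assuming an illumos/Solaris kernel where those variable names exist; /etc/system is only read at boot, so this is for after the reboot):

  # read the live kernel variables back; /D prints them as 32-bit decimal
  echo "mac_soft_ring_thread_bind/D" | mdb -k
  echo "mac_srs_thread_bind/D"       | mdb -k
  echo "zio_taskq_batch_pct/D"       | mdb -k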
On 06/12/2012 05:58 PM, Andy Bowers - Performance Engineering wrote:
> find where your NICs are bound to
>
> mdb -k
> ::interrupts
>
> create a processor set including those cpus [ so just the nic code will
> run there ]
>
> andy
Tried and didn't help, unfortunately. I'm still seeing drops…
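For reference, a minimal sketch of Andy's procedure (the CPU numbers are made-up examples, not from this thread):

  # 1. see which CPUs the NIC interrupt vectors are assigned to
  echo "::interrupts -d" | mdb -k

  # 2. suppose the NIC vectors land on CPUs 2 and 3: put those CPUs into
  #    their own processor set, so unbound user threads stay off them and
  #    only the interrupt/NIC kernel work runs there
  psrset -c 2 3        # prints the new set id
  psrset -i            # confirm the set membership

  # 3. watch per-CPU interrupt load afterwards
  intrstat 1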
On 06/12/2012 06:06 PM, Jim Mauro wrote:
>
>>> So try unbinding the mac threads; it may help you here.
>>
>> How do I do that? All I can find on interrupt fencing and the like is to
>> simply set certain processors to no-intr, which moves all of the
>> interrupts and it doesn't prevent the xcall storm choosing to affect
>> these CPUs either…
>
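A sketch of what unbinding the mac threads can look like, assuming the mac_*_thread_bind tunables Roch posted elsewhere in the thread are the intended knobs (the mdb -kw writes are a live, unsupported poke; whether already-bound SRS worker threads are released without replumbing the interface is not guaranteed):

  # persistent route, takes effect after reboot:
  #   set mac:mac_soft_ring_thread_bind=0
  #   set mac:mac_srs_thread_bind=0

  # live write of the same variables on a running kernel:
  echo "mac_soft_ring_thread_bind/W 0" | mdb -kw
  echo "mac_srs_thread_bind/W 0"       | mdb -kw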
On 06/12/2012 05:37 PM, Roch Bourbonnais wrote:
>
> So the xcalls are a necessary part of memory reclaiming, when one needs to tear
> down the TLB entry mapping the physical memory (which can from here on be
> repurposed).
> So the xcalls are just part of this. They should not cause trouble, but they do:
> they consume a CPU for some time. That in turn can cause…
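Two ways to watch that in action (standard tools; the DTrace one-liner is only a sketch of where to look, not a diagnosis):

  # per-CPU cross-call rate shows up in the xcal column
  mpstat 1

  # aggregate the kernel stacks issuing cross-calls, to confirm they come
  # from TLB-shootdown / memory-reclaim paths
  dtrace -n 'sysinfo:::xcalls { @[stack()] = count(); } tick-10s { exit(0); }'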
On 06/12/2012 05:21 PM, Matt Breitbach wrote:
> I saw this _exact_ problem after I bumped RAM from 48GB to 192GB. Low
> memory pressure seemed to be the culprit. It happened usually during storage
> vmotions or something like that, which effectively nullified the data in the
> ARC (sometimes 50GB of data would be purged from the ARC). The system was so…
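A way to check whether the same pattern is at play (the kstat names are the standard arcstats; the zfs_arc_max value is only an illustrative cap, not a recommendation):

  # watch ARC size and target shrink during the workload, every 5 seconds
  kstat -p zfs:0:arcstats:size zfs:0:arcstats:c 5

  # if huge sudden ARC evictions are the trigger, a smaller ARC ceiling in
  # /etc/system bounds how much memory gets reclaimed/remapped at once:
  #   set zfs:zfs_arc_max=0x2000000000    # 128 GiB, example value only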
On 06/12/2012 03:57 PM, Sašo Kiselkov wrote:
> Seems the problem is somewhat more egregious than I thought. The xcall
> storm causes my network drivers to stop receiving IP multicast packets
> and subsequently my recording applications record bad data, so
> ultimately, this kind of isn't workable... I need to somehow resolve
> this... I'm running four…
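Counters worth watching while the drops happen (net0 is a placeholder link name; exact counter names vary a bit between releases):

  # per-link receive statistics (ierrors), sampled every 5 seconds
  dladm show-link -s -i 5 net0

  # UDP socket-buffer overflows are a common symptom when the CPU servicing
  # the NIC is stalled
  netstat -s -P udp | grep -i overflow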
On Jun 11, 2012, at 6:05 AM, Jim Klimov wrote:
> 2012-06-11 5:37, Edward Ned Harvey wrote:
>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Kalle Anka
>>>
>>> Assume we have 100 disks in one zpool. Assume it takes 5 hours to scrub one…
2012-06-12 16:45, Roch Bourbonnais wrote:
The process should be scalable:
scrub all of the data on one disk using one disk's worth of IOPS;
scrub all of the data on N disks using N disks' worth of IOPS.
That will take ~ the same total time.
If the uplink or processing power or some other bottleneck…
-r
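A back-of-the-envelope version of that claim (the 1 TiB per disk and 100 MiB/s per disk figures are assumed purely for illustration):

  awk 'BEGIN { printf "per-disk scrub time ~ %.1f hours\n", 1024^4 / (100 * 1024^2) / 3600 }'
  # -> ~2.9 hours. With N disks, each one still only reads its own ~1 TiB at
  # its own ~100 MiB/s, so the estimate stays ~2.9 hours whether N is 1 or 100,
  # until a shared bottleneck (HBA, uplink, CPU) becomes the limit.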
2012-06-12 16:20, Roch Bourbonnais wrote:
Scrubs are run at very low priority and yield very quickly in the presence of
other work.
So I really would not expect to see scrub create any impact on any other type of
storage activity.
Resilvering will more aggressively push forward on what it has to do, but
resilvering does not need to read…
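The throttling Roch describes was tunable in the illumos ZFS of that era; the names and defaults below come from that code base and may differ by release, so treat them as examples rather than recommendations:

  #   set zfs:zfs_scrub_delay=4              # ticks inserted between scrub I/Os
  #                                          # when the pool has other work
  #   set zfs:zfs_resilver_delay=2           # same idea, gentler, for resilver
  #   set zfs:zfs_resilver_min_time_ms=3000  # min time spent resilvering per txg
  #   set zfs:zfs_top_maxinflight=32         # cap on in-flight scan I/Os per
  #                                          # top-level vdev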