On 06/19/2012 11:05 AM, Sašo Kiselkov wrote:
> On 06/18/2012 07:50 PM, Roch wrote:
>>
>> Are we hitting :
>> 7167903 Configuring VLANs results in single threaded soft ring fanout
>
> Confirmed, it is definitely this.
Hold the phone, I just tried unconfiguring all of the VLANs in the system a
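For anyone chasing the same bug, the VLAN links can be listed and torn down
with dladm to test the theory (the link name below is hypothetical):

  # dladm show-vlan                  (list configured VLANs)
  # dladm delete-vlan vlan123        (remove one and re-test the fanout)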
On 06/18/2012 12:05 AM, Richard Elling wrote:
> You might try some of the troubleshooting techniques described in Chapter 5
> of the DTrace book by Brendan Gregg and Jim Mauro. It is not clear from your
> description that you are seeing the same symptoms, but the technique should
> apply.
> -- r
On 06/13/2012 03:43 PM, Roch wrote:
>
> Sašo Kiselkov writes:
> > On 06/12/2012 05:37 PM, Roch Bourbonnais wrote:
> > >
> > > So the xcalls are a necessary part of memory reclaiming, when one needs to
> > > tear down the TLB entry mapping the physical memory (which can from here
> > > on be repurposed).
> > > So the xcalls are just part of this.
On 06/12/2012 07:19 PM, Roch Bourbonnais wrote:
>
> Try with these /etc/system tunings:
>
> set mac:mac_soft_ring_thread_bind=0
> set mac:mac_srs_thread_bind=0
> set zfs:zio_taskq_batch_pct=50
>
Thanks for the recommendations, I'll try and see whether it helps, but
this is going to take me a while.
Try with these /etc/system tunings:
set mac:mac_soft_ring_thread_bind=0
set mac:mac_srs_thread_bind=0
set zfs:zio_taskq_batch_pct=50
On 12 June 2012, at 11:37, Roch Bourbonnais wrote:
>
> So the xcalls are a necessary part of memory reclaiming, when one needs to tear
> down the TLB entry mapping the physical memory (which can from here on be
> repurposed).
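After a reboot, a quick sanity check that the tunings took effect is to read
the variables back with mdb; a sketch using the same tunable names as above:

  # echo 'mac_soft_ring_thread_bind/D' | mdb -k
  # echo 'zio_taskq_batch_pct/D' | mdb -k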
On Tue, Jun 12, 2012 at 11:17 AM, Sašo Kiselkov wrote:
> On 06/12/2012 05:58 PM, Andy Bowers - Performance Engineering wrote:
>> find where your NICs are bound to
>>
>> mdb -k
>> ::interrupts
>>
>> create a processor set including those cpus [ so just the nic code will
>> run there ]
>>
>> andy
>
Tried and didn't help, unfortunately. I'm still seeing drops. Wh
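For reference, Andy's suggestion amounts to something like this (the CPU IDs
are hypothetical; use whatever ::interrupts reports for the NIC vectors):

  # echo ::interrupts | mdb -k       (note the CPUs taking the NIC interrupts)
  # psrset -c 30 31                  (create a processor set from those CPUs)

Threads only run inside a processor set when explicitly bound to it, so this
leaves those CPUs free to do little else but service the NIC.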
On 06/12/2012 06:06 PM, Jim Mauro wrote:
>
>>> So try unbinding the mac threads; it may help you here.
>>
>> How do I do that? All I can find on interrupt fencing and the like is to
>> simply set certain processors to no-intr, which moves all of the
>> interrupts and it doesn't prevent the xcall storm from hitting these
>> CPUs either…
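The no-intr fencing mentioned here is done with psradm (CPU IDs hypothetical):

  # psradm -i 30 31                  (stop directing I/O interrupts at them)
  # psrinfo                          (they now show as no-intr)

As noted above, this only moves device interrupts; it does not suppress
cross-calls, which is why it does not help against the xcall storm.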
On 06/12/2012 05:37 PM, Roch Bourbonnais wrote:
>
> So the xcalls are a necessary part of memory reclaiming, when one needs to tear
> down the TLB entry mapping the physical memory (which can from here on be
> repurposed).
> So the xcalls are just part of this. They should not cause trouble, but they
> do: they consume a CPU for some time. That in turn can caus
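A quick way to confirm where the cross-calls originate is to aggregate kernel
stacks on the stock sysinfo:::xcalls probe; a sketch:

  # dtrace -n 'sysinfo:::xcalls { @[stack()] = count(); }
      tick-10s { trunc(@, 10); exit(0); }'

If the reclaim theory holds, segkmem_zio_free and the HAT unload path should
dominate the stacks.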
On 06/12/2012 05:21 PM, Matt Breitbach wrote:
> I saw this _exact_ problem after I bumped RAM from 48GB to 192GB. Low
> memory pressure seemed to be the culprit. Happened usually during storage
> vmotions or something like that which effectively nullified the data in the
> ARC (sometimes 50GB of
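One way to see whether such an event is collapsing the ARC is to sample the
arcstats kstats while it happens:

  # kstat -p zfs:0:arcstats:size 1   (ARC size, printed once a second)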
[zfs-discuss] Occasional storm of xcalls on segkmem_zio_free
Seems the problem is somewhat more egregious than I thought. The xcall
storm causes my network drivers to stop receiving IP multicast packets
and subsequently my recording applications record bad data, so
ultimately, this kind of isn't workable... I need to somehow resolve
this... I'm running four
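To quantify the receive drops while a storm is in progress, the per-link
counters can be sampled with dladm (link name hypothetical):

  # dladm show-link -s -i 5 net0     (packet/error counters every 5 seconds)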
On 06/06/2012 09:43 PM, Jim Mauro wrote:
>
> I can't help but be curious about something, which perhaps you verified but
> did not post.
>
> What the data here shows is:
> - CPU 31 is buried in the kernel (100% sys).
> - CPU 31 is handling a moderate-to-high rate of xcalls.
>
> What the data does not prove empirically is that the 100% sys time of
> CPU 31
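One way to close that gap is to profile just that CPU and aggregate kernel
stacks; a sketch using the CPU ID from the data above:

  # dtrace -n 'profile-997 /cpu == 31 && arg0 != 0/ { @[stack()] = count(); }
      tick-30s { trunc(@, 5); exit(0); }'

If the xcall path is really what is burning the CPU, xc_common/xc_call frames
should top the output; if not, the sys time is coming from somewhere else.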
On 06/06/2012 05:01 PM, Sašo Kiselkov wrote:
> I'll try and load the machine with dd(1) to the max to see if access
> patterns of my software have something to do with it.
Tried and tested, any and all write I/O to the pool causes this xcall
storm issue, writing more data to it only exacerbates it
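For reproduction, the write load can be as simple as (pool path hypothetical):

  # dd if=/dev/zero of=/tank/ddtest bs=1M count=100000

run while watching the xcal column in mpstat 1 on another terminal.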
So I have this dual 16-core Opteron Dell R715 with 128G of RAM attached
to a SuperMicro disk enclosure with 45 2TB Toshiba SAS drives (via two
LSI 9200 controllers and MPxIO) running OpenIndiana 151a4 and I'm
occasionally seeing a storm of xcalls on one of the 32 VCPUs (>10
xcalls a second). Th
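The symptom is easiest to spot in mpstat, where a single CPU pegs at 100% sys
with an outsized xcal count:

  # mpstat 1                         (watch the xcal and sys columns)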