This is what Oracle says about swap for 11gR2. The comment about
subtracting ISM is not correct. A simple test shows that ISM does consume
swap (even if it's not DISM). Think about what happens when a memory
segment is created (before it becomes ISM), if someone happens to attach
in non-ISM mode, and when everyone detaches from the segment and it
ceases to be ISM. In the first and last stages swap space is *required*,
and the VM system reserves the space needed when the segment is first
created. I would be cautious about Oracle assurances...

Jim

On 10/29/2010 2:01 PM, Jim Mauro wrote:

Thanks Mike. Good point on the script. Indeed, use of speculative
tracing would be a better fit here. I'll see if I can get something
together and send it out.

Thanks,
/jim

On Oct 29, 2010, at 4:45 PM, Mike Gerdts wrote:

On Fri, Oct 29, 2010 at 2:50 PM, Robin Cotgrove <ro...@rjcnet.co.uk> wrote:

Sorry guys. Swap is not the issue. We've had this confirmed by Oracle,
and I can clearly see there is 96GB of swap available on the system and
~50GB of main memory.

By whom at Oracle? Not everyone is equally qualified. I would tend to
trust Jim Mauro (who co-wrote the books[1] on Solaris internals,
performance, and dtrace) over most of the people you will reach through
normal support channels.

1. http://www.amazon.com/Jim-Mauro/e/B001ILM8NC/

How do you know that available swap doesn't momentarily drop? I've run
into plenty of instances where a system has tens of gigabytes of free
memory but is woefully short on reservable swap (virtual memory, as Jim
approximates). Usually "vmstat 1" is helpful in observing spikes, but as
I said before, this could miss very short spikes. If you've already done
this and seen that swap is unlikely to be an issue, knowing that would
be useful. If you are measuring the amount of reservable swap with
"swap -l", you are doing it wrong.

I do agree that there can be other shortfalls that can cause this.
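For readers following along, Mike's distinction between "swap -l" and
reservable swap can be illustrated with the Solaris commands involved.
This is a sketch, not output from the system under discussion:

```
# "swap -l" lists only the physical swap devices and their free blocks.
# Reservations backed by RAM never appear here, so it understates
# virtual-memory pressure.
$ swap -l

# "swap -s" reports the full virtual swap accounting: bytes allocated,
# reserved, used, and still available for reservation. A fork() fails
# with EAGAIN when the available figure cannot cover the child's
# reservation, even while "swap -l" looks healthy.
$ swap -s

# "vmstat 1" (watch the "swap" and "free" columns) can catch momentary
# dips, though very short spikes may still fall between samples.
$ vmstat 1
```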
This may call for speculative tracing of stacks across the fork entry
and return calls, displaying results only when the fork fails with
EAGAIN. Jim's second script is similar to what I suggest, except that it
doesn't show the code path taken between syscall::forksys:entry and
syscall::forksys:return. Also, I would be a little careful running the
second script as-is for long periods of time if you have a lot of
forksys activity with unique stacks. As written, @ks may grow rather
large over time because the successful forks are not cleared.

--
Mike Gerdts
http://mgerdts.blogspot.com/
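The speculative approach described above might look something like the
following sketch. This is not the script from the thread; it assumes
Solaris DTrace with the forksys syscall, hard-codes EAGAIN as 11 (its
Solaris errno value), and records kernel stacks only (add ustack() for
user stacks):

```
#!/usr/sbin/dtrace -s

inline int EAGAIN = 11;         /* Solaris errno value for EAGAIN */

syscall::forksys:entry
{
        self->spec = speculation();     /* grab a speculation buffer */
        speculate(self->spec);
        stack();                        /* record the stack speculatively */
}

/* fork failed with EAGAIN: make the speculative data visible */
syscall::forksys:return
/self->spec && errno == EAGAIN/
{
        commit(self->spec);
        self->spec = 0;
}

/* fork succeeded (or failed some other way): throw the data away */
syscall::forksys:return
/self->spec/
{
        discard(self->spec);
        self->spec = 0;
}
```

Because failed forks are committed and everything else is discarded, the
principal buffer only ever holds stacks from EAGAIN failures, avoiding
the unbounded-growth concern raised about @ks.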
James Litchfield | Senior Consultant
Phone: +1 4082237059 | Mobile: +1 4082180790
Oracle | Oracle ACS | California
_______________________________________________ dtrace-discuss mailing list dtrace-discuss@opensolaris.org