I checked again, and apparently setting the priority of the
finalization process to 61 does _not_ solve the socket problem.
Maybe I made a mistake in my earlier attempt.
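
One quick way to double-check which priority the process actually ends
up at (a sketch using only standard reflection; the process started by
WeakArray>>restartFinalizationProcess should show up among the results):

	(Process allInstances collect: [:p | p priority -> p]) inspect.
	"Find the finalization process in the inspector and see whether
	 its priority really is 61."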

Grrr...

Alexandre


On 30 Apr 2009, at 05:40, John M McIntosh wrote:

> Further to this, I'm wondering if there is something else "clever"
> going on.
>
> Should the finalization process run at priority 61?  See
> WeakArray>>restartFinalizationProcess
> Perhaps someone can check that?
>
> Let me ramble on...
>
> In the past you had the event polling going on at priority 40 in the
> 'UI process'.
> That would grind away...
>
> You also had the 'event tickler' that ran every 500 ms at priority 60.
>
> And there in the middle was the key player here, the 'WeakArray
> finalization process', at priority 50.
>
> In this problem area, consider that the socket creation fails because
> the number of sockets allocated has reached the limit (the unix
> limit, not seen on Windows).
> Then this is because either (a) we are actually holding onto
> thousands of sockets on purpose,
> or (b) we have one live socket and thousands of zombies that have
> been tenured to OldSpace but not yet GCed, and Unix is unhappy with us.
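>
> A quick way to tell those two cases apart (a sketch; I'm assuming
> Socket>>isValid as the liveness test, adjust if your image uses a
> different selector):
>
> 	| all |
> 	all := Socket allInstances.
> 	{ all size. (all select: [:s | s isValid]) size } inspect.
> 	"First number: total Socket objects held; second: sockets with a
> 	 still-valid handle. A huge gap means zombies waiting to be finalized."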
>
> Now when the socket create fails, that causes, well, another
> pointless attempt at creation (why?) but also a full GC.
> The full GC will of course signal the 'WeakArray finalization
> process' so that it can gently destroy sockets.
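>
> In code the retry is roughly this pattern (a sketch, not the actual
> Socket source; primCreateSocket just stands in for whatever primitive
> does the real allocation):
>
> 	| handle |
> 	handle := self primCreateSocket.
> 	handle ifNil: [
> 		Smalltalk garbageCollect.		"full GC signals the finalization process"
> 		handle := self primCreateSocket.	"the single, possibly pointless, retry"
> 		handle ifNil: [self error: 'socket creation failed']].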
>
> But let's consider what's NEW here: the 'input events fetching
> process' now runs at 2x the speed of the older UI process and
> consumes 10-30% of the CPU.
> I'm not sure why it was acceptable before to look every 1/50 of a
> second, but now it has to be every 1/100 of a second.
>
> But it's interesting that it's running at priority 60, which means
> it's sucking CPU away from the weak array finalization process.
>
> Now, as we know, the weak array finalization process is rather CPU
> intensive, so I wonder if just enough CPU is taken away from the
> finalization process
> that it can't do enough work before the retry for the socket
> allocation leaps in and fails for a final time?
>
> Well, of course, I'm not sure why the finalization process wouldn't
> finalize all the zombie sockets in one go when the full GC completes,
> but that would require some more testing...
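>
> A sketch of such a test: force a full GC, give the finalization
> process a moment to catch up, then count what is left. If thousands
> of Socket instances survive, finalization really isn't keeping up.
>
> 	Smalltalk garbageCollect.
> 	(Delay forSeconds: 1) wait.	"let the finalization process run"
> 	Socket allInstances size inspect.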
>
>
> On 29-Apr-09, at 8:09 PM, John M McIntosh wrote:
>
>> Er, maybe someone doing the testing can stick a
>>     Socket allInstances size inspect
>> in at the point where the exception is signaled. I think it would
>> be enlightening to see what the value is.
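>>
>> Something along these lines, wrapped around whichever connect call
>> is failing (a sketch; the host and port are just placeholders):
>>
>> 	[Socket newTCP
>> 		connectTo: (NetNameResolver addressForName: 'example.com')
>> 		port: 80]
>> 		on: Error
>> 		do: [:ex | Socket allInstances size inspect. ex pass].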
>>
>> On 29-Apr-09, at 7:52 PM, Cameron Sanders wrote:
>>
>>> Socket status must Unconnected before opening a new
>>> connection
>>
>> --
>> ======================================================================
>> John M. McIntosh <[email protected]>   Twitter: squeaker68882
>> Corporate Smalltalk Consulting Ltd.  http://www.smalltalkconsulting.com
>> ======================================================================
>>
>>
>>
>>
>
> --
> ======================================================================
> John M. McIntosh <[email protected]>   Twitter: squeaker68882
> Corporate Smalltalk Consulting Ltd.  http://www.smalltalkconsulting.com
> ======================================================================
>
>
>
>
>

-- 
_,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
Alexandre Bergel  http://www.bergel.eu
^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.






_______________________________________________
Pharo-project mailing list
[email protected]
http://lists.gforge.inria.fr/cgi-bin/mailman/listinfo/pharo-project
