On 10/18/06, Nikolay Kuznetsov <[EMAIL PROTECTED]> wrote:

> hmm.... I never thought of it that way.  My initial reaction is no.
> Suspend enable/disable and the global thread lock are separate, distinct
> concepts.  The thread lock should protect the VM-internal thread structs
> when they are being modified.  For example, the thread lock should allow
> only one thread create/die at any given instant.  The enable/disable
> state is incidental to this event.  This is independent of the concept of
> a thread running native code being in a state where the GC can find all
> its live references.  If a thread needs to grab the thread lock, of
> course, it needs to put itself in a suspend-enable mode because it might
> have to wait for the lock.

Yes, I agree that the global lock allows only one thread to create/die (and
so on) at any given moment, while suspend_disable/enable affect only the
suspension functionality. But in fact suspend_disable is a per-thread lock
for suspension: once it is taken (suspend_disable called), no other thread
can suspend that particular thread until the lock is released
(suspend_enable called). And I believe that additional synchronization on
top of this is excessive and very expensive.


This is interesting.  A thread's suspend enable/disable state is basically
one bit of thread-local storage info that is only written by the owning
thread and only read by other threads in the system.  There is no lock
protocol on this bit, so toggling it should be a very cheap operation.  Is
there evidence that this operation is expensive?
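A minimal sketch of that idea, written in C11 terms (which postdate this
discussion) and with hypothetical names -- the real DRLVM/hythread struct
and functions differ -- just to make the "one owner-written bit, no lock"
point concrete:

    #include <stdatomic.h>

    /* Hypothetical per-thread record; the real DRLVM struct is different. */
    typedef struct VmThread {
        /* nonzero = suspend-enabled: the GC may treat this thread as
         * stopped and can find all of its live references. */
        atomic_int suspend_enabled;
        /* ... other per-thread state ... */
    } VmThread;

    /* Written only by the owning thread -- a single store, no lock taken. */
    static void suspend_enable(VmThread *self)  { atomic_store(&self->suspend_enabled, 1); }
    static void suspend_disable(VmThread *self) { atomic_store(&self->suspend_enabled, 0); }

    /* Read by any other thread (e.g. the GC or a suspender) -- again no lock. */
    static int is_suspend_enabled(VmThread *t)
    {
        return atomic_load(&t->suspend_enabled);
    }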

Also, note that we have to take into account the hardware memory model.
And, as fate would have it, different HW has different memory models.  For
example, Intel 32-bit has what is known as "write ordering".  Basically this
means that writes inside of a CPU will hit the SMP coherency domain in the
order of the program.  There is no guarantee precisely when the writes hit
the bus.  Bottom line: thread A can toggle its enable/disable bit, and other
CPUs will _eventually_ see the writes in the order they happened.  PPC is
different, IPF is different.
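To make the ordering concern concrete, here is a sketch (again in C11
terms, not DRLVM code): the owning thread would publish the bit with
release semantics, so that its earlier writes become visible first.  On
IA-32 the hardware's write ordering already provides this; on PPC or IPF
the compiler has to emit an explicit barrier.

    #include <stdatomic.h>

    void publish_suspend_enabled(atomic_int *suspend_enabled)
    {
        /* memory_order_release: every write this thread made before this
         * store (e.g. spilling live references somewhere the GC can scan)
         * becomes visible to other CPUs no later than the flag itself. */
        atomic_store_explicit(suspend_enabled, 1, memory_order_release);
    }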

Grabbing the thread system lock will get expensive if it is done at a high
rate.  My initial hunch is that grabbing the thread system lock happens at
low frequency.  Why?  Because operations such as thread create/kill, thread
suspend/resume, get thread group, thread interrupt, etc. happen at rather
low frequency.  Is there evidence that workloads we care about will cause
high-frequency thread system lock traffic?

Also, my opinion is that the suspension scheme is the last place in DRLVM
that should be changed without an open issue or problem that depends on it
(or do we actually have problems in the GC with regard to suspension?). Do
you really think that the current scheme is unsafe and should be
redesigned?


If the "current scheme" is the same that we had 1 or 2 years ago, the answer
is no.  I am really hoping that all of this is simply an implementation
bug.  The bottom line is that to make the system easy to reason about, a
thread should always be in suspend_enable mode before it does anything that
might block.
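As a sketch of that rule (hypothetical helper names, not the actual DRLVM
entry points): the thread flips itself to suspend-enabled before any call
that can block, and flips back only after it owns the resource.

    #include <pthread.h>
    #include <stdatomic.h>

    typedef struct VmThread { atomic_int suspend_enabled; } VmThread;  /* as above */

    static pthread_mutex_t thread_system_lock = PTHREAD_MUTEX_INITIALIZER;

    static void lock_thread_system(VmThread *self)
    {
        /* Enable suspension first: waiting for the lock can take an
         * arbitrarily long time, and the GC must be able to proceed
         * without waiting on this thread. */
        atomic_store(&self->suspend_enabled, 1);

        pthread_mutex_lock(&thread_system_lock);    /* may block */

        /* Now that we hold the lock, go back to suspend-disabled mode
         * before touching the VM-internal thread structures the lock
         * protects.  A real implementation would also check here for a
         * suspension request that arrived while we were blocked. */
        atomic_store(&self->suspend_enabled, 0);
    }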

Nik.

---------------------------------------------------------------------
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




--
Weldon Washburn
Intel Middleware Products Division
