On Jan 31, 2011, at 9:20 PM, David Holmes wrote:


Where we have:

961 void JvmtiDeferredEventQueue::enqueue(const JvmtiDeferredEvent& event) {
...
967   QueueNode* node = new QueueNode(event);

and

1007 void JvmtiDeferredEventQueue::add_pending_event(
1008     const JvmtiDeferredEvent& event) {
1009
1010   QueueNode* node = new QueueNode(event);

Will an allocation failure here terminate the VM? I suspect it will but I don't think it should. Can we not add a link field to the event objects rather than have to create Nodes to hold them?

I can add a NULL check to those spots. There's no need for us to segv if we can't allocate memory for this. We do need a C-heap allocated object since we're passing these events from thread-to-thread. Adding the link in the event object itself wouldn't help - it would just mean we have to allocate the event object in C-heap instead.

As per your other email, a NULL check is pointless, as "new" will never return null but will instead call vm_exit_out_of_memory to abort the VM - which is exactly my point. I don't believe that a failure to allocate in this code should be considered a fatal error for the VM.

I agree, there may be situations where memory allocation failures should be considered non-fatal, and the Hotspot allocation routines should offer some support for that. However, it's not clear to me that the various specifications we adhere to have "escape-clause" situations where events can be dropped or features scaled back when/if we run into memory constraints. They should, no doubt, but I don't recall seeing anything like that at present. It's a can of worms that is probably worth opening up, but it's a bigger issue than this bugfix.
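For illustration only, in plain C++ terms a non-fatal path would look something like the sketch below; HotSpot's CHeapObj 'new' would need equivalent support before we could actually write this:

  #include <new>  // for std::nothrow

  // Hypothetical non-fatal allocation (standard C++ nothrow form);
  // HotSpot's 'new' calls vm_exit_out_of_memory instead.
  QueueNode* node = new (std::nothrow) QueueNode(event);
  if (node == NULL) {
    // Out of C-heap: drop the deferred event (or scale the feature
    // back) rather than terminating the whole VM.
    return;
  }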


I don't understand why we would have to allocate the event objects in C-heap; all we want is a link field so that they can form their own queue without the need for wrapper Nodes.

447 class JvmtiDeferredEvent VALUE_OBJ_CLASS_SPEC {
448   friend class JvmtiDeferredEventQueue;
449  private:
      JvmtiDeferredEvent _next; // for chaining events together
     public:
      JvmtiDeferredEvent next() { return _next; }
      void set_next(JvmtiDeferredEvent e) {
       assert(_next == NULL, "overwriting linked event!");
       _next = e;
      }

Further, this would allow you to dequeue all the events at once instead of the service thread pulling them off one at a time.

That code doesn't work in C++. You can't have a type contain itself - how big would it be? I suppose what you're suggesting is the Java semantics, where the '_next' field is a reference/pointer. If so, you've just described the 'QueueNode' class. Could I collapse the definitions and have just one class instead of both JvmtiDeferredEvent and QueueNode? Sure, but I see value in having them separate. For one, you can have different allocation schemes - QueueNodes go into the C-heap and JvmtiDeferredEvents can be value objects. Another reason is to separately encapsulate two different concepts that aren't necessarily related (list functionality and event specification).
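In pointer form, that link is essentially what QueueNode already is - roughly this (simplified; field and accessor names approximate):

  class QueueNode : public CHeapObj {
   private:
    JvmtiDeferredEvent _event;  // the event payload, held by value
    QueueNode* _next;           // the pointer-typed link
   public:
    QueueNode(const JvmtiDeferredEvent& event)
      : _event(event), _next(NULL) {}
    JvmtiDeferredEvent& event() { return _event; }
    QueueNode* next() const     { return _next; }
    void set_next(QueueNode* n) { _next = n; }
  };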

And regardless of where you put the link, the event specification has to be allocated in the C-heap in order to be enqueued on one thread (the compiler thread) and consumed on the other (the service thread) asynchronously. If the compiler thread waited for the event to be posted before continuing, then one could conceivably stack-allocate on the compiler thread and pass a reference to the event to the service thread, but making the compiler thread wait introduces deadlock possibilities when run with -Xbatch.

Batching is a separate issue altogether. The existing code could be written to batch-process events using the existing classes.
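For example, the whole list could be detached in one critical section and the events posted outside the lock - a hypothetical sketch (assuming the queue is guarded by Service_lock):

  // Detach the entire queue under the lock, then post each event
  // without holding it.
  QueueNode* head;
  {
    MutexLocker ml(Service_lock);
    head = _queue_head;
    _queue_head = _queue_tail = NULL;
  }
  while (head != NULL) {
    QueueNode* node = head;
    head = node->next();
    node->event().post();
    delete node;
  }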

988   if (_queue_head == NULL) {
989 // Just in case this happens in product; it shouldn't be let's not crash
990     return JvmtiDeferredEvent();
991   }

doesn't look right. Are we returning a stack-allocated instance here? (There's also a typo in the comment.)

Yes, we're returning a stack-allocated instance, which returns to the caller as a temp and is copied into a local variable there. I'll fix the comment.

It isn't "legal" to return a stack-allocated object from a method. I expect we'll get away with it but still ...

Couldn't you just allocate a static instance to use in this case?

There's nothing wrong with returning an object by value.
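To illustrate the distinction (the function names here are just for the example):

  JvmtiDeferredEvent make_event() {
    return JvmtiDeferredEvent();  // fine: the temporary is copied out
  }

  JvmtiDeferredEvent* broken_make_event() {
    JvmtiDeferredEvent e;
    return &e;                    // broken: pointer to a dead local
  }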


Ok, so in general I like the form of the new architecture, but I'm a bit unclear on the details. Non-service threads will only call:

- enqueue() to add an event
- add_pending_event() to add an event in the special case of a safepoint being active
- flush_queue() to wait for earlier events to get posted

The service thread will:

- loop in wait() until there is an event (enqueue will do the notifyAll)
- dequeue() to pull off that event

Right.
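In outline, the handshake looks like this (a simplified sketch; the lock and helper names are approximate):

  // Producer side (e.g. a compiler thread), simplified:
  void JvmtiDeferredEventQueue::enqueue(const JvmtiDeferredEvent& event) {
    MutexLocker ml(Service_lock);
    QueueNode* node = new QueueNode(event);
    if (_queue_tail == NULL) {
      _queue_head = _queue_tail = node;
    } else {
      _queue_tail->set_next(node);
      _queue_tail = node;
    }
    Service_lock->notify_all();  // wake the service thread if it's waiting
  }

  // Consumer side (the service thread's loop body), simplified:
  JvmtiDeferredEvent event;
  {
    MutexLocker ml(Service_lock);
    while (!JvmtiDeferredEventQueue::has_events()) {
      Service_lock->wait();      // looping guards against spurious wakeups
    }
    event = JvmtiDeferredEventQueue::dequeue();
  }
  event.post();                  // post outside the lock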

Now it seems that flush_queue is also called by the service thread, but I can't see where from?

It's not called directly, but it could happen if a user's JVMTI compiled-method-load/unload event handler method calls JVMTI's GenerateEvents(), so we need to code defensively here to prevent any inadvertent self-deadlocks.
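i.e., something along these lines in flush_queue() (the predicate name here is invented for illustration):

  void JvmtiDeferredEventQueue::flush_queue(Thread* current) {
    // The service thread must not block waiting for itself to post
    // the flush pseudo-event - that would be a self-deadlock.
    if (current->is_service_thread()) {
      return;  // keep draining the queue directly instead
    }
    // Normal path: enqueue a flush pseudo-event, then wait on the
    // monitor until the service thread posts it and notifies us.
  }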

I'm a little concerned about all the locking/unlocking that seems to happen, and the fact that there are potentially a lot of spurious notifications. Though this is mitigated if we only expect a couple of threads to be involved, there are ways to restructure things so that event processing can be batched, including a single notifyAll once all flush events are complete.

The enqueue code does not know the state of the service thread (nor should it have to), so it will always have to send a notification to wake it up if it's sleeping (even if the events are batched). And all the waiting code loops checking for conditions, so a spurious notification ought to be harmless.

There is a single notifyAll that occurs when flushing is complete. This is implemented by the flush pseudo-event (see JvmtiDeferredEvent::post()).
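Conceptually it works like this (simplified; the flush-completion helper is invented for illustration):

  void JvmtiDeferredEvent::post() {
    switch (_type) {
      case TYPE_FLUSH: {
        // The flush pseudo-event carries no JVMTI payload; posting it
        // just records that the queue has drained this far and issues
        // the one notifyAll that wakes every flush_queue() waiter.
        MutexLocker ml(Service_lock);
        set_flush_complete();     // hypothetical helper
        Service_lock->notify_all();
        break;
      }
      // ... the real event types invoke their JVMTI callbacks here ...
    }
  }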

No doubt this could be restructured a number of different ways, but this way is rather straightforward (easy to analyze) and appears correct. The possibility to batch-process the events in the future is there if we need it for some reason, but I don't see a need for it at the moment.

--
- Keith
