Yes, that's what I meant. I'd probably choose a more descriptive name, like
"DiscardThreadSpecificMetadata()" or somesuch. I'm not the right person to
approve/review an API change like this, though, so sending a proposal to
v8-dev sounds good.

On Mon, Oct 19, 2015 at 5:30 PM, Alex Kodat <alexko...@gmail.com> wrote:

> Fair enough. I now understand that by "unintrusive" you meant "can't cause
> bugs" -- a very good interpretation.
>
> So would you be amenable to something like the following in v8.h (in the
> Isolate class block):
>
>   /**
>    * Indicates that the current thread is finished using this isolate.
>    * The isolate must not be locked or entered by the current thread.
>    */
>   void ThreadDone();
>
> ? I promise I won't touch a single existing method. If you're OK with this,
> I should probably switch over to v8-dev for any further issues? I
> appreciate the time you've given this.
>
> On Monday, October 19, 2015 at 3:01:47 AM UTC-7, Jakob Kummerow wrote:
>>
>> Well, there can be N threads acquiring and releasing locks for M isolates
>> in arbitrary sequences. I don't think extending the Locker in any way would
>> be the right approach. If anything, it should be a separate API call with
>> the semantics "I, the embedder, promise that the thread with the id X will
>> never come back, please discard any and all data associated with it".
>>
>> That said, this issue isn't really on my list of things to spend time on.
>> In particular, that means I don't intend to think through the implications
>> and/or refactorings that may or may not be necessary or desirable. When in
>> doubt, I'd prefer to keep everything as it is.
>>
>> On Mon, Oct 19, 2015 at 7:56 AM, Alex Kodat <alex...@gmail.com> wrote:
>>
>>> Sorry, I was too hasty. Obviously, one doesn't need the Isolate lock to
>>> remove PerIsolateThreadData from the linked list; one needs
>>> the thread_data_table_mutex_ lock. FWIW, it strikes me as very odd that
>>> ThreadDataTable is process-wide with a static pointer. Given that its lone
>>> data member is the chain anchor for PerIsolateThreadData objects, it seems
>>> a ThreadDataTable object could live inside the Isolate without taking up
>>> any more space, and could probably be protected by the Isolate lock,
>>> eliminating the need for a separate mutex.
>>>
>>> Nevertheless, I was wrong and, in theory, one could have a totally
>>> Locker-independent call to free a thread's PerIsolateThreadData. But since
>>> one could not free the current thread's PerIsolateThreadData while there is
>>> a Locker on the thread's stack, freeing PerIsolateThreadData still seems
>>> pretty tightly associated with the Locker class, so I think I'd stick with
>>> my proposals.
>>>
>>> Also, FWIW, I did a bit more research and the only thing of substance
>>> that seems to survive a top_level_ && has_lock_ Locker destruction is a
>>> thread's Simulator. And while I admit I don't fully understand the
>>> Simulator code, it seems unlikely that a thread's Simulator would hold
>>> state that would need to survive an unlock/lock. Admittedly, deleting and
>>> then reconstructing a Simulator would not be cheap, but I would assume
>>> someone using a Simulator would not be expecting particularly high
>>> performance anyway?
>>>
>>>
>>> On Sunday, October 18, 2015 at 11:06:27 AM UTC-7, Alex Kodat wrote:
>>>>
>>>> Jakob,
>>>>
>>>> Thanks for that. I might just take a swing at an unintrusive patch.
>>>> Along those lines, it seems that thread resource cleanup would be closely
>>>> tied to the Locker, as one would need the isolate lock while freeing
>>>> PerIsolateThreadData but would presumably want to release the lock
>>>> immediately afterward. So it seems the most logical approach would be a
>>>> Locker class variable called, say, cleanup_ that's set either with an
>>>> extra parameter on the Locker constructor or via a Locker method, to
>>>> clean up at destruction time (both with appropriate semantics if not
>>>> top_level_ or has_lock_).
>>>>
>>>> But I just want to make sure this couldn't be made even less intrusive
>>>> by always cleaning up in the Locker destructor if top_level_ and
>>>> has_lock_ are set. As it is, in this case
>>>> ThreadManager::FreeThreadResources ends up being called, which doesn't
>>>> seem to leave a heck of a lot of useful stuff around for the thread, so
>>>> leaving a few crumbs around seems mainly an efficiency issue -- if
>>>> someone has a lot of top-level Locker brackets in a loop, there might be
>>>> a bit of extra object creation/destruction if
>>>> ThreadManager::FreeThreadResources always does a full thread cleanup,
>>>> but it's hard to imagine that being significant for anyone. In the
>>>> unlikely event that it is, an embedder can create an outer Locker with
>>>> nested Unlockers, an approach that would have no v8 release dependencies.
>>>>
>>>> So which approach would you prefer: just do it, Locker class method, or
>>>> Locker constructor parameter?
>>>>
>>>> Thanks
>>>>
>>>> On Sunday, October 18, 2015 at 4:07:07 AM UTC-7, Jakob Kummerow wrote:
>>>>>
>>>>> On Sun, Oct 18, 2015 at 8:16 AM, Alex Kodat <alex...@gmail.com> wrote:
>>>>>
>>>>>> If I have an app that steadily creates and joins threads, is there a
>>>>>> good way of cleaning up the thread-specific data when a thread
>>>>>> terminates? Looking at the v8 code, it seems that
>>>>>> ThreadManager::FreeThreadResources in v8threads.cc would be a place
>>>>>> this might happen, when called from the Locker destructor when
>>>>>> top_level_ is set. But maybe, for good reason, not all thread-specific
>>>>>> data is cleaned up?
>>>>>>
>>>>>
>>>>> Just because a thread releases a lock doesn't mean that thread is
>>>>> about to die. It might come back to V8 later. So any cleanup would
>>>>> have to be triggered explicitly by the embedder; it can't just happen
>>>>> automatically.
>>>>>
>>>>>
>>>>>> More specifically, as I have lots of threads use an isolate and then
>>>>>> terminate, their Isolate::PerIsolateThreadData remains queued off of
>>>>>> the Isolate::ThreadDataTable list_. While this is essentially a memory
>>>>>> leak, even worse, the queue just gets crazy long and the master (so
>>>>>> first Isolate user) thread's Isolate::PerIsolateThreadData (being at
>>>>>> the end) takes crazy long to find. And the entire massive queue has to
>>>>>> be scanned each time a new thread uses the Isolate, to determine that
>>>>>> the thread does not yet have an Isolate::PerIsolateThreadData.
>>>>>> Eventually v8 ends up spending most of its time scanning the
>>>>>> Isolate::PerIsolateThreadData queue.
>>>>>>
>>>>>
>>>>> Yes.
>>>>>
>>>>>
>>>>>> The solution would be to be able to get to
>>>>>> Isolate::ThreadDataTable::Remove for defunct threads, but I didn't see
>>>>>> a path there from any embedder API functions. Is there one? And it
>>>>>> looks like there might be a few other thread-specific bits and bobs
>>>>>> that might be left lying around, anyway.
>>>>>>
>>>>>> Maybe v8 just isn't built with apps that steadily create and destroy
>>>>>> threads in mind, and such apps should just use their own thread pool
>>>>>> managers and avoid allowing threads to terminate?
>>>>>>
>>>>>
>>>>> Precisely.
>>>>>
>>>>> I think we would probably accept a patch (as long as it's not overly
>>>>> intrusive) that adds the ability to discard thread-specific data. It's
>>>>> just never been high enough on our priority list, because as you
>>>>> already said, embedders can easily employ a thread pool to get by.
>>>>>
>>>>>
>>>>>> Easy enough to do that, though it's even easier if we don't have to --
>>>>>> the overhead of creating and joining threads is pretty tiny these days,
>>>>>> especially relative to the "real work" that likely happens on each
>>>>>> thread.
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> --
>>>>>> --
>>>>>> v8-users mailing list
>>>>>> v8-u...@googlegroups.com
>>>>>> http://groups.google.com/group/v8-users
>>>>>> ---
>>>>>> You received this message because you are subscribed to the Google
>>>>>> Groups "v8-users" group.
>>>>>> To unsubscribe from this group and stop receiving emails from it,
>>>>>> send an email to v8-users+u...@googlegroups.com.
>>>>>> For more options, visit https://groups.google.com/d/optout.
>>>>>>
>>>>>
>>>>> --
>>
>> --
