Not that I'd be aware of (you can specify a rule directly in an 
annotation, but that's not what I'd like to do). Though I don't think 
it would be too complicated to implement.
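
To show what I mean (and the sample Bela asked for below): BMUnit lets 
you attach a Byteman rule to a test via an annotation. A rough sketch, 
assuming BMUnit and JUnit 4 on the classpath; the test class itself is 
made up:

import org.jboss.byteman.contrib.bmunit.BMRule;
import org.jboss.byteman.contrib.bmunit.BMUnitRunner;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(BMUnitRunner.class)
public class DownTraceTest {
    @Test
    @BMRule(name = "trace Protocol.down",
            targetClass = "org.jgroups.stack.Protocol",
            targetMethod = "down(org.jgroups.Event)",
            action = "traceln(\"down: \" + $1)")
    public void traceDown() {
        // drive the protocol stack here; the rule is only woven
        // while this test runs
    }
}

But that just moves the fragile class/method strings into the test 
source; the annotations themselves are still not injection points.
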
But as I've said, I was leaning towards other AOP frameworks, or more 
low-level solutions such as Javassist. For example, Kamon [1], a 
similar tool, uses the AspectJ Weaver.
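
With AspectJ, annotation-driven injection points would look roughly 
like this (a sketch only; the @Traced annotation is invented for 
illustration, not a proposal for the real annotation set):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Traced {}

@Aspect
class TracingAspect {
    // Woven at compile- or load-time by the AspectJ weaver; when no
    // weaver is present, the annotation is inert metadata.
    @Around("execution(* *(..)) && @annotation(marker)")
    public Object trace(ProceedingJoinPoint pjp, Traced marker) throws Throwable {
        long start = System.nanoTime();
        try {
            return pjp.proceed();
        } finally {
            System.out.println(pjp.getSignature() + " took "
                    + (System.nanoTime() - start) + " ns");
        }
    }
}

This is roughly the mechanism Kamon relies on.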

Roman, do you have the document describing pros and cons of those other 
AOP frameworks?

[1] http://kamon.io/

On 11/10/2014 11:05 AM, Bela Ban wrote:
> Does Byteman allow you to use annotations as injection points ? Didn't
> know that. Can you show a sample RULE ?
>
> On 10/11/14 10:22, Radim Vansa wrote:
>> On 11/07/2014 02:27 PM, Bela Ban wrote:
>>> On 07/11/14 13:45, Radim Vansa wrote:
>>>> Hijacking thread 'Remoting package refactor' as the discussion has shifted.
>>>>
>>>> Sure, AOP is another approach. However, besides other limitations,
>>>> Byteman rules are quite fragile with respect to different versions: if
>>>> you're injecting code based on an internal implementation method, the
>>>> rule breaks when the name/signature changes. Sometimes you even have to
>>>> use AT LINE to formulate the injection point.
>>> Right. This is the same problem though as when support needs to create
>>> a (e.g. one-off) patch to be applied by a customer: they need to grab
>>> the exact same version the customer is running.
>>>
>>> So each diagnosis package would have to be dependent on the version (of
>>> JGroups or JDG) used. Regardless of whether custom rules are added by a
>>> support engineer, this has to be tested anyway before sending it off to
>>> the customer.
>>>
>>>> Would you accept a compile-time dependency on some annotations package
>>>> in JGroups that could 'tag' the injection points? The idea is that
>>>> anyone changing the source code would move the injection point
>>>> annotations as well.
>>> You mean something like this ?
>>>
>>> @InjectionPoint("down") public void down(Event e)
>>>
>>> or
>>>
>>> @InjectionPoint("num_msgs_sent")
>>> protected int num_msgs_sent;
>>>
>>> No, this won't work... how would you do that ?
>> Yes, this is the annotation syntax I had in mind, though I was thinking
>> about a more high-level abstraction of what's happening, rather than
>> just marking injection points.
>> Such as:
>>
>> @ReceivedData
>> public void receive(@From Address sender, byte[] data, int offset, @Size
>> int length) {...}
>>
>> @ProcessingMessage
>> protected void passMessageUp(@Message msg, ...) { ... }
>>
>> @ProcessingBatch
>> protected void deliverBatch(@Batch MessageBatch batch)  { ... }
>>
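>> The compile-time dependency would then just be a tiny annotations
>> package, e.g. (a sketch; RUNTIME retention assumed so that a load-time
>> weaver can also bind the values, and each annotation would live in its
>> own file):
>>
>> import java.lang.annotation.ElementType;
>> import java.lang.annotation.Retention;
>> import java.lang.annotation.RetentionPolicy;
>> import java.lang.annotation.Target;
>>
>> @Retention(RetentionPolicy.RUNTIME)
>> @Target(ElementType.METHOD)
>> public @interface ProcessingMessage {}
>>
>> @Retention(RetentionPolicy.RUNTIME)
>> @Target(ElementType.PARAMETER)
>> public @interface Message {}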
>>
>>> I don't really like this, on a general principle: AOP should *not* have
>>> to change the src code in order to work. And the fact of the matter is
>>> that you won't be able to identify *all* injection points beforehand...
>>> unless you want to sprinkle your code with annotations.
>> I have to agree that AOP should not have to change the source.
>> I had a special case in mind, one tied to inspecting JGroups, which
>> offers monitoring with zero overhead when the monitoring is not in
>> place. There, you'd just conceptually describe what JGroups does.
>>
>>>
>>>> I was already thinking about this in relation to Message Flow Tracer
>>>> [1] (not working right now, as JGroups has changed since I wrote
>>>> it).
>>> I took a quick look: nice !
>>>
>>> This is exactly what I meant. Should be some sort of rule base in a VCS,
>>> to which support engineers add rules when they have a case which
>>> requires it and they deem it to be generally useful.
>>>
>>> Re API changes: doesn't Byteman have functionality which can check a
>>> rule set against a code base (offline), to find out incompatibilities ?
>>> Something like a static rule checker ?
>> Right, this is possible - but it won't tell you that you've added
>> another place that should be instrumented (e.g. MFT has to determine
>> whether you're now processing a whole batch or a single message - when
>> you add functionality that grabs some stored messages and starts
>> processing them, as you do in UNICASTx, you won't spot that
>> automatically).
>>
>> Beyond that, there are many false positives. E.g. if you have a
>> never-terminating loop in Runnable.run(), there is no place to inject
>> the AT EXIT code, and Byteman complains.
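>>
>> For illustration, the kind of method I mean (schematic; the helper
>> name is made up):
>>
>> public class Worker implements Runnable {
>>     public void run() {
>>         while (true) {           // loops forever: no normal return,
>>             handleNextRequest(); // so AT EXIT has nowhere to bind
>>         }
>>     }
>>     private void handleNextRequest() { /* ... */ }
>> }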
>>
>> In the end, human intervention is always required.
>>
>> Radim
>>
>>>> Roman Macor is updating the rules right now, and I was hoping that we
>>>> could insert annotations into JGroups to be used instead of the rules
>>>> (I was already considering a different AOP framework, as Byteman does
>>>> not allow AT EXIT to catch exceptional exits [2]).
>>> Yes, I've also run into this before, not really nice.
>>>
>>>> Radim
>>>>
>>>> [1] https://github.com/rvansa/message-flow-tracer
>>>> [2] https://issues.jboss.org/browse/BYTEMAN-237
>>>>
>>>> On 11/07/2014 01:21 PM, Bela Ban wrote:
>>>>> Hi Radim,
>>>>>
>>>>> no I haven't. However, you can replace the thread pools used by JGroups
>>>>> and use custom pools.
>>>>>
>>>>> I like another idea better: inject Byteman code at runtime that keeps
>>>>> track of this, and *other useful stats as well*.
>>>>>
>>>>> It would be very useful to support if we could ship a package to a
>>>>> customer that is injected into their running system, grabs all the
>>>>> vital stats we need for a few minutes, then removes itself again, and
>>>>> those stats are then sent to us as a ZIP file.
>>>>> The good thing about Byteman is that it can remove itself without a
>>>>> trace; i.e. there's no overhead before / after running Byteman.
>>>>>
>>>>>
>>>>> On 07/11/14 09:31, Radim Vansa wrote:
>>>>>> Btw., have you ever considered checking whether a thread returns to
>>>>>> the pool reasonably often? Some of the other data grids use this,
>>>>>> though there's not much you can do to react beyond printing out stack
>>>>>> traces (but you can at least report to management that some node
>>>>>> seems to be broken).
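>>>>>>
>>>>>> Schematically, what I mean (a sketch; all names are made up, this is
>>>>>> not code from any of those data grids):
>>>>>>
>>>>>> import java.util.Map;
>>>>>> import java.util.concurrent.ConcurrentHashMap;
>>>>>>
>>>>>> public class PoolWatchdog {
>>>>>>     private final Map<Thread, Long> busySince = new ConcurrentHashMap<>();
>>>>>>     private final long deadlineMillis;
>>>>>>
>>>>>>     public PoolWatchdog(long deadlineMillis) {
>>>>>>         this.deadlineMillis = deadlineMillis;
>>>>>>     }
>>>>>>
>>>>>>     // Workers call these around each task they pick up.
>>>>>>     public void taskStarted() {
>>>>>>         busySince.put(Thread.currentThread(), System.currentTimeMillis());
>>>>>>     }
>>>>>>
>>>>>>     public void taskFinished() {
>>>>>>         busySince.remove(Thread.currentThread());
>>>>>>     }
>>>>>>
>>>>>>     // Called periodically: dump stacks of threads that have not
>>>>>>     // returned to the pool within the deadline.
>>>>>>     public void check() {
>>>>>>         long now = System.currentTimeMillis();
>>>>>>         for (Map.Entry<Thread, Long> e : busySince.entrySet()) {
>>>>>>             if (now - e.getValue() > deadlineMillis) {
>>>>>>                 Thread t = e.getKey();
>>>>>>                 System.err.println(t.getName() + " busy for "
>>>>>>                         + (now - e.getValue()) + " ms");
>>>>>>                 for (StackTraceElement frame : t.getStackTrace())
>>>>>>                     System.err.println("\tat " + frame);
>>>>>>             }
>>>>>>         }
>>>>>>     }
>>>>>> }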
>>>>>>
>>>>>> Radim
>>>>>>
>>>>>> On 11/07/2014 08:35 AM, Bela Ban wrote:
>>>>>>> That's exactly what I suggested. With no config, you get a shared
>>>>>>> global thread pool for all caches.
>>>>>>>
>>>>>>> Those caches which need a separate pool can do that via configuration
>>>>>>> (and of course also programmatically)
>>>>>>>
>>>>>>> On 06/11/14 20:31, Tristan Tarrant wrote:
>>>>>>>> My opinion is that we should aim for less configuration, i.e.
>>>>>>>> threadpools should mostly have sensible defaults and be shared by
>>>>>>>> default unless there are extremely good reasons for not doing so.
>>>>>>>>
>>>>>>>> Tristan
>>>>>>>>
>>>>>>>> On 06/11/14 19:40, Radim Vansa wrote:
>>>>>>>>> I second the opinion that any thread pools should be shared by
>>>>>>>>> default. There are users who have hundreds or thousands of caches,
>>>>>>>>> and having a separate thread pool for each of them could easily
>>>>>>>>> drain resources. And sharing resources is the purpose of thread
>>>>>>>>> pools, right?
>>>>>>>>>
>>>>>>>>> Radim
>>>>>>>>>
>>>>>>>>> On 11/06/2014 04:37 PM, Bela Ban wrote:
>>>>>>>>>> #1 I would by default have 1 thread pool shared by all caches
>>>>>>>>>> #2 This global thread pool should be configurable, perhaps in the
>>>>>>>>>> <global> section ?
>>>>>>>>>> #3 Each cache by default uses the global thread pool
>>>>>>>>>> #4 A cache can define its own thread pool, then it would use this one
>>>>>>>>>> and not the global thread pool
>>>>>>>>>>
>>>>>>>>>> I think this gives you a mixture of ease of use and flexibility
>>>>>>>>>> in configuring a pool per cache if needed
>>>>>>>>>>
>>>>>>>>>> On 06/11/14 16:23, Pedro Ruivo wrote:
>>>>>>>>>>> On 11/06/2014 03:01 PM, Bela Ban wrote:
>>>>>>>>>>>> On 06/11/14 15:36, Pedro Ruivo wrote:
>>>>>>>>>>>>> * added a single-threaded remote executor service. This will
>>>>>>>>>>>>> handle the FIFO deliver commands. Previously, they were
>>>>>>>>>>>>> handled by JGroups incoming threads; with the new executor
>>>>>>>>>>>>> service, each cache can process its own FIFO commands
>>>>>>>>>>>>> concurrently.
>>>>>>>>>>>> +1000. This allows multiple updates from the same sender, but
>>>>>>>>>>>> to different caches, to be executed in parallel, and will
>>>>>>>>>>>> speed things up.
>>>>>>>>>>>>
>>>>>>>>>>>> Do you intend to share a thread pool between the invocation
>>>>>>>>>>>> handlers of the various caches, or do they each have their own
>>>>>>>>>>>> thread pool ? Or is this configurable ?
>>>>>>>>>>>>
>>>>>>>>>>> That is a question that crossed my mind, and I have no idea
>>>>>>>>>>> what would be best. So, for now, I will leave the thread pool
>>>>>>>>>>> shared between the handlers.
>>>>>>>>>>>
>>>>>>>>>>> I never thought of making it configurable, but maybe that is
>>>>>>>>>>> the best option. And maybe it should be possible to have a
>>>>>>>>>>> different max-threads size per cache. For example:
>>>>>>>>>>>
>>>>>>>>>>> * all caches using this remote executor will share the same instance
>>>>>>>>>>> <remote-executor name="shared" shared="true" max-threads="4" .../>
>>>>>>>>>>>
>>>>>>>>>>> * all caches using this remote executor will create their own thread
>>>>>>>>>>> pool with max-threads equal to 1
>>>>>>>>>>> <remote-executor name="low-throughput-cache" shared="false"
>>>>>>>>>>> max-threads="1" .../>
>>>>>>>>>>>
>>>>>>>>>>> * all caches using this remote executor will create their own thread
>>>>>>>>>>> pool with max-threads equal to 1000
>>>>>>>>>>> <remote-executor name="high-throughput-cache" shared="false"
>>>>>>>>>>> max-threads="1000" .../>
>>>>>>>>>>>
>>>>>>>>>>> is this what you have in mind? comments?
>>>>>>>>>>>
>>>>>>>>>>> Cheers,
>>>>>>>>>>> Pedro
>>


-- 
Radim Vansa <rva...@redhat.com>
JBoss DataGrid QA

_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
