ns that consensus is only needed
for leader election and never for log replication, so no 2PC needed
here. Somewhat similar to sequencer-based total order (SEQUENCER)...
Do you guys see this as being potentially beneficial to Infinispan?
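For readers unfamiliar with the SEQUENCER idea, here is a toy sketch (all class and method names are invented for illustration, this is not JGroups code) of how a sequencer totally orders deliveries without running consensus per message: one coordinator hands out sequence numbers, and every receiver delivers strictly in sequence order, buffering anything that arrives early.

```java
import java.util.*;

public class SequencerDemo {
    // The coordinator: hands out monotonically increasing sequence numbers.
    static class Sequencer {
        private long next = 1;
        synchronized long assign() { return next++; }
    }

    // A receiver: messages may arrive in any order, but are delivered
    // only once all lower sequence numbers have been delivered.
    static class Receiver {
        private final SortedMap<Long, String> pending = new TreeMap<>();
        private long expected = 1;
        final List<String> delivered = new ArrayList<>();

        void receive(long seq, String msg) {
            pending.put(seq, msg);
            while (pending.containsKey(expected)) {
                delivered.add(pending.remove(expected));
                expected++;
            }
        }
    }

    public static void main(String[] args) {
        Sequencer seq = new Sequencer();
        long a = seq.assign(), b = seq.assign(), c = seq.assign();

        Receiver r = new Receiver();
        r.receive(c, "C");   // arrives early, buffered
        r.receive(a, "A");   // delivered immediately
        r.receive(b, "B");   // delivered, and unblocks C

        System.out.println(r.delivered); // [A, B, C]
    }
}
```

Every receiver ends up with the same delivery order because the order is fixed by the single sequencer, not negotiated per message.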
On 16/05/14 17:35, Emmanuel Bernard wrote:
> On Fri 2014
uys should check out.
>
> http://thesecretlivesofdata.com/raft/
>
> Some other good resources.
>
> http://thinkdistributed.io/blog/2013/07/12/consensus.html
> https://ramcloud.stanford.edu/wiki/download/attachments/11370504/raft.pdf
> https://github.com/mgodave/barge
--
B
>
>
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
>
>
>
> --
> Romain PELISSE,
No, my good friend Ales sent me that link...
On 05/03/14 16:31, Vladimir Blagojevic wrote:
> +1 Are you in Slovenia?
> On 3/5/2014, 10:26 AM, Bela Ban wrote:
>> https://www.facebook.com/HumanFishBrewery
>
https://www.facebook.com/HumanFishBrewery
--
Bela Ban, JGroups lead (http://www.jgroups.org)
.2 will be used for UDP and TCP only, I'm not talking about the shmem
transport.
--
Bela Ban, JGroups lead (http://www.jgroups.org)
ISPN-JGRPs-OSI-transport-choices-and-ambitions-tp4028925.html
> Sent from the Infinispan Developer List mailing list archive at Nabble.com.
>
--
Bela Ban, JGroups lead (http://www.jgroups.org)
-
>>>> Galder Zamarreño
>>>> gal...@redhat.com
>>>> twitter.com/galderz
>>>>
>>>> Project Lead, Escalante
>>>> http://escalante.io
>>>>
>>>> Engineer, Infinispan
>>>> http://infinispan.
>
--
Bela Ban, JGroups lead (http://www.jgroups.org)
he presentation.
>
> [1] https://www.youtube.com/watch?v=04qNcovQKLA
> [2]
> https://drive.google.com/file/d/0B2Kv0W2Q52iOQkQ5a09Fam5PMjA/edit?usp=sharing
>
> Ryan
--
Bela Ban, JGroups lead (http://www.jgroups.org)
thanks to everybody involved! Please visit our downloads
> <http://infinispan.org/download/>section to find the latest release.
> Also if you have any questions please check our forums
> <http://infinispan.org/community/>, our mailing lists
> <https://lists.jboss.org/mailman/listinfo/infinispan-dev>or ping us
4  Object      SingleKeyNonTxInvocationContext.key
> 28  4  CacheEntry  SingleKeyNonTxInvocationContext.cacheEntry
> 32      (object boundary, size estimate)
>
> which recovers 20% ..
>
> Looking fo
On 11/11/13 9:27 AM, Radim Vansa wrote:
> On 11/08/2013 05:47 PM, Bela Ban wrote:
>>
>> On 11/8/13 4:15 PM, Radim Vansa wrote:
>>> First of all, I think that naming "old", "new" where 3.2.7 new == 3.4.0
>>> old sucks. Let's use some
re's an 3.2.7 old as well.
> Bela, jgroups protocols are complicated enough as they are :-)
On purpose; it's called job security for me !! :-)
--
Bela Ban, JGroups lead (http://www.jgroups.org)
Hi Mircea,
On 11/8/13 8:05 PM, Mircea Markus wrote:
> On Nov 8, 2013, at 10:47 AM, Bela Ban wrote:
>
>> I think I'll ignore the DONT_BUNDLE flag on the sender's side if we have
>> the right bundler in place. Take a look at [1] and let me know what you
" bundler
(which I want to kill at some point) would not perform well when
bundling all messages.
> Radim
>
> On 11/08/2013 11:47 AM, Bela Ban wrote:
>> I think I'll ignore the DONT_BUNDLE flag on the sender's side if we have
>> the right bundler in place. Tak
I think I'll ignore the DONT_BUNDLE flag on the sender's side if we have
the right bundler in place. Take a look at [1] and let me know what you
think...
[1] https://issues.jboss.org/browse/JGRP-1737
--
Bela Ban, JGroups lead (http://www.j
ing.
>
> I'd expect the talk on Tuesday to be a significant improvement, so
> holding out for that is probably preferable.
>
> Paul.
>
> On 6 Nov 2013, at 07:03, Bela Ban <b...@redhat.com> wrote:
>
>>
>> There's a talk on this already on
ck on. Is it possible for someone from
> the team to attend remotely?
>
> The talk is on Tuesday the 12th November at 18:15 GMT and will be
> broadcast over Google Hangouts on air.
>
> More details here: http://bit.ly/19fc1a3
>
un...@lists.jboss.org] On Behalf Of Mircea Markus
> Sent: Wednesday, October 30, 2013 3:05 PM
> To: infinispan -Dev List
> Subject: [infinispan-dev] design for cluster events (wiki page)
>
> Hi,
>
> I've added a wiki page[1] capturing our discussions around cluster events.
> Any feedback welcomed!
>
> [1]
> https://github.com/infinispan/infinispan/wiki/Handling-cluster-partitions
>
> Cheers,
>
--
Bela Ban, JGroups lead (http://www.jgroups.org)
On 10/31/13 11:20 PM, Sanne Grinovero wrote:
> On 31 October 2013 20:07, Mircea Markus wrote:
>>
>> On Oct 31, 2013, at 3:45 PM, Dennis Reed wrote:
>>
>>> On 10/31/2013 02:18 AM, Bela Ban wrote:
>>>>
>>>>> Also if we did have rea
On 10/31/13 4:45 PM, Dennis Reed wrote:
> On 10/31/2013 02:18 AM, Bela Ban wrote:
>>
>>> Also if we did have read only, what criteria would cause those nodes
>>> to be writeable again?
>> Once you become the primary partition, e.g. when a view is received
On 10/31/13 1:23 PM, Mircea Markus wrote:
>
>> On 31 Oct 2013, at 07:18, Bela Ban wrote:
>>
>>
>>
>>> On 10/30/13 8:28 PM, William Burns wrote: Since it seems I can't
>>> comment on the wiki itself, I am just replying here.
>>>
On 10/31/13 1:05 PM, Mircea Markus wrote:
>
> On Oct 31, 2013, at 7:10 AM, Bela Ban wrote:
>
>> Just to clarify, can you comment on whether the statements below are true ?
>>
>> #1 When mode=repl is used, the event itself is never broadcast; it is
>> always loc
y Partition approach, then it can become
completely inaccessible (read-only). In this case, I envisage that a
sysadmin will be notified, who can then start additional nodes for the
system to acquire primary partition and become accessible again.
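The read-only rule described here can be illustrated with a minimal sketch. The class and method names below are mine, not Infinispan or JGroups API: a guard that is driven by view changes and allows writes only while the node sees a strict majority of the expected cluster size.

```java
public class PrimaryPartitionGuard {
    private final int expectedClusterSize;
    private volatile boolean readOnly = true;

    public PrimaryPartitionGuard(int expectedClusterSize) {
        this.expectedClusterSize = expectedClusterSize;
    }

    // Invoked whenever a new view is installed, with the view's size.
    public void viewAccepted(int viewSize) {
        // Writable only in the primary partition: a strict majority
        // of the expected cluster size.
        readOnly = viewSize <= expectedClusterSize / 2;
    }

    public boolean isWriteAllowed() { return !readOnly; }

    public static void main(String[] args) {
        PrimaryPartitionGuard guard = new PrimaryPartitionGuard(5);
        guard.viewAccepted(2);   // minority partition -> read-only
        System.out.println(guard.isWriteAllowed()); // false
        guard.viewAccepted(3);   // a node was started; majority regained
        System.out.println(guard.isWriteAllowed()); // true
    }
}
```

The second view change models the sysadmin scenario above: starting additional nodes pushes the view size over the majority threshold and the partition becomes writable again.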
--
Bela Ban, JGroups lead (http://www.
ons around cluster events.
>>> Any feedback welcomed!
>>>
>>> [1]
>>> https://github.com/infinispan/infinispan/wiki/Handling-cluster-partitions
>>>
>>> Cheers,
>>> --
>>> Mircea Markus
>>> Infinispan lead (www.infin
ctor out of
convenience, but this names a new thread twice:
https://issues.jboss.org/browse/JGRP-1719
> On Thu, Oct 17, 2013 at 5:48 PM, Dennis Reed <mailto:der...@redhat.com>> wrote:
>
> On 10/17/2013 05:26 AM, Bela Ban wrote:
> > The other thing to look at is
On 10/17/13 4:48 PM, Dennis Reed wrote:
> On 10/17/2013 05:26 AM, Bela Ban wrote:
>> The other thing to look at is the apparent cost of Thread.setName(): if
>> we cannot avoid many thread creations, we could experiment with *not*
>> naming threads, although this is IMO n
gradation culprit, especially since I haven't been working with 6.0.
>
> Erik
>
> -Original Message-
> From: infinispan-dev-boun...@lists.jboss.org
> [mailto:infinispan-dev-boun...@lists.jboss.org] On Behalf Of Bela Ban
> Sent: Wednesday, October 16, 2013 9:50 AM
> To
6.0.0.Final was moved to 23
> Oct (Dan)
> - we have some 20% performance regressions we need to look at before going
> final
> - I've updated JIRA:
> - added tasks for creating documentation and quickstarts
> - some JIRAs were moved
--
Bela Ban, JGroups lead (http://www.jgroups.org)
know whether QA
> is alone experiencing that or if there are more of us.
>
> Radim
>
> [1] https://issues.jboss.org/browse/JGRP-1675
>
--
Bela Ban, JGroups lead (http://www.jgroups.org)
http://belaban.blogspot.ch/2013/10/jgroups-340final-released.html
--
Bela Ban, JGroups lead (http://www.jgroups.org)
Does this delay the final release accordingly ?
On 9/19/13 12:14 AM, Mircea Markus wrote:
> ..because of some errors in the server build:
> https://gist.github.com/mmarkus/6616484
>
> Cheers,
>
--
Bela Ban, JGroups lead (http://
000 nodes, then we'll use
less bandwidth to install views. On an almost saturated network, this
can make a difference.
> On Sep 12, 2013, at 4:15 PM, Bela Ban wrote:
>
>> FYI,
>>
>>
>> if you're running large clusters, you might be interested to know that
--
Bela Ban, JGroups lead (http://www.jgroups.org)
nly send delta views. I believe JGRP-1317 and JGRP-1354 will have a
huge impact on systems with a large number of nodes.
[1] https://issues.jboss.org/browse/JGRP-1317
[2] https://issues.jboss.org/browse/JGRP-1354
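The delta-view idea behind those two issues can be sketched in a few lines (my own toy code, not the JGroups implementation): instead of broadcasting the full member list on every change, ship only who joined and who left.

```java
import java.util.*;

public class DeltaViewDemo {
    // Returns {joined, left} between two membership lists.
    static List<Set<String>> delta(List<String> oldView, List<String> newView) {
        Set<String> joined = new TreeSet<>(newView);
        joined.removeAll(oldView);               // in new view only
        Set<String> left = new TreeSet<>(oldView);
        left.removeAll(newView);                 // in old view only
        return Arrays.asList(joined, left);
    }

    public static void main(String[] args) {
        List<String> oldView = Arrays.asList("A", "B", "C", "D");
        List<String> newView = Arrays.asList("A", "C", "D", "E");
        // A delta view ships only {joined, left}, so its size is bounded
        // by the churn, not by the cluster size N.
        System.out.println(delta(oldView, newView)); // [[E], [B]]
    }
}
```

On a 1000-node cluster, a single join then ships one address instead of a 1001-member list, which is exactly where the bandwidth saving on a nearly saturated network comes from.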
--
Bela Ban, JGroups lead (http://www.jgroups.org)
FYI:
http://belaban.blogspot.ch/2013/08/how-to-hijack-jgroups-channel-inside.html
--
Bela Ban, JGroups lead (http://www.jgroups.org)
FYI.
Original Message
Subject: [jgroups-dev] Request for comments on FORK: grabbing a private
channel for communication from an existing channel
Date: Fri, 09 Aug 2013 17:37:46 +0200
From: Bela Ban
To: jg-dev
I wanted to solicit feedback on FORK [1].
Using FORK, one can
re there's a problem somewhere, as the system always
>> deadlocks in that protocol on my machine. I'll try to narrow it down.
>>
>> On 18 Jul 2013, at 08:53, Bela Ban wrote:
>>
>>> I could not reproduce this, I ran the test *many times* with both
node number 1 to 4 etc.
>>
>> What I do is open 3 terminals/tabs/screens, whichever you prefer, each run:
>> mvn -e exec:exec -Dnode=1
>> mvn -e exec:exec -Dnode=2
>> mvn -e exec:exec -Dnode=3
>> ...
>>
>> It'll pr
Mon given that the above would make it in.
>>
>> [1] https://github.com/infinispan/protostream
>>
>> Cheers,
>
https://issues.jboss.org/browse/JGRP/fixforversion/12322127
[2] https://issues.jboss.org/browse/JGRP/fixforversion/12321921
[3] https://issues.jboss.org/browse/JGRP-1648
--
Bela Ban, JGroups lead (http://www.jgroups.org)
DLE removed.
> Interesting enough, using the DONT_BUNDLE + old bundler made performance not
> look good.
> Radim can provide the exact numbers.
That's completely counter to what I've measured; and it should be
exactly the other way round !
> On 25 Jun 2013, at 16:37, Bela Ban
1 July 2013 15:33, Bela Ban wrote:
>> This is re [1].
>>
>> I want to replace shorts as site-ids with strings. This would make EAP
>> and JDG configuration simpler, as both configure the mapping from site
>> string (e.g. "sfo") to ID (e.g. 1) on the fly. If we
ould be in 3.4 (not 3.3.x), and
as it will be used by Infinispan 6.0, I don't think this should be an
issue...
Thoughts ?
[1] https://issues.jboss.org/browse/JGRP-1640
--
Bela Ban, JGroups lead (http://www.jgroups.org)
> old bundler: 150k reads/s, 250 writes/s
>
> ISPN-3221 fix (Pedro's branch)
> new bundler: 1.5M reads/s, 6050 writes/s
> old bundler: 1.3M reads/s, 3100 writes/s
>
> ISPN-3221 cherry-picked onto 73da108cdcf9db4f3edbcd6dbda6938d6e45d148
> new bundler: 1.5M read
-
> Radim Vansa
> Quality Assurance Engineer
> JBoss Datagrid
> tel. +420532294559 ext. 62559
>
> Red Hat Czech, s.r.o.
> Brno, Purkyňova 99/71, PSČ 612 45
> Czech Republic
>
>
We're currently holding the 5.3.0.CR2 release for this. I'm thinking to:
> - support old bundling as well
> - based on the when the jgroups perf problem is fixed we might release CR3 or
> include the fix in 5.3.1.Final
OK, sounds good
--
Bela Ban
Lead JGroups / Clustering Team
JBo
On 06/12/2013 09:30 AM, Mircea Markus wrote:
>
> On 12 Jun 2013, at 14:16, Bela Ban wrote:
>
>>
>>
>> On 06/12/2013 04:54 AM, Radim Vansa wrote:
>>> Hi,
>>>
>>> I was going through the commits (running tests on each of them) to
>>>
issue ?
>
> Radim
>
> Note: you may have seen my conversation with Pedro Ruivo on IRC about
> the bundler several days ago, in that time our configuration had old
> bundler. This was fixed, but as I have not built Infinispan properly
> (something got cached),
nks, Bela and sorry for my confusing question!
>
> Would you say that the bundler_type="new" is more performant than the
> bundler_type="old" in 3.2.7?
>
> Thanks,
> Alan
>
> - Original Message -
>> From: "Bela Ban"
>>
s getting 2.7.x, I meant 3.2.x. (The version
> included with Infinispan 5.2)
>
> Sorry,
> Alan
>
> ----- Original Message -
>> From: "Bela Ban"
>> To: infinispan-dev@lists.jboss.org
>> Sent: Wednesday, June 5, 2013 10:11:34 AM
>> Su
2.7.x config file has UDP.bundler_type="new"?
> Will it just be ignored?
bundler_type doesn't exist in 2.7, so this will throw an exception and
the stack won't be started
--
Bela Ban, JGroups lead (http://www.jgroups.org)
e this. This is not supported, so bundling of OOB messages
might break things.
Or it might as well work, but I for one won't make any guarantees... :-)
--
Bela Ban, JGroups lead (http://www.jgroups.org)
with an older Infinispan version may or may not work. I don't
recommend it
--
Bela Ban, JGroups lead (http://www.jgroups.org)
> Recall that this warning is valid for Infinispan 5.3 (and superior in
>>>> the future).
>>>>
>>>> Thank You.
>>>>
>>>> Regards,
>>>> Pedro Ruivo
>>>>
; reuse the cluster topology definition and failure detection, making it
> less awkward to use as in such
> a case you really don't want the topology from the AS to be out of sync with
> the
> topology used by some application cache.
>
> Sanne
--
Bela Ban, JGroups lead (http
--
Bela Ban, JGroups lead (http://www.jgroups.org)
I'll blog about this shortly.
[1] http://docs.oracle.com/javase/tutorial/sdp/sockets/enable.html
--
Bela Ban, JGroups lead (http://www.jgroups.org)
them in sequence (gathering writes). One of the
> buffers will be the buffer passed by Infinispan to JGroups; JGroups then
> doesn't copy it into one single larger buffer, but passes it directly to
> the socket, avoiding a copy.
>
> - https://issues.jboss.org/browse/JGRP-815
ttps://issues.jboss.org/browse/JGRP-809
- https://issues.jboss.org/browse/JGRP-816
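The gathering-write technique described above can be shown with a standalone sketch. This is not the actual JGroups transport code; it uses a FileChannel for simplicity (a SocketChannel gathers the same way via the same `write(ByteBuffer[])` method): the channel walks the buffer array in one call, so the application's payload buffer is never copied into a combined buffer.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class GatheringWriteDemo {
    public static void main(String[] args) throws IOException {
        ByteBuffer header  = ByteBuffer.wrap(new byte[]{0x01, 0x02});
        ByteBuffer payload = ByteBuffer.wrap("hello".getBytes());

        Path tmp = Files.createTempFile("gather", ".bin");
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            // One gathering write: header then payload are written in
            // sequence, without merging them into an intermediate buffer.
            long written = ch.write(new ByteBuffer[]{header, payload});
            System.out.println(written); // 7
        } finally {
            Files.delete(tmp);
        }
    }
}
```

In the scenario discussed in the thread, the payload buffer would be the one Infinispan hands to JGroups, passed straight through to the socket.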
--
Bela Ban, JGroups lead (http://www.jgroups.org)
>> Yup, it needs to happen sometime. But the question is, when?
>>
>> Is there anyone out there who desperately needs Infinispan to work on Java 6?
>>
>> Cheers
>> Manik
--
Bela Ban, JGroups lead (http://www.jgroups.org)
sers would then need
> Java7 only if they want to use the Infinispan extensions.
>
> Sanne
>
> On 23 May 2013 12:06, Bela Ban wrote:
>>
>>
>> On 5/23/13 12:38 PM, Manik Surtani wrote:
>>> Yup, it needs to happen sometime. But the question is, when?
>>
2 stuff won't be in JGroups 4, only in 5.
But if you guys need to move to JDK 7 in Infinispan 6 already, then
that's fine with me, too.
I guess the AS has some requirements, too, so if they decide to move to
JDK 7, then we'll have to make that move too
--
Bela Ban, JG
On 5/22/13 1:17 PM, Mircea Markus wrote:
>
> On 21 May 2013, at 11:14, Bela Ban wrote:
>
>> [Mircea]
>>
>> Might be a problem in xsite replication when the keys that are updated
>> are not present. This happens all the time as xsite state transfer is
>>
.
I suggest using a straight put() for updates, or a new internal
replaceIfPresentOrPutIfNotPresent()...
On 5/21/13 11:43 AM, Bela Ban wrote:
>
> Can someone investigate why CacheImpl.replaceInternal() throws an NPE ?
> I can reproduce this every time. Using the latest JDG.
--
Bela Ban
Can someone investigate why CacheImpl.replaceInternal() throws an NPE ?
I can reproduce this every time. Using the latest JDG.
See the attached stack trace for details.
--
Bela Ban, JGroups lead (http://www.jgroups.org)
11:39:36,342 WARN
picks
a random node (possibly excluding itself) in the cluster and forwards
the *not-yet deserialized* message to that node which then applies the
changes.
Do you have any numbers on how the changes you made affect performance?
> Any other thoughts? Feedback?
>
> [1] https://gi
; Brno, Purkyňova 99/71, PSČ 612 45
> | > Czech Republic
> | >
> | >
> |
> |
> | --
> | Galder Zamarreño
> | gal...@redhat.com
> | twitter.com/galderz
> |
> | Project Lead, Escalante
> | http://escalante.io
> |
> | Engineer, Infinispan
> | http://infinispan.org
> |
> |
>
--
Bela Ban, JGroups lead (http://www.jgroups.org)
Yes, for regular messages we should always enable a queue. Don't
know/remember why we changed that for TCP...
On 4/25/13 3:04 PM, Mircea Markus wrote:
> Thanks Bela.
> On 23 Apr 2013, at 16:27, Bela Ban wrote:
>
>> Erik and I had a call and concluded that
>> - the regul
right
On 4/23/13 7:26 PM, Manik Surtani wrote:
>
> On 23 Apr 2013, at 16:37, Bela Ban wrote:
>
>>
>>
>> On 4/22/13 6:46 PM, Manik Surtani wrote:
>>>
>>> On 22 Apr 2013, at 16:46, Mircea Markus wrote:
>>>
>>>> would the
solve it ? Always prefer the data from the
primary partition ?
--
Bela Ban, JGroups lead (http://www.jgroups.org)
't happen. As a matter of fact, I've seen
this in practice, and MERGE{2,3} contain code that deals with asymmetric
partitions.
> If the effect of a nework failure is a completely isolated group, can
> we assume Hot Rod clients can't reach them either?
Partitions may inclu
choose what it is better for it. For example, if the
> application does not care about consistency, it can allow all partitions
> to read and write. On the other hand, if the application is more restrictive,
> it can allow only one partition to read and write and the others
> partitions reject
etermine if a partition
occurred, or if some member(s) crashed, so in the above case a hard and
fast rule that the primary partition must have 4 or more members makes
sense for a primary partition approach.
--
Bela Ban, JGroups lead (http://www.jgroups.org)
_
ndling for state transfer if a MergeView is detected?
> | >
> | > - M
> | >
> | > [1] https://community.jboss.org/wiki/DesignDealingWithNetworkPartitions
> | >
> | > On 6 Apr 2013, at 04:26, Bela Ban wrote:
> | >
> | >>
> | >> On 4/5/13 3:
ng for state transfer if a MergeView is detected?
>>
>> - M
>>
>> [1] https://community.jboss.org/wiki/DesignDealingWithNetworkPartitions
>>
>> On 6 Apr 2013, at 04:26, Bela Ban wrote:
>>
>>>
>>> On 4/5/13 3:53 PM, Manik Surtani wrote:
>>>>
obj instead.
[1] https://issues.jboss.org/browse/JGRP-1620
--
Bela Ban, JGroups lead (http://www.jgroups.org)
On 4/13/13 1:42 PM, Sanne Grinovero wrote:
> On 13 April 2013 11:20, Bela Ban wrote:
>>
>>
>> On 4/13/13 2:02 AM, Sanne Grinovero wrote:
>>
>>> @All, the performance problem seemed to be caused by a problem in
>>> JGroups, which I've logged h
e same host.
> Also I'm wondering how hard it would be to
> have a log parser which converts my 10GB of text log from today into a
> graphical sequence diagram.
Yes, something like wireshark "follow TCP" feature would be very helpful !
--
Bela Ban, JGroups lead (http:/
ons won't hurt either,
> but first I'll set the blackhole to confirm that indexing is the cause.
>
> On Apr 12, 2013 11:12 AM, "Bela Ban" <b...@redhat.com> wrote:
>
> OK, then it isn't JGroups related. Probably some indexing work don
> 11:20:54,730 INFO [org.jboss.web] (ServerService Thread Pool -- 59)
> JBAS018224: Unregister web context: /capedwarf-tests
>
>
> On Apr 12, 2013, at 9:43 AM, Bela Ban wrote:
>
>> You need to set enable_bundling to *false*, not true !
>>
>> On 4/11/13 9:13 PM
> 841ms
>
> ---
>
> I added
>
> true
>
> to AS jgroups transport config, but no improvement.
>
> Any (other) idea?
>
> -Ales
>
>
eleteAndQueryInA(org.jboss.test.capedwarf.cluster.test.QueryTest):
>>> Should not be here: null
>>>
>>> This is the "null" that is the cause of top NPE.
>>>
>>> -Ales
>>>
>>
> JProfiler, but I wanted to ask you about them. In this job configuration, 4
> nodes are used in the cluster, (edg-perf01 to edg-perf04) but JProfiler is
> only attached to the JVM on edg-perf01.
>
> Thanks,
> Alan
>
--
Bela Ban, JGroups lead (http://www.jgroups.org)
_
rtani ma...@jboss.org twitter.com/maniksurtani
>>
>> Platform Architect, JBoss Data Grid http://red.ht/data-grid
>>
>>
etter whether we have a partition, or not.
[1] http://www.jgroups.org/manual-3.x/html/user-advanced.html, section 5.6.2
--
Bela Ban, JGroups lead (http://www.jgroups.org)
after calling
> cache.put(key, value), so to me using async marshalling is just
> asking for trouble.
+1
--
Bela Ban, JGroups lead (http://www.jgroups.org)
ular reason for that?
>
> In the new thread pool needed by ISPN-2808, I cannot have the messages (i.e.
> the runnables) discarded, because it can cause some inconsistent state and
> even block all the cluster.
>
> I have set in my branch a CallerRunPolicy. If you see an
On 3/6/13 1:33 PM, Dan Berindei wrote:
> On Tue, Mar 5, 2013 at 6:04 PM, Bela Ban <b...@redhat.com> wrote:
>
>
>
> On 3/5/13 3:30 PM, Erik Salter wrote:
> > Hi guys,
> >
> > Keep in mind that some of your customers may have b
Erik
>
> -Original Message-
> From: infinispan-dev-boun...@lists.jboss.org
> [mailto:infinispan-dev-boun...@lists.jboss.org] On Behalf Of Bela Ban
> Sent: Tuesday, March 05, 2013 1:50 AM
> To: infinispan-dev@lists.jboss.org
> Subject: Re: [infinispan-dev] [infinispan-internal]
On 3/4/13 6:35 PM, Dan Berindei wrote:
>
> On Mon, Mar 4, 2013 at 10:28 AM, Bela Ban <b...@redhat.com> wrote:
>
> Another node: in general, would it make sense to use shorter names ?
> E.g. instead of
>
> ** New view: [jdg-perf-01-60164|9] [
nity.jboss.org/wiki/ClusteringMeetingLondonFeb2013
>
> Sanne
>
> On 24 February 2013 10:26, Bela Ban wrote:
>>
>> Mircea, Dan, Pedro, Sanne and I had a meeting in London this week on how
>> to use the new features of JGroups 3.3 in Infinispan 5.3, I've copied
>> the minute
5330,
> | jdg-perf-01-24793, jdg-perf-01-35602, jdg-perf-02-7751,
> | jdg-perf-02-37056, jdg-perf-02-50381, jdg-perf-02-53449,
> | jdg-perf-02-64954, jdg-perf-02-34066, jdg-perf-02-61515,
> | jdg-perf-02-65045 ...]
--
Bela Ban, JGroups lead (http://www.jgroups.org)
educe, then using
Infinispan is fine.
Comments inline ...
On 3/4/13 5:09 AM, matt hoffman wrote:
> Bela,
>
> Thanks a lot for the response. I really appreciate your input.
>
> Some comments inline:
>
> On Mar 3, 2013 4:16 AM, "Bela Ban" <b...@redhat.com> wrote:
yService to target the task to the
> given Address, it might get picked up by another node that shares that
> same hash. But I can add in a direct-to-single-Address capability in if
> that seems worthwhile. Alternately, I can just use entirely different
> interfaces (DistributedFJExecutorService, DistributedFJTask?).
>
> Thoughts? Concerns? Glaring issues?
>
>
>
--
Bela Ban, JGroups lead (http://www.jgroups.org)
ords, do you expect the 3.3 line to be included
> in EAP 6.1 ?
> Otherwise I'm stuck on older version.
>
> Sanne
>
> On 28 February 2013 11:58, Bela Ban wrote:
>> FYI,
>>
>> added to Nexus.
>>
>> This completes the implementation of message batc