When I use indexFactory="org.fusesource.leveldbjni.JniDBFactory" I get the
following error:
o.a.a.l.LevelDBClient [Log.scala:112] Could not load factory:
org.fusesource.leveldbjni.JniDBFactory due to:
java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no
leveldbjni64-1.8 in java.
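When JniDBFactory fails like this, the native leveldbjni library usually cannot be found on java.library.path or extracted from the jar. A minimal stdlib-only sketch (the class name is mine) that prints the relevant properties and forces the factory class to load can help narrow it down:

```java
// Diagnostic sketch: print the properties the native loader depends on, then
// touch JniDBFactory so the UnsatisfiedLinkError (if any) surfaces immediately.
public class LevelDbJniCheck {
    public static void main(String[] args) {
        System.out.println("java.library.path = " + System.getProperty("java.library.path"));
        System.out.println("os.arch = " + System.getProperty("os.arch"));
        System.out.println("os.name = " + System.getProperty("os.name"));
        try {
            // Loading the class triggers the native library load in its static init.
            Class<?> factory = Class.forName("org.fusesource.leveldbjni.JniDBFactory");
            System.out.println("Loaded " + factory.getName() + " OK");
        } catch (Throwable t) {
            // UnsatisfiedLinkError is an Error, not an Exception, so catch Throwable.
            System.out.println("Failed: " + t);
        }
    }
}
```

Run it with the same classpath and JVM arguments as the broker; a mismatch between os.arch (32- vs 64-bit JVM) and the bundled native library is a common cause.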
The following link states:
http://activemq.apache.org/masterslave.html
For those willing to try out new tech, the Replicated LevelDB Store gives
speeds similar to a SAN solution without the hassle of having to setup a
highly available shared file system.
During our testing with Replicated LevelDB ...
This is similar to the error I saw after failovers. You can try the latest
snapshots; they seem much better, except for one issue. See
http://activemq.2283324.n4.nabble.com/Replicated-LevelDB-Store-getting-EOF-exception-td4676541.html
--
View this message in context:
http://activemq.2283324.n4.nabble.com/
Which version are you using? Are you getting the error after failover? I saw
some exceptions during testing with 5.9; most of them are fixed in the 5.10
snapshot. Any other exception or stack trace prior to this?
I have 3 brokers set up in a Replicated LevelDB master/slave configuration.
I am running a producer and a slow consumer, and doing failover by stopping
the master broker.
After a few failovers, LevelDB throws an EOF exception while reading a record
from the store. I am using the latest source code from the ActiveMQ 5.10
branch.
LevelDB seems to be corrupted after 10 failovers; the broker was not able to
load records from LevelDB, similar to the issue described in my earlier post,
but now I see these additional error messages:
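For context, the failover test client can be sketched as below. This is my own minimal version, not the actual test code; broker host names and the queue name are placeholders. The failover: transport makes the producer block and reconnect to whichever broker takes over as master:

```java
// Sketch of a producer driving the 3-broker failover test described above.
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverProducer {

    // Builds a failover: URI from the individual broker URLs.
    static String failoverUri(String... brokers) {
        return "failover:(" + String.join(",", brokers) + ")";
    }

    public static void main(String[] args) throws Exception {
        String uri = failoverUri("tcp://broker1:61616",
                                 "tcp://broker2:61616",
                                 "tcp://broker3:61616");
        ConnectionFactory cf = new ActiveMQConnectionFactory(uri);
        Connection conn = cf.createConnection();
        conn.start();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer =
                session.createProducer(session.createQueue("TEST.FAILOVER"));
            for (int i = 0; i < 100_000; i++) {
                // send() blocks while the failover transport reconnects to the new master.
                producer.send(session.createTextMessage("msg-" + i));
            }
        } finally {
            conn.close();
        }
    }
}
```

Stopping the current master mid-run exercises the reconnect path; any EOF or corruption errors then show up on the broker that takes over.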
Tried testing with the Dec. 5 snapshot and got the following errors after
about 10 failovers:
2013-12-06 12:06:39,673 | WARN | Could not load message seq: 45760 from
DataLocator(630f8b1, 2262) | org.apache.activemq.leveldb.LevelDBClient |
ActiveMQ NIO Worker 2
2013-12-06 12:06:39,673 | WARN | No rea
Ran the failover test again, and after about 25 or so failovers I got the
errors below on the master. I did not see any other errors. I will try to run
with debug turned on next week to see if it helps with debugging.
Note: even after this error I was able to do a few more failovers before
things stopped working.
Today after the exception occurred, I updated the ActiveMQ start script to
redirect output to a file instead of /dev/null, and at startup I see the
following error. Does this help?
INFO | Attaching... Downloaded 295.12/295.12 kb and 4/4 files
INFO | Attached
java.io.IOException: invalid record position
I believe there were no other errors prior to that exception; I will try to
reproduce it and update you.
I downloaded the source yesterday from git (parent: e57aeb3); the latest
patch there is:
See AMQ-4886. Updated tearDown so it can't hang, reduced timeouts, updated
to JUnit4
The issue seems to happen when the broker being stopped generates the
following exception.
Is the store getting corrupted because of the way the stop is handled?
2013-11-15 09:26:52,881 | WARN | Transport Connection to:
tcp://10.44.173.146:47163 failed: java.io.IOException: Connection reset by
peer | o
I am using the latest changes from trunk and I see the following error after
failover (is the store still getting corrupted?):
2013-11-15 06:44:53,606 | INFO | jolokia-agent: Using access restrictor
classpath:/jolokia-access.xml | /hawtio | main
2013-11-15 06:44:53,743 | INFO | ActiveMQ WebConsole available at
ht
Now I see the following:
Is LevelDB putting messages (e.g. WireFormatInfo, FlushCommand?) in the store
that should not go in the store?
evelDBStore | hawtdispatch-DEFAULT-3
2013-11-12 11:40:50,486 | INFO | Stopping BrokerService[largeamq] due to
exception, java.io.IOException: org.apache.activemq.command.WireForm
What would cause this unknown data type error, -58? It looks like Replicated
LevelDB is reading a message from disk (using ActiveMQ 5.9).
2013-11-11 16:42:42,625 | INFO | Slave has now caught up:
e2a40b80-1fee-4d13-a94d-521c05bb0130 |
org.apache.activemq.leveldb.replicated.MasterLevelDBStore |
hawtdispatch
We received this error after the failover. Any idea why LevelDBClient is
getting these errors? It causes the broker to stop.
.activemq.leveldb.replicated.MasterLevelDBStore | hawtdispatch-DEFAULT-2
2013-11-05 12:09:57,386 | INFO | Stopping BrokerService[largeamq] due to
exception, java.io.IOException
I didn't understand why LevelDB had this impact either; the same
configuration works when we use KahaDB.
For now we changed the non-persistent queues to use VM cursors, and it seems
much better.
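For reference, the VM-cursor change can also be expressed programmatically on an embedded broker; this is a hedged sketch (class and method names are real ActiveMQ broker API, but the embedded setup and the ">" wildcard entry are my assumptions; the usual route is a <policyEntry> in activemq.xml):

```java
// Sketch: route all queues' pending messages through a VM (in-memory) cursor
// instead of the store-backed cursor.
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;
import org.apache.activemq.broker.region.policy.VMPendingQueueMessageStoragePolicy;

public class VmCursorConfig {
    public static BrokerService configure() throws Exception {
        PolicyEntry entry = new PolicyEntry();
        entry.setQueue(">"); // wildcard: apply to all queues
        // Keep pending messages in memory rather than paging via the store cursor.
        entry.setPendingQueuePolicy(new VMPendingQueueMessageStoragePolicy());

        PolicyMap map = new PolicyMap();
        map.setDefaultEntry(entry);

        BrokerService broker = new BrokerService();
        broker.setDestinationPolicy(map);
        return broker;
    }
}
```

The trade-off is that a VM cursor holds pending messages in heap, so it needs enough memory headroom for the queue depth you expect.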
Was this resolved? Can you let us know how? I think we might be hitting a
similar issue.
Sent from the ActiveMQ - User mailing list archive at Nabble.com.
We see only 1-4 msgs per second when we use LevelDB, even though the messages
are non-persistent and consumers are running. With KahaDB the rate is 6000
msgs per second.
What could be the reason for this big difference? Should LevelDB even play a
role, since these are non-persistent messages?
I have 2 slaves and 1 master running, using persistent messages. The master's
memory usage keeps growing.
The following are the objects I see building up:
byte[]: 2.77 GB
LinkedList: 235 MB (9.8 million instances)
org.apache.activemq.leveldb.replicated.FileTransferFrame: 78.7 MB (2.45
million instances)
..Repl
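To track that growth over time without taking a full heap dump each time, a small stdlib-only sketch using MemoryMXBean (class and method names here are mine) can be logged periodically alongside the object histogram:

```java
// Sketch: log current heap usage so growth can be correlated with failovers.
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapWatch {

    // Bytes of heap currently in use.
    public static long usedHeapBytes() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        return heap.getUsed();
    }

    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.printf("heap used: %d MB / committed: %d MB / max: %d MB%n",
            heap.getUsed() / (1024 * 1024),
            heap.getCommitted() / (1024 * 1024),
            heap.getMax() / (1024 * 1024));
    }
}
```

If FileTransferFrame instances keep accumulating between samples, that points at the replication file-transfer path rather than ordinary message buffering.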
Could this fix the memory issue?
http://activemq.2283324.n4.nabble.com/git-commit-Fixes-bug-in-replicated-leveldb-where-log-files-on-slaves-were-not-getting-GCed-td4671551.html
I will try this out...
When I test with the Replicated LevelDB Store and try to fill up the store
(queue) by just running a producer without any consumers, the memory usage
keeps going up and the VM runs out of memory.
Has anyone seen this issue? Any configuration or workarounds for it?
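One thing worth checking in this scenario is whether the broker's systemUsage limits are set: with no consumers, a producer can otherwise outrun memory. A hedged sketch of setting the limits programmatically on an embedded broker (the limit values are arbitrary examples, and your brokers may instead use the systemUsage element in activemq.xml):

```java
// Sketch: cap broker memory/store/temp usage so a runaway producer blocks
// (or fails) instead of exhausting the heap.
import org.apache.activemq.broker.BrokerService;

public class UsageLimits {
    public static BrokerService configure() throws Exception {
        BrokerService broker = new BrokerService();
        // Memory for in-flight/non-persistent messages.
        broker.getSystemUsage().getMemoryUsage().setLimit(64L * 1024 * 1024);      // 64 MB
        // Persistent store growth.
        broker.getSystemUsage().getStoreUsage().setLimit(2L * 1024 * 1024 * 1024); // 2 GB
        // Temp store for non-persistent overflow.
        broker.getSystemUsage().getTempUsage().setLimit(1L * 1024 * 1024 * 1024);  // 1 GB
        // Optional: fail the send instead of blocking the producer at the limit.
        broker.getSystemUsage().setSendFailIfNoSpace(true);
        return broker;
    }
}
```

With producer flow control on (the default), hitting the memory limit should throttle the producer long before the VM runs out of heap; if it doesn't, that itself narrows down the bug.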