Just for others who run into the same kind of stack traces: I was able to
reproduce the problem by starting the component with the aggregators
multiple times.

Apparently I accidentally started multiple instances, so multiple
Camel/Java processes "shared" the same aggregator HawtDB files. I therefore
need to check on startup whether another instance for the same environment
is already running; then this problem should no longer occur.
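
For anyone who wants to add such a guard, here is a minimal sketch of a
startup check using a plain java.nio file lock. The class name, the lock-file
location and the error handling are only my own placeholders, nothing that
comes from Camel or HawtDB:

import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

// Hypothetical startup guard: try to take an exclusive lock on a marker file
// next to the aggregator store and refuse to start if it is already held.
public final class SingleInstanceGuard {

    public static FileLock acquireOrFail(File lockFile) throws Exception {
        RandomAccessFile raf = new RandomAccessFile(lockFile, "rw");
        FileChannel channel = raf.getChannel();
        FileLock lock = channel.tryLock(); // null if another process holds it
        if (lock == null) {
            channel.close();
            throw new IllegalStateException("Another instance already holds "
                    + lockFile + " - refusing to start");
        }
        return lock; // keep the lock (and channel) open for the JVM's lifetime
    }
}

Calling this before starting the CamelContext, and keeping the returned lock
for the lifetime of the process, should prevent a second instance from
opening the same HawtDB files.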

If I understand the docs correctly, LevelDB does this "automatically" with
a file lock that a second instance would not be able to acquire.
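
Since I asked below for an example of using the aggregator with
camel-leveldb, here is a rough sketch of how I understand the setup, based on
the camel-hawtdb configuration. Repository name, file path, correlation
header and completion size are only placeholders:

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.leveldb.LevelDBAggregationRepository;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class LevelDbAggregatorRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // persistent repository backed by LevelDB; name and file are placeholders
        LevelDBAggregationRepository repo =
                new LevelDBAggregationRepository("myRepo", "data/leveldb/myrepo.dat");
        repo.setUseRecovery(true); // replay unconfirmed exchanges after a restart

        // placeholder strategy: simply keep the newest message
        AggregationStrategy latestWins = new AggregationStrategy() {
            public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
                return newExchange;
            }
        };

        from("direct:in")
            .aggregate(header("corrId"), latestWins)
                .aggregationRepository(repo)
                .completionSize(100)
            .to("direct:out");
    }
}

As far as I can tell this mirrors the HawtDB-backed setup, just with LevelDB
files on disk instead.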

Thanks to all
Stephan





On Mon, Jul 22, 2013 at 12:44 PM, Stefan Burkard <sburk...@gmail.com> wrote:

> @Claus: I read there (
> http://leveldb.googlecode.com/svn/trunk/doc/index.html) that it is safe
> when only my camel/java process or even the writing process dies, but that
> it is not safe when the whole operating system dies (power outage). Perhaps
> I am getting this wrong?
>
> @Raul: I don't expect the Camel committers to build these tools, but rather
> the people who build the database. I just wanted to point out that I use
> frameworks to avoid building everything by myself.
>
>
>
>
> On Sun, Jul 21, 2013 at 1:27 PM, Raul Kripalani <r...@evosent.com> wrote:
>
>> > After all I use the great Camel framework to
>> > avoid building general-purpose functionality like this by myself.
>>
>> At Apache Camel we only offer a component to interact with HawtDB, but
>> we're not responsible for the development or the tooling around that
>> project. Sorry.
>>
>> Regards,
>> Raúl.
>> On 18 Jul 2013 08:48, "Stefan Burkard" <sburk...@gmail.com> wrote:
>>
>> > Hi Claus
>> >
>> > I'm using camel-hawtdb 2.9.6 and (according to the classpath) HawtDB 1.6.
>> > The fact that HawtDB has no recovery tools and that I would need to build
>> > them myself is bad news to me. After all, I use the great Camel framework
>> > to avoid building general-purpose functionality like this by myself.
>> >
>> > How about LevelDB? Does it have better tool support? And can I already
>> > use LevelDB with a Camel 2.x release? I stumbled over LevelDB while
>> > searching for recovery options for HawtDB, but I didn't find an example
>> > of how to use it with the aggregator. Is there a unit test or similar I
>> > can look at?
>> >
>> > Thanks
>> > Stefan
>> >
>> >
>> >
>> >
>> >
>> >
>> > On Wed, Jul 17, 2013 at 10:10 AM, Claus Ibsen <claus.ib...@gmail.com>
>> > wrote:
>> >
>> > > What version of Camel and HawtDB are you using?
>> > >
>> > > To try to recover, you would possibly need to write some Java code
>> > > with the HawtDB API to load the corrupted file(s) and peek inside.
>> > >
>> > > Down the road we recommend using camel-leveldb instead of
>> > > camel-hawtdb. This uses LevelDB as the store instead, which is a much
>> > > more mature and widely used store.
>> > > https://code.google.com/p/leveldb/
>> > >
>> > > Apache ActiveMQ 5.9 offers LevelDB out of the box, and it is being
>> > > considered as the recommended/default store over its KahaDB store.
>> > >
>> > > camel-leveldb has the same functionality as camel-hawtdb and is very
>> > > similar to set up.
>> > >
>> > >
>> > > On Tue, Jul 16, 2013 at 10:12 AM, Stefan Burkard <sburk...@gmail.com>
>> > > wrote:
>> > > > Hi Camel users
>> > > >
>> > > > I have a component with two persistent aggregators. One receives all
>> > > > messages, the other only a part of them. After a lot of test runs
>> > > > without problems, I had a serious problem yesterday with the
>> > > > aggregator persistence (HawtDB).
>> > > >
>> > > > I don't know yet what causes the problems, but in any case, problems
>> > > > can occur. My problem is that I cannot recover the data from the
>> > > > HawtDB files.
>> > > >
>> > > > In my logs, I first got about 8 stack traces like the attached
>> > > > "stacktrace1.txt". The number in the error message "The requested page
>> > > > was not an extent: 35" grows from stack trace to stack trace, from 35
>> > > > to 1163.
>> > > >
>> > > > Then I got some stack traces like the attached "stacktrace2.txt".
>> > > >
>> > > > Finally I got A LOT of stack traces like the attached
>> > > > "stacktrace3.txt".
>> > > >
>> > > > After shutting down the component gracefully, I tried to restart it,
>> > > > but this throws stack traces like the attached
>> > > > "stacktrace-startup.txt".
>> > > >
>> > > > I can only start the component again if I rename the HawtDB files so
>> > > > that they are ignored and new HawtDB files are created.
>> > > >
>> > > > This leaves me with the question: how can I recover the corrupted
>> > > > HawtDB files? I didn't find anything about this subject, and if
>> > > > recovery is not possible, it would be a real show-stopper.
>> > > >
>> > > > Thanks for any help
>> > > > Stefan
>> > > >
>> > >
>> > >
>> > >
>> > > --
>> > > Claus Ibsen
>> > > -----------------
>> > > Red Hat, Inc.
>> > > Email: cib...@redhat.com
>> > > Twitter: davsclaus
>> > > Blog: http://davsclaus.com
>> > > Author of Camel in Action: http://www.manning.com/ibsen
>> > >
>> >
>>
>
>
