Re: Error "evicting cold readers" when launching an EmbeddedCassandraService for a second time

2014-05-02 Thread DuyHai Doan
"What do you mean by truncating tables BTW?"

"truncate table ;"  in CQL3

 I think truncating tables is sufficient, as long as you do not run your
tests in a multi-threaded environment.

In a multi-threaded environment I would advise randomizing partition keys so
the tests do not step on each other.

 If you want a sample implementation of a test resource with table
truncation, have a look here:
https://github.com/doanduyhai/Achilles/blob/master/achilles-junit/src/main/java/info/archinnov/achilles/junit/AchillesResource.java
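
As a rough illustration, here is a minimal sketch of such a test resource
using JUnit's ExternalResource and the DataStax Java driver 2.x. The
keyspace and table names are placeholders (not the ones AchillesResource
uses), and in the multi-threaded case you could randomize them per test:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import org.junit.rules.ExternalResource;

// Sketch only: truncates a fixed set of tables before each test so every
// test starts from empty tables. "test_ks" and the table names below are
// hypothetical placeholders.
public class TruncatingResource extends ExternalResource {

    private Cluster cluster;
    private Session session;
    private final String[] tables = {"users", "events"};

    @Override
    protected void before() {
        cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        session = cluster.connect("test_ks");
        for (String table : tables) {
            session.execute("TRUNCATE " + table + ";");
        }
    }

    @Override
    protected void after() {
        cluster.close(); // closes the session too
    }
}

Each test class would then declare it as:
@Rule public TruncatingResource cassandra = new TruncatingResource();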


On Fri, May 2, 2014 at 7:41 PM, Clint Kelly  wrote:

> Hi Duy Hai,
>
> I was just trying to be extra-paranoid and to make sure that any screw up
> in one unit test did not at all affect the environment for my other unit
> tests.
>
> What do you mean by truncating tables BTW?
>
> Best regards,
> Clint
>
>
>
>
> On Thu, May 1, 2014 at 11:05 AM, DuyHai Doan  wrote:
>
>> Hello Clint
>>
>>  Why do you need to remove all SSTables or drop the keyspace between
>> tests? Isn't truncating tables enough to have clean and repeatable tests?
>>
>>  Regards
>>
>>  Duy Hai DOAN
>>
>>
>> On Thu, May 1, 2014 at 5:54 PM, Clint Kelly wrote:
>>
>>> Hi,
>>>
>>> I am deleting all of the directories for SSTables, etc. between tests.
>>> My goal is for each test to start off with a completely blank-slate
>>> Cassandra install.
>>>
>>> I can more-or-less get what I want by just keeping the same
>>> EmbeddedCassandraSession active through *all* of my unit tests and then
>>> just creating and dropping keyspaces every test, but I'd like to know how
>>> to totally start over if I'd like to.
>>>
>>> Thanks!
>>>
>>> Best regards,
>>> Clint
>>>
>>>
>>>
>>>
>>> On Thu, May 1, 2014 at 2:15 AM, DuyHai Doan wrote:
>>>
 Hello Clint

  Just one question: are you sure that nothing in your code removes the
 SSTables between tests? I'm extensively using the same infrastructure as
 the EmbeddedCassandraService with Achilles, and I have no such issue so far

  Regards



 On Wed, Apr 30, 2014 at 8:43 PM, Clint Kelly wrote:

> Hi all,
>
> I have a unit test framework for a Cassandra project that I'm working
> on.  For every one of my test classes, I delete all of the data file,
> commit log, and saved cache locations, start an EmbeddedCassandraService,
> and populate a keyspace and tables from scratch.
>
> Currently, the unit tests that run in my first test class work fine,
> but those in my second class die with this error:
>
> java.io.FileNotFoundException:
> /Users/clint/work/external-repos/cassandra2-hadoop2/target/cassandra/data/system/local/system-local-jb-5-Data.db
> (No such file or directory)
>
> This error happens immediately after I call
> EmbeddedCassandraService.start();
>
> I turned on debugging and traced through the code, and I see this
> right before the error message:
>
> 14/04/30 11:22:47 DEBUG org.apache.cassandra.service.FileCacheService:
> Evicting cold readers for
> /Users/clint/work/external-repos/cassandra2-hadoop2/target/cassandra/data/system/local/system-local-jb-5-Data.db
>
> This seems to happen in a callback when a value (in this case, a file
> reader) is evicted from a Guava cache.
>
> I assume that the problem that I have is something like the following:
>
>    - There is some kind of reading thread associated with
>      target/cassandra/data/system/local/system-local-jb-5-Data.db
>    - Even after I stop my EmbeddedCassandraService and blow away all of the
>      data file, commit log, and saved cache locations from my first unit
>      test, the information about the reader for the now-deleted data file
>      still exists.
>    - Later, when this reference expires in the cache and Cassandra goes to
>      notify the reader, the error occurs because the file no longer exists.
>
> Does anyone have any suggestions on how to deal with this?
>
> Best regards,
> Clint
>


>>>
>>
>


Re: Error "evicting cold readers" when launching an EmbeddedCassandraService for a second time

2014-05-02 Thread Clint Kelly
Hi Duy Hai,

I was just trying to be extra-paranoid and to make sure that any screw up
in one unit test did not at all affect the environment for my other unit
tests.

What do you mean by truncating tables BTW?

Best regards,
Clint




On Thu, May 1, 2014 at 11:05 AM, DuyHai Doan  wrote:

> Hello Clint
>
>  Why do you need to remove all SSTables or drop the keyspace between
> tests? Isn't truncating tables enough to have clean and repeatable tests?
>
>  Regards
>
>  Duy Hai DOAN
>
>
> On Thu, May 1, 2014 at 5:54 PM, Clint Kelly  wrote:
>
>> Hi,
>>
>> I am deleting all of the directories for SSTables, etc. between tests.
>> My goal is for each test to start off with a completely blank-slate
>> Cassandra install.
>>
>> I can more-or-less get what I want by just keeping the same
>> EmbeddedCassandraSession active through *all* of my unit tests and then
>> just creating and dropping keyspaces every test, but I'd like to know how
>> to totally start over if I'd like to.
>>
>> Thanks!
>>
>> Best regards,
>> Clint
>>
>>
>>
>>
>> On Thu, May 1, 2014 at 2:15 AM, DuyHai Doan  wrote:
>>
>>> Hello Clint
>>>
>>>  Just one question: are you sure that nothing in your code removes the
>>> SSTables between tests? I'm extensively using the same infrastructure as
>>> the EmbeddedCassandraService with Achilles, and I have no such issue so far
>>>
>>>  Regards
>>>
>>>
>>>
>>> On Wed, Apr 30, 2014 at 8:43 PM, Clint Kelly wrote:
>>>
 Hi all,

 I have a unit test framework for a Cassandra project that I'm working
 on.  For every one of my test classes, I delete all of the data file,
 commit log, and saved cache locations, start an EmbeddedCassandraService,
 and populate a keyspace and tables from scratch.

 Currently, the unit tests that run in my first test class work fine,
 but those in my second class die with this error:

 java.io.FileNotFoundException:
 /Users/clint/work/external-repos/cassandra2-hadoop2/target/cassandra/data/system/local/system-local-jb-5-Data.db
 (No such file or directory)

 This error happens immediately after I call
 EmbeddedCassandraService.start();

 I turned on debugging and traced through the code, and I see this right
 before the error message:

 14/04/30 11:22:47 DEBUG org.apache.cassandra.service.FileCacheService:
 Evicting cold readers for
 /Users/clint/work/external-repos/cassandra2-hadoop2/target/cassandra/data/system/local/system-local-jb-5-Data.db

 This seems to happen in a callback when a value (in this case, a file
 reader) is evicted from a Guava cache.

 I assume that the problem that I have is something like the following:

   - There is some kind of reading thread associated with
     target/cassandra/data/system/local/system-local-jb-5-Data.db
   - Even after I stop my EmbeddedCassandraService and blow away all of the
     data file, commit log, and saved cache locations from my first unit
     test, the information about the reader for the now-deleted data file
     still exists.
   - Later, when this reference expires in the cache and Cassandra goes to
     notify the reader, the error occurs because the file no longer exists.

 Does anyone have any suggestions on how to deal with this?

 Best regards,
 Clint

>>>
>>>
>>
>


Re: repair -pr does not return

2014-05-02 Thread Robert Coli
On Fri, May 2, 2014 at 12:29 AM, Jan Kesten  wrote:

> I'm running a cassandra cluster with 2.0.6 and 6 nodes. As far as I know,
> routine repairs are still mandatory for handling tombstones - even though I
> noticed that the cluster now does a "snapshot-repair" by default.
>
> Now my cluster has been running for a while and has a load of about 200 GB
> per node - running a "nodetool repair -pr" on one of the nodes seems to run
> forever; right now it's been running for 2 complete days and does not return.
>

https://issues.apache.org/jira/browse/CASSANDRA-5220

The reports I am getting on this list and in #cassandra about the newly
rewritten repair in the 2.0.x line, with vnodes on a real-sized data set,
are that it often does not work, and when it does work, not in a tractable
amount of time. As other posters have said, it is continually being fixed
and improved. If I were you, I would consider increasing gc_grace_seconds to
something like 34 days until repair starts working more efficiently with
vnodes.

https://issues.apache.org/jira/browse/CASSANDRA-5850
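
For reference, gc_grace_seconds is a per-table option; a minimal sketch of
the change via the DataStax Java driver 2.x (34 days = 2937600 seconds; the
keyspace and table names are placeholders):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class RaiseGcGrace {
    public static void main(String[] args) {
        // Sketch only: raise gc_grace_seconds to ~34 days on one table
        // until repair is reliable again. Repeat per table as needed.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            session.execute("ALTER TABLE my_ks.my_table WITH gc_grace_seconds = 2937600;");
        }
    }
}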

=Rob


Re: Backup procedure

2014-05-02 Thread tommaso barbugli
In my tests, compressing SSTables with lzop (with Cassandra compression
turned on) resulted in approx. 50% smaller files.
That's probably because the chunks of data compressed by lzop are way
bigger than the average size of writes performed on Cassandra (I'm not sure
how the data is compressed, but I guess it is done per single cell, so
unless one stores ...)

2014-05-02 19:01 GMT+02:00 Robert Coli :

> On Fri, May 2, 2014 at 2:07 AM, tommaso barbugli wrote:
>
>> If you are thinking about using Amazon S3 storage I wrote a tool that
>> performs snapshots and backups on multiple nodes.
>> Backups are stored compressed on S3.
>> https://github.com/tbarbugli/cassandra_snapshotter
>>
>
> https://github.com/JeremyGrosser/tablesnap
>
> SSTables in Cassandra are compressed by default; if you are re-compressing
> them you may just be wasting CPU. :)
>
> =Rob
>
>


Re: Cassandra slow on PasswordAuthenticator

2014-05-02 Thread Robert Coli
On Fri, May 2, 2014 at 10:00 AM, Patricia Gorla
wrote:

> The latency you're seeing is likely just the cost of using authentication.
>

To expand slightly, it's relatively likely that no one has done performance
optimization of auth-related code.

2 seconds seems "too long" for auth; I would probably file a JIRA were I
you.

=Rob


Re: Backup procedure

2014-05-02 Thread Robert Coli
On Fri, May 2, 2014 at 2:07 AM, tommaso barbugli wrote:

> If you are thinking about using Amazon S3 storage I wrote a tool that
> performs snapshots and backups on multiple nodes.
> Backups are stored compressed on S3.
> https://github.com/tbarbugli/cassandra_snapshotter
>

https://github.com/JeremyGrosser/tablesnap

SSTables in Cassandra are compressed by default; if you are re-compressing
them you may just be wasting CPU. :)
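
If in doubt, you can check which compressor a table already uses before
adding another compression pass on top. A hedged sketch reading the 2.0-era
system schema tables via the Java driver (names are placeholders):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class ShowCompression {
    public static void main(String[] args) {
        // Sketch only: compression_parameters holds the table's compressor
        // settings as JSON text in the Cassandra 2.0 system schema.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            Row row = session.execute(
                "SELECT compression_parameters FROM system.schema_columnfamilies " +
                "WHERE keyspace_name = 'my_ks' AND columnfamily_name = 'my_table';").one();
            System.out.println(row.getString("compression_parameters"));
        }
    }
}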

=Rob


Re: Cassandra slow on PasswordAuthenticator

2014-05-02 Thread Patricia Gorla
Bhaskarjya,

The latency you're seeing is likely just the cost of using authentication.

Cheers,
-- 
Patricia Gorla
@patriciagorla

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com 


Re:

2014-05-02 Thread Patricia Gorla
Ebot,

Could you share a bit more about what you are trying to achieve? CQL3 does
have an analogue to dynamic columns, and you could potentially use
collections (if your data isn't too large).

Hard to say more without details.
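
As an illustration only (the table and column names here are hypothetical),
the usual CQL3 analogue of dynamic columns puts the dynamic column name into
a clustering column:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class DynamicColumnsSketch {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("test_ks")) {
            // One CQL row per (row key, dynamic column name) pair; new
            // "columns" are added by inserting new clustering values.
            session.execute(
                "CREATE TABLE IF NOT EXISTS dynamic_data (" +
                "  row_key text," +
                "  column_name text," +
                "  column_value text," +
                "  PRIMARY KEY (row_key, column_name));");
            session.execute(
                "INSERT INTO dynamic_data (row_key, column_name, column_value) " +
                "VALUES ('user1', 'email', 'user1@example.com');");
        }
    }
}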

Cheers,
-- 
Patricia Gorla
@patriciagorla

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com 


Re: Some questions to adding a new datacenter into cassandra cluster.

2014-05-02 Thread Patricia Gorla
On Wed, Apr 30, 2014 at 10:21 AM, Arindam Barua  wrote:

> Since we don’t change the seeds configuration in the yaml files of DC1 and
> DC2, how do DC1 and DC2 know about the nodes in DC3 if they reboot for some
> reason later?


Additional note: you want to have at least one seed node per availability
zone, and 3 seed nodes per DC. If you add a new DC into the cluster, the
safe route would be to update the seed list across all nodes.

Cheers,
-- 
Patricia Gorla
@patriciagorla

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com 


Re: Backup procedure

2014-05-02 Thread Patricia Gorla
Artur,

Replies inline.

On Fri, May 2, 2014 at 10:42 AM, Artur Kronenberg <
artur.kronenb...@openmarket.com> wrote:

> we are running a 7 node cluster with an RF of 5. Each node holds about 70%
> of the data and we are now wondering about the backup process.
>

What are you using for a backup process at the moment? Or, even just your
application stack. If you're using Amazon's AWS, it is simple to get
started with a project like tablesnap, which listens for new sstables and
uploads them to S3.

You can also take snapshots of the data on each node with 'nodetool
snapshot', and move the data manually.


>  1. Is there a best practice procedure or a tool that we can use to have
> one backup that holds 100 % of the data or is it necessary for us to take
> multiple backups.
>

Backups on a distributed system generally refer to capturing the state of
the database at a particular point in time. The size and spread of your data
will be the limiting factor in having one backup: you can store the data
from each node on a single computer, you just won't be able to combine the
data into one node without some extra legwork.


> 2. If we have to use multiple backups, is there a way to combine them? We
> would like to be able to start up a 1 node cluster that holds 100% of data
> if necessary. Can we just chug all sstables into the data directory and
> cassandra will figure out the rest?
>


> 4. If all of the above would work, could we in case of emergency setup a
> massive 1-node cluster that holds 100 % of the data and repair the rest of
> our cluster based of this? E.g. have the 1 node run with the correct data,
> and then hook it into our existing cluster and call repair on it to restore
> data on the rest of our nodes?
>

You could bulk load the sstable data into a smaller cluster using the
'sstableloader' tool. I gave a webinar for Planet Cassandra a few months ago
about how to backfill data into your cluster, which could help here.

> 3. How do we handle the commitlog files from all of our nodes? Given we'd
> like to restore to a certain point in time and we have all the commitlogs,
> can we have commitlogs from multiple locations in the commitlog folder and
> cassandra will pick and execute the right thing?
>

You'll want to use 'nodetool drain' beforehand to avoid this issue. This
makes the node unavailable for writes, flushes the memtables, and replays
the commitlog.

Cheers,
-- 
Patricia Gorla
@patriciagorla

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com 


Re: *Union* data type modeling in Cassandra

2014-05-02 Thread DuyHai Doan
Hello Ngoc Minh

 I'd go with the first data model. To solve the null <-> tombstone issue,
just do not insert a column at runtime if its value is null.

 If only numvalue (double) != null -> INSERT INTO data_table(key, numvalue)
VALUES (..., ...);
 If only numvalues (list<double>) != null -> INSERT INTO
data_table(key, numvalues) VALUES (..., ...);
and so on ...

 It means that you'll need to somehow perform null checks in your code at
runtime, but it's the price to pay to avoid tombstones and heavy
compaction.
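
A minimal sketch of that runtime null check with the DataStax Java driver
2.x (the class and method names are placeholders; the columns match the
data_table schema quoted below):

import java.util.List;
import com.datastax.driver.core.Session;

public class UnionDao {
    // Sketch only: write the single non-null column, so the three absent
    // columns never receive tombstones.
    public void save(Session session, String key, Double numValue,
                     List<Double> numValues, String strValue, List<String> strValues) {
        if (numValue != null) {
            session.execute("INSERT INTO data_table (key, numvalue) VALUES (?, ?)", key, numValue);
        } else if (numValues != null) {
            session.execute("INSERT INTO data_table (key, numvalues) VALUES (?, ?)", key, numValues);
        } else if (strValue != null) {
            session.execute("INSERT INTO data_table (key, strvalue) VALUES (?, ?)", key, strValue);
        } else if (strValues != null) {
            session.execute("INSERT INTO data_table (key, strvalues) VALUES (?, ?)", key, strValues);
        }
    }
}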

Regards

 Duy Hai DOAN


On Fri, May 2, 2014 at 11:40 AM, Ngoc Minh VO wrote:

>  Hello all,
>
>
>
> I don’t know whether this is the right place to discuss data modeling
> with Cassandra.
>
>
>
> We would like to have your feedback/recommendations on our schema
> modeling:
>
> 1.   Our data are stored in a CF by their unique key (K)
>
> 2.   Data type could be one of the following: Double, List<Double>,
> String, List<String>
>
> 3.   Hence we create a data table with:
>
> CREATE TABLE data_table (
>
>  key text,
>
>
>
>  numvalue double,
>
>  numvalues list<double>,
>
>  strvalue text,
>
>  strvalues list<text>,
>
>
>
>  PRIMARY KEY (key)
>
> );
>
> 4.   *One and only one* of the four columns contains a non-null
> value. The three others always contain null.
>
> 5.   Pros: easy to debug
>
>
>
> This modeling works fine for us so far. But C* considers null values as
> tombstones, and we start being overwhelmed by tombstones when their number
> reaches the threshold.
>
>
>
> We are planning to move to a simpler schema with only two columns:
>
> CREATE TABLE data_table (
>
>  key text,
>
>  value blob, -- containing serialized data
>
>  PRIMARY KEY (key)
>
> );
>
> Pros: no null values, more efficient in terms of storage?
>
> Cons: deserialization is handled on the client side instead of in the Java
> driver (not sure which one is more efficient…)
>
>
>
> Could you please confirm that using “null” values in CF for non-expired
> “rows” is not a good practice?
>
>
>
> Thanks in advance for your help.
>
> Best regards,
>
> Minh
>


*Union* data type modeling in Cassandra

2014-05-02 Thread Ngoc Minh VO
Hello all,

I don't know whether this is the right place to discuss data modeling with
Cassandra.

We would like to have your feedback/recommendations on our schema modeling:

1.   Our data are stored in a CF by their unique key (K)

2.   Data type could be one of the following: Double, List<Double>, String,
List<String>

3.   Hence we create a data table with:

CREATE TABLE data_table (

 key text,



 numvalue double,

 numvalues list<double>,

 strvalue text,

 strvalues list<text>,



 PRIMARY KEY (key)

);

4.   One and only one of the four columns contains a non-null value. The 
three others always contain null.

5.   Pros: easy to debug

This modeling works fine for us so far. But C* considers null values as
tombstones, and we start being overwhelmed by tombstones when their number
reaches the threshold.

We are planning to move to a simpler schema with only two columns:

CREATE TABLE data_table (

 key text,

 value blob, -- containing serialized data

 PRIMARY KEY (key)

);
Pros: no null values, more efficient in terms of storage?
Cons: deserialization is handled on the client side instead of in the Java
driver (not sure which one is more efficient...)
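
For illustration, a hedged sketch of what the client-side encoding could
look like, using a one-byte type tag in front of the payload (the format and
names are assumptions, and a real version would also cover the two list
variants):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class UnionBlobCodec {
    // Sketch only: tag byte 0 = double, 1 = string. The resulting
    // ByteBuffer can be bound to the blob column with the Java driver.
    static final byte TYPE_DOUBLE = 0;
    static final byte TYPE_STRING = 1;

    static ByteBuffer encodeDouble(double v) {
        return (ByteBuffer) ByteBuffer.allocate(9).put(TYPE_DOUBLE).putDouble(v).flip();
    }

    static ByteBuffer encodeString(String s) {
        byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
        return (ByteBuffer) ByteBuffer.allocate(1 + bytes.length).put(TYPE_STRING).put(bytes).flip();
    }
}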

Could you please confirm that using "null" values in CF for non-expired "rows" 
is not a good practice?

Thanks in advance for your help.
Best regards,
Minh




Re: Backup procedure

2014-05-02 Thread tommaso barbugli
If you are thinking about using Amazon S3 storage I wrote a tool that
performs snapshots and backups on multiple nodes.
Backups are stored compressed on S3.
https://github.com/tbarbugli/cassandra_snapshotter

Cheers,
Tommaso


2014-05-02 10:42 GMT+02:00 Artur Kronenberg :

> Hi,
>
> we are running a 7 node cluster with an RF of 5. Each node holds about 70%
> of the data and we are now wondering about the backup process.
>
> 1. Is there a best practice procedure or a tool that we can use to have
> one backup that holds 100 % of the data or is it necessary for us to take
> multiple backups.
>
> 2. If we have to use multiple backups, is there a way to combine them? We
> would like to be able to start up a 1 node cluster that holds 100% of data
> if necessary. Can we just chug all sstables into the data directory and
> cassandra will figure out the rest?
>
> 3. How do we handle the commitlog files from all of our nodes? Given we'd
> like to restore to a certain point in time and we have all the commitlogs,
> can we have commitlogs from multiple locations in the commitlog folder and
> cassandra will pick and execute the right thing?
>
> 4. If all of the above would work, could we in case of emergency setup a
> massive 1-node cluster that holds 100 % of the data and repair the rest of
> our cluster based of this? E.g. have the 1 node run with the correct data,
> and then hook it into our existing cluster and call repair on it to restore
> data on the rest of our nodes?
>
> Thanks for your help!
>
> Cheers,
>
> Artur
>


Backup procedure

2014-05-02 Thread Artur Kronenberg

Hi,

we are running a 7 node cluster with an RF of 5. Each node holds about 
70% of the data and we are now wondering about the backup process.


1. Is there a best practice procedure or a tool that we can use to have 
one backup that holds 100 % of the data or is it necessary for us to 
take multiple backups.


2. If we have to use multiple backups, is there a way to combine them? 
We would like to be able to start up a 1 node cluster that holds 100% of 
data if necessary. Can we just chug all sstables into the data directory 
and cassandra will figure out the rest?


3. How do we handle the commitlog files from all of our nodes? Given 
we'd like to restore to a certain point in time and we have all the 
commitlogs, can we have commitlogs from multiple locations in the 
commitlog folder and cassandra will pick and execute the right thing?


4. If all of the above would work, could we in case of emergency setup a 
massive 1-node cluster that holds 100 % of the data and repair the rest 
of our cluster based of this? E.g. have the 1 node run with the correct 
data, and then hook it into our existing cluster and call repair on it 
to restore data on the rest of our nodes?


Thanks for your help!

Cheers,

Artur


Re: repair -pr does not return

2014-05-02 Thread Artur Kronenberg

Hi,

to be honest, 2 days for 200 GB nodes doesn't sound too unreasonable to me
(depending on your hardware, of course). We were running a ~20 GB cluster
with regular hard drives (no SSDs) and our first repair ran for a day as
well, if I recall correctly. We have since improved our hardware and got it
down to a couple of hours (~5h for all nodes triggering a -pr repair).


As far as I know you can use nodetool compactionstats and nodetool netstats
to check for activity on your repairs. There is a chance that it is
hanging, but also that it just really takes quite a long time.


Cheers,

-- artur

On 02/05/14 09:12, Jan Kesten wrote:

Hi Duncan,

is it actually doing something or does it look like it got stuck?  
2.0.7 has a fix for a getting stuck problem.


it starts by sending merkle trees and streaming for some time (some hours
in fact) and then just seems to hang. So I'll try to update and see if
that solves the issue. Thanks for that hint!


Cheers,
Jan






Re: repair -pr does not return

2014-05-02 Thread Jan Kesten

Hi Duncan,

is it actually doing something or does it look like it got stuck?  
2.0.7 has a fix for a getting stuck problem.


it starts by sending merkle trees and streaming for some time (some hours
in fact) and then just seems to hang. So I'll try to update and see if
that solves the issue. Thanks for that hint!


Cheers,
Jan




Re: repair -pr does not return

2014-05-02 Thread Duncan Sands

Hi Jan,

On 02/05/14 09:29, Jan Kesten wrote:

Hello together,

I'm running a cassandra cluster with 2.0.6 and 6 nodes. As far as I know,
routine repairs are still mandatory for handling tombstones - even though I
noticed that the cluster now does a "snapshot-repair" by default.

Now my cluster has been running for a while and has a load of about 200 GB
per node - running a "nodetool repair -pr" on one of the nodes seems to run
forever; right now it's been running for 2 complete days and does not return.


is it actually doing something or does it look like it got stuck?  2.0.7 has a 
fix for a getting stuck problem.


Ciao, Duncan.



Any suggestions?

Thanks in advance,
Jan







repair -pr does not return

2014-05-02 Thread Jan Kesten

Hello together,

I'm running a cassandra cluster with 2.0.6 and 6 nodes. As far as I know,
routine repairs are still mandatory for handling tombstones - even though I
noticed that the cluster now does a "snapshot-repair" by default.


Now my cluster has been running for a while and has a load of about 200 GB
per node - running a "nodetool repair -pr" on one of the nodes seems to run
forever; right now it's been running for 2 complete days and does not return.


Any suggestions?

Thanks in advance,
Jan