Read consistency

2016-03-15 Thread Arko Provo Mukherjee
Hello,

I am designing a system where, for one particular situation, I need SERIAL
consistency during writes.

I understand that if the write was done with QUORUM, a read with QUORUM would
fetch the correct (most recent) data.

My question is: what is the minimum consistency level required for reads
when my write consistency is SERIAL?

Any pointers would be much appreciated.

Thanks & regards
Arko
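
A write at SERIAL consistency is a lightweight transaction (LWT), which goes
through Paxos. The commonly cited answer is that to be guaranteed to observe
the latest committed LWT write, the read must itself be performed at SERIAL
(or LOCAL_SERIAL) consistency, since a plain QUORUM read can miss a Paxos
round that has been accepted but not yet committed. A cqlsh sketch (the
accounts table and its columns are hypothetical, purely for illustration):

```sql
-- Write side: a conditional (LWT) write; SERIAL CONSISTENCY governs the Paxos phase.
CONSISTENCY QUORUM;
SERIAL CONSISTENCY SERIAL;
INSERT INTO accounts (id, balance) VALUES (1, 100) IF NOT EXISTS;

-- Read side: reading at SERIAL consistency completes any in-flight Paxos
-- round first, so it linearizes with LWT writes; QUORUM alone does not.
CONSISTENCY SERIAL;
SELECT balance FROM accounts WHERE id = 1;
```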


Re: Compaction Filter in Cassandra

2016-03-15 Thread Eric Stevens
We have been working on filtering compaction for a month or so (though we
call it deleting compaction, it is implemented as a filtering compaction
strategy).  The feature is nearing completion, and we have used it
successfully in a limited production capacity against the DSE 4.8 series.

Our use case is that our records are written anywhere from a month up to
several years before they are scheduled for deletion.  Tombstones are too
expensive, as we have tables with hundreds of billions of rows.  In
addition, traditional TTLs don't work for us because our customers are
permitted to change their retention policy, such that already-written
records should not be deleted if the customer increases their retention
after the record was written (or vice versa).

We can clean up data more cheaply and more quickly with filtered compaction
than with tombstones and traditional compaction.  Our implementation is a
wrapper compaction strategy around another underlying strategy, so you can
keep the characteristics of whichever strategy makes sense for managing your
SSTables, while interceding during compaction to remove records (including
cleaning up secondary indexes) that would otherwise have survived into the
new SSTable.

We are hoping to contribute it back to the community, so if you'd be
interested in helping test it out, I'd love to hear from you.

On Sat, Mar 12, 2016 at 5:12 AM Marcus Eriksson  wrote:

> We don't have anything like that. Do you have a specific use case in mind?
>
> Could you create a JIRA ticket and we can discuss there?
>
> /Marcus
>
> On Sat, Mar 12, 2016 at 7:05 AM, Dikang Gu  wrote:
>
>> Hello there,
>>
>> RocksDB has the feature called "Compaction Filter" to allow application
>> to modify/delete a key-value during the background compaction.
>> https://github.com/facebook/rocksdb/blob/v4.1/include/rocksdb/options.h#L201-L226
>>
>> I'm wondering is there a plan/value to add this into C* as well? Or is
>> there already a similar thing in C*?
>>
>> Thanks
>>
>> --
>> Dikang
>>
>>
>


Modeling Audit Trail on Cassandra

2016-03-15 Thread I PVP
Hi everyone,

I am looking for your feedback or advice on modeling an audit trail log table 
in Cassandra that stores information tracking everything an employee 
changes within the application.

The existing application is being migrated from MySQL to Cassandra.

Is text the most appropriate data type for storing JSON that contains a couple 
of dozen lines?

CREATE TABLE audit_trail (
auditid timeuuid,
actiontype text,
objecttype text,
executedby timeuuid,
executedat text,
objectbefore text,
objectafter text,
clientipaddr text,
serveripaddr text,
servername text,
channel text,
PRIMARY KEY (auditid)
);

auditid // a UUID of the audit log entry
actiontype // create, retrieve, update, delete, approve, activate, unlock, lock 
etc.
objecttype // order, customer, ticket, message,account, 
paymenttransaction,refund
executedby // the UUID of the employee
executedat // timestamp when the action happened
objectbefore // the json of the object before the change
objectafter  // the json of the object after the change
clientipaddr // the ip address of the client
serveripaddr // the server ip address that handled the request
servername // the server name that handled the request
channel  //web, mobile, call center

The query requirement is "where executedby = ?".
We will be using Stratio's Cassandra Lucene Index to support querying/filtering.
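
As a side note: text is a reasonable type for JSON payloads of a couple of
dozen lines. And since the only stated query is by executedby, here is a
sketch of an alternative that serves that query natively, with no external
index (the table name is invented; column names and types are copied from
the table above):

```sql
-- Sketch: same columns, repartitioned by employee so that
-- "WHERE executedby = ?" is a native partition lookup. The timeuuid
-- clustering column keeps each employee's actions time-ordered.
CREATE TABLE audit_trail_by_employee (
    executedby timeuuid,
    auditid timeuuid,
    actiontype text,
    objecttype text,
    executedat text,
    objectbefore text,
    objectafter text,
    clientipaddr text,
    serveripaddr text,
    servername text,
    channel text,
    PRIMARY KEY (executedby, auditid)
) WITH CLUSTERING ORDER BY (auditid DESC);
```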


Thanks!
--
IPVP



Re: Removing Node causes bunch of HostUnavailableException

2016-03-15 Thread Peddi, Praveen
Hi Alain,
Sorry I completely missed your email until my colleague pointed it out.

From the testing we have done so far, we still have this issue when removing 
nodes on 2.0.9, but not on 2.2.4. We will be upgrading to 2.2.4 pretty soon, so 
I am not too worried about the errors in 2.0.9. Since we haven't changed any code 
on our side between 2.0.9 and 2.2.4, I am guessing this is probably a bug in 
2.0.9, and since 2.0.9 is no longer supported, I didn't create a JIRA ticket 
for it.

Thanks for following up though.

Praveen





From: Alain RODRIGUEZ
Reply-To: "user@cassandra.apache.org"
Date: Thursday, March 10, 2016 at 5:30 AM
To: "user@cassandra.apache.org"
Subject: Re: Removing Node causes bunch of HostUnavailableException

Hi Praveen, how is this going?

I have been out for a while. Did you manage to remove the nodes? Do you need 
more help? If so, I could use a status update and more information about the 
remaining issues.

C*heers,
---
Alain Rodriguez - al...@thelastpickle.com
France

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com

2016-03-04 19:39 GMT+01:00 Peddi, Praveen:
Hi Jack,
My answers below…

What is the exact exception you are getting and where do you get it? Is it 
UnavailableException or NoHostAvailableException, and does it occur on the 
client, using the Java driver?
We saw different types of exceptions. The ones I could quickly grep are:
com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout 
during write query at consistency SERIAL (2 replica were required but only 1 
acknowledged the write)
com.datastax.driver.core.exceptions.UnavailableException: Not enough replica 
available for query at consistency QUORUM (2 required but only 1 alive)
QueryTimeoutException



What is your LoadBalancingPolicy?

new TokenAwarePolicy(new RoundRobinPolicy()))

What consistency level is the client using?
QUORUM for reads. For writes, some APIs use SERIAL and some use QUORUM, 
depending on whether we want to do optimistic locking.

What retry policy is the client using?
Default Retry Policy


When you say that the failures don't last for more than a few minutes, do you 
mean from the moment you perform the nodetool removenode? And is operation 
completely normal after those few minutes?
That is correct. All operations recover from failures after a few minutes.



-- Jack Krupansky

On Thu, Mar 3, 2016 at 4:40 PM, Peddi, Praveen 
> wrote:
Hi Jack,
Which node(s) were getting the HostNotAvailable errors - all nodes for every 
query, or just a small portion of the nodes on some queries?
Not all reads/writes were failing with an Unavailable or Timeout exception. 
Write failures were around 10% of total calls. Reads were a little worse (as 
bad as 35% of total calls).


It may take some time for the gossip state to propagate; maybe some of it is 
corrupted or needs a full refresh.

Were any of the seed nodes in the collection of nodes that were removed? How 
many seed nodes does each node typically have?
We currently use all hosts as seed hosts, which I know is a very bad idea, and 
we are going to fix that soon. The reason we use all hosts as seed hosts is 
that these hosts can get recycled for many reasons and we didn't want to 
hard-code the host names, so we fetch the host names programmatically (we wrote 
our own seed host provider). Could that be the reason for these failures? If a 
dead node is in the seed node list and we try to remove that node, could that 
lead to a blip of failures? The failures don't last for more than a few minutes.


-- Jack Krupansky

On Thu, Mar 3, 2016 at 4:16 PM, Peddi, Praveen 
> wrote:
Thanks, Alain, for the quick and detailed response. My answers are inline. One 
thing I want to clarify: the nodes were recycled due to an automatic health 
check failure. This means the old nodes are dead and new nodes were added 
without our intervention, so replacing nodes would not work for us since the 
new nodes were already added.


We are not removing multiple nodes at the same time. All dead nodes are in the 
same AZ, so there were no errors while the nodes were down, as expected (because 
we use QUORUM).

Do you use at least 3 distinct AZs? If so, you should indeed be fine regarding 
data integrity, and repair should then work for you. If you have fewer than 3 
AZs, then you are in trouble...
Yes, we use 3 distinct AZs and replicate to all 3 AZs, which is why, when 8 
nodes were recycled, there was absolutely no outage on Cassandra (the other two 
nodes still satisfy quorum consistency).

About the unreachable errors, I believe it can be due to the overload due to 
the 

Re: Cassandra Upgrade 3.0.x vs 3.x (Tick-Tock Release)

2016-03-15 Thread Kathiresan S
ok, thanks...

On Tue, Mar 15, 2016 at 1:29 PM, ssiv...@gmail.com 
wrote:

> Note that DataStax Enterprise still uses C* v2.1.
>
>
> On 03/15/2016 08:25 PM, Kathiresan S wrote:
>
> Thank you all !
>
> Thanks,
> Kathir
>
>
> On Tue, Mar 15, 2016 at 5:50 AM, ssiv...@gmail.com <
> ssiv...@gmail.com> wrote:
>
>> I think that it's not ready, since it has critical bugs. See emails about
>> C* memory leaks
>>
>>
>> On 03/15/2016 01:15 AM, Robert Coli wrote:
>>
>> On Mon, Mar 14, 2016 at 12:40 PM, Kathiresan S <
>> kathiresanselva...@gmail.com> wrote:
>>
>>> We are planning for Cassandra upgrade in our production environment.
>>> Which version of Cassandra is stable and is advised to upgrade to, at
>>> the moment?
>>>
>>
>>
>> https://www.eventbrite.com/engineering/what-version-of-cassandra-should-i-run/
>>
>> (IOW, you should run either 2.1.MAX or 2.2.5)
>>
>> Relatively soon, the answer will be "3.0.x", probably around the time
>> where 3.0.x is >= 6.
>>
>> After this series, the change in release cadence may change the above
>> rule of thumb.
>>
>> =Rob
>>
>>
>> --
>> Thanks,
>> Serj
>>
>>
>
> --
> Thanks,
> Serj
>
>


Re: Cassandra Upgrade 3.0.x vs 3.x (Tick-Tock Release)

2016-03-15 Thread ssiv...@gmail.com

Note that DataStax Enterprise still uses C* v2.1.

On 03/15/2016 08:25 PM, Kathiresan S wrote:

Thank you all !

Thanks,
Kathir


On Tue, Mar 15, 2016 at 5:50 AM, ssiv...@gmail.com wrote:


I think that it's not ready, since it has critical bugs. See
emails about C* memory leaks


On 03/15/2016 01:15 AM, Robert Coli wrote:

On Mon, Mar 14, 2016 at 12:40 PM, Kathiresan S wrote:

We are planning for Cassandra upgrade in our production
environment.
Which version of Cassandra is stable and is advised to
upgrade to, at the moment?



https://www.eventbrite.com/engineering/what-version-of-cassandra-should-i-run/

(IOW, you should run either 2.1.MAX or 2.2.5)

Relatively soon, the answer will be "3.0.x", probably around the
time where 3.0.x is >= 6.

After this series, the change in release cadence may change the
above rule of thumb.

=Rob


-- 
Thanks,

Serj




--
Thanks,
Serj



Re: Cassandra Upgrade 3.0.x vs 3.x (Tick-Tock Release)

2016-03-15 Thread Kathiresan S
Thank you all !

Thanks,
Kathir


On Tue, Mar 15, 2016 at 5:50 AM, ssiv...@gmail.com 
wrote:

> I think that it's not ready, since it has critical bugs. See emails about
> C* memory leaks
>
>
> On 03/15/2016 01:15 AM, Robert Coli wrote:
>
> On Mon, Mar 14, 2016 at 12:40 PM, Kathiresan S <
> kathiresanselva...@gmail.com> wrote:
>
>> We are planning for Cassandra upgrade in our production environment.
>> Which version of Cassandra is stable and is advised to upgrade to, at the
>> moment?
>>
>
>
> https://www.eventbrite.com/engineering/what-version-of-cassandra-should-i-run/
>
> (IOW, you should run either 2.1.MAX or 2.2.5)
>
> Relatively soon, the answer will be "3.0.x", probably around the time
> where 3.0.x is >= 6.
>
> After this series, the change in release cadence may change the above rule
> of thumb.
>
> =Rob
>
>
> --
> Thanks,
> Serj
>
>


Re:

2016-03-15 Thread Jack Krupansky
Be sure to post your final (working) insert for others to learn from!

-- Jack Krupansky
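
The final working statement never appears in the thread. Based on Jack's
advice (no quotes around UDT field names) and Spencer's schema fix, it would
presumably look something like the sketch below. The generic type parameters
are reconstructed (the archive stripped angle-bracket content), the 'primary'
map key is invented for illustration, and the obfuscated addresses are kept
as they appear in the archive:

```sql
-- Presumed schema after Spencer's fix:
--   loginIds map<text, frozen<loginId>>
--   CREATE TYPE loginId (emails set<text>, unverifiedEmails set<text>)
-- UDT field names are unquoted; map keys (text) are quoted.
INSERT INTO users (uid, loginIds)
VALUES ('111',
        { 'primary' : { emails           : {'f...@baggins.com', 'bagg...@gmail.com'},
                        unverifiedEmails : {'a...@baggins.com', 'c...@gmail.com'} } });
```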

On Tue, Mar 15, 2016 at 11:56 AM, Rami Badran 
wrote:

> thanks got it
>
> On Tue, Mar 15, 2016 at 5:54 PM, Jack Krupansky 
> wrote:
>
>> There's a UDT example in the doc, showing that  you don't put quotes
>> around the UDT key names:
>> https://docs.datastax.com/en/cql/3.3/cql/cql_using/useInsertUDT.html
>>
>>
>> -- Jack Krupansky
>>
>> On Tue, Mar 15, 2016 at 11:52 AM, Jack Krupansky <
>> jack.krupan...@gmail.com> wrote:
>>
>>> In any case, please post any diagnostic message/exception that you may
>>> be getting.
>>>
>>> -- Jack Krupansky
>>>
>>> On Tue, Mar 15, 2016 at 11:13 AM, Rami Badran 
>>> wrote:
>>>
 sorry like this
 insert into users (uid,loginIds) values ('111','{ 'emails'  : '{'
 f...@baggins.com', 'bagg...@gmail.com'}','unverifiedEmails' : '{'
 f...@baggins.com', 'bagg...@gmail.com'}' }');

 On Tue, Mar 15, 2016 at 5:01 PM, Jack Krupansky <
 jack.krupan...@gmail.com> wrote:

> No quotes around the UDT key names. (Or use double quotes.)
>
> -- Jack Krupansky
>
> On Tue, Mar 15, 2016 at 10:56 AM, Rami Badran  > wrote:
>
>> here is the CQL
>>
>> insert into users (uid,loginIds) values ('111',{ 'emails'  : {'
>> f...@baggins.com', 'bagg...@gmail.com'},'unverifiedEmails' : {'
>> a...@baggins.com', 'c...@gmail.com'} });
>>
>>
>> On Tue, Mar 15, 2016 at 4:54 PM, Spencer Brown 
>> wrote:
>>
>>> Should be loginIds map, with s at the end
>>> of the loginId in the map definition.
>>>
>>> On Tue, Mar 15, 2016 at 10:38 AM, Rami Badran <
>>> ramibadran...@gmail.com> wrote:
>>>
 Hi

 i have the following cassandra schema structure:

 CREATE TABLE users (
 uid TEXT,
 loginIds map,
 primary key (uid)
 );

 CREATE TYPE loginId (
 emails set,
 unverifiedEmails set,
  );

 and i tried to insert record to my table,but i have problem with
 loginIds attribute,
 could you please advice how i can insert record

 --

 Regards
 Rami Badran

>>>
>>>
>>
>>
>> --
>>
>> Regards
>> Rami Badran
>>
>
>


 --

 Regards
 Rami Badran

>>>
>>>
>>
>
>
> --
>
> Regards
> Rami Badran
>


Re:

2016-03-15 Thread Jack Krupansky
There's a UDT example in the doc, showing that  you don't put quotes around
the UDT key names:
https://docs.datastax.com/en/cql/3.3/cql/cql_using/useInsertUDT.html


-- Jack Krupansky

On Tue, Mar 15, 2016 at 11:52 AM, Jack Krupansky 
wrote:

> In any case, please post any diagnostic message/exception that you may be
> getting.
>
> -- Jack Krupansky
>
> On Tue, Mar 15, 2016 at 11:13 AM, Rami Badran 
> wrote:
>
>> sorry like this
>> insert into users (uid,loginIds) values ('111','{ 'emails'  : '{'
>> f...@baggins.com', 'bagg...@gmail.com'}','unverifiedEmails' : '{'
>> f...@baggins.com', 'bagg...@gmail.com'}' }');
>>
>> On Tue, Mar 15, 2016 at 5:01 PM, Jack Krupansky > > wrote:
>>
>>> No quotes around the UDT key names. (Or use double quotes.)
>>>
>>> -- Jack Krupansky
>>>
>>> On Tue, Mar 15, 2016 at 10:56 AM, Rami Badran 
>>> wrote:
>>>
 here is the CQL

 insert into users (uid,loginIds) values ('111',{ 'emails'  : {'
 f...@baggins.com', 'bagg...@gmail.com'},'unverifiedEmails' : {'
 a...@baggins.com', 'c...@gmail.com'} });


 On Tue, Mar 15, 2016 at 4:54 PM, Spencer Brown 
 wrote:

> Should be loginIds map, with s at the end of
> the loginId in the map definition.
>
> On Tue, Mar 15, 2016 at 10:38 AM, Rami Badran  > wrote:
>
>> Hi
>>
>> i have the following cassandra schema structure:
>>
>> CREATE TABLE users (
>> uid TEXT,
>> loginIds map,
>> primary key (uid)
>> );
>>
>> CREATE TYPE loginId (
>> emails set,
>> unverifiedEmails set,
>>  );
>>
>> and i tried to insert record to my table,but i have problem with
>> loginIds attribute,
>> could you please advice how i can insert record
>>
>> --
>>
>> Regards
>> Rami Badran
>>
>
>


 --

 Regards
 Rami Badran

>>>
>>>
>>
>>
>> --
>>
>> Regards
>> Rami Badran
>>
>
>


Re:

2016-03-15 Thread Rami Badran
sorry like this
insert into users (uid,loginIds) values ('111','{ 'emails'  : '{'
f...@baggins.com', 'bagg...@gmail.com'}','unverifiedEmails' : 
'{'f...@baggins.com',
'bagg...@gmail.com'}' }');

On Tue, Mar 15, 2016 at 5:01 PM, Jack Krupansky 
wrote:

> No quotes around the UDT key names. (Or use double quotes.)
>
> -- Jack Krupansky
>
> On Tue, Mar 15, 2016 at 10:56 AM, Rami Badran 
> wrote:
>
>> here is the CQL
>>
>> insert into users (uid,loginIds) values ('111',{ 'emails'  : {'
>> f...@baggins.com', 'bagg...@gmail.com'},'unverifiedEmails' : 
>> {'a...@baggins.com',
>> 'c...@gmail.com'} });
>>
>>
>> On Tue, Mar 15, 2016 at 4:54 PM, Spencer Brown 
>> wrote:
>>
>>> Should be loginIds map, with s at the end of
>>> the loginId in the map definition.
>>>
>>> On Tue, Mar 15, 2016 at 10:38 AM, Rami Badran 
>>> wrote:
>>>
 Hi

 i have the following cassandra schema structure:

 CREATE TABLE users (
 uid TEXT,
 loginIds map,
 primary key (uid)
 );

 CREATE TYPE loginId (
 emails set,
 unverifiedEmails set,
  );

 and i tried to insert record to my table,but i have problem with
 loginIds attribute,
 could you please advice how i can insert record

 --

 Regards
 Rami Badran

>>>
>>>
>>
>>
>> --
>>
>> Regards
>> Rami Badran
>>
>
>


-- 

Regards
Rami Badran


Re:

2016-03-15 Thread Rami Badran
Sorry, I am new to Cassandra and I want to check that I got you. You mean like
this?

insert into users (uid,loginIds) values ('111','{ 'emails'  : {'
f...@baggins.com', 'bagg...@gmail.com'},'unverifiedEmails' : 
{'a...@baggins.com',
'c...@gmail.com'} }');

On Tue, Mar 15, 2016 at 5:01 PM, Jack Krupansky 
wrote:

> No quotes around the UDT key names. (Or use double quotes.)
>
> -- Jack Krupansky
>
> On Tue, Mar 15, 2016 at 10:56 AM, Rami Badran 
> wrote:
>
>> here is the CQL
>>
>> insert into users (uid,loginIds) values ('111',{ 'emails'  : {'
>> f...@baggins.com', 'bagg...@gmail.com'},'unverifiedEmails' : 
>> {'a...@baggins.com',
>> 'c...@gmail.com'} });
>>
>>
>> On Tue, Mar 15, 2016 at 4:54 PM, Spencer Brown 
>> wrote:
>>
>>> Should be loginIds map, with s at the end of
>>> the loginId in the map definition.
>>>
>>> On Tue, Mar 15, 2016 at 10:38 AM, Rami Badran 
>>> wrote:
>>>
 Hi

 i have the following cassandra schema structure:

 CREATE TABLE users (
 uid TEXT,
 loginIds map,
 primary key (uid)
 );

 CREATE TYPE loginId (
 emails set,
 unverifiedEmails set,
  );

 and i tried to insert record to my table,but i have problem with
 loginIds attribute,
 could you please advice how i can insert record

 --

 Regards
 Rami Badran

>>>
>>>
>>
>>
>> --
>>
>> Regards
>> Rami Badran
>>
>
>


-- 

Regards
Rami Badran


Re:

2016-03-15 Thread Jack Krupansky
No quotes around the UDT key names. (Or use double quotes.)

-- Jack Krupansky

On Tue, Mar 15, 2016 at 10:56 AM, Rami Badran 
wrote:

> here is the CQL
>
> insert into users (uid,loginIds) values ('111',{ 'emails'  : {'
> f...@baggins.com', 'bagg...@gmail.com'},'unverifiedEmails' : 
> {'a...@baggins.com',
> 'c...@gmail.com'} });
>
>
> On Tue, Mar 15, 2016 at 4:54 PM, Spencer Brown 
> wrote:
>
>> Should be loginIds map, with s at the end of
>> the loginId in the map definition.
>>
>> On Tue, Mar 15, 2016 at 10:38 AM, Rami Badran 
>> wrote:
>>
>>> Hi
>>>
>>> i have the following cassandra schema structure:
>>>
>>> CREATE TABLE users (
>>> uid TEXT,
>>> loginIds map,
>>> primary key (uid)
>>> );
>>>
>>> CREATE TYPE loginId (
>>> emails set,
>>> unverifiedEmails set,
>>>  );
>>>
>>> and i tried to insert record to my table,but i have problem with
>>> loginIds attribute,
>>> could you please advice how i can insert record
>>>
>>> --
>>>
>>> Regards
>>> Rami Badran
>>>
>>
>>
>
>
> --
>
> Regards
> Rami Badran
>


Re:

2016-03-15 Thread Rami Badran
here is the CQL

insert into users (uid,loginIds) values ('111',{ 'emails'  : 
{'f...@baggins.com',
'bagg...@gmail.com'},'unverifiedEmails' : {'a...@baggins.com', 'c...@gmail.com'}
});


On Tue, Mar 15, 2016 at 4:54 PM, Spencer Brown  wrote:

> Should be loginIds map<text, frozen<loginIds>>, with an s at the end of the
> loginId in the map definition.
>
> On Tue, Mar 15, 2016 at 10:38 AM, Rami Badran 
> wrote:
>
>> Hi
>>
>> i have the following cassandra schema structure:
>>
>> CREATE TABLE users (
>> uid TEXT,
>> loginIds map,
>> primary key (uid)
>> );
>>
>> CREATE TYPE loginId (
>> emails set,
>> unverifiedEmails set,
>>  );
>>
>> and i tried to insert record to my table,but i have problem with loginIds
>> attribute,
>> could you please advice how i can insert record
>>
>> --
>>
>> Regards
>> Rami Badran
>>
>
>


-- 

Regards
Rami Badran


Re:

2016-03-15 Thread Spencer Brown
Should be loginIds map<text, frozen<loginIds>>, with an s at the end of the
loginId in the map definition.

On Tue, Mar 15, 2016 at 10:38 AM, Rami Badran 
wrote:

> Hi
>
> i have the following cassandra schema structure:
>
> CREATE TABLE users (
> uid TEXT,
> loginIds map<text, frozen<loginId>>,
> primary key (uid)
> );

> CREATE TYPE loginId (
> emails set<text>,
> unverifiedEmails set<text>
> );
>
> and i tried to insert record to my table,but i have problem with loginIds
> attribute,
> could you please advice how i can insert record
>
> --
>
> Regards
> Rami Badran
>


Re:

2016-03-15 Thread Rakesh Kumar
How are you trying to insert? Paste your code here.


[no subject]

2016-03-15 Thread Rami Badran
Hi

i have the following cassandra schema structure:

CREATE TABLE users (
uid TEXT,
loginIds map<text, frozen<loginId>>,
primary key (uid)
);

CREATE TYPE loginId (
emails set<text>,
unverifiedEmails set<text>
);

I tried to insert a record into my table, but I have a problem with the
loginIds attribute.
Could you please advise me on how to insert a record?

-- 

Regards
Rami Badran


Re: C* memory leak during compaction

2016-03-15 Thread Paulo Motta
Did you check the bloom filter sizes with nodetool tablestats to see if you're
hitting CASSANDRA-11344? If that's the case, there's a patch available, along
with instructions, in another recent thread on how to fix it.

2016-03-15 6:49 GMT-03:00 ssiv...@gmail.com :

> Duplicate the answer from Russell Hatch
>
> On 03/14/2016 07:32 PM, Russell Hatch wrote:
>
> Of course, no problem.
>
> On Sat, Mar 12, 2016 at 3:35 PM, ssiv...@gmail.com <
> ssiv...@gmail.com> wrote:
>
>> Hi,
>>
>> Thank you for your reply!
>> The thing is that I've only inserted the data and am just waiting until
>> compaction finishes. The C* process allocates all available memory during
>> compaction... I added ~700GB of swap and C* has occupied that too.
>>
>> Will it be "ok" if I duplicate your answer to user@cassandra?
>>
>> On 03/12/2016 02:46 AM, Russell Hatch wrote:
>>
>> Hi there -- not sure if anyone got back to you on this question. I think
>> I saw your question on irc the other day -- I'm not aware of any memory
>> specific issues with 2.2.5.
>>
>> It might be worthwhile to see if you have any very large partitions in
>> your database, and any potential code that could be trying to retrieve
>> those very large partitions -- I think that could be one source for a
>> problem such as this.
>>
>> You might get some more traction on your question using the regular
>> cassandra mailing list (this list is for development of cassandra itself,
>> not development with cassandra).
>>
>> Cheers,
>>
>> Russ
>>
>> On Fri, Mar 11, 2016 at 5:38 AM, ssiv...@gmail.com 
>> wrote:
>>
>>> I have 7 nodes of C* v2.2.5 running on CentOS 7, using jemalloc for
>>> dynamic storage allocation.
>>> We use only one keyspace and one table, with the leveled compaction strategy.
>>> I've loaded ~500 GB of data into the cluster with a replication factor of
>>> 3 and am waiting until compaction finishes. But during compaction,
>>> each of the C* nodes allocates all the available memory (~128GB) and its
>>> process just stops.
>>>
>>> Is this a known bug?
>>>
>>> --
>>> Thanks,
>>> Serj
>>>
>>>
>>
>> --
>> Thanks,
>> Serj
>>
>>
>
> --
> Thanks,
> Serj
>
>


Re: Cassandra Upgrade 3.0.x vs 3.x (Tick-Tock Release)

2016-03-15 Thread ssiv...@gmail.com
I think that it's not ready, since it has critical bugs. See emails 
about C* memory leaks


On 03/15/2016 01:15 AM, Robert Coli wrote:
On Mon, Mar 14, 2016 at 12:40 PM, Kathiresan S 
> 
wrote:


We are planning for Cassandra upgrade in our production environment.
Which version of Cassandra is stable and is advised to upgrade to,
at the moment?


https://www.eventbrite.com/engineering/what-version-of-cassandra-should-i-run/

(IOW, you should run either 2.1.MAX or 2.2.5)

Relatively soon, the answer will be "3.0.x", probably around the time 
where 3.0.x is >= 6.


After this series, the change in release cadence may change the above 
rule of thumb.


=Rob


--
Thanks,
Serj



Re: C* memory leak during compaction

2016-03-15 Thread ssiv...@gmail.com

Duplicate the answer from Russell Hatch

On 03/14/2016 07:32 PM, Russell Hatch wrote:

Of course, no problem.

On Sat, Mar 12, 2016 at 3:35 PM, ssiv...@gmail.com 
 > wrote:


Hi,

Thank you for your reply!
The thing is that I've only inserted the data and am just waiting
until compaction finishes. The C* process allocates all available
memory during compaction... I added ~700GB of swap and C* has
occupied that too.

Will it be "ok" if I duplicate your answer to user@cassandra?

On 03/12/2016 02:46 AM, Russell Hatch wrote:

Hi there -- not sure if anyone got back to you on this question.
I think I saw your question on irc the other day -- I'm not aware
of any memory specific issues with 2.2.5.

It might be worthwhile to see if you have any very large
partitions in your database, and any potential code that could be
trying to retrieve those very large partitions -- I think that
could be one source for a problem such as this.

You might get some more traction on your question using the
regular cassandra mailing list (this list is for development of
cassandra itself, not development with cassandra).

Cheers,

Russ

On Fri, Mar 11, 2016 at 5:38 AM, ssiv...@gmail.com
 > wrote:

I have 7 nodes of C* v2.2.5 running on CentOS 7, using
jemalloc for dynamic storage allocation.
We use only one keyspace and one table, with the leveled
compaction strategy.
I've loaded ~500 GB of data into the cluster with a replication
factor of 3 and am waiting until compaction finishes.
But during compaction, each of the C* nodes allocates all the
available memory (~128GB) and its process just stops.

Is this a known bug?

-- 
Thanks,

Serj




-- 
Thanks,

Serj




--
Thanks,
Serj



Experiencing strange disconnect issue

2016-03-15 Thread Bo Finnerup Madsen
Hi,

We are currently trying to convert an existing Java web application to use
Cassandra, and while most of it works great :) we have a "small" issue.

After some time, all connectivity seems to be lost and we get the
following errors:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s)
tried for query failed (tried: /10.61.70.107:9042
(com.datastax.driver.core.exceptions.TransportException: [/10.61.70.107]
Connection has been closed), /10.61.70.108:9042
(com.datastax.driver.core.exceptions.TransportException: [/10.61.70.108]
Connection has been closed))

com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s)
tried for query failed (tried: /10.61.70.107:9042
(com.datastax.driver.core.exceptions.DriverException: Timeout while trying
to acquire available connection (you may want to increase the driver number
of per-host connections)), /10.61.70.108:9042
(com.datastax.driver.core.exceptions.TransportException: [/10.61.70.108]
Connection has been closed), /10.61.70.110:9042
(com.datastax.driver.core.exceptions.TransportException: [/10.61.70.110]
Connection has been closed))

The errors persist, and the application needs to be restarted to recover.

At application startup we create a cluster and a session, which we reuse
throughout the application as per the documentation. We don't specify any
options when connecting other than the IPs of the three servers. We are
running the Cassandra 3.0.3 tarball on EC2 in a cluster of three machines. The
connections are made using the v3.0.0 Java driver.

I have uploaded the configuration and logs from our cassandra cluster here:
https://gist.github.com/anonymous/452e736b401317b5b38d
The issue happened at 00:44:46.

I would greatly appreciate any ideas as to what we might be doing wrong to
experience this. :)

Thank you in advance!

Yours sincerely,
  Bo Madsen