Re: Deploy latest cassandra on top of datastax-ddc ?

2016-03-19 Thread Mohamed Lrhazi
because I have no clue... :)

So, after doing an ant build from the latest source... how would one
"install" or deploy Cassandra? I could not find a document covering the
install-from-source part... any pointers? Everything I find uses the yum or
apt repos, or deploys from the binary tarball...

Thanks a lot,
Mohamed.


On Fri, Mar 18, 2016 at 4:50 PM, Robert Coli <rc...@eventbrite.com> wrote:

> On Thu, Mar 17, 2016 at 10:38 PM, Mohamed Lrhazi <
> mohamed.lrh...@georgetown.edu> wrote:
>
>> Would simply overriding this one jar file do it? else could you please
>> share a procedure?
>>
>
> This seems like an odd thing to want to do. Why do you believe it is
> likely to work?
>
> =Rob
>
>


Deploy latest cassandra on top of datastax-ddc ?

2016-03-19 Thread Mohamed Lrhazi
Would simply overwriting this one jar file do it? If not, could you please
share a procedure?

[root@avesterra-prod-1 ~]# rpm -qa| grep stax

datastax-ddc-tools-3.2.1-1.noarch
datastax-ddc-3.2.1-1.noarch

[root@avesterra-prod-1 ~]# cp /tmp/apache-cassandra-3.6-SNAPSHOT.jar /usr/share/cassandra/apache-cassandra-3.2.1.jar

[root@avesterra-prod-1 ~]# systemctl restart cassandra

[root@avesterra-prod-1 ~]# cassandra -v
3.6-SNAPSHOT
[root@avesterra-prod-1 ~]#
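
If I were to redo this, I would at least back up the packaged jar first so the
node can be rolled back to 3.2.1. A rough sketch, using the same paths as in
the transcript above:

[root@avesterra-prod-1 ~]# cp -p /usr/share/cassandra/apache-cassandra-3.2.1.jar /root/apache-cassandra-3.2.1.jar.bak   # keep the packaged 3.2.1 jar
[root@avesterra-prod-1 ~]# cp /tmp/apache-cassandra-3.6-SNAPSHOT.jar /usr/share/cassandra/apache-cassandra-3.2.1.jar
[root@avesterra-prod-1 ~]# systemctl restart cassandra && cassandra -v   # to roll back: copy the .bak file back over it and restart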



Thanks a lot,
Mohamed.


Re: Deploy latest cassandra on top of datastax-ddc ?

2016-03-18 Thread Mohamed Lrhazi
Thanks Robert.

FYI... for the curious... what I did resulted in a cluster where I tested
these two things (commands sketched below):

- nodetool status shows all 8 nodes as Up and Normal.
- A couple of CQL SELECT statements seem to return correct data.
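
A rough sketch of those checks (the nodetool credentials are elided as
elsewhere in this thread, and the keyspace/table in the SELECT are
placeholders, not our real names):

[root@avesterra-prod-1 ~]# nodetool -u cassandra -pw '..' status            # all 8 nodes should show UN (Up/Normal)
[root@avesterra-prod-1 ~]# cqlsh -e "SELECT * FROM my_keyspace.my_table LIMIT 10;"   # spot-check a few rows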

I have no inclination to keep using such a setup... just reporting the
experiment :)

Thanks,
Mohamed.


On Fri, Mar 18, 2016 at 7:19 PM, Robert Coli <rc...@eventbrite.com> wrote:

> On Fri, Mar 18, 2016 at 2:18 PM, Mohamed Lrhazi <
> mohamed.lrh...@georgetown.edu> wrote:
>
>> So, after doing an ant build from the latest source... how would one
>> "install" or deploy cassandra?  Could not find a document on the install
>> from source part... any pointers?  All I find makes use of yum or apt
>> repo's, or deploy from binary tarball...
>>
>
> Per jeffj@IRC :
>
> "'ant release' creates a binary package that's runnable."
>
> =ROB
>
>
>
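
(For anyone searching the archives later: based on the 'ant release' hint
above, here is a rough sketch of what I expect the source-based deploy to look
like; the build output path and the unpack location are assumptions I have not
verified yet.)

cd /path/to/cassandra-source                           # checkout of the Cassandra source tree
ant release                                            # per jeffj@IRC, produces a runnable binary package
ls build/*.tar.gz                                      # the binary tarball should land under build/ (assumption)
tar -C /opt -xzf build/apache-cassandra-*-bin.tar.gz   # unpack wherever you deploy from (arbitrary prefix)
/opt/apache-cassandra-*/bin/cassandra -f               # start in the foreground with the tarball's own scripts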


Re: Cassandra 3.2.1: Memory leak?

2016-03-14 Thread Mohamed Lrhazi
I am trying to capture this again... but from my first attempt, it does not
look like these numbers vary all that much from when the cluster reboots until
when the nodes start crashing:

[root@avesterra-prod-1 ~]# nodetool -u cassandra -pw '..' tablestats | grep "Bloom filter space used:"
Bloom filter space used: 2041877200
Bloom filter space used: 0
Bloom filter space used: 1936840
Bloom filter space used: 0
Bloom filter space used: 0
Bloom filter space used: 0
Bloom filter space used: 0
Bloom filter space used: 0
Bloom filter space used: 0
Bloom filter space used: 0
Bloom filter space used: 352
Bloom filter space used: 0
Bloom filter space used: 48
Bloom filter space used: 0
Bloom filter space used: 0
Bloom filter space used: 48
Bloom filter space used: 0
Bloom filter space used: 0
Bloom filter space used: 0
Bloom filter space used: 0
Bloom filter space used: 0
Bloom filter space used: 0
Bloom filter space used: 0
Bloom filter space used: 0
Bloom filter space used: 72
Bloom filter space used: 720
Bloom filter space used: 0
Bloom filter space used: 0
Bloom filter space used: 0
Bloom filter space used: 32
Bloom filter space used: 56
Bloom filter space used: 0
Bloom filter space used: 32
Bloom filter space used: 32
Bloom filter space used: 56
Bloom filter space used: 56
Bloom filter space used: 32
Bloom filter space used: 32
Bloom filter space used: 32
Bloom filter space used: 0
Bloom filter space used: 0
Bloom filter space used: 0
Bloom filter space used: 0
[root@avesterra-prod-1 ~]#
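
To make the next comparison easier, I plan to total these per capture with a
quick awk one-liner (same nodetool invocation as above, password elided); note
the first table alone already accounts for roughly 2 GB of bloom filter space:

[root@avesterra-prod-1 ~]# nodetool -u cassandra -pw '..' tablestats | grep "Bloom filter space used:" | awk '{sum += $NF} END {print sum " bytes total"}'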





On Mon, Mar 14, 2016 at 4:43 PM, Paulo Motta <pauloricard...@gmail.com>
wrote:

> Sorry, the command is actually nodetool tablestats and you should watch
> the bloom filter size or similar metrics.
>
> 2016-03-14 17:35 GMT-03:00 Mohamed Lrhazi <mohamed.lrh...@georgetown.edu>:
>
>> Hi Paulo,
>>
>> Which metric should I watch for this ?
>>
>> [root@avesterra-prod-1 ~]# rpm -qa| grep datastax
>> datastax-ddc-3.2.1-1.noarch
>> datastax-ddc-tools-3.2.1-1.noarch
>> [root@avesterra-prod-1 ~]# cassandra -v
>> 3.2.1
>> [root@avesterra-prod-1 ~]#
>>
>> [root@avesterra-prod-1 ~]# nodetool -u cassandra -pw ''  tpstats
>>
>>
>> Pool NameActive   Pending  Completed   Blocked
>>  All time blocked
>> MutationStage 0 0  13609 0
>>   0
>> ViewMutationStage 0 0  0 0
>>   0
>> ReadStage 0 0  0 0
>>   0
>> RequestResponseStage  0 0  8 0
>>   0
>> ReadRepairStage   0 0  0 0
>>   0
>> CounterMutationStage  0 0  0 0
>>   0
>> MiscStage 0 0  0 0
>>   0
>> CompactionExecutor1 1  17556 0
>>   0
>> MemtableReclaimMemory 0 0 38 0
>>   0
>> PendingRangeCalculator0 0  8 0
>>   0
>> GossipStage   0 0 118094 0
>>   0
>> SecondaryIndexManagement  0 0  0 0
>>   0
>> HintsDispatcher   0 0  0 0
>>   0
>> MigrationStage0 0  0 0
>>   0
>> MemtablePostFlush 0 0 55 0
>>   0
>> PerDiskMemtableFlushWriter_0 0 0 38 0
>> 0
>> ValidationExecutor0 0  0 0
>>   0
>> Sampler   0 0  0 0
>>   0
>> MemtableFlushWriter  

Re: Cassandra 3.2.1: Memory leak?

2016-03-14 Thread Mohamed Lrhazi
Hi Paulo,

Which metric should I watch for this?

[root@avesterra-prod-1 ~]# rpm -qa| grep datastax
datastax-ddc-3.2.1-1.noarch
datastax-ddc-tools-3.2.1-1.noarch
[root@avesterra-prod-1 ~]# cassandra -v
3.2.1
[root@avesterra-prod-1 ~]#

[root@avesterra-prod-1 ~]# nodetool -u cassandra -pw ''  tpstats


Pool Name                         Active   Pending   Completed   Blocked   All time blocked
MutationStage                          0         0       13609         0                  0
ViewMutationStage                      0         0           0         0                  0
ReadStage                              0         0           0         0                  0
RequestResponseStage                   0         0           8         0                  0
ReadRepairStage                        0         0           0         0                  0
CounterMutationStage                   0         0           0         0                  0
MiscStage                              0         0           0         0                  0
CompactionExecutor                     1         1       17556         0                  0
MemtableReclaimMemory                  0         0          38         0                  0
PendingRangeCalculator                 0         0           8         0                  0
GossipStage                            0         0      118094         0                  0
SecondaryIndexManagement               0         0           0         0                  0
HintsDispatcher                        0         0           0         0                  0
MigrationStage                         0         0           0         0                  0
MemtablePostFlush                      0         0          55         0                  0
PerDiskMemtableFlushWriter_0           0         0          38         0                  0
ValidationExecutor                     0         0           0         0                  0
Sampler                                0         0           0         0                  0
MemtableFlushWriter                    0         0          38         0                  0
InternalResponseStage                  0         0           0         0                  0
AntiEntropyStage                       0         0           0         0                  0
CacheCleanupExecutor                   0         0           0         0                  0
Native-Transport-Requests              0         0           0         0                  0

Message type   Dropped
READ 0
RANGE_SLICE  0
_TRACE   0
HINT 0
MUTATION 0
COUNTER_MUTATION 0
BATCH_STORE  0
BATCH_REMOVE 0
REQUEST_RESPONSE 0
PAGED_RANGE  0
READ_REPAIR  0
[root@avesterra-prod-1 ~]#
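
To see whether these ramp up before the OOM, I will try capturing them
periodically with something like this (the interval and log path are
arbitrary; password redacted as above):

[root@avesterra-prod-1 ~]# while true; do date; nodetool -u cassandra -pw '' tpstats; sleep 300; done >> /root/tpstats-watch.log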




Thanks a lot,
Mohamed.



On Mon, Mar 14, 2016 at 8:22 AM, Paulo Motta <pauloricard...@gmail.com>
wrote:

> Can you check with nodetool tpstats if bloom filter mem space utilization
> is very large/ramping up before the node gets killed? You could be hitting
> CASSANDRA-11344.
>
> 2016-03-12 19:43 GMT-03:00 Mohamed Lrhazi <mohamed.lrh...@georgetown.edu>:
>
>> In my case, all nodes seem to be constantly logging messages like these:
>>
>> DEBUG [GossipStage:1] 2016-03-12 17:41:19,123 FailureDetector.java:456 -
>> Ignoring interval time of 2000928319 for /10.212.18.170
>>
>> What does that mean?
>>
>> Thanks a lot,
>> Mohamed.
>>
>>
>> On Sat, Mar 12, 2016 at 5:39 PM, Mohamed Lrhazi <
>> mohamed.lrh...@georgetown.edu> wrote:
>>
>>> Oh wow, similar behavior with different version all together!!
>>>
>>> On Sat, Mar 12, 2016 at 5:28 PM, ssiv...@gmail.com <ssiv...@gmail.com>
>>> wrote:
>>>
>>>> Hi, I'll duplicate here my email with the same issue
>>>>
>>>> "
>>>>
>>>>
>>>> *I have 7 nodes of C* v2.2.5 running on CentOS 7 and using jemalloc for
>>>> dynamic storage allocation. Use only one keyspace and one table with
>>>> Leveled compaction strategy. I've loaded ~500 GB of data into the cluster
>>>> with replication factor equals to 3 and waiting until compaction is
>>>> finished. But during compaction each of the C* nodes allocates all the
>>>> available memory (~128GB) and just stops its process. This is a known bug ?
>>>> *"
>>>>
>>>>
>>>> On 03/13/2016 12:56 AM, Mohamed Lrhazi wrote:
>>>>
>>>> Hello,
>>>>
>>>> We installed Datastax community edition, on 8 nodes, RHEL7. We inserted
>>>> some 7 billion rows into a pretty simple table. the inserts seem to have
>>>> completed without issues. but ever since, we find that the nodes reliably
>>>> r

Re: Cassandra 3.2.1: Memory leak?

2016-03-12 Thread Mohamed Lrhazi
In my case, all nodes seem to be constantly logging messages like these:

DEBUG [GossipStage:1] 2016-03-12 17:41:19,123 FailureDetector.java:456 - Ignoring interval time of 2000928319 for /10.212.18.170

What does that mean?

Thanks a lot,
Mohamed.


On Sat, Mar 12, 2016 at 5:39 PM, Mohamed Lrhazi <
mohamed.lrh...@georgetown.edu> wrote:

> Oh wow, similar behavior with different version all together!!
>
> On Sat, Mar 12, 2016 at 5:28 PM, ssiv...@gmail.com <ssiv...@gmail.com>
> wrote:
>
>> Hi, I'll duplicate here my email with the same issue
>>
>> "
>>
>>
>> *I have 7 nodes of C* v2.2.5 running on CentOS 7 and using jemalloc for
>> dynamic storage allocation. Use only one keyspace and one table with
>> Leveled compaction strategy. I've loaded ~500 GB of data into the cluster
>> with replication factor equals to 3 and waiting until compaction is
>> finished. But during compaction each of the C* nodes allocates all the
>> available memory (~128GB) and just stops its process. This is a known bug ?
>> *"
>>
>>
>> On 03/13/2016 12:56 AM, Mohamed Lrhazi wrote:
>>
>> Hello,
>>
>> We installed Datastax community edition, on 8 nodes, RHEL7. We inserted
>> some 7 billion rows into a pretty simple table. the inserts seem to have
>> completed without issues. but ever since, we find that the nodes reliably
>> run out of RAM after few hours, without any user activity at all. No reads
>> nor write are sent at all.  What should we look for to try and identify
>> root cause?
>>
>>
>> [root@avesterra-prod-1 ~]# cat /etc/redhat-release
>> Red Hat Enterprise Linux Server release 7.2 (Maipo)
>> [root@avesterra-prod-1 ~]# rpm -qa| grep datastax
>> datastax-ddc-3.2.1-1.noarch
>> datastax-ddc-tools-3.2.1-1.noarch
>> [root@avesterra-prod-1 ~]#
>>
>> The nodes had 8 GB RAM, which we doubled twice and now are trying with
>> 40GB... they still manage to consume it all and cause oom_killer to kick in.
>>
>> Pretty much all the settings are the default ones the installation
>> created.
>>
>> Thanks,
>> Mohamed.
>>
>>
>> --
>> Thanks,
>> Serj
>>
>>
>


Re: Cassandra 3.2.1: Memory leak?

2016-03-12 Thread Mohamed Lrhazi
Oh wow, similar behavior with a different version altogether!!

On Sat, Mar 12, 2016 at 5:28 PM, ssiv...@gmail.com <ssiv...@gmail.com>
wrote:

> Hi, I'll duplicate here my email with the same issue
>
> "
>
>
> *I have 7 nodes of C* v2.2.5 running on CentOS 7 and using jemalloc for
> dynamic storage allocation. Use only one keyspace and one table with
> Leveled compaction strategy. I've loaded ~500 GB of data into the cluster
> with replication factor equals to 3 and waiting until compaction is
> finished. But during compaction each of the C* nodes allocates all the
> available memory (~128GB) and just stops its process. This is a known bug ?
> *"
>
>
> On 03/13/2016 12:56 AM, Mohamed Lrhazi wrote:
>
> Hello,
>
> We installed Datastax community edition, on 8 nodes, RHEL7. We inserted
> some 7 billion rows into a pretty simple table. the inserts seem to have
> completed without issues. but ever since, we find that the nodes reliably
> run out of RAM after few hours, without any user activity at all. No reads
> nor write are sent at all.  What should we look for to try and identify
> root cause?
>
>
> [root@avesterra-prod-1 ~]# cat /etc/redhat-release
> Red Hat Enterprise Linux Server release 7.2 (Maipo)
> [root@avesterra-prod-1 ~]# rpm -qa| grep datastax
> datastax-ddc-3.2.1-1.noarch
> datastax-ddc-tools-3.2.1-1.noarch
> [root@avesterra-prod-1 ~]#
>
> The nodes had 8 GB RAM, which we doubled twice and now are trying with
> 40GB... they still manage to consume it all and cause oom_killer to kick in.
>
> Pretty much all the settings are the default ones the installation created.
>
> Thanks,
> Mohamed.
>
>
> --
> Thanks,
> Serj
>
>


Cassandra 3.2.1: Memory leak?

2016-03-12 Thread Mohamed Lrhazi
Hello,

We installed the DataStax Community edition on 8 nodes, RHEL 7. We inserted
some 7 billion rows into a pretty simple table. The inserts seem to have
completed without issues, but ever since, we find that the nodes reliably run
out of RAM after a few hours, without any user activity at all. No reads or
writes are sent at all. What should we look for to try to identify the root
cause?


[root@avesterra-prod-1 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.2 (Maipo)
[root@avesterra-prod-1 ~]# rpm -qa| grep datastax
datastax-ddc-3.2.1-1.noarch
datastax-ddc-tools-3.2.1-1.noarch
[root@avesterra-prod-1 ~]#

The nodes had 8 GB RAM, which we doubled twice, and we are now trying 40 GB...
they still manage to consume it all and cause the oom_killer to kick in.

Pretty much all the settings are the default ones the installation created.
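
In case it helps narrow things down, here is roughly what I am checking on a
node after it dies; the cassandra-env.sh path is where the RPM puts config on
our boxes, so treat it as an assumption:

[root@avesterra-prod-1 ~]# dmesg | grep -i 'killed process'                   # confirm it really was the oom_killer
[root@avesterra-prod-1 ~]# grep -E 'MAX_HEAP_SIZE|HEAP_NEWSIZE' /etc/cassandra/default.conf/cassandra-env.sh
[root@avesterra-prod-1 ~]# nodetool -u cassandra -pw '' info | grep -i heap   # heap vs. off-heap usage while a node is still up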

Thanks,
Mohamed.