Re: Quick question to config Prometheus to monitor Cassandra cluster

2017-07-20 Thread wxn...@zjqunshuo.com
Petrus & Kiran,
Thank you for the guides and suggestions. I will give them a try.

Cheers,
Simon
 
From: Petrus Gomes
Date: 2017-07-21 00:45
To: user
Subject: Re: Quick question to config Prometheus to monitor Cassandra cluster
I use the same environment. Here are a few links:
This link is the best one for connecting Cassandra and Prometheus: 
https://www.robustperception.io/monitoring-cassandra-with-prometheus/
JMX agent: https://github.com/nabto/cassandra-prometheus

https://community.grafana.com/t/how-to-connect-prometheus-to-cassandra/1153
Grafana dashboard : https://grafana.com/dashboards/371

Create a connection on Grafana connecting to Prometheus.



On Thu, Jul 20, 2017 at 4:13 AM, Kiran mk  wrote:
You have to download the Prometheus HTTP JMX dependencies jar and the Cassandra 
yaml, and mention the JMX port (7199) in the config.

Run the agent on a specific port on all the Cassandra nodes.

After this, go to your Prometheus server and add a scrape config to collect 
metrics from all clients.



On 20-Jul-2017 3:27 PM, "wxn...@zjqunshuo.com"  wrote:
Hi,
I'm going to set up Prometheus+Grafana to monitor Cassandra cluster. I 
installed Prometheus and started it, but don't know how to config it to support 
Cassandra.
Any ideas or related articles are appreciated.

Cheers,
Simon 




Re: Multi datacenter node loss

2017-07-20 Thread Michael Shuler
Datacenter replication is defined in the keyspace schema, so I believe that
  ...
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 1,
'DC2': 1}
  ...
you ought to be able to repair DC1 from DC2, once you have the DC1 node
healthy again.

If using the SimpleStrategy replication class, it appears that
replication_factor is the only option, and it applies to the entire
cluster, so only one node across both datacenters would hold the data.

https://cassandra.apache.org/doc/latest/cql/ddl.html#create-keyspace
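
For concreteness, the snippet above as a full statement (keyspace and DC names are illustrative) keeps one replica in each datacenter, which is what makes the cross-DC repair possible:

```cql
-- One replica per datacenter: if a DC1 node dies, DC2 still holds a
-- full copy, and repair can rebuild the DC1 node from it.
CREATE KEYSPACE example
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 1, 'DC2': 1};
```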

-- 
Kind regards,
Michael

On 07/20/2017 03:23 PM, Roger Warner wrote:
> Hi
> 
>  
> 
> I’m a little dim on what multi-datacenter implies in the 1-replica
> case. I know about replica recovery; how about “node recovery”?
> 
> As I understand it, if there is a node failure or disk crash with a
> single-node cluster with replication factor 1, I lose data. Easy.
> 
> nodetool tells me each node in my 3-node x 2-datacenter setup is
> responsible for ~1/3 of the data. If in this cluster with RF=1 a node
> fails in dc1, what happens? In the dc with data loss, can the node be
> “restored” from a node in dc2? Automatically?
> 
> I’m also asking, tangentially, how does the data map from nodes in dc1
> to dc2.
> 
> I hope I made that coherent.
> 
> Roger
> 


-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Re: MUTATION messages were dropped in last 5000 ms for cross node timeout

2017-07-20 Thread Akhil Mehra
Hi Asad,

http://cassandra.apache.org/doc/latest/faq/index.html#why-message-dropped 


As mentioned in the link above this is a load shedding mechanism used by 
Cassandra.
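
As an aside, the knobs involved live in cassandra.yaml; a sketch with 2.2-era names (the timeout value shown is the usual default, and cross_node_timeout is shown enabled as on this cluster):

```yaml
# Coordinator-side deadline for acking a write; replicas drop mutations
# older than this instead of applying them (the dropped MUTATION count).
write_request_timeout_in_ms: 2000
# When true, replicas use the sender's timestamp (hence the NTP requirement)
# to age messages, so transit time counts toward the deadline
# ("cross node timeout").
cross_node_timeout: true
```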

Is your cluster under heavy load?

Regards,
Akhil


> On 21/07/2017, at 3:27 AM, ZAIDI, ASAD A  wrote:
> 
> Hello Folks –
>  
> I’m using apache-cassandra 2.2.8.
>  
> I see many messages like below in my system.log file. In the cassandra.yaml 
> file, cross_node_timeout: true is set, and an NTP server is also running, 
> correcting clock drift on the 16-node cluster. I do not see pending or blocked 
> HintedHandoff in tpstats output, though a bunch of dropped MUTATIONs are observed.
>  
> 
> INFO  [ScheduledTasks:1] 2017-07-20 08:02:52,511 MessagingService.java:946 - 
> MUTATION messages were dropped in last 5000 ms: 822 for internal timeout and 
> 2152 for cross node timeout
> 
>  
> I’m seeking help here; please let me know what I need to check in order 
> to address these cross node timeouts.
>  
> Thank you,
> Asad



Multi datacenter node loss

2017-07-20 Thread Roger Warner
Hi

I’m a little dim on what multi-datacenter implies in the 1-replica case. I 
know about replica recovery; how about “node recovery”?

As I understand it, if there is a node failure or disk crash with a single-node 
cluster with replication factor 1, I lose data. Easy.

nodetool tells me each node in my 3-node x 2-datacenter setup is responsible for 
~1/3 of the data. If in this cluster with RF=1 a node fails in dc1, what 
happens? In the dc with data loss, can the node be “restored” from a node in 
dc2? Automatically?

I’m also asking, tangentially, how does the data map from nodes in dc1 to dc2.

I hope I made that coherent.

Roger


Re: RE: Cassandra 2.2.6 Fails to Boot Up correctly - JNA Class

2017-07-20 Thread Jeff Jirsa
So what precisely changed? You've got a custom build, based on the jar name, 
which is perfectly reasonable, but what upgrade did you do? 2.2.5 to 2.2.6? 
Any other changes?


On 2017-07-20 05:41 (-0700), William Boutin  
wrote: 
> Thank you for your help.
> We have been using jna-4.0.0.jar since using Cassandra 2.2(.6). Until last 
> week, we had no issues. Now, we are experiencing the exception that I 
> identified. We only have jna-4.0.0.jar loaded on our machines and the 
> CLASSPATH that Cassandra builds only uses the jar from the 
> /usr/share/cassandra/lib directory. Jna-3.2.4.jar was only introduced when I 
> was debugging the original issue and will never be loaded on our machines. 
> See the output of ps below.
> 
> Should I be looking elsewhere? Some have suggested that low memory on the JVM 
> or Linux machine could cause jar-loading issues. I have tried doubling 
> -Xss256k to -Xss512k.
> 
> Thanks
> 
> Billy S. Boutin 
> Office Phone No. (913) 241-5574 
> Cell Phone No. (732) 213-1368 
> 





Re: MUTATION messages were dropped in last 5000 ms for cross node timeout

2017-07-20 Thread Subroto Barua
In a cloud environment, cross_node_timeout = true can cause issues; we had this 
issue in our environment, and it is set to false now.
Dropped messages are another issue.

Subroto 

> On Jul 20, 2017, at 8:27 AM, ZAIDI, ASAD A  wrote:
> 
> Hello Folks –
>  
> I’m using apache-cassandra 2.2.8.
>  
> I see many messages like below in my system.log file. In the cassandra.yaml 
> file, cross_node_timeout: true is set, and an NTP server is also running, 
> correcting clock drift on the 16-node cluster. I do not see pending or blocked 
> HintedHandoff in tpstats output, though a bunch of dropped MUTATIONs are observed.
>  
> 
> INFO  [ScheduledTasks:1] 2017-07-20 08:02:52,511 MessagingService.java:946 - 
> MUTATION messages were dropped in last 5000 ms: 822 for internal timeout and 
> 2152 for cross node timeout
> 
>  
> I’m seeking help here; please let me know what I need to check in order 
> to address these cross node timeouts.
>  
> Thank you,
> Asad
>  


Re: MUTATION messages were dropped in last 5000 ms for cross node timeout

2017-07-20 Thread Anuj Wadehra
Hi Asad, 
You can do the following things:
1. Increase memtable_flush_writers, especially if you have a write-heavy load.
2. Make sure there are no big GC pauses on your nodes. If there are, go for 
heap tuning.

Please let us know whether the above measures fixed your problem or not.

Thanks,
Anuj
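
On point 1, the setting lives in cassandra.yaml; a sketch with an illustrative value (tune it to your disk and core count, this is not a recommendation):

```yaml
# More flush writers drain memtables to disk in parallel, easing the
# backpressure that write-heavy loads can turn into dropped MUTATIONs.
memtable_flush_writers: 4
```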

Sent from Yahoo Mail on Android 
 
  On Thu, 20 Jul 2017 at 20:57, ZAIDI, ASAD A wrote:

Hello Folks –

I’m using apache-cassandra 2.2.8.

I see many messages like below in my system.log file. In the cassandra.yaml 
file, cross_node_timeout: true is set, and an NTP server is also running, 
correcting clock drift on the 16-node cluster. I do not see pending or blocked 
HintedHandoff in tpstats output, though a bunch of dropped MUTATIONs are observed.

INFO  [ScheduledTasks:1] 2017-07-20 08:02:52,511 MessagingService.java:946 - 
MUTATION messages were dropped in last 5000 ms: 822 for internal timeout and 
2152 for cross node timeout

I’m seeking help here; please let me know what I need to check in order to 
address these cross node timeouts.

Thank you,
Asad


Re: Quick question to config Prometheus to monitor Cassandra cluster

2017-07-20 Thread Petrus Gomes
I use the same environment. Here are a few links:
This link is the best one for connecting Cassandra and Prometheus:
https://www.robustperception.io/monitoring-cassandra-with-prometheus/
JMX agent: https://github.com/nabto/cassandra-prometheus

https://community.grafana.com/t/how-to-connect-prometheus-to-cassandra/1153
Grafana dashboard : https://grafana.com/dashboards/371

Create a connection on Grafana connecting to Prometheus.



On Thu, Jul 20, 2017 at 4:13 AM, Kiran mk  wrote:

> You have to download the Prometheus HTTP JMX dependencies jar and the
> Cassandra yaml, and mention the JMX port (7199) in the config.
>
> Run the agent on a specific port on all the Cassandra nodes.
>
> After this, go to your Prometheus server and add a scrape config to
> collect metrics from all clients.
>
>
>
>
> On 20-Jul-2017 3:27 PM, "wxn...@zjqunshuo.com" 
> wrote:
>
> Hi,
> I'm going to set up Prometheus+Grafana to monitor Cassandra cluster. I
> installed Prometheus and started it, but don't know how to config it to
> support Cassandra.
> Any ideas or related articles are appreciated.
>
> Cheers,
> Simon
>
>
>


Re: write time for nulls is not consistent

2017-07-20 Thread Jeff Jirsa


On 2017-07-20 08:17 (-0700), Nitan Kainth  wrote: 
> Jeff,
> 
> It is really strange; look at the log below. I inserted your data and then a 
> few additional rows; finally, the issue is reproduced:
> 
..
> (6 rows)
> cqlsh> insert into test.t(a) values('b');
> cqlsh> select a,b, writetime (b) from test.t;
> 
>  a | b    | writetime(b)
> ---+------+------------------
>  z | null |             null
>  a | null |             null
>  e | null |             null
>  r | null |             null
>  w | null |             null
>  t |    x | 1500563698113119
>  b | null | 1500563698113119
> 
> (7 rows)
> 


Repro'd on my side too.

cqlsh> insert into test.t(a,b) values('t','x');
cqlsh> insert into test.t(a) values('b');
cqlsh> select a,b, writetime (b) from test.t;

 a | b    | writetime(b)
---+------+------------------
 z | null |             null
 e | null |             null
 r | null |             null
 w | null |             null
 t |    x | 1500565131354883
 b | null | 1500565131354883

Here's what the data on disk looks like (starting with row 'w'):

  {
"partition" : {
  "key" : [ "w" ],
  "position" : 69
},
"rows" : [
  {
"type" : "row",
"position" : 92,
"liveness_info" : { "tstamp" : "2017-07-20T04:41:59.005382Z" },
"cells" : [ ]
  }
]
  },
  {
"partition" : {
  "key" : [ "t" ],
  "position" : 93
},
"rows" : [
  {
"type" : "row",
"position" : 120,
"liveness_info" : { "tstamp" : "2017-07-20T15:38:51.354883Z" },
"cells" : [
  { "name" : "b", "value" : "x" }
]
  }
]
  },
  {
"partition" : {
  "key" : [ "b" ],
  "position" : 121
},
"rows" : [
  {
"type" : "row",
"position" : 146,
"liveness_info" : { "tstamp" : "2017-07-20T15:39:03.631297Z" },
"cells" : [ ]
  }
]
  }

This is a bug: cqlsh is displaying the timestamp from partition 't' for 
partition 'b'.

I've created https://issues.apache.org/jira/browse/CASSANDRA-13711 based on 
your report. I haven't yet verified whether this is a cqlsh bug or a storage 
engine bug.





MUTATION messages were dropped in last 5000 ms for cross node timeout

2017-07-20 Thread ZAIDI, ASAD A
Hello Folks -

I'm using apache-cassandra 2.2.8.

I see many messages like below in my system.log file. In the cassandra.yaml 
file, cross_node_timeout: true is set, and an NTP server is also running, 
correcting clock drift on the 16-node cluster. I do not see pending or blocked 
HintedHandoff in tpstats output, though a bunch of dropped MUTATIONs are observed.


INFO  [ScheduledTasks:1] 2017-07-20 08:02:52,511 MessagingService.java:946 - 
MUTATION messages were dropped in last 5000 ms: 822 for internal timeout and 
2152 for cross node timeout


I'm seeking help here; please let me know what I need to check in order 
to address these cross node timeouts.

Thank you,
Asad
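
As an aside, a quick way to tally these drops from system.log (a sketch that assumes the 2.2-era log line format shown above):

```python
import re

# Matches lines like:
#   "MUTATION messages were dropped in last 5000 ms: 822 for internal
#    timeout and 2152 for cross node timeout"
PATTERN = re.compile(
    r"(\w+) messages were dropped in last \d+ ms: "
    r"(\d+) for internal timeout and (\d+) for cross node timeout"
)

def tally_dropped(lines):
    """Sum internal and cross-node drops per message type across log lines."""
    totals = {}
    for line in lines:
        match = PATTERN.search(line)
        if match:
            kind = match.group(1)
            t = totals.setdefault(kind, {"internal": 0, "cross_node": 0})
            t["internal"] += int(match.group(2))
            t["cross_node"] += int(match.group(3))
    return totals

log = [
    "INFO  [ScheduledTasks:1] 2017-07-20 08:02:52,511 MessagingService.java:946 - "
    "MUTATION messages were dropped in last 5000 ms: 822 for internal timeout and "
    "2152 for cross node timeout"
]
print(tally_dropped(log))  # {'MUTATION': {'internal': 822, 'cross_node': 2152}}
```

If the totals climb only during peak hours, that points at load shedding rather than clock drift.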



Re: write time for nulls is not consistent

2017-07-20 Thread Nitan Kainth
Jeff,

It is really strange; look at the log below. I inserted your data and then a 
few additional rows; finally, the issue is reproduced:

[cqlsh 5.0.1 | Cassandra 3.0.10.1443 | DSE 5.0.4 | CQL spec 3.4.0 | Native 
protocol v4]
Use HELP for help.
cqlsh> CREATE KEYSPACE test WITH replication = {'class':'SimpleStrategy', 
'replication_factor': 1};
cqlsh> CREATE TABLE test.t ( a text primary key, b text );
cqlsh> 
cqlsh> 
cqlsh> 
cqlsh> insert into test.t(a) values('z');
cqlsh> insert into test.t(a) values('w');
cqlsh> insert into test.t(a) values('e');
cqlsh> insert into test.t(a) values('r');
cqlsh> insert into test.t(a) values('t');
cqlsh> 
cqlsh> select a,b, writetime (b) from test.t;

 a | b    | writetime(b)
---+------+------------------
 z | null |             null
 e | null |             null
 r | null |             null
 w | null |             null
 t | null |             null

(5 rows)
cqlsh> 
cqlsh> insert into test.t(a,b) values('t','x');
cqlsh> select a,b, writetime (b) from test.t;

 a | b    | writetime(b)
---+------+------------------
 z | null |             null
 e | null |             null
 r | null |             null
 w | null |             null
 t |    x | 1500563698113119

(5 rows)
cqlsh> insert into test.t(a) values('a');
cqlsh> select a,b, writetime (b) from test.t;

 a | b    | writetime(b)
---+------+------------------
 z | null |             null
 a | null |             null
 e | null |             null
 r | null |             null
 w | null |             null
 t |    x | 1500563698113119

(6 rows)
cqlsh> insert into test.t(a) values('b');
cqlsh> select a,b, writetime (b) from test.t;

 a | b    | writetime(b)
---+------+------------------
 z | null |             null
 a | null |             null
 e | null |             null
 r | null |             null
 w | null |             null
 t |    x | 1500563698113119
 b | null | 1500563698113119

(7 rows)

> On Jul 19, 2017, at 11:43 PM, Jeff Jirsa  wrote:
> 
> cqlsh> insert into test.t(a) values('z');
> cqlsh> insert into test.t(a) values('w');
> cqlsh> insert into test.t(a) values('e');
> cqlsh> insert into test.t(a) values('r');
> cqlsh> insert into test.t(a) values('t');
> cqlsh> select a,b, writetime (b) from test.t;



Re: secondary index use case

2017-07-20 Thread Vladimir Yudovin
Hi,



You didn't mention your C* version, but starting from 3.4 SASI indexes are 
available. You can try it with SPARSE option, as uuid corresponds to only one 
row.



Best regards, Vladimir Yudovin, 

Winguzone - Cloud Cassandra Hosting






 On Thu, 20 Jul 2017 05:21:31 -0400 Micha mich...@fantasymail.de 
wrote 




Hi, 

 

even after reading much about secondary index usage I'm not sure if I 

have the correct use case for it. 

 

My table will contain about 150'000'000 records (each about 2KB data). 

There are two uuids used to identify a row. One uuid is unique for each 

row, the other uuid is something like a groupid, which give mostly 20 

records when queried. 

 

So, if I define my primary key as (groupuuid, uuid) then: 

"select * ... where groupuuid = X" gives me 0 - 20 rows 

 

"select * ... where groupuuid = X and uuid = Y" gives me 0 | 1 row 

 

now, sometimes I want to query only with uuid: 

"select * ... where uuid = X" to get exactly one row (without using 

groupid) 

 

Is this a good use case for a secondary index on uuid? 

 

 

Thanks for helping, 

 Michael 

 

 

 

 

 

 

- 

To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org 

For additional commands, e-mail: user-h...@cassandra.apache.org 

 








RE: Cassandra 2.2.6 Fails to Boot Up correctly - JNA Class

2017-07-20 Thread William Boutin
Thank you for your help.
We have been using jna-4.0.0.jar since using Cassandra 2.2(.6). Until last 
week, we had no issues. Now, we are experiencing the exception that I 
identified. We only have jna-4.0.0.jar loaded on our machines and the CLASSPATH 
that Cassandra builds only uses the jar from the /usr/share/cassandra/lib 
directory. Jna-3.2.4.jar was only introduced when I was debugging the original 
issue and will never be loaded on our machines. See the output of ps below.

Should I be looking elsewhere? Some have suggested that low memory on the JVM or 
Linux machine could cause jar-loading issues. I have tried doubling -Xss256k to 
-Xss512k.

Thanks

Billy S. Boutin 
Office Phone No. (913) 241-5574 
Cell Phone No. (732) 213-1368 

496   3318 1  0 Jul19 ?00:06:20 /usr/java/latest/bin/java 
-Dorg.xerial.snappy.tempdir=/var/tmp -Djava.io.tmpdir=/var/tmp -ea 
-javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar 
-XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities 
-XX:ThreadPriorityPolicy=42 -Xms1968M -Xmx1968M -Xmn200M 
-XX:+HeapDumpOnOutOfMemoryError -Xss256k -XX:StringTableSize=103 
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled 
-XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 
-XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly 
-XX:+UseTLAB -XX:+PerfDisableSharedMem 
-XX:CompileCommandFile=/etc/cassandra/conf/hotspot_compiler 
-XX:CMSWaitDuration=1 -XX:+CMSParallelInitialMarkEnabled 
-XX:+CMSEdenChunksRecordAlways -XX:CMSWaitDuration=1 -XX:+UseCondCardMark 
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintHeapAtGC 
-XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime 
-XX:+PrintPromotionFailure -Xloggc:/usr/share/cassandra/logs/gc.log 
-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M 
-Djava.net.preferIPv4Stack=true -Dcassandra.jmx.local.port=7199 
-XX:+DisableExplicitGC -Djava.library.path=/usr/share/cassandra/lib/sigar-bin 
-Dlogback.configurationFile=logback.xml 
-Dcassandra.logdir=/usr/share/cassandra/logs -Dcassandra.storagedir= 
-Dcassandra-pidfile=/var/run/cassandra/cassandra.pid -cp 
/etc/cassandra/conf:/usr/share/cassandra/lib/airline-0.6.jar:/usr/share/cassandra/lib/antlr-runtime-3.5.2.jar:/usr/share/cassandra/lib/apache-cassandra-clientutil-2.2.6-E002.jar:/usr/share/cassandra/lib/cassandra-driver-core-2.2.0-rc2-SNAPSHOT-20150617-shaded.jar:/usr/share/cassandra/lib/commons-cli-1.1.jar:/usr/share/cassandra/lib/commons-codec-1.2.jar:/usr/share/cassandra/lib/commons-lang3-3.1.jar:/usr/share/cassandra/lib/commons-math3-3.2.jar:/usr/share
/cassandra/lib/compress-lzf-0.8.4.jar:/usr/share/cassandra/lib/concurrentlinkedhashmap-lru-1.4.jar:/usr/share/cassandra/lib/crc32ex-0.1.1.jar:/usr/share/cassandra/lib/disruptor-3.0.1.jar:/usr/share/cassandra/lib/ecj-4.4.2.jar:/usr/share/cassandra/lib/guava-16.0.jar:/usr/share/cassandra/lib/high-scale-lib-1.0.6.jar:/usr/share/cassandra/lib/jackson-core-asl-1.9.2.jar:/usr/share/cassandra/lib/jackson-mapper-asl-1.9.2.jar:/usr/share/cassandra/lib/jamm-0.3.0.jar:/usr/share/cassandra/lib/javax.inject.jar:/usr/share/cassandra/lib/jbcrypt-0.3m.jar:/usr/share/cassandra/lib/jcl-over-slf4j-1.7.7.jar:/usr/share/cassandra/lib/jna-4.0.0.jar:/usr/share/cassandra/lib/joda-time-2.4.jar:/usr/share/cassandra/lib/json-simple-1.1.jar:/usr/share/cassandra/lib/libthrift-0.9.2.jar:/usr/share/cassandra/lib/log4j-over-slf4j-1.7.7.jar:/usr/share/cassandra/lib/logback-classic-1.1.3.jar:/usr/share/cassandra/lib/logback-core-1.1.3.jar:/usr/share/cassandra/lib/lz4-1.3.0.jar:/usr/share/cassandra/lib/metrics-core-3.1.0.jar:/usr/share/cassandra/lib/metrics-logback-3.1.0.jar:/usr/share/cassandra/lib/netty-all-4.0.23.Final.jar:/usr/share/cassandra/lib/ohc-core-0.3.4.jar:/usr/share/cassandra/lib/ohc-core-j8-0.3.4.jar:/usr/share/cassandra/lib/reporter-config3-3.0.0.jar:/usr/share/cassandra/lib/reporter-config-base-3.0.0.jar:/usr/share/cassandra/lib/sigar-1.6.4.jar:/usr/share/cassandra/lib/slf4j-api-1.7.7.jar:/usr/share/cassandra/lib/snakeyaml-1.11.jar:/usr/share/cassandra/lib/snappy-java-1.1.1.7.jar:/usr/share/cassandra/lib/ST4-4.0.8.jar:/usr/share/cassandra/lib/stream-2.5.2.jar:/usr/share/cassandra/lib/super-c
sv-2.1.0.jar:/usr/share/cassandra/lib/thrift-server-0.3.7.jar:/usr/share/cassandra/apache-cassandra-2.2.6-E002.jar:/usr/share/cassandra/apache-cassandra-thrift-2.2.6-E002.jar:/usr/share/cassandra/stress.jar:
 org.apache.cassandra.service.CassandraDaemon

-Original Message-
From: Jeff Jirsa [mailto:jji...@apache.org] 
Sent: Wednesday, July 19, 2017 11:47 PM
To: user@cassandra.apache.org
Subject: Re: Cassandra 2.2.6 Fails to Boot Up correctly - JNA Class



On 2017-07-19 10:41 (-0700), William Boutin  
wrote: 
> We are running apache-cassandra-2.2.6 for months with no JNA startup issues.
> Recently, we have updated some of our cassandra machines and we ran into a 
> Cassandra startup issue with JNA. See the stack trace 1 below.
> 
> Question 

Re: Quick question to config Prometheus to monitor Cassandra cluster

2017-07-20 Thread Kiran mk
You have to download the Prometheus HTTP JMX dependencies jar and the
Cassandra yaml, and mention the JMX port (7199) in the config.

Run the agent on a specific port on all the Cassandra nodes.

After this, go to your Prometheus server and add a scrape config to
collect metrics from all clients.
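
The scrape step might look like the following prometheus.yml fragment; the job name, host names, and port 7400 are placeholders for whatever the exporter agent actually listens on:

```yaml
scrape_configs:
  - job_name: 'cassandra'
    # One target per Cassandra node, pointing at the port the
    # javaagent/exporter was started on (placeholder values below).
    static_configs:
      - targets:
          - 'cassandra-node1:7400'
          - 'cassandra-node2:7400'
          - 'cassandra-node3:7400'
```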



On 20-Jul-2017 3:27 PM, "wxn...@zjqunshuo.com"  wrote:

Hi,
I'm going to set up Prometheus+Grafana to monitor Cassandra cluster. I
installed Prometheus and started it, but don't know how to config it to
support Cassandra.
Any ideas or related articles are appreciated.

Cheers,
Simon



Quick question to config Prometheus to monitor Cassandra cluster

2017-07-20 Thread wxn...@zjqunshuo.com
Hi,
I'm going to set up Prometheus+Grafana to monitor Cassandra cluster. I 
installed Prometheus and started it, but don't know how to config it to support 
Cassandra.
Any ideas or related articles are appreciated.

Cheers,
Simon 


secondary index use case

2017-07-20 Thread Micha
Hi,

even after reading much about secondary index usage I'm not sure if I
have the correct use case for it.

My table will contain about 150'000'000 records (each about 2KB of data).
There are two uuids used to identify a row. One uuid is unique for each
row; the other uuid is something like a groupid, which gives mostly ~20
records when queried.

So, if I define my primary key as (groupuuid, uuid) then:
"select * ... where groupuuid = X" gives me 0 - 20 rows

"select * ... where groupuuid = X and uuid = Y" gives me 0 | 1 row

now, sometimes I want to query only with uuid:
"select * ... where uuid = X"  to get exactly one row (without using
groupid)

Is this a good use case for a secondary index on uuid?


Thanks for helping,
 Michael
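
If the SASI route suggested elsewhere in the thread (C* 3.4+) is available, the uuid lookup could be served by a SPARSE-mode index; a sketch with made-up table and column names:

```cql
-- SPARSE mode suits columns where each value maps to very few rows,
-- e.g. a uuid that is unique per row.
CREATE CUSTOM INDEX item_uuid_idx ON items (uuid)
  USING 'org.apache.cassandra.index.sasi.SASIIndex'
  WITH OPTIONS = {'mode': 'SPARSE'};
```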





