Re: handling down node cassandra 2.0.15

2015-11-16 Thread Anishek Agarwal
hey Anuj,

Ok, I will try that next time. So you are saying that since I am replacing the
machine in place (trying to get the same machine back into the cluster), and it
already has some data, I don't clean the commitlog/data directories, I set
auto_bootstrap = false and then restart the node, followed by a repair on
this machine, right?

thanks
anishek

On Mon, Nov 16, 2015 at 11:40 PM, Anuj Wadehra 
wrote:

> Hi Anishek,
>
> In my opinion, you already have the data and bootstrapping is not needed here.
> You can set auto_bootstrap to false in cassandra.yaml and, once
> Cassandra is restarted, you should run repair to fix any inconsistent data.
>
>
> Thanks
> Anuj
>
>
>
> On Monday, 16 November 2015 10:34 PM, Josh Smith <
> josh.sm...@careerbuilder.com> wrote:
>
>
> Did you set the JVM_OPTS to replace the address? That is usually the error I
> get when I forget to set replace_address in cassandra-env.
>
> JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=address_of_dead_node"
>
>
> *From:* Anishek Agarwal [mailto:anis...@gmail.com]
> *Sent:* Monday, November 16, 2015 9:25 AM
> *To:* user@cassandra.apache.org
> *Subject:* Re: handling down node cassandra 2.0.15
>
> Nope, it's not.
>
> On Mon, Nov 16, 2015 at 5:48 PM, sai krishnam raju potturi <
> pskraj...@gmail.com> wrote:
>
> Is that a seed node?
>
> On Mon, Nov 16, 2015, 05:21 Anishek Agarwal  wrote:
>
> Hello,
>
> We have a 3-node cluster, and one of the nodes went down due to what looks
> like a hardware memory failure. We followed the steps below after the
> node had been down for more than the default value of *max_hint_window_in_ms*
>
> I tried to restart cassandra by following the steps @
>
>
>1.
>
> http://docs.datastax.com/en/cassandra/1.2/cassandra/operations/ops_replace_node_t.html
>2.
>
> http://blog.alteroot.org/articles/2014-03-12/replace-a-dead-node-in-cassandra.html
>
> *except the "clear data" part, as it was not specified in the second blog
> above.*
>
> I was trying to restart the same node that went down; however, I did not
> see the log messages from "StorageService" that [2] says should appear.
>
> Instead, it just tried to replay and then stopped with the error message
> below:
>
> *ERROR [main] 2015-11-16 15:27:22,944 CassandraDaemon.java (line 584)
> Exception encountered during startup*
> *java.lang.RuntimeException: Cannot replace address with a node that is
> already bootstrapped*
>
> Can someone please tell me if there is something I am doing wrong here?
>
> Thanks for the help in advance.
>
> Regards,
> Anishek
>
>
>
>
>


Re: handling down node cassandra 2.0.15

2015-11-16 Thread Anishek Agarwal
Hey Josh

I did set the replace address, which was the same as the address of the machine
that went down, since this was an in-place replacement.

anishek

On Mon, Nov 16, 2015 at 10:33 PM, Josh Smith 
wrote:

> Did you set the JVM_OPTS to replace the address? That is usually the error I
> get when I forget to set replace_address in cassandra-env.
>
>
>
> JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=address_of_dead_node"
>
>
>
>
>
> *From:* Anishek Agarwal [mailto:anis...@gmail.com]
> *Sent:* Monday, November 16, 2015 9:25 AM
> *To:* user@cassandra.apache.org
> *Subject:* Re: handling down node cassandra 2.0.15
>
>
>
> Nope, it's not.
>
>
>
> On Mon, Nov 16, 2015 at 5:48 PM, sai krishnam raju potturi <
> pskraj...@gmail.com> wrote:
>
> Is that a seed node?
>
>
>
> On Mon, Nov 16, 2015, 05:21 Anishek Agarwal  wrote:
>
> Hello,
>
>
>
> We have a 3-node cluster, and one of the nodes went down due to what looks
> like a hardware memory failure. We followed the steps below after the
> node had been down for more than the default value of *max_hint_window_in_ms*
>
>
>
> I tried to restart cassandra by following the steps @
>
>
>
>1.
>
> http://docs.datastax.com/en/cassandra/1.2/cassandra/operations/ops_replace_node_t.html
>2.
>
> http://blog.alteroot.org/articles/2014-03-12/replace-a-dead-node-in-cassandra.html
>
> *except the "clear data" part, as it was not specified in the second blog
> above.*
>
>
>
> I was trying to restart the same node that went down; however, I did not
> see the log messages from "StorageService" that [2] says should appear.
>
>
>
> Instead, it just tried to replay and then stopped with the error message
> below:
>
>
>
> *ERROR [main] 2015-11-16 15:27:22,944 CassandraDaemon.java (line 584)
> Exception encountered during startup*
>
> *java.lang.RuntimeException: Cannot replace address with a node that is
> already bootstrapped*
>
>
>
> Can someone please tell me if there is something I am doing wrong here?
>
>
>
> Thanks for the help in advance.
>
>
>
> Regards,
>
> Anishek
>
>
>


Re: Overriding timestamp with light weight transactions

2015-11-16 Thread Peddi, Praveen
Adding a sleep was our last resort, but I was hoping to find a way that doesn't 
affect our API latencies. Thanks for the suggestion though.

Praveen

On Nov 16, 2015, at 6:29 PM, Laing, Michael <michael.la...@nytimes.com> wrote:

So you are reading the row before writing, since you say you have the timestamp.

If you really need CAS for the write and the timestamp you read is in the 
future (by local reckoning), why not delay that write until the future arrives 
and forget about explicitly setting the timestamp?

Backtracking on timestamps is definitely a consistency risk anyway, as I 
understand it, since the 'latest' one wins and could easily be lurking in a 
hint somewhere etc.

On Mon, Nov 16, 2015 at 4:27 PM, Peddi, Praveen <pe...@amazon.com> wrote:
Jon,
Thanks for your response. Our custom-supplied timestamp is only provided if the 
current timestamp on the row is in the future. We just add a few millis to the 
current timestamp value and override the timestamp. That ensures the updates are 
read in the correct order. We don’t completely manage the timestamp field 
ourselves, but only when the row’s current timestamp is in the future. This 
approach seems to be working fine for us (running for the last 6 weeks).

As far as multiple updates within a few ms, that is the nature of our system. 
It's only a very small percentage of our requests that can come in rapid fire, 
but when it happens, we need to handle it. Without NTP drift issues (or with 
minimal drift), there are absolutely no issues. Only when the drift is 
significant (Europe’s case) does it become a problem, and the above approach has 
solved it nicely.

Praveen

From: Jon Haddad <jonathan.had...@gmail.com> on behalf of Jon Haddad <j...@jonhaddad.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Monday, November 16, 2015 at 4:05 PM
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Re: Overriding timestamp with light weight transactions

LWT uses the coordinator’s machine’s timestamp to generate a timeuuid, which is 
used as the timestamp of the paxos ballot.  You cannot supply a paxos ballot 
that’s behind the current time because it’s invalid.

You’re issuing multiple updates within a few ms in a distributed system, it 
sounds like you’re trying to ignore the real world problem of clock variance.  
If you understand that you’ve got clocks that are going to be more than 10ms 
off, and you’re issuing queries within a few ms of each other, why do you think 
that your custom supplied timestamps are going to be correct?


On Nov 16, 2015, at 1:01 PM, Peddi, Praveen <pe...@amazon.com> wrote:

We have some rapid-fire updates (multiple updates within a few millis). I wish 
we had control over NTP drift, but AWS doesn’t guarantee “0 drift”. In North 
America, it's minimal (<5 to 10 ms), but Europe has longer drifts. We override 
the timestamp only if we see that the current timestamp on the row is in the 
future. Why do you think overriding the timestamp is a workaround? It seems like 
a valid reason to override timestamps.

Thanks
Praveen



From: Jon Haddad <jonathan.had...@gmail.com> on behalf of Jon Haddad <j...@jonhaddad.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Monday, November 16, 2015 at 3:42 PM
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Re: Overriding timestamp with light weight transactions

Perhaps you should fix your clock drift issues instead of trying to use a 
workaround?

On Nov 16, 2015, at 11:39 AM, Peddi, Praveen <pe...@amazon.com> wrote:

Hi,
We are using Cassandra 2.0.9, and we currently have a “using timestamp” clause in 
all our update queries. We did this to fix occasional issues with NTP drift on 
AWS. We recently introduced a conditional update in a couple of our APIs and 
realized that I can’t have “using timestamp” and “if column1=?” in the same 
query.

com.datastax.driver.core.exceptions.InvalidQueryException: Cannot provide 
custom timestamp for conditional update

How do I achieve this if I want to override the timestamp in a query with a 
conditional update? Also, does anyone know the reason behind not supporting 
“using timestamp” for conditional updates? I am trying to understand the 
problems this would cause.

Thanks
Praveen





Re: Repair Hangs while requesting Merkle Trees

2015-11-16 Thread Anuj Wadehra
Hi Bryan,


Thanks for the reply !!

I didn't mean streaming_socket_timeout_in_ms. I meant that when you run netstat 
(the Linux command) on node A in DC1, you will notice a connection in the 
ESTABLISHED state with node B in DC2, but when you run netstat on node B, you 
won't find any connection with node A. Such connections exist across DCs; is 
that a problem?


We haven't set streaming_socket_timeout_in_ms, which I know must be set. But I am 
not sure whether setting this property has any impact on Merkle tree requests; I 
thought it only applied to data streaming, when a mismatch is found and data 
needs to be streamed. Please confirm. What value do you use for the streaming 
socket timeout?


Moreover, if a socket timeout were the issue, it should happen on other nodes 
too; repair is failing on just one node, as its Merkle tree request is getting 
lost and not transmitted to one or more nodes in the remote DC.


I am not sure about the exact distance, but they are connected with a very high 
speed 10 Gbps link.


When you say different TCP stack tuning, do you have any document/blog/link 
describing recommendations for a multi-DC Cassandra setup? Can you elaborate on 
which settings need to be different?



Thanks

Anuj

Sent from Yahoo Mail on Android

From: "Bryan Cheng" 
Date: Tue, 17 Nov, 2015 at 5:54 am
Subject: Re: Repair Hangs while requesting Merkle Trees

Hi Anuj,


Did you mean streaming_socket_timeout_in_ms? If not, then you definitely want 
that set. Even the best network connections will break occasionally, and in 
Cassandra < 2.1.10 (I believe) this would leave those connections hanging 
indefinitely on one end.


How far away are your two DCs from a network perspective, out of curiosity? 
You'll almost certainly be doing different TCP stack tuning for cross-DC, 
notably your buffer sizes, window params, cassandra-specific stuff like 
otc_coalescing_strategy, inter_dc_tcp_nodelay, etc.


On Sat, Nov 14, 2015 at 10:35 AM, Anuj Wadehra  wrote:

One more observation: we observed that there are a few TCP connections which one 
node shows as ESTABLISHED, but when we go to the node at the other end, the 
connection is not there. They are called "phantom" connections, I guess. Can 
this be a possible cause?


Thanks

Anuj


Sent from Yahoo Mail on Android

From: "Anuj Wadehra" 
Date: Sat, 14 Nov, 2015 at 11:59 pm
Subject: Re: Repair Hangs while requesting Merkle Trees

Thanks Daemeon !!


I will capture the output of netstat and share it in the next few days. We were 
also thinking of taking TCP dumps. If it's a network issue and increasing the 
request timeout worked, I am not sure how Cassandra is dropping messages based 
on a timeout; repair messages are non-droppable and not supposed to be timed out.


2 of the 3 nodes in the DC are able to complete repair without any issue. Just 
one node is problematic.


I also observed frequent messages in the logs of other nodes saying that hint 
replay timed out, and the node where hints were being replayed is always a 
remote DC node. Is it related somehow?


Thanks

Anuj

Sent from Yahoo Mail on Android

From: "daemeon reiydelle" 
Date: Thu, 12 Nov, 2015 at 10:34 am
Subject: Re: Repair Hangs while requesting Merkle Trees



Have you checked the network statistics on that machine (netstat -tas) while 
attempting to repair? If netstat shows ANY issues, you have a problem. Can you 
put the command in a loop running every 60 seconds for maybe 15 minutes and 
post back?

Out of curiosity, how many remote DC nodes are getting successfully repaired?



...
“Life should not be a journey to the grave with the intention of arriving
safely in a pretty and well preserved body, but rather to skid in broadside
in a cloud of smoke, thoroughly used up, totally worn out, and loudly
proclaiming “Wow! What a Ride!” 
- Hunter Thompson

Daemeon C.M. Reiydelle
USA (+1) 415.501.0198
London (+44) (0) 20 8144 9872


On Wed, Nov 11, 2015 at 1:06 PM, Anuj Wadehra  wrote:

Hi,


we are using 2.0.14. We have 2 DCs at remote locations with 10 Gbps 
connectivity. We are able to complete repair (-par -pr) on 5 nodes. On only one 
node in DC2, we are unable to complete repair, as it always hangs. The node 
sends Merkle tree requests, but one or more nodes in DC1 (remote) never show 
that they sent the Merkle tree reply to the requesting node.
Repair hangs indefinitely. 

After increasing request_timeout_in_ms on the affected node, we were able to 
successfully run repair on one of the two occasions.

Any comments on why this is happening on just one node? In 
OutboundTcpConnection.java, the isTimeOut method always returns false for a 
non-droppable verb such as a Merkle tree request (verb=REPAIR_MESSAGE), so why 
did increasing the request timeout solve the problem on one occasion?



Thanks

Anuj Wadehra 




On Thursday, 12 November 2015 2:35 AM, Anuj Wadehra  
wrote:



Hi,


We have 2 DCs at remote locations with 10 Gbps connectivity. We are able to 
complete repair (-par -pr) on 5 nodes. On only one node in DC2, we are unable 
to complete repair, as it always hangs.

Re: Repair Hangs while requesting Merkle Trees

2015-11-16 Thread Bryan Cheng
Hi Anuj,

Did you mean streaming_socket_timeout_in_ms? If not, then you definitely
want that set. Even the best network connections will break occasionally,
and in Cassandra < 2.1.10 (I believe) this would leave those connections
hanging indefinitely on one end.

How far away are your two DCs from a network perspective, out of
curiosity? You'll almost certainly be doing different TCP stack tuning for
cross-DC, notably your buffer sizes, window params, cassandra-specific
stuff like otc_coalescing_strategy, inter_dc_tcp_nodelay, etc.

On Sat, Nov 14, 2015 at 10:35 AM, Anuj Wadehra 
wrote:

> One more observation: we observed that there are a few TCP connections which
> one node shows as ESTABLISHED, but when we go to the node at the other end,
> the connection is not there. They are called "phantom" connections, I guess.
> Can this be a possible cause?
>
> Thanks
> Anuj
>
> Sent from Yahoo Mail on Android
> 
> --
> *From*:"Anuj Wadehra" 
> *Date*:Sat, 14 Nov, 2015 at 11:59 pm
>
> *Subject*:Re: Repair Hangs while requesting Merkle Trees
>
> Thanks Daemeon !!
>
> I wil capture the output of netstats and share in next few days. We were
> thinking of taking tcp dumps also. If its a network issue and increasing
> request timeout worked, not sure how Cassandra is dropping messages based
> on timeout.Repair messages are non droppable and not supposed to be
> timedout.
>
> 2 of the 3 nodes in the DC are able to complete repair without any issue.
> Just one node is problematic.
>
> I also observed frequent messages in logs of other nodes which say that
> hints replay timedout..and the node where hints were being replayed is
> always a remote dc node. Is it related some how?
>
> Thanks
> Anuj
>
> Sent from Yahoo Mail on Android
> 
> --
> *From*:"daemeon reiydelle" 
> *Date*:Thu, 12 Nov, 2015 at 10:34 am
> *Subject*:Re: Repair Hangs while requesting Merkle Trees
>
>
> Have you checked the network statistics on that machine (netstat -tas)
> while attempting to repair? If netstat shows ANY issues, you have a
> problem. Can you put the command in a loop running every 60 seconds for
> maybe 15 minutes and post back?
>
> Out of curiosity, how many remote DC nodes are getting successfully
> repaired?
>
>
>
> *...*
> *“Life should not be a journey to the grave with the intention of arriving
> safely in a pretty and well preserved body, but rather to skid in broadside
> in a cloud of smoke, thoroughly used up, totally worn out, and loudly
> proclaiming “Wow! What a Ride!” - Hunter Thompson*
>
> Daemeon C.M. Reiydelle
> USA (+1) 415.501.0198
> London (+44) (0) 20 8144 9872
>
> On Wed, Nov 11, 2015 at 1:06 PM, Anuj Wadehra 
> wrote:
>
>> Hi,
>>
>> we are using 2.0.14. We have 2 DCs at remote locations with 10 Gbps
>> connectivity. We are able to complete repair (-par -pr) on 5 nodes. On only
>> one node in DC2, we are unable to complete repair, as it always hangs. The
>> node sends Merkle tree requests, but one or more nodes in DC1 (remote) never
>> show that they sent the Merkle tree reply to the requesting node.
>> Repair hangs indefinitely.
>>
>> After increasing request_timeout_in_ms on the affected node, we were able to
>> successfully run repair on one of the two occasions.
>>
>> Any comments on why this is happening on just one node? In
>> OutboundTcpConnection.java, the isTimeOut method always returns false for a
>> non-droppable verb such as a Merkle tree request (verb=REPAIR_MESSAGE), so
>> why did increasing the request timeout solve the problem on one occasion?
>>
>>
>> Thanks
>> Anuj Wadehra
>>
>>
>>
>> On Thursday, 12 November 2015 2:35 AM, Anuj Wadehra <
>> anujw_2...@yahoo.co.in> wrote:
>>
>>
>> Hi,
>>
>> We have 2 DCs at remote locations with 10 Gbps connectivity. We are able to
>> complete repair (-par -pr) on 5 nodes. On only one node in DC2, we are
>> unable to complete repair, as it always hangs. The node sends Merkle tree
>> requests, but one or more nodes in DC1 (remote) never show that they sent
>> the Merkle tree reply to the requesting node.
>> Repair hangs indefinitely.
>>
>> After increasing request_timeout_in_ms on the affected node, we were able to
>> successfully run repair on one of the two occasions.
>>
>> Any comments on why this is happening on just one node? In
>> OutboundTcpConnection.java, the isTimeOut method always returns false for a
>> non-droppable verb such as a Merkle tree request (verb=REPAIR_MESSAGE), so
>> why did increasing the request timeout solve the problem on one occasion?
>>
>>
>> Thanks
>> Anuj Wadehra
>>
>>
>>
>


Issue with protobuff and Spark cassandra connector

2015-11-16 Thread Cassa L
Hi,
 Has anyone used protobuf with the spark-cassandra-connector? I am using
protobuf-3.0-beta with spark-1.4 and cassandra-connector-2.10. I keep
getting "Unable to find proto buffer class" in my code. I checked the version
of the protobuf jar, and 3.0-beta is what is loaded in the classpath. The
protobuf messages come from a Kafka stream.

15/11/16 15:32:21 ERROR Executor: Exception in task 2.0 in stage 13.0 (TID
35)
java.lang.RuntimeException: Unable to find proto buffer class:
com.test.serializers.TestEvent$Event
at
com.google.protobuf.GeneratedMessageLite$SerializedForm.readResolve(GeneratedMessageLite.java:1063)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)



Here is my code:

JavaDStream<MyData> rddStream = protoBuffMsgs.map(protoBuff ->
        StreamRawData.convertProtoBuffToRawData(protoBuff));

rddStream.foreachRDD(rdd -> {
    StreamRawData.writeToCassandra(rdd);
    return null;
});

public static void writeToCassandra(JavaRDD<MyData> rowRDD) {
    // write each RDD of mapped rows to the keyspace.data table
    javaFunctions(rowRDD).writerBuilder("keyspace", "data",
            mapToRow(MyData.class)).saveToCassandra();
}

If I remove writeToCassandra() from my code, it works; counts and
filters on the protobuf stream of data also work.
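
For reference, one workaround often suggested for this particular stack trace
(untested here): the failing readResolve() belongs to Java serialization, so
switching Spark to Kryo and registering the generated class keeps protobuf
objects off that path. A sketch:

import org.apache.spark.SparkConf;

public class KryoConfig {
    public static SparkConf build() {
        return new SparkConf()
                .setAppName("proto-to-cassandra") // app name is illustrative
                // The readResolve() in the trace is Java serialization resolving
                // the protobuf class by name; Kryo serializes registered classes
                // directly and avoids that lookup.
                .set("spark.serializer",
                        "org.apache.spark.serializer.KryoSerializer")
                .registerKryoClasses(new Class<?>[]{
                        com.test.serializers.TestEvent.Event.class});
    }
}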


Re: Overriding timestamp with light weight transactions

2015-11-16 Thread Laing, Michael
So you are reading the row before writing, since you say you have the timestamp.

If you really need CAS for the write *and* the timestamp you read is in the
future (by local reckoning), why not delay that write until the future
arrives and forget about explicitly setting the timestamp?

Backtracking on timestamps is definitely a consistency risk anyway, as I
understand it, since the 'latest' one wins and could easily be lurking in a
hint somewhere etc.
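
A minimal sketch of that idea, assuming the row's write time has already been
read back as epoch millis:

public final class FutureWriteDelay {
    /** If the row's stored write time is ahead of the local clock, sleep out
     *  the difference, then issue the LWT without USING TIMESTAMP. */
    public static void waitForFuture(long rowWriteTimeMillis)
            throws InterruptedException {
        long delta = rowWriteTimeMillis - System.currentTimeMillis();
        if (delta > 0) {
            Thread.sleep(delta + 1); // the "future" has now arrived locally
        }
    }
}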

On Mon, Nov 16, 2015 at 4:27 PM, Peddi, Praveen  wrote:

> Jon,
> Thanks for your response. Our custom-supplied timestamp is only provided
> if the current timestamp on the row is in the future. *We just add a few
> millis to the current timestamp value and override the timestamp*. That
> ensures the updates are read in the correct order. We don’t completely manage
> the timestamp field ourselves, but only when the row’s current timestamp is
> in the future. This approach seems to be working fine for us (running for the
> last 6 weeks).
>
> As far as multiple updates within a few ms, that is the nature of our
> system. It's only a very small percentage of our requests that can come in
> rapid fire, but when it happens, we need to handle it. Without NTP drift
> issues (or with minimal drift), there are absolutely no issues. Only when the
> drift is significant (Europe’s case) does it become a problem, and the above
> approach has solved it nicely.
>
> Praveen
>
> From: Jon Haddad  on behalf of Jon Haddad <j...@jonhaddad.com>
> Reply-To: "user@cassandra.apache.org" 
> Date: Monday, November 16, 2015 at 4:05 PM
> To: "user@cassandra.apache.org" 
> Subject: Re: Overriding timestamp with light weight transactions
>
> LWT uses the coordinator’s machine’s timestamp to generate a timeuuid,
> which is used as the timestamp of the paxos ballot.  You cannot supply a
> paxos ballot that’s behind the current time because it’s invalid.
>
> You’re issuing multiple updates within a few ms in a distributed system,
> it sounds like you’re trying to ignore the real world problem of clock
> variance.  If you understand that you’ve got clocks that are going to be
> more than 10ms off, and you’re issuing queries within a few ms of each
> other, why do you think that your custom supplied timestamps are going to
> be correct?
>
>
> On Nov 16, 2015, at 1:01 PM, Peddi, Praveen  wrote:
>
> We have some rapid-fire updates (multiple updates within a few millis). I
> wish we had control over NTP drift, but AWS doesn’t guarantee “0 drift”. In
> North America, it's minimal (<5 to 10 ms), but Europe has longer drifts. We
> override the timestamp only if we see that the current timestamp on the row
> is in the future. Why do you think overriding the timestamp is a workaround?
> It seems like a valid reason to override timestamps.
>
> Thanks
> Praveen
>
>
> From: Jon Haddad  on behalf of Jon Haddad <j...@jonhaddad.com>
> Reply-To: "user@cassandra.apache.org" 
> Date: Monday, November 16, 2015 at 3:42 PM
> To: "user@cassandra.apache.org" 
> Subject: Re: Overriding timestamp with light weight transactions
>
> Perhaps you should fix your clock drift issues instead of trying to use a
> workaround?
>
> On Nov 16, 2015, at 11:39 AM, Peddi, Praveen  wrote:
>
> Hi,
> We are using Cassandra 2.0.9, and we currently have a “using timestamp”
> clause in all our update queries. We did this to fix occasional issues with
> NTP drift on AWS. We recently introduced a conditional update in a couple of
> our APIs and realized that I can’t have “using timestamp” and “if
> column1=?” in the same query.
>
> com.datastax.driver.core.exceptions.InvalidQueryException: Cannot provide
> custom timestamp for conditional update
>
> How do I achieve this if I want to override the timestamp in a query with a
> conditional update? Also, does anyone know the reason behind not supporting
> “using timestamp” for conditional updates? I am trying to understand the
> problems this would cause.
>
> Thanks
> Praveen
>
>
>
>


Re: Devcenter & C* 3.0 Connection Error.

2015-11-16 Thread Michael Shuler

On 11/16/2015 04:56 PM, Bosung Seo wrote:

Hi guys,

Doesn't Devcenter support C* 3.0?

When I tried to use Devcenter with C* 3.0, I got this error.

The specified host(s) could not be reached.
All host(s) tried for query failed (tried: /{ipaddress}:9042
(com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured
table schema_keyspaces))
unconfigured table schema_keyspaces

Has anyone seen this issue before?


This is the same error I got with OpsCenter on 3.0, so it appears the 
client is attempting to query a table that no longer exists in 3.0+. 
Cassandra 3.0 support is being worked on for both DevCenter and OpsCenter.
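
For anyone curious, the table moved rather than vanished: in 3.0 the schema
tables live in the system_schema keyspace (system.schema_keyspaces became
system_schema.keyspaces), so a client hard-coded to the old name gets exactly
this error. A quick check with a 3.0-compatible driver (a sketch):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class SchemaTableCheck {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        // Old clients query system.schema_keyspaces, which 3.0 no longer has;
        // the replacement lives in the system_schema keyspace.
        for (Row row : session.execute(
                "SELECT keyspace_name FROM system_schema.keyspaces")) {
            System.out.println(row.getString("keyspace_name"));
        }
        cluster.close();
    }
}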


--
Kind regards,
Michael


Re: Generalized download link?

2015-11-16 Thread Michael Shuler

On 11/16/2015 04:24 PM, John Wong wrote:

Obviously you will get a better answer from someone directly with
datastax... but IMO, I would look to either


The ASF handles the Apache Cassandra download infrastructure, not 
DataStax. (I work for DataStax, fyi)


I believe the OP is asking about links to the mirror redirects from 
http://cassandra.apache.org/download/



* keep the package locally in your own infrastructure. I have had mirror
issue or content unavailable error in the past with other OSS before.

If you can get hold of an NFS server or an S3 bucket, you will probably
be okay doing bulk rollouts more quickly and reliably.


This is good advice and might make things a little faster, if the 
internal mirror is local to the installing clients.



On Mon, Nov 16, 2015 at 3:40 PM, Bryan Cheng <br...@blockcypher.com> wrote:

Is there a URL available for downloading Cassandra that abstracts
away the mirror selection (eg. just 302's to a mirror URL?) We've
got a few self-configuring Cassandras (for example, the Docker
container our devs use), and using the same mirror for the
containers or for any bulk provisioning operation seems like bad
table manners.


As suggested, your own mirror might be the best route to go, since you 
control the availability and content. The full list of mirrors and 
creating your own mirror docs might be helpful if you want a 
geographically closer mirror, or want to set up your own mirror of a 
subset of software.


http://www.apache.org/mirrors/
http://www.apache.org/info/how-to-mirror.html

I don't know if they or one of the mirrors might provide the ability to 
rsync mirror *only* cassandra to your internal mirror, but even that may 
be too much unnecessary data. Personally, I'd download and throw the 
exact versions of packages you want in dirs on your own web server, and 
just upload new versions for your devs after you've had a quick test or two.


--
Kind regards,
Michael


Devcenter & C* 3.0 Connection Error.

2015-11-16 Thread Bosung Seo
Hi guys,

Doesn't Devcenter support C* 3.0?

When I tried to use Devcenter with C* 3.0, I got this error.

The specified host(s) could not be reached.
All host(s) tried for query failed (tried: /{ipaddress}:9042
(com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured
table schema_keyspaces))
unconfigured table schema_keyspaces

Has anyone seen this issue before?

Thanks,
Bo




Re: Generalized download link?

2015-11-16 Thread John Wong
Obviously you will get a better answer from someone directly with
datastax... but IMO, I would look to either

* use a package manager like apt or yum; they are usually up to date if you
use the PPA route.

* keep the package locally in your own infrastructure. I have had mirror
issue or content unavailable error in the past with other OSS before.

If you can get hold of an NFS server or an S3 bucket, you will probably be
okay doing bulk rollouts more quickly and reliably.


On Mon, Nov 16, 2015 at 3:40 PM, Bryan Cheng  wrote:

> Hey list,
>
> Is there a URL available for downloading Cassandra that abstracts away the
> mirror selection (eg. just 302's to a mirror URL?) We've got a few
> self-configuring Cassandras (for example, the Docker container our devs
> use), and using the same mirror for the containers or for any bulk
> provisioning operation seems like bad table manners.
>


Re: Overriding timestamp with light weight transactions

2015-11-16 Thread Peddi, Praveen
Jon,
Thanks for your response. Our custom-supplied timestamp is only provided if the 
current timestamp on the row is in the future. We just add a few millis to the 
current timestamp value and override the timestamp. That ensures the updates are 
read in the correct order. We don't completely manage the timestamp field 
ourselves, but only when the row's current timestamp is in the future. This 
approach seems to be working fine for us (running for the last 6 weeks).

As far as multiple updates within a few ms, that is the nature of our system. 
It's only a very small percentage of our requests that can come in rapid fire, 
but when it happens, we need to handle it. Without NTP drift issues (or with 
minimal drift), there are absolutely no issues. Only when the drift is 
significant (Europe's case) does it become a problem, and the above approach has 
solved it nicely.
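
A minimal sketch of what that looks like in driver code (table, column, and the
2 ms pad here are illustrative, not our exact implementation):

import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public final class FutureTimestampOverride {
    /** Override the write timestamp only when the row's stored timestamp
     *  is ahead of the local clock. */
    public static void update(Session session) {
        Row row = session.execute(
                "SELECT writetime(value) AS wt FROM ks.t WHERE key = 'k'").one();
        long rowTs = row.getLong("wt");                // microseconds since epoch
        long now = System.currentTimeMillis() * 1000L; // microseconds since epoch
        if (rowTs > now) {
            // nudge a couple of millis past the stored timestamp so this write wins
            session.execute("UPDATE ks.t USING TIMESTAMP " + (rowTs + 2000L)
                    + " SET value = 'v' WHERE key = 'k'");
        } else {
            session.execute("UPDATE ks.t SET value = 'v' WHERE key = 'k'");
        }
    }
}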

Praveen

From: Jon Haddad <jonathan.had...@gmail.com> on behalf of Jon Haddad <j...@jonhaddad.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Monday, November 16, 2015 at 4:05 PM
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Re: Overriding timestamp with light weight transactions

LWT uses the coordinator's machine's timestamp to generate a timeuuid, which is 
used as the timestamp of the paxos ballot.  You cannot supply a paxos ballot 
that's behind the current time because it's invalid.

You're issuing multiple updates within a few ms in a distributed system, it 
sounds like you're trying to ignore the real world problem of clock variance.  
If you understand that you've got clocks that are going to be more than 10ms 
off, and you're issuing queries within a few ms of each other, why do you think 
that your custom supplied timestamps are going to be correct?


On Nov 16, 2015, at 1:01 PM, Peddi, Praveen <pe...@amazon.com> wrote:

We have some rapid-fire updates (multiple updates within a few millis). I wish 
we had control over NTP drift, but AWS doesn't guarantee "0 drift". In North 
America, it's minimal (<5 to 10 ms), but Europe has longer drifts. We override 
the timestamp only if we see that the current timestamp on the row is in the 
future. Why do you think overriding the timestamp is a workaround? It seems like 
a valid reason to override timestamps.

Thanks
Praveen



From: Jon Haddad <jonathan.had...@gmail.com> on behalf of Jon Haddad <j...@jonhaddad.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Monday, November 16, 2015 at 3:42 PM
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Re: Overriding timestamp with light weight transactions

Perhaps you should fix your clock drift issues instead of trying to use a 
workaround?

On Nov 16, 2015, at 11:39 AM, Peddi, Praveen <pe...@amazon.com> wrote:

Hi,
We are using Cassandra 2.0.9, and we currently have a "using timestamp" clause in 
all our update queries. We did this to fix occasional issues with NTP drift on 
AWS. We recently introduced a conditional update in a couple of our APIs and 
realized that I can't have "using timestamp" and "if column1=?" in the same 
query.

com.datastax.driver.core.exceptions.InvalidQueryException: Cannot provide 
custom timestamp for conditional update

How do I achieve this if I want to override the timestamp in a query with a 
conditional update? Also, does anyone know the reason behind not supporting 
"using timestamp" for conditional updates? I am trying to understand the 
problems this would cause.

Thanks
Praveen




Re: Overriding timestamp with light weight transactions

2015-11-16 Thread Jon Haddad
LWT uses the coordinator’s machine’s timestamp to generate a timeuuid, which is 
used as the timestamp of the paxos ballot.  You cannot supply a paxos ballot 
that’s behind the current time because it’s invalid.

You’re issuing multiple updates within a few ms in a distributed system, it 
sounds like you’re trying to ignore the real world problem of clock variance.  
If you understand that you’ve got clocks that are going to be more than 10ms 
off, and you’re issuing queries within a few ms of each other, why do you think 
that your custom supplied timestamps are going to be correct?  
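
To make the mechanics concrete, the driver's utility class exposes the same
relationship between a time-based UUID and a wall-clock timestamp (an
illustration only, not the server's ballot code):

import com.datastax.driver.core.utils.UUIDs;
import java.util.UUID;

public class BallotIllustration {
    public static void main(String[] args) {
        // A version-1 (time-based) UUID embeds the generating machine's clock;
        // Paxos ballots are ordered by exactly this embedded timestamp.
        UUID timeuuid = UUIDs.timeBased();
        System.out.println(timeuuid + " -> " + UUIDs.unixTimestamp(timeuuid));
    }
}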


> On Nov 16, 2015, at 1:01 PM, Peddi, Praveen  wrote:
> 
> We have some rapid-fire updates (multiple updates within a few millis). I
> wish we had control over NTP drift, but AWS doesn’t guarantee “0 drift”. In
> North America, it's minimal (<5 to 10 ms), but Europe has longer drifts. We
> override the timestamp only if we see that the current timestamp on the row
> is in the future. Why do you think overriding the timestamp is a workaround?
> It seems like a valid reason to override timestamps.
> 
> Thanks
> Praveen
> 
> 
> From: Jon Haddad <jonathan.had...@gmail.com> on behalf of Jon Haddad <j...@jonhaddad.com>
> Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
> Date: Monday, November 16, 2015 at 3:42 PM
> To: "user@cassandra.apache.org" <user@cassandra.apache.org>
> Subject: Re: Overriding timestamp with light weight transactions
> 
> Perhaps you should fix your clock drift issues instead of trying to use a 
> workaround?
> 
>> On Nov 16, 2015, at 11:39 AM, Peddi, Praveen  wrote:
>> 
>> Hi,
>> We are using Cassandra 2.0.9, and we currently have a “using timestamp”
>> clause in all our update queries. We did this to fix occasional issues with
>> NTP drift on AWS. We recently introduced a conditional update in a couple of
>> our APIs and realized that I can’t have “using timestamp” and “if column1=?”
>> in the same query.
>> 
>> com.datastax.driver.core.exceptions.InvalidQueryException: Cannot provide 
>> custom timestamp for conditional update
>> 
>> How do I achieve this if I want to override the timestamp in a query with a
>> conditional update? Also, does anyone know the reason behind not supporting
>> “using timestamp” for conditional updates? I am trying to understand the
>> problems this would cause.
>> 
>> Thanks
>> Praveen
> 



Re: Overriding timestamp with light weight transactions

2015-11-16 Thread Peddi, Praveen
We have some rapid-fire updates (multiple updates within a few millis). I wish 
we had control over NTP drift, but AWS doesn't guarantee "0 drift". In North 
America, it's minimal (<5 to 10 ms), but Europe has longer drifts. We override 
the timestamp only if we see that the current timestamp on the row is in the 
future. Why do you think overriding the timestamp is a workaround? It seems like 
a valid reason to override timestamps.

Thanks
Praveen



From: Jon Haddad <jonathan.had...@gmail.com> on behalf of Jon Haddad <j...@jonhaddad.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Monday, November 16, 2015 at 3:42 PM
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Re: Overriding timestamp with light weight transactions

Perhaps you should fix your clock drift issues instead of trying to use a 
workaround?

On Nov 16, 2015, at 11:39 AM, Peddi, Praveen <pe...@amazon.com> wrote:

Hi,
We are using Cassandra 2.0.9, and we currently have a "using timestamp" clause in 
all our update queries. We did this to fix occasional issues with NTP drift on 
AWS. We recently introduced a conditional update in a couple of our APIs and 
realized that I can't have "using timestamp" and "if column1=?" in the same 
query.

com.datastax.driver.core.exceptions.InvalidQueryException: Cannot provide 
custom timestamp for conditional update

How do I achieve this if I want to override the timestamp in a query with a 
conditional update? Also, does anyone know the reason behind not supporting 
"using timestamp" for conditional updates? I am trying to understand the 
problems this would cause.

Thanks
Praveen



Re: Overriding timestamp with light weight transactions

2015-11-16 Thread Jon Haddad
Perhaps you should fix your clock drift issues instead of trying to use a 
workaround?

> On Nov 16, 2015, at 11:39 AM, Peddi, Praveen  wrote:
> 
> Hi,
> We are using Cassandra 2.0.9, and we currently have a “using timestamp” clause 
> in all our update queries. We did this to fix occasional issues with NTP 
> drift on AWS. We recently introduced a conditional update in a couple of our 
> APIs and realized that I can’t have “using timestamp” and “if column1=?” in 
> the same query.
> 
> com.datastax.driver.core.exceptions.InvalidQueryException: Cannot provide 
> custom timestamp for conditional update
> 
> How do I achieve this if I want to override the timestamp in a query with a 
> conditional update? Also, does anyone know the reason behind not supporting 
> “using timestamp” for conditional updates? I am trying to understand the 
> problems this would cause.
> 
> Thanks
> Praveen



Generalized download link?

2015-11-16 Thread Bryan Cheng
Hey list,

Is there a URL available for downloading Cassandra that abstracts away the
mirror selection (eg. just 302's to a mirror URL?) We've got a few
self-configuring Cassandras (for example, the Docker container our devs
use), and using the same mirror for the containers or for any bulk
provisioning operation seems like bad table manners.


Overriding timestamp with light weight transactions

2015-11-16 Thread Peddi, Praveen
Hi,
We are using Cassandra 2.0.9, and we currently have a "using timestamp" clause in 
all our update queries. We did this to fix occasional issues with NTP drift on 
AWS. We recently introduced a conditional update in a couple of our APIs and 
realized that I can't have "using timestamp" and "if column1=?" in the same 
query.

com.datastax.driver.core.exceptions.InvalidQueryException: Cannot provide 
custom timestamp for conditional update

How do I achieve this if I want to override the timestamp in a query with a 
conditional update? Also, does anyone know the reason behind not supporting 
"using timestamp" for conditional updates? I am trying to understand the 
problems this would cause.
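
For concreteness, here is the failing combination next to the two forms that
are each accepted on their own (a sketch; table and column names are
hypothetical):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class LwtTimestampDemo {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("ks");
        // Accepted: plain update with a client-supplied write timestamp.
        session.execute("UPDATE t USING TIMESTAMP 1447712842944000"
                + " SET col1 = 'a' WHERE key = 'k'");
        // Accepted: conditional update; Paxos picks the timestamp itself.
        session.execute("UPDATE t SET col1 = 'b' WHERE key = 'k' IF col1 = 'a'");
        // Rejected with InvalidQueryException: both together.
        session.execute("UPDATE t USING TIMESTAMP 1447712842944000"
                + " SET col1 = 'c' WHERE key = 'k' IF col1 = 'b'");
        cluster.close();
    }
}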

Thanks
Praveen


Re: Deletes Reappeared even when nodes are not down

2015-11-16 Thread Robert Coli
On Sat, Nov 14, 2015 at 9:58 AM, Peddi, Praveen  wrote:

> I checked tpstats and there are no dropped mutations (though I checked it
> after restating the affected nodes). If the problem occurs again, I will
> check tpstats again. Is there any stat that shows failed hints? The only
> abnormality I see is 1 flush writer got blocked (All time blocked = 1).
>

If there are no dropped mutations on any nodes, you haven't stored any
hints.

I think the only place you can see debugging info regarding hints is in the
system.log.

=Rob


Re: handling down node cassandra 2.0.15

2015-11-16 Thread Anuj Wadehra
Did you set the JVM_OPTS to replace the address? That is usually the error I get 
when I forget to set replace_address in cassandra-env.

 

JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=address_of_dead_node"

 

 

From: Anishek Agarwal [mailto:anis...@gmail.com] 
Sent: Monday, November 16, 2015 9:25 AM
To: user@cassandra.apache.org
Subject: Re: handling down node cassandra 2.0.15

 

Nope, it's not.

 

On Mon, Nov 16, 2015 at 5:48 PM, sai krishnam raju potturi 
 wrote:

Is that a seed node?

 

On Mon, Nov 16, 2015, 05:21 Anishek Agarwal  wrote:

Hello,

 

We have a 3-node cluster, and one of the nodes went down due to what looks like a 
hardware memory failure. We followed the steps below after the node had been down 
for more than the default value of max_hint_window_in_ms.

 

I tried to restart cassandra by following the steps @

 

http://docs.datastax.com/en/cassandra/1.2/cassandra/operations/ops_replace_node_t.html
 
http://blog.alteroot.org/articles/2014-03-12/replace-a-dead-node-in-cassandra.html
 

except the "clear data" part, as it was not specified in the second blog above.

 

I was trying to restart the same node that went down; however, I did not see the 
log messages from "StorageService" that [2] says should appear.

 

Instead, it just tried to replay and then stopped with the error message 
below:

 

ERROR [main] 2015-11-16 15:27:22,944 CassandraDaemon.java (line 584) Exception 
encountered during startup

java.lang.RuntimeException: Cannot replace address with a node that is already 
bootstrapped

 

Can someone please tell me if there is something I am doing wrong here?

 

Thanks for the help in advance. 

 

Regards,

Anishek 

 





Re: handling down node cassandra 2.0.15

2015-11-16 Thread Anuj Wadehra
Hi Anishek,
In my opinion, you already have the data and bootstrapping is not needed here. 
You can set auto_bootstrap to false in cassandra.yaml and, once Cassandra is 
restarted, you should run repair to fix any inconsistent data.

Thanks
Anuj
 


On Monday, 16 November 2015 10:34 PM, Josh Smith 
 wrote:

Did you set the JVM_OPTS to replace the address? That is usually the error I get 
when I forget to set replace_address in cassandra-env.

JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=address_of_dead_node"

From: Anishek Agarwal [mailto:anis...@gmail.com]
Sent: Monday, November 16, 2015 9:25 AM
To: user@cassandra.apache.org
Subject: Re: handling down node cassandra 2.0.15

Nope, it's not.

On Mon, Nov 16, 2015 at 5:48 PM, sai krishnam raju potturi  wrote:

Is that a seed node?

On Mon, Nov 16, 2015, 05:21 Anishek Agarwal  wrote:

Hello,

We have a 3-node cluster, and one of the nodes went down due to what looks like 
a hardware memory failure. We followed the steps below after the node had been 
down for more than the default value of max_hint_window_in_ms.

I tried to restart Cassandra by following the steps at:

   1. http://docs.datastax.com/en/cassandra/1.2/cassandra/operations/ops_replace_node_t.html
   2. http://blog.alteroot.org/articles/2014-03-12/replace-a-dead-node-in-cassandra.html

except the "clear data" part, as it was not specified in the second blog above.

I was trying to restart the same node that went down; however, I did not see the 
log messages from "StorageService" that [2] says should appear.

Instead, it just tried to replay and then stopped with the error message below:

ERROR [main] 2015-11-16 15:27:22,944 CassandraDaemon.java (line 584) Exception 
encountered during startup
java.lang.RuntimeException: Cannot replace address with a node that is already 
bootstrapped

Can someone please tell me if there is something I am doing wrong here?

Thanks for the help in advance.

Regards,
Anishek

RE: handling down node cassandra 2.0.15

2015-11-16 Thread Josh Smith
Did you set the JVM_OPTS to replace the address? That is usually the error I get 
when I forget to set replace_address in cassandra-env.

JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=address_of_dead_node"


From: Anishek Agarwal [mailto:anis...@gmail.com]
Sent: Monday, November 16, 2015 9:25 AM
To: user@cassandra.apache.org
Subject: Re: handling down node cassandra 2.0.15

Nope, it's not.

On Mon, Nov 16, 2015 at 5:48 PM, sai krishnam raju potturi 
<pskraj...@gmail.com> wrote:

Is that a seed node?

On Mon, Nov 16, 2015, 05:21 Anishek Agarwal <anis...@gmail.com> wrote:
Hello,

We have a 3-node cluster, and one of the nodes went down due to what looks like a 
hardware memory failure. We followed the steps below after the node had been down 
for more than the default value of max_hint_window_in_ms.

I tried to restart cassandra by following the steps @


  1.  
http://docs.datastax.com/en/cassandra/1.2/cassandra/operations/ops_replace_node_t.html
  2.  
http://blog.alteroot.org/articles/2014-03-12/replace-a-dead-node-in-cassandra.html
except the "clear data" part, as it was not specified in the second blog above.

I was trying to restart the same node that went down; however, I did not see the 
log messages from "StorageService" that [2] says should appear.

Instead, it just tried to replay and then stopped with the error message 
below:

ERROR [main] 2015-11-16 15:27:22,944 CassandraDaemon.java (line 584) Exception 
encountered during startup
java.lang.RuntimeException: Cannot replace address with a node that is already 
bootstrapped

Can someone please tell me if there is something I am doing wrong here?

Thanks for the help in advance.

Regards,
Anishek



Re: unsubscribe

2015-11-16 Thread Raj Chudasama
no.. we can't allow you to leave.

On Mon, Nov 16, 2015 at 4:25 AM, Tanuj Kumar  wrote:

>


Re: handling down node cassandra 2.0.15

2015-11-16 Thread Anishek Agarwal
Nope, it's not.

On Mon, Nov 16, 2015 at 5:48 PM, sai krishnam raju potturi <
pskraj...@gmail.com> wrote:

> Is that a seed node?
>
> On Mon, Nov 16, 2015, 05:21 Anishek Agarwal  wrote:
>
>> Hello,
>>
>> We have a 3-node cluster, and one of the nodes went down due to what looks
>> like a hardware memory failure. We followed the steps below after the
>> node had been down for more than the default value of *max_hint_window_in_ms*
>>
>> I tried to restart cassandra by following the steps @
>>
>>
>>1.
>>
>> http://docs.datastax.com/en/cassandra/1.2/cassandra/operations/ops_replace_node_t.html
>>2.
>>
>> http://blog.alteroot.org/articles/2014-03-12/replace-a-dead-node-in-cassandra.html
>>
>> *except the "clear data" part, as it was not specified in the second blog
>> above.*
>>
>> I was trying to restart the same node that went down; however, I did not
>> see the log messages from "StorageService" that [2] says should appear.
>>
>> Instead, it just tried to replay and then stopped with the error message
>> below:
>>
>> *ERROR [main] 2015-11-16 15:27:22,944 CassandraDaemon.java (line 584)
>> Exception encountered during startup*
>> *java.lang.RuntimeException: Cannot replace address with a node that is
>> already bootstrapped*
>>
>> Can someone please tell me if there is something I am doing wrong here?
>>
>> Thanks for the help in advance.
>>
>> Regards,
>> Anishek
>>
>


Re: Convert timeuuid in timestamp programmatically

2015-11-16 Thread Marlon Patrick
Oh, thanks. I had misunderstood what that function does. I will test it soon.

2015-11-16 9:43 GMT-03:00 Laing, Michael :

> http://www.tutorialspoint.com/java/util/uuid_timestamp.htm
>
> On Mon, Nov 16, 2015 at 7:38 AM, Marlon Patrick  > wrote:
>
>> Hi Dongfeng,
>>
>> I'm interested in converting an already-generated timeuuid into a timestamp,
>> similar to Cassandra's dateOf function, but in Java code. Your
>> suggestion is for generating a timeuuid.
>>
>> 2015-11-15 19:42 GMT-03:00 Dongfeng Lu :
>>
>>> You can use long java.util.UUID.timestamp().
>>>
>>>
>>>
>>> On Sunday, November 15, 2015 9:20 AM, Marlon Patrick <
>>> marlonpatric...@gmail.com> wrote:
>>>
>>>
>>> Hi guys,
>>>
>>> Is there any way to convert a timeuuid in timestamp (dateOf)
>>> programmatically using DataStax java driver?
>>>
>>> --
>>> Atenciosamente,
>>>
>>> Marlon Patrick
>>>
>>>
>>>
>>
>>
>> --
>> Atenciosamente,
>>
>> Marlon Patrick
>>
>
>


-- 
Atenciosamente,

Marlon Patrick


Re: Convert timeuuid in timestamp programmatically

2015-11-16 Thread Laing, Michael
http://www.tutorialspoint.com/java/util/uuid_timestamp.htm
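
In code, the conversion that page describes boils down to a few lines; a sketch
(the constant is the number of 100-ns intervals between the UUID epoch,
1582-10-15, and the Unix epoch; newer driver versions also expose this as
UUIDs.unixTimestamp()):

import java.util.Date;
import java.util.UUID;

public class TimeuuidUtil {
    // 100-ns intervals between 1582-10-15 (UUID epoch) and 1970-01-01 (Unix epoch)
    private static final long UUID_EPOCH_OFFSET = 0x01b21dd213814000L;

    /** Java equivalent of CQL dateOf(): epoch millis from a version-1 UUID. */
    public static long unixTimestampMillis(UUID timeuuid) {
        return (timeuuid.timestamp() - UUID_EPOCH_OFFSET) / 10000;
    }

    public static void main(String[] args) {
        UUID u = UUID.fromString("50554d6e-29bb-11e5-b345-feff819cdc9f");
        System.out.println(new Date(unixTimestampMillis(u))); // embedded wall-clock time
    }
}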

On Mon, Nov 16, 2015 at 7:38 AM, Marlon Patrick 
wrote:

> Hi Dongfeng,
>
> I'm interested in converting an already-generated timeuuid into a timestamp,
> similar to Cassandra's dateOf function, but in Java code. Your
> suggestion is for generating a timeuuid.
>
> 2015-11-15 19:42 GMT-03:00 Dongfeng Lu :
>
>> You can use long java.util.UUID.timestamp().
>>
>>
>>
>> On Sunday, November 15, 2015 9:20 AM, Marlon Patrick <
>> marlonpatric...@gmail.com> wrote:
>>
>>
>> Hi guys,
>>
>> Is there any way to convert a timeuuid in timestamp (dateOf)
>> programmatically using DataStax java driver?
>>
>> --
>> Atenciosamente,
>>
>> Marlon Patrick
>>
>>
>>
>
>
> --
> Atenciosamente,
>
> Marlon Patrick
>


Re: Convert timeuuid in timestamp programmatically

2015-11-16 Thread Marlon Patrick
Hi Dongfeng,

I'm interested in converting an already-generated timeuuid into a timestamp,
similar to Cassandra's dateOf function, but in Java code. Your
suggestion is for generating a timeuuid.

2015-11-15 19:42 GMT-03:00 Dongfeng Lu :

> You can use long java.util.UUID.timestamp().
>
>
>
> On Sunday, November 15, 2015 9:20 AM, Marlon Patrick <
> marlonpatric...@gmail.com> wrote:
>
>
> Hi guys,
>
> Is there any way to convert a timeuuid in timestamp (dateOf)
> programmatically using DataStax java driver?
>
> --
> Atenciosamente,
>
> Marlon Patrick
>
>
>


-- 
Atenciosamente,

Marlon Patrick


Re: handling down node cassandra 2.0.15

2015-11-16 Thread sai krishnam raju potturi
Is that a seed node?

On Mon, Nov 16, 2015, 05:21 Anishek Agarwal  wrote:

> Hello,
>
> We have a 3-node cluster, and one of the nodes went down due to what looks
> like a hardware memory failure. We followed the steps below after the
> node had been down for more than the default value of *max_hint_window_in_ms*
>
> I tried to restart cassandra by following the steps @
>
>
>1.
>
> http://docs.datastax.com/en/cassandra/1.2/cassandra/operations/ops_replace_node_t.html
>2.
>
> http://blog.alteroot.org/articles/2014-03-12/replace-a-dead-node-in-cassandra.html
>
> *except the "clear data" part, as it was not specified in the second blog
> above.*
>
> I was trying to restart the same node that went down; however, I did not
> see the log messages from "StorageService" that [2] says should appear.
>
> Instead, it just tried to replay and then stopped with the error message
> below:
>
> *ERROR [main] 2015-11-16 15:27:22,944 CassandraDaemon.java (line 584)
> Exception encountered during startup*
> *java.lang.RuntimeException: Cannot replace address with a node that is
> already bootstrapped*
>
> Can someone please tell me if there is something I am doing wrong here?
>
> Thanks for the help in advance.
>
> Regards,
> Anishek
>


handling down node cassandra 2.0.15

2015-11-16 Thread Anishek Agarwal
Hello,

We have a 3-node cluster, and one of the nodes went down due to what looks like
a hardware memory failure. We followed the steps below after the
node had been down for more than the default value of *max_hint_window_in_ms*

I tried to restart cassandra by following the steps @


   1.
   
http://docs.datastax.com/en/cassandra/1.2/cassandra/operations/ops_replace_node_t.html
   2.
   
http://blog.alteroot.org/articles/2014-03-12/replace-a-dead-node-in-cassandra.html

*except the "clear data" part, as it was not specified in the second blog above.*

I was trying to restart the same node that went down; however, I did not see
the log messages from "StorageService" that [2] says should appear.

Instead, it just tried to replay and then stopped with the error message
below:

*ERROR [main] 2015-11-16 15:27:22,944 CassandraDaemon.java (line 584)
Exception encountered during startup*
*java.lang.RuntimeException: Cannot replace address with a node that is
already bootstrapped*

Can someone please tell me if there is something I am doing wrong here?

Thanks for the help in advance.

Regards,
Anishek


unsubscribe

2015-11-16 Thread Tanuj Kumar



Help diagnosing performance issue

2015-11-16 Thread Antoine Bonavita

Hello,

We have a performance problem when trying to ramp up Cassandra (as a Mongo 
replacement) on a very specific use case. We store a blob indexed 
by a key and expire it after a few days:


CREATE TABLE views.views (
viewkey text PRIMARY KEY,
value blob
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'max_sstable_age_days': '10', 'class': 
'org.apache.cassandra.db.compaction.DateTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}

AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 432000
AND gc_grace_seconds = 172800
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';

Our workload is mostly writes (approx. 96 writes for every 4 reads). Each 
value is about 3 kB. Reads are mostly for "fresh" data (i.e., data that was 
written recently).
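
For reference, a sketch of the kind of write involved (illustrative driver 
code, not the actual client; expiry comes from the table's 
default_time_to_live, so the statement itself carries no TTL):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;
import java.nio.ByteBuffer;

public class ViewsWriter {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("views");
        PreparedStatement ps = session.prepare(
                "INSERT INTO views (viewkey, value) VALUES (?, ?)");
        // ~3 kB blob per key; expiry handled by default_time_to_live (432000 s)
        session.execute(ps.bind("some-key", ByteBuffer.wrap(new byte[3072])));
        cluster.close();
    }
}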


I have a 4-node cluster with spinning disks and a replication factor of 
3. For some historical reason, 2 of the machines have 32G of RAM and the 
other 2 have 64G.


This is for the context.

Now, when I use this cluster at about 600 writes per second per node, 
everything is fine, but when I try to ramp it up (1200 writes per second 
per node), the read latencies stay fine on the 64G machines but start 
going crazy on the 32G machines. Looking at disk IOPS, this is 
clearly related:

* On 32G machines, read iops go from 200 to 1400.
* On 64G machines, read iops go from 10 to 20.

So I thought this was related to the memtable being flushed "too early" 
on the 32G machines. I increased memtable_heap_space_in_mb to 4G on the 32G 
machines, but it did not change anything.


At this point I'm kind of lost and could use any help in understanding 
why I'm generating so many read IOPS on the 32G machines compared to the 
64G ones, and why it goes crazy (x7) when I merely double the load.


Thanks,

A.

--
Antoine Bonavita (anto...@stickyads.tv) - CTO StickyADS.tv
Tel: +33 6 34 33 47 36/+33 9 50 68 21 32
NEW YORK | LONDON | HAMBURG | PARIS | MONTPELLIER | MILAN | MADRID