Deployment of .NET application in production is erroring out

2015-08-28 Thread Asit KAUSHIK
Hi All,

Please excuse my limited knowledge. We have a .NET application whose
backend database is Cassandra. We have deployed the application into
production behind a firewall, and we have opened port 9042 from our web
server to the Cassandra cluster, but we are still getting the error
below:

INFO  [SharedPool-Worker-1] 2015-08-27 11:07:20,679 Message.java:532 - Unexpected exception during request; channel = [id: 0x12af6143, /192.168.16.198:2159 => /10.253.2.53:9042]
java.io.IOException: Error while read(...): Connection timed out
    at io.netty.channel.epoll.Native.readAddress(Native Method) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.doReadBytes(EpollSocketChannel.java:675) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:714) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:326) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]


We have a five-node cluster and have added static routes to all the nodes
for port 9042.

Do we need to open more ports? The connection is established but then
times out.
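
One way to narrow this down is to verify that the web server can reach
every node on 9042, not just one of them. A minimal sketch (the first IP
is the node from the log above; the rest are placeholders for the other
cluster nodes):

for node in 10.253.2.53 10.253.2.54 10.253.2.55 10.253.2.56 10.253.2.57; do
    nc -zv -w 5 "$node" 9042
done

If any node fails the check, the firewall rules only partially cover the
cluster.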

Early help would be highly appreciated.

Regards
Asit


TTL question

2015-08-28 Thread Tommy Stendahl

Hi,

I did a small test using TTL but I didn't get the result I expected.

I did this in cqlsh:

cqlsh> CREATE TABLE foo.bar ( key int, cluster int, col int, PRIMARY KEY (key, cluster) );
cqlsh> INSERT INTO foo.bar (key, cluster) VALUES ( 1, 1 );
cqlsh> SELECT * FROM foo.bar ;

 key | cluster | col
-----+---------+------
   1 |       1 | null

(1 rows)
cqlsh> INSERT INTO foo.bar (key, cluster, col) VALUES ( 1, 1, 1 ) USING TTL 10;
cqlsh> SELECT * FROM foo.bar ;

 key | cluster | col
-----+---------+-----
   1 |       1 |   1

(1 rows)

(wait more than 10 seconds)

cqlsh> SELECT * FROM foo.bar ;

 key | cluster | col
-----+---------+-----

(0 rows)

Is this really correct?
I expected the result of the last SELECT to be:

 key | cluster | col
-----+---------+------
   1 |       1 | null

(1 rows)


Regards,
Tommy


Re: TTL question

2015-08-28 Thread Marcin Pietraszek
Please look at the primary key you've defined. The second mutation has
exactly the same primary key, so it overwrote the row you previously
had.
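
The second INSERT rewrote the entire row with TTL 10 (including the CQL
row marker), not just col. One way to see that, sketched against the same
foo.bar table: re-insert just the key columns without a TTL before the
expiry, and the row survives the TTL as a null:

cqlsh> INSERT INTO foo.bar (key, cluster, col) VALUES ( 1, 1, 1 ) USING TTL 10;
cqlsh> INSERT INTO foo.bar (key, cluster) VALUES ( 1, 1 );

(wait more than 10 seconds)

cqlsh> SELECT * FROM foo.bar ;

 key | cluster | col
-----+---------+------
   1 |       1 | null

(1 rows)

The second statement writes a fresh row marker with no TTL, so only col
expires.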

On Fri, Aug 28, 2015 at 1:14 PM, Tommy Stendahl wrote:
> Hi,
>
> I did a small test using TTL but I didn't get the result I expected.
> [...]
> Is this really correct?



-- 
--
Marcin Pietraszek


Re: TTL question

2015-08-28 Thread Tommy Stendahl
Yes, I understand that, but I think this gives strange behaviour.
Having values only in the primary key columns is perfectly valid, so why
should the primary key be deleted by the TTL on the non-key column?


/Tommy

On 2015-08-28 13:19, Marcin Pietraszek wrote:

Please look at the primary key you've defined. The second mutation has
exactly the same primary key, so it overwrote the row you previously
had.

On Fri, Aug 28, 2015 at 1:14 PM, Tommy Stendahl wrote:

I did a small test using TTL but I didn't get the result I expected.
[...]
Is this really correct?







RE: TTL question

2015-08-28 Thread Jacques-Henri Berthemet
What if you use an update statement in the second query?

--
Jacques-Henri Berthemet

-----Original Message-----
From: Tommy Stendahl [mailto:tommy.stend...@ericsson.com] 
Sent: Friday, August 28, 2015 13:34
To: user@cassandra.apache.org
Subject: Re: TTL question

Yes, I understand that, but I think this gives strange behaviour.
Having values only in the primary key columns is perfectly valid, so why
should the primary key be deleted by the TTL on the non-key column?

/Tommy

On 2015-08-28 13:19, Marcin Pietraszek wrote:
> Please look at the primary key you've defined. The second mutation has
> exactly the same primary key, so it overwrote the row you previously
> had.
>
> On Fri, Aug 28, 2015 at 1:14 PM, Tommy Stendahl wrote:
>> I did a small test using TTL but I didn't get the result I expected.
>> [...]
>> Is this really correct?



Re: TTL question

2015-08-28 Thread Tommy Stendahl
Thanks, that was the problem. When I think about it, it makes sense that I
should use UPDATE in this scenario and not INSERT.


cqlsh> CREATE TABLE foo.bar ( key int, cluster int, col int, PRIMARY KEY (key, cluster) );
cqlsh> INSERT INTO foo.bar (key, cluster) VALUES ( 1, 1 );
cqlsh> SELECT * FROM foo.bar ;

 key | cluster | col
-----+---------+------
   1 |       1 | null

(1 rows)
cqlsh> UPDATE foo.bar USING TTL 10 SET col = 1 WHERE key = 1 AND cluster = 1;
cqlsh> SELECT * FROM foo.bar ;

 key | cluster | col
-----+---------+-----
   1 |       1 |   1

(1 rows)

(wait more than 10 seconds)

cqlsh> SELECT * FROM foo.bar ;

 key | cluster | col
-----+---------+------
   1 |       1 | null

(1 rows)

/Tommy

On 2015-08-28 14:20, Jacques-Henri Berthemet wrote:

What if you use an update statement in the second query?

[...]







ccm issue

2015-08-28 Thread Cyril Scetbon
Hi guys,

I'm running into some issues with ccm and the unit tests in java-driver. Here is what I see:

tail -f /tmp/1440780247703-0/test/node5/logs/system.log
 INFO [STREAM-IN-/127.0.1.3] 2015-08-28 16:45:06,009 StreamResultFuture.java 
(line 220) [Stream #22d9e9f0-4da4-11e5-9409-5d8a0f12fefd] All sessions completed
 INFO [main] 2015-08-28 16:45:06,009 StorageService.java (line 1014) Bootstrap 
completed! for the tokens [8907077543698545973]
 INFO [main] 2015-08-28 16:45:06,010 ColumnFamilyStore.java (line 785) 
Enqueuing flush of Memtable-local@1738175520(84/840 serialized/live bytes, 8 
ops)
 INFO [FlushWriter:1] 2015-08-28 16:45:06,013 Memtable.java (line 331) Writing 
Memtable-local@1738175520(84/840 serialized/live bytes, 8 ops)
 INFO [FlushWriter:1] 2015-08-28 16:45:06,072 Memtable.java (line 371) 
Completed flushing 
/tmp/1440780247703-0/test/node5/data/system/local/system-local-jb-6-Data.db 
(117 bytes) for commitlog position ReplayPosition(segmentId=1440780271059, 
position=143914)
 INFO [main] 2015-08-28 16:45:06,074 ColumnFamilyStore.java (line 785) 
Enqueuing flush of Memtable-local@1171696270(50/500 serialized/live bytes, 4 
ops)
 INFO [FlushWriter:1] 2015-08-28 16:45:06,074 Memtable.java (line 331) Writing 
Memtable-local@1171696270(50/500 serialized/live bytes, 4 ops)
 INFO [FlushWriter:1] 2015-08-28 16:45:06,118 Memtable.java (line 371) 
Completed flushing 
/tmp/1440780247703-0/test/node5/data/system/local/system-local-jb-7-Data.db (97 
bytes) for commitlog position ReplayPosition(segmentId=1440780271059, 
position=144080)
 INFO [main] 2015-08-28 16:45:06,122 StorageService.java (line 1499) Node 
/127.0.1.5 state jump to normal
 INFO [main] 2015-08-28 16:45:06,124 CassandraDaemon.java (line 518) Waiting 
for gossip to settle before accepting client requests...
 INFO [main] 2015-08-28 16:45:14,125 CassandraDaemon.java (line 550) No gossip 
backlog; proceeding
 INFO [main] 2015-08-28 16:45:14,187 Server.java (line 155) Starting listening 
for CQL clients on /127.0.1.5:9042...
 INFO [main] 2015-08-28 16:45:14,224 ThriftServer.java (line 99) Using 
TFramedTransport with a max frame size of 15728640 bytes.
 INFO [main] 2015-08-28 16:45:14,225 ThriftServer.java (line 118) Binding 
thrift service to /127.0.1.5:9160
 INFO [main] 2015-08-28 16:45:14,233 TServerCustomFactory.java (line 47) Using 
synchronous/threadpool thrift server on 127.0.1.5 : 9160
 INFO [Thread-10] 2015-08-28 16:45:14,233 ThriftServer.java (line 135) 
Listening for thrift clients...


However, ccm doesn't see that node5 is running and listening:

0  [Scheduled Tasks-0] INFO  com.datastax.driver.core.Cluster  - New 
Cassandra host /127.0.1.5:9042 added
53833  [main] INFO  com.datastax.driver.core.TestUtils  - 127.0.1.5 is not 
UP after 60s
69528  [main] INFO  com.datastax.driver.core.CCMBridge  - Error during 
tests, kept C* logs in /tmp/1440780247703-0

But at the same time I can see that node5 is running, and I can also
connect to it:

# netstat -lnt | grep 9042
tcp        0      0 127.0.1.5:9042          0.0.0.0:*               LISTEN
tcp        0      0 127.0.1.3:9042          0.0.0.0:*               LISTEN
tcp        0      0 127.0.1.4:9042          0.0.0.0:*               LISTEN
tcp        0      0 127.0.1.2:9042          0.0.0.0:*               LISTEN
tcp        0      0 127.0.1.1:9042          0.0.0.0:*               LISTEN
root@ip-10-0-1-97:~# nc 127.0.1.5 9042
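
For completeness, ccm's own view of the node can be checked from the shell
as well (a sketch using ccm's standard CLI, assuming the test cluster is
the currently active one):

ccm status        # one line per node with ccm's view of its state
ccm node5 show    # node5's state plus its binary (9042), storage and thrift addresses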

After the error, ccm then brings all the nodes down to end the unit tests:

INFO [GossipStage:1] 2015-08-28 16:45:31,864 Gossiper.java (line 863) 
InetAddress /127.0.1.1 is now DOWN
INFO [GossipStage:1] 2015-08-28 16:45:34,989 Gossiper.java (line 863) 
InetAddress /127.0.1.3 is now DOWN
INFO [GossipStage:1] 2015-08-28 16:45:38,087 Gossiper.java (line 863) 
InetAddress /127.0.1.2 is now DOWN
INFO [StorageServiceShutdownHook] 2015-08-28 16:45:39,181 ThriftServer.java 
(line 141) Stop listening to thrift clients
INFO [StorageServiceShutdownHook] 2015-08-28 16:45:39,200 Server.java (line 
181) Stop listening for CQL clients

Any idea? Any known issue?

[RELEASE] Apache Cassandra 2.1.9 released

2015-08-28 Thread Jake Luciani
The Cassandra team is pleased to announce the release of Apache Cassandra
version 2.1.9.

Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.

 http://cassandra.apache.org/

Downloads of source and binary distributions are listed in our download
section:

 http://cassandra.apache.org/download/

This version is a bug fix release[1] on the 2.1 series. As always, please
pay attention to the release notes[2] and let us know[3] if you encounter
any problems.

Enjoy!

[1]: http://goo.gl/xnYwFa (CHANGES.txt)
[2]: http://goo.gl/QDqPhN (NEWS.txt)
[3]: https://issues.apache.org/jira/browse/CASSANDRA


Re: TTL question

2015-08-28 Thread Robert Coli
On Fri, Aug 28, 2015 at 6:27 AM, Tommy Stendahl  wrote:

> Thanks, that was the problem. When I think about it, it makes sense that
> I should use UPDATE in this scenario and not INSERT.


Per Sylvain on an old thread:

"
INSERT and UPDATE are not totally orthogonal in CQL and you should use
INSERT for actual insertion and UPDATE for updates (granted, the database
will not reject your query if you break this rule but it's nonetheless the
way it's intended to be used).
"

=Rob


Fwd: Re : Restoring nodes in a new datacenter, from snapshots in an existing datacenter

2015-08-28 Thread sai krishnam raju potturi
Hi,
We have a Cassandra cluster with vnodes spanning three data centers, and we
take backups of the snapshots from one datacenter.
In a doomsday scenario, we want to restore a downed datacenter from
snapshots taken in another datacenter. We have the same number of nodes in
each datacenter.

1: We know this requires copying the snapshots and their corresponding
token ranges to the nodes in the new datacenter, and running nodetool
refresh (see the sketch after the diagram below).

2: The question is: we will now have two datacenters with the exact same
token ranges. Will that cause any problems?

DC1: Node-1: token1  .. token10
     Node-2: token11 .. token20
     Node-3: token21 .. token30
     Node-4: token31 .. token40

DC2: Node-1: token1  .. token10
     Node-2: token11 .. token20
     Node-3: token21 .. token30
     Node-4: token31 .. token40
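
For step 1, the per-table reload on each node of the new datacenter would
look like this once the snapshot files have been copied into the table's
data directory (keyspace and table names are placeholders):

nodetool refresh my_keyspace my_table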


thanks
Sai


Re: How to get the peer's IP address when writing failed

2015-08-28 Thread Nate McCall
Unfortunately, the addresses/DC of the replicas are not available on the
exception hierarchy within Cassandra.

FWIW, the DataStax Java driver (most native-protocol drivers, actually)
manages membership dynamically by acting on cluster health events sent back
over the channel by the native transport. Keeping this intelligence down in
the driver makes for significantly less complex cluster management in an
application.

On Wed, Aug 26, 2015 at 3:51 AM, Lu, Boying  wrote:

> Hi, All,
>
>
>
> We have a Cassandra environment with two connected DCs, and our
> consistency level for write operations is EACH_QUORUM.
>
> So if one DC is down, the write fails and we get a
> TokenRangeOfflineException on the client side (we use the Netflix Java
> client libraries).
>
>
>
> We want to give more detailed information about this failure, e.g. the IP
> addresses of the broken nodes (the nodes in the broken DC in our case).
>
> We checked the TokenRangeOfflineException and its parent class
> ConnectionException. The only related method is getHost(), but it returns
> the IP address of the local node (the node that issued the write) instead
> of the remote node in the broken DC.
>
>
>
> Does anyone know how to get this information when a write fails?
>
>
>
> Thanks
>
>
> Boying
>
>
>



-- 
-
Nate McCall
Austin, TX
@zznate

Co-Founder & Sr. Technical Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com


Re : Decommissioned node appears in logs, and is sometimes marked as "UNREACHABLE" in `nodetool describecluster`

2015-08-28 Thread sai krishnam raju potturi
hi;
we decommissioned some nodes in a datacenter a while back. Those nodes keep
showing up in the logs, and are also sometimes marked as UNREACHABLE when
`nodetool describecluster` is run.

However, these nodes do not show up in `nodetool status` or
`nodetool ring`.

Below are a couple of lines from the logs.

2015-08-27 04:38:16,180 [GossipStage:473] INFO Gossiper InetAddress /10.0.0.1 is now DOWN
2015-08-27 04:38:16,183 [GossipStage:473] INFO StorageService Removing tokens [85070591730234615865843651857942052865] for /10.0.0.1

thanks
Sai


Re: Re : Decommissioned node appears in logs, and is sometimes marked as "UNREACHABLE" in `nodetool describecluster`

2015-08-28 Thread Nate McCall
Do they show up in nodetool gossipinfo?

Either way, you probably need to invoke Gossiper.unsafeAssassinateEndpoint
via JMX, as described in step 1 here:
http://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_gossip_purge.html
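
A sketch of that call using jmxterm as the JMX client (the jar name and the
default JMX port 7199 are assumptions; point it at any live node):

echo "run -b org.apache.cassandra.net:type=Gossiper unsafeAssassinateEndpoint 10.0.0.1" | \
    java -jar jmxterm-1.0-alpha-4-uber.jar -l localhost:7199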

On Fri, Aug 28, 2015 at 1:32 PM, sai krishnam raju potturi <
pskraj...@gmail.com> wrote:

> we decommissioned some nodes in a datacenter a while back. Those nodes
> keep showing up in the logs, and are also sometimes marked as UNREACHABLE
> when `nodetool describecluster` is run.
> [...]
>


-- 
-
Nate McCall
Austin, TX
@zznate

Co-Founder & Sr. Technical Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com


Re: Re : Restoring nodes in a new datacenter, from snapshots in an existing datacenter

2015-08-28 Thread Nate McCall
You cannot use identical token ranges. You have to capture the membership
information somewhere for each datacenter, and use that token information
when bringing up the replacement DC.

You can find details on this process here:
http://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_snapshot_restore_new_cluster.html

That process is straightforward, but it can go south pretty quickly if you
miss a step. It's a really good idea to set aside some time to try this out
in a staging/test system and build a runbook for the process, targeting
your specific environment.
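
For the membership capture, something like this on each node yields the
comma-separated token list to put in initial_token on the corresponding
replacement node (a sketch; it assumes the 2.0-era `nodetool info -T`
output, where each vnode token is printed on its own `Token :` line):

nodetool -h <node> info -T | awk '/^Token/ {print $3}' | paste -sd, -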

On Fri, Aug 28, 2015 at 1:12 PM, sai krishnam raju potturi <
pskraj...@gmail.com> wrote:

>
> We have a Cassandra cluster with vnodes spanning three data centers, and
> we take backups of the snapshots from one datacenter. In a doomsday
> scenario, we want to restore a downed datacenter from snapshots taken in
> another datacenter.
> [...]


-- 
-
Nate McCall
Austin, TX
@zznate

Co-Founder & Sr. Technical Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com


Re: Re : Decommissioned node appears in logs, and is sometimes marked as "UNREACHABLE" in `nodetool describecluster`

2015-08-28 Thread Robert Coli
On Fri, Aug 28, 2015 at 11:32 AM, sai krishnam raju potturi <
pskraj...@gmail.com> wrote:

> we decommissioned some nodes in a datacenter a while back. Those nodes
> keep showing up in the logs, and are also sometimes marked as UNREACHABLE
> when `nodetool describecluster` is run.
>

What version of Cassandra?

This happens a lot in 1.0-2.0.

=Rob


Re: Re : Decommissioned node appears in logs, and is sometimes marked as "UNREACHABLE" in `nodetool describecluster`

2015-08-28 Thread sai krishnam raju potturi
We are using DSE on our clusters.

DSE version: 4.6.7
Cassandra version: 2.0.14

thanks
Sai Potturi



On Fri, Aug 28, 2015 at 3:40 PM, Robert Coli  wrote:

> On Fri, Aug 28, 2015 at 11:32 AM, sai krishnam raju potturi <
> pskraj...@gmail.com> wrote:
>
>> we decommissioned some nodes in a datacenter a while back. Those nodes
>> keep showing up in the logs, and are also sometimes marked as UNREACHABLE
>> when `nodetool describecluster` is run.
>>
>
> What version of Cassandra?
>
> This happens a lot in 1.0-2.0.
>
> =Rob
>


Re: Re : Restoring nodes in a new datacenter, from snapshots in an existing datacenter

2015-08-28 Thread sai krishnam raju potturi
Thanks Nate. But regarding our situation: of the three datacenters DC1,
DC2, and DC3, we take backups of snapshots on DC1.

If DC3 were to go down, would we not be able to bring up a new DC4 with the
snapshots and token ranges from DC1?

On Fri, Aug 28, 2015 at 3:19 PM, Nate McCall  wrote:

> You cannot use identical token ranges. You have to capture the membership
> information somewhere for each datacenter, and use that token information
> when bringing up the replacement DC.
> [...]


Upgrade from 2.1.0 to 2.1.9

2015-08-28 Thread Tony Anecito
Hi All, it's been a while since I upgraded, and I wanted to know the steps
to upgrade from 2.1.0 to 2.1.9. I'd also like to know whether I need to
upgrade my Java database driver.

Thanks,
-Tony
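
For reference, the usual rolling-upgrade loop within a minor line looks
roughly like this on each node, one node at a time (a sketch; the package
name and service manager are assumptions, and NEWS.txt should be checked
for version-specific steps first):

nodetool drain                        # flush memtables; the node stops accepting writes
sudo service cassandra stop
sudo apt-get install cassandra=2.1.9  # or your package system's equivalent
sudo service cassandra start
nodetool status                       # confirm the node is back Up/Normal before the next one

Within a minor line such as 2.1.x the SSTable format does not change, so
running nodetool upgradesstables should not be required.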