Re: NullPointerException when running nodetool stopdaemon

2019-04-03 Thread dmngaya



On 2019/02/22 23:31:01, Timothy Palpant  wrote: 
> I am trying to use `nodetool stopdaemon` to stop Cassandra but hitting the
> following error:
> 
> ```
> $ cassandra_ctl nodetool -h 127.0.0.1 -p 5100 stopdaemon
> error: null
> -- StackTrace --
> java.lang.NullPointerException
> at
> org.apache.cassandra.config.DatabaseDescriptor.getDiskFailurePolicy(DatabaseDescriptor.java:1877)
> at
> org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:62)
> at
> org.apache.cassandra.tools.nodetool.StopDaemon.execute(StopDaemon.java:39)
> at org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:254)
> at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:168)
> ```
> 
> This looks very similar to:
> https://issues.apache.org/jira/browse/CASSANDRA-13030
> but I am running v3.11.1, which has that fix in it:
> 
> ```
> $ cassandra_ctl nodetool -h 127.0.0.1 -p 5100 version
> ReleaseVersion: 3.11.1
> ```
> 
> Has anyone else run into this problem, or know of a way to work around it?
> (or am I running the command incorrectly?)
> 
> Thanks!
> Tim
> 

Hello, 
Two months ago, I hit the same issue when I did a minor upgrade from DSE 5.1.8
to 5.1.12.
The error message was:
Jan 23, 2019 10:12:25 AM ClientCommunicatorAdmin restart 
WARNING: Failed to restart: java.io.IOException: Failed to get a RMI stub: 
javax.naming.CommunicationException [Root exception is 
java.rmi.UnmarshalException: error unmarshalling return; nested exception is: 
java.io.WriteAbortedException: writing aborted; 
java.io.NotSerializableException: 
javax.management.remote.rmi.RMIJRMPServerImpl] 
Jan 23, 2019 10:12:25 AM ClientCommunicatorAdmin Checker-run 
WARNING: Failed to check connection: java.rmi.NoSuchObjectException: no such 
object in table 
Jan 23, 2019 10:12:25 AM ClientCommunicatorAdmin Checker-run 
WARNING: stopping 

Suggestion: the Cassandra process did not exit, so I killed it manually
before restarting DSE.

DataStax is working on this bug.

For now, with this version, you can use these two steps:

nodetool -h hostname drain
after it finishes, kill the Java process
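A minimal sketch of that workaround, assuming nodetool is on the PATH and the
Cassandra JVM runs as the cassandra user (adjust the host and user to your setup):

```
#!/usr/bin/env bash
# Drain the node first, then stop the JVM that did not exit on its own.
set -euo pipefail

HOST=127.0.0.1

# Flush memtables and stop accepting writes.
nodetool -h "$HOST" drain

# Find the Cassandra JVM and send it a plain TERM (avoid kill -9 if possible).
CASSANDRA_PID=$(pgrep -u cassandra -f CassandraDaemon)
kill "$CASSANDRA_PID"
```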





Re: Assassinate fails

2019-04-03 Thread Anthony Grasso
Hi Alex,

We wrote a blog post on this topic late last year:
http://thelastpickle.com/blog/2018/09/18/assassinate.html.

In short, you will need to run the assassinate command on each node
simultaneously a number of times in quick succession. This will generate a
number of messages requesting all nodes completely forget there used to be
an entry within the gossip state for the given IP address.
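A rough sketch of what that can look like in practice, assuming SSH access to
every live node and nodetool on each node's PATH (the node list and dead IP
below are placeholders):

```
#!/usr/bin/env bash
# Fire assassinate for the dead IP from every node at roughly the same time,
# several rounds in a row, so every gossip state drops the stale entry.
NODES="192.168.1.9 192.168.1.12 192.168.1.14 192.168.1.17 192.168.1.22"
DEAD_IP="192.168.1.18"

for round in 1 2 3; do
  for node in $NODES; do
    ssh "$node" "nodetool assassinate $DEAD_IP" &   # run in parallel
  done
  wait   # let this round finish before starting the next one
done
```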

Regards,
Anthony

On Thu, 4 Apr 2019 at 03:32, Alex  wrote:

> Same result it seems:
> Welcome to JMX terminal. Type "help" for available commands.
> $>open localhost:7199
> #Connection to localhost:7199 is opened
> $>bean org.apache.cassandra.net:type=Gossiper
> #bean is set to org.apache.cassandra.net:type=Gossiper
> $>run unsafeAssassinateEndpoint 192.168.1.18
> #calling operation unsafeAssassinateEndpoint of mbean
> org.apache.cassandra.net:type=Gossiper
> #RuntimeMBeanException: java.lang.NullPointerException
>
>
> There is not much more to see in the log files:
> WARN  [RMI TCP Connection(10)-127.0.0.1] 2019-04-03 16:25:13,626
> Gossiper.java:575 - Assassinating /192.168.1.18 via gossip
> INFO  [RMI TCP Connection(10)-127.0.0.1] 2019-04-03 16:25:13,627
> Gossiper.java:585 - Sleeping for 3ms to ensure /192.168.1.18 does
> not change
> INFO  [RMI TCP Connection(10)-127.0.0.1] 2019-04-03 16:25:43,628
> Gossiper.java:1029 - InetAddress /192.168.1.18 is now DOWN
> INFO  [RMI TCP Connection(10)-127.0.0.1] 2019-04-03 16:25:43,631
> StorageService.java:2324 - Removing tokens [..] for /192.168.1.18
>
>
>
>
> Le 03.04.2019 17:10, Nick Hatfield a écrit :
> > Run assassinate the old way. It works very well...
> >
> > wget -q -O jmxterm.jar
> >
> http://downloads.sourceforge.net/cyclops-group/jmxterm-1.0-alpha-4-uber.jar
> >
> > java -jar ./jmxterm.jar
> >
> > $>open localhost:7199
> >
> > $>bean org.apache.cassandra.net:type=Gossiper
> >
> > $>run unsafeAssassinateEndpoint 192.168.1.18
> >
> > $>quit
> >
> >
> > Happy deleting
> >
> > -Original Message-
> > From: Alex [mailto:m...@aca-o.com]
> > Sent: Wednesday, April 03, 2019 10:42 AM
> > To: user@cassandra.apache.org
> > Subject: Assassinate fails
> >
> > Hello,
> >
> > Short story:
> > - I had to replace a dead node in my cluster
> > - 1 week after, dead node is still seen as DN by 3 out of 5 nodes
> > - dead node has null host_id
> > - assassinate on dead node fails with error
> >
> > How can I get rid of this dead node ?
> >
> >
> > Long story:
> > I had a 3-node cluster (Cassandra 3.9); one node went dead. I built
> > a new node from scratch and "replaced" the dead node using the
> > information from this page
> >
> https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsReplaceNode.html
> .
> > It looked like the replacement went ok.
> >
> > I added two more nodes to strengthen the cluster.
> >
> > A few days have passed and the dead node is still visible and marked
> > as "down" on 3 of 5 nodes in nodetool status:
> >
> > --  Address   Load   Tokens   Owns (effective)  Host ID
> >   Rack
> > UN  192.168.1.9   16 GiB 256  35.0%
> > 76223d4c-9d9f-417f-be27-cebb791cddcc  rack1
> > UN  192.168.1.12  16.09 GiB  256  34.0%
> > 719601e2-54a6-440e-a379-c9cf2dc20564  rack1
> > UN  192.168.1.14  14.16 GiB  256  32.6%
> > d8017a03-7e4e-47b7-89b9-cd9ec472d74f  rack1
> > UN  192.168.1.17  15.4 GiB   256  34.1%
> > fa238b21-1db1-47dc-bfb7-beedc6c9967a  rack1
> > DN  192.168.1.18  24.3 GiB   256  33.7% null
> >   rack1
> > UN  192.168.1.22  19.06 GiB  256  30.7%
> > 09d24557-4e98-44c3-8c9d-53c4c31066e1  rack1
> >
> > Its host ID is null, so I cannot use nodetool removenode. Moreover
> > nodetool assassinate 192.168.1.18 fails with :
> >
> > error: null
> > -- StackTrace --
> > java.lang.NullPointerException
> >
> > And in system.log:
> >
> > INFO  [RMI TCP Connection(16)-127.0.0.1] 2019-03-27 17:39:38,595
> > Gossiper.java:585 - Sleeping for 3ms to ensure /192.168.1.18 does
> > not change INFO  [CompactionExecutor:547] 2019-03-27 17:39:38,669
> > AutoSavingCache.java:393 - Saved KeyCache (27316 items) in 163 ms INFO
> >  [IndexSummaryManager:1] 2019-03-27 17:40:03,620
> > IndexSummaryRedistribution.java:75 - Redistributing index summaries
> > INFO  [RMI TCP Connection(16)-127.0.0.1] 2019-03-27 17:40:08,597
> > Gossiper.java:1029 - InetAddress /192.168.1.18 is now DOWN INFO  [RMI
> > TCP Connection(16)-127.0.0.1] 2019-03-27 17:40:08,599
> > StorageService.java:2324 - Removing tokens [-1061369577393671924,...]
> > ERROR [GossipStage:1] 2019-03-27 17:40:08,600 CassandraDaemon.java:226
> > - Exception in thread Thread[GossipStage:1,5,main]
> > java.lang.NullPointerException: null
> >
> >
> > In system.peers, the dead node shows and has the same ID as the
> > replacing node :
> >
> > cqlsh> select peer, host_id from system.peers;
> >
> >   peer | host_id
> > --+--
> >   192.168.1.18 |

Re: New user on Ubuntu 18.04 laptop, nodetest status throws NullPointerException

2019-04-03 Thread David Taylor
I'm afraid I get the same error when navigating to /usr/bin and running
./nodetool help

I'm definitely running Java 8 and Cassandra 3.11.4.

I'm wondering if I did something when installing Oracle Java 11 to run
Hadoop that is interfering, but that's all under another username. There is
nothing java-related in my .bashrc or .bash_profile. (Maybe there should
be?)

On Wed, Apr 3, 2019 at 12:38 PM Paul Chandler  wrote:

> On further reading it does look like there may be a problem with your Java
> setup, as others are reporting this with Java 9 and above.
>
> You could try the 3rd answer here and see if this helps:
> https://stackoverflow.com/questions/48193965/cassandra-nodetool-java-lang-nullpointerexception
>
>
>
> On 3 Apr 2019, at 16:55, David Taylor  wrote:
>
> Hi Paul thanks for responding.
>
> I created a ~/.cassandra directory and chmodded it to 777
>
> in /var/log/cassandra/system.log the only non-INFO items are:
> WARN  [main] 2019-04-03 11:47:54,172 StartupChecks.java:136 - jemalloc
> shared library could not be preloaded to speed up memory allocations
> WARN  [main] 2019-04-03 11:47:54,172 StartupChecks.java:169 - JMX is not
> enabled to receive remote connections. Please see cassandra-env.sh for more
> info.
>
> Indeed, I meant nodetool, not nodetest.
>
> Running nodetool status (or nodetool --help) results in the same stack
> trace as before.
>
> On Wed, Apr 3, 2019 at 11:34 AM Paul Chandler  wrote:
>
>> David,
>>
>> When you start cassandra all the logs go to system.log normally in the
>> /var/log/cassandra directory, so you should look there once it has started,
>> to check everything is ok.
>>
>> I assume you mean you ran nodetool status rather than nodetest.
>>
>> The nodetool command stores a history of commands in the directory
>> ~/.cassandra, and from the stack trace you supply it looks like it is
>> failing to create that directory. So I would check the file system
>> permissions there.
>>
>> Thanks
>>
>> Paul Chandler
>>
>>
>> On 3 Apr 2019, at 15:15, David Taylor  wrote:
>>
>> I am running a System87 Oryx Pro laptop with Ubuntu 18.04
>>
>> I had only Oracle Java 11 installed for Hadoop, so I also installed
>> OpenJDK8  with:
>> $ sudo apt-get install openjdk-8-jre
>> and switched to it with
>> $ sudo update-java-alternatives --set
>> path/shown/with/"update-java-alternatives --list"
>>
>> $ java -version
>> openjdk version "1.8.0_191"
>> OpenJDK Runtime Environment (build
>> 1.8.0_191-8u191-b12-2ubuntu0.18.04.1-b12)
>> OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)
>>
>> I installed Cassandra according to the directions on
>> http://cassandra.apache.org/doc/latest/getting_started/installing.html,
>> using the "Install from debian packages" instructions.
>>
>> Now when I run
>> $ sudo service cassandra start
>> There are no errors, no feedback to stdout. I assume that's expected
>> behavior?
>>
>> However, this fails:
>> $ nodetest status
>> error: null
>> -- StackTrace --
>> java.lang.NullPointerException
>> at
>> org.apache.cassandra.config.DatabaseDescriptor.getDiskFailurePolicy(DatabaseDescriptor.java:1892)
>> at
>> org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:82)
>> at org.apache.cassandra.io.util.FileUtils.<clinit>(FileUtils.java:79)
>> at
>> org.apache.cassandra.utils.FBUtilities.getToolsOutputDirectory(FBUtilities.java:860)
>> at org.apache.cassandra.tools.NodeTool.printHistory(NodeTool.java:200)
>> at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:168)
>>
>> Can anyone help me fix this?
>>
>>
>>
>


Re: New user on Ubuntu 18.04 laptop, nodetest status throws NullPointerException

2019-04-03 Thread Paul Chandler
On further reading it does look like there may be a problem with your Java 
setup, as others are reporting this with Java 9 and above.

You could try the 3rd answer here and see if this helps: 
https://stackoverflow.com/questions/48193965/cassandra-nodetool-java-lang-nullpointerexception
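If it helps, a quick way to check which JVM nodetool will actually use, and to
point it at Java 8 for the current shell (paths below are the usual Ubuntu
locations and may differ on your machine):

```
# See which runtime is active and whether JAVA_HOME overrides it.
java -version
echo "JAVA_HOME=${JAVA_HOME:-unset}"
update-java-alternatives --list

# Point this shell at the OpenJDK 8 runtime before running nodetool
# (verify the path against the --list output above).
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
nodetool status
```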



> On 3 Apr 2019, at 16:55, David Taylor  wrote:
> 
> Hi Paul thanks for responding.
> 
> I created a ~/.cassandra directory and chmodded it to 777
> 
> in /var/log/cassandra/system.log the only non-INFO items are:
> WARN  [main] 2019-04-03 11:47:54,172 StartupChecks.java:136 - jemalloc shared 
> library could not be preloaded to speed up memory allocations
> WARN  [main] 2019-04-03 11:47:54,172 StartupChecks.java:169 - JMX is not 
> enabled to receive remote connections. Please see cassandra-env.sh for more 
> info.
> 
> Indeed, I meant nodetool, not nodetest.
> 
> Running nodetool status (or nodetool --help) results in the same stack trace 
> as before.
> 
> On Wed, Apr 3, 2019 at 11:34 AM Paul Chandler  > wrote:
> David,
> 
> When you start cassandra all the logs go to system.log normally in the 
> /var/log/cassandra directory, so you should look there once it has started, 
> to check everything is ok.
> 
> I assume you mean you ran nodetool status rather than nodetest.
> 
> The nodetool command stores a history of commands in the directory 
> ~/.cassandra, and from the stack trace you supply it looks like it is failing 
> to create that directory. So I would check the file system permissions there.
> 
> Thanks 
> 
> Paul Chandler
> 
> 
>> On 3 Apr 2019, at 15:15, David Taylor > > wrote:
>> 
>> I am running a System87 Oryx Pro laptop with Ubuntu 18.04
>> 
>> I had only Oracle Java 11 installed for Hadoop, so I also installed OpenJDK8 
>>  with:
>> $ sudo apt-get install openjdk-8-jre
>> and switched to it with
>> $ sudo update-java-alternatives --set 
>> path/shown/with/"update-java-alternatives --list"
>> 
>> $ java -version
>> openjdk version "1.8.0_191"
>> OpenJDK Runtime Environment (build 1.8.0_191-8u191-b12-2ubuntu0.18.04.1-b12)
>> OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)
>> 
>> I installed Cassandra according to the directions on 
>> http://cassandra.apache.org/doc/latest/getting_started/installing.html 
>> , 
>> using the "Install from debian packages" instructions.
>> 
>> Now when I run
>> $ sudo service cassandra start
>> There are no errors, no feedback to stdout. I assume that's expected 
>> behavior?
>> 
>> However, this fails:
>> $ nodetest status
>> error: null
>> -- StackTrace --
>> java.lang.NullPointerException
>>  at 
>> org.apache.cassandra.config.DatabaseDescriptor.getDiskFailurePolicy(DatabaseDescriptor.java:1892)
>>  at 
>> org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:82)
>>  at org.apache.cassandra.io.util.FileUtils.<clinit>(FileUtils.java:79)
>>  at 
>> org.apache.cassandra.utils.FBUtilities.getToolsOutputDirectory(FBUtilities.java:860)
>>  at org.apache.cassandra.tools.NodeTool.printHistory(NodeTool.java:200)
>>  at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:168)
>> 
>> Can anyone help me fix this?
>> 
> 



Re: Assassinate fails

2019-04-03 Thread Alex

Same result it seems:
Welcome to JMX terminal. Type "help" for available commands.
$>open localhost:7199
#Connection to localhost:7199 is opened
$>bean org.apache.cassandra.net:type=Gossiper
#bean is set to org.apache.cassandra.net:type=Gossiper
$>run unsafeAssassinateEndpoint 192.168.1.18
#calling operation unsafeAssassinateEndpoint of mbean 
org.apache.cassandra.net:type=Gossiper

#RuntimeMBeanException: java.lang.NullPointerException


There is not much more to see in the log files:
WARN  [RMI TCP Connection(10)-127.0.0.1] 2019-04-03 16:25:13,626 
Gossiper.java:575 - Assassinating /192.168.1.18 via gossip
INFO  [RMI TCP Connection(10)-127.0.0.1] 2019-04-03 16:25:13,627 
Gossiper.java:585 - Sleeping for 3ms to ensure /192.168.1.18 does 
not change
INFO  [RMI TCP Connection(10)-127.0.0.1] 2019-04-03 16:25:43,628 
Gossiper.java:1029 - InetAddress /192.168.1.18 is now DOWN
INFO  [RMI TCP Connection(10)-127.0.0.1] 2019-04-03 16:25:43,631 
StorageService.java:2324 - Removing tokens [..] for /192.168.1.18





Le 03.04.2019 17:10, Nick Hatfield a écrit :

Run assassinate the old way. It works very well...

wget -q -O jmxterm.jar
http://downloads.sourceforge.net/cyclops-group/jmxterm-1.0-alpha-4-uber.jar

java -jar ./jmxterm.jar

$>open localhost:7199

$>bean org.apache.cassandra.net:type=Gossiper

$>run unsafeAssassinateEndpoint 192.168.1.18

$>quit


Happy deleting

-Original Message-
From: Alex [mailto:m...@aca-o.com]
Sent: Wednesday, April 03, 2019 10:42 AM
To: user@cassandra.apache.org
Subject: Assassinate fails

Hello,

Short story:
- I had to replace a dead node in my cluster
- 1 week after, dead node is still seen as DN by 3 out of 5 nodes
- dead node has null host_id
- assassinate on dead node fails with error

How can I get rid of this dead node ?


Long story:
I had a 3-node cluster (Cassandra 3.9); one node went dead. I built
a new node from scratch and "replaced" the dead node using the
information from this page
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsReplaceNode.html.
It looked like the replacement went ok.

I added two more nodes to strengthen the cluster.

A few days have passed and the dead node is still visible and marked
as "down" on 3 of 5 nodes in nodetool status:

--  Address   Load   Tokens   Owns (effective)  Host ID
  Rack
UN  192.168.1.9   16 GiB 256  35.0%
76223d4c-9d9f-417f-be27-cebb791cddcc  rack1
UN  192.168.1.12  16.09 GiB  256  34.0%
719601e2-54a6-440e-a379-c9cf2dc20564  rack1
UN  192.168.1.14  14.16 GiB  256  32.6%
d8017a03-7e4e-47b7-89b9-cd9ec472d74f  rack1
UN  192.168.1.17  15.4 GiB   256  34.1%
fa238b21-1db1-47dc-bfb7-beedc6c9967a  rack1
DN  192.168.1.18  24.3 GiB   256  33.7% null
  rack1
UN  192.168.1.22  19.06 GiB  256  30.7%
09d24557-4e98-44c3-8c9d-53c4c31066e1  rack1

Its host ID is null, so I cannot use nodetool removenode. Moreover
nodetool assassinate 192.168.1.18 fails with :

error: null
-- StackTrace --
java.lang.NullPointerException

And in system.log:

INFO  [RMI TCP Connection(16)-127.0.0.1] 2019-03-27 17:39:38,595
Gossiper.java:585 - Sleeping for 3ms to ensure /192.168.1.18 does
not change INFO  [CompactionExecutor:547] 2019-03-27 17:39:38,669
AutoSavingCache.java:393 - Saved KeyCache (27316 items) in 163 ms INFO
 [IndexSummaryManager:1] 2019-03-27 17:40:03,620
IndexSummaryRedistribution.java:75 - Redistributing index summaries
INFO  [RMI TCP Connection(16)-127.0.0.1] 2019-03-27 17:40:08,597
Gossiper.java:1029 - InetAddress /192.168.1.18 is now DOWN INFO  [RMI
TCP Connection(16)-127.0.0.1] 2019-03-27 17:40:08,599
StorageService.java:2324 - Removing tokens [-1061369577393671924,...]
ERROR [GossipStage:1] 2019-03-27 17:40:08,600 CassandraDaemon.java:226
- Exception in thread Thread[GossipStage:1,5,main]
java.lang.NullPointerException: null


In system.peers, the dead node shows and has the same ID as the 
replacing node :


cqlsh> select peer, host_id from system.peers;

  peer | host_id
--+--
  192.168.1.18 | 09d24557-4e98-44c3-8c9d-53c4c31066e1
  192.168.1.22 | 09d24557-4e98-44c3-8c9d-53c4c31066e1
   192.168.1.9 | 76223d4c-9d9f-417f-be27-cebb791cddcc
  192.168.1.14 | d8017a03-7e4e-47b7-89b9-cd9ec472d74f
  192.168.1.12 | 719601e2-54a6-440e-a379-c9cf2dc20564

Dead node and replacing node have different tokens in system.peers.

I should add that I also tried decommission on a node that still has
192.168.1.18 in its peers; it is still marked as "leaving" 5 days
later. Nothing shows in nodetool netstats or nodetool compactionstats.


Thank you for taking the time to read this. Hope you can help.

Alex






Re: New user on Ubuntu 18.04 laptop, nodetest status throws NullPointerException

2019-04-03 Thread David Taylor
My user has permissions to read everything in /etc/cassandra.

However, it gave me an idea: when I run sudo nodetool status, it works and I
get the "UN" status.

Not sure if this permissions issue will interfere with my use of Cassandra
or not. Do I have to change the permissions of the /usr/bin/nodetool
executable? It's currently -rwxr-xr-x, which looks right to me.
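One thing worth checking (just a guess on my part): if nodetool has ever been
run via sudo, the ~/.cassandra history directory may have been created owned by
root, which would leave it unwritable for the normal user. A quick sketch:

```
# Inspect ownership of the nodetool/cqlsh history directory.
ls -ld ~/.cassandra

# If it is owned by root, hand it back to the current user.
sudo chown -R "$USER":"$USER" ~/.cassandra
```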


On Wed, Apr 3, 2019 at 12:16 PM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:

> On Wed, Apr 3, 2019 at 4:23 PM David Taylor 
> wrote:
>
>>
>> $ nodetest status
>> error: null
>> -- StackTrace --
>> java.lang.NullPointerException
>> at
>> org.apache.cassandra.config.DatabaseDescriptor.getDiskFailurePolicy(DatabaseDescriptor.java:1892)
>>
>
> Could it be that your user doesn't have permissions to read the config
> file in /etc?
>
> --
> Alex
>
>


Re: New user on Ubuntu 18.04 laptop, nodetest status throws NullPointerException

2019-04-03 Thread Oleksandr Shulgin
On Wed, Apr 3, 2019 at 4:23 PM David Taylor  wrote:

>
> $ nodetest status
> error: null
> -- StackTrace --
> java.lang.NullPointerException
> at
> org.apache.cassandra.config.DatabaseDescriptor.getDiskFailurePolicy(DatabaseDescriptor.java:1892)
>

Could it be that your user doesn't have permissions to read the config file
in /etc?
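A quick way to check that (a sketch; the Debian package normally puts the
config under /etc/cassandra):

```
# Verify the config directory and cassandra.yaml are readable by your user.
ls -ld /etc/cassandra
ls -l /etc/cassandra/cassandra.yaml

# Direct readability test as the current user.
test -r /etc/cassandra/cassandra.yaml && echo readable || echo "NOT readable"
```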

--
Alex


Re: New user on Ubuntu 18.04 laptop, nodetest status throws NullPointerException

2019-04-03 Thread David Taylor
Hi Paul thanks for responding.

I created a ~/.cassandra directory and chmodded it to 777

in /var/log/cassandra/system.log the only non-INFO items are:
WARN  [main] 2019-04-03 11:47:54,172 StartupChecks.java:136 - jemalloc
shared library could not be preloaded to speed up memory allocations
WARN  [main] 2019-04-03 11:47:54,172 StartupChecks.java:169 - JMX is not
enabled to receive remote connections. Please see cassandra-env.sh for more
info.

Indeed, I meant nodetool, not nodetest.

Running nodetool status (or nodetool --help) results in the same stack
trace as before.

On Wed, Apr 3, 2019 at 11:34 AM Paul Chandler  wrote:

> David,
>
> When you start cassandra all the logs go to system.log normally in the
> /var/log/cassandra directory, so you should look there once it has started,
> to check everything is ok.
>
> I assume you mean you ran nodetool status rather than nodetest.
>
> The nodetool command stores a history of commands in the directory
> ~/.cassandra, and from the stack trace you supply it looks like it is
> failing to create that directory. So I would check the file system
> permissions there.
>
> Thanks
>
> Paul Chandler
>
>
> On 3 Apr 2019, at 15:15, David Taylor  wrote:
>
> I am running a System87 Oryx Pro laptop with Ubuntu 18.04
>
> I had only Oracle Java 11 installed for Hadoop, so I also installed
> OpenJDK8  with:
> $ sudo apt-get install openjdk-8-jre
> and switched to it with
> $ sudo update-java-alternatives --set
> path/shown/with/"update-java-alternatives --list"
>
> $ java -version
> openjdk version "1.8.0_191"
> OpenJDK Runtime Environment (build
> 1.8.0_191-8u191-b12-2ubuntu0.18.04.1-b12)
> OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)
>
> I installed Cassandra according to the directions on
> http://cassandra.apache.org/doc/latest/getting_started/installing.html,
> using the "Install from debian packages" instructions.
>
> Now when I run
> $ sudo service cassandra start
> There are no errors, no feedback to stdout. I assume that's expected
> behavior?
>
> However, this fails:
> $ nodetest status
> error: null
> -- StackTrace --
> java.lang.NullPointerException
> at
> org.apache.cassandra.config.DatabaseDescriptor.getDiskFailurePolicy(DatabaseDescriptor.java:1892)
> at
> org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:82)
> at org.apache.cassandra.io.util.FileUtils.<clinit>(FileUtils.java:79)
> at
> org.apache.cassandra.utils.FBUtilities.getToolsOutputDirectory(FBUtilities.java:860)
> at org.apache.cassandra.tools.NodeTool.printHistory(NodeTool.java:200)
> at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:168)
>
> Can anyone help me fix this?
>
>
>


Re: Procedures for moving part of a C* cluster to a different datacenter

2019-04-03 Thread Oleksandr Shulgin
On Wed, Apr 3, 2019 at 4:37 PM Saleil Bhat (BLOOMBERG/ 731 LEX) <
sbha...@bloomberg.net> wrote:

>
> Thanks for the reply! One clarification: the replacement node WOULD be
> DC-local as far as Cassandra is concerned; it would just be in a
> different physical DC. Using the Orlando -> Tampa example, suppose my DC
> was named 'floridaDC' in Cassandra. Then I would just kill a node in
> Orlando, and start a new one in Tampa with the same DC name, 'floridaDC'.
> So from Cassandra's perspective, the replacement node is in the same
> datacenter as the old one was. It will be responsible for the same tokens
> as the old Orlando node, and bootstrap accordingly.
>
> Would this work?
>

Ah, this is a different story.  Assuming you can figure out connectivity
between the locations and assign the rack for the replacement node
properly, I don't see why this shouldn't work.

At the same time, if you really care about data consistency, you will have
to run more repairs than with the documented procedure of adding/removing a
virtual DC.  Replacing a node does not work exactly like bootstrap does, so
after the streaming has finished you should repair the newly started node.
And I guess you really should run it after each node is replaced, not once
after all the nodes have been replaced.
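A sketch of what I mean, to be run on the node that has just finished the
replace streaming (keyspace names are placeholders for your own):

```
# Full repair of the freshly replaced node; repeat this after each
# individual replacement rather than once at the very end.
for ks in my_keyspace another_keyspace; do
  nodetool repair -full "$ks"
done
```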

--
Alex


Re: New user on Ubuntu 18.04 laptop, nodetest status throws NullPointerException

2019-04-03 Thread Paul Chandler
David,

When you start cassandra all the logs go to system.log normally in the 
/var/log/cassandra directory, so you should look there once it has started, to 
check everything is ok.

I assume you mean you ran nodetool status rather than nodetest.

The nodetool command stores a history of commands in the directory 
~/.cassandra, and from the stack trace you supply it looks like it is failing 
to create that directory. So I would check the file system permissions there.
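For what it's worth, a quick sketch of that check:

```
# nodetool keeps its command history under ~/.cassandra; make sure the
# directory exists and is writable by the user running nodetool.
ls -ld ~/.cassandra
mkdir -p ~/.cassandra && touch ~/.cassandra/.write_test && rm ~/.cassandra/.write_test
```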

Thanks 

Paul Chandler


> On 3 Apr 2019, at 15:15, David Taylor  wrote:
> 
> I am running a System87 Oryx Pro laptop with Ubuntu 18.04
> 
> I had only Oracle Java 11 installed for Hadoop, so I also installed OpenJDK8  
> with:
> $ sudo apt-get install openjdk-8-jre
> and switched to it with
> $ sudo update-java-alternatives --set 
> path/shown/with/"update-java-alternatives --list"
> 
> $ java -version
> openjdk version "1.8.0_191"
> OpenJDK Runtime Environment (build 1.8.0_191-8u191-b12-2ubuntu0.18.04.1-b12)
> OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)
> 
> I installed Cassandra according to the directions on 
> http://cassandra.apache.org/doc/latest/getting_started/installing.html 
> , 
> using the "Install from debian packages" instructions.
> 
> Now when I run
> $ sudo service cassandra start
> There are no errors, no feedback to stdout. I assume that's expected behavior?
> 
> However, this fails:
> $ nodetest status
> error: null
> -- StackTrace --
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.config.DatabaseDescriptor.getDiskFailurePolicy(DatabaseDescriptor.java:1892)
>   at 
> org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:82)
>   at org.apache.cassandra.io.util.FileUtils.<clinit>(FileUtils.java:79)
>   at 
> org.apache.cassandra.utils.FBUtilities.getToolsOutputDirectory(FBUtilities.java:860)
>   at org.apache.cassandra.tools.NodeTool.printHistory(NodeTool.java:200)
>   at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:168)
> 
> Can anyone help me fix this?
> 



RE: Assassinate fails

2019-04-03 Thread Nick Hatfield
Run assassinate the old way. It works very well...

wget -q -O jmxterm.jar 
http://downloads.sourceforge.net/cyclops-group/jmxterm-1.0-alpha-4-uber.jar

java -jar ./jmxterm.jar

$>open localhost:7199

$>bean org.apache.cassandra.net:type=Gossiper

$>run unsafeAssassinateEndpoint 192.168.1.18

$>quit


Happy deleting

-Original Message-
From: Alex [mailto:m...@aca-o.com] 
Sent: Wednesday, April 03, 2019 10:42 AM
To: user@cassandra.apache.org
Subject: Assassinate fails

Hello,

Short story:
- I had to replace a dead node in my cluster
- 1 week after, dead node is still seen as DN by 3 out of 5 nodes
- dead node has null host_id
- assassinate on dead node fails with error

How can I get rid of this dead node ?


Long story:
I had a 3-node cluster (Cassandra 3.9); one node went dead. I built a new
node from scratch and "replaced" the dead node using the information from this 
page 
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsReplaceNode.html.
 
It looked like the replacement went ok.

I added two more nodes to strengthen the cluster.

A few days have passed and the dead node is still visible and marked as "down" 
on 3 of 5 nodes in nodetool status:

--  Address   Load   Tokens   Owns (effective)  Host ID  
  Rack
UN  192.168.1.9   16 GiB 256  35.0% 
76223d4c-9d9f-417f-be27-cebb791cddcc  rack1
UN  192.168.1.12  16.09 GiB  256  34.0% 
719601e2-54a6-440e-a379-c9cf2dc20564  rack1
UN  192.168.1.14  14.16 GiB  256  32.6% 
d8017a03-7e4e-47b7-89b9-cd9ec472d74f  rack1
UN  192.168.1.17  15.4 GiB   256  34.1% 
fa238b21-1db1-47dc-bfb7-beedc6c9967a  rack1
DN  192.168.1.18  24.3 GiB   256  33.7% null 
  rack1
UN  192.168.1.22  19.06 GiB  256  30.7% 
09d24557-4e98-44c3-8c9d-53c4c31066e1  rack1

Its host ID is null, so I cannot use nodetool removenode. Moreover nodetool 
assassinate 192.168.1.18 fails with :

error: null
-- StackTrace --
java.lang.NullPointerException

And in system.log:

INFO  [RMI TCP Connection(16)-127.0.0.1] 2019-03-27 17:39:38,595
Gossiper.java:585 - Sleeping for 3ms to ensure /192.168.1.18 does not 
change INFO  [CompactionExecutor:547] 2019-03-27 17:39:38,669
AutoSavingCache.java:393 - Saved KeyCache (27316 items) in 163 ms INFO  
[IndexSummaryManager:1] 2019-03-27 17:40:03,620
IndexSummaryRedistribution.java:75 - Redistributing index summaries INFO  [RMI 
TCP Connection(16)-127.0.0.1] 2019-03-27 17:40:08,597
Gossiper.java:1029 - InetAddress /192.168.1.18 is now DOWN INFO  [RMI TCP 
Connection(16)-127.0.0.1] 2019-03-27 17:40:08,599
StorageService.java:2324 - Removing tokens [-1061369577393671924,...] ERROR 
[GossipStage:1] 2019-03-27 17:40:08,600 CassandraDaemon.java:226 - Exception in 
thread Thread[GossipStage:1,5,main]
java.lang.NullPointerException: null


In system.peers, the dead node shows and has the same ID as the replacing node :

cqlsh> select peer, host_id from system.peers;

  peer | host_id
--+--
  192.168.1.18 | 09d24557-4e98-44c3-8c9d-53c4c31066e1
  192.168.1.22 | 09d24557-4e98-44c3-8c9d-53c4c31066e1
   192.168.1.9 | 76223d4c-9d9f-417f-be27-cebb791cddcc
  192.168.1.14 | d8017a03-7e4e-47b7-89b9-cd9ec472d74f
  192.168.1.12 | 719601e2-54a6-440e-a379-c9cf2dc20564

Dead node and replacing node have different tokens in system.peers.

I should add that I also tried decommission on a node that still has
192.168.1.18 in its peers; it is still marked as "leaving" 5 days later.
Nothing shows in nodetool netstats or nodetool compactionstats.


Thank you for taking the time to read this. Hope you can help.

Alex








Assassinate fails

2019-04-03 Thread Alex

Hello,

Short story:
- I had to replace a dead node in my cluster
- 1 week after, dead node is still seen as DN by 3 out of 5 nodes
- dead node has null host_id
- assassinate on dead node fails with error

How can I get rid of this dead node ?


Long story:
I had a 3-node cluster (Cassandra 3.9); one node went dead. I built a
new node from scratch and "replaced" the dead node using the information 
from this page 
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsReplaceNode.html. 
It looked like the replacement went ok.


I added two more nodes to strengthen the cluster.

A few days have passed and the dead node is still visible and marked as 
"down" on 3 of 5 nodes in nodetool status:


--  Address   Load   Tokens   Owns (effective)  Host ID  
 Rack
UN  192.168.1.9   16 GiB 256  35.0% 
76223d4c-9d9f-417f-be27-cebb791cddcc  rack1
UN  192.168.1.12  16.09 GiB  256  34.0% 
719601e2-54a6-440e-a379-c9cf2dc20564  rack1
UN  192.168.1.14  14.16 GiB  256  32.6% 
d8017a03-7e4e-47b7-89b9-cd9ec472d74f  rack1
UN  192.168.1.17  15.4 GiB   256  34.1% 
fa238b21-1db1-47dc-bfb7-beedc6c9967a  rack1
DN  192.168.1.18  24.3 GiB   256  33.7% null 
 rack1
UN  192.168.1.22  19.06 GiB  256  30.7% 
09d24557-4e98-44c3-8c9d-53c4c31066e1  rack1


Its host ID is null, so I cannot use nodetool removenode. Moreover 
nodetool assassinate 192.168.1.18 fails with :


error: null
-- StackTrace --
java.lang.NullPointerException

And in system.log:

INFO  [RMI TCP Connection(16)-127.0.0.1] 2019-03-27 17:39:38,595 
Gossiper.java:585 - Sleeping for 3ms to ensure /192.168.1.18 does 
not change
INFO  [CompactionExecutor:547] 2019-03-27 17:39:38,669 
AutoSavingCache.java:393 - Saved KeyCache (27316 items) in 163 ms
INFO  [IndexSummaryManager:1] 2019-03-27 17:40:03,620 
IndexSummaryRedistribution.java:75 - Redistributing index summaries
INFO  [RMI TCP Connection(16)-127.0.0.1] 2019-03-27 17:40:08,597 
Gossiper.java:1029 - InetAddress /192.168.1.18 is now DOWN
INFO  [RMI TCP Connection(16)-127.0.0.1] 2019-03-27 17:40:08,599 
StorageService.java:2324 - Removing tokens [-1061369577393671924,...]
ERROR [GossipStage:1] 2019-03-27 17:40:08,600 CassandraDaemon.java:226 - 
Exception in thread Thread[GossipStage:1,5,main]

java.lang.NullPointerException: null


In system.peers, the dead node shows and has the same ID as the 
replacing node :


cqlsh> select peer, host_id from system.peers;

 peer | host_id
--+--
 192.168.1.18 | 09d24557-4e98-44c3-8c9d-53c4c31066e1
 192.168.1.22 | 09d24557-4e98-44c3-8c9d-53c4c31066e1
  192.168.1.9 | 76223d4c-9d9f-417f-be27-cebb791cddcc
 192.168.1.14 | d8017a03-7e4e-47b7-89b9-cd9ec472d74f
 192.168.1.12 | 719601e2-54a6-440e-a379-c9cf2dc20564

Dead node and replacing node have different tokens in system.peers.

I should add that I also tried decommission on a node that still has
192.168.1.18 in its peers; it is still marked as "leaving" 5 days
later. Nothing shows in nodetool netstats or nodetool compactionstats.



Thank you for taking the time to read this. Hope you can help.

Alex




Re: Procedures for moving part of a C* cluster to a different datacenter

2019-04-03 Thread Saleil Bhat (BLOOMBERG/ 731 LEX)
Hey, 

Thanks for the reply! One clarification: the replacement node WOULD be DC-local 
as far as Cassandra is concerned; it would just be in a different physical
DC. Using the Orlando -> Tampa example, suppose my DC was named 'floridaDC' in 
Cassandra. Then I would just kill a node in Orlando, and start a new one in 
Tampa with the same DC name, 'floridaDC'. So from Cassandra's perspective, the 
replacement node is in the same datacenter as the old one was. It will be 
responsible for the same tokens as the old Orlando node, and bootstrap 
accordingly.  
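(For concreteness, the replacement step I have in mind is the usual
replace-address mechanism; a sketch, assuming a package install where JVM
options are added via cassandra-env.sh, and with the Orlando address as a
placeholder:)

```
# On the brand-new Tampa node, before its first start, point it at the dead
# Orlando node's address so it takes over that node's tokens.
echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address_first_boot=<orlando_node_ip>"' \
  | sudo tee -a /etc/cassandra/cassandra-env.sh

sudo service cassandra start   # bootstraps/streams as the replacement
```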

Would this work? 

-Saleil 

From: oleksandr.shul...@zalando.de At: 04/03/19 03:28:37 To: Saleil Bhat
(BLOOMBERG/ 731 LEX), user@cassandra.apache.org
Subject: Re: Procedures for moving part of a C* cluster to a different 
datacenter

On Wed, Apr 3, 2019 at 12:28 AM Saleil Bhat (BLOOMBERG/ 731 LEX) 
 wrote:


The standard procedure for doing this seems to be add a 3rd datacenter to the 
cluster, stream data to the new datacenter via nodetool rebuild, then 
decommission the old datacenter. A more detailed review of this procedure can 
be found here: 
http://thelastpickle.com/blog/2019/02/26/data-center-switch.html



However, I see two problems with the above protocol.  First, it requires 
changes on the application layer because of the datacenter name change; e.g. 
all applications referring to the datacenter ‘Orlando’ will now have to be 
changed to refer to ‘Tampa’.

Alternatively, you may omit DC specification in the client and provide internal 
network addresses as the contact points.


As such, I was wondering what peoples’ thoughts were on the following 
alternative procedure: 

1) Kill one node in the old datacenter

2) Add a new node in the new datacenter but indicate that it is to REPLACE the 
one just shutdown; this node will bootstrap, and all the data which it is 
supposed to be responsible for will be streamed to it


I don't think this is going to work.  First, I believe streaming for bootstrap 
or for replacing a node is DC-local, so the first node won't have any peers to 
stream from.  Even if it would stream from the remote DC, this single node will 
own 100% of the ring and will most likely die of the load well before it 
finishes streaming.

Regards,
--
Alex




New user on Ubuntu 18.04 laptop, nodetest status throws NullPointerException

2019-04-03 Thread David Taylor
I am running a System87 Oryx Pro laptop with Ubuntu 18.04

I had only Oracle Java 11 installed for Hadoop, so I also installed
OpenJDK8  with:
$ sudo apt-get install openjdk-8-jre
and switched to it with
$ sudo update-java-alternatives --set
path/shown/with/"update-java-alternatives --list"

$ java -version
openjdk version "1.8.0_191"
OpenJDK Runtime Environment (build 1.8.0_191-8u191-b12-2ubuntu0.18.04.1-b12)
OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)

I installed Cassandra according to the directions on
http://cassandra.apache.org/doc/latest/getting_started/installing.html,
using the "Install from debian packages" instructions.

Now when I run
$ sudo service cassandra start
There are no errors, no feedback to stdout. I assume that's expected
behavior?

However, this fails:
$ nodetest status
error: null
-- StackTrace --
java.lang.NullPointerException
at
org.apache.cassandra.config.DatabaseDescriptor.getDiskFailurePolicy(DatabaseDescriptor.java:1892)
at
org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:82)
at org.apache.cassandra.io.util.FileUtils.<clinit>(FileUtils.java:79)
at
org.apache.cassandra.utils.FBUtilities.getToolsOutputDirectory(FBUtilities.java:860)
at org.apache.cassandra.tools.NodeTool.printHistory(NodeTool.java:200)
at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:168)

Can anyone help me fix this?


Re: Procedures for moving part of a C* cluster to a different datacenter

2019-04-03 Thread Paul Chandler
Saleil,

Are you performing any regular repairs on the existing cluster?

If you are, you could set this repair up on the Tampa cluster, then, after all
the applications have been switched to Tampa, wait for a complete repair cycle;
it will then be safe to decommission Orlando. However, there could be missing
data in Tampa until the repairs are completed.
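For reference, a sketch of one "complete repair cycle" done by hand rather than
by a scheduler (host names are placeholders; -pr repairs only each node's
primary ranges, so the loop has to cover every node in the cluster):

```
# One sequential pass of primary-range repairs across the whole cluster.
for node in node-1 node-2 node-3 node-4; do
  ssh "$node" "nodetool repair -pr -full"
done
```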

If you are not performing any regular repairs, then you could already have data 
inconsistencies between the 2 existing clusters, so it won’t be any worse.

Having said that, we moved more than 50 clusters from the UK to Belgium, using 
a similar process, but we didn’t do any additional repairs apart from the ones 
performed by Opscenter, and we didn’t have any reports of missing data.

One thing I definitely would not do is have a "logical" datacenter in
Cassandra that actually spans two different physical datacenters. If there is
any connection issue between the datacenters, including long latencies, then a
local_quorum may not be serviced, due to 2 replicas being in the inaccessible
datacenter.

Finally, we quite often had problems at the rebuild stage, and needed different 
settings depending on the type of cluster. So be prepared to fail at that point 
and experiment with different settings, but the good thing about this process 
is the fact that you can rollback at any stage without affecting the original 
cluster.

Paul Chandler


> On 3 Apr 2019, at 10:46, Stefan Miklosovic 
>  wrote:
> 
> On Wed, 3 Apr 2019 at 18:38, Oleksandr Shulgin
> mailto:oleksandr.shul...@zalando.de>> wrote:
>> 
>> On Wed, Apr 3, 2019 at 12:28 AM Saleil Bhat (BLOOMBERG/ 731 LEX) 
>>  wrote:
>>> 
>>> 
>>> The standard procedure for doing this seems to be add a 3rd datacenter to 
>>> the cluster, stream data to the new datacenter via nodetool rebuild, then 
>>> decommission the old datacenter. A more detailed review of this procedure 
>>> can be found here:
> http://thelastpickle.com/blog/2019/02/26/data-center-switch.html
>>> 
>>> 
> 
> However, I see two problems with the above protocol. First, it requires 
> changes on the application layer because of the datacenter name change; e.g. 
> all applications referring to the datacenter ‘Orlando’ will now have to be 
> changed to refer to ‘Tampa’.
>> 
>> 
>> Alternatively, you may omit DC specification in the client and provide 
>> internal network addresses as the contact points.
> 
> I am afraid you are mixing two things together. I believe OP means
> that he has to change local dc in DCAwareRoundRobinPolicy. I am not
> sure what contact points have to do with that. If there is at least
> one contact point from DC nobody removes all should be fine.
> 
> The process in the article is right. Before transitioning to new DC
> one has to be sure that all writes and reads still target old dc too
> after you alter a keyspace and add new dc there so you dont miss any
> write when something goes south and you have to switch it back. Thats
> achieved by local_one / local_quorum and DCAwareRoundRobinPolicy with
> localDc pointing to the old one.
> 
> Then you do rebuild and you restart your app in such way that new DC
> will be in that policy so new writes and reads are going primarily to
> that DC and once all is fine you drop the old one (you can do maybe
> additional repair to be sure). I think the rolling restart of the app
> is inevitable but if services are in some kind of HA setup I dont see
> a problem with that. From outside it would look like there is not any
> downtime.
> 
> OP has a problem with repair on nodes and it is true that can be time
> consuming, even not doable, but there are workarounds around that and
> I do not want to go into here. You can speed this process
> significantly when you are smart about that and you repair in smaller
> chunks so you dont clog your cluster completely, its called subrange
> repair.
> 
>>> As such, I was wondering what peoples’ thoughts were on the following 
>>> alternative procedure:
>>> 1) Kill one node in the old datacenter
>>> 2) Add a new node in the new datacenter but indicate that it is to REPLACE 
>>> the one just shutdown; this node will bootstrap, and all the data which it 
>>> is supposed to be responsible for will be streamed to it
>> 
>> 
>> I don't think this is going to work.  First, I believe streaming for 
>> bootstrap or for replacing a node is DC-local, so the first node won't have 
>> any peers to stream from.  Even if it would stream from the remote DC, this 
>> single node will own 100% of the ring and will most likely die of the load 
>> well before it finishes streaming.
>> 
>> Regards,
>> --
>> Alex
>> 
> 


Re: Procedures for moving part of a C* cluster to a different datacenter

2019-04-03 Thread Stefan Miklosovic
On Wed, 3 Apr 2019 at 18:38, Oleksandr Shulgin
 wrote:
>
> On Wed, Apr 3, 2019 at 12:28 AM Saleil Bhat (BLOOMBERG/ 731 LEX) 
>  wrote:
>>
>>
>> The standard procedure for doing this seems to be add a 3rd datacenter to 
>> the cluster, stream data to the new datacenter via nodetool rebuild, then 
>> decommission the old datacenter. A more detailed review of this procedure 
>> can be found here:

>> http://thelastpickle.com/blog/2019/02/26/data-center-switch.html
>>
>>

However, I see two problems with the above protocol. First, it requires 
>>changes on the application layer because of the datacenter name change; e.g. 
>>all applications referring to the datacenter ‘Orlando’ will now have to be 
>>changed to refer to ‘Tampa’.
>
>
> Alternatively, you may omit DC specification in the client and provide 
> internal network addresses as the contact points.

I am afraid you are mixing two things together. I believe the OP means
that he has to change the local DC in DCAwareRoundRobinPolicy. I am not
sure what contact points have to do with that. If there is at least
one contact point from a DC nobody removes, all should be fine.

The process in the article is right. Before transitioning to the new DC,
one has to be sure that all writes and reads still target the old DC too
after you alter a keyspace and add the new DC there, so you don't miss any
writes when something goes south and you have to switch it back. That's
achieved by local_one / local_quorum and DCAwareRoundRobinPolicy with
localDc pointing to the old one.

Then you do the rebuild and restart your app in such a way that the new DC
is in that policy, so new writes and reads go primarily to that DC, and
once all is fine you drop the old one (you can maybe do an additional
repair to be sure). I think the rolling restart of the app is inevitable,
but if the services are in some kind of HA setup I don't see a problem
with that. From the outside it would look like there is no downtime.

The OP has a problem with repair on nodes, and it is true that it can be
time consuming, even not doable, but there are workarounds for that which
I do not want to go into here. You can speed this process up
significantly when you are smart about it and repair in smaller
chunks so you don't clog your cluster completely; it's called subrange
repair.
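A sketch of the subrange idea (token values and keyspace are placeholders; in
practice you would generate the slices from the ring layout and loop over
them):

```
# Repair one small slice of the token ring at a time instead of whole ranges.
nodetool repair -full -st -9223372036854775808 -et -9100000000000000000 my_keyspace
```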

>> As such, I was wondering what peoples’ thoughts were on the following 
>> alternative procedure:
>> 1) Kill one node in the old datacenter
>> 2) Add a new node in the new datacenter but indicate that it is to REPLACE 
>> the one just shutdown; this node will bootstrap, and all the data which it 
>> is supposed to be responsible for will be streamed to it
>
>
> I don't think this is going to work.  First, I believe streaming for 
> bootstrap or for replacing a node is DC-local, so the first node won't have 
> any peers to stream from.  Even if it would stream from the remote DC, this 
> single node will own 100% of the ring and will most likely die of the load 
> well before it finishes streaming.
>
> Regards,
> --
> Alex
>




Re: How to install an older minor release?

2019-04-03 Thread Kyrylo Lebediev
Hi Oleksandr,
Yes, that was always the case. All older versions are removed from the Debian
repo index :(
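One workaround sketch, assuming you are comfortable fetching the .deb straight
from the pool URL below and installing it with dpkg (the exact file name is my
guess; take it from the pool listing, and verify checksums before installing):

```
# Install a specific older version directly from the package pool.
wget http://dl.bintray.com/apache/cassandra/pool/main/c/cassandra/cassandra_3.0.17_all.deb
sudo dpkg -i cassandra_3.0.17_all.deb
sudo apt-get install -f   # pull in any missing dependencies
```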

From: Oleksandr Shulgin 
Reply-To: "user@cassandra.apache.org" 
Date: Tuesday, April 2, 2019 at 20:04
To: User 
Subject: How to install an older minor release?

Hello,

We've just noticed that we cannot install older minor releases of Apache 
Cassandra from Debian packages, as described on this page: 
http://cassandra.apache.org/download/

Previously we were doing the following at the last step: apt-get install 
cassandra==3.0.17

Today it fails with error:
E: Version '3.0.17' for 'cassandra' was not found

And `apt-get show cassandra` reports only one version available, the latest 
released one: 3.0.18
The packages for the older versions are still in the pool: 
http://dl.bintray.com/apache/cassandra/pool/main/c/cassandra/

Was it always the case that only the latest version is available to be 
installed directly with apt or did something change recently?

Regards,
--
Alex



Re: Procedures for moving part of a C* cluster to a different datacenter

2019-04-03 Thread Oleksandr Shulgin
On Wed, Apr 3, 2019 at 12:28 AM Saleil Bhat (BLOOMBERG/ 731 LEX) <
sbha...@bloomberg.net> wrote:

>
> The standard procedure for doing this seems to be add a 3rd datacenter to
> the cluster, stream data to the new datacenter via nodetool rebuild, then
> decommission the old datacenter. A more detailed review of this procedure
> can be found here:
> http://thelastpickle.com/blog/2019/02/26/data-center-switch.html
>
> However, I see two problems with the above protocol. First, it requires
> changes on the application layer because of the datacenter name change;
> e.g. all applications referring to the datacenter ‘Orlando’ will now have
> to be changed to refer to ‘Tampa’.
>

Alternatively, you may omit DC specification in the client and provide
internal network addresses as the contact points.

As such, I was wondering what peoples’ thoughts were on the following
> alternative procedure:
> 1) Kill one node in the old datacenter
> 2) Add a new node in the new datacenter but indicate that it is to REPLACE
> the one just shutdown; this node will bootstrap, and all the data which it
> is supposed to be responsible for will be streamed to it
>

I don't think this is going to work.  First, I believe streaming for
bootstrap or for replacing a node is DC-local, so the first node won't have
any peers to stream from.  Even if it would stream from the remote DC, this
single node will own 100% of the ring and will most likely die of the load
well before it finishes streaming.

Regards,
-- 
Alex