Re: A question to updatesstables

2016-08-19 Thread Romain Hardouin
Hi,
There are two ways to upgrade SSTables:
- online (C* must be UP): nodetool upgradesstables
- offline (when C* is stopped): using the tool called "sstableupgrade". It's located in the bin directory of Cassandra, so depending on how you installed Cassandra, it may be on the path. See
https://docs.datastax.com/en/cassandra/2.0/cassandra/tools/ToolsSSTableupgrade_t.html

A few questions:
- Did you check that you are not hitting https://github.com/apache/cassandra/blob/cassandra-2.0/NEWS.txt#L162 ? I.e. are you sure that all your data are in "ic" format?
- Why did you choose 2.0.10? (The latest 2.0 release is 2.0.17.)

Best,
Romain
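
For the record, a sketch of the two invocations (the keyspace/table names and the install path here are placeholders, not from the thread):

```shell
# Online: the node must be UP; rewrites SSTables of keyspace "my_ks",
# table "my_cf" to the current format
nodetool upgradesstables my_ks my_cf

# Offline: the node must be STOPPED; run the standalone tool shipped in
# Cassandra's bin directory (tarball-style path shown as an example)
/opt/cassandra/bin/sstableupgrade my_ks my_cf
```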

On Friday, 19 August 2016 at 5:18, "Lu, Boying" wrote:

Thanks a lot.

I'm a little bit confused. If 'nodetool upgradesstables' doesn't work without the Cassandra server running, and the Cassandra server failed to start due to the incompatible SSTable format, how do we resolve this dilemma?

From: Carlos Alonso [mailto:i...@mrcalonso.com]
Sent: 18 August 2016 18:44
To: user@cassandra.apache.org
Subject: Re: A question to updatesstables

Replies inline

Carlos Alonso | Software Engineer | @calonso

On 18 August 2016 at 11:56, Lu, Boying wrote:

Hi, All,

We use Cassandra in our product. In our early release we used Cassandra 1.2.10, whose SSTable format is 'ic'. We upgraded Cassandra to 2.0.10 in our product release, but the Cassandra server failed to start due to the incompatible SSTable format, and the log message told us to use 'nodetool upgradesstables' to upgrade the SSTable files.

To make sure there is no negative impact on our data, I want to confirm the following things about this command before trying it:

1. Does it work without the Cassandra server running?
No, it won't.

2. Will this command cause data loss?
It shouldn't if you followed the upgrade instructions properly.

3. What's the best practice to avoid this error occurring again (e.g. when upgrading Cassandra next time)?
Whether upgrading SSTables is required depends on the upgrade you're running: basically, if the SSTable layout changes you'll need to run it, and not otherwise, so there's nothing you can do to avoid it.

Thanks

Boying


Re: A question to updatesstables

2016-08-19 Thread Ryan Svihla
The actual error message would be very useful for diagnosing the reason. There
are warnings about incompatible formats that are safe to ignore (usually
related to the cache), and I have once seen an issue with commit log archiving
preventing a startup during an upgrade. Usually something else is broken and
the version mismatch is a false signal.

Regards,

Ryan Svihla

> On Aug 18, 2016, at 10:18 PM, Lu, Boying  wrote:
> 
> Thanks a lot.
>
> I'm a little bit confused. If 'nodetool upgradesstables' doesn't work
> without the Cassandra server running, and the Cassandra server fails to
> start due to the incompatible SSTable format, how do we resolve this
> dilemma?


RE: A question to updatesstables

2016-08-19 Thread Lu, Boying
Here is the error message in our log file:

java.lang.RuntimeException: Incompatible SSTable found. Current version ka is unable to read file: /data/db/1/data/StorageOS/RemoteDirectorGroup/StorageOS-RemoteDirectorGroup-ic-37. Please run upgradesstables.
    at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:517)
    at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:494)
    at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:335)
    at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:275)
    at org.apache.cassandra.db.Keyspace.open(Keyspace.java:121)
    at org.apache.cassandra.db.Keyspace.open(Keyspace.java:98)
    at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:328)
    at org.apache.cassandra.service.CassandraDaemon.init(CassandraDaemon.java:479)

From: Ryan Svihla [mailto:r...@foundev.pro]
Sent: 19 August 2016 17:26
To: user@cassandra.apache.org
Subject: Re: A question to updatesstables

The actual error message could be very useful to diagnose the reason. There are 
warnings about incompatible formats which are safe to ignore (usually in the 
cache) and I have one time seen an issue with commit log archiving preventing a 
startup during upgrade. Usually there is something else broken and the version 
mismatch is a false signal.

Regards,

Ryan Svihla




Re: A question to updatesstables

2016-08-19 Thread Romain Hardouin
ka is the 2.1 format... I don't understand. Did you install C* 2.1?
Romain 
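
One quick way to see which formats are actually on disk is to scan the data file names, since the format version is embedded in them (pre-2.2 filename layout assumed, e.g. StorageOS-RemoteDirectorGroup-ic-37-Data.db; this is a rough illustrative sketch, not an official tool):

```shell
# Print the distinct SSTable format versions (e.g. "ic", "ka") found
# under the given data directory. Assumes the pre-2.2 filename layout:
#   <Keyspace>-<Table>-<version>-<generation>-Data.db
sstable_versions() {
  find "$1" -name '*-Data.db' \
    | sed -E 's/.*-([a-z]{2})-[0-9]+-Data\.db$/\1/' \
    | sort -u
}
```

If this prints both "ic" and "ka", the node has a mix of 1.2-era and 2.1-era files, which would match the error above.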



  

RE: A question to updatesstables

2016-08-19 Thread Lu, Boying
yes, we use Cassandra 2.1.11 in our latest release.

From: Romain Hardouin [mailto:romainh...@yahoo.fr]
Sent: 19 August 2016 17:36
To: user@cassandra.apache.org
Subject: Re: A question to updatesstables

ka is the 2.1 format... I don't understand. Did you install C* 2.1?

Romain





Re: nodetool repair with -pr and -dc

2016-08-19 Thread Jérôme Mainaud
Hello,

I've got a repair command with both -pr and -local rejected on a 2.2.6
cluster.
The exact command was: nodetool repair --full -par -pr -local -j 4

The message is “You need to run primary range repair on all nodes in the
cluster”.

Reading the code and the previously cited CASSANDRA-7450, it should have been
accepted.

Has anyone encountered this error before?

Thanks


-- 
Jérôme Mainaud
jer...@mainaud.com

2016-08-12 1:14 GMT+02:00 kurt Greaves :

> -D does not do what you think it does. I've quoted the relevant
> documentation from the README:
>
>>
>> Multiple
>> Datacenters
>>
>> If you have multiple datacenters in your ring, then you MUST specify the
>> name of the datacenter containing the node you are repairing as part of the
>> command-line options (--datacenter=DCNAME). Failure to do so will result in
>> only a subset of your data being repaired (approximately
>> data/number-of-datacenters). This is because nodetool has no way to
>> determine the relevant DC on its own, which in turn means it will use the
>> tokens from every ring member in every datacenter.
>>
>
>
> On 11 August 2016 at 12:24, Paulo Motta  wrote:
>
>> > if we want to use -pr option ( which i suppose we should to prevent
>> duplicate checks) in 2.0 then if we run the repair on all nodes in a single
>> DC then it should be sufficient and we should not need to run it on all
>> nodes across DC's?
>>
>> No, because the primary ranges of the nodes in other DCs will be missing
>> repair, so you should either run with -pr on all nodes in all DCs, or
>> restrict repair to a specific DC with -local (and have duplicate checks).
>> Combined -pr and -local are only supported on 2.1.
>>
>>
>> 2016-08-11 1:29 GMT-03:00 Anishek Agarwal :
>>
>>> ok thanks, so if we want to use -pr option ( which i suppose we should
>>> to prevent duplicate checks) in 2.0 then if we run the repair on all nodes
>>> in a single DC then it should be sufficient and we should not need to run
>>> it on all nodes across DC's ?
>>>
>>>
>>>
>>> On Wed, Aug 10, 2016 at 5:01 PM, Paulo Motta 
>>> wrote:
>>>
 On 2.0, the repair -pr option is not supported together with -local, -hosts
 or -dc, since it assumes you need to repair all nodes in all DCs, and it
 will throw an error if you try to run it with nodetool, so perhaps there's
 something wrong with range_repair's option parsing.

 On 2.1, support for simultaneous -pr and -local options was added in
 CASSANDRA-7450, so if you need that you can either upgrade to 2.1 or
 backport that to 2.0.


 2016-08-10 5:20 GMT-03:00 Anishek Agarwal :

> Hello,
>
> We have a 2.0.17 cassandra cluster (*DC1*) in a cross-DC setup with a
> smaller cluster (*DC2*). After reading various blogs about
> scheduling/running repairs, it looks like it's good to run them with the
> following:
>
>
> -pr for primary range only
> -st -et for sub ranges
> -par for parallel
> -dc to make sure we can schedule repairs independently on each Data
> centre we have.
>
> i have configured the above using the repair utility @
> https://github.com/BrianGallew/cassandra_range_repair.git
>
> which leads to the following command:
>
> ./src/range_repair.py -k [keyspace] -c [columnfamily name] -v -H
> localhost -p -D* DC1*
>
> but looks like the merkle tree is being calculated on nodes which are
> part of other *DC2.*
>
> Why does this happen? I thought it should only look at the nodes in the
> local cluster. However, on nodetool the *-pr* option cannot be used
> with *-local* according to the docs at
> https://docs.datastax.com/en/cassandra/2.0/cassandra/tools/toolsRepair.html
>
> So I may be missing something; can someone help explain this please?
>
> thanks
> anishek
>


>>>
>>
>
>
> --
> Kurt Greaves
> k...@instaclustr.com
> www.instaclustr.com
>


Re: A question to updatesstables

2016-08-19 Thread Romain Hardouin
Ok... you said 2.0.10 in the original post ;-)
You can't upgrade directly from 1.2 to 2.1; 2.0.7 is the minimum starting
point. So upgrade to 2.0.17 (the latest 2.0.x) first, see
https://github.com/apache/cassandra/blob/cassandra-2.1/NEWS.txt#L244
Best,
Romain
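
In other words, a sketch of the per-node sequence implied above (the binary upgrade steps themselves are elided; this is an illustration, not an official procedure):

```shell
# 1. While still on 1.2.x: ensure every SSTable is already in "ic" format
nodetool upgradesstables

# 2. After upgrading the binaries to 2.0.17 and restarting the node:
#    rewrite SSTables to the 2.0 on-disk format
nodetool upgradesstables

# 3. Only then upgrade the binaries to 2.1.x, restart, and repeat
nodetool upgradesstables
```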


Re: nodetool repair with -pr and -dc

2016-08-19 Thread Romain Hardouin
Hi Jérôme,
The code in 2.2.6 allows -local and -pr:
https://github.com/apache/cassandra/blob/cassandra-2.2.6/src/java/org/apache/cassandra/service/StorageService.java#L2899

But... the options validation introduced in CASSANDRA-6455 seems to break this
feature!
https://github.com/apache/cassandra/blob/cassandra-2.2.6/src/java/org/apache/cassandra/repair/messages/RepairOption.java#L211

I suggest opening a ticket: https://issues.apache.org/jira/browse/cassandra/

Best,
Romain
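
Until that validation issue is resolved, the workarounds discussed earlier in this thread boil down to one of these (the keyspace name is a placeholder):

```shell
# Either: primary-range repair, which must then be run on EVERY node in
# EVERY datacenter
nodetool repair --full -par -pr my_ks

# Or: repair restricted to the local DC, accepting duplicate work across
# replicas but allowing per-DC scheduling
nodetool repair --full -par -local my_ks
```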

  

Re: nodetool repair with -pr and -dc

2016-08-19 Thread Jérôme Mainaud
Hi Romain,

Thank you for your answer, I will open a ticket soon.

Best

-- 
Jérôme Mainaud
jer...@mainaud.com

2016-08-19 12:16 GMT+02:00 Romain Hardouin :

> Hi Jérôme,
>
> The code in 2.2.6 allows -local and -pr:
> https://github.com/apache/cassandra/blob/cassandra-2.2.
> 6/src/java/org/apache/cassandra/service/StorageService.java#L2899
>
> But... the options validation introduced in CASSANDRA-6455 seems to break
> this feature!
> https://github.com/apache/cassandra/blob/cassandra-2.2.
> 6/src/java/org/apache/cassandra/repair/messages/RepairOption.java#L211
>
> I suggest to open a ticket https://issues.apache.org/
> jira/browse/cassandra/
>
> Best,
>
> Romain
>
>
> Le Vendredi 19 août 2016 11h47, Jérôme Mainaud  a
> écrit :
>
>
> Hello,
>
> I've got a repair command with both -pr and -local rejected on an 2.2.6
> cluster.
> The exact command was : nodetool repair --full -par -pr -local -j 4
>
> The message is  “You need to run primary range repair on all nodes in the
> cluster”.
>
> Reading the code and previously cited CASSANDRA-7450, it should have been
> accepted.
>
> Did anyone meet this error before ?
>
> Thanks
>
>
> --
> Jérôme Mainaud
> jer...@mainaud.com
>
> 2016-08-12 1:14 GMT+02:00 kurt Greaves :
>
> -D does not do what you think it does. I've quoted the relevant
> documentation from the README:
>
>
> Multiple
> Datacenters
> If you have multiple datacenters in your ring, then you MUST specify the
> name of the datacenter containing the node you are repairing as part of the
> command-line options (--datacenter=DCNAME). Failure to do so will result in
> only a subset of your data being repaired (approximately
> data/number-of-datacenters). This is because nodetool has no way to
> determine the relevant DC on its own, which in turn means it will use the
> tokens from every ring member in every datacenter.
>
>
>
> On 11 August 2016 at 12:24, Paulo Motta  wrote:
>
> > if we want to use -pr option ( which i suppose we should to prevent
> duplicate checks) in 2.0 then if we run the repair on all nodes in a single
> DC then it should be sufficient and we should not need to run it on all
> nodes across DC's?
>
> No, because the primary ranges of the nodes in other DCs will be missing
> repair, so you should either run with -pr in all nodes in all DCs, or
> restrict repair to a specific DC with -local (and have duplicate checks).
> Combined -pr and -local are only supported on 2.1
>
>
> 2016-08-11 1:29 GMT-03:00 Anishek Agarwal :
>
OK, thanks. So if we want to use the -pr option (which I suppose we should,
to prevent duplicate checks) in 2.0, then running the repair on all nodes in
a single DC should be sufficient, and we should not need to run it on all
nodes across DCs?
>
>
>
> On Wed, Aug 10, 2016 at 5:01 PM, Paulo Motta 
> wrote:
>
On 2.0, the repair -pr option is not supported together with -local, -hosts,
or -dc, since it assumes you need to repair all nodes in all DCs, and
nodetool will throw an error if you try, so perhaps there's something wrong
with range_repair's option parsing.

On 2.1, support for simultaneous -pr and -local options was added in
CASSANDRA-7450, so if you need that you can either upgrade to 2.1 or
backport that patch to 2.0.
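To see why -pr restricted to one DC leaves data unrepaired, here is a toy sketch (not Cassandra code; the ring tokens and node names are made up for illustration): each node's primary range is the span between its predecessor's token and its own, so skipping the other DC's nodes skips their primary ranges entirely.

```python
# A toy 8-token ring, alternating ownership between two datacenters.
ring = [
    (0,   "dc1-node1"), (100, "dc2-node1"),
    (200, "dc1-node2"), (300, "dc2-node2"),
    (400, "dc1-node3"), (500, "dc2-node3"),
    (600, "dc1-node4"), (700, "dc2-node4"),
]

def primary_range(i):
    """A node's primary range runs from the previous node's token
    (exclusive) to its own token (inclusive)."""
    prev = ring[i - 1][0] if i > 0 else ring[-1][0] - 800
    return (prev, ring[i][0])

# 'repair -pr' run only on DC1 nodes covers only DC1's primary ranges.
repaired = [primary_range(i) for i, (_, n) in enumerate(ring) if n.startswith("dc1")]
all_ranges = [primary_range(i) for i in range(len(ring))]
missed = [r for r in all_ranges if r not in repaired]
print(len(missed))  # the 4 DC2 primary ranges are never repaired
```

This is why -pr alone must run on every node in every DC, or be combined with -local (2.1+) so duplicates cover the gap.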
>
>
> 2016-08-10 5:20 GMT-03:00 Anishek Agarwal :
>
> Hello,
>
> We have a 2.0.17 Cassandra cluster (*DC1*) in a cross-DC setup with a
> smaller cluster (*DC2*). After reading various blogs about
> scheduling/running repairs, it looks like it's good to run them with the
> following:
>
>
> -pr for primary range only
> -st -et for sub ranges
> -par for parallel
> -dc to make sure we can schedule repairs independently on each Data centre
> we have.
>
> I have configured the above using the repair utility at
> https://github.com/BrianGallew/cassandra_range_repair.git
>
> which leads to the following command :
>
> ./src/range_repair.py -k [keyspace] -c [columnfamily name] -v -H localhost
> -p -D* DC1*
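The -st/-et sub-range mechanism such a tool relies on can be sketched as follows. This is an assumed splitting scheme for illustration, not range_repair.py's actual code; the token values are hypothetical.

```python
def split_range(start, end, steps):
    """Split the token span (start, end] into `steps` contiguous
    sub-ranges, each of which can be repaired independently."""
    width = (end - start) // steps
    edges = [start + i * width for i in range(steps)] + [end]
    return list(zip(edges[:-1], edges[1:]))

# Each sub-range becomes one bounded repair invocation:
for st, et in split_range(0, 1000, 4):
    print(f"nodetool repair -st {st} -et {et}")
```

Repairing many small sub-ranges keeps each Merkle-tree computation and any resulting streaming small, which is the point of the tool.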
>
> but it looks like the Merkle tree is being calculated on nodes which are
> part of the other cluster, *DC2*.
>
> Why does this happen? I thought it should only look at the nodes in the
> local cluster. However, in nodetool the *-pr* option cannot be used with
> *-local* according to the docs at
> https://docs.datastax.com/en/cassandra/2.0/cassandra/tools/toolsRepair.html
>
> So I may be missing something; can someone help explain this, please?
>
> thanks
> anishek
>
>
>
>
>
>
>
> --
> Kurt Greaves
> k...@instaclustr.com
> www.instaclustr.com
>
>
>
>
>


full and incremental repair consistency

2016-08-19 Thread Jérôme Mainaud
Hello,

I have a 2.2.6 Cassandra cluster with two DCs of 15 nodes each.
A continuous incremental repair process deals with anti-entropy concerns.

Due to some untraced operation by someone, we chose to do a full repair on
one DC with the command: nodetool repair --full -local -j 4

Daily incremental repair was disabled during this operation.

The significant number of stream sessions produced by this repair confirms
to me that it was necessary.

However, I wonder if the sstables involved in that repair are flagged or if
the next daily incremental repair will be equivalent to a full repair.

I didn't use the -pr option since -pr and -local are currently mutually
exclusive (whether they should be is the subject of another thread). I chose
-local because the link between the datacenters is slow. But maybe -pr
would have been the better choice.

Is there a better way I should have handled this?

Thank you,

-- 
Jérôme Mainaud
jer...@mainaud.com


Re: full and incremental repair consistency

2016-08-19 Thread Paulo Motta
Running repair with the -local flag does not mark sstables as repaired,
since you can't guarantee that data in other DCs is repaired. In order to
support incremental repair, you need to run a full repair without the
-local flag; then, the next time you run repair, previously repaired
sstables are skipped.
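The skip-if-already-repaired behavior described above can be sketched like this. This is assumed behavior for illustration, not Cassandra internals; the class and field names are hypothetical stand-ins for the per-sstable repairedAt metadata.

```python
from dataclasses import dataclass

@dataclass
class SSTable:
    name: str
    repaired_at: int = 0   # 0 means "never repaired"

def incremental_repair(sstables, now):
    """Repair only sstables not yet marked repaired, then mark them.
    A -local repair would leave repaired_at at 0, so nothing is skipped
    on the following run."""
    todo = [s for s in sstables if s.repaired_at == 0]
    for s in todo:
        s.repaired_at = now
    return [s.name for s in todo]

tables = [SSTable("a"), SSTable("b")]
print(incremental_repair(tables, now=1000))  # first run repairs both: ['a', 'b']
print(incremental_repair(tables, now=2000))  # second run skips them: []
```

The second call returning an empty list is the whole benefit of incremental repair: already-validated data is not re-hashed.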



Re: full and incremental repair consistency

2016-08-19 Thread Jérôme Mainaud
It makes sense.

When you say "you need to run a full repair without the -local flag", do you
mean I have to set the -full flag? Or do you mean that the next repair
without arguments will be a full one because the sstables are not flagged?

By the way, I suppose the repaired flag doesn't break sstable file
immutability, so I wonder how it is stored.

-- 
Jérôme Mainaud
jer...@mainaud.com



Re: full and incremental repair consistency

2016-08-19 Thread Paulo Motta
When you say "you need to run a full repair without the -local flag", do you
mean I have to set the -full flag? Or do you mean that the next repair
without arguments will be a full one because the sstables are not flagged?

- Either way: with or without the flag, repair will actually be equivalent
as long as none of the sstables are marked as repaired (this will change
after the first incremental repair).

By the way, I suppose the repaired flag doesn't break sstable file
immutability, so I wonder how it is stored.

- The actual data component stays immutable; only a flag in the STATS
sstable component is mutated.



Re: full and incremental repair consistency

2016-08-19 Thread Jérôme Mainaud
> - Either way: with or without the flag, repair will actually be equivalent
> as long as none of the sstables are marked as repaired (this will change
> after the first incremental repair).
>

So, if I understand correctly, the repair -full -local command resets the
flag of previously repaired sstables. So even if I had some sstables
already flagged, they won't be any more.

- The actual data component stays immutable; only a flag in the STATS
> sstable component is mutated.
>

This is an important property I had missed. That means that snapshots are
susceptible to mutation, as they are hard links to the actual files.
I also must take care of this if I try to deduplicate files in an external
backup system.
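The hard-link concern can be demonstrated in a few lines. This is a filesystem-level sketch, not Cassandra code; the file names are hypothetical, and the point is only that a hard link shares the same inode, so an in-place mutation of one path is visible through the other.

```python
import os
import tempfile

d = tempfile.mkdtemp()
stats = os.path.join(d, "mc-1-big-Statistics.db")     # "live" component
snap = os.path.join(d, "snapshot-Statistics.db")      # "snapshot" copy

with open(stats, "wb") as f:
    f.write(b"repairedAt=0")
os.link(stats, snap)          # a snapshot entry is a hard link, not a copy

with open(stats, "wb") as f:  # mutate the live component in place
    f.write(b"repairedAt=42")

with open(snap, "rb") as f:
    print(f.read())           # the snapshot path sees the new value too
```

This is exactly why content-addressed deduplication in a backup system must treat such metadata components as mutable.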


Re: large number of pending compactions, sstables steadily increasing

2016-08-19 Thread Mark Rose
Hi Ezra,

Are you making frequent changes to your rows (including TTL'ed
values), or mostly inserting new ones? If you're only inserting new
data, it's probable that size-tiered compaction would work better for
you. If you are TTL'ing whole rows, consider date-tiered.

If leveled compaction is still the best strategy, one way to catch up
with compactions is to have less data per node -- in other words,
use more machines. Leveled compaction is CPU expensive. You are
currently CPU bottlenecked, or from the other perspective, you have too
much data per node for leveled compaction.

At this point, compaction is so far behind that you'll likely be
getting high latency if you're reading old rows (since dozens to
hundreds of uncompacted sstables will likely need to be checked for
matching rows). You may be better off with size-tiered compaction,
even if it means always reading several sstables per read (higher
latency than when leveled can keep up).
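A rough sketch of that read-amplification argument (an illustration, not Cassandra's actual algorithm): in a healthy leveled table, sstables in L1 and above don't overlap, so a read checks at most one sstable per level plus every L0 sstable, since L0 sstables can all overlap. The level counts are taken from the tablestats output quoted below in this thread.

```python
def lcs_read_candidates(levels):
    """levels[0] is L0 (overlapping sstables, all candidates);
    each non-empty higher level contributes at most one sstable."""
    l0 = levels[0]
    higher = sum(1 for n in levels[1:] if n > 0)
    return l0 + higher

healthy     = [11, 20, 213, 1356, 306]    # normal node's sstables per level
problematic = [13039, 21, 206, 831]       # backlogged node's sstables per level

print(lcs_read_candidates(healthy))      # 11 L0 + 4 levels = 15
print(lcs_read_candidates(problematic))  # 13039 L0 + 3 levels = 13042
```

With ~13,000 uncompacted candidates per read, old-row reads on the backlogged node are bound to be slow regardless of strategy.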

How much data do you have per node? Do you update/insert to/delete
rows? Do you TTL?

Cheers,
Mark

On Wed, Aug 17, 2016 at 2:39 PM, Ezra Stuetzel  wrote:
> I have one node in my 2.2.7 cluster (just upgraded from 2.2.6, hoping to fix
> the issue) which seems to be stuck in a weird state -- with a large number of
> pending compactions and sstables. The node is compacting about 500gb/day,
> number of pending compactions is going up at about 50/day. It is at about
> 2300 pending compactions now. I have tried increasing number of compaction
> threads and the compaction throughput, which doesn't seem to help eliminate
> the many pending compactions.
>
> I have tried running 'nodetool cleanup' and 'nodetool compact'. The latter
> has fixed the issue in the past, but most recently I was getting OOM errors,
> probably due to the large number of sstables. I upgraded to 2.2.7 and am no
> longer getting OOM errors, but also it does not resolve the issue. I do see
> this message in the logs:
>
>> INFO  [RMI TCP Connection(611)-10.9.2.218] 2016-08-17 01:50:01,985
>> CompactionManager.java:610 - Cannot perform a full major compaction as
>> repaired and unrepaired sstables cannot be compacted together. These two set
>> of sstables will be compacted separately.
>
> Below are the 'nodetool tablestats' comparing a normal and the problematic
> node. You can see the problematic node has many, many more sstables, and
> they are almost all in the first level. What is the best way to fix this?
> Can I just delete those
> sstables somehow then run a repair?
>>
>> Normal node
>>>
>>> keyspace: mykeyspace
>>>
>>> Read Count: 0
>>>
>>> Read Latency: NaN ms.
>>>
>>> Write Count: 31905656
>>>
>>> Write Latency: 0.051713177939359714 ms.
>>>
>>> Pending Flushes: 0
>>>
>>> Table: mytable
>>>
>>> SSTable count: 1908
>>>
>>> SSTables in each level: [11/4, 20/10, 213/100, 1356/1000, 306, 0,
>>> 0, 0, 0]
>>>
>>> Space used (live): 301894591442
>>>
>>> Space used (total): 301894591442
>>>
>>>
>>>
>>> Problematic node
>>>
>>> Keyspace: mykeyspace
>>>
>>> Read Count: 0
>>>
>>> Read Latency: NaN ms.
>>>
>>> Write Count: 30520190
>>>
>>> Write Latency: 0.05171286705620116 ms.
>>>
>>> Pending Flushes: 0
>>>
>>> Table: mytable
>>>
>>> SSTable count: 14105
>>>
>>> SSTables in each level: [13039/4, 21/10, 206/100, 831, 0, 0, 0,
>>> 0, 0]
>>>
>>> Space used (live): 561143255289
>>>
>>> Space used (total): 561143255289
>
> Thanks,
>
> Ezra


Client Read Latency is too high during repair

2016-08-19 Thread Benyi Wang
I'm using the Cassandra Java driver to access a small Cassandra cluster:

* The cluster has 3 nodes in DC1 and 3 nodes in DC2
* The keyspace was originally created in DC1 only, with RF=2
* The client had good read latency, about 40 ms at the 99th percentile under
100 requests/sec (measured at the client side)
* Then the keyspace was updated to span both DCs with RF=3 in each DC
* After the repair started (the DBA started it; I don't know the exact
command), the client's read latency reached 2 seconds
* The metric ClientRequest.read.latency.99percentile is still about 4 ms
* There were two nodes with 3 MB/sec of outgoing streaming

I'm using Cassandra 2.1.8 and the read consistency is LOCAL_ONE.

Can you point me some metrics to see what's the bottleneck?

Thanks


Support/Consulting companies

2016-08-19 Thread Roxy Ubi
Howdy,

I'm looking for a list of support or consulting companies that provide
contracting services related to Cassandra.  Is there a comprehensive list
somewhere?  Alternatively could you folks tell me who you use?

Thanks in advance for any replies!

Roxy


RE: Support/Consulting companies

2016-08-19 Thread Huang, Roger
http://thelastpickle.com/




Re: Support/Consulting companies

2016-08-19 Thread joe . schwartz
Yes, TLP is the place to go!

Joe

Sent from my iPhone

> On Aug 19, 2016, at 12:03 PM, Huang, Roger  wrote:
> 
> http://thelastpickle.com/


Re: Support/Consulting companies

2016-08-19 Thread Chris Tozer
Instaclustr ( Instaclustr.com ) also offers Cassandra consulting


-- 
Chris Tozer

Instaclustr

(408) 781-7914

Spin Up a Free 14 Day Trial 


Re: Client Read Latency is too high during repair

2016-08-19 Thread Benyi Wang
Never mind, I found the root cause. This has nothing to do with Cassandra
or the repair. Some web services called by the client caused the problem.



Re: How to create a TupleType/TupleValue in a UDF

2016-08-19 Thread Tyler Hobbs
On Thu, Aug 18, 2016 at 12:57 PM, Drew Kutcharian  wrote:

> I’m running 3.0.8, so it probably wasn’t fixed? ;)
>

Hmm, would you mind opening a new JIRA ticket about that and linking it to
CASSANDRA-11033?


>
> The CodecNotFoundException is very random, when I get it, if I re-run the
> same exact query then it works! I’ll see if I can reproduce it more
> consistently.
>

Thanks.  If you can reproduce, please go ahead and open a ticket for that
as well.


> BTW, is there a way to get the CodecRegistry and the ProtocolVersion from
> the UDF environment so I don’t have to create them?
>

At least in 3.0.8, I don't think so.  It's worth pointing out
https://issues.apache.org/jira/browse/CASSANDRA-10818, which makes it much
easier to create tuples and UDTs in 3.6+.  Check out the bottom of the UDF
section of the docs for some examples and details:
http://cassandra.apache.org/doc/latest/cql/functions.html#user-defined-functions


-- 
Tyler Hobbs
DataStax 


opscenter cluster metric API call - 400 error

2016-08-19 Thread Aoi Kadoya
Hi,
I have upgraded my cluster to DSE 5.0.1 and OpsCenter 6.0.1.
I am testing the OpsCenter APIs to retrieve node/cluster metrics, but I get
400 errors when I send queries like the one below.

curl -vvv -H 'opscenter-session: xx' -G
'http:metrics//data-load'

> GET /IDC/metrics//data-load HTTP/1.1
> User-Agent: curl/7.35.0
> Host: 
> Accept: */*
> opscenter-session: xx
>
< HTTP/1.1 400 Bad Request
< Transfer-Encoding: chunked
< Date: Fri, 19 Aug 2016 20:45:36 GMT
* Server TwistedWeb/15.3.0 is not blacklisted
< Server: TwistedWeb/15.3.0
< Content-Type: application/json
< Cache-Control: max-age=0, no-cache, no-store, must-revalidate
<
* Connection #0 to host  left intact
{"message": "unsupported operand type(s) for -: 'NoneType' and
'NoneType'", "type": "TypeError"}

I could use the same session id, OpsCenter IP, and node IP for different
API calls and got responses with no problem, but no luck with any of the
metrics calls.
The error message seems to be complaining about "-", but I am not sure what
is wrong with the request I sent.

I am just trying some of the calls described here, without using any query
parameters: http://docs.datastax.com/en/opscenter/6.0/api/docs/metrics.html

Can you please advise me on what I should change to retrieve the metrics?
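One hedged guess at the 400: the "'NoneType' - 'NoneType'" message looks like the server subtracting two absent timestamps, and the OpsCenter metric endpoints take start/end epoch-seconds query parameters. A minimal sketch of building such a request with explicit times follows; the host, port, cluster id, and node IP are placeholders, and the parameter names are an assumption to verify against the metrics API docs linked above.

```python
import time
from urllib.parse import urlencode

def metrics_url(host, cluster, node, metric, minutes=60):
    """Build a metrics URL covering the last `minutes` minutes,
    passing explicit 'start' and 'end' epoch timestamps."""
    end = int(time.time())
    start = end - minutes * 60
    qs = urlencode({"start": start, "end": end})
    return f"http://{host}:8888/{cluster}/metrics/{node}/{metric}?{qs}"

print(metrics_url("opscenter.example", "IDC", "10.9.2.218", "data-load"))
```

If supplying start/end makes the 400 go away, the original request was simply missing the time range.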

Thanks,
Aoi


Re: Support/Consulting companies

2016-08-19 Thread Jim Ancona
There's also a list of companies that provide Cassandra-related services on
the wiki:

https://wiki.apache.org/cassandra/ThirdPartySupport

Jim
