Renaming Partition Key

2019-03-14 Thread Debjani Nag
Hi everyone,

   As per the documentation, we cannot rename the partition key. But the
partition key can be renamed.

  https://docs.datastax.com/en/cql/3.3/cql/cql_reference/cqlAlterTable.html

  Refer to: "Restrictions apply to the rename operation"

   When we rename the partition key, there are changes to the SSTables.

   Can someone please help me understand what issues could possibly be
caused by renaming the partition key?

Thanks


Re: good monitoring tool for cassandra

2019-03-14 Thread Rahul Singh
I wrote this last year. It's mostly still relevant --- as Jonathan said,
Prometheus+Grafana is the best "make your own hammers and nails" approach.

https://blog.anant.us/resources-for-monitoring-datastax-cassandra-spark-solr-performance/



On Thu, Mar 14, 2019 at 8:13 PM Jonathan Haddad  wrote:

> I've worked with several teams using DataDog, folks are pretty happy with
> it.  We (The Last Pickle) did the dashboards for them:
> http://thelastpickle.com/blog/2017/12/05/datadog-tlp-dashboards.html
>
> Prometheus + Grafana is great if you want to host it yourself.
>
> On Fri, Mar 15, 2019 at 12:45 PM Jeff Jirsa  wrote:
>
>>
>> -dev, +user
>>
>> Datadog worked pretty well last time I used it.
>>
>>
>> --
>> Jeff Jirsa
>>
>>
>> > On Mar 14, 2019, at 11:38 PM, Sundaramoorthy, Natarajan <
>> natarajan_sundaramoor...@optum.com> wrote:
>> >
>> > Can someone share knowledge on good monitoring tool for cassandra?
>> Thanks
>> >
>> > This e-mail, including attachments, may include confidential and/or
>> > proprietary information, and may be used only by the person or entity
>> > to which it is addressed. If the reader of this e-mail is not the
>> intended
>> > recipient or his or her authorized agent, the reader is hereby notified
>> > that any dissemination, distribution or copying of this e-mail is
>> > prohibited. If you have received this e-mail in error, please notify the
>> > sender by replying to this message and delete this e-mail immediately.
>>
>> -
>> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>> For additional commands, e-mail: user-h...@cassandra.apache.org
>>
>>
>
> --
> Jon Haddad
> http://www.rustyrazorblade.com
> twitter: rustyrazorblade
>


-- 
rahul.xavier.si...@gmail.com

http://cassandra.link


Re: good monitoring tool for cassandra

2019-03-14 Thread Jonathan Haddad
I've worked with several teams using DataDog, folks are pretty happy with
it.  We (The Last Pickle) did the dashboards for them:
http://thelastpickle.com/blog/2017/12/05/datadog-tlp-dashboards.html

Prometheus + Grafana is great if you want to host it yourself.

On Fri, Mar 15, 2019 at 12:45 PM Jeff Jirsa  wrote:

>
> -dev, +user
>
> Datadog worked pretty well last time I used it.
>
>
> --
> Jeff Jirsa
>
>
> > On Mar 14, 2019, at 11:38 PM, Sundaramoorthy, Natarajan <
> natarajan_sundaramoor...@optum.com> wrote:
> >
> > Can someone share knowledge on good monitoring tool for cassandra? Thanks
> >
>
>

-- 
Jon Haddad
http://www.rustyrazorblade.com
twitter: rustyrazorblade


Re: Cannot replace_address /10.xx.xx.xx because it doesn't exist ingossip

2019-03-14 Thread Stefan Miklosovic
It is just C* in Docker Compose with static IP addresses as long as all the
containers run. I am just killing the Cassandra process and starting it again
in each container.

On Fri, 15 Mar 2019 at 10:47, Jeff Jirsa  wrote:

> Are your IPs changing as you restart the cluster? Kubernetes or Mesos or
> something where your data gets scheduled on different machines? If so, if
> it gets an IP that was previously in the cluster, it’ll stomp on the old
> entry in the gossiper maps
>
>
>
> --
> Jeff Jirsa
>
>
> On Mar 14, 2019, at 3:42 PM, Fd Habash  wrote:
>
> I can conclusively say, none of these commands were run. However, I think
> this is  the likely scenario …
>
>
>
> If you have a cluster of three nodes 1,2,3 …
>
>- If 3 shows as DN
>- Restart C* on 1 & 2
>- Nodetool status should NOT show node 3 IP at all.
>
>
>
> Restarting the cluster while a node is down resets gossip state.
>
>
>
> There is a good chance this is what happened.
>
>
>
> Plausible?
>
>
>
> 
> Thank you
>
>
>
> *From: *Jeff Jirsa 
> *Sent: *Thursday, March 14, 2019 11:06 AM
> *To: *cassandra 
> *Subject: *Re: Cannot replace_address /10.xx.xx.xx because it doesn't
> exist ingossip
>
>
>
> Two things that wouldn't be a bug:
>
>
>
> You could have run removenode
>
> You could have run assassinate
>
>
>
> Also could be some new bug, but that's much less likely.
>
>
>
>
>
> On Thu, Mar 14, 2019 at 2:50 PM Fd Habash  wrote:
>
> I have a node which I know for certain was a cluster member last week. It
> showed in nodetool status as DN. When I attempted to replace it today, I
> got this message
>
>
>
> ERROR [main] 2019-03-14 14:40:49,208 CassandraDaemon.java:654 - Exception
> encountered during startup
>
> java.lang.RuntimeException: Cannot replace_address /10.xx.xx.xxx.xx
> because it doesn't exist in gossip
>
> at
> org.apache.cassandra.service.StorageService.prepareReplacementInfo(StorageService.java:449)
> ~[apache-cassandra-2.2.8.jar:2.2.8]
>
>
>
>
>
> DN  10.xx.xx.xx  388.43 KB  256  6.9%
> bdbd632a-bf5d-44d4-b220-f17f258c4701  1e
>
>
>
> Under what conditions does this happen?
>
>
>
>
>
> 
> Thank you
>
>
>
>
>
>

-- 


Stefan Miklosovic
Senior Software Engineer

M: +61459911436

Read our latest technical blog posts here.

This email has been sent on behalf of Instaclustr Pty. Limited (Australia)
and Instaclustr Inc (USA).

This email and any attachments may contain confidential and legally
privileged information.  If you are not the intended recipient, do not copy
or disclose its content, but please reply to this email immediately and
highlight the error to the sender and then immediately delete the message.

Instaclustr values your privacy. Our privacy policy can be found at
https://www.instaclustr.com/company/policies/privacy-policy


Re: Cannot replace_address /10.xx.xx.xx because it doesn't exist ingossip

2019-03-14 Thread Jeff Jirsa
Are your IPs changing as you restart the cluster? Kubernetes or Mesos or 
something where your data gets scheduled on different machines? If so, if it 
gets an IP that was previously in the cluster, it’ll stomp on the old entry in 
the gossiper maps



-- 
Jeff Jirsa


> On Mar 14, 2019, at 3:42 PM, Fd Habash  wrote:
> 
> I can conclusively say, none of these commands were run. However, I think 
> this is  the likely scenario …
>  
> If you have a cluster of three nodes 1,2,3 …
> If 3 shows as DN
> Restart C* on 1 & 2
> Nodetool status should NOT show node 3 IP at all.
>  
> Restarting the cluster while a node is down resets gossip state.
>  
> There is a good chance this is what happened.
>  
> Plausible?
>  
> 
> Thank you
>  
> From: Jeff Jirsa
> Sent: Thursday, March 14, 2019 11:06 AM
> To: cassandra
> Subject: Re: Cannot replace_address /10.xx.xx.xx because it doesn't exist 
> ingossip
>  
> Two things that wouldn't be a bug:
>  
> You could have run removenode
> You could have run assassinate
>  
> Also could be some new bug, but that's much less likely. 
>  
>  
> On Thu, Mar 14, 2019 at 2:50 PM Fd Habash  wrote:
> I have a node which I know for certain was a cluster member last week. It 
> showed in nodetool status as DN. When I attempted to replace it today, I got 
> this message
>  
> ERROR [main] 2019-03-14 14:40:49,208 CassandraDaemon.java:654 - Exception 
> encountered during startup
> java.lang.RuntimeException: Cannot replace_address /10.xx.xx.xxx.xx because 
> it doesn't exist in gossip
> at 
> org.apache.cassandra.service.StorageService.prepareReplacementInfo(StorageService.java:449)
>  ~[apache-cassandra-2.2.8.jar:2.2.8]
>  
>  
> DN  10.xx.xx.xx  388.43 KB  256  6.9%  
> bdbd632a-bf5d-44d4-b220-f17f258c4701  1e
>  
> Under what conditions does this happen?
>  
>  
> 
> Thank you
>  
>  


Re: good monitoring tool for cassandra

2019-03-14 Thread Jeff Jirsa


-dev, +user

Datadog worked pretty well last time I used it.


-- 
Jeff Jirsa


> On Mar 14, 2019, at 11:38 PM, Sundaramoorthy, Natarajan 
>  wrote:
> 
> Can someone share knowledge on good monitoring tool for cassandra? Thanks
> 



Re: Cannot replace_address /10.xx.xx.xx because it doesn't exist ingossip

2019-03-14 Thread Stefan Miklosovic
Hi Fd,

I tried this on a 3-node cluster. I killed node2; both node1 and node3
reported node2 as DN. Then I killed node1 and node3, restarted them, and
node2 was reported like this:

[root@spark-master-1 /]# nodetool status
Datacenter: DC1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens   Owns (effective)  Host ID
 Rack
DN  172.19.0.8  ?  256  64.0%
 bd75a5e2-2890-44c5-8f7a-fca1b4ce94ab  r1
Datacenter: dc1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens   Owns (effective)  Host ID
 Rack
UN  172.19.0.5  382.75 KiB  256  64.4%
 2a062140-2428-4092-b48b-7495d083d7f9  rack1
UN  172.19.0.9  171.41 KiB  256  71.6%
 9590b791-ad53-4b5a-b4c7-b00408ed02dd  rack3

Prior to the killing of node1 and node3, node2 was indeed marked as DN, but
it was part of the "Datacenter: dc1" output where both node1 and node3 were.

But after killing both node1 and node3 (so the cluster was totally down)
and restarting them, node2 was reported as above.

I do not know what the difference is here. Is gossip data stored somewhere
on disk? I would say so, otherwise there is no way node1/node3 could report
that node2 is down, but at the same time I don't get why it is "out of the
list" where node1 and node3 are.


On Fri, 15 Mar 2019 at 02:42, Fd Habash  wrote:

> I can conclusively say, none of these commands were run. However, I think
> this is  the likely scenario …
>
>
>
> If you have a cluster of three nodes 1,2,3 …
>
>- If 3 shows as DN
>- Restart C* on 1 & 2
>- Nodetool status should NOT show node 3 IP at all.
>
>
>
> Restarting the cluster while a node is down resets gossip state.
>
>
>
> There is a good chance this is what happened.
>
>
>
> Plausible?
>
>
>
> 
> Thank you
>
>
>
> *From: *Jeff Jirsa 
> *Sent: *Thursday, March 14, 2019 11:06 AM
> *To: *cassandra 
> *Subject: *Re: Cannot replace_address /10.xx.xx.xx because it doesn't
> exist ingossip
>
>
>
> Two things that wouldn't be a bug:
>
>
>
> You could have run removenode
>
> You could have run assassinate
>
>
>
> Also could be some new bug, but that's much less likely.
>
>
>
>
>
> On Thu, Mar 14, 2019 at 2:50 PM Fd Habash  wrote:
>
> I have a node which I know for certain was a cluster member last week. It
> showed in nodetool status as DN. When I attempted to replace it today, I
> got this message
>
>
>
> ERROR [main] 2019-03-14 14:40:49,208 CassandraDaemon.java:654 - Exception
> encountered during startup
>
> java.lang.RuntimeException: Cannot replace_address /10.xx.xx.xxx.xx
> because it doesn't exist in gossip
>
> at
> org.apache.cassandra.service.StorageService.prepareReplacementInfo(StorageService.java:449)
> ~[apache-cassandra-2.2.8.jar:2.2.8]
>
>
>
>
>
> DN  10.xx.xx.xx  388.43 KB  256  6.9%
> bdbd632a-bf5d-44d4-b220-f17f258c4701  1e
>
>
>
> Under what conditions does this happen?
>
>
>
>
>
> 
> Thank you
>
>
>
>
>

Stefan Miklosovic


Re: Inconsistent results after restore with Cassandra 3.11.1

2019-03-14 Thread sandeep nethi
CQL select queries are returning 0 rows even though the data is actually
available in the SSTables.

But when I load/restore the same data with sstableloader, the data can be
queried without any issues.

I am using NetworkTopologyStrategy for all keyspaces.

Thanks


On Fri, 15 Mar 2019 at 12:11 PM, Rahul Singh 
wrote:

> Can you define "inconsistent" results.. ? What's the topology of the
> cluster? What were you expecting and what did you get?
>
> On Thu, Mar 14, 2019 at 7:09 AM sandeep nethi 
> wrote:
>
>> Hello,
>>
>> Does anyone experience inconsistent results after restoring Cassandra
>> 3.11.1 with refresh command? Was there any bug in this version of
>> cassandra??
>>
>> Thanks in advance.
>>
>> Regards,
>> Sandeep
>>
>


Re: Adding New Column with Default Value

2019-03-14 Thread Rahul Singh
Spark. Alter the table to add the column, then run a Spark job to scan your
table and set a value for it.

val myKeyspace = "pinch"
val myTable = "hitter"

def updateColumns(row: CassandraRow): CassandraRow = {
  val inputMap = row.toMap
  val newData = Map("newColumn" -> "somevalue")
  val outputMap = inputMap ++ newData
  CassandraRow.fromMap(outputMap)
}

val result = sc.cassandraTable(myKeyspace, myTable)
  .map(updateColumns(_))
  .saveToCassandra(myKeyspace, myTable)

Miraculously, the same code could be used to move/copy data from one table
to another, with a small modification: just save to a different table than
the one you read from.


On Thu, Mar 14, 2019 at 12:57 AM kumar bharath 
wrote:

> Hi ,
>
> Can anyone suggest the best possible way to add a new column to an
> existing table with a default value?
>
> Column family size: 60 million single-partition records.
>
> Thanks,
> Bharath Kumar B
>


Re: update manually rows in cassandra

2019-03-14 Thread Rahul Singh
CQL supports JSON in and out from the Cassandra table, but if your JSON in
the table is a string, then you need to update it as a string.

https://docs.datastax.com/en/cql/3.3/cql/cql_using/useInsertJSON.html
https://docs.datastax.com/en/cql/3.3/cql/cql_using/useQueryJSON.html

What's the schema of the table?
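For illustration, an INSERT ... JSON statement can be built straight from a
Python dict; this is a minimal sketch, and the keyspace, table, and columns
below are made up:

```python
import json

def insert_json_stmt(keyspace: str, table: str, row: dict) -> str:
    """Build a CQL INSERT ... JSON statement for one row.

    The payload is serialized with json.dumps; single quotes inside the
    payload are doubled so it stays a valid CQL string literal.
    """
    payload = json.dumps(row).replace("'", "''")
    return f"INSERT INTO {keyspace}.{table} JSON '{payload}';"

# Hypothetical table and row, just to show the statement shape.
stmt = insert_json_stmt("app", "users", {"id": 1, "name": "Ada"})
print(stmt)
```

The same shape works in reverse with SELECT JSON, per the docs linked above.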


On Wed, Mar 13, 2019 at 5:09 PM Sundaramoorthy, Natarajan <
natarajan_sundaramoor...@optum.com> wrote:

> Update json data with the correct value from file. Thanks
>
>
>
>
>
> *Natarajan Sundaramoorthy*
>
> PaaS Engineering and Automation
>
> Desk - 763-744-1854
>
> Email - *natarajan_sundaramoor...@optum.com
> *
>
>
>
>
>
>
>
>
>
>
> *From:* Dieudonné Madishon NGAYA [mailto:dmng...@gmail.com]
> *Sent:* Wednesday, March 13, 2019 3:44 PM
> *To:* user@cassandra.apache.org
> *Subject:* Re: update manually rows in cassandra
>
>
>
> Hi ,
>
> In your case, do you want to insert json file data into cassandra ?
>
>
>
> Best regards
>
> _
>
>
>
> *Dieudonne Madishon NGAYA*
> Datastax, Cassandra Architect
> *P: *7048580065
> *w: *www.dmnbigdata.com
> *E: *dmng...@dmnbigdata.com
> *Private E: *dmng...@gmail.com
> *A: *Charlotte,NC,28273, USA
>
>
>
>
>
>
>
> On Wed, Mar 13, 2019 at 4:04 PM Sundaramoorthy, Natarajan <
> natarajan_sundaramoor...@optum.com> wrote:
>
>
>
> Something got goofed up in the database. The data is in JSON format. We
> have data in a file; we have to match the data in the file and update the
> database. Can you please tell me how to do it? New to Cassandra. Thanks
>
>
>
>
>


Re: Inconsistent results after restore with Cassandra 3.11.1

2019-03-14 Thread Rahul Singh
Can you define "inconsistent" results? What's the topology of the
cluster? What were you expecting, and what did you get?

On Thu, Mar 14, 2019 at 7:09 AM sandeep nethi 
wrote:

> Hello,
>
> Does anyone experience inconsistent results after restoring Cassandra
> 3.11.1 with refresh command? Was there any bug in this version of
> cassandra??
>
> Thanks in advance.
>
> Regards,
> Sandeep
>


Re: [EXTERNAL] Re: Migrate large volume of data from one table to another table within the same cluster when COPY is not an option.

2019-03-14 Thread Rahul Singh
Adding to Stefan's comment: there is a ScyllaDB migrator, which uses the
Spark connector from DataStax and theoretically can work on any
Cassandra-compliant DB; it should not be limited to Cassandra-to-Scylla.

https://www.scylladb.com/2019/02/07/moving-from-cassandra-to-scylla-via-apache-spark-scylla-migrator/

https://github.com/scylladb/scylla-migrator

On Thu, Mar 14, 2019 at 3:04 PM Durity, Sean R 
wrote:

> The possibility of a highly available way to do this gives more
> challenges. I would be weighing the cost of a complex solution vs the
> possibility of a maintenance window when you stop your app to move the
> data, then restart.
>
>
>
> For the straight copy of the data, I am currently enamored with DataStax’s
> dsbulk utility for unloading and loading larger amounts of data. I don’t
> have extensive experience, yet, but it has been fast enough in my
> experiments – and that is without doing too much tuning for speed. From a
> host not in the cluster, I was able to extract 3.5 million rows in about 11
> seconds. I inserted them into a differently partitioned table in about 26
> seconds. Very small data rows, but it was impressive for not doing much to
> try and speed it up further. (In some other tests, it was about ¼ the time
> of simple copy statement from cqlsh)
>
>
>
> If I was designing something for a “can’t take an outage” scenario, I
> would start with:
>
> -  Writing the data to the old and new tables on all inserts
>
> -  On reads, read from the new table first. If not there, read
> from the old table ß could introduce some latency, but would be
> available; could also do asynchronous reads on both tables and choose the
> latest
>
> -  Do this until the data has been copied from old to new (with
> dsbulk or custom code or Spark)
>
> -  Drop the double writes and conditional reads
>
>
>
>
>
> Sean
>
>
>
> *From:* Stefan Miklosovic 
> *Sent:* Wednesday, March 13, 2019 6:39 PM
> *To:* user@cassandra.apache.org
> *Subject:* Re: [EXTERNAL] Re: Migrate large volume of data from one table
> to another table within the same cluster when COPY is not an option.
>
>
>
> Hi Leena,
>
>
>
> as already suggested in my previous email, you could use Apache Spark and
> Cassandra Spark connector (1). I have checked TTLs and I believe you should
> especially read this section (2) about TTLs. Seems like thats what you need
> to do, ttls per row. The workflow would be that you read from your source
> table, making transformations per row (via some mapping) and then you would
> save it to new table.
>
>
>
> This would import it "all" but until you switch to the new table and
> records are still being saved into the original one, I am not sure how to
> cover "the gap" in such sense that once you make the switch, you would miss
> records which were created in the first table after you did the loading.
> You could maybe leverage Spark streaming (Cassandra connector knows that
> too) so you would make this transformation on the fly with new ones.
>
>
>
> (1) https://github.com/datastax/spark-cassandra-connector
> 
>
> (2)
> https://github.com/datastax/spark-cassandra-connector/blob/master/doc/5_saving.md#using-a-different-value-for-each-row
> 
>
>
>
>
>
> On Thu, 14 Mar 2019 at 00:13, Leena Ghatpande 
> wrote:
>
> Understand, 2nd table would be a better approach. So what would be the
> best way to copy 70M rows from current table to the 2nd table with ttl set
> on each record as the first table?
>
>
> --
>
> *From:* Durity, Sean R 
> *Sent:* Wednesday, March 13, 2019 8:17 AM
> *To:* user@cassandra.apache.org
> *Subject:* RE: [EXTERNAL] Re: Migrate large volume of data from one table
> to another table within the same cluster when COPY is not an option.
>
>
>
> Correct, there is no current flag. I think there SHOULD be one.
>
>
>
>
>
> *From:* Dieudonné Madishon NGAYA 
> *Sent:* Tuesday, March 12, 2019 7:17 PM
> *To:* user@cassandra.apache.org
> *Subject:* [EXTERNAL] Re: Migrate large volume of data from one table to
> another table within the same cluster when COPY is not an option.
>
>
>
> Hi Sean, you can't set a flag in cassandra.yaml to disallow ALLOW
> FILTERING; the only thing you can do will be from your data model.
>
> Don't ask Cassandra to query all data from a table; the ideal query will
> be using a single partition.

RE: To Repair or Not to Repair

2019-03-14 Thread Nick Hatfield
Beautiful, thank you very much!

From: Jonathan Haddad [mailto:j...@jonhaddad.com]
Sent: Thursday, March 14, 2019 4:55 PM
To: user 
Subject: Re: To Repair or Not to Repair

My coworker Alex (from The Last Pickle) wrote an in depth blog post on TWCS.  
We recommend not running repair on tables that use TWCS.

http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html

It's enough of a problem that we added a feature into Reaper to auto-blacklist 
TWCS / DTCS tables from being repaired, we wrote about it here: 
http://thelastpickle.com/blog/2019/02/15/reaper-1_4-released.html

Hope this helps!
Jon

On Fri, Mar 15, 2019 at 9:48 AM Nick Hatfield 
mailto:nick.hatfi...@metricly.com>> wrote:
It seems that running a repair works really well, quickly and efficiently, when
repairing a column family that does not use TWCS. Has anyone else had a similar
experience? Wondering if running repairs on TWCS tables is doing more harm than
good, as it chews up a lot of CPU for extended periods of time in comparison to
CFs with a compaction strategy of STCS.


Thanks,


--
Jon Haddad
http://www.rustyrazorblade.com
twitter: rustyrazorblade


Re: To Repair or Not to Repair

2019-03-14 Thread Jonathan Haddad
My coworker Alex (from The Last Pickle) wrote an in depth blog post on
TWCS.  We recommend not running repair on tables that use TWCS.

http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html

It's enough of a problem that we added a feature into Reaper to
auto-blacklist TWCS / DTCS tables from being repaired, we wrote about it
here: http://thelastpickle.com/blog/2019/02/15/reaper-1_4-released.html

Hope this helps!
Jon

On Fri, Mar 15, 2019 at 9:48 AM Nick Hatfield 
wrote:

> It seems that running a repair works really well, quickly and efficiently
> when repairing a column family that does not use TWCS. Has anyone else had
> a similar experience? Wondering if running TWCS is doing more harm than
> good as it chews up a lot of cpu and for extended periods of time in
> comparison to CF’s with a compaction strategy of STCS
>
>
>
>
>
> Thanks,
>


-- 
Jon Haddad
http://www.rustyrazorblade.com
twitter: rustyrazorblade


To Repair or Not to Repair

2019-03-14 Thread Nick Hatfield
It seems that running a repair works really well, quickly and efficiently, when
repairing a column family that does not use TWCS. Has anyone else had a similar
experience? Wondering if running repairs on TWCS tables is doing more harm than
good, as it chews up a lot of CPU for extended periods of time in comparison to
CFs with a compaction strategy of STCS.


Thanks,


RE: [EXTERNAL] Re: Default TTL on CF

2019-03-14 Thread Durity, Sean R
I spent a month of my life on a similar problem... There wasn't an easy answer,
but this is what I did:

#1 - Stop the problem from growing further. Get new inserts using a TTL (or set
the default on the table so they get it). The app team had to do this one.
#2 - Delete any data that should already be expired.
- In my case the partition key included a date in the composite string they had
built, so I could tell from the partition key whether the data needed to be
deleted. I used sstablekeys to get the keys into files on each host. Then I
parsed the files and created deletes for only those expired records. Then I
executed the deletes. Then I had to do some compaction to actually reclaim disk
space. A long process with hundreds of billions of records...
#3 - Add a TTL to data that should live. I gave this to the app team. Using the
extracted keys I gave them, they could calculate the proper TTL. They read the
data with the key, calculated the TTL, and rewrote the data with the TTL. Long,
boring, etc., but they did it.
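As a rough sketch of what #2 and #3 amount to — the "id_YYYYMMDD" key format,
keyspace/table names, and retention policy below are all hypothetical stand-ins
for the real ones:

```python
from datetime import datetime, timedelta

def classify_keys(keys, now, keep_days, keyspace="ks", table="cf"):
    """Split sstablekeys output into DELETE statements for expired rows
    and (key, remaining-TTL-seconds) pairs for rows that should live.

    Assumes each partition key is a composite string ending in _YYYYMMDD.
    """
    deletes, to_rettl = [], []
    cutoff = now - timedelta(days=keep_days)
    for key in keys:
        day = datetime.strptime(key.rsplit("_", 1)[1], "%Y%m%d")
        if day < cutoff:
            # Step #2: row is already past its retention window.
            deletes.append(
                f"DELETE FROM {keyspace}.{table} WHERE pkey = '{key}';")
        else:
            # Step #3: compute the TTL the app team would rewrite with.
            remaining = int((day + timedelta(days=keep_days) - now)
                            .total_seconds())
            to_rettl.append((key, remaining))
    return deletes, to_rettl

now = datetime(2019, 3, 14)
deletes, live = classify_keys(["a_20190101", "b_20190310"], now, keep_days=30)
print(deletes)   # one DELETE for the expired key a_20190101
print(live)      # b_20190310 with its remaining TTL in seconds
```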



Sean Durity

-Original Message-
From: Jeff Jirsa 
Sent: Thursday, March 14, 2019 9:30 AM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Re: Default TTL on CF

SSTableReader and CQLSSTableWriter if you’re comfortable with Java


--
Jeff Jirsa


> On Mar 14, 2019, at 1:28 PM, Nick Hatfield  wrote:
>
> Bummer, but reasonable. Any cool tricks I could use to make that process
> easier? I have many TB of data on a live cluster and was hoping to start
> cleaning out the earlier bad habits of data housekeeping.
>
>> On 3/14/19, 9:24 AM, "Jeff Jirsa"  wrote:
>>
>> It does not impact existing data
>>
>> The data gets an expiration time stamp when you write it. Changing the
>> default only impacts newly written data
>>
>> If you need to change the expiration time on existing data, you must
>> update it
>>
>>
>> --
>> Jeff Jirsa
>>
>>
>>> On Mar 14, 2019, at 1:16 PM, Nick Hatfield 
>>> wrote:
>>>
>>> Hello,
>>>
>>> Can anyone tell me if setting a default TTL will affect existing data?
>>> I would like to enable a default TTL and have cassandra add that TTL to
>>> any rows that don't currently have a TTL set.
>>>
>>> Thanks,
>>
>> -
>> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>> For additional commands, e-mail: user-h...@cassandra.apache.org
>>
>>
>
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org




The information in this Internet Email is confidential and may be legally 
privileged. It is intended solely for the addressee. Access to this Email by 
anyone else is unauthorized. If you are not the intended recipient, any 
disclosure, copying, distribution or any action taken or omitted to be taken in 
reliance on it, is prohibited and may be unlawful. When addressed to our 
clients any opinions or advice contained in this Email are subject to the terms 
and conditions expressed in any applicable governing The Home Depot terms of 
business or client engagement letter. The Home Depot disclaims all 
responsibility and liability for the accuracy and content of this attachment 
and for any damages or losses arising from any inaccuracies, errors, viruses, 
e.g., worms, trojan horses, etc., or other items of a destructive nature, which 
may be contained in this attachment and shall not be liable for direct, 
indirect, consequential or special damages in connection with this e-mail 
message or its attachment.

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org


RE: [EXTERNAL] Re: Migrate large volume of data from one table to another table within the same cluster when COPY is not an option.

2019-03-14 Thread Durity, Sean R
The possibility of a highly available way to do this gives more challenges. I 
would be weighing the cost of a complex solution vs the possibility of a 
maintenance window when you stop your app to move the data, then restart.

For the straight copy of the data, I am currently enamored with DataStax’s 
dsbulk utility for unloading and loading larger amounts of data. I don’t have 
extensive experience, yet, but it has been fast enough in my experiments – and 
that is without doing too much tuning for speed. From a host not in the 
cluster, I was able to extract 3.5 million rows in about 11 seconds. I inserted 
them into a differently partitioned table in about 26 seconds. Very small data 
rows, but it was impressive for not doing much to try and speed it up further. 
(In some other tests, it was about ¼ the time of simple copy statement from 
cqlsh)

If I was designing something for a “can’t take an outage” scenario, I would 
start with:

-  Writing the data to the old and new tables on all inserts

-  On reads, read from the new table first. If not there, read from the 
old table <-- could introduce some latency, but would be available; could also 
do asynchronous reads on both tables and choose the latest

-  Do this until the data has been copied from old to new (with dsbulk 
or custom code or Spark)

-  Drop the double writes and conditional reads
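As a toy sketch of that scheme — plain dicts standing in for the two tables,
and all names made up:

```python
def dual_write(old_table, new_table, key, value):
    # During the migration window, every insert goes to both tables.
    old_table[key] = value
    new_table[key] = value

def migration_read(old_table, new_table, key):
    # Read the new table first; fall back to the old one. A miss on
    # both means the row never existed (or was deleted in both).
    if key in new_table:
        return new_table[key]
    return old_table.get(key)

old, new = {"a": 1}, {}          # "a" predates the migration
dual_write(old, new, "b", 2)     # new writes land in both tables
assert migration_read(old, new, "a") == 1   # served from the old table
assert migration_read(old, new, "b") == 2   # served from the new table
```

Once the backfill (dsbulk, custom code, or Spark) has copied everything, the
read path collapses back to querying only the new table.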


Sean

From: Stefan Miklosovic 
Sent: Wednesday, March 13, 2019 6:39 PM
To: user@cassandra.apache.org
Subject: Re: [EXTERNAL] Re: Migrate large volume of data from one table to 
another table within the same cluster when COPY is not an option.

Hi Leena,

as already suggested in my previous email, you could use Apache Spark and 
Cassandra Spark connector (1). I have checked TTLs and I believe you should 
especially read this section (2) about TTLs. Seems like thats what you need to 
do, ttls per row. The workflow would be that you read from your source table, 
making transformations per row (via some mapping) and then you would save it to 
new table.

This would import it "all" but until you switch to the new table and records 
are still being saved into the original one, I am not sure how to cover "the 
gap" in such sense that once you make the switch, you would miss records which 
were created in the first table after you did the loading. You could maybe 
leverage Spark streaming (Cassandra connector knows that too) so you would make 
this transformation on the fly with new ones.

(1) 
https://github.com/datastax/spark-cassandra-connector
(2) 
https://github.com/datastax/spark-cassandra-connector/blob/master/doc/5_saving.md#using-a-different-value-for-each-row


On Thu, 14 Mar 2019 at 00:13, Leena Ghatpande 
mailto:lghatpa...@hotmail.com>> wrote:
Understand, 2nd table would be a better approach. So what would be the best way 
to copy 70M rows from current table to the 2nd table with ttl set on each 
record as the first table?


From: Durity, Sean R 
mailto:sean_r_dur...@homedepot.com>>
Sent: Wednesday, March 13, 2019 8:17 AM
To: user@cassandra.apache.org
Subject: RE: [EXTERNAL] Re: Migrate large volume of data from one table to 
another table within the same cluster when COPY is not an option.


Correct, there is no current flag. I think there SHOULD be one.





From: Dieudonné Madishon NGAYA mailto:dmng...@gmail.com>>
Sent: Tuesday, March 12, 2019 7:17 PM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Re: Migrate large volume of data from one table to another 
table within the same cluster when COPY is not an option.



Hi Sean, there is no flag in cassandra.yaml to disallow ALLOW FILTERING; the 
only control you have is your data model.

Don't ask Cassandra to query all the data in a table; the ideal query targets a 
single partition.
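
In other words, the fix lives in the data model rather than in a server flag. A
toy illustration (hypothetical tables):

```
-- anti-pattern: forces a scan across the whole cluster
SELECT * FROM ks.events WHERE status = 'failed' ALLOW FILTERING;

-- ideal: the partition key bounds the query to a single partition
SELECT * FROM ks.events_by_device
WHERE device_id = 42 AND event_time > '2019-03-01';
```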



On Tue, Mar 12, 2019 at 6:46 PM Stefan Miklosovic 
mailto:stefan.mikloso...@instaclustr.com>> 
wrote:

Hi Sean,



for sure, the best approach would be to create another table which would treat 
just that specific query.



How do I set the flag for not allowing allow filtering in cassandra.yaml? I 
read a doco and there seems to be nothing about that.



Regards



On Wed, 13 Mar 2019 at 06:57, Durity, Sean R 
mailto:sean_r_dur...@homedepot.com>> 

RE: Cannot replace_address /10.xx.xx.xx because it doesn't exist ingossip

2019-03-14 Thread Fd Habash
I can conclusively say, none of these commands were run. However, I think this 
is  the likely scenario …

If you have a cluster of three nodes 1,2,3 …
- If 3 shows as DN
- Restart C* on 1 & 2
- Nodetool status should NOT show node 3 IP at all.

Restarting the cluster while a node is down resets gossip state. 

There is a good chance this is what happened. 

Plausible? 


Thank you

From: Jeff Jirsa
Sent: Thursday, March 14, 2019 11:06 AM
To: cassandra
Subject: Re: Cannot replace_address /10.xx.xx.xx because it doesn't exist 
ingossip

Two things that wouldn't be a bug:

You could have run removenode
You could have run assassinate

Also could be some new bug, but that's much less likely. 


On Thu, Mar 14, 2019 at 2:50 PM Fd Habash  wrote:
I have a node which I know for certain was a cluster member last week. It 
showed in nodetool status as DN. When I attempted to replace it today, I got 
this message 
 
ERROR [main] 2019-03-14 14:40:49,208 CassandraDaemon.java:654 - Exception 
encountered during startup
java.lang.RuntimeException: Cannot replace_address /10.xx.xx.xxx.xx because it 
doesn't exist in gossip
    at 
org.apache.cassandra.service.StorageService.prepareReplacementInfo(StorageService.java:449)
 ~[apache-cassandra-2.2.8.jar:2.2.8]
 
 
DN  10.xx.xx.xx  388.43 KB  256  6.9%  
bdbd632a-bf5d-44d4-b220-f17f258c4701  1e
 
Under what conditions does this happen?
 
 

Thank you
 



Re: AxonOps - Cassandra operational management tool

2019-03-14 Thread AxonOps
Thank you for taking the time for writing this great feedback. I've
commented in-line to yours.

Hayato

On Wed, 13 Mar 2019 at 01:04, Rahul Singh 
wrote:

> Nice.. Good to see the community producing tools around the Cassandra
> product.
>
> Few pieces of feedback
>
> *Kudos*
> 1. Glad that you are doing it
>
Thanks! We're having fun doing it.


> 2. Looks great
>
Thanks, there's more to come :-)


> 3. Willing to try it out if you find this guy called "Free Time" for me :)
>
I'm still looking for this guy.



>
> *Criticism*
> 1. It mimics a lot of stack components that are out there.. Though I agree
> with you that the prometheus/grafana/etc stack is difficult to get running,
> I look at
> https://docs.scylladb.com/operating-scylla/monitoring/monitoring_stack/
> and give them kudos for just making a simple tool to leverage what's there.
> Even DSE is now drinking the Prometheus Kool-Aid
> https://www.datastax.com/2018/12/improved-performance-diagnostics-with-datastax-metrics-collector
>
>

We definitely took inspiration from the tooling stack that we have been
using, including Grafana. As for drinking the Prometheus Kool-Aid, our
server exposes a Prometheus-compatible query API. This is currently
undocumented, but we intend to add a section to our docs. It means you can
point your Grafana at the AxonOps server API and integrate with your
existing dashboards.

The way Prometheus "scrapes" metrics was where we wanted to be different
with AxonOps. Prometheus servers must co-exist in the same private network
as Cassandra servers in order to scrape the agent HTTP endpoints. There is
Prometheus Push Gateway but that is yet another component to add to our
monitoring stack. AxonOps agent initiates the connection to server using
HTTPS/WebSocket, meaning AxonOps server can reside in an entirely different
network. Just to experiment how far we can take this, we tested the agent
connecting through an outbound web proxy -> internet -> Cloudflare edge
load balancer -> AWS load balancer -> VPC -> nginx (F5!!) -> AxonOps
server. It worked very well with this setup.

This single socket HTTPS/WebSocket connection between the server and the
agent provides a bi-directional communication, where repair/backup/config
update commands traverse back the same connection as the metrics/logs etc.

If you have a multi-cloud/hybrid Cassandra deployment (say, a cluster
spanning GCP, on-premises, and AWS), it is entirely feasible to have the
AxonOps server hosted in Azure without VPN connections to
GCP/AWS/on-premises.



> 2. Given a choice of making something on my own (1), using a "stack"
> approach similar to Scylla (2), buying something that DSE produces (3), or
> buying AxonOps (4), the challenge for a practitioner will be whether the cost
> offsets the effective pains of options 1 (more time), 2 (less time), 3 (money)
>
You can install it from our APT/YUM repos without charge, with each server
instance supporting up to 6 Cassandra nodes. We believe those people with
production clusters of this size, of which there are many, will find this
tool useful.


>
> "It is not the critic who counts; not the man who points out how the
> strong man stumbles, or where the doer of deeds could have done them
> better. ... It is the man in the arena" - Teddy Roosevelt
>
> Keep playing in the Arena and looking forward to updates!
>
>
>
>
>
> On Wed, Mar 6, 2019 at 10:15 AM AxonOps  wrote:
>
>> Hi Kenneth,
>>
>> We are already using AxonOps on a number of production clusters, and we're
>> continuously improving it, so we've got a good level of comfort and
>> confidence in the product with our own customers.
>>
>> In terms of our recommendations on the upper bound of the cluster size,
>> we do not know yet. The biggest resource consumer is Elasticsearch, which
>> stores all the data. The free version supports up to 6 nodes, and
>> AxonOps can easily support this.
>>
>> You can already install the product from our APT or YUM repos. The
>> installation instructions are available here - https://docs.axonops.com
>>
>> Hayato
>>
>>
>> On Tue, 5 Mar 2019 at 20:44, Kenneth Brotman 
>> wrote:
>>
>>> Hayato,
>>>
>>>
>>>
>>> I agree with what you are addressing. I've always thought the big
>>> elephant in the room regarding Cassandra was that you had to use all these
>>> other tools, each of which requires updates and configuration changes, so
>>> too much attention had to be paid to those tools instead of what
>>> you're trying to accomplish; it all could have been centralized or
>>> internalized, and clearly that was quite doable.
>>>
>>>
>>>
>>> Questions regarding where things are at:
>>>
>>>
>>>
>>> Are you using AxonOps in any of your clients Apache Cassandra production
>>> clusters?
>>>
>>>
>>>
>>> What is the largest Cassandra cluster in which you use it?
>>>
>>>
>>>
>>> Would you recommend NOT using AxonOps on production clusters for now or
>>> do you co

Re: Default TTL on CF

2019-03-14 Thread Nick Hatfield
Awesome! Thank you!

On 3/14/19, 9:29 AM, "Jeff Jirsa"  wrote:

>SSTableReader and CQLSSTableWriter if you’re comfortable with Java
>
>
>-- 
>Jeff Jirsa
>
>
>> On Mar 14, 2019, at 1:28 PM, Nick Hatfield 
>>wrote:
>> 
>> Bummer but, reasonable. Any cool tricks I could use to make that process
>> easier? I have many TB of data on a live cluster and was hoping to
>> start cleaning up the earlier bad habits of data housekeeping
>> 
>>> On 3/14/19, 9:24 AM, "Jeff Jirsa"  wrote:
>>> 
>>> It does not impact existing data
>>> 
>>> The data gets an expiration time stamp when you write it. Changing the
>>> default only impacts newly written data
>>> 
>>> If you need to change the expiration time on existing data, you must
>>> update it
>>> 
>>> 
>>> -- 
>>> Jeff Jirsa
>>> 
>>> 
 On Mar 14, 2019, at 1:16 PM, Nick Hatfield

 wrote:
 
 Hello,
 
 Can anyone tell me if setting a default TTL will affect existing data?
 I would like to enable a default TTL and have cassandra add that TTL
to
 any rows that don't currently have a TTL set.
 
 Thanks,
>>> 
>>> -
>>> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>>> For additional commands, e-mail: user-h...@cassandra.apache.org
>>> 
>>> 
>> 
>> 
>> 
>> 
>
>
>




Re: Cannot replace_address /10.xx.xx.xx because it doesn't exist in gossip

2019-03-14 Thread Jeff Jirsa
Two things that wouldn't be a bug:

You could have run removenode
You could have run assassinate

Also could be some new bug, but that's much less likely.
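
For reference, the two commands in question (the host ID placeholder comes from
`nodetool status`; `assassinate` is the forceful last resort):

```
nodetool removenode <host-id>      # removes a dead node and re-replicates its data
nodetool assassinate 10.xx.xx.xx   # wipes the endpoint from gossip without re-replication
```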


On Thu, Mar 14, 2019 at 2:50 PM Fd Habash  wrote:

> I have a node which I know for certain was a cluster member last week. It
> showed in nodetool status as DN. When I attempted to replace it today, I
> got this message
>
>
>
> ERROR [main] 2019-03-14 14:40:49,208 CassandraDaemon.java:654 - Exception
> encountered during startup
>
> java.lang.RuntimeException: Cannot replace_address /10.xx.xx.xxx.xx
> because it doesn't exist in gossip
>
> at
> org.apache.cassandra.service.StorageService.prepareReplacementInfo(StorageService.java:449)
> ~[apache-cassandra-2.2.8.jar:2.2.8]
>
>
>
>
>
> DN  10.xx.xx.xx  388.43 KB  256  6.9%
> bdbd632a-bf5d-44d4-b220-f17f258c4701  1e
>
>
>
> Under what conditions does this happen?
>
>
>
>
>
> 
> Thank you
>
>
>


Cannot replace_address /10.xx.xx.xx because it doesn't exist in gossip

2019-03-14 Thread Fd Habash
I have a node which I know for certain was a cluster member last week. It 
showed in nodetool status as DN. When I attempted to replace it today, I got 
this message 

ERROR [main] 2019-03-14 14:40:49,208 CassandraDaemon.java:654 - Exception 
encountered during startup
java.lang.RuntimeException: Cannot replace_address /10.xx.xx.xxx.xx because it 
doesn't exist in gossip
at 
org.apache.cassandra.service.StorageService.prepareReplacementInfo(StorageService.java:449)
 ~[apache-cassandra-2.2.8.jar:2.2.8]


DN  10.xx.xx.xx  388.43 KB  256  6.9%  
bdbd632a-bf5d-44d4-b220-f17f258c4701  1e

Under what conditions does this happen?



Thank you




Re: Query failure

2019-03-14 Thread Léo FERLIN SUTTON
I checked and the configuration file matched on all the nodes.

I ran `cqlsh --cqlversion "3.4.0" -u cassandra_superuser -p
my_password nodeXX 9042` against each node, and finally one failed.

It had somehow not been restarted since the configuration change. It was
not responsive to `systemctl stop/start/restart cassandra`, but once I
finally got it to restart, my issues disappeared.
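
For anyone hitting this later, the node-by-node check can be scripted (a
sketch; hostnames and credentials are placeholders):

```
for host in node01 node02 node03; do
  if cqlsh --cqlversion "3.4.0" -u cassandra_superuser -p my_password \
       -e "LIST ROLES;" "$host" 9042 > /dev/null 2>&1; then
    echo "$host: auth OK"
  else
    echo "$host: FAILED (check cassandra.yaml and whether the node was actually restarted)"
  fi
done
```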

Thank you so much for the help !

Regards,

Leo


On Thu, Mar 14, 2019 at 1:38 PM Sam Tunnicliffe  wrote:

> Hi Leo
>
> my guess would be that your configuration is not consistent across all
> nodes in the cluster. The responses you’re seeing are totally indicative of
> being connected to a node where PasswordAuthenticator is not enabled in
> cassandra.yaml.
>
> Thanks,
> Sam
>
> On 14 Mar 2019, at 10:56, Léo FERLIN SUTTON 
> wrote:
>
> Hello !
>
> Recently I have noticed some clients are having errors almost every time
> they try to contact my Cassandra cluster.
>
> The error messages vary but there is one constant : *It's not constant* !
> Let me show you :
>
> From the client host :
>
> `cqlsh  --cqlversion "3.4.0" -u cassandra_superuser -p my_password
> cassandra_address 9042`
>
> The CL commands will fail half of the time :
>
> ```
> cassandra_vault_superuser@cqlsh> CREATE ROLE leo333 WITH PASSWORD =
> 'leo4' AND LOGIN=TRUE;
> InvalidRequest: Error from server: code=2200 [Invalid query]
> message="org.apache.cassandra.auth.CassandraRoleManager doesn't support
> PASSWORD"
> cassandra_vault_superuser@cqlsh> CREATE ROLE leo333 WITH PASSWORD =
> 'leo4' AND LOGIN=TRUE;
> ```
>
> Same with grants :
> ```
> cassandra_vault_superuser@cqlsh> GRANT read_write_role TO leo333;
> Unauthorized: Error from server: code=2100 [Unauthorized] message="You
> have to be logged in and not anonymous to perform this request"
> cassandra_vault_superuser@cqlsh> GRANT read_write_role TO leo333;
> ```
>
> Same with `list roles` :
> ```
> cassandra_vault_superuser@cqlsh> list roles;
>
>  role | super | login
> | options
>
> --+---+---+-
> cassandra |  True |  True
> |{}
> [...]
>
> cassandra_vault_superuser@cqlsh> list roles;
> Unauthorized: Error from server: code=2100 [Unauthorized] message="You
> have to be logged in and not anonymous to perform this request"
> ```
>
> My Cassandra  (3.0.18) configuration seems correct :
> ```
> authenticator: PasswordAuthenticator
> authorizer: CassandraAuthorizer
> role_manager: CassandraRoleManager
> ```
>
> The system_auth schema seems correct as well :
> `CREATE KEYSPACE system_auth WITH replication = {'class':
> 'NetworkTopologyStrategy', 'my_dc': '3'}  AND durable_writes = true;`
>
>
> I am only having those errors when :
>
>   * I am on a non local client.
>   * Via `cqlsh`
>   * Or via the vaultproject client (
> https://www.vaultproject.io/docs/secrets/databases/cassandra.html) (1
> error occurred: You have to be logged in and not anonymous to perform this
> request)
>
> If I am using cqlsh (with authentication) from a Cassandra node, it
> works 100% of the time.
>
> Any ideas about what might be going wrong?
>
> Regards,
>
> Leo
>
>
>


Re: Default TTL on CF

2019-03-14 Thread Jeff Jirsa
SSTableReader and CQLSSTableWriter if you’re comfortable with Java
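
A rough sketch of what that could look like (assumes cassandra-all on the
classpath; the schema, paths, and TTL are made up, and a real job would read
the old SSTables and stream the output back with sstableloader):

```java
import org.apache.cassandra.io.sstable.CQLSSTableWriter;

public class TtlRewrite {
    public static void main(String[] args) throws Exception {
        String schema = "CREATE TABLE ks.events (pk int PRIMARY KEY, payload text)";
        String insert = "INSERT INTO ks.events (pk, payload) VALUES (?, ?) USING TTL ?";

        // writes brand-new SSTables into the given directory
        CQLSSTableWriter writer = CQLSSTableWriter.builder()
                .inDirectory("/tmp/ks/events")
                .forTable(schema)
                .using(insert)
                .build();

        // each row read from the old data gets re-written with the desired TTL
        writer.addRow(1, "some payload", 7776000);
        writer.close();
    }
}
```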


-- 
Jeff Jirsa


> On Mar 14, 2019, at 1:28 PM, Nick Hatfield  wrote:
> 
> Bummer but, reasonable. Any cool tricks I could use to make that process
> easier? I have many TB of data on a live cluster and was hoping to
> starting to clean up the earlier bad habits of data housekeeping
> 
>> On 3/14/19, 9:24 AM, "Jeff Jirsa"  wrote:
>> 
>> It does not impact existing data
>> 
>> The data gets an expiration time stamp when you write it. Changing the
>> default only impacts newly written data
>> 
>> If you need to change the expiration time on existing data, you must
>> update it
>> 
>> 
>> -- 
>> Jeff Jirsa
>> 
>> 
>>> On Mar 14, 2019, at 1:16 PM, Nick Hatfield 
>>> wrote:
>>> 
>>> Hello,
>>> 
>>> Can anyone tell me if setting a default TTL will affect existing data?
>>> I would like to enable a default TTL and have cassandra add that TTL to
>>> any rows that don't currently have a TTL set.
>>> 
>>> Thanks,
>> 
>> 
>> 
> 
> 
> 
> 




Re: Default TTL on CF

2019-03-14 Thread Nick Hatfield
Bummer but, reasonable. Any cool tricks I could use to make that process
easier? I have many TB of data on a live cluster and was hoping to
start cleaning up the earlier bad habits of data housekeeping

On 3/14/19, 9:24 AM, "Jeff Jirsa"  wrote:

>It does not impact existing data
>
>The data gets an expiration time stamp when you write it. Changing the
>default only impacts newly written data
>
>If you need to change the expiration time on existing data, you must
>update it
>
>
>-- 
>Jeff Jirsa
>
>
>> On Mar 14, 2019, at 1:16 PM, Nick Hatfield 
>>wrote:
>> 
>> Hello,
>> 
>> Can anyone tell me if setting a default TTL will affect existing data?
>>I would like to enable a default TTL and have cassandra add that TTL to
>>any rows that don't currently have a TTL set.
>> 
>> Thanks,
>
>
>






Re: Default TTL on CF

2019-03-14 Thread Jeff Jirsa
It does not impact existing data

The data gets an expiration time stamp when you write it. Changing the default 
only impacts newly written data

If you need to change the expiration time on existing data, you must update it
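
Concretely (a sketch with a made-up table; note that CQL updates cannot
reference existing values, so re-writing old rows means reading them back
first):

```
-- affects only data written after this point
ALTER TABLE ks.events WITH default_time_to_live = 7776000;  -- 90 days

-- existing rows only expire if rewritten with an explicit TTL,
-- using values the client has already read back
UPDATE ks.events USING TTL 7776000
SET payload = 'value-read-back-by-client' WHERE pk = 1;
```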


-- 
Jeff Jirsa


> On Mar 14, 2019, at 1:16 PM, Nick Hatfield  wrote:
> 
> Hello,
> 
> Can anyone tell me if setting a default TTL will affect existing data? I 
> would like to enable a default TTL and have cassandra add that TTL to any 
> rows that don’t currently have a TTL set. 
> 
> Thanks,




Default TTL on CF

2019-03-14 Thread Nick Hatfield
Hello,

Can anyone tell me if setting a default TTL will affect existing data? I would 
like to enable a default TTL and have cassandra add that TTL to any rows that 
don’t currently have a TTL set.

Thanks,


Re: Audit in C*

2019-03-14 Thread Nitan Kainth
Good to know Bobbie 


Regards,
Nitan
Cell: 510 449 9629

> On Mar 13, 2019, at 3:21 PM, Bobbie Haynes  wrote:
> 
> Yes, we are using it and it has been very helpful to us so far!
> 
>> On Wed, Mar 13, 2019 at 11:38 AM Rahul Singh  
>> wrote:
>> Which version are you referring to?
>> 
>>> On Wed, Mar 13, 2019 at 10:28 AM Nitan Kainth  wrote:
>>> Hi,
>>> 
>>> Anybody have used auditing to find out failed login attempts, or 
>>> unauthorized access tries.
>>> 
>>> I found ecAudit by Ericsson, is it free to use? Has anybody tried it?
>>> 
>>> Ref: https://github.com/Ericsson/ecaudit


Re: Audit in C*

2019-03-14 Thread Nitan Kainth
3.11


Regards,
Nitan
Cell: 510 449 9629

> On Mar 14, 2019, at 3:18 AM, Per Otterström  
> wrote:
> 
> With ecAudit you can get audit records for login attempts and queries on 
> selected ks/tables. Currently there is no way to get audit records for 
> rejected attempts/queries _only_, but that’s an interesting feature for 
> future versions.
>  
> The plug-in is free to use under the Apache 2.0 license and comes pre-built 
> for Cassandra 3.0.x and Cassandra 3.11.x.
>  
> /pelle
>  
> From: Bobbie Haynes  
> Sent: den 13 mars 2019 20:21
> To: user@cassandra.apache.org
> Subject: Re: Audit in C*
>  
> Yes, we are using it and it has been very helpful to us so far!
>  
> On Wed, Mar 13, 2019 at 11:38 AM Rahul Singh  
> wrote:
> Which version are you referring to?
>  
> On Wed, Mar 13, 2019 at 10:28 AM Nitan Kainth  wrote:
> Hi,
>  
> Anybody have used auditing to find out failed login attempts, or unauthorized 
> access tries.
>  
> I found ecAudit by Ericsson, is it free to use? Has anybody tried it?
>  
> Ref: https://github.com/Ericsson/ecaudit


Re: Audit in C*

2019-03-14 Thread Nitan Kainth
Thank you Pelle


Regards,
Nitan
Cell: 510 449 9629

> On Mar 14, 2019, at 3:18 AM, Per Otterström  
> wrote:
> 
> With ecAudit you can get audit records for login attempts and queries on 
> selected ks/tables. Currently there is no way to get audit records for 
> rejected attempts/queries _only_, but that’s an interesting feature for 
> future versions.
>  
> The plug-in is free to use under the Apache 2.0 license and comes pre-built 
> for Cassandra 3.0.x and Cassandra 3.11.x.
>  
> /pelle
>  
> From: Bobbie Haynes  
> Sent: den 13 mars 2019 20:21
> To: user@cassandra.apache.org
> Subject: Re: Audit in C*
>  
> Yes, we are using it and it has been very helpful to us so far!
>  
> On Wed, Mar 13, 2019 at 11:38 AM Rahul Singh  
> wrote:
> Which version are you referring to?
>  
> On Wed, Mar 13, 2019 at 10:28 AM Nitan Kainth  wrote:
> Hi,
>  
> Anybody have used auditing to find out failed login attempts, or unauthorized 
> access tries.
>  
> I found ecAudit by Ericsson, is it free to use? Has anybody tried it?
>  
> Ref: https://github.com/Ericsson/ecaudit


Re: Query failure

2019-03-14 Thread Sam Tunnicliffe
Hi Leo

my guess would be that your configuration is not consistent across all nodes in 
the cluster. The responses you’re seeing are totally indicative of being 
connected to a node where PasswordAuthenticator is not enabled in 
cassandra.yaml. 

Thanks,
Sam

> On 14 Mar 2019, at 10:56, Léo FERLIN SUTTON  
> wrote:
> 
> Hello !
> 
> Recently I have noticed some clients are having errors almost every time they 
> try to contact my Cassandra cluster.
> 
> The error messages vary but there is one constant : It's not constant ! Let 
> me show you : 
> 
> From the client host : 
> 
> `cqlsh  --cqlversion "3.4.0" -u cassandra_superuser -p my_password 
> cassandra_address 9042`
> 
> The CL commands will fail half of the time :
> 
> ```
> cassandra_vault_superuser@cqlsh> CREATE ROLE leo333 WITH PASSWORD = 'leo4' 
> AND LOGIN=TRUE;
> InvalidRequest: Error from server: code=2200 [Invalid query] 
> message="org.apache.cassandra.auth.CassandraRoleManager doesn't support 
> PASSWORD"
> cassandra_vault_superuser@cqlsh> CREATE ROLE leo333 WITH PASSWORD = 'leo4' 
> AND LOGIN=TRUE;
> ```
> 
> Same with grants : 
> ```
> cassandra_vault_superuser@cqlsh> GRANT read_write_role TO leo333;
> Unauthorized: Error from server: code=2100 [Unauthorized] message="You have 
> to be logged in and not anonymous to perform this request"
> cassandra_vault_superuser@cqlsh> GRANT read_write_role TO leo333;
> ```
> 
> Same with `list roles` : 
> ```
> cassandra_vault_superuser@cqlsh> list roles;
> 
>  role | super | login | 
> options
> --+---+---+-
> cassandra |  True |  True |   
>  {}
> [...]
> 
> cassandra_vault_superuser@cqlsh> list roles;
> Unauthorized: Error from server: code=2100 [Unauthorized] message="You have 
> to be logged in and not anonymous to perform this request"
> ```
> 
> My Cassandra  (3.0.18) configuration seems correct : 
> ```
> authenticator: PasswordAuthenticator
> authorizer: CassandraAuthorizer
> role_manager: CassandraRoleManager
> ```
> 
> The system_auth schema seems correct as well : 
> `CREATE KEYSPACE system_auth WITH replication = {'class': 
> 'NetworkTopologyStrategy', 'my_dc': '3'}  AND durable_writes = true;`
> 
> 
> I am only having those errors when : 
> 
>   * I am on a non local client. 
>   * Via `cqlsh`
>   * Or via the vaultproject client 
> (https://www.vaultproject.io/docs/secrets/databases/cassandra.html 
> ) (1 error 
> occurred: You have to be logged in and not anonymous to perform this request)
> 
> If I am using cqlsh (with authentication) from a Cassandra node, it 
> works 100% of the time.
> 
> Any ideas about what might be going wrong?
> 
> Regards,
> 
> Leo
> 



Re: Cluster size "limit"

2019-03-14 Thread Jeff Jirsa
So gossip behaves reasonably well up to around 1000 hosts per cluster.

Repairs can get hard to schedule with ~256 vnodes and large numbers of
nodes. It's do-able, it just requires a bit of extra work. Personally, I
wouldn't run over 60 hosts with 256 vnodes, but I know from JIRA that some
people go quite a bit above that.

Going from 100 -> 150 is probably not going to be a deal breaker. I expect
you'll see quite a bit of compaction as you bootstrap/rebuild the new
hosts, but that's probably true for you now.
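
A back-of-envelope way to see why repair scheduling gets harder with vnodes:
the ring has roughly nodes * tokens-per-node ranges, and a full repair has to
walk every one of them. A sketch (the linear model is a simplification):

```python
def ring_ranges(nodes, vnodes_per_node):
    """Approximate number of token ranges in the ring; a full
    repair has to cover each of these ranges."""
    return nodes * vnodes_per_node

print(ring_ranges(100, 256))  # 25600 ranges for the current cluster
print(ring_ranges(150, 256))  # 38400 after adding the third datacenter
```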





On Wed, Mar 13, 2019 at 5:27 PM Ahmed Eljami  wrote:

> Yes, 256 vnodes
>
> Le mer. 13 mars 2019 à 17:31, Jeff Jirsa  a écrit :
>
>> Do you use vnodes? How many vnodes per machine?
>>
>> --
>> Jeff Jirsa
>>
>>
>> On Mar 13, 2019, at 3:58 PM, Ahmed Eljami  wrote:
>>
>> Hi,
>>
>> We are planning to add a third datacenter to our cluster (already has 2
>> datacenter, every datcenter has 50 nodes, so 100 nodes in total).
>>
>> My fear is that an important number of nodes per cluster (> 100) could
>> cause a lot of problems like gossip duration, maintenance (repair...)...
>>
>> I know that it depends on use cases, volume of data and many other
>> thing, but I would like that you share your  experiences with that.
>>
>> Thx
>>
>>
>>
>>


Inconsistent results after restore with Cassandra 3.11.1

2019-03-14 Thread sandeep nethi
Hello,

Does anyone experience inconsistent results after restoring Cassandra
3.11.1 with the refresh command? Was there a bug in this version of
Cassandra?

Thanks in advance.

Regards,
Sandeep


Query failure

2019-03-14 Thread Léo FERLIN SUTTON
Hello !

Recently I have noticed some clients are having errors almost every time
they try to contact my Cassandra cluster.

The error messages vary but there is one constant : *It's not constant* !
Let me show you :

From the client host :

`cqlsh  --cqlversion "3.4.0" -u cassandra_superuser -p my_password
cassandra_address 9042`

The CL commands will fail half of the time :

```
cassandra_vault_superuser@cqlsh> CREATE ROLE leo333 WITH PASSWORD = 'leo4'
AND LOGIN=TRUE;
InvalidRequest: Error from server: code=2200 [Invalid query]
message="org.apache.cassandra.auth.CassandraRoleManager doesn't support
PASSWORD"
cassandra_vault_superuser@cqlsh> CREATE ROLE leo333 WITH PASSWORD = 'leo4'
AND LOGIN=TRUE;
```

Same with grants :
```
cassandra_vault_superuser@cqlsh> GRANT read_write_role TO leo333;
Unauthorized: Error from server: code=2100 [Unauthorized] message="You have
to be logged in and not anonymous to perform this request"
cassandra_vault_superuser@cqlsh> GRANT read_write_role TO leo333;
```

Same with `list roles` :
```
cassandra_vault_superuser@cqlsh> list roles;

 role | super | login |
options
--+---+---+-
cassandra |  True |  True
|{}
[...]

cassandra_vault_superuser@cqlsh> list roles;
Unauthorized: Error from server: code=2100 [Unauthorized] message="You have
to be logged in and not anonymous to perform this request"
```

My Cassandra  (3.0.18) configuration seems correct :
```
authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer
role_manager: CassandraRoleManager
```

The system_auth schema seems correct as well :
`CREATE KEYSPACE system_auth WITH replication = {'class':
'NetworkTopologyStrategy', 'my_dc': '3'}  AND durable_writes = true;`


I am only having those errors when :

  * I am on a non local client.
  * Via `cqlsh`
  * Or via the vaultproject client (
https://www.vaultproject.io/docs/secrets/databases/cassandra.html) (1 error
occurred: You have to be logged in and not anonymous to perform this
request)

If I am using cqlsh (with authentication) from a Cassandra node, it
works 100% of the time.

Any ideas about what might be going wrong?

Regards,

Leo


Re: [EXTERNAL] Re: Cluster size "limit"

2019-03-14 Thread Ahmed Eljami
So fewer vnodes allow more nodes, I understand.

But it's still hard to implement on an existing cluster with more than 10
keyspaces with different RFs...


RE: Audit in C*

2019-03-14 Thread Per Otterström
With ecAudit you can get audit records for login attempts and queries on 
selected ks/tables. Currently there is no way to get audit records for rejected 
attempts/queries _only_, but that’s an interesting feature for future versions.

The plug-in is free to use under the Apache 2.0 license and comes pre-built for 
Cassandra 3.0.x and Cassandra 3.11.x.

/pelle
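
For reference, wiring the plug-in in is roughly as follows (class names as I
recall them from the ecAudit README; verify against the project docs for your
Cassandra version). Drop the ecAudit jar into Cassandra's lib directory, then:

```
# cassandra.yaml
authenticator: com.ericsson.bss.cassandra.ecaudit.auth.AuditPasswordAuthenticator
role_manager: com.ericsson.bss.cassandra.ecaudit.auth.AuditRoleManager
authorizer: CassandraAuthorizer

# cassandra-env.sh
JVM_OPTS="$JVM_OPTS -Dcassandra.custom_query_handler_class=com.ericsson.bss.cassandra.ecaudit.handler.AuditQueryHandler"
```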

From: Bobbie Haynes 
Sent: den 13 mars 2019 20:21
To: user@cassandra.apache.org
Subject: Re: Audit in C*

Yes, we are using it and it has been very helpful to us so far!

On Wed, Mar 13, 2019 at 11:38 AM Rahul Singh 
mailto:rahul.xavier.si...@gmail.com>> wrote:
Which version are you referring to?

On Wed, Mar 13, 2019 at 10:28 AM Nitan Kainth 
mailto:nitankai...@gmail.com>> wrote:
Hi,

Anybody have used auditing to find out failed login attempts, or unauthorized 
access tries.

I found ecAudit by Ericsson, is it free to use? Has anybody tried it?

Ref: https://github.com/Ericsson/ecaudit