[jira] [Commented] (CASSANDRA-18105) TRUNCATED data come back after a restart or upgrade

2022-12-08 Thread Ke Han (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17645132#comment-17645132
 ] 

Ke Han commented on CASSANDRA-18105:


[~maxwellguo] Could you try 3.0.28 or 2.2.19? I just reproduced it on those 
two versions.

> TRUNCATED data come back after a restart or upgrade
> ---
>
> Key: CASSANDRA-18105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18105
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh, Tool/nodetool
>Reporter: Ke Han
>Priority: Normal
>
> When we use the TRUNCATE command to delete all data in the table, the deleted 
> data comes back after a node restart or upgrade. This problem happens on all 
> the latest releases (2.2.19, 3.0.28, 3.11.14 and 4.0.7).
> h2. Steps to reproduce
> Start up a single node (the latest release: 2.2.19, 3.0.28, 3.11.14 or 
> 4.0.7) using the default configuration and execute the following cqlsh 
> commands.
> {code:java}
> CREATE KEYSPACE IF NOT EXISTS ks WITH REPLICATION = { 'class' : 
> 'SimpleStrategy', 'replication_factor' : 1 };
> CREATE TABLE ks.tb (c3 TEXT, c4 TEXT, c2 INT, c1 TEXT, PRIMARY KEY (c1, c2, c3));
> INSERT INTO ks.tb (c3, c1, c2) VALUES ('val1', 'val2', 1);
> CREATE INDEX IF NOT EXISTS tb ON ks.tb (c3);
> TRUNCATE TABLE ks.tb;
> DROP INDEX IF EXISTS ks.tb; {code}
> Execute a read command:
> {code:java}
> cqlsh> SELECT c2 FROM ks.tb; 
> c2
> 
> (0 rows) {code}
> Then flush the node with bin/nodetool flush, shut down the node and 
> restart/upgrade it.
> When the node has started, perform the same read; the deleted data comes 
> back again.
> {code:java}
> cqlsh> SELECT c2 FROM ks.tb; 
> c2
> 
>   1
> (1 rows) {code}
>  
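
A quick way to confirm the repro quoted above without eyeballing cqlsh output is 
a tiny client that asserts the table stays empty across the restart. This is 
only a sketch: it assumes the DataStax Java driver 4.x on the classpath, a 
single stock node on 127.0.0.1:9042, and the default "datacenter1" data center 
name.

{code:java}
import com.datastax.oss.driver.api.core.CqlSession;

public class TruncateRegressionCheck
{
    public static void main(String[] args)
    {
        // Connects to 127.0.0.1:9042 by default; adjust for a non-local node.
        try (CqlSession session = CqlSession.builder()
                .withLocalDatacenter("datacenter1") // stock single-node DC name
                .build())
        {
            int rows = session.execute("SELECT c2 FROM ks.tb").all().size();
            System.out.println(rows == 0
                    ? "OK: ks.tb is still empty after the restart"
                    : "BUG: " + rows + " truncated row(s) came back");
        }
    }
}
{code}

Run it once before the restart (expect the OK line) and once after; on the 
affected versions the second run prints the BUG line.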



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra-website] branch asf-site updated (136bdfad -> c1542aea)

2022-12-08 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/cassandra-website.git


 discard 136bdfad generate docs for 6f603a2c
 add 091d00dd Added C* Day China to Events pages
 add c1542aea generate docs for 091d00dd

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (136bdfad)
\
 N -- N -- N   refs/heads/asf-site (c1542aea)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .../events/20221222-cday-china-1024x512.png| Bin 0 -> 531284 bytes
 .../_images/events/20221222-cday-china-300x300.png | Bin 0 -> 138422 bytes
 content/_/events.html  |  27 +
 .../20221222-cday-china.html}  | 128 ++---
 .../cassandra/configuration/cass_yaml_file.html|  34 ++
 .../4.2/cassandra/tools/nodetool/getsstables.html  |   7 +-
 .../cassandra/configuration/cass_yaml_file.html|  34 ++
 .../cassandra/tools/nodetool/getsstables.html  |   7 +-
 content/search-index.js|   2 +-
 .../images/events/20221222-cday-china-1024x512.png | Bin 0 -> 531284 bytes
 .../images/events/20221222-cday-china-300x300.png  | Bin 0 -> 138422 bytes
 site-content/source/modules/ROOT/pages/events.adoc |  26 +
 .../ROOT/pages/events/20221222-cday-china.adoc |  87 ++
 site-ui/build/ui-bundle.zip| Bin 4970139 -> 4970898 
bytes
 .../layouts/{single-post.hbs => event-post.hbs}|   2 +-
 15 files changed, 335 insertions(+), 19 deletions(-)
 create mode 100644 content/_/_images/events/20221222-cday-china-1024x512.png
 create mode 100644 content/_/_images/events/20221222-cday-china-300x300.png
 copy content/_/{blog/Cassandra-Days-Asia-2022.html => 
events/20221222-cday-china.html} (65%)
 create mode 100644 
site-content/source/modules/ROOT/images/events/20221222-cday-china-1024x512.png
 create mode 100644 
site-content/source/modules/ROOT/images/events/20221222-cday-china-300x300.png
 create mode 100644 
site-content/source/modules/ROOT/pages/events/20221222-cday-china.adoc
 copy site-ui/src/layouts/{single-post.hbs => event-post.hbs} (93%)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18105) TRUNCATED data come back after a restart or upgrade

2022-12-08 Thread maxwellguo (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17645112#comment-17645112
 ] 

maxwellguo commented on CASSANDRA-18105:


I tried 4.0.7, but failed. I cannot reproduce it there.

> TRUNCATED data come back after a restart or upgrade
> ---
>
> Key: CASSANDRA-18105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18105
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh, Tool/nodetool
>Reporter: Ke Han
>Priority: Normal
>
> When we use the TRUNCATE command to delete all data in the table, the deleted 
> data comes back after a node restart or upgrade. This problem happens on all 
> the latest releases (2.2.19, 3.0.28, 3.11.14 and 4.0.7).
> h2. Steps to reproduce
> Start up a single node (the latest release: 2.2.19, 3.0.28, 3.11.14 or 
> 4.0.7) using the default configuration and execute the following cqlsh 
> commands.
> {code:java}
> CREATE KEYSPACE IF NOT EXISTS ks WITH REPLICATION = { 'class' : 
> 'SimpleStrategy', 'replication_factor' : 1 };
> CREATE TABLE ks.tb (c3 TEXT, c4 TEXT, c2 INT, c1 TEXT, PRIMARY KEY (c1, c2, c3));
> INSERT INTO ks.tb (c3, c1, c2) VALUES ('val1', 'val2', 1);
> CREATE INDEX IF NOT EXISTS tb ON ks.tb (c3);
> TRUNCATE TABLE ks.tb;
> DROP INDEX IF EXISTS ks.tb; {code}
> Execute a read command:
> {code:java}
> cqlsh> SELECT c2 FROM ks.tb; 
> c2
> 
> (0 rows) {code}
> Then flush the node with bin/nodetool flush, shut down the node and 
> restart/upgrade it.
> When the node has started, perform the same read; the deleted data comes 
> back again.
> {code:java}
> cqlsh> SELECT c2 FROM ks.tb; 
> c2
> 
>   1
> (1 rows) {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18105) TRUNCATED data come back after a restart or upgrade

2022-12-08 Thread Ke Han (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17645096#comment-17645096
 ] 

Ke Han commented on CASSANDRA-18105:


[~maedhroz] Thanks for the reply. Yes, the index is necessary to trigger it. 

> TRUNCATED data come back after a restart or upgrade
> ---
>
> Key: CASSANDRA-18105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18105
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh, Tool/nodetool
>Reporter: Ke Han
>Priority: Normal
>
> When we use the TRUNCATE command to delete all data in the table, the deleted 
> data comes back after a node restart or upgrade. This problem happens on all 
> the latest releases (2.2.19, 3.0.28, 3.11.14 and 4.0.7).
> h2. Steps to reproduce
> Start up a single node (the latest release: 2.2.19, 3.0.28, 3.11.14 or 
> 4.0.7) using the default configuration and execute the following cqlsh 
> commands.
> {code:java}
> CREATE KEYSPACE IF NOT EXISTS ks WITH REPLICATION = { 'class' : 
> 'SimpleStrategy', 'replication_factor' : 1 };
> CREATE TABLE ks.tb (c3 TEXT, c4 TEXT, c2 INT, c1 TEXT, PRIMARY KEY (c1, c2, c3));
> INSERT INTO ks.tb (c3, c1, c2) VALUES ('val1', 'val2', 1);
> CREATE INDEX IF NOT EXISTS tb ON ks.tb (c3);
> TRUNCATE TABLE ks.tb;
> DROP INDEX IF EXISTS ks.tb; {code}
> Execute a read command:
> {code:java}
> cqlsh> SELECT c2 FROM ks.tb; 
> c2
> 
> (0 rows) {code}
> Then flush the node with bin/nodetool flush, shut down the node and 
> restart/upgrade it.
> When the node has started, perform the same read; the deleted data comes 
> back again.
> {code:java}
> cqlsh> SELECT c2 FROM ks.tb; 
> c2
> 
>   1
> (1 rows) {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18105) TRUNCATED data come back after a restart or upgrade

2022-12-08 Thread Caleb Rackliffe (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17645065#comment-17645065
 ] 

Caleb Rackliffe commented on CASSANDRA-18105:
-

Does this only manifest when there is an index present?

> TRUNCATED data come back after a restart or upgrade
> ---
>
> Key: CASSANDRA-18105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18105
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh, Tool/nodetool
>Reporter: Ke Han
>Priority: Normal
>
> When we use the TRUNCATE command to delete all data in the table, the deleted 
> data comes back after a node restart or upgrade. This problem happens on all 
> the latest releases (2.2.19, 3.0.28, 3.11.14 and 4.0.7).
> h2. Steps to reproduce
> Start up a single node (the latest release: 2.2.19, 3.0.28, 3.11.14 or 
> 4.0.7) using the default configuration and execute the following cqlsh 
> commands.
> {code:java}
> CREATE KEYSPACE IF NOT EXISTS ks WITH REPLICATION = { 'class' : 
> 'SimpleStrategy', 'replication_factor' : 1 };
> CREATE TABLE ks.tb (c3 TEXT, c4 TEXT, c2 INT, c1 TEXT, PRIMARY KEY (c1, c2, c3));
> INSERT INTO ks.tb (c3, c1, c2) VALUES ('val1', 'val2', 1);
> CREATE INDEX IF NOT EXISTS tb ON ks.tb (c3);
> TRUNCATE TABLE ks.tb;
> DROP INDEX IF EXISTS ks.tb; {code}
> Execute a read command:
> {code:java}
> cqlsh> SELECT c2 FROM ks.tb; 
> c2
> 
> (0 rows) {code}
> Then flush the node with bin/nodetool flush, shut down the node and 
> restart/upgrade it.
> When the node has started, perform the same read; the deleted data comes 
> back again.
> {code:java}
> cqlsh> SELECT c2 FROM ks.tb; 
> c2
> 
>   1
> (1 rows) {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-18105) TRUNCATED data come back after a restart or upgrade

2022-12-08 Thread Ke Han (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ke Han updated CASSANDRA-18105:
---
Description: 
When we use the TRUNCATE command to delete all data in the table, the deleted 
data come back after a node restart or upgrade. This problem happens at all the 
latest releases (2.2.19, 3.0.28, 3.11.14 or 4.0.7)
h2. Steps to reproduce

Start up a single node (the latest release: 2.2.19, 3.0.28, 3.11.14 or 4.0.7). 
Using the default configuration and execute the following cqlsh commands.
{code:java}
CREATE KEYSPACE IF NOT EXISTS ks WITH REPLICATION = { 'class' : 
'SimpleStrategy', 'replication_factor' : 1 };
CREATE TABLE  ks.tb (c3 TEXT,c4 TEXT,c2 INT,c1 TEXT, PRIMARY KEY (c1, c2, c3 ));
INSERT INTO ks.tb (c3, c1, c2) VALUES ('val1','val2',1);
CREATE INDEX IF NOT EXISTS tb ON ks.tb ( c3);
TRUNCATE TABLE ks.tb;
DROP INDEX IF EXISTS ks.tb; {code}
Execute a read command
{code:java}
cqlsh> SELECT c2 FROM ks.tb; 

c2


(0 rows) {code}
Then, we flush the node by bin/nodetool flush, shut down the node and 
restart/upgrade the node.

When the node has started, perform the same read, and the deleted data comes 
back again.
{code:java}
cqlsh> SELECT c2 FROM ks.tb; 

c2

  1

(1 rows) {code}
 

  was:
When we use the TRUNCATE command to delete all data in the table, the deleted 
data comes back after a node restart or upgrade. This problem happens on all 
the latest releases (2.2.19, 3.0.28, 3.11.14 and 4.0.7).
h2. Steps to reproduce

Start up a single node (the latest release: 2.2.19, 3.0.28, 3.11.14 or 4.0.7) 
using the default configuration and execute the following cqlsh commands.
{code:java}
CREATE KEYSPACE IF NOT EXISTS ks WITH REPLICATION = { 'class' : 
'SimpleStrategy', 'replication_factor' : 1 };
CREATE TABLE ks.tb (c3 TEXT, c4 TEXT, c2 INT, c1 TEXT, PRIMARY KEY (c1, c2, c3));
INSERT INTO ks.tb (c3, c1, c2) VALUES ('val1', 'val2', 1);
CREATE INDEX IF NOT EXISTS tb ON ks.tb (c3);
TRUNCATE TABLE ks.tb;
DROP INDEX IF EXISTS ks.tb; {code}
Execute a read command:
{code:java}
cqlsh> SELECT c2 FROM ks.tb; 

c2


(0 rows) {code}
Then flush the node with bin/nodetool flush, shut down the node and 
restart it.

When the node has started, perform the same read; the deleted data comes 
back again.
{code:java}
cqlsh> SELECT c2 FROM ks.tb; 

c2

  1

(1 rows) {code}
 


> TRUNCATED data come back after a restart or upgrade
> ---
>
> Key: CASSANDRA-18105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18105
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh, Tool/nodetool
>Reporter: Ke Han
>Priority: Normal
>
> When we use the TRUNCATE command to delete all data in the table, the deleted 
> data comes back after a node restart or upgrade. This problem happens on all 
> the latest releases (2.2.19, 3.0.28, 3.11.14 and 4.0.7).
> h2. Steps to reproduce
> Start up a single node (the latest release: 2.2.19, 3.0.28, 3.11.14 or 
> 4.0.7) using the default configuration and execute the following cqlsh 
> commands.
> {code:java}
> CREATE KEYSPACE IF NOT EXISTS ks WITH REPLICATION = { 'class' : 
> 'SimpleStrategy', 'replication_factor' : 1 };
> CREATE TABLE ks.tb (c3 TEXT, c4 TEXT, c2 INT, c1 TEXT, PRIMARY KEY (c1, c2, c3));
> INSERT INTO ks.tb (c3, c1, c2) VALUES ('val1', 'val2', 1);
> CREATE INDEX IF NOT EXISTS tb ON ks.tb (c3);
> TRUNCATE TABLE ks.tb;
> DROP INDEX IF EXISTS ks.tb; {code}
> Execute a read command:
> {code:java}
> cqlsh> SELECT c2 FROM ks.tb; 
> c2
> 
> (0 rows) {code}
> Then flush the node with bin/nodetool flush, shut down the node and 
> restart/upgrade it.
> When the node has started, perform the same read; the deleted data comes 
> back again.
> {code:java}
> cqlsh> SELECT c2 FROM ks.tb; 
> c2
> 
>   1
> (1 rows) {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-18105) TRUNCATED data come back after a restart or upgrade

2022-12-08 Thread Ke Han (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ke Han updated CASSANDRA-18105:
---
Description: 
When we use the TRUNCATE command to delete all data in the table, the deleted 
data come back after a node restart or upgrade. This problem happens at all the 
latest releases (2.2.19, 3.0.28, 3.11.14 or 4.0.7)
h2. Steps to reproduce

Start up a single node (the latest release: 2.2.19, 3.0.28, 3.11.14 or 4.0.7). 
Using the default configuration and execute the following cqlsh commands.
{code:java}
CREATE KEYSPACE IF NOT EXISTS ks WITH REPLICATION = { 'class' : 
'SimpleStrategy', 'replication_factor' : 1 };
CREATE TABLE  ks.tb (c3 TEXT,c4 TEXT,c2 INT,c1 TEXT, PRIMARY KEY (c1, c2, c3 ));
INSERT INTO ks.tb (c3, c1, c2) VALUES ('val1','val2',1);
CREATE INDEX IF NOT EXISTS tb ON ks.tb ( c3);
TRUNCATE TABLE ks.tb;
DROP INDEX IF EXISTS ks.tb; {code}
Execute a read command
{code:java}
cqlsh> SELECT c2 FROM ks.tb; 

c2


(0 rows) {code}
Then, we flush the node by bin/nodetool flush, shut down the node and restart 
the node.

When the node has started, perform the same read, and the deleted data comes 
back again.
{code:java}
cqlsh> SELECT c2 FROM ks.tb; 

c2

  1

(1 rows) {code}
 

  was:
When we use the TRUNCATE command to delete all data in the table, the deleted 
data comes back after a node restart. This problem happens on all the latest 
releases (2.2.19, 3.0.28, 3.11.14 and 4.0.7).
h2. Steps to reproduce

Start up a single node (the latest release: 2.2.19, 3.0.28, 3.11.14 or 4.0.7) 
using the default configuration and execute the following cqlsh commands.
{code:java}
CREATE KEYSPACE IF NOT EXISTS ks WITH REPLICATION = { 'class' : 
'SimpleStrategy', 'replication_factor' : 1 };
CREATE TABLE ks.tb (c3 TEXT, c4 TEXT, c2 INT, c1 TEXT, PRIMARY KEY (c1, c2, c3));
INSERT INTO ks.tb (c3, c1, c2) VALUES ('val1', 'val2', 1);
CREATE INDEX IF NOT EXISTS tb ON ks.tb (c3);
TRUNCATE TABLE ks.tb;
DROP INDEX IF EXISTS ks.tb; {code}
Execute a read command:
{code:java}
cqlsh> SELECT c2 FROM ks.tb; 

c2


(0 rows) {code}
Then flush the node with bin/nodetool flush, shut down the node and 
restart it.

When the node has started, perform the same read; the deleted data comes 
back again.
{code:java}
cqlsh> SELECT c2 FROM ks.tb; 

c2

  1

(1 rows) {code}
 


> TRUNCATED data come back after a restart or upgrade
> ---
>
> Key: CASSANDRA-18105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18105
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh, Tool/nodetool
>Reporter: Ke Han
>Priority: Normal
>
> When we use the TRUNCATE command to delete all data in the table, the deleted 
> data comes back after a node restart or upgrade. This problem happens on all 
> the latest releases (2.2.19, 3.0.28, 3.11.14 and 4.0.7).
> h2. Steps to reproduce
> Start up a single node (the latest release: 2.2.19, 3.0.28, 3.11.14 or 
> 4.0.7) using the default configuration and execute the following cqlsh 
> commands.
> {code:java}
> CREATE KEYSPACE IF NOT EXISTS ks WITH REPLICATION = { 'class' : 
> 'SimpleStrategy', 'replication_factor' : 1 };
> CREATE TABLE ks.tb (c3 TEXT, c4 TEXT, c2 INT, c1 TEXT, PRIMARY KEY (c1, c2, c3));
> INSERT INTO ks.tb (c3, c1, c2) VALUES ('val1', 'val2', 1);
> CREATE INDEX IF NOT EXISTS tb ON ks.tb (c3);
> TRUNCATE TABLE ks.tb;
> DROP INDEX IF EXISTS ks.tb; {code}
> Execute a read command:
> {code:java}
> cqlsh> SELECT c2 FROM ks.tb; 
> c2
> 
> (0 rows) {code}
> Then flush the node with bin/nodetool flush, shut down the node and 
> restart it.
> When the node has started, perform the same read; the deleted data comes 
> back again.
> {code:java}
> cqlsh> SELECT c2 FROM ks.tb; 
> c2
> 
>   1
> (1 rows) {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-18105) TRUNCATED data come back after a restart or upgrade

2022-12-08 Thread Ke Han (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ke Han updated CASSANDRA-18105:
---
Summary: TRUNCATED data come back after a restart or upgrade  (was: 
TRUNCATED data come back after a restart)

> TRUNCATED data come back after a restart or upgrade
> ---
>
> Key: CASSANDRA-18105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18105
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh, Tool/nodetool
>Reporter: Ke Han
>Priority: Normal
>
> When we use the TRUNCATE command to delete all data in the table, the deleted 
> data comes back after a node restart. This problem happens on all the latest 
> releases (2.2.19, 3.0.28, 3.11.14 and 4.0.7).
> h2. Steps to reproduce
> Start up a single node (the latest release: 2.2.19, 3.0.28, 3.11.14 or 
> 4.0.7) using the default configuration and execute the following cqlsh 
> commands.
> {code:java}
> CREATE KEYSPACE IF NOT EXISTS ks WITH REPLICATION = { 'class' : 
> 'SimpleStrategy', 'replication_factor' : 1 };
> CREATE TABLE ks.tb (c3 TEXT, c4 TEXT, c2 INT, c1 TEXT, PRIMARY KEY (c1, c2, c3));
> INSERT INTO ks.tb (c3, c1, c2) VALUES ('val1', 'val2', 1);
> CREATE INDEX IF NOT EXISTS tb ON ks.tb (c3);
> TRUNCATE TABLE ks.tb;
> DROP INDEX IF EXISTS ks.tb; {code}
> Execute a read command:
> {code:java}
> cqlsh> SELECT c2 FROM ks.tb; 
> c2
> 
> (0 rows) {code}
> Then flush the node with bin/nodetool flush, shut down the node and 
> restart it.
> When the node has started, perform the same read; the deleted data comes 
> back again.
> {code:java}
> cqlsh> SELECT c2 FROM ks.tb; 
> c2
> 
>   1
> (1 rows) {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-18105) TRUNCATED data come back after a restart

2022-12-08 Thread Ke Han (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ke Han updated CASSANDRA-18105:
---
Description: 
When we use the TRUNCATE command to delete all data in the table, the deleted 
data come back after a node restart. This problem happens at all the latest 
releases (2.2.19, 3.0.28, 3.11.14 or 4.0.7)
h2. Steps to reproduce

Start up a single node (the latest release: 2.2.19, 3.0.28, 3.11.14 or 4.0.7). 
Using the default configuration and execute the following cqlsh commands.
{code:java}
CREATE KEYSPACE IF NOT EXISTS ks WITH REPLICATION = { 'class' : 
'SimpleStrategy', 'replication_factor' : 1 };
CREATE TABLE  ks.tb (c3 TEXT,c4 TEXT,c2 INT,c1 TEXT, PRIMARY KEY (c1, c2, c3 ));
INSERT INTO ks.tb (c3, c1, c2) VALUES ('val1','val2',1);
CREATE INDEX IF NOT EXISTS tb ON ks.tb ( c3);
TRUNCATE TABLE ks.tb;
DROP INDEX IF EXISTS ks.tb; {code}
Execute a read command
{code:java}
cqlsh> SELECT c2 FROM ks.tb; 

c2


(0 rows) {code}
Then, we flush the node by bin/nodetool flush, shut down the node and restart 
the node.

When the node has started, perform the same read, and the deleted data comes 
back again.
{code:java}
cqlsh> SELECT c2 FROM ks.tb; 

c2

  1

(1 rows) {code}
 

  was:
When we use the TRUNCATE command to delete all data in the table, the deleted 
data comes back after a node restart. This problem happens on all the latest 
releases (2.2.19, 3.0.28, 3.11.14 and 4.0.7).
h2. Steps to reproduce

Start up a single node (the latest release: 2.2.19, 3.0.28, 3.11.14 or 4.0.7) 
using the default configuration and execute the following cqlsh commands.
{code:java}
CREATE KEYSPACE IF NOT EXISTS ks WITH REPLICATION = { 'class' : 
'SimpleStrategy', 'replication_factor' : 1 };
CREATE TABLE ks.tb (c3 TEXT, c4 TEXT, c2 INT, c1 TEXT, PRIMARY KEY (c1, c2, c3));
INSERT INTO ks.tb (c3, c1, c2) VALUES ('val1', 'val2', 1);
CREATE INDEX IF NOT EXISTS tb ON ks.tb (c3);
TRUNCATE TABLE ks.tb;
DROP INDEX IF EXISTS ks.tb; {code}
Execute a read command:
{code:java}
cqlsh> SELECT c2 FROM ks.tb; 

c2


(0 rows) {code}
Then flush the node with bin/nodetool flush, shut down the node and 
restart it.

When the node has started, perform the same read; the deleted data comes 
back again.
{code:java}
cqlsh> SELECT c2 FROM ks.tb; 

c2

  1

(1 rows) {code}
 


> TRUNCATED data come back after a restart
> 
>
> Key: CASSANDRA-18105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18105
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh, Tool/nodetool
>Reporter: Ke Han
>Priority: Normal
>
> When we use the TRUNCATE command to delete all data in the table, the deleted 
> data comes back after a node restart. This problem happens on all the latest 
> releases (2.2.19, 3.0.28, 3.11.14 and 4.0.7).
> h2. Steps to reproduce
> Start up a single node (the latest release: 2.2.19, 3.0.28, 3.11.14 or 
> 4.0.7) using the default configuration and execute the following cqlsh 
> commands.
> {code:java}
> CREATE KEYSPACE IF NOT EXISTS ks WITH REPLICATION = { 'class' : 
> 'SimpleStrategy', 'replication_factor' : 1 };
> CREATE TABLE ks.tb (c3 TEXT, c4 TEXT, c2 INT, c1 TEXT, PRIMARY KEY (c1, c2, c3));
> INSERT INTO ks.tb (c3, c1, c2) VALUES ('val1', 'val2', 1);
> CREATE INDEX IF NOT EXISTS tb ON ks.tb (c3);
> TRUNCATE TABLE ks.tb;
> DROP INDEX IF EXISTS ks.tb; {code}
> Execute a read command:
> {code:java}
> cqlsh> SELECT c2 FROM ks.tb; 
> c2
> 
> (0 rows) {code}
> Then flush the node with bin/nodetool flush, shut down the node and 
> restart it.
> When the node has started, perform the same read; the deleted data comes 
> back again.
> {code:java}
> cqlsh> SELECT c2 FROM ks.tb; 
> c2
> 
>   1
> (1 rows) {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17797) All system properties and environment variables should be accessed via the new CassandraRelevantProperties and CassandraRelevantEnv classes

2022-12-08 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17645045#comment-17645045
 ] 

Ekaterina Dimitrova commented on CASSANDRA-17797:
-

Yes, this will mean those will also be visible in the virtual tables.

I guess we need to raise this on the mailing list, per the latest policy 
discussed, unless something else is decided.

[~mmuzaf], do you mind doing this? 

> All system properties and environment variables should be accessed via the 
> new CassandraRelevantProperties and CassandraRelevantEnv classes
> ---
>
> Key: CASSANDRA-17797
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17797
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Ekaterina Dimitrova
>Assignee: Maxim Muzafarov
>Priority: Low
> Fix For: 4.x
>
>
> Follow up ticket for CASSANDRA-15876 - 
> "Always access system properties and environment variables via the new 
> CassandraRelevantProperties and CassandraRelevantEnv classes"
> As part of that ticket we moved to the two new classes only the 
> properties/variables that were listed in the System Properties Virtual Table 
> at the time.
> We have to move the rest of the properties around the code to those classes 
> and start using those classes to access all of them. 
> +Additional information for newcomers:+
> You might want to start by getting acquainted with the 
> CassandraRelevantProperties and CassandraRelevantEnv classes. Also, you might 
> want to check what changes were made and how the first batch was transferred 
> to this new framework as part of 
> [CASSANDRA-15876|https://github.com/apache/cassandra/commit/7694c1d191531ac152db55e83bc0db6864a5441e]
> We are interested in the properties currently accessed through getProperties 
> around the code.
> As part of CASSANDRA-15876, relevant unit tests were added 
> (CassandraRelevantPropertiesTest). To verify the new patch we need to ensure 
> that all Cassandra tests pass, and also to think about a potential update of 
> the mentioned test class.
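
For newcomers, a minimal, self-contained sketch of the access pattern the 
ticket asks for may help. Everything below is illustrative: the enum name, 
constants and method signatures are stand-ins, not the actual 
org.apache.cassandra.config.CassandraRelevantProperties API.

{code:java}
// Sketch only: names and signatures are illustrative, not the real
// CassandraRelevantProperties API.
public enum RelevantPropertySketch
{
    JAVA_IO_TMPDIR("java.io.tmpdir"),
    RING_DELAY("cassandra.ring_delay_ms");

    private final String key;

    RelevantPropertySketch(String key)
    {
        this.key = key;
    }

    public String getString()
    {
        return System.getProperty(key);
    }

    public int getInt(int defaultValue)
    {
        String value = System.getProperty(key);
        return value == null ? defaultValue : Integer.parseInt(value);
    }

    public static void main(String[] args)
    {
        // Before: stringly-typed lookups scattered around the code base.
        String before = System.getProperty("java.io.tmpdir");
        // After: one named constant per property, which also lets a virtual
        // table enumerate every property the code relies on.
        String after = JAVA_IO_TMPDIR.getString();
        System.out.println(before.equals(after));
    }
}
{code}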



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-18105) TRUNCATED data come back after a restart

2022-12-08 Thread Ke Han (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ke Han updated CASSANDRA-18105:
---
Description: 
When we use the TRUNCATE command to delete all data in the table, the deleted 
data come back after a node restart. This problem happens at all the latest 
releases (2.2.19, 3.0.28, 3.11.14 or 4.0.7)
h2. Steps to reproduce

start up a single node (the latest release: 2.2.19, 3.0.28, 3.11.14 or 4.0.7). 
Using the default configuration and execute the following cqlsh commands.
{code:java}
CREATE KEYSPACE IF NOT EXISTS ks WITH REPLICATION = { 'class' : 
'SimpleStrategy', 'replication_factor' : 1 };
CREATE TABLE  ks.tb (c3 TEXT,c4 TEXT,c2 INT,c1 TEXT, PRIMARY KEY (c1, c2, c3 ));
INSERT INTO ks.tb (c3, c1, c2) VALUES ('val1','val2',1);
CREATE INDEX IF NOT EXISTS tb ON ks.tb ( c3);
TRUNCATE TABLE ks.tb;
DROP INDEX IF EXISTS ks.tb; {code}
Execute a read command
{code:java}
cqlsh> SELECT c2 FROM ks.tb; 

c2


(0 rows) {code}
Then, we flush the node by bin/nodetool flush, shut down the node and restart 
the node.

When the node has started, perform the same read, and the deleted data comes 
back again.
{code:java}
cqlsh> SELECT c2 FROM ks.tb; 

c2

  1

(1 rows) {code}
 

  was:
When we use the TRUNCATE command to delete all data in the table, the deleted 
data comes back after a node restart. This problem happens on all the latest 
releases (2.2.19, 3.0.28, 3.11.14 and 4.0.7).
h2. Steps to reproduce

Start up a single node (the latest release: 2.2.19, 3.0.28, 3.11.14 or 4.0.7) 
using the default configuration and execute the following cqlsh commands.
{code:java}
CREATE KEYSPACE IF NOT EXISTS ks WITH REPLICATION = { 'class' : 
'SimpleStrategy', 'replication_factor' : 1 };
CREATE TABLE ks.tb (c3 TEXT, c4 TEXT, c2 INT, c1 TEXT, PRIMARY KEY (c1, c2, c3));
INSERT INTO ks.tb (c3, c1, c2) VALUES ('val1', 'val2', 1);
CREATE INDEX IF NOT EXISTS tb ON ks.tb (c3);
TRUNCATE TABLE ks.tb;
DROP INDEX IF EXISTS ks.tb; {code}
Execute a read command:
{code:java}
cqlsh> SELECT c2 FROM ks.tb; 

c2


(0 rows) {code}
Then flush the node with bin/nodetool flush, shut down the node and 
restart it.

When the node has started, perform the same read; the deleted data comes 
back again.
{code:java}
cqlsh> SELECT c2 FROM ks.tb; 

c2

  1

(1 rows) {code}
 


> TRUNCATED data come back after a restart
> 
>
> Key: CASSANDRA-18105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18105
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh, Tool/nodetool
>Reporter: Ke Han
>Priority: Normal
>
> When we use the TRUNCATE command to delete all data in the table, the deleted 
> data comes back after a node restart. This problem happens on all the latest 
> releases (2.2.19, 3.0.28, 3.11.14 and 4.0.7).
> h2. Steps to reproduce
> Start up a single node (the latest release: 2.2.19, 3.0.28, 3.11.14 or 
> 4.0.7) using the default configuration and execute the following cqlsh 
> commands.
> {code:java}
> CREATE KEYSPACE IF NOT EXISTS ks WITH REPLICATION = { 'class' : 
> 'SimpleStrategy', 'replication_factor' : 1 };
> CREATE TABLE ks.tb (c3 TEXT, c4 TEXT, c2 INT, c1 TEXT, PRIMARY KEY (c1, c2, c3));
> INSERT INTO ks.tb (c3, c1, c2) VALUES ('val1', 'val2', 1);
> CREATE INDEX IF NOT EXISTS tb ON ks.tb (c3);
> TRUNCATE TABLE ks.tb;
> DROP INDEX IF EXISTS ks.tb; {code}
> Execute a read command:
> {code:java}
> cqlsh> SELECT c2 FROM ks.tb; 
> c2
> 
> (0 rows) {code}
> Then flush the node with bin/nodetool flush, shut down the node and 
> restart it.
> When the node has started, perform the same read; the deleted data comes 
> back again.
> {code:java}
> cqlsh> SELECT c2 FROM ks.tb; 
> c2
> 
>   1
> (1 rows) {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-18105) TRUNCATED data come back after a restart

2022-12-08 Thread Ke Han (Jira)
Ke Han created CASSANDRA-18105:
--

 Summary: TRUNCATED data come back after a restart
 Key: CASSANDRA-18105
 URL: https://issues.apache.org/jira/browse/CASSANDRA-18105
 Project: Cassandra
  Issue Type: Bug
  Components: Tool/cqlsh, Tool/nodetool
Reporter: Ke Han


When we use the TRUNCATE command to delete all data in the table, the deleted 
data comes back after a node restart. This problem happens on all the latest 
releases (2.2.19, 3.0.28, 3.11.14 and 4.0.7).
h2. Steps to reproduce

Start up a single node (the latest release: 2.2.19, 3.0.28, 3.11.14 or 4.0.7) 
using the default configuration and execute the following cqlsh commands.
{code:java}
CREATE KEYSPACE IF NOT EXISTS ks WITH REPLICATION = { 'class' : 
'SimpleStrategy', 'replication_factor' : 1 };
CREATE TABLE ks.tb (c3 TEXT, c4 TEXT, c2 INT, c1 TEXT, PRIMARY KEY (c1, c2, c3));
INSERT INTO ks.tb (c3, c1, c2) VALUES ('val1', 'val2', 1);
CREATE INDEX IF NOT EXISTS tb ON ks.tb (c3);
TRUNCATE TABLE ks.tb;
DROP INDEX IF EXISTS ks.tb; {code}
Execute a read command:
{code:java}
cqlsh> SELECT c2 FROM ks.tb; 

c2


(0 rows) {code}
Then flush the node with bin/nodetool flush, shut down the node and 
restart it.

When the node has started, perform the same read; the deleted data comes 
back again.
{code:java}
cqlsh> SELECT c2 FROM ks.tb; 

c2

  1

(1 rows) {code}
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra-website] branch asf-staging updated (ae7239a6 -> c1542aea)

2022-12-08 Thread git-site-role
This is an automated email from the ASF dual-hosted git repository.

git-site-role pushed a change to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/cassandra-website.git


 discard ae7239a6 generate docs for 091d00dd
 new c1542aea generate docs for 091d00dd

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (ae7239a6)
\
 N -- N -- N   refs/heads/asf-staging (c1542aea)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 site-ui/build/ui-bundle.zip | Bin 4970898 -> 4970898 bytes
 1 file changed, 0 insertions(+), 0 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17918) DESCRIBE output does not quote column names using reserved keywords

2022-12-08 Thread Yifan Cai (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644991#comment-17644991
 ] 

Yifan Cai commented on CASSANDRA-17918:
---

[~bernardo.botella], a bunch of tests failed. Can you take a look? (See the 
Circle CI links in my comment.)

> DESCRIBE output does not quote column names using reserved keywords
> ---
>
> Key: CASSANDRA-17918
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17918
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/CQL
>Reporter: Yifan Cai
>Assignee: Bernardo Botella Corbi
>Priority: Normal
> Fix For: 4.0.x, 4.1.x
>
>
> In the DESCRIBE output for UDTs, column names that use reserved keywords are 
> not quoted. The following test reproduces the problem. Reading the code, it 
> looks like such column names are also left unquoted for materialized views, 
> UDFs and user-defined aggregates. 
> The impact of the bug is that the described schema cannot be re-imported, due 
> to the usage of reserved keywords as column names. 
>  
> {code:java}
>     @Test
>     public void testUsingReservedInCreateType() throws Throwable
>     {
>         String type = createType(KEYSPACE_PER_TEST,
>                                  "CREATE TYPE %s (\"token\" text, \"desc\" text);");
>         assertRowsNet(executeDescribeNet(KEYSPACE_PER_TEST, "DESCRIBE TYPE " + type),
>                       row(KEYSPACE_PER_TEST, "type", type,
>                           "CREATE TYPE " + KEYSPACE_PER_TEST + "." + type + " (\n" +
>                           "    \"token\" text,\n" +
>                           "    \"desc\" text\n" +
>                           ");"));
>     } {code}
> +Additional information for newcomers:+
>  * Unit tests for DESCRIBE statements are in {{DescribeStatementTest}}
>  * The statement implementation is in {{DescribeStatement}}, which fetches the 
> create statement from the different schema elements using 
> {{SchemaElement.toCqlString}}
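
For orientation, here is a minimal sketch of the quoting rule a fix needs to 
apply when emitting identifiers. The helper name, the regex and the tiny 
keyword list are illustrative assumptions; Cassandra's real reserved-keyword 
handling and list are more involved.

{code:java}
import java.util.Set;

public class IdentifierQuotingSketch
{
    // Illustrative excerpt only; Cassandra's reserved-keyword list is longer.
    private static final Set<String> RESERVED = Set.of("token", "desc", "select", "order");

    // Quote an identifier when needed, doubling embedded quotes per CQL rules.
    static String maybeQuote(String id)
    {
        boolean safe = id.matches("[a-z][a-z0-9_]*") && !RESERVED.contains(id);
        return safe ? id : '"' + id.replace("\"", "\"\"") + '"';
    }

    public static void main(String[] args)
    {
        System.out.println(maybeQuote("token")); // "token" -> must be quoted
        System.out.println(maybeQuote("c3"));    // c3 -> left as-is
    }
}
{code}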



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18102) Add a virtual table to list snapshots

2022-12-08 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644985#comment-17644985
 ] 

Stefan Miklosovic commented on CASSANDRA-18102:
---

Ok. Sounds good. Nice compromise.

> Add a virtual table to list snapshots
> -
>
> Key: CASSANDRA-18102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18102
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Virtual Tables, Local/Snapshots
>Reporter: Paulo Motta
>Assignee: maxwellguo
>Priority: Normal
>
> It should be possible to query a node's snapshots via virtual tables.
> The table should expose the same fields/columns as the 
> [TableSnapshot|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/snapshot/TableSnapshot.java]
>  class.
> Something along these lines:
> {noformat}
> cqlsh> SELECT * FROM system_views.snapshots;
>  
> tag | keyspace_name | table_name |  table_id | is_ephemeral | created_at | 
> expires_at | directories
> +---++---+--+---++
> 1670460346841 | system | compaction_info | 
> 123e4567-e89b-12d3-a456-426614174000 | false | 2022-12-08T00:45:47.108Z | 
> null | 
> {'/var/lib/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/snapshots/1670460346841'}
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18102) Add a virtual table to list snapshots

2022-12-08 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644982#comment-17644982
 ] 

Stefan Miklosovic commented on CASSANDRA-18102:
---

Interesting. I don't know ... going to disk kind of defeats the whole purpose 
of not going there.

> Add a virtual table to list snapshots
> -
>
> Key: CASSANDRA-18102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18102
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Virtual Tables, Local/Snapshots
>Reporter: Paulo Motta
>Assignee: maxwellguo
>Priority: Normal
>
> It should be possible to query a node's snapshots via virtual tables.
> The table should expose the same fields/columns as the 
> [TableSnapshot|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/snapshot/TableSnapshot.java]
>  class.
> Something along these lines:
> {noformat}
> cqlsh> SELECT * FROM system_views.snapshots;
>  
> tag | keyspace_name | table_name |  table_id | is_ephemeral | created_at | 
> expires_at | directories
> +---++---+--+---++
> 1670460346841 | system | compaction_info | 
> 123e4567-e89b-12d3-a456-426614174000 | false | 2022-12-08T00:45:47.108Z | 
> null | 
> {'/var/lib/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/snapshots/1670460346841'}
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18102) Add a virtual table to list snapshots

2022-12-08 Thread Paulo Motta (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644983#comment-17644983
 ] 

Paulo Motta commented on CASSANDRA-18102:
-

bq. so we would need to go to disk every time we need to list? For every 
snapshot to see if it is on disk or not?

File stat is very cheap since it's normally cached by the OS.

> Add a virtual table to list snapshots
> -
>
> Key: CASSANDRA-18102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18102
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Virtual Tables, Local/Snapshots
>Reporter: Paulo Motta
>Assignee: maxwellguo
>Priority: Normal
>
> It should be possible to query a node's snapshots via virtual tables.
> The table should expose the same fields/columns as the 
> [TableSnapshot|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/snapshot/TableSnapshot.java]
>  class.
> Something along these lines:
> {noformat}
> cqlsh> SELECT * FROM system_views.snapshots;
>  
> tag | keyspace_name | table_name |  table_id | is_ephemeral | created_at | 
> expires_at | directories
> +---++---+--+---++
> 1670460346841 | system | compaction_info | 
> 123e4567-e89b-12d3-a456-426614174000 | false | 2022-12-08T00:45:47.108Z | 
> null | 
> {'/var/lib/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/snapshots/1670460346841'}
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18102) Add a virtual table to list snapshots

2022-12-08 Thread Paulo Motta (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644984#comment-17644984
 ] 

Paulo Motta commented on CASSANDRA-18102:
-

bq. Interesting. I dont know ... going to disk kind of beats the whole purpose 
of not going there.

The expensive part of going to disk is traversing directories looking for 
snapshots; we will not be doing that.

> Add a virtual table to list snapshots
> -
>
> Key: CASSANDRA-18102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18102
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Virtual Tables, Local/Snapshots
>Reporter: Paulo Motta
>Assignee: maxwellguo
>Priority: Normal
>
> It should be possible to query a node's snapshots via virtual tables.
> The table should expose the same fields/columns as the 
> [TableSnapshot|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/snapshot/TableSnapshot.java]
>  class.
> Something along these lines:
> {noformat}
> cqlsh> SELECT * FROM system_views.snapshots;
>  
> tag | keyspace_name | table_name |  table_id | is_ephemeral | created_at | 
> expires_at | directories
> +---++---+--+---++
> 1670460346841 | system | compaction_info | 
> 123e4567-e89b-12d3-a456-426614174000 | false | 2022-12-08T00:45:47.108Z | 
> null | 
> {'/var/lib/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/snapshots/1670460346841'}
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18102) Add a virtual table to list snapshots

2022-12-08 Thread Paulo Motta (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644981#comment-17644981
 ] 

Paulo Motta commented on CASSANDRA-18102:
-

bq. You mean like we would return it to listsnapshots output but we could add a 
column called "notes" and there it would be like "missing on disk"?

{{SnapshotManager.listSnapshots}} only returns a snapshot if {{exists() == 
true}}, otherwise it removes the phantom snapshot from {{SnapshotManager}}.

> Add a virtual table to list snapshots
> -
>
> Key: CASSANDRA-18102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18102
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Virtual Tables, Local/Snapshots
>Reporter: Paulo Motta
>Assignee: maxwellguo
>Priority: Normal
>
> It should be possible to query a node's snapshots via virtual tables.
> The table should expose the same fields/columns as the 
> [TableSnapshot|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/snapshot/TableSnapshot.java]
>  class.
> Something along these lines:
> {noformat}
> cqlsh> SELECT * FROM system_views.snapshots;
>  
> tag | keyspace_name | table_name |  table_id | is_ephemeral | created_at | 
> expires_at | directories
> +---++---+--+---++
> 1670460346841 | system | compaction_info | 
> 123e4567-e89b-12d3-a456-426614174000 | false | 2022-12-08T00:45:47.108Z | 
> null | 
> {'/var/lib/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/snapshots/1670460346841'}
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-18102) Add a virtual table to list snapshots

2022-12-08 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644978#comment-17644978
 ] 

Stefan Miklosovic edited comment on CASSANDRA-18102 at 12/8/22 8:47 PM:


You mean we would still return it in the listsnapshots output, but could add a 
column called "notes" where it would say something like "missing on disk"? 

_We can add logic to check that the snapshot is present on disk before 
returning from SnapshotManager_

So we would need to go to disk every time we need to list? For every snapshot, 
to see if it is on disk or not? 


was (Author: smiklosovic):
You mean we would still return it in the listsnapshots output, but could add a 
column called "notes" where it would say something like "missing on disk"? 

> Add a virtual table to list snapshots
> -
>
> Key: CASSANDRA-18102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18102
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Virtual Tables, Local/Snapshots
>Reporter: Paulo Motta
>Assignee: maxwellguo
>Priority: Normal
>
> It should be possible to query a node's snapshots via virtual tables.
> The table should expose the same fields/columns as the 
> [TableSnapshot|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/snapshot/TableSnapshot.java]
>  class.
> Something along these lines:
> {noformat}
> cqlsh> SELECT * FROM system_views.snapshots;
>  
> tag | keyspace_name | table_name |  table_id | is_ephemeral | created_at | 
> expires_at | directories
> +---++---+--+---++
> 1670460346841 | system | compaction_info | 
> 123e4567-e89b-12d3-a456-426614174000 | false | 2022-12-08T00:45:47.108Z | 
> null | 
> {'/var/lib/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/snapshots/1670460346841'}
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18102) Add a virtual table to list snapshots

2022-12-08 Thread Paulo Motta (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644979#comment-17644979
 ] 

Paulo Motta commented on CASSANDRA-18102:
-

We could add a periodic refresher thread if we don't want to check existence 
when it's returned, but I don't expect the existence check on return to be very 
expensive (it just checks that the snapshot dir exists and does no traversal). 
A rough sketch of such a refresher is below.
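
A minimal sketch of what such a periodic refresher could look like, assuming a 
reconcileWithDisk task that re-stats snapshot directories and evicts phantom 
entries; the class and method names are illustrative, not actual Cassandra 
code.

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only; not the actual SnapshotManager implementation.
class SnapshotRefresherSketch
{
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(runnable -> {
                Thread thread = new Thread(runnable, "snapshot-refresher");
                thread.setDaemon(true);
                return thread;
            });

    // reconcileWithDisk is assumed to re-stat snapshot directories and
    // evict in-memory entries whose directories disappeared out-of-band.
    void start(Runnable reconcileWithDisk, long periodMinutes)
    {
        scheduler.scheduleWithFixedDelay(reconcileWithDisk, periodMinutes,
                                         periodMinutes, TimeUnit.MINUTES);
    }
}
{code}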

> Add a virtual table to list snapshots
> -
>
> Key: CASSANDRA-18102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18102
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Virtual Tables, Local/Snapshots
>Reporter: Paulo Motta
>Assignee: maxwellguo
>Priority: Normal
>
> It should be possible to query a node's snapshots via virtual tables.
> The table should expose the same fields/columns as the 
> [TableSnapshot|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/snapshot/TableSnapshot.java]
>  class.
> Something along these lines:
> {noformat}
> cqlsh> SELECT * FROM system_views.snapshots;
>  
> tag | keyspace_name | table_name |  table_id | is_ephemeral | created_at | 
> expires_at | directories
> +---++---+--+---++
> 1670460346841 | system | compaction_info | 
> 123e4567-e89b-12d3-a456-426614174000 | false | 2022-12-08T00:45:47.108Z | 
> null | 
> {'/var/lib/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/snapshots/1670460346841'}
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18102) Add a virtual table to list snapshots

2022-12-08 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644978#comment-17644978
 ] 

Stefan Miklosovic commented on CASSANDRA-18102:
---

You mean we would still return it in the listsnapshots output, but could add a 
column called "notes" where it would say something like "missing on disk"? 

> Add a virtual table to list snapshots
> -
>
> Key: CASSANDRA-18102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18102
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Virtual Tables, Local/Snapshots
>Reporter: Paulo Motta
>Assignee: maxwellguo
>Priority: Normal
>
> It should be possible to query a node's snapshots via virtual tables.
> The table should expose the same fields/columns as the 
> [TableSnapshot|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/snapshot/TableSnapshot.java]
>  class.
> Something along these lines:
> {noformat}
> cqlsh> SELECT * FROM system_views.snapshots;
>  
> tag | keyspace_name | table_name |  table_id | is_ephemeral | created_at | 
> expires_at | directories
> +---++---+--+---++
> 1670460346841 | system | compaction_info | 
> 123e4567-e89b-12d3-a456-426614174000 | false | 2022-12-08T00:45:47.108Z | 
> null | 
> {'/var/lib/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/snapshots/1670460346841'}
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18102) Add a virtual table to list snapshots

2022-12-08 Thread Paulo Motta (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644977#comment-17644977
 ] 

Paulo Motta commented on CASSANDRA-18102:
-

bq. Yes, manual removal of a snapshot from disk by negligent or uninformed 
user, even done by accident, would make it unsynchronised.

We can add logic to check that the snapshot is present on disk before returning 
from {{SnapshotManager}}; I think this is what the {{TableSnapshot.exists()}} 
method does. We just need to remove the phantom snapshot from 
{{SnapshotManager}} and maybe log a warning; a rough sketch is below.
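
A self-contained sketch of that existence filter, assuming an in-memory map 
from snapshot tag to its directories; the names are illustrative and this is 
not the actual SnapshotManager/TableSnapshot code.

{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only; not the actual SnapshotManager/TableSnapshot code.
class SnapshotListingSketch
{
    private final Map<String, List<Path>> snapshots = new ConcurrentHashMap<>();

    // Mirrors the idea behind TableSnapshot.exists(): one cheap stat per
    // directory, no traversal of the directory contents.
    private static boolean existsOnDisk(List<Path> directories)
    {
        return directories.stream().anyMatch(Files::exists);
    }

    List<String> listSnapshots()
    {
        // Evict phantom entries whose directories were deleted out-of-band,
        // instead of returning stale in-memory state.
        snapshots.entrySet().removeIf(entry -> !existsOnDisk(entry.getValue()));
        return List.copyOf(snapshots.keySet());
    }
}
{code}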

> Add a virtual table to list snapshots
> -
>
> Key: CASSANDRA-18102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18102
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Virtual Tables, Local/Snapshots
>Reporter: Paulo Motta
>Assignee: maxwellguo
>Priority: Normal
>
> It should be possible to query a node's snapshots via virtual tables.
> The table should expose the same fields/columns as the 
> [TableSnapshot|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/snapshot/TableSnapshot.java]
>  class.
> Something along these lines:
> {noformat}
> cqlsh> SELECT * FROM system_views.snapshots;
>  
> tag | keyspace_name | table_name |  table_id | is_ephemeral | created_at | 
> expires_at | directories
> +---++---+--+---++
> 1670460346841 | system | compaction_info | 
> 123e4567-e89b-12d3-a456-426614174000 | false | 2022-12-08T00:45:47.108Z | 
> null | 
> {'/var/lib/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/snapshots/1670460346841'}
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-18102) Add a virtual table to list snapshots

2022-12-08 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644975#comment-17644975
 ] 

Stefan Miklosovic edited comment on CASSANDRA-18102 at 12/8/22 8:43 PM:


Yes, manual removal of a snapshot from disk by a negligent or uninformed user, 
even done by accident, would make it unsynchronised.

We do not need that flag to refresh manually. We can refresh every 10 minutes, 
for example, just to be sure (or every hour?). That would still go to disk, but 
so infrequently that it does not matter.

edit: however, what is funny is that with periodic refreshing we would suddenly 
be going to disk when we previously were not touching it at all. So we would go 
to disk even more than before (periodically vs. only upon listing).


was (Author: smiklosovic):
Yes, manual removal of a snapshot from disk by a negligent or uninformed user, 
even done by accident, would make it unsynchronised.

We do not need that flag to refresh manually. We can refresh every 10 minutes, 
for example, just to be sure (or every hour?). That would still go to disk, but 
so infrequently that it does not matter.

> Add a virtual table to list snapshots
> -
>
> Key: CASSANDRA-18102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18102
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Virtual Tables, Local/Snapshots
>Reporter: Paulo Motta
>Assignee: maxwellguo
>Priority: Normal
>
> It should be possible to query a node's snapshots via virtual tables.
> The table should expose the same fields/columns as the 
> [TableSnapshot|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/snapshot/TableSnapshot.java]
>  class.
> Something along these lines:
> {noformat}
> cqlsh> SELECT * FROM system_views.snapshots;
>  
> tag | keyspace_name | table_name |  table_id | is_ephemeral | created_at | 
> expires_at | directories
> +---++---+--+---++
> 1670460346841 | system | compaction_info | 
> 123e4567-e89b-12d3-a456-426614174000 | false | 2022-12-08T00:45:47.108Z | 
> null | 
> {'/var/lib/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/snapshots/1670460346841'}
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18102) Add a virtual table to list snapshots

2022-12-08 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644975#comment-17644975
 ] 

Stefan Miklosovic commented on CASSANDRA-18102:
---

Yes, manual removal of a snapshot from disk by a negligent or uninformed user, 
even done by accident, would make it unsynchronised.

We do not need that flag to refresh manually. We can refresh every 10 minutes, 
for example, just to be sure (or every hour?). That would still go to disk, but 
so infrequently that it does not matter.

> Add a virtual table to list snapshots
> -
>
> Key: CASSANDRA-18102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18102
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Virtual Tables, Local/Snapshots
>Reporter: Paulo Motta
>Assignee: maxwellguo
>Priority: Normal
>
> It should be possible to query a node's snapshots via virtual tables.
> The table should expose the same fields/columns as the 
> [TableSnapshot|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/snapshot/TableSnapshot.java]
>  class.
> Something along these lines:
> {noformat}
> cqlsh> SELECT * FROM system_views.snapshots;
>  
> tag           | keyspace_name | table_name      | table_id                             | is_ephemeral | created_at               | expires_at | directories
> ---------------+---------------+-----------------+--------------------------------------+--------------+--------------------------+------------+-------------
> 1670460346841 | system        | compaction_info | 123e4567-e89b-12d3-a456-426614174000 | false        | 2022-12-08T00:45:47.108Z | null       | {'/var/lib/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/snapshots/1670460346841'}
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-18102) Add a virtual table to list snapshots

2022-12-08 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644972#comment-17644972
 ] 

Brandon Williams edited comment on CASSANDRA-18102 at 12/8/22 8:38 PM:
---

bq. I don't think snapshots in memory and disk should ever get unsynchronized

Given how snapshots are managed today, I could see someone manually removing a 
snapshot from disk.


was (Author: brandon.williams):
bq. I don't think snapshots in memory and disk should ever get unsynchronized

Given how snapshots are managed today, I could see someone manually removing a 
snapshot.

> Add a virtual table to list snapshots
> -
>
> Key: CASSANDRA-18102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18102
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Virtual Tables, Local/Snapshots
>Reporter: Paulo Motta
>Assignee: maxwellguo
>Priority: Normal
>
> It should be possible to query a node's snapshots via virtual tables.
> The table should expose the same fields/columns as the 
> [TableSnapshot|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/snapshot/TableSnapshot.java]
>  class.
> Something along these lines:
> {noformat}
> cqlsh> SELECT * FROM system_views.snapshots;
>  
> tag           | keyspace_name | table_name      | table_id                             | is_ephemeral | created_at               | expires_at | directories
> ---------------+---------------+-----------------+--------------------------------------+--------------+--------------------------+------------+-------------
> 1670460346841 | system        | compaction_info | 123e4567-e89b-12d3-a456-426614174000 | false        | 2022-12-08T00:45:47.108Z | null       | {'/var/lib/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/snapshots/1670460346841'}
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18102) Add a virtual table to list snapshots

2022-12-08 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644972#comment-17644972
 ] 

Brandon Williams commented on CASSANDRA-18102:
--

bq. I don't think snapshots in memory and disk should ever get unsynchronized

Given how snapshots are managed today, I could see someone manually removing a 
snapshot.

> Add a virtual table to list snapshots
> -
>
> Key: CASSANDRA-18102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18102
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Virtual Tables, Local/Snapshots
>Reporter: Paulo Motta
>Assignee: maxwellguo
>Priority: Normal
>
> It should be possible to query a node's snapshots via virtual tables.
> The table should expose the same fields/columns as the 
> [TableSnapshot|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/snapshot/TableSnapshot.java]
>  class.
> Something along these lines:
> {noformat}
> cqlsh> SELECT * FROM system_views.snapshots;
>  
> tag           | keyspace_name | table_name      | table_id                             | is_ephemeral | created_at               | expires_at | directories
> ---------------+---------------+-----------------+--------------------------------------+--------------+--------------------------+------------+-------------
> 1670460346841 | system        | compaction_info | 123e4567-e89b-12d3-a456-426614174000 | false        | 2022-12-08T00:45:47.108Z | null       | {'/var/lib/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/snapshots/1670460346841'}
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18102) Add a virtual table to list snapshots

2022-12-08 Thread Paulo Motta (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644970#comment-17644970
 ] 

Paulo Motta commented on CASSANDRA-18102:
-

bq. After that is done, this ticket will be easy.

I think these pieces of work are independent, but they should be merged together. The virtual 
table will just read from SnapshotManager instead of reading from disk, but that 
should be transparent since the virtual table will just display a list of 
{{TableSnapshot}}s.
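
For orientation, a minimal sketch of what such a virtual table could look like, 
following the existing {{AbstractVirtualTable}}/{{SimpleDataSet}} pattern. The 
column subset and the supplier wiring are simplifications, and the 
{{TableSnapshot}} accessor names are assumptions to check against the linked class:

{code:java}
import java.util.Collection;
import java.util.function.Supplier;

import org.apache.cassandra.db.marshal.BooleanType;
import org.apache.cassandra.db.marshal.UTF8Type;
import org.apache.cassandra.db.virtual.AbstractVirtualTable;
import org.apache.cassandra.db.virtual.SimpleDataSet;
import org.apache.cassandra.dht.LocalPartitioner;
import org.apache.cassandra.schema.TableMetadata;
import org.apache.cassandra.service.snapshot.TableSnapshot;

final class SnapshotsTable extends AbstractVirtualTable
{
    private final Supplier<Collection<TableSnapshot>> snapshots;

    SnapshotsTable(String keyspace, Supplier<Collection<TableSnapshot>> snapshots)
    {
        super(TableMetadata.builder(keyspace, "snapshots")
                           .kind(TableMetadata.Kind.VIRTUAL)
                           .partitioner(new LocalPartitioner(UTF8Type.instance))
                           .addPartitionKeyColumn("tag", UTF8Type.instance)
                           .addClusteringColumn("keyspace_name", UTF8Type.instance)
                           .addClusteringColumn("table_name", UTF8Type.instance)
                           .addRegularColumn("is_ephemeral", BooleanType.instance)
                           .build());
        this.snapshots = snapshots;
    }

    @Override
    public DataSet data()
    {
        // One row per snapshot, fed by whatever SnapshotManager exposes
        SimpleDataSet result = new SimpleDataSet(metadata());
        for (TableSnapshot snapshot : snapshots.get())
            result.row(snapshot.getTag(), snapshot.getKeyspaceName(), snapshot.getTableName())
                  .column("is_ephemeral", snapshot.isEphemeral());
        return result;
    }
}
{code}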

> Add a virtual table to list snapshots
> -
>
> Key: CASSANDRA-18102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18102
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Virtual Tables, Local/Snapshots
>Reporter: Paulo Motta
>Assignee: maxwellguo
>Priority: Normal
>
> It should be possible to query a node's snapshots via virtual tables.
> The table should expose the same fields/columns as the 
> [TableSnapshot|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/snapshot/TableSnapshot.java]
>  class.
> Something along these lines:
> {noformat}
> cqlsh> SELECT * FROM system_views.snapshots;
>  
> tag           | keyspace_name | table_name      | table_id                             | is_ephemeral | created_at               | expires_at | directories
> ---------------+---------------+-----------------+--------------------------------------+--------------+--------------------------+------------+-------------
> 1670460346841 | system        | compaction_info | 123e4567-e89b-12d3-a456-426614174000 | false        | 2022-12-08T00:45:47.108Z | null       | {'/var/lib/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/snapshots/1670460346841'}
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18102) Add a virtual table to list snapshots

2022-12-08 Thread Paulo Motta (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644968#comment-17644968
 ] 

Paulo Motta commented on CASSANDRA-18102:
-

{quote}If snapshots in memory and on disk get unsynchronized, there should be a 
way to sync them again. I propose a new flag on nodetool listsnapshots (like -r / 
--refresh). That would go to disk; by default it would read from 
SnapshotManager.
{quote}
I don't think snapshots in memory and disk should ever get unsynchronized, so I 
don't think we need such a flag. We just need to ensure any created or removed 
snapshots are reflected in {{SnapshotManager}}.

The work of unifying snapshot management on {{SnapshotManager}} on [this 
branch|https://github.com/apache/cassandra/pull/1305] moves snapshots to 
memory, with tests. We just need to rebase the branch and remove stuff that was 
committed as part of CASSANDRA-17588.

> Add a virtual table to list snapshots
> -
>
> Key: CASSANDRA-18102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18102
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Virtual Tables, Local/Snapshots
>Reporter: Paulo Motta
>Assignee: maxwellguo
>Priority: Normal
>
> It should be possible to query a node's snapshots via virtual tables.
> The table should expose the same fields/columns as the 
> [TableSnapshot|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/snapshot/TableSnapshot.java]
>  class.
> Something along these lines:
> {noformat}
> cqlsh> SELECT * FROM system_views.snapshots;
>  
> tag           | keyspace_name | table_name      | table_id                             | is_ephemeral | created_at               | expires_at | directories
> ---------------+---------------+-----------------+--------------------------------------+--------------+--------------------------+------------+-------------
> 1670460346841 | system        | compaction_info | 123e4567-e89b-12d3-a456-426614174000 | false        | 2022-12-08T00:45:47.108Z | null       | {'/var/lib/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/snapshots/1670460346841'}
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18089) The source code must obey the avoid star import checkstyle rule

2022-12-08 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644964#comment-17644964
 ] 

Michael Semb Wever commented on CASSANDRA-18089:


bq. Because of the amount of the changes, it would be nice to fix the order of 
these imports as well in one run so we do not need two big patches 
instead of one. This is again something to discuss. Ideally, once we do that we 
have nothing else to do when it comes to imports.

I'm in favour of separating them. Each will be much easier to review, and therefore 
safer, on its own.
And we're nit-picking here; I'm happy to leave it to Maxim, who is the one doing 
the work.

> The source code must obey the avoid star import checkstyle rule
> ---
>
> Key: CASSANDRA-18089
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18089
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Build
>Reporter: Maxim Muzafarov
>Assignee: Maxim Muzafarov
>Priority: Normal
> Fix For: 4.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Cassandra has code style rules regarding the class import order: 
> [https://cassandra.apache.org/_/development/code_style.html]
> Importing all classes from a package or static members from a class leads to 
> tight coupling between packages or classes and might lead to problems when a 
> new library version introduces name clashes. The advantage of explicitly 
> listing all imports from a package is that you can tell at a glance which 
> class you meant to use, which makes reading and refactoring the source code 
> that much easier.
> The checkstyle tool that is already used for checking the source code has such 
> a check, and it may be added to the config for both the production and 
> test source code:
> https://checkstyle.sourceforge.io/config_imports.html#AvoidStaticImport
> Besides adding a new checkstyle rule, it may be more convenient for those 
> community members who are working with the code to reflect the same rule in 
> the IDE's inspection profiles (if possible), so that 'optimize 
> imports' produces the same results on each execution as checkstyle 
> does.
> Summary:
> - add new checkstyle rule;
> - update IDE's appropriate built-in inspections configurations;
> - update development code-style web page and wiki;
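
For reference, the rule itself is a one-line checkstyle module; a minimal sketch 
of the addition (where exactly it lands in Cassandra's existing checkstyle 
configuration is up to the patch):

{code:xml}
<!-- Sketch: rejects wildcard imports such as `import java.util.*;` -->
<module name="TreeWalker">
    <module name="AvoidStarImport"/>
</module>
{code}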



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18102) Add a virtual table to list snapshots

2022-12-08 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644962#comment-17644962
 ] 

Stefan Miklosovic commented on CASSANDRA-18102:
---

[~maxwellguo] there is no "our team"; we are all one team. We just need to start 
caching snapshots in SnapshotManager as well. We do that already, but just for 
expiring snapshots; we need to remove that limitation. If snapshots in memory and on 
disk get unsynchronized, there should be a way to sync them again. I propose a 
new flag on nodetool listsnapshots (like -r / --refresh). That would go to disk; 
by default it would read from SnapshotManager. A rough sketch of the flag follows.

We can dedicate a new ticket to this. After that is done, this ticket will be 
easy. 
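
A rough sketch of that flag on the existing airline-based command; the flag and 
its plumbing are the proposal here, not existing code:

{code:java}
import io.airlift.airline.Command;
import io.airlift.airline.Option;

import org.apache.cassandra.tools.NodeProbe;
import org.apache.cassandra.tools.NodeTool.NodeToolCmd;

@Command(name = "listsnapshots", description = "Lists all snapshots")
public class ListSnapshots extends NodeToolCmd
{
    // Proposed: force a re-scan of the snapshot directories on disk
    // instead of answering from the in-memory SnapshotManager view
    @Option(name = { "-r", "--refresh" }, description = "Re-scan snapshot directories on disk before listing (proposed)")
    private boolean refresh = false;

    @Override
    public void execute(NodeProbe probe)
    {
        // The refresh flag would be passed down over JMX here; no such
        // parameter exists today, so the body is deliberately elided.
    }
}
{code}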

> Add a virtual table to list snapshots
> -
>
> Key: CASSANDRA-18102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18102
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Virtual Tables, Local/Snapshots
>Reporter: Paulo Motta
>Assignee: maxwellguo
>Priority: Normal
>
> It should be possible to query a node's snapshots via virtual tables.
> The table should expose the same fields/columns as the 
> [TableSnapshot|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/snapshot/TableSnapshot.java]
>  class.
> Something along these lines:
> {noformat}
> cqlsh> SELECT * FROM system_views.snapshots;
>  
> tag           | keyspace_name | table_name      | table_id                             | is_ephemeral | created_at               | expires_at | directories
> ---------------+---------------+-----------------+--------------------------------------+--------------+--------------------------+------------+-------------
> 1670460346841 | system        | compaction_info | 123e4567-e89b-12d3-a456-426614174000 | false        | 2022-12-08T00:45:47.108Z | null       | {'/var/lib/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/snapshots/1670460346841'}
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17863) additionalWritePolicy in TableParams is not added into equals, hashcode and toString methods

2022-12-08 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644959#comment-17644959
 ] 

Stefan Miklosovic commented on CASSANDRA-17863:
---

[~e.dimitrova]

This was already done for trunk here (1); there was quite a big cleanup as part of 
that patch, which we saw as opportunistic.
(1) 
https://github.com/apache/cassandra/commit/6e3770bc154ffd201b306febd92cfc14101efbbf

I am not sure there is enough will to do this for 4.1 and older; I think 
there is no value in doing that anymore.

> additionalWritePolicy in TableParams is not added into equals, hashcode and 
> toString methods
> 
>
> Key: CASSANDRA-17863
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17863
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Other
>Reporter: Stefan Miklosovic
>Priority: Normal
>  Labels: lhf
>
> This commit (1) introduces additionalWritePolicy into TableParams, but equals, 
> hashCode and toString were not updated and do not include it. Unless this 
> was done on purpose (which I doubt), I think we should add it there.
> (1) 
> https://github.com/apache/cassandra/commit/4ae229f5cd270c2b43475b3f752a7b228de260ea#diff-1179d180c9b57e04b1088a10e5e168dcb924ace760a71fb28daf654453b1
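
The fix itself is mechanical; a schematic, self-contained sketch of its shape 
(TableParams' many other fields are elided and the stand-in field type is 
illustrative):

{code:java}
import java.util.Objects;

// Schematic only: the point is simply that additionalWritePolicy must
// appear in equals, hashCode and toString alongside the other fields.
final class TableParamsSketch
{
    final Object additionalWritePolicy; // stands in for the real type

    TableParamsSketch(Object additionalWritePolicy)
    {
        this.additionalWritePolicy = additionalWritePolicy;
    }

    @Override
    public boolean equals(Object o)
    {
        if (this == o) return true;
        if (!(o instanceof TableParamsSketch)) return false;
        TableParamsSketch other = (TableParamsSketch) o;
        return Objects.equals(additionalWritePolicy, other.additionalWritePolicy);
    }

    @Override
    public int hashCode()
    {
        return Objects.hash(additionalWritePolicy);
    }

    @Override
    public String toString()
    {
        return "TableParams{additionalWritePolicy=" + additionalWritePolicy + '}';
    }
}
{code}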



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12525) When adding new nodes to a cluster which has authentication enabled, we end up losing cassandra user's current credentials and they get reverted back to default c

2022-12-08 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-12525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644958#comment-17644958
 ] 

Stefan Miklosovic commented on CASSANDRA-12525:
---

OK, let's do this then. I would not increase the default delay for that system 
property. If we kept increasing this and other such properties over time, after a 
while all the "waiting" together would slow down the overall startup 
unbearably.

For your scenario, maybe I am completely wrong here, but I would expect the 
nodes in dc2 to have a node from dc1 as a seed. So the startup should not 
even get as far as creating the default role; it should not even start if 
it cannot see any seeds.

To test this, we could do it with just 2 nodes. Start the first, change 
the password, partition the network, and start the second; it should create the 
default role. Then start the first node and repair the second. You should be 
able to connect to the second node with the changed password. A rough test 
skeleton is sketched below.

To partition the network, there are already some examples of how to drop the 
communication; take a look at this (1) and you will eventually figure it 
out.

Please tell me if this is too much for you to do. We would be glad if you 
figured it out; if you don't, we are here to help guide you. We are trying to 
guide new contributors so they will be more comfortable here.

(1) 
https://github.com/apache/cassandra/blob/trunk/test/distributed/org/apache/cassandra/distributed/shared/ClusterUtils.java#L124
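
A rough in-jvm dtest skeleton of that scenario, assuming the distributed test 
API referenced in (1); the filter calls and the commented steps are assumptions 
to refine against ClusterUtils:

{code:java}
import org.apache.cassandra.distributed.Cluster;
import org.junit.Test;

public class DefaultRoleRecreationTest
{
    @Test
    public void changedPasswordSurvivesIsolatedSecondNode() throws Throwable
    {
        try (Cluster cluster = Cluster.build(2).start())
        {
            // 1. change the default superuser password via node 1
            // 2. partition the two nodes; the filter API looks roughly like:
            cluster.filters().allVerbs().from(1).to(2).drop();
            cluster.filters().allVerbs().from(2).to(1).drop();
            // 3. restart node 2 so it initializes auth while isolated
            // 4. heal the partition (filters().reset()) and repair system_auth
            // 5. assert that connecting to node 2 with the changed password works
        }
    }
}
{code}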



> When adding new nodes to a cluster which has authentication enabled, we end 
> up losing cassandra user's current credentials and they get reverted back to 
> default cassandra/cassandra credentials
> -
>
> Key: CASSANDRA-12525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12525
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Schema, Local/Config
>Reporter: Atin Sood
>Assignee: German Eichberger
>Priority: Normal
> Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 4.x
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Made the following observation:
> When adding new nodes to an existing C* cluster with authentication enabled 
> we end up losing password information about the `cassandra` user. 
> Initial Setup
> - Create a 5 node cluster with system_auth having RF=5 and 
> NetworkTopologyStrategy
> - Enable PasswordAuthenticator on this cluster and update the password for 
> the 'cassandra' user to, say, 'password' via an ALTER query
> - Make sure you run nodetool repair on all the nodes
> Test case
> - Now go ahead and add 5 more nodes to this cluster.
> - Run nodetool repair on all the 10 nodes now
> - Decommission the original 5 nodes such that only the new 5 nodes are in the 
> cluster now
> - Run cqlsh and try to connect to this cluster using old user name and 
> password, cassandra/password
> I was unable to connect to the nodes with the original credentials and was 
> only able to connect using the default cassandra/cassandra credentials
> From the conversation over IRC
> `beobal: sood: that definitely shouldn't happen. The new nodes should only 
> create the default superuser role if there are 0 roles currently defined 
> (including that default one)`



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17918) DESCRIBE output does not quote column names using reserved keywords

2022-12-08 Thread Yifan Cai (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644956#comment-17644956
 ] 

Yifan Cai commented on CASSANDRA-17918:
---

Rebased all branches and started the full CI runs. I will run the repeated tests 
for the new tests after those finish. 

CI Results (pending):
||Branch||Source||Circle CI||
|cassandra-4.0|[branch|https://github.com/yifan-c/cassandra/tree/commit_remote_branch/CASSANDRA-17918-cassandra-4.0-F5705D26-0685-449B-98DB-523066CDC82C]|[build|https://app.circleci.com/pipelines/github/yifan-c/cassandra?branch=commit_remote_branch%2FCASSANDRA-17918-cassandra-4.0-F5705D26-0685-449B-98DB-523066CDC82C]|
|cassandra-4.1|[branch|https://github.com/yifan-c/cassandra/tree/commit_remote_branch/CASSANDRA-17918-cassandra-4.1-F5705D26-0685-449B-98DB-523066CDC82C]|[build|https://app.circleci.com/pipelines/github/yifan-c/cassandra?branch=commit_remote_branch%2FCASSANDRA-17918-cassandra-4.1-F5705D26-0685-449B-98DB-523066CDC82C]|
|trunk|[branch|https://github.com/yifan-c/cassandra/tree/commit_remote_branch/CASSANDRA-17918-trunk-F5705D26-0685-449B-98DB-523066CDC82C]|[build|https://app.circleci.com/pipelines/github/yifan-c/cassandra?branch=commit_remote_branch%2FCASSANDRA-17918-trunk-F5705D26-0685-449B-98DB-523066CDC82C]|

> DESCRIBE output does not quote column names using reserved keywords
> ---
>
> Key: CASSANDRA-17918
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17918
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/CQL
>Reporter: Yifan Cai
>Assignee: Bernardo Botella Corbi
>Priority: Normal
> Fix For: 4.0.x, 4.1.x
>
>
> Column names that use reserved keywords are not quoted in the DESCRIBE output 
> for UDTs. The following test reproduces the issue. Reading the code, it looks 
> like such column names are also not quoted for materialized views, UDFs and 
> user-defined aggregates. 
> The impact of the bug is that the described schema cannot be re-imported, due 
> to the use of reserved keywords as column names. 
>  
> {code:java}
>     @Test
>     public void testUsingReservedInCreateType() throws Throwable
>     {
>         String type = createType(KEYSPACE_PER_TEST, "CREATE TYPE %s (\"token\" text, \"desc\" text);");
>         assertRowsNet(executeDescribeNet(KEYSPACE_PER_TEST, "DESCRIBE TYPE " + type),
>                       row(KEYSPACE_PER_TEST, "type", type, "CREATE TYPE " + KEYSPACE_PER_TEST + "." + type + " (\n" +
>                           "    \"token\" text,\n" +
>                           "    \"desc\" text\n" +
>                           ");"));
>     } {code}
> +Additional information for newcomers:+
>  * Unit tests for DESCRIBE statements are in {{DescribeStatementTest}}
>  * The statement implementation is in {{DescribeStatement}}, which fetches the 
> create statement from the different schema elements using 
> {{SchemaElement.toCqlString}}
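
The usual fix pattern for this family of bugs is to run identifiers through the 
existing quoting helper when rendering CQL; a small standalone illustration (the 
output comments are expectations, not captured output):

{code:java}
import org.apache.cassandra.cql3.ColumnIdentifier;

public class QuotingExample
{
    public static void main(String[] args)
    {
        // Plain lowercase identifiers stay bare...
        System.out.println(ColumnIdentifier.maybeQuote("c1"));    // c1
        // ...while reserved keywords must come back double-quoted
        System.out.println(ColumnIdentifier.maybeQuote("token")); // "token"
    }
}
{code}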



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17797) All system properties and environment variables should be accessed via the new CassandraRelevantProperties and CassandraRelevantEnv classes

2022-12-08 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644950#comment-17644950
 ] 

Stefan Miklosovic commented on CASSANDRA-17797:
---

Great ticket to spend time on!

> All system properties and environment variables should be accessed via the 
> new CassandraRelevantProperties and CassandraRelevantEnv classes
> ---
>
> Key: CASSANDRA-17797
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17797
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Ekaterina Dimitrova
>Assignee: Maxim Muzafarov
>Priority: Low
> Fix For: 4.x
>
>
> Follow up ticket for CASSANDRA-15876 - 
> "Always access system properties and environment variables via the new 
> CassandraRelevantProperties and CassandraRelevantEnv classes"
> As part of that ticket we moved to the two new classes only the 
> properties/variables that were listed in the System Properties Virtual 
> Table at the time.
> We have to move the rest of the properties around the code to those classes 
> and start using those classes to access all of them. 
> +Additional information for newcomers:+
> You might want to start by getting acquainted with 
> CassandraRelevantProperties and CassandraRelevantEnv classes. Also, you might 
> want to check what changes were done and how the first batch was transferred 
> to this new framework as part of  
> [CASSANDRA-15876|https://github.com/apache/cassandra/commit/7694c1d191531ac152db55e83bc0db6864a5441e]
> We are interested in the properties currently accessed through 
> getProperties around the code.
> As part of CASSANDRA-15876 relevant unit tests were added 
> (CassandraRelevantPropertiesTest). To verify the new patch we need to ensure 
> that all Cassandra tests pass, and also to think about potentially updating 
> the mentioned test class.
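
To make the target style concrete, a small before/after sketch; CASSANDRA_CONFIG 
is one of the existing enum constants, but treat the exact constant and default 
value as illustrative:

{code:java}
import org.apache.cassandra.config.CassandraRelevantProperties;

public class PropertyAccessExample
{
    public static void main(String[] args)
    {
        // Before: raw access, with the default value repeated at call sites
        String before = System.getProperty("cassandra.config", "cassandra.yaml");

        // After: typed access through the enum constant, which owns the
        // property key and its default in a single place
        String after = CassandraRelevantProperties.CASSANDRA_CONFIG.getString();

        System.out.println(before + " / " + after);
    }
}
{code}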



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9312) Provide a way to retrieve the write time of a CQL row

2022-12-08 Thread Yangyi Shi (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644937#comment-17644937
 ] 

Yangyi Shi commented on CASSANDRA-9312:
---

[~e.dimitrova] Hi, I assigned it back and plan to work on this soon. Thanks for 
the notification.

> Provide a way to retrieve the write time of a CQL row
> -
>
> Key: CASSANDRA-9312
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9312
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Legacy/CQL
>Reporter: Nicolas Favre-Felix
>Assignee: Yangyi Shi
>Priority: Normal
>  Labels: lhf
> Fix For: 4.x
>
>
> There is currently no way to retrieve the "writetime" of a CQL row. This is 
> an issue for tables in which all dimensions are part of the primary key.
> Since Cassandra already stores a cell for the CQL row, it would make sense to 
> provide a way to read its timestamp. This feature would be consistent with 
> the concept of a row as an entity containing a number of optional columns, 
> but able to exist on its own.
> +Additional information for newcomers+
> As [~slebresne] suggested in the comments, this functionality can be done by 
> allowing the {{writeTime}} and {{ttl}} methods on primary key columns. To do 
> that you will need to:
> * remove the check of {{Selectable.WritetimeOrTTL}} preventing the use of 
> {{writeTime}} and {{ttl}} methods on primary key columns
> * add a new method like {{add(ByteBuffer v, LivenessInfo livenessInfo, int 
> nowInSec)}} to {{ResultSetBuilder}}; that method should populate the value as 
> well as the timestamps and TTLs if needed
> * In {{SelectStatement.processPartition}} retrieve the row's 
> primaryKeyLivenessInfo and call the new {{ResultSetBuilder}} method with 
> that information.
> * Add some unit tests in {{SelectTest}}.
>  
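
To make the limitation concrete, a minimal illustration of the query this ticket 
wants to allow (today it is rejected because ck is a primary key column):

{code:java}
CREATE TABLE ks.t (pk int, ck int, PRIMARY KEY (pk, ck));
INSERT INTO ks.t (pk, ck) VALUES (1, 2);

-- Every column is part of the primary key, so there is no regular
-- column whose write time could be read; the ticket would allow:
SELECT writetime(ck) FROM ks.t WHERE pk = 1;
{code}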



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-9312) Provide a way to retrieve the write time of a CQL row

2022-12-08 Thread Yangyi Shi (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yangyi Shi reassigned CASSANDRA-9312:
-

Assignee: Yangyi Shi

> Provide a way to retrieve the write time of a CQL row
> -
>
> Key: CASSANDRA-9312
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9312
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Legacy/CQL
>Reporter: Nicolas Favre-Felix
>Assignee: Yangyi Shi
>Priority: Normal
>  Labels: lhf
> Fix For: 4.x
>
>
> There is currently no way to retrieve the "writetime" of a CQL row. This is 
> an issue for tables in which all dimensions are part of the primary key.
> Since Cassandra already stores a cell for the CQL row, it would make sense to 
> provide a way to read its timestamp. This feature would be consistent with 
> the concept of a row as an entity containing a number of optional columns, 
> but able to exist on its own.
> +Additional information for newcomers+
> As [~slebresne] suggested in the comments, this functionality can be done by 
> allowing the {{writeTime}} and {{ttl}} methods on primary key columns. To do 
> that you will need to:
> * remove the check of {{Selectable.WritetimeOrTTL}} preventing the use of 
> {{writeTime}} and {{ttl}} methods on primary key columns
> * add a new method like {{add(ByteBuffer v, LivenessInfo livenessInfo, int 
> nowInSec)}} to {{ResultSetBuilder}}; that method should populate the value as 
> well as the timestamps and TTLs if needed
> * In {{SelectStatement.processPartition}} retrieve the row's 
> primaryKeyLivenessInfo and call the new {{ResultSetBuilder}} method with 
> that information.
> * Add some unit tests in {{SelectTest}}.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18094) Add python 3.11 to CI

2022-12-08 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644935#comment-17644935
 ] 

Brandon Williams commented on CASSANDRA-18094:
--

I've added the circle configs for 3.11 and kept the commit history separate for 
each file to hopefully ease the review.

> Add python 3.11 to CI
> -
>
> Key: CASSANDRA-18094
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18094
> Project: Cassandra
>  Issue Type: Task
>  Components: CI
>Reporter: Brandon Williams
>Assignee: Brandon Williams
>Priority: Normal
>
> Python 3.11 has a small divergence that necessitates a SaferScanner for that 
> version and those after it (CASSANDRA-18088), similar to what 3.8 did, which we 
> solved with CASSANDRA-15573. We should add 3.11 to our docker images so we 
> can test with it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17178) CEP-10: Simulator Java11 Support

2022-12-08 Thread David Capwell (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644929#comment-17644929
 ] 

David Capwell commented on CASSANDRA-17178:
---

org.apache.cassandra.simulator.test.ShortPaxosSimulationTest was flaky in J8 
but not J11.

> CEP-10: Simulator Java11 Support
> 
>
> Key: CASSANDRA-17178
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17178
> Project: Cassandra
>  Issue Type: Task
>  Components: Test/fuzz
>Reporter: Benedict Elliott Smith
>Assignee: David Capwell
>Priority: Normal
> Fix For: NA
>
>
> Java 11 introduces new protection domains for package-private methods, so that 
> classes loaded with different class loaders may not access each other's 
> package-private methods, regardless of the package they are declared in. 
> There are also differences within the JDK, and there may be byte-weaving 
> targets that need updating (ThreadLocalRandom is one that has already been handled).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13855) URL Seed provider

2022-12-08 Thread Paulo Motta (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644928#comment-17644928
 ] 

Paulo Motta commented on CASSANDRA-13855:
-

Hi [~rustyrazorblade], are you planning to work on this?

> URL Seed provider
> -
>
> Key: CASSANDRA-13855
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13855
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Coordination, Legacy/Core
>Reporter: Jon Haddad
>Assignee: Jon Haddad
>Priority: Low
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: 0001-Add-URL-Seed-Provider-trunk.txt
>
>
> Seems like including a dead simple seed provider that can fetch from a URL, 1 
> line per seed, would be useful.
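
For reference, a bare-bones sketch of such a provider against the SeedProvider 
interface; the constructor contract is modeled on SimpleSeedProvider and the 
`url` parameter name is an assumption:

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.cassandra.locator.InetAddressAndPort;
import org.apache.cassandra.locator.SeedProvider;

// Sketch only: one seed address per line at the configured URL.
// Caching, validation and richer error handling are omitted.
public class URLSeedProvider implements SeedProvider
{
    private final String url;

    public URLSeedProvider(Map<String, String> args)
    {
        this.url = args.get("url"); // parameter name is an assumption
    }

    @Override
    public List<InetAddressAndPort> getSeeds()
    {
        List<InetAddressAndPort> seeds = new ArrayList<>();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(url).openStream(), StandardCharsets.UTF_8)))
        {
            String line;
            while ((line = in.readLine()) != null)
            {
                line = line.trim();
                if (!line.isEmpty())
                    seeds.add(InetAddressAndPort.getByName(line));
            }
        }
        catch (Exception e)
        {
            // Fall back to no seeds rather than aborting startup
        }
        return seeds;
    }
}
{code}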



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-12525) When adding new nodes to a cluster which has authentication enabled, we end up losing cassandra user's current credentials and they get reverted back to defa

2022-12-08 Thread German Eichberger (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-12525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644920#comment-17644920
 ] 

German Eichberger edited comment on CASSANDRA-12525 at 12/8/22 5:43 PM:


[~smiklosovic] it's best practice after adding a second DC to run a (full) 
repair of the system_auth keyspace, so with my patch this will effectively 
mitigate the issue for most people.

Also consider this scenario:
1. We bring up DC 1 in network A and change the cassandra user's password

2. We bring up DC 2 on network B - but due to some error the networks are not 
connected

3. We discover our error and connect network A and B so the DCs can see each 
other

 

In this case DC2 would not know that there is another DC (someone might just 
have misconfigured seed nodes) until the connection is established. To help with 
this scenario (other than my patch and repair) we would need some command line 
parameter so an operator can skip the initial role generation...


was (Author: JIRAUSER298386):
[~smiklosovic]  it's best practice after adding a second DC to run a (full) 
repair of the system_auth keyspace so with my patch this will effectively 
mitigate the issue for most people.



Also consider this scenario:
1. We bring up DC 1 i network A and change the cassandra user's password

2. We bring up DC 2 on network B - but due to some error the networks are not 
connected

3. We discover our error and connect network A and B so the DCs can see each 
other

 

In this case DC2 would not know that there is another DC (someone might just 
have misconfigured seed nodes) until the connection is established.To help with 
this scenario (other than my patch and repair) we would need some command line 
parameter so an operator can skip the initial role generation...

> When adding new nodes to a cluster which has authentication enabled, we end 
> up losing cassandra user's current credentials and they get reverted back to 
> default cassandra/cassandra credentials
> -
>
> Key: CASSANDRA-12525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12525
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Schema, Local/Config
>Reporter: Atin Sood
>Assignee: German Eichberger
>Priority: Normal
> Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 4.x
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Made the following observation:
> When adding new nodes to an existing C* cluster with authentication enabled 
> we end up losing password information about the `cassandra` user. 
> Initial Setup
> - Create a 5 node cluster with system_auth having RF=5 and 
> NetworkTopologyStrategy
> - Enable PasswordAuthenticator on this cluster and update the password for 
> the 'cassandra' user to, say, 'password' via an ALTER query
> - Make sure you run nodetool repair on all the nodes
> Test case
> - Now go ahead and add 5 more nodes to this cluster.
> - Run nodetool repair on all the 10 nodes now
> - Decommission the original 5 nodes such that only the new 5 nodes are in the 
> cluster now
> - Run cqlsh and try to connect to this cluster using old user name and 
> password, cassandra/password
> I was unable to connect to the nodes with the original credentials and was 
> only able to connect using the default cassandra/cassandra credentials
> From the conversation over IRC
> `beobal: sood: that definitely shouldn't happen. The new nodes should only 
> create the default superuser role if there are 0 roles currently defined 
> (including that default one)`



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12525) When adding new nodes to a cluster which has authentication enabled, we end up losing cassandra user's current credentials and they get reverted back to default c

2022-12-08 Thread German Eichberger (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-12525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644920#comment-17644920
 ] 

German Eichberger commented on CASSANDRA-12525:
---

[~smiklosovic] it's best practice after adding a second DC to run a (full) 
repair of the system_auth keyspace, so with my patch this will effectively 
mitigate the issue for most people.



Also consider this scenario:
1. We bring up DC 1 in network A and change the cassandra user's password

2. We bring up DC 2 on network B - but due to some error the networks are not 
connected

3. We discover our error and connect network A and B so the DCs can see each 
other

 

In this case DC2 would not know that there is another DC (someone might just 
have misconfigured seed nodes) until the connection is established. To help with 
this scenario (other than my patch and repair) we would need some command line 
parameter so an operator can skip the initial role generation...

> When adding new nodes to a cluster which has authentication enabled, we end 
> up losing cassandra user's current credentials and they get reverted back to 
> default cassandra/cassandra credentials
> -
>
> Key: CASSANDRA-12525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12525
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Schema, Local/Config
>Reporter: Atin Sood
>Assignee: German Eichberger
>Priority: Normal
> Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 4.x
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Made the following observation:
> When adding new nodes to an existing C* cluster with authentication enabled 
> we end up losing password information about the `cassandra` user. 
> Initial Setup
> - Create a 5 node cluster with system_auth having RF=5 and 
> NetworkTopologyStrategy
> - Enable PasswordAuthenticator on this cluster and update the password for 
> the 'cassandra' user to, say, 'password' via an ALTER query
> - Make sure you run nodetool repair on all the nodes
> Test case
> - Now go ahead and add 5 more nodes to this cluster.
> - Run nodetool repair on all the 10 nodes now
> - Decommission the original 5 nodes such that only the new 5 nodes are in the 
> cluster now
> - Run cqlsh and try to connect to this cluster using old user name and 
> password, cassandra/password
> I was unable to connect to the nodes with the original credentials and was 
> only able to connect using the default cassandra/cassandra credentials
> From the conversation over IRC
> `beobal: sood: that definitely shouldn't happen. The new nodes should only 
> create the default superuser role if there are 0 roles currently defined 
> (including that default one)`



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-18027) Use G1GC as default

2022-12-08 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova updated CASSANDRA-18027:

Complexity: Normal  (was: Low Hanging Fruit)

> Use G1GC as default
> ---
>
> Key: CASSANDRA-18027
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18027
> Project: Cassandra
>  Issue Type: Task
>  Components: Local/Config
>Reporter: Michael Semb Wever
>Assignee: Michael Semb Wever
>Priority: Normal
> Fix For: 4.1.x, 4.x
>
>
> G1GC is well battle-tested now and is the recommended configuration for most 
> users. CMS can work well on smaller heaps but requires more tuning, both initially 
> and over time. G1GC just works. CMS was deprecated in JDK 9.
> Patch at 
> https://github.com/apache/cassandra/compare/trunk...thelastpickle:cassandra:mck/7486/trunk
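
In practice the change is a GC-flag swap in the JVM options files; an 
illustrative option set (values are common starting points, not the patch's 
exact contents):

{noformat}
# jvm-server.options / jvm11-server.options (illustrative)
-XX:+UseG1GC
-XX:MaxGCPauseMillis=500
{noformat}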



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9312) Provide a way to retrieve the write time of a CQL row

2022-12-08 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644906#comment-17644906
 ] 

Ekaterina Dimitrova commented on CASSANDRA-9312:


Unassigned it due to inactivity; feel free to assign it back if you plan to 
work on it.

> Provide a way to retrieve the write time of a CQL row
> -
>
> Key: CASSANDRA-9312
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9312
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Legacy/CQL
>Reporter: Nicolas Favre-Felix
>Priority: Normal
>  Labels: lhf
> Fix For: 4.x
>
>
> There is currently no way to retrieve the "writetime" of a CQL row. This is 
> an issue for tables in which all dimensions are part of the primary key.
> Since Cassandra already stores a cell for the CQL row, it would make sense to 
> provide a way to read its timestamp. This feature would be consistent with 
> the concept of a row as an entity containing a number of optional columns, 
> but able to exist on its own.
> +Additional information for newcomers+
> As [~slebresne] suggested in the comments, this functionality can be done by 
> allowing the {{writeTime}} and {{ttl}} methods on primary key columns. To do 
> that you will need to:
> * remove the check of {{Selectable.WritetimeOrTTL}} preventing the use of 
> {{writeTime}} and {{ttl}} methods on primary key columns
> * add a new method like {{add(ByteBuffer v, LivenessInfo livenessInfo, int 
> nowInSec)}} to {{ResultSetBuilder}}; that method should populate the value as 
> well as the timestamps and TTLs if needed
> * In {{SelectStatement.processPartition}} retrieve the row's 
> primaryKeyLivenessInfo and call the new {{ResultSetBuilder}} method with 
> that information.
> * Add some unit tests in {{SelectTest}}.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17863) additionalWritePolicy in TableParams is not added into equals, hashcode and toString methods

2022-12-08 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644905#comment-17644905
 ] 

Ekaterina Dimitrova commented on CASSANDRA-17863:
-

Unassigned it due to inactivity; feel free to assign it back if you plan to 
work on it.

> additionalWritePolicy in TableParams is not added into equals, hashcode and 
> toString methods
> 
>
> Key: CASSANDRA-17863
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17863
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Other
>Reporter: Stefan Miklosovic
>Priority: Normal
>  Labels: lhf
>
> This commit (1) introduces additionalWritePolicy into TableParams, but equals, 
> hashCode and toString were not updated and do not include it. Unless this 
> was done on purpose (which I doubt), I think we should add it there.
> (1) 
> https://github.com/apache/cassandra/commit/4ae229f5cd270c2b43475b3f752a7b228de260ea#diff-1179d180c9b57e04b1088a10e5e168dcb924ace760a71fb28daf654453b1



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17818) Fix error message handling when trying to use CLUSTERING ORDER with non-clustering column

2022-12-08 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644903#comment-17644903
 ] 

Ekaterina Dimitrova commented on CASSANDRA-17818:
-

Unassigned it due to inactivity; feel free to assign it back if you plan to 
work on it.

> Fix error message handling when trying to use CLUSTERING ORDER with 
> non-clustering column
> -
>
> Key: CASSANDRA-17818
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17818
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Syntax
>Reporter: Ekaterina Dimitrova
>Assignee: Andri Rahimov
>Priority: Normal
>  Labels: lhf
> Fix For: 3.11.x, 4.0.x, 4.1.x, 4.x
>
>
> Imagine columns ck1, ck2 and v. For "CLUSTERING ORDER ck1 ASC, v DESC" the 
> error msg will suggest that information for ck2 is missing. But if you add it, 
> it will still be wrong, as "v" cannot be used. So the problem here is really 
> about using a non-clustering column rather than about not providing 
> information for some clustering column.
> The following is an example from 3.11, but the code is the same in 4.0, 4.1 
> and trunk:
> {code:java}
> cqlsh:k_test> CREATE TABLE test2 (pk int, ck1 int, ck2 int, v int, PRIMARY 
> KEY ((pk),ck1, ck2)) WITH CLUSTERING ORDER BY (v ASC);
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Missing 
> CLUSTERING ORDER for column ck1"
> cqlsh:k_test> CREATE TABLE test2 (pk int, ck1 int, ck2 int, v int, PRIMARY 
> KEY ((pk),ck1, ck2)) WITH CLUSTERING ORDER BY (ck1 ASC, v ASC);
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Missing 
> CLUSTERING ORDER for column ck2"
> cqlsh:k_test> CREATE TABLE test2 (pk int, ck1 int, ck2 int, v int, PRIMARY 
> KEY ((pk),ck1, ck2)) WITH CLUSTERING ORDER BY (ck1 ASC, ck2 DESC, v ASC);
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Only 
> clustering key columns can be defined in CLUSTERING ORDER directive"{code}
> We need to be sure that we return the same correct error message to the user 
> in all three cases, and it should be "Only clustering key columns can be 
> defined in CLUSTERING ORDER directive".
> +Additional information for newcomers+
>  * 
> [This|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/cql3/statements/schema/CreateTableStatement.java#L251-L252]
>  is where we handle the issue incorrectly, as proved by the example. The 
> easiest way to handle this issue would be to check the key set content of 
> _clusteringOrder_.
>  * It would also be good to add more unit tests in 
> [CreateTableValidationTest|https://github.com/apache/cassandra/blob/trunk/test/unit/org/apache/cassandra/schema/CreateTableValidationTest.java]
>  to cover different cases. 
>  * I suggest we create the patch first for 3.11 and then propagate it up 
> to the next versions.
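
Schematically, the suggested fix boils down to validating the CLUSTERING ORDER 
keys against the clustering columns before the missing-column check; a 
self-contained sketch with plain collections standing in for the real schema 
types (and IllegalArgumentException standing in for InvalidRequestException):

{code:java}
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ClusteringOrderCheck
{
    static void validate(Map<String, Boolean> clusteringOrder, List<String> clusteringColumns)
    {
        // Report non-clustering columns first, before complaining about
        // missing clustering columns; this fixes the misleading message.
        for (String column : clusteringOrder.keySet())
            if (!clusteringColumns.contains(column))
                throw new IllegalArgumentException(
                    "Only clustering key columns can be defined in CLUSTERING ORDER directive");

        for (String column : clusteringColumns)
            if (!clusteringOrder.containsKey(column))
                throw new IllegalArgumentException("Missing CLUSTERING ORDER for column " + column);
    }

    public static void main(String[] args)
    {
        Map<String, Boolean> order = new LinkedHashMap<>();
        order.put("ck1", true);
        order.put("v", true); // non-clustering column: now reported correctly
        validate(order, List.of("ck1", "ck2"));
    }
}
{code}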



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-17818) Fix error message handling when trying to use CLUSTERING ORDER with non-clustering column

2022-12-08 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova reassigned CASSANDRA-17818:
---

Assignee: (was: Andri Rahimov)

> Fix error message handling when trying to use CLUSTERING ORDER with 
> non-clustering column
> -
>
> Key: CASSANDRA-17818
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17818
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Syntax
>Reporter: Ekaterina Dimitrova
>Priority: Normal
>  Labels: lhf
> Fix For: 3.11.x, 4.0.x, 4.1.x, 4.x
>
>
> Imagine ck1, ck2, v columns. For "CLUSTERING ORDER ck1 ASC, v DESC" error msg 
> will suggest that information for ck2 is missing. But if you add it it will 
> still be wrong as "v" cannot be used. So the problem here is really about 
> using non-clustering column rather than about not providing information about 
> some clustering column.
> The following is example from 3.11, but the code is the same in 4.0, 4.1, 
> trunk:
> {code:java}
> cqlsh:k_test> CREATE TABLE test2 (pk int, ck1 int, ck2 int, v int, PRIMARY 
> KEY ((pk),ck1, ck2)) WITH CLUSTERING ORDER BY (v ASC);
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Missing 
> CLUSTERING ORDER for column ck1"
> cqlsh:k_test> CREATE TABLE test2 (pk int, ck1 int, ck2 int, v int, PRIMARY 
> KEY ((pk),ck1, ck2)) WITH CLUSTERING ORDER BY (ck1 ASC, v ASC);
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Missing 
> CLUSTERING ORDER for column ck2"
> cqlsh:k_test> CREATE TABLE test2 (pk int, ck1 int, ck2 int, v int, PRIMARY 
> KEY ((pk),ck1, ck2)) WITH CLUSTERING ORDER BY (ck1 ASC, ck2 DESC, v ASC);
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Only 
> clustering key columns can be defined in CLUSTERING ORDER directive"{code}
> We need to be sure that we return to the user the same correct error message 
> in all three cases and it should be "Only clustering key columns can be 
> defined in CLUSTERING ORDER directive"
> +Additional information for newcomers+
>  * 
> [This|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/cql3/statements/schema/CreateTableStatement.java#L251-L252]
>  is where we handle the issue incorrectly as proved by the example. The 
> easiest way to handle this issue would be to  check the key set content of 
> {_}clusteringOrder{_}.
>  * It would be good also to add more unit tests in 
> [CreateTableValidationTest|https://github.com/apache/cassandra/blob/trunk/test/unit/org/apache/cassandra/schema/CreateTableValidationTest.java]
>  to cover different cases. 
>  * I suggest we create patch first for 3.11 and then we can propagate it up 
> to the next versions.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-16792) Add snapshot_before_compaction_ttl configuration

2022-12-08 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova updated CASSANDRA-16792:

Labels: lhf  (was: ghc-lhf)

> Add snapshot_before_compaction_ttl configuration
> 
>
> Key: CASSANDRA-16792
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16792
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Paulo Motta
>Priority: Normal
>  Labels: lhf
>
> This property should take a human-readable duration (i.e. 6h, 3days). When it 
> is specified and snapshot_before_compaction: true, snapshots created before 
> compaction should use the specified TTL.
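
In cassandra.yaml terms the proposal reads roughly as follows; 
snapshot_before_compaction is the existing boolean option, while the TTL option 
name and format are this ticket's proposal:

{noformat}
# proposed cassandra.yaml snippet
snapshot_before_compaction: true
snapshot_before_compaction_ttl: 6h   # pre-compaction snapshots expire after 6 hours
{noformat}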



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17818) Fix error message handling when trying to use CLUSTERING ORDER with non-clustering column

2022-12-08 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova updated CASSANDRA-17818:

Labels: lhf  (was: ghc-lhf lhf)

> Fix error message handling when trying to use CLUSTERING ORDER with 
> non-clustering column
> -
>
> Key: CASSANDRA-17818
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17818
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Syntax
>Reporter: Ekaterina Dimitrova
>Assignee: Andri Rahimov
>Priority: Normal
>  Labels: lhf
> Fix For: 3.11.x, 4.0.x, 4.1.x, 4.x
>
>
> Imagine columns ck1, ck2 and v. For "CLUSTERING ORDER ck1 ASC, v DESC" the 
> error msg will suggest that information for ck2 is missing. But if you add it, 
> it will still be wrong, as "v" cannot be used. So the problem here is really 
> about using a non-clustering column rather than about not providing 
> information for some clustering column.
> The following is an example from 3.11, but the code is the same in 4.0, 4.1 
> and trunk:
> {code:java}
> cqlsh:k_test> CREATE TABLE test2 (pk int, ck1 int, ck2 int, v int, PRIMARY 
> KEY ((pk),ck1, ck2)) WITH CLUSTERING ORDER BY (v ASC);
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Missing 
> CLUSTERING ORDER for column ck1"
> cqlsh:k_test> CREATE TABLE test2 (pk int, ck1 int, ck2 int, v int, PRIMARY 
> KEY ((pk),ck1, ck2)) WITH CLUSTERING ORDER BY (ck1 ASC, v ASC);
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Missing 
> CLUSTERING ORDER for column ck2"
> cqlsh:k_test> CREATE TABLE test2 (pk int, ck1 int, ck2 int, v int, PRIMARY 
> KEY ((pk),ck1, ck2)) WITH CLUSTERING ORDER BY (ck1 ASC, ck2 DESC, v ASC);
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Only 
> clustering key columns can be defined in CLUSTERING ORDER directive"{code}
> We need to be sure that we return the same correct error message to the user 
> in all three cases, and it should be "Only clustering key columns can be 
> defined in CLUSTERING ORDER directive".
> +Additional information for newcomers+
>  * 
> [This|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/cql3/statements/schema/CreateTableStatement.java#L251-L252]
>  is where we handle the issue incorrectly, as proved by the example. The 
> easiest way to handle this issue would be to check the key set content of 
> _clusteringOrder_.
>  * It would also be good to add more unit tests in 
> [CreateTableValidationTest|https://github.com/apache/cassandra/blob/trunk/test/unit/org/apache/cassandra/schema/CreateTableValidationTest.java]
>  to cover different cases. 
>  * I suggest we create the patch first for 3.11 and then propagate it up 
> to the next versions.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-9312) Provide a way to retrieve the write time of a CQL row

2022-12-08 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova updated CASSANDRA-9312:
---
Labels: lhf  (was: ghc-lhf lhf)

> Provide a way to retrieve the write time of a CQL row
> -
>
> Key: CASSANDRA-9312
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9312
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Legacy/CQL
>Reporter: Nicolas Favre-Felix
>Assignee: Yangyi Shi
>Priority: Normal
>  Labels: lhf
> Fix For: 4.x
>
>
> There is currently no way to retrieve the "writetime" of a CQL row. This is 
> an issue for tables in which all dimensions are part of the primary key.
> Since Cassandra already stores a cell for the CQL row, it would make sense to 
> provide a way to read its timestamp. This feature would be consistent with 
> the concept of a row as an entity containing a number of optional columns, 
> but able to exist on its own.
> +Additional information for newcomers+
> As [~slebresne] suggested in the comments, this functionality can be done by 
> allowing the {{writeTime}} and {{ttl}} methods on primary key columns. To do 
> that you will need to:
> * remove the check of {{Selectable.WritetimeOrTTL}} preventing the use of 
> {{writeTime}} and {{ttl}} methods on primary key columns
> * add a new method like {{add(ByteBuffer v, LivenessInfo livenessInfo, int 
> nowInSec)}} to {{ResultSetBuilder}}; that method should populate the value as 
> well as the timestamps and TTLs if needed
> * In {{SelectStatement.processPartition}} retrieve the row's 
> primaryKeyLivenessInfo and call the new {{ResultSetBuilder}} method with 
> that information.
> * Add some unit tests in {{SelectTest}}.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-9312) Provide a way to retrieve the write time of a CQL row

2022-12-08 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova reassigned CASSANDRA-9312:
--

Assignee: (was: Yangyi Shi)

> Provide a way to retrieve the write time of a CQL row
> -
>
> Key: CASSANDRA-9312
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9312
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Legacy/CQL
>Reporter: Nicolas Favre-Felix
>Priority: Normal
>  Labels: lhf
> Fix For: 4.x
>
>
> There is currently no way to retrieve the "writetime" of a CQL row. This is 
> an issue for tables in which all dimensions are part of the primary key.
> Since Cassandra already stores a cell for the CQL row, it would make sense to 
> provide a way to read its timestamp. This feature would be consistent with 
> the concept of a row as an entity containing a number of optional columns, 
> but able to exist on its own.
> +Additional information for newcomers+
> As [~slebresne] suggested in the comments, this functionality can be done by 
> allowing the {{writeTime}} and {{ttl}} methods on primary key columns. To do 
> that you will need to:
> * remove the check of {{Selectable.WritetimeOrTTL}} preventing the use of 
> {{writeTime}} and {{ttl}} methods on primary key columns
> * add a new method like {{add(ByteBuffer v, LivenessInfo livenessInfo, int 
> nowInSec)}} to {{ResultSetBuilder}} that method should populate the value as 
> well as the timestamps and ttls if needed
> * In {{SelectStatement.processPartition}} retrieve the row 
> primaryKeyLivenessInfo and call the new {{ResultSetBuilder}} method with 
> those information.
> * Adds some unit tests in {{SelectTest}}.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-17863) additionalWritePolicy in TableParams is not added into equals, hashcode and toString methods

2022-12-08 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova reassigned CASSANDRA-17863:
---

Assignee: (was: Sonal Hundekari)

> additionalWritePolicy in TableParams is not added into equals, hashcode and 
> toString methods
> 
>
> Key: CASSANDRA-17863
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17863
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Other
>Reporter: Stefan Miklosovic
>Priority: Normal
>  Labels: ghc-lhf, lhf
>
> This commit (1) introduces additionalWritePolicy into TableParams but equals, 
> hashcode and toString are not updated and they do not contain it. Unless this 
> was done on purpose (which I doubt), I think we should add it there.
> (1) 
> https://github.com/apache/cassandra/commit/4ae229f5cd270c2b43475b3f752a7b228de260ea#diff-1179d180c9b57e04b1088a10e5e168dcb924ace760a71fb28daf654453b1



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17863) additionalWritePolicy in TableParams is not added into equals, hashcode and toString methods

2022-12-08 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova updated CASSANDRA-17863:

Labels: lhf  (was: ghc-lhf lhf)

> additionalWritePolicy in TableParams is not added into equals, hashcode and 
> toString methods
> 
>
> Key: CASSANDRA-17863
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17863
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Other
>Reporter: Stefan Miklosovic
>Priority: Normal
>  Labels: lhf
>
> This commit (1) introduces additionalWritePolicy into TableParams but equals, 
> hashcode and toString are not updated and they do not contain it. Unless this 
> was done on purpose (which I doubt), I think we should add it there.
> (1) 
> https://github.com/apache/cassandra/commit/4ae229f5cd270c2b43475b3f752a7b228de260ea#diff-1179d180c9b57e04b1088a10e5e168dcb924ace760a71fb28daf654453b1



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15046) Add a "history" command to cqlsh. Perhaps "show history"?

2022-12-08 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova updated CASSANDRA-15046:

Labels: lhf  (was: ghc-lhf lhf)

> Add a "history" command to cqlsh.  Perhaps "show history"?
> --
>
> Key: CASSANDRA-15046
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15046
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL/Interpreter
>Reporter: Wes Peters
>Assignee: Yundi Chen
>Priority: Low
>  Labels: lhf
> Fix For: 4.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I was trying to capture some create key space and create table commands from 
> a running cqlsh, and found there was no equivalent to the '\s' history 
> command in Postgres' psql shell.  It's a great tool for figuring out what you 
> were doing yesterday.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15046) Add a "history" command to cqlsh. Perhaps "show history"?

2022-12-08 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644900#comment-17644900
 ] 

Ekaterina Dimitrova commented on CASSANDRA-15046:
-

Hi [~ychen02], hope you are doing well! I wanted to touch base with you if you 
still plan to finish this work or you ran out of time? Please feel free to 
unassign the ticket and leave it to someone else to finish the work as a 
co-author if you don't have the time.

Thanks 

> Add a "history" command to cqlsh.  Perhaps "show history"?
> --
>
> Key: CASSANDRA-15046
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15046
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL/Interpreter
>Reporter: Wes Peters
>Assignee: Yundi Chen
>Priority: Low
>  Labels: ghc-lhf, lhf
> Fix For: 4.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I was trying to capture some create key space and create table commands from 
> a running cqlsh, and found there was no equivalent to the '\s' history 
> command in Postgres' psql shell.  It's a great tool for figuring out what you 
> were doing yesterday.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12525) When adding new nodes to a cluster which has authentication enabled, we end up losing cassandra user's current crendentials and they get reverted back to default c

2022-12-08 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-12525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644898#comment-17644898
 ] 

Brandon Williams commented on CASSANDRA-12525:
--

I don't think a third way precludes also adding the protection that creating 
the user in an idempotent manner grants.  We can do both.  

> When adding new nodes to a cluster which has authentication enabled, we end 
> up losing cassandra user's current crendentials and they get reverted back to 
> default cassandra/cassandra crendetials
> -
>
> Key: CASSANDRA-12525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12525
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Schema, Local/Config
>Reporter: Atin Sood
>Assignee: German Eichberger
>Priority: Normal
> Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 4.x
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Made the following observation:
> When adding new nodes to an existing C* cluster with authentication enabled 
> we end up loosing password information about `cassandra` user. 
> Initial Setup
> - Create a 5 node cluster with system_auth having RF=5 and 
> NetworkTopologyStrategy
> - Enable PasswordAuthenticator on this cluster and update the password for 
> 'cassandra' user to say 'password' via the alter query
> - Make sure you run nodetool repair on all the nodes
> Test case
> - Now go ahead and add 5 more nodes to this cluster.
> - Run nodetool repair on all the 10 nodes now
> - Decommission the original 5 nodes such that only the new 5 nodes are in the 
> cluster now
> - Run cqlsh and try to connect to this cluster using old user name and 
> password, cassandra/password
> I was unable to connect to the nodes with the original credentials and was 
> only able to connect using the default cassandra/cassandra credentials
> From the conversation over IIRC
> `beobal: sood: that definitely shouldn't happen. The new nodes should only 
> create the default superuser role if there are 0 roles currently defined 
> (including that default one)`



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12525) When adding new nodes to a cluster which has authentication enabled, we end up losing cassandra user's current crendentials and they get reverted back to default cas

2022-12-08 Thread Stefan Miklosovic (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Miklosovic updated CASSANDRA-12525:
--
Status: Patch Available  (was: Review In Progress)

> When adding new nodes to a cluster which has authentication enabled, we end 
> up losing cassandra user's current crendentials and they get reverted back to 
> default cassandra/cassandra crendetials
> -
>
> Key: CASSANDRA-12525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12525
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Schema, Local/Config
>Reporter: Atin Sood
>Assignee: German Eichberger
>Priority: Normal
> Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 4.x
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Made the following observation:
> When adding new nodes to an existing C* cluster with authentication enabled 
> we end up loosing password information about `cassandra` user. 
> Initial Setup
> - Create a 5 node cluster with system_auth having RF=5 and 
> NetworkTopologyStrategy
> - Enable PasswordAuthenticator on this cluster and update the password for 
> 'cassandra' user to say 'password' via the alter query
> - Make sure you run nodetool repair on all the nodes
> Test case
> - Now go ahead and add 5 more nodes to this cluster.
> - Run nodetool repair on all the 10 nodes now
> - Decommission the original 5 nodes such that only the new 5 nodes are in the 
> cluster now
> - Run cqlsh and try to connect to this cluster using old user name and 
> password, cassandra/password
> I was unable to connect to the nodes with the original credentials and was 
> only able to connect using the default cassandra/cassandra credentials
> From the conversation over IIRC
> `beobal: sood: that definitely shouldn't happen. The new nodes should only 
> create the default superuser role if there are 0 roles currently defined 
> (including that default one)`



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12525) When adding new nodes to a cluster which has authentication enabled, we end up losing cassandra user's current crendentials and they get reverted back to default c

2022-12-08 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-12525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644897#comment-17644897
 ] 

Stefan Miklosovic commented on CASSANDRA-12525:
---

Hi again [~xgerman42] ,

I wanted to replicate your issue locally but I was not able to do that. I used 
3 nodes per dc in two dc's (6 nodes in total).

I was also checking the logic when it comes to role creation (1) and (2). What 
it does is that it will create local cassandra role only in case it is nowhere 
to be found. First it checks with consistency level ONE, if it is not there, it 
will check with QUORUM CL. I put more debugging to that and every time it 
executed only query with ONE and it saw that cassandra role is there and it 
just moved on.

The only ever case when cassandra role was created was when I started the very 
first node in dc1. The fact that you see cassandra role creation in other node 
in the second dc is very interesting. I am not completely sure how you got that 
behavior.

There is also this (3), it will postpone the check of (1) for 10 seconds. So 
what could in theory happen is that the node did not see any peers in this 
window of 10 seconds which means that it evaluated that it is alone in the 
cluster (all conditions in (1) were evaluated as true (and false as they were 
negated)). This is the most reasonable explanation why you see this from time 
to time.

The interesting consequence of that logic in (3) is that it can not be blocking 
because that node about to start does not know in advance if it is going to be 
the only one in the cluster or not. The delay is controlled by system property 
(4)  so you could pro-actively increase this to some higher value, like 30 
seconds to minimize the chance that this might happen.

To improve this, we might make the default waiting time bigger but that is not 
solving it entirely, we are just kicking the can down the road here.

So if we can not completely prevent this, the next best option is to do what 
you suggested.

Is not there any third, better way? What about waiting on _something_ in that 
scheduled tasked, postponed to 10s by default, to wait for something which is 
not initialized fully yet? The fact that these peers are not there yet seems 
like Gossip did not have a chance to see the topology fully yet? 

(1) 
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/auth/CassandraRoleManager.java#L351-L356
(2) 
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/auth/CassandraRoleManager.java#L376-L384
(3) 
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/auth/CassandraRoleManager.java#L386-L405
(4) 
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/auth/AuthKeyspace.java#L58

> When adding new nodes to a cluster which has authentication enabled, we end 
> up losing cassandra user's current crendentials and they get reverted back to 
> default cassandra/cassandra crendetials
> -
>
> Key: CASSANDRA-12525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12525
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Schema, Local/Config
>Reporter: Atin Sood
>Assignee: German Eichberger
>Priority: Normal
> Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 4.x
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Made the following observation:
> When adding new nodes to an existing C* cluster with authentication enabled 
> we end up loosing password information about `cassandra` user. 
> Initial Setup
> - Create a 5 node cluster with system_auth having RF=5 and 
> NetworkTopologyStrategy
> - Enable PasswordAuthenticator on this cluster and update the password for 
> 'cassandra' user to say 'password' via the alter query
> - Make sure you run nodetool repair on all the nodes
> Test case
> - Now go ahead and add 5 more nodes to this cluster.
> - Run nodetool repair on all the 10 nodes now
> - Decommission the original 5 nodes such that only the new 5 nodes are in the 
> cluster now
> - Run cqlsh and try to connect to this cluster using old user name and 
> password, cassandra/password
> I was unable to connect to the nodes with the original credentials and was 
> only able to connect using the default cassandra/cassandra credentials
> From the conversation over IIRC
> `beobal: sood: that definitely shouldn't happen. The new nodes should only 
> create the default superuser role if there are 0 roles currently defined 
> (including that default one)`



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

--

[jira] [Commented] (CASSANDRA-11537) Give clear error when certain nodetool commands are issued before server is ready

2022-12-08 Thread Paulo Motta (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-11537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644889#comment-17644889
 ] 

Paulo Motta commented on CASSANDRA-11537:
-

Thanks for the feedback. I agree this should be a rare case, and you can always 
use other JMX tools to inspect the server.

If this turns out to be a problem, we could introduce a {{--skip-init-check}} 
flag to skip this safer behavior.

> Give clear error when certain nodetool commands are issued before server is 
> ready
> -
>
> Key: CASSANDRA-11537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11537
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Observability, Local/Startup and Shutdown
>Reporter: Edward Capriolo
>Priority: Low
>  Labels: lhf
>
> As an ops person upgrading and servicing Cassandra servers, I require a more 
> clear message when I issue a nodetool command that the server is not ready 
> for it so that I am not confused.
> Technical description:
> If you deploy a new binary, restart, and issue nodetool 
> scrub/compact/updatess etc you get unfriendly assertion. An exception would 
> be easier to understand. Also if a user has turned assertions off it is 
> unclear what might happen. 
> {noformat}
> EC1: Throw exception to make it clear server is still in start up process. 
> :~# nodetool upgradesstables
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:97)
> at 
> org.apache.cassandra.service.StorageService.getValidKeyspace(StorageService.java:2573)
> at 
> org.apache.cassandra.service.StorageService.getValidColumnFamilies(StorageService.java:2661)
> at 
> org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2421)
> {noformat}
> EC1: 
> Patch against 2.1 (branch)
> https://github.com/apache/cassandra/compare/trunk...edwardcapriolo:exception-on-startup?expand=1



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-18104) Major Performance degradation of Casandara 4.0.7 against Casandra 3.11.14

2022-12-08 Thread Sreedhar J (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sreedhar J updated CASSANDRA-18104:
---
Severity: Critical

> Major Performance degradation of  Casandara 4.0.7   against Casandra 3.11.14
> 
>
> Key: CASSANDRA-18104
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18104
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sreedhar J
>Priority: Urgent
> Attachments: 3.11.14.txt, 4.0.7.txt, mailboxes_snapshot.zip
>
>
> Our application uses Casandra 3.11.x  and has lot of security vulnerabilities 
>  which are addressed in 4.0.x.  So  we have upgraded the Casandra to 4.0.7 
> and  our performance tests have shown aorund 20% degradation  compare to 
> 3.11.x
> We are now able to reproduce the same performance degradation using the 
> standalone queries.   Here are the steps.
> 1. Expand Cassandra 3.11.14 tarball and 4.0.7 tarball to different folders
> 2. Import the attached data from the snapshot (mailboxes_snapshot.zip) into 
> each Cassandra instance, see schema.cql for CQL for creating the required 
> table and indexes before import
> 3. With CQLSH run the following query several times with TRACING ON and 
> PAGING OFF against both versions of Cassandra:  select * from 
> mailbox.mailboxes where mbx_id= 6c57da2e-7ddd-4984-be62-105415e6b48a;
> 4. Compare results
> IWe ran the target query 30 times. Here's the average times to run the query:
> 3.11.14 - 19400.77
> 4.0.7 - 34906.03



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-18104) Major Performance degradation of Casandara 4.0.7 against Casandra 3.11.14

2022-12-08 Thread Sreedhar J (Jira)
Sreedhar J created CASSANDRA-18104:
--

 Summary: Major Performance degradation of  Casandara 4.0.7   
against Casandra 3.11.14
 Key: CASSANDRA-18104
 URL: https://issues.apache.org/jira/browse/CASSANDRA-18104
 Project: Cassandra
  Issue Type: Bug
Reporter: Sreedhar J
 Attachments: 3.11.14.txt, 4.0.7.txt, mailboxes_snapshot.zip

Our application uses Casandra 3.11.x  and has lot of security vulnerabilities  
which are addressed in 4.0.x.  So  we have upgraded the Casandra to 4.0.7 and  
our performance tests have shown aorund 20% degradation  compare to 3.11.x

We are now able to reproduce the same performance degradation using the 
standalone queries.   Here are the steps.

1. Expand Cassandra 3.11.14 tarball and 4.0.7 tarball to different folders
2. Import the attached data from the snapshot (mailboxes_snapshot.zip) into 
each Cassandra instance, see schema.cql for CQL for creating the required table 
and indexes before import
3. With CQLSH run the following query several times with TRACING ON and PAGING 
OFF against both versions of Cassandra:  select * from mailbox.mailboxes where 
mbx_id= 6c57da2e-7ddd-4984-be62-105415e6b48a;
4. Compare results

IWe ran the target query 30 times. Here's the average times to run the query:
3.11.14 - 19400.77
4.0.7 - 34906.03



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-11721) Have a per operation truncate ddl "no snapshot" option

2022-12-08 Thread Jeremy Hanna (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-11721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644885#comment-17644885
 ] 

Jeremy Hanna edited comment on CASSANDRA-11721 at 12/8/22 4:06 PM:
---

I think CASSANDRA-10383 solves the production use cases for this and I'm very 
happy that it got implemented there.  There are cases in test and dev 
environments where I could still see a per operation setting being useful, but 
the majority of the use cases are covered by a table level setting.  I'm happy 
to "won't do" this one as updating CQL is a pain for just those use cases.


was (Author: jeromatron):
I think CASSANDRA-10383 solves the production use cases for this and I'm very 
happy that it got implemented there.  There are cases in test and dev 
environments where I could still see a per operation setting being useful, but 
the majority of the use cases are covered by a table level setting.  I'm happy 
to "won't fix" this one as updating CQL is a pain for just those use cases.

> Have a per operation truncate ddl "no snapshot" option
> --
>
> Key: CASSANDRA-11721
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11721
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/CQL, Local/Snapshots
>Reporter: Jeremy Hanna
>Priority: Low
>  Labels: AdventCalendar2021
>
> Right now with truncate, it will always create a snapshot.  That is the right 
> thing to do most of the time.  'auto_snapshot' exists as an option to disable 
> that but it is server wide and requires a restart to change.  There are data 
> models, however, that require rotating through a handful of tables and 
> periodically truncating them.  Currently you either have to operate with no 
> safety net (some actually do this) or manually clear those snapshots out 
> periodically.  Both are less than optimal.
> In HDFS, you generally delete something where it goes to the trash.  If you 
> don't want that safety net, you can do something like 'rm -rf -skiptrash 
> /jeremy/stuff' in one command.
> It would be nice to have something in the truncate ddl to skip the snapshot 
> on a per operation basis.  Perhaps 'TRUNCATE solarsystem.earth NO SNAPSHOT'.
> This might also be useful in those situations where you're just playing with 
> data and you don't want something to take a snapshot in a development system. 
>  If that's the case, this would also be useful for the DROP operation, but 
> that convenience is not the main reason for this option.
> +Additional information for newcomers:+
> This test is a bit more complex that normal LHF tickets but is still 
> reasonably easy.
> The idea is to support disabling snapshots when performing a Truncate as 
> follow:
> {code}TRUNCATE x WITH OPTIONS = { 'snapshot' : false }{code}
> In order to implement that feature several changes are required:
> * A new Class {{TruncateAttributes}} inheriting from {{PropertyDefinitions}} 
> must be create in a similar way to {{KeyspaceAttributes}} or 
> {{TableAttributes}}
> * This class should be passed to the {{TruncateStatement}} constructor and 
> stored as a field
> * The ANTLR parser logic should be change to retrieve the options and passe 
> them to the constructor (see {{createKeyspaceStatement}} for an example)
> * The {{TruncateStatement}} will then need to be modified to take into 
> account the new option. Locally it will neeed to call 
> {{ColumnFamilyStore#truncateBlockingWithoutSnapshot}} if no snapshot should 
> be done instead of  {{ColumnFamilyStore#truncateBlocking}}. For non local 
> call it will need to pass a new parameter to 
> {{StorageProxy#truncateBloking}}. That parameter will then need to be passed 
> to the other nodes through the {{TruncateRequest}}.
> * As a new field need to be added to {{TruncateRequest}} this field will need 
> to be serialized and deserialized and a new {{MessagingService.Version}} will 
> need to be created and set as the current version the new version should be 
> 50 (and yes it means that the next release will be a major one 5.0)
> * In {{TruncateVerbHandler}} the new field should be used to determine if 
> {{ColumnFamilyStore#truncateBlockingWithoutSnapshot}} or 
> {{ColumnFamilyStore#truncateBlocking}} should be called.  
> * An in-jvm test should be added in 
> {{test/distributed/org/apache/cassandra/distributed/test}}  to test that 
> truncate does not generate snapshots when the new option is specified. 
> Do not hesitate to ping the mentor for more information.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-11721) Have a per operation truncate ddl "no snapshot" option

2022-12-08 Thread Jeremy Hanna (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-11721:
-
Resolution: Won't Do
Status: Resolved  (was: Open)

As discussed previously,  CASSANDRA-10383 solves the majority of what this 
covers.

> Have a per operation truncate ddl "no snapshot" option
> --
>
> Key: CASSANDRA-11721
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11721
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/CQL, Local/Snapshots
>Reporter: Jeremy Hanna
>Priority: Low
>  Labels: AdventCalendar2021
>
> Right now with truncate, it will always create a snapshot.  That is the right 
> thing to do most of the time.  'auto_snapshot' exists as an option to disable 
> that but it is server wide and requires a restart to change.  There are data 
> models, however, that require rotating through a handful of tables and 
> periodically truncating them.  Currently you either have to operate with no 
> safety net (some actually do this) or manually clear those snapshots out 
> periodically.  Both are less than optimal.
> In HDFS, you generally delete something where it goes to the trash.  If you 
> don't want that safety net, you can do something like 'rm -rf -skiptrash 
> /jeremy/stuff' in one command.
> It would be nice to have something in the truncate ddl to skip the snapshot 
> on a per operation basis.  Perhaps 'TRUNCATE solarsystem.earth NO SNAPSHOT'.
> This might also be useful in those situations where you're just playing with 
> data and you don't want something to take a snapshot in a development system. 
>  If that's the case, this would also be useful for the DROP operation, but 
> that convenience is not the main reason for this option.
> +Additional information for newcomers:+
> This test is a bit more complex that normal LHF tickets but is still 
> reasonably easy.
> The idea is to support disabling snapshots when performing a Truncate as 
> follow:
> {code}TRUNCATE x WITH OPTIONS = { 'snapshot' : false }{code}
> In order to implement that feature several changes are required:
> * A new Class {{TruncateAttributes}} inheriting from {{PropertyDefinitions}} 
> must be create in a similar way to {{KeyspaceAttributes}} or 
> {{TableAttributes}}
> * This class should be passed to the {{TruncateStatement}} constructor and 
> stored as a field
> * The ANTLR parser logic should be change to retrieve the options and passe 
> them to the constructor (see {{createKeyspaceStatement}} for an example)
> * The {{TruncateStatement}} will then need to be modified to take into 
> account the new option. Locally it will neeed to call 
> {{ColumnFamilyStore#truncateBlockingWithoutSnapshot}} if no snapshot should 
> be done instead of  {{ColumnFamilyStore#truncateBlocking}}. For non local 
> call it will need to pass a new parameter to 
> {{StorageProxy#truncateBloking}}. That parameter will then need to be passed 
> to the other nodes through the {{TruncateRequest}}.
> * As a new field need to be added to {{TruncateRequest}} this field will need 
> to be serialized and deserialized and a new {{MessagingService.Version}} will 
> need to be created and set as the current version the new version should be 
> 50 (and yes it means that the next release will be a major one 5.0)
> * In {{TruncateVerbHandler}} the new field should be used to determine if 
> {{ColumnFamilyStore#truncateBlockingWithoutSnapshot}} or 
> {{ColumnFamilyStore#truncateBlocking}} should be called.  
> * An in-jvm test should be added in 
> {{test/distributed/org/apache/cassandra/distributed/test}}  to test that 
> truncate does not generate snapshots when the new option is specified. 
> Do not hesitate to ping the mentor for more information.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-11721) Have a per operation truncate ddl "no snapshot" option

2022-12-08 Thread Jeremy Hanna (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-11721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644885#comment-17644885
 ] 

Jeremy Hanna commented on CASSANDRA-11721:
--

I think CASSANDRA-10383 solves the production use cases for this and I'm very 
happy that it got implemented there.  There are cases in test and dev 
environments where I could still see a per operation setting being useful, but 
the majority of the use cases are covered by a table level setting.  I'm happy 
to "won't fix" this one as updating CQL is a pain for just those use cases.

> Have a per operation truncate ddl "no snapshot" option
> --
>
> Key: CASSANDRA-11721
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11721
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/CQL, Local/Snapshots
>Reporter: Jeremy Hanna
>Priority: Low
>  Labels: AdventCalendar2021
>
> Right now with truncate, it will always create a snapshot.  That is the right 
> thing to do most of the time.  'auto_snapshot' exists as an option to disable 
> that but it is server wide and requires a restart to change.  There are data 
> models, however, that require rotating through a handful of tables and 
> periodically truncating them.  Currently you either have to operate with no 
> safety net (some actually do this) or manually clear those snapshots out 
> periodically.  Both are less than optimal.
> In HDFS, you generally delete something where it goes to the trash.  If you 
> don't want that safety net, you can do something like 'rm -rf -skiptrash 
> /jeremy/stuff' in one command.
> It would be nice to have something in the truncate ddl to skip the snapshot 
> on a per operation basis.  Perhaps 'TRUNCATE solarsystem.earth NO SNAPSHOT'.
> This might also be useful in those situations where you're just playing with 
> data and you don't want something to take a snapshot in a development system. 
>  If that's the case, this would also be useful for the DROP operation, but 
> that convenience is not the main reason for this option.
> +Additional information for newcomers:+
> This test is a bit more complex that normal LHF tickets but is still 
> reasonably easy.
> The idea is to support disabling snapshots when performing a Truncate as 
> follow:
> {code}TRUNCATE x WITH OPTIONS = { 'snapshot' : false }{code}
> In order to implement that feature several changes are required:
> * A new Class {{TruncateAttributes}} inheriting from {{PropertyDefinitions}} 
> must be create in a similar way to {{KeyspaceAttributes}} or 
> {{TableAttributes}}
> * This class should be passed to the {{TruncateStatement}} constructor and 
> stored as a field
> * The ANTLR parser logic should be change to retrieve the options and passe 
> them to the constructor (see {{createKeyspaceStatement}} for an example)
> * The {{TruncateStatement}} will then need to be modified to take into 
> account the new option. Locally it will neeed to call 
> {{ColumnFamilyStore#truncateBlockingWithoutSnapshot}} if no snapshot should 
> be done instead of  {{ColumnFamilyStore#truncateBlocking}}. For non local 
> call it will need to pass a new parameter to 
> {{StorageProxy#truncateBloking}}. That parameter will then need to be passed 
> to the other nodes through the {{TruncateRequest}}.
> * As a new field need to be added to {{TruncateRequest}} this field will need 
> to be serialized and deserialized and a new {{MessagingService.Version}} will 
> need to be created and set as the current version the new version should be 
> 50 (and yes it means that the next release will be a major one 5.0)
> * In {{TruncateVerbHandler}} the new field should be used to determine if 
> {{ColumnFamilyStore#truncateBlockingWithoutSnapshot}} or 
> {{ColumnFamilyStore#truncateBlocking}} should be called.  
> * An in-jvm test should be added in 
> {{test/distributed/org/apache/cassandra/distributed/test}}  to test that 
> truncate does not generate snapshots when the new option is specified. 
> Do not hesitate to ping the mentor for more information.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-17797) All system properties and environment variables should be accessed via the new CassandraRelevantProperties and CassandraRelevantEnv classes

2022-12-08 Thread Maxim Muzafarov (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maxim Muzafarov reassigned CASSANDRA-17797:
---

Assignee: Maxim Muzafarov

> All system properties and environment variables should be accessed via the 
> new CassandraRelevantProperties and CassandraRelevantEnv classes
> ---
>
> Key: CASSANDRA-17797
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17797
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Ekaterina Dimitrova
>Assignee: Maxim Muzafarov
>Priority: Low
> Fix For: 4.x
>
>
> Follow up ticket for CASSANDRA-15876 - 
> "Always access system properties and environment variables via the new 
> CassandraRelevantProperties and CassandraRelevantEnv classes"
> As part of that ticket we moved to the two new classes only 
> properties/variables that were currently listed in System Properties Virtual 
> Table.
> We have to move to those classes the rest of the properties around the code 
> and start using those classes to access all of them. 
> +Additional information for newcomers:+
> You might want to start by getting acquainted with 
> CassandraRelevantProperties and CassandraRelevantEnv classes. Also, you might 
> want to check what changes were done and how the first batch was transferred 
> to this new framework as part of  
> [CASSANDRA-15876|https://github.com/apache/cassandra/commit/7694c1d191531ac152db55e83bc0db6864a5441e]
> We are interested into the properties accessed currently through 
> getProperties around the code.
> As part of CASSANDRA-15876 relevant unit tests were added 
> (CassandraRelevantPropertiesTest). To verify the new patch we need to ensure 
> that all tests Cassandra pass and also to think about potential update of the 
> mentioned test class.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-11537) Give clear error when certain nodetool commands are issued before server is ready

2022-12-08 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-11537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644872#comment-17644872
 ] 

Brandon Williams commented on CASSANDRA-11537:
--

I think the only problem might be if the server wouldn't start due to some kind 
of problem and you couldn't introspect anything to find out what's going on, 
but that should be a rare case and generally I don't think you'd find the 
solution over JMX anyway.

> Give clear error when certain nodetool commands are issued before server is 
> ready
> -
>
> Key: CASSANDRA-11537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11537
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Observability, Local/Startup and Shutdown
>Reporter: Edward Capriolo
>Priority: Low
>  Labels: lhf
>
> As an ops person upgrading and servicing Cassandra servers, I require a more 
> clear message when I issue a nodetool command that the server is not ready 
> for it so that I am not confused.
> Technical description:
> If you deploy a new binary, restart, and issue nodetool 
> scrub/compact/updatess etc you get unfriendly assertion. An exception would 
> be easier to understand. Also if a user has turned assertions off it is 
> unclear what might happen. 
> {noformat}
> EC1: Throw exception to make it clear server is still in start up process. 
> :~# nodetool upgradesstables
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:97)
> at 
> org.apache.cassandra.service.StorageService.getValidKeyspace(StorageService.java:2573)
> at 
> org.apache.cassandra.service.StorageService.getValidColumnFamilies(StorageService.java:2661)
> at 
> org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2421)
> {noformat}
> EC1: 
> Patch against 2.1 (branch)
> https://github.com/apache/cassandra/compare/trunk...edwardcapriolo:exception-on-startup?expand=1



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17797) All system properties and environment variables should be accessed via the new CassandraRelevantProperties and CassandraRelevantEnv classes

2022-12-08 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644871#comment-17644871
 ] 

Ekaterina Dimitrova commented on CASSANDRA-17797:
-

Hi [~mmuzaf],

Great point, GHC is already in the past so I removed the label. Please feel 
free to assign the ticket. I agree it is important and I would really be happy 
and appreciate you taking it.

Now as you brought it, I will also check the other tickets marked for Grace 
Hopper to make some adjustments. Thank you! 

> All system properties and environment variables should be accessed via the 
> new CassandraRelevantProperties and CassandraRelevantEnv classes
> ---
>
> Key: CASSANDRA-17797
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17797
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Ekaterina Dimitrova
>Priority: Low
> Fix For: 4.x
>
>
> Follow up ticket for CASSANDRA-15876 - 
> "Always access system properties and environment variables via the new 
> CassandraRelevantProperties and CassandraRelevantEnv classes"
> As part of that ticket we moved to the two new classes only 
> properties/variables that were currently listed in System Properties Virtual 
> Table.
> We have to move to those classes the rest of the properties around the code 
> and start using those classes to access all of them. 
> +Additional information for newcomers:+
> You might want to start by getting acquainted with 
> CassandraRelevantProperties and CassandraRelevantEnv classes. Also, you might 
> want to check what changes were done and how the first batch was transferred 
> to this new framework as part of  
> [CASSANDRA-15876|https://github.com/apache/cassandra/commit/7694c1d191531ac152db55e83bc0db6864a5441e]
> We are interested into the properties accessed currently through 
> getProperties around the code.
> As part of CASSANDRA-15876 relevant unit tests were added 
> (CassandraRelevantPropertiesTest). To verify the new patch we need to ensure 
> that all tests Cassandra pass and also to think about potential update of the 
> mentioned test class.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17797) All system properties and environment variables should be accessed via the new CassandraRelevantProperties and CassandraRelevantEnv classes

2022-12-08 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova updated CASSANDRA-17797:

Labels:   (was: ghc-lhf)

> All system properties and environment variables should be accessed via the 
> new CassandraRelevantProperties and CassandraRelevantEnv classes
> ---
>
> Key: CASSANDRA-17797
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17797
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Ekaterina Dimitrova
>Priority: Low
> Fix For: 4.x
>
>
> Follow up ticket for CASSANDRA-15876 - 
> "Always access system properties and environment variables via the new 
> CassandraRelevantProperties and CassandraRelevantEnv classes"
> As part of that ticket we moved to the two new classes only 
> properties/variables that were currently listed in System Properties Virtual 
> Table.
> We have to move to those classes the rest of the properties around the code 
> and start using those classes to access all of them. 
> +Additional information for newcomers:+
> You might want to start by getting acquainted with 
> CassandraRelevantProperties and CassandraRelevantEnv classes. Also, you might 
> want to check what changes were done and how the first batch was transferred 
> to this new framework as part of  
> [CASSANDRA-15876|https://github.com/apache/cassandra/commit/7694c1d191531ac152db55e83bc0db6864a5441e]
> We are interested into the properties accessed currently through 
> getProperties around the code.
> As part of CASSANDRA-15876 relevant unit tests were added 
> (CassandraRelevantPropertiesTest). To verify the new patch we need to ensure 
> that all tests Cassandra pass and also to think about potential update of the 
> mentioned test class.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-11537) Give clear error when certain nodetool commands are issued before server is ready

2022-12-08 Thread Paulo Motta (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-11537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644827#comment-17644827
 ] 

Paulo Motta commented on CASSANDRA-11537:
-

I might take a stab at this prehistoric ticket. I think the cleanest way to do 
that would be to check if {{StorageServiceMBean.isInitialized()}} at the end of 
[NodeProbe.connect()|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/tools/NodeProbe.java#L292]
 and throw a friendly error if not.

However, I'm not sure this is the right thing to do, as other Mbeans (ie. 
GCInspectorMXBean) would be inacessible before {{StorageServiceMBean}} is 
ready. Do you know if this would be a problem [~brandon.williams]? 

Otherwise we need to check if {{StorageServiceMBean.isInitialized()}} per 
{{NodeProbe}} method, and exclude any method which should be accessible before 
storage service is ready, which would make this a bit uglier and more fragile.

> Give clear error when certain nodetool commands are issued before server is 
> ready
> -
>
> Key: CASSANDRA-11537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11537
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Observability, Local/Startup and Shutdown
>Reporter: Edward Capriolo
>Priority: Low
>  Labels: lhf
>
> As an ops person upgrading and servicing Cassandra servers, I require a more 
> clear message when I issue a nodetool command that the server is not ready 
> for it so that I am not confused.
> Technical description:
> If you deploy a new binary, restart, and issue nodetool 
> scrub/compact/updatess etc you get unfriendly assertion. An exception would 
> be easier to understand. Also if a user has turned assertions off it is 
> unclear what might happen. 
> {noformat}
> EC1: Throw exception to make it clear server is still in start up process. 
> :~# nodetool upgradesstables
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:97)
> at 
> org.apache.cassandra.service.StorageService.getValidKeyspace(StorageService.java:2573)
> at 
> org.apache.cassandra.service.StorageService.getValidColumnFamilies(StorageService.java:2661)
> at 
> org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2421)
> {noformat}
> EC1: 
> Patch against 2.1 (branch)
> https://github.com/apache/cassandra/compare/trunk...edwardcapriolo:exception-on-startup?expand=1



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra-website] branch asf-staging updated (17cc285d -> ae7239a6)

2022-12-08 Thread git-site-role
This is an automated email from the ASF dual-hosted git repository.

git-site-role pushed a change to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/cassandra-website.git


 discard 17cc285d generate docs for 091d00dd
 new ae7239a6 generate docs for 091d00dd

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (17cc285d)
\
 N -- N -- N   refs/heads/asf-staging (ae7239a6)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 content/search-index.js |   2 +-
 site-ui/build/ui-bundle.zip | Bin 4970898 -> 4970898 bytes
 2 files changed, 1 insertion(+), 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17874) Only reload compaction strategies if disk boundaries change

2022-12-08 Thread Aleksey Yeschenko (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-17874:
--
Status: Ready to Commit  (was: Review In Progress)

Suggested some cleanup to pre-existing code as part of the patch, since this is 
going to trunk, to make {{CSM#reload()}} and its callsites easier to reason 
about, ultimately splitting it into several. Patch LGTM with or without 
accepting cleanup suggestions.

> Only reload compaction strategies if disk boundaries change
> ---
>
> Key: CASSANDRA-17874
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17874
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Compaction
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Low
> Fix For: 4.x
>
>
> We currently reload compaction strategies every time ringVersion changes - we 
> should change this to only reload if the ring version change actually changes 
> the disk boundaries.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18102) Add a virtual table to list snapshots

2022-12-08 Thread maxwellguo (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644815#comment-17644815
 ] 

maxwellguo commented on CASSANDRA-18102:


OK I have not start yet ,it ok to reassign to your team member .

> Add a virtual table to list snapshots
> -
>
> Key: CASSANDRA-18102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18102
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Virtual Tables, Local/Snapshots
>Reporter: Paulo Motta
>Assignee: maxwellguo
>Priority: Normal
>
> It should be possible to query a node's snapshots via virtual tables.
> The table should expose the same fields/columns as the 
> [TableSnapshot|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/snapshot/TableSnapshot.java]
>  class.
> Something along these lines:
> {noformat}
> cqlsh> SELECT * FROM system_views.snapshots;
>  
> tag | keyspace_name | table_name |  table_id | is_ephemeral | created_at | 
> expires_at | directories
> +---++---+--+---++
> 1670460346841 | system | compaction_info | 
> 123e4567-e89b-12d3-a456-426614174000 | false | 2022-12-08T00:45:47.108Z | 
> null | 
> {'/var/lib/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/snapshots/1670460346841'}
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18102) Add a virtual table to list snapshots

2022-12-08 Thread Paulo Motta (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644812#comment-17644812
 ] 

Paulo Motta commented on CASSANDRA-18102:
-

[~maxwellguo] Thanks for your interest on this! Someone on my team was planning 
to work on this, do you mind if I reassign if you haven't started work on it 
yet?

[~smiklosovic] agreed, I think it's important to keep snapshots in memory to 
allow this to be more performant. Are you planning to do this in 
CASSANDRA-13338 ?

> Add a virtual table to list snapshots
> -
>
> Key: CASSANDRA-18102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18102
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Virtual Tables, Local/Snapshots
>Reporter: Paulo Motta
>Assignee: maxwellguo
>Priority: Normal
>
> It should be possible to query a node's snapshots via virtual tables.
> The table should expose the same fields/columns as the 
> [TableSnapshot|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/snapshot/TableSnapshot.java]
>  class.
> Something along these lines:
> {noformat}
> cqlsh> SELECT * FROM system_views.snapshots;
>  
> tag | keyspace_name | table_name |  table_id | is_ephemeral | created_at | 
> expires_at | directories
> +---++---+--+---++
> 1670460346841 | system | compaction_info | 
> 123e4567-e89b-12d3-a456-426614174000 | false | 2022-12-08T00:45:47.108Z | 
> null | 
> {'/var/lib/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/snapshots/1670460346841'}
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17797) All system properties and environment variables should be accessed via the new CassandraRelevantProperties and CassandraRelevantEnv classes

2022-12-08 Thread Maxim Muzafarov (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644809#comment-17644809
 ] 

Maxim Muzafarov commented on CASSANDRA-17797:
-

[~e.dimitrova],

Hello, is this issue still pinned to the Grace Hopper Celebration Open Source 
day? 
It seems to me it is important for the project, so I'd like to take care of it.

> All system properties and environment variables should be accessed via the 
> new CassandraRelevantProperties and CassandraRelevantEnv classes
> ---
>
> Key: CASSANDRA-17797
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17797
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Ekaterina Dimitrova
>Priority: Low
>  Labels: ghc-lhf
> Fix For: 4.x
>
>
> Follow-up ticket for CASSANDRA-15876 - 
> "Always access system properties and environment variables via the new 
> CassandraRelevantProperties and CassandraRelevantEnv classes"
> As part of that ticket we moved to the two new classes only the 
> properties/variables that were listed in the System Properties Virtual 
> Table at the time.
> We still have to move the rest of the properties around the code to those 
> classes and start using them for all accesses. 
> +Additional information for newcomers:+
> You might want to start by getting acquainted with the 
> CassandraRelevantProperties and CassandraRelevantEnv classes. You might also 
> want to check what changes were done and how the first batch was transferred 
> to this new framework as part of  
> [CASSANDRA-15876|https://github.com/apache/cassandra/commit/7694c1d191531ac152db55e83bc0db6864a5441e]
> We are interested in the properties currently accessed through 
> getProperties calls around the code.
> As part of CASSANDRA-15876, relevant unit tests were added 
> (CassandraRelevantPropertiesTest). To verify the new patch we need to ensure 
> that all Cassandra tests pass, and also consider updating the mentioned test 
> class.
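
For orientation, the pattern these classes implement is just a typed enum 
gateway in front of System.getProperty, so every property has one named, 
auditable access point. A self-contained illustration of that shape (not 
Cassandra's actual class; names here are illustrative):

{code:java}
public enum RelevantPropertiesSketch
{
    // one constant per system property, carrying its key and default;
    // cassandra.ring_delay_ms is a real property, the default is illustrative
    RING_DELAY_MS("cassandra.ring_delay_ms", "30000");

    private final String key;
    private final String defaultValue;

    RelevantPropertiesSketch(String key, String defaultValue)
    {
        this.key = key;
        this.defaultValue = defaultValue;
    }

    public String getString()
    {
        // the single access point replacing scattered getProperty calls
        return System.getProperty(key, defaultValue);
    }

    public int getInt()
    {
        return Integer.parseInt(getString());
    }
}
{code}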



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18075) Upgraded (C* 4.0.4) node stops communicating with older version (3.11.4) nodes during upgrade

2022-12-08 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644779#comment-17644779
 ] 

Brandon Williams commented on CASSANDRA-18075:
--

bq. Did you check the latest logs that I uploaded?

I understood that upgrade to be successful, so there is no reason to check them.

bq. Will you be able to try at your end once where IP is changing during the 
upgrade process? 

I'm not interested in exploring a high-effort path that ignores the crucial 
piece of evidence we have: the connection refused to the storage port on the 
3.11 side.  That is the smoking gun, and it must be explained before we go any 
further.  We should also remember that because the refusal is on the 3.11 
side, there is nothing we can change on the 4.0 side to fix it.
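
(For anyone retracing this: the refusal can be confirmed independently of 
Cassandra with a plain TCP probe from the 4.0 host; the address and port below 
are illustrative, 7001 being the usual SSL storage port.)

{code:java}
import java.net.InetSocketAddress;
import java.net.Socket;

public class StoragePortProbe
{
    public static void main(String[] args) throws Exception
    {
        // Illustrative target: the 3.11 node's SSL storage port.
        // A ConnectException("Connection refused") here matches the
        // evidence described above; a successful connect rules it out.
        try (Socket socket = new Socket())
        {
            socket.connect(new InetSocketAddress("10.109.6.153", 7001), 5000);
            System.out.println("TCP connect succeeded");
        }
    }
}
{code}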

> Upgraded (C* 4.0.4) node stops communicating with older version (3.11.4) 
> nodes during upgrade
> -
>
> Key: CASSANDRA-18075
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18075
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Encryption
>Reporter: Alaykumar Barochia
>Priority: Normal
> Attachments: In-place-upgrade.zip, cassandra-env.sh_3114, 
> cassandra-env.sh_404, cassandra.yaml_10.110.44.207_explicitely_set_port, 
> cassandra.yaml_10.110.49.242_explicitely_set_port, cassandra.yaml_3114, 
> cassandra.yaml_404, system.log_10.110.44.207, 
> system.log_10.110.44.207_after_explicitely_set_port, 
> system.log_10.110.49.242_after_explicitely_set_port
>
>
> We are testing upgrade from Cassandra 3.11.4 to 4.0.4 on our test cluster 
> which is SSL enabled and facing an issue.
> Our cluster size is 3x3. 
> {noformat}
> Datacenter: abssl_dev_tap_ttc
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens   Owns (effective)  Host ID  
>  Rack
> UN  10.109.6.153   94.27 KiB  16   100.0%
> 130e59d2-2a9a-4039-a42f-deb20afcf288  rack1
> UN  10.109.45.8104.43 KiB  16   100.0%
> 35274a2c-f915-4308-9981-d207a4e2108f  rack1
> UN  10.109.66.149  104.23 KiB  16   100.0%
> ea0151bc-fb6c-425d-af42-75c10e52f941  rack1
> Datacenter: abssl_dev_tap_tte
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens   Owns (effective)  Host ID  
>  Rack
> UN  10.110.4.110   104.44 KiB  16   100.0%
> fd4a9fa8-f2a9-494c-afb8-7cb8a08c7554  rack1
> UN  10.110.44.220  99.33 KiB  16   100.0%
> f1dc35c0-a1c2-45fe-9f65-b1cc3d7f6947  rack1
> UN  10.110.49.242  65.57 KiB  16   100.0%
> 72bc4ae5-876d-4d0a-91ac-6cf8b531b4dd  rack1
> dbaasprod-ca-abssl-de-393671-v001-yqlvf:~# nodetool describecluster
> Cluster Information:
>   Name: abssl_dev
>   Snitch: org.apache.cassandra.locator.GossipingPropertyFileSnitch
>   DynamicEndPointSnitch: enabled
>   Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
>   Schema versions:
>   f68fbc0c-c9d6-3709-8075-c5a0d74192f2: [10.110.4.110, 
> 10.110.44.220, 10.109.6.153, 10.109.45.8, 10.109.66.149, 10.110.49.242]
> {noformat}
> During the upgrade, we re-run the pipeline in which we get new server (with 
> different IP) that will have Cassandra 4.0.4 binary. 
> Disk '/data' (contains data files, commitlogs etc.) will get detached from 
> the old server and get attached to the new server.
> This process works fine on non-SSL cluster but when we perform this on SSL 
> cluster, new node stops communicating with the rest of the nodes.
> In this example, after upgrade, node 10.110.4.110 got replaced with new 
> server with new IP 10.110.44.207.
> *Output from 3.11.4 node:*
> {noformat}
> dbaasprod-ca-abssl-dc-437097-v001-7mump:~# hostname -i
> 10.109.6.153
> dbaasprod-ca-abssl-dc-437097-v001-7mump:~# java -version
> openjdk version "1.8.0_322"
> OpenJDK Runtime Environment (Temurin)(build 1.8.0_322-b06)
> OpenJDK 64-Bit Server VM (Temurin)(build 25.322-b06, mixed mode)
> dbaasprod-ca-abssl-dc-437097-v001-7mump:~# nodetool status
> Datacenter: abssl_dev_tap_ttc
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens   Owns (effective)  Host ID  
>  Rack
> UN  10.109.6.153   135.24 KiB  16   100.0%
> 130e59d2-2a9a-4039-a42f-deb20afcf288  rack1
> UN  10.109.45.8135.35 KiB  16   100.0%
> 35274a2c-f915-4308-9981-d207a4e2108f  rack1
> UN  10.109.66.149  135.25 KiB  16   100.0%
> ea0151bc-fb6c-425d-af42-75c10e52f941  rack1
> D

[jira] [Commented] (CASSANDRA-16555) Add out-of-the-box snitch for Ec2 IMDSv2

2022-12-08 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644776#comment-17644776
 ] 

Brandon Williams commented on CASSANDRA-16555:
--

That is possible, but we will need to find a reviewer.

> Add out-of-the-box snitch for Ec2 IMDSv2
> 
>
> Key: CASSANDRA-16555
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16555
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Consistency/Coordination
>Reporter: Paul Rütter (BlueConic)
>Assignee: fulco taen
>Priority: Normal
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> In order to patch a vulnerability, Amazon came up with a new version of 
> their metadata service. It is no longer unrestricted: accessing the metadata 
> service now requires a token passed in a header.
> See 
> [https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html]
>  for more information.
> Cassandra currently doesn't offer an out-of-the-box snitch class to support 
> this.
> See 
> [https://cassandra.apache.org/doc/latest/operating/snitch.html#snitch-classes]
> This issue asks to add support for this as a separate snitch class.
> We'll probably do a PR for this, as we are in the process of developing one.
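
For context, the token handshake such a snitch has to perform is small. A 
minimal sketch using only JDK classes, with the endpoint and header names 
taken from the AWS documentation linked above (this is not the proposed 
patch):

{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Imdsv2Sketch
{
    public static void main(String[] args) throws Exception
    {
        HttpClient client = HttpClient.newHttpClient();

        // Step 1: PUT to the token endpoint to obtain a session token.
        HttpRequest tokenRequest = HttpRequest.newBuilder(
                URI.create("http://169.254.169.254/latest/api/token"))
            .method("PUT", HttpRequest.BodyPublishers.noBody())
            .header("X-aws-ec2-metadata-token-ttl-seconds", "21600")
            .build();
        String token = client.send(tokenRequest, HttpResponse.BodyHandlers.ofString()).body();

        // Step 2: pass the token when reading metadata, e.g. the
        // availability zone an EC2 snitch derives its rack and DC from.
        HttpRequest azRequest = HttpRequest.newBuilder(
                URI.create("http://169.254.169.254/latest/meta-data/placement/availability-zone"))
            .header("X-aws-ec2-metadata-token", token)
            .build();
        System.out.println(client.send(azRequest, HttpResponse.BodyHandlers.ofString()).body());
    }
}
{code}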



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-16555) Add out-of-the-box snitch for Ec2 IMDSv2

2022-12-08 Thread Brandon Williams (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-16555:
-
Test and Documentation Plan: New snitch should be added to the docs. Run 
CI.  (was: New snitch should be added to the docs.)
 Status: Patch Available  (was: Open)

> Add out-of-the-box snitch for Ec2 IMDSv2
> 
>
> Key: CASSANDRA-16555
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16555
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Consistency/Coordination
>Reporter: Paul Rütter (BlueConic)
>Assignee: fulco taen
>Priority: Normal
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> In order to patch a vulnerability, Amazon came up with a new version of 
> their metadata service. It is no longer unrestricted: accessing the metadata 
> service now requires a token passed in a header.
> See 
> [https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html]
>  for more information.
> Cassandra currently doesn't offer an out-of-the-box snitch class to support 
> this.
> See 
> [https://cassandra.apache.org/doc/latest/operating/snitch.html#snitch-classes]
> This issue asks to add support for this as a separate snitch class.
> We'll probably do a PR for this, as we are in the process of developing one.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-18103) Allow different compaction strategies per-DC

2022-12-08 Thread Johnny Miller (Jira)
Johnny Miller created CASSANDRA-18103:
-

 Summary: Allow different compaction strategies per-DC
 Key: CASSANDRA-18103
 URL: https://issues.apache.org/jira/browse/CASSANDRA-18103
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller


I have a requirement to deploy an additional DC. The cluster is split between 
multiple DCs: one on-prem and one in a cloud.

Several tables use LCS; they perform fine on bare metal but not so well on the 
infrastructure allocated for the cloud DC.

The cloud deployment is intended to run offline analytical batch-type workloads 
where read response times are not critical enough to necessitate LCS. The cost 
of provisioning storage appropriate for LCS is high and unnecessary for the 
system requirements or budget.

The JMX call to change compaction locally (useful for testing or migrating 
compaction) unfortunately does not survive restarts, schema changes, etc.; a 
sketch of that call follows below.

It would be very helpful to be able to indicate on the table which compaction 
strategy to use per DC, or to make the JMX change durable. 
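
For reference, a sketch of that JMX call using only JDK classes. The MBean 
name pattern and the setCompactionParametersJson operation are assumptions 
based on recent Cassandra versions (the type key may be Tables or 
ColumnFamilies depending on version), and the keyspace/table names are 
illustrative:

{code:java}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class SetCompactionViaJmx
{
    public static void main(String[] args) throws Exception
    {
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url))
        {
            MBeanServerConnection server = connector.getMBeanServerConnection();
            ObjectName table = new ObjectName(
                "org.apache.cassandra.db:type=Tables,keyspace=ks,table=tb");
            // The change applies locally and is not durable: it is lost on
            // restart or on any schema change, as described above.
            server.invoke(table, "setCompactionParametersJson",
                new Object[] { "{\"class\":\"SizeTieredCompactionStrategy\"}" },
                new String[] { String.class.getName() });
        }
    }
}
{code}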



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-16555) Add out-of-the-box snitch for Ec2 IMDSv2

2022-12-08 Thread Thomas Steinmaurer (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644718#comment-17644718
 ] 

Thomas Steinmaurer commented on CASSANDRA-16555:


[~brandon.williams] many thanks for picking this up! As there are PRs available 
now, how realistic is it that this will go into the not-yet-released 3.11.15? 
Again, thanks a lot!

> Add out-of-the-box snitch for Ec2 IMDSv2
> 
>
> Key: CASSANDRA-16555
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16555
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Consistency/Coordination
>Reporter: Paul Rütter (BlueConic)
>Assignee: fulco taen
>Priority: Normal
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> In order to patch a vulnerability, Amazon came up with a new version of 
> their metadata service. It is no longer unrestricted: accessing the metadata 
> service now requires a token passed in a header.
> See 
> [https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html]
>  for more information.
> Cassandra currently doesn't offer an out-of-the-box snitch class to support 
> this.
> See 
> [https://cassandra.apache.org/doc/latest/operating/snitch.html#snitch-classes]
> This issue asks to add support for this as a separate snitch class.
> We'll probably do a PR for this, as we are in the process of developing one.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-11721) Have a per operation truncate ddl "no snapshot" option

2022-12-08 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-11721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644708#comment-17644708
 ] 

Stefan Miklosovic edited comment on CASSANDRA-11721 at 12/8/22 10:10 AM:
-

This was effectively achieved in CASSANDRA-10383

Is this ticket still relevant?

The difference between 10383 and this one is that there is no option in CQL 
for TRUNCATE; it is driven by a table property instead. I think that is a 
better approach because, in practice, I can't see a scenario where I do not 
want to take a snapshot of a truncated table _just right now_ but am otherwise 
happy to have them. What would be the reasoning behind that? Either we want 
them or we do not. Some tables are simply not interesting enough to keep 
holding their data forever after truncation, and that decision is mostly made 
at table creation. If somebody wants snapshots anyway, they can just enable 
the flag again.

With a TRUNCATE option in CQL, somebody who never wants these snapshots taken 
upon truncation would be forced to mention it explicitly every single time.


was (Author: smiklosovic):
This was effectively achieved in CASSANDRA-10383. May we close this ticket?

> Have a per operation truncate ddl "no snapshot" option
> --
>
> Key: CASSANDRA-11721
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11721
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/CQL, Local/Snapshots
>Reporter: Jeremy Hanna
>Priority: Low
>  Labels: AdventCalendar2021
>
> Right now, truncate will always create a snapshot.  That is the right thing 
> to do most of the time.  'auto_snapshot' exists as an option to disable 
> that, but it is server-wide and requires a restart to change.  There are 
> data models, however, that require rotating through a handful of tables and 
> periodically truncating them.  Currently you either have to operate with no 
> safety net (some actually do this) or manually clear those snapshots out 
> periodically.  Both are less than optimal.
> In HDFS, deleting something generally sends it to the trash.  If you don't 
> want that safety net, you can skip it in one command with something like 
> 'rm -rf -skiptrash /jeremy/stuff'.
> It would be nice to have something in the truncate DDL to skip the snapshot 
> on a per-operation basis.  Perhaps 'TRUNCATE solarsystem.earth NO SNAPSHOT'.
> This might also be useful in those situations where you're just playing with 
> data and you don't want something to take a snapshot in a development 
> system.  If that's the case, this would also be useful for the DROP 
> operation, but that convenience is not the main reason for this option.
> +Additional information for newcomers:+
> This ticket is a bit more complex than normal LHF tickets but is still 
> reasonably easy.
> The idea is to support disabling snapshots when performing a TRUNCATE as 
> follows:
> {code}TRUNCATE x WITH OPTIONS = { 'snapshot' : false }{code}
> In order to implement that feature several changes are required:
> * A new class {{TruncateAttributes}} inheriting from {{PropertyDefinitions}} 
> must be created in a similar way to {{KeyspaceAttributes}} or 
> {{TableAttributes}} (a rough sketch follows this description)
> * This class should be passed to the {{TruncateStatement}} constructor and 
> stored as a field
> * The ANTLR parser logic should be changed to retrieve the options and pass 
> them to the constructor (see {{createKeyspaceStatement}} for an example)
> * The {{TruncateStatement}} will then need to be modified to take the new 
> option into account. Locally it will need to call 
> {{ColumnFamilyStore#truncateBlockingWithoutSnapshot}} if no snapshot should 
> be taken, instead of {{ColumnFamilyStore#truncateBlocking}}. For non-local 
> calls it will need to pass a new parameter to 
> {{StorageProxy#truncateBlocking}}. That parameter will then need to be 
> passed to the other nodes through the {{TruncateRequest}}.
> * Since a new field needs to be added to {{TruncateRequest}}, it will need 
> to be serialized and deserialized, and a new {{MessagingService.Version}} 
> will need to be created and set as the current version; the new version 
> should be 50 (and yes, that means the next release will be a major one, 5.0)
> * In {{TruncateVerbHandler}} the new field should be used to determine 
> whether {{ColumnFamilyStore#truncateBlockingWithoutSnapshot}} or 
> {{ColumnFamilyStore#truncateBlocking}} should be called.  
> * An in-jvm test should be added in 
> {{test/distributed/org/apache/cassandra/distributed/test}} to test that 
> truncate does not generate snapshots when the new option is specified. 
> Do not hesitate to ping the mentor for more information.
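
A rough sketch of the {{TruncateAttributes}} bullet above, under the 
assumption that {{PropertyDefinitions}} exposes a boolean getter roughly along 
these lines (package, method name and signature are assumptions, not the final 
API):

{code:java}
import org.apache.cassandra.cql3.statements.PropertyDefinitions; // package assumed

public class TruncateAttributes extends PropertyDefinitions
{
    private static final String SNAPSHOT_OPTION = "snapshot";

    // Defaults to true so a plain TRUNCATE keeps today's behaviour;
    // TRUNCATE x WITH OPTIONS = { 'snapshot' : false } turns it off.
    public boolean isSnapshotRequested()
    {
        return getBoolean(SNAPSHOT_OPTION, true); // getter assumed
    }
}
{code}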



--
This message was sent by Atlassian Jira
(v8.20.10#8

[jira] [Updated] (CASSANDRA-11721) Have a per operation truncate ddl "no snapshot" option

2022-12-08 Thread Stefan Miklosovic (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Miklosovic updated CASSANDRA-11721:
--
Component/s: Local/Snapshots

> Have a per operation truncate ddl "no snapshot" option
> --
>
> Key: CASSANDRA-11721
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11721
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/CQL, Local/Snapshots
>Reporter: Jeremy Hanna
>Priority: Low
>  Labels: AdventCalendar2021
>
> Right now, truncate will always create a snapshot.  That is the right thing 
> to do most of the time.  'auto_snapshot' exists as an option to disable 
> that, but it is server-wide and requires a restart to change.  There are 
> data models, however, that require rotating through a handful of tables and 
> periodically truncating them.  Currently you either have to operate with no 
> safety net (some actually do this) or manually clear those snapshots out 
> periodically.  Both are less than optimal.
> In HDFS, deleting something generally sends it to the trash.  If you don't 
> want that safety net, you can skip it in one command with something like 
> 'rm -rf -skiptrash /jeremy/stuff'.
> It would be nice to have something in the truncate DDL to skip the snapshot 
> on a per-operation basis.  Perhaps 'TRUNCATE solarsystem.earth NO SNAPSHOT'.
> This might also be useful in those situations where you're just playing with 
> data and you don't want something to take a snapshot in a development 
> system.  If that's the case, this would also be useful for the DROP 
> operation, but that convenience is not the main reason for this option.
> +Additional information for newcomers:+
> This ticket is a bit more complex than normal LHF tickets but is still 
> reasonably easy.
> The idea is to support disabling snapshots when performing a TRUNCATE as 
> follows:
> {code}TRUNCATE x WITH OPTIONS = { 'snapshot' : false }{code}
> In order to implement that feature several changes are required:
> * A new class {{TruncateAttributes}} inheriting from {{PropertyDefinitions}} 
> must be created in a similar way to {{KeyspaceAttributes}} or 
> {{TableAttributes}}
> * This class should be passed to the {{TruncateStatement}} constructor and 
> stored as a field
> * The ANTLR parser logic should be changed to retrieve the options and pass 
> them to the constructor (see {{createKeyspaceStatement}} for an example)
> * The {{TruncateStatement}} will then need to be modified to take the new 
> option into account. Locally it will need to call 
> {{ColumnFamilyStore#truncateBlockingWithoutSnapshot}} if no snapshot should 
> be taken, instead of {{ColumnFamilyStore#truncateBlocking}}. For non-local 
> calls it will need to pass a new parameter to 
> {{StorageProxy#truncateBlocking}}. That parameter will then need to be 
> passed to the other nodes through the {{TruncateRequest}}.
> * Since a new field needs to be added to {{TruncateRequest}}, it will need 
> to be serialized and deserialized, and a new {{MessagingService.Version}} 
> will need to be created and set as the current version; the new version 
> should be 50 (and yes, that means the next release will be a major one, 5.0)
> * In {{TruncateVerbHandler}} the new field should be used to determine 
> whether {{ColumnFamilyStore#truncateBlockingWithoutSnapshot}} or 
> {{ColumnFamilyStore#truncateBlocking}} should be called.  
> * An in-jvm test should be added in 
> {{test/distributed/org/apache/cassandra/distributed/test}} to test that 
> truncate does not generate snapshots when the new option is specified. 
> Do not hesitate to ping the mentor for more information.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-11721) Have a per operation truncate ddl "no snapshot" option

2022-12-08 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-11721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644708#comment-17644708
 ] 

Stefan Miklosovic commented on CASSANDRA-11721:
---

This was effectively achieved in CASSANDRA-10383. May we close this ticket?

> Have a per operation truncate ddl "no snapshot" option
> --
>
> Key: CASSANDRA-11721
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11721
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/CQL
>Reporter: Jeremy Hanna
>Priority: Low
>  Labels: AdventCalendar2021
>
> Right now, truncate will always create a snapshot.  That is the right thing 
> to do most of the time.  'auto_snapshot' exists as an option to disable 
> that, but it is server-wide and requires a restart to change.  There are 
> data models, however, that require rotating through a handful of tables and 
> periodically truncating them.  Currently you either have to operate with no 
> safety net (some actually do this) or manually clear those snapshots out 
> periodically.  Both are less than optimal.
> In HDFS, deleting something generally sends it to the trash.  If you don't 
> want that safety net, you can skip it in one command with something like 
> 'rm -rf -skiptrash /jeremy/stuff'.
> It would be nice to have something in the truncate DDL to skip the snapshot 
> on a per-operation basis.  Perhaps 'TRUNCATE solarsystem.earth NO SNAPSHOT'.
> This might also be useful in those situations where you're just playing with 
> data and you don't want something to take a snapshot in a development 
> system.  If that's the case, this would also be useful for the DROP 
> operation, but that convenience is not the main reason for this option.
> +Additional information for newcomers:+
> This ticket is a bit more complex than normal LHF tickets but is still 
> reasonably easy.
> The idea is to support disabling snapshots when performing a TRUNCATE as 
> follows:
> {code}TRUNCATE x WITH OPTIONS = { 'snapshot' : false }{code}
> In order to implement that feature several changes are required:
> * A new class {{TruncateAttributes}} inheriting from {{PropertyDefinitions}} 
> must be created in a similar way to {{KeyspaceAttributes}} or 
> {{TableAttributes}}
> * This class should be passed to the {{TruncateStatement}} constructor and 
> stored as a field
> * The ANTLR parser logic should be changed to retrieve the options and pass 
> them to the constructor (see {{createKeyspaceStatement}} for an example)
> * The {{TruncateStatement}} will then need to be modified to take the new 
> option into account. Locally it will need to call 
> {{ColumnFamilyStore#truncateBlockingWithoutSnapshot}} if no snapshot should 
> be taken, instead of {{ColumnFamilyStore#truncateBlocking}}. For non-local 
> calls it will need to pass a new parameter to 
> {{StorageProxy#truncateBlocking}}. That parameter will then need to be 
> passed to the other nodes through the {{TruncateRequest}}.
> * Since a new field needs to be added to {{TruncateRequest}}, it will need 
> to be serialized and deserialized, and a new {{MessagingService.Version}} 
> will need to be created and set as the current version; the new version 
> should be 50 (and yes, that means the next release will be a major one, 5.0)
> * In {{TruncateVerbHandler}} the new field should be used to determine 
> whether {{ColumnFamilyStore#truncateBlockingWithoutSnapshot}} or 
> {{ColumnFamilyStore#truncateBlocking}} should be called.  
> * An in-jvm test should be added in 
> {{test/distributed/org/apache/cassandra/distributed/test}} to test that 
> truncate does not generate snapshots when the new option is specified. 
> Do not hesitate to ping the mentor for more information.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-14013) Data loss in snapshots keyspace after service restart

2022-12-08 Thread Stefan Miklosovic (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Miklosovic reassigned CASSANDRA-14013:
-

Assignee: Stefan Miklosovic  (was: Stefan Miklosovic)

> Data loss in snapshots keyspace after service restart
> -
>
> Key: CASSANDRA-14013
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14013
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Core, Local/Snapshots
>Reporter: Gregor Uhlenheuer
>Assignee: Stefan Miklosovic
>Priority: Normal
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I am posting this bug in the hope of discovering the stupid mistake I am 
> making, because I can't imagine a reasonable explanation for the behavior I 
> see right now :-)
> In short, I observe data loss in a keyspace called *snapshots* after 
> restarting the Cassandra service. Say I have 1000 records in a table called 
> *snapshots.test_idx*; after a restart the table has fewer entries or is even 
> empty.
> The "mysterious" part of my observation is that it happens only in a 
> keyspace called *snapshots*...
> h3. Steps to reproduce
> These steps to reproduce show the described behavior in "most" attempts (not 
> every single time though).
> {code}
> # create keyspace
> CREATE KEYSPACE snapshots WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> # create table
> CREATE TABLE snapshots.test_idx (key text, seqno bigint, primary key(key));
> # insert some test data
> INSERT INTO snapshots.test_idx (key,seqno) values ('key1', 1);
> ...
> INSERT INTO snapshots.test_idx (key,seqno) values ('key1000', 1000);
> # count entries
> SELECT count(*) FROM snapshots.test_idx;
> 1000
> # restart service
> kill 
> cassandra -f
> # count entries
> SELECT count(*) FROM snapshots.test_idx;
> 0
> {code}
> I hope someone can point me to the obvious mistake I am doing :-)
> This happened to me using both Cassandra 3.9 and 3.11.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-18031) Make snapshot directory configurable

2022-12-08 Thread Stefan Miklosovic (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Miklosovic reassigned CASSANDRA-18031:
-

Assignee: Stefan Miklosovic

> Make snapshot directory configurable
> 
>
> Key: CASSANDRA-18031
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18031
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Snapshots
>Reporter: Paulo Motta
>Assignee: Stefan Miklosovic
>Priority: Normal
>  Labels: lhf
>
> Currently the snapshot directory is hard-coded to 
> \{table_dir}/snapshots/\{tag}.
> It should be possible to configure this to another directory, something like 
> \{snapshot_dir}/\{table}/\{snapshot_tag}.
> Alternatively we could make a single directory for all snapshots making the 
> global snapshot tag unique: \{snapshot_dir}/\{global_snapshot_tag}.
> We should probably check that \{snapshot_dir} is on the same disk as 
> \{storage_directory} to allow hard linking.
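
The "same disk" check in the last paragraph is cheap to express with NIO: hard 
links fail across filesystems, so both directories must resolve to the same 
FileStore. A minimal sketch with illustrative paths:

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SnapshotDirCheck
{
    // Hard links only work within a single filesystem, so a configurable
    // snapshot_dir must share a FileStore with the data directory.
    static boolean canHardLink(Path dataDir, Path snapshotDir) throws IOException
    {
        return Files.getFileStore(dataDir).equals(Files.getFileStore(snapshotDir));
    }

    public static void main(String[] args) throws IOException
    {
        Path data = Path.of("/var/lib/cassandra/data"); // illustrative
        Path snapshots = Path.of("/mnt/snapshots");     // illustrative
        System.out.println("hard links possible: " + canHardLink(data, snapshots));
    }
}
{code}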



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-18102) Add a virtual table to list snapshots

2022-12-08 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644695#comment-17644695
 ] 

Stefan Miklosovic edited comment on CASSANDRA-18102 at 12/8/22 9:49 AM:


This ticket is easy to do, but I think the result will not be optimal. 
Currently, upon every SELECT statement, we would need to load all snapshots 
from disk over and over, because SnapshotManager holds only the to-be-expired 
ones.


was (Author: smiklosovic):
This ticket is easy to do, but I think the result will not be optimal. 
Currently, upon every SELECT statement, we would need to load all snapshots 
from disk over and over, because SnapshotManager holds only the expired ones.

> Add a virtual table to list snapshots
> -
>
> Key: CASSANDRA-18102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18102
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Virtual Tables, Local/Snapshots
>Reporter: Paulo Motta
>Assignee: maxwellguo
>Priority: Normal
>
> It should be possible to query a node's snapshots via virtual tables.
> The table should expose the same fields/columns as the 
> [TableSnapshot|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/snapshot/TableSnapshot.java]
>  class.
> Something along these lines:
> {noformat}
> cqlsh> SELECT * FROM system_views.snapshots;
>  
>  tag           | keyspace_name | table_name      | table_id                             | is_ephemeral | created_at               | expires_at | directories
> ---------------+---------------+-----------------+--------------------------------------+--------------+--------------------------+------------+------------
>  1670460346841 | system        | compaction_info | 123e4567-e89b-12d3-a456-426614174000 | false        | 2022-12-08T00:45:47.108Z | null       | {'/var/lib/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/snapshots/1670460346841'}
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18102) Add a virtual table to list snapshots

2022-12-08 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644695#comment-17644695
 ] 

Stefan Miklosovic commented on CASSANDRA-18102:
---

This ticket is easy to do, but I think the result will not be optimal. 
Currently, upon every SELECT statement, we would need to load all snapshots 
from disk over and over, because SnapshotManager holds only the expired ones.

> Add a virtual table to list snapshots
> -
>
> Key: CASSANDRA-18102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18102
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Virtual Tables, Local/Snapshots
>Reporter: Paulo Motta
>Assignee: maxwellguo
>Priority: Normal
>
> It should be possible to query a node's snapshots via virtual tables.
> The table should expose the same fields/columns as the 
> [TableSnapshot|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/snapshot/TableSnapshot.java]
>  class.
> Something along these lines:
> {noformat}
> cqlsh> SELECT * FROM system_views.snapshots;
>  
>  tag           | keyspace_name | table_name      | table_id                             | is_ephemeral | created_at               | expires_at | directories
> ---------------+---------------+-----------------+--------------------------------------+--------------+--------------------------+------------+------------
>  1670460346841 | system        | compaction_info | 123e4567-e89b-12d3-a456-426614174000 | false        | 2022-12-08T00:45:47.108Z | null       | {'/var/lib/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/snapshots/1670460346841'}
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org