[jira] [Updated] (CASSANDRA-19569) sstableupgrade is very slow

2024-04-17 Thread Norbert Schultz (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Norbert Schultz updated CASSANDRA-19569:

   Platform: Java 11, Linux  (was: All)
Description: 
We are in the process of migrating Cassandra from 3.11.x to 4.1.4 and are 
upgrading the sstables from the `me-` to the `nb-` format using the 
sstableupgrade tool from Cassandra 4.1.4.

Unfortunately, the process is very slow (less than 0.5 MB/s).

Some observations:
- The process is only slow on (fast) SSDs, not on RAM disks.
- The sstables consist of many partitions (this may be unrelated).
- The upgrade is fast if we use `automatic_sstable_upgrade` instead of the 
sstableupgrade tool.
- We give the tool enough RAM (export MAX_HEAP_SIZE=8g).
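For reference, the server-side path mentioned above is a cassandra.yaml option; 
a minimal sketch (the concurrency knob is optional, and its default is an 
assumption here):

```
# cassandra.yaml (4.x): rewrite old-format sstables in the background
# after startup, instead of running the offline sstableupgrade tool.
automatic_sstable_upgrade: true
# Limit how many sstables are upgraded concurrently (assumed default: 1).
max_concurrent_automatic_sstable_upgrades: 1
```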

Profiling showed that sstableupgrade burns most of its CPU time in 
{{posix_fadvise}} (see flamegraph_sstableupgrade.png).

My naive interpretation of the {{maybeReopenEarly}} to {{posix_fadvise}} chain 
is that the process merely informs the Linux kernel that the written data 
should not be cached. If we comment out the call to 
{{NativeLibrary.trySkipCache}}, the conversion runs at the expected 10 MB/s 
(see flamegraph_ok.png).
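For illustration, a minimal sketch (not Cassandra code) of the advice call that 
{{NativeLibrary.trySkipCache}} ultimately issues; the helper name and demo file 
are hypothetical:

```python
import os
import tempfile

# posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED) tells the kernel that the
# pages of this file will not be needed again, so it may evict them from
# the page cache immediately.
def try_skip_cache(path):
    fd = os.open(path, os.O_RDONLY)
    try:
        # posix_fadvise is Linux-specific; guard for portability.
        if hasattr(os, "posix_fadvise") and hasattr(os, "POSIX_FADV_DONTNEED"):
            os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
            return True
        return False
    finally:
        os.close(fd)

# Demo on a throwaway file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 4096)
path = f.name
advised = try_skip_cache(path)
os.unlink(path)
```

Issuing this once per chunk of written data is cheap; the slowdown reported 
above suggests it is being called far more often than that.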




> sstableupgrade is very slow
> ---
>
> Key: CASSANDRA-19569
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19569
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Norbert Schultz
>Priority: Normal
> Attachments: flamegraph_ok.png, flamegraph_sstableupgrade.png
>
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-19569) sstableupgrade is very slow

2024-04-17 Thread Norbert Schultz (Jira)
Norbert Schultz created CASSANDRA-19569:
---

 Summary: sstableupgrade is very slow
 Key: CASSANDRA-19569
 URL: https://issues.apache.org/jira/browse/CASSANDRA-19569
 Project: Cassandra
  Issue Type: Bug
Reporter: Norbert Schultz
 Attachments: flamegraph_ok.png, flamegraph_sstableupgrade.png








[jira] [Commented] (CASSANDRA-19401) Nodetool import expects directory structure

2024-04-17 Thread Norbert Schultz (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838135#comment-17838135
 ] 

Norbert Schultz commented on CASSANDRA-19401:
-

Hi [~smiklosovic], I tested your patch and it works for the nodetool import use 
case :) Thanks a lot!


> Nodetool import expects directory structure
> ---
>
> Key: CASSANDRA-19401
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19401
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/SSTable
>Reporter: Norbert Schultz
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 4.1.x, 5.0.x, 5.x
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> According to the 
> [documentation|https://cassandra.apache.org/doc/4.1/cassandra/operating/bulk_loading.html]
>  the nodetool import should not rely on the folder structure of the imported 
> sstable files:
> {quote}
> Because the keyspace and table are specified on the command line for nodetool 
> import, there is not the same requirement as with sstableloader, to have the 
> SSTables in a specific directory path. When importing snapshots or 
> incremental backups with nodetool import, the SSTables don’t need to be 
> copied to another directory.
> {quote}
> However, when importing old Cassandra snapshots, we found that the sstables 
> still need to be in a directory named like $KEYSPACE/$TABLENAME, even 
> when the keyspace and table name are already given as parameters to the 
> nodetool import call.
> Call we used:
> {code}
> nodetool import --copy-data mykeyspace mytable /full_path_to/test1
> {code}
> Log:
> {code}
> INFO  [RMI TCP Connection(21)-127.0.0.1] 2024-02-15 10:41:06,565 
> SSTableImporter.java:72 - Loading new SSTables for mykeyspace/mytable: 
> Options{srcPaths='[/full_path_to/test1]', resetLevel=true, 
> clearRepaired=true, verifySSTables=true, verifyTokens=true, 
> invalidateCaches=true, extendedVerify=false, copyData= true}
> INFO  [RMI TCP Connection(21)-127.0.0.1] 2024-02-15 10:41:06,566 
> SSTableImporter.java:173 - No new SSTables were found for mykeyspace/mytable
> {code}
> However, when we move the sstables (.db-Files) to 
> {{alternative/mykeyspace/mytable}}
> and import with
> {code}
> nodetool import --copy-data mykeyspace mytable 
> /fullpath/alternative/mykeyspace/mytable
> {code}
> the import works
> {code}
> INFO  [RMI TCP Connection(23)-127.0.0.1] 2024-02-15 10:43:36,093 
> SSTableImporter.java:177 - Loading new SSTables and building secondary 
> indexes for mykeyspace/mytable: 
> [BigTableReader(path='/mnt/ramdisk/cassandra4/data/mykeyspace/mytable-561a12d0cbe611eead78fbfd293cee40/me-2-big-Data.db'),
>  
> BigTableReader(path='/mnt/ramdisk/cassandra4/data/mykeyspace/mytable-561a12d0cbe611eead78fbfd293cee40/me-1-big-Data.db')]
> INFO  [RMI TCP Connection(23)-127.0.0.1] 2024-02-15 10:43:36,093 
> SSTableImporter.java:190 - Done loading load new SSTables for 
> mykeyspace/mytable
> {code}
> We experienced this in Cassandra 4.1.3 on Java 11 (Linux)






[jira] [Commented] (CASSANDRA-19401) Nodetool import expects directory structure

2024-04-16 Thread Norbert Schultz (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17837992#comment-17837992
 ] 

Norbert Schultz commented on CASSANDRA-19401:
-

Hi [~smiklosovic], are there any updates on this?

We can work around the issue now that we know about it, but the rest of the 
community would probably be happy if it no longer happened.


> Nodetool import expects directory structure
> ---
>
> Key: CASSANDRA-19401
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19401
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/SSTable
>Reporter: Norbert Schultz
>Priority: Normal
> Fix For: 4.1.x, 5.0.x, 5.x
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>






[jira] [Commented] (CASSANDRA-19401) Nodetool import expects directory structure

2024-02-19 Thread Norbert Schultz (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17818375#comment-17818375
 ] 

Norbert Schultz commented on CASSANDRA-19401:
-

Hi Stefan [~smiklosovic], I tested your PR and nodetool import works there :)

I cannot say too much about the PR itself. From my perspective, it looks as if 
removing the check in question could have other implications.

Thanks a lot for your rapid response!


> Nodetool import expects directory structure
> ---
>
> Key: CASSANDRA-19401
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19401
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/SSTable
>Reporter: Norbert Schultz
>Priority: Normal
> Fix For: 4.1.x, 5.0.x, 5.x
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>






[jira] [Updated] (CASSANDRA-19401) Nodetool import expects directory structure

2024-02-15 Thread Norbert Schultz (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Norbert Schultz updated CASSANDRA-19401:

Description: 
According to the 
[documentation|https://cassandra.apache.org/doc/4.1/cassandra/operating/bulk_loading.html]
 the nodetool import should not rely on the folder structure of the imported 
sstable files:

{quote}
Because the keyspace and table are specified on the command line for nodetool 
import, there is not the same requirement as with sstableloader, to have the 
SSTables in a specific directory path. When importing snapshots or incremental 
backups with nodetool import, the SSTables don’t need to be copied to another 
directory.
{quote}

However, when importing old Cassandra snapshots, we found that the sstables 
still need to be in a directory named like $KEYSPACE/$TABLENAME, even when the 
keyspace and table name are already given as parameters to the nodetool import 
call.

Call we used:

{code}
nodetool import --copy-data mykeyspace mytable /full_path_to/test1
{code}

Log:

{code}
INFO  [RMI TCP Connection(21)-127.0.0.1] 2024-02-15 10:41:06,565 
SSTableImporter.java:72 - Loading new SSTables for mykeyspace/mytable: 
Options{srcPaths='[/full_path_to/test1]', resetLevel=true, clearRepaired=true, 
verifySSTables=true, verifyTokens=true, invalidateCaches=true, 
extendedVerify=false, copyData= true}
INFO  [RMI TCP Connection(21)-127.0.0.1] 2024-02-15 10:41:06,566 
SSTableImporter.java:173 - No new SSTables were found for mykeyspace/mytable
{code}

However, when we move the sstables (.db-Files) to 
{{alternative/mykeyspace/mytable}}

and import with

{code}
nodetool import --copy-data mykeyspace mytable 
/fullpath/alternative/mykeyspace/mytable
{code}

the import works

{code}
INFO  [RMI TCP Connection(23)-127.0.0.1] 2024-02-15 10:43:36,093 
SSTableImporter.java:177 - Loading new SSTables and building secondary indexes 
for mykeyspace/mytable: 
[BigTableReader(path='/mnt/ramdisk/cassandra4/data/mykeyspace/mytable-561a12d0cbe611eead78fbfd293cee40/me-2-big-Data.db'),
 
BigTableReader(path='/mnt/ramdisk/cassandra4/data/mykeyspace/mytable-561a12d0cbe611eead78fbfd293cee40/me-1-big-Data.db')]
INFO  [RMI TCP Connection(23)-127.0.0.1] 2024-02-15 10:43:36,093 
SSTableImporter.java:190 - Done loading load new SSTables for mykeyspace/mytable
{code}


We experienced this in Cassandra 4.1.3 on Java 11 (Linux)
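Until this is fixed, the staging step described above can be scripted. A 
minimal sketch; the helper name and paths are hypothetical:

```python
import os
import shutil
import tempfile

# Copy the sstable files from a snapshot directory into
# <work_dir>/<keyspace>/<table>, so the directory layout matches what the
# path check in `nodetool import` currently expects.
def stage_for_import(src_dir, keyspace, table, work_dir):
    target = os.path.join(work_dir, keyspace, table)
    os.makedirs(target, exist_ok=True)
    for name in os.listdir(src_dir):
        shutil.copy2(os.path.join(src_dir, name), target)
    # Then run: nodetool import --copy-data <keyspace> <table> <target>
    return target

# Demo with throwaway files.
src = tempfile.mkdtemp()
work = tempfile.mkdtemp()
open(os.path.join(src, "me-1-big-Data.db"), "wb").close()
target = stage_for_import(src, "mykeyspace", "mytable", work)
```

With --copy-data the originals stay untouched, so the staging directory can be 
deleted after the import succeeds.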



> Nodetool import expects directory structure
> ---
>
> Key: CASSANDRA-19401
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19401
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Norbert Schultz
>Priority: N

[jira] [Updated] (CASSANDRA-19401) Nodetool import expects directory structure

2024-02-15 Thread Norbert Schultz (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Norbert Schultz updated CASSANDRA-19401:

Summary: Nodetool import expects directory structure  (was: nodetool import 
expects directory)

> Nodetool import expects directory structure
> ---
>
> Key: CASSANDRA-19401
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19401
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Norbert Schultz
>Priority: Normal
>






[jira] [Created] (CASSANDRA-19401) nodetool import expects directory

2024-02-15 Thread Norbert Schultz (Jira)
Norbert Schultz created CASSANDRA-19401:
---

 Summary: nodetool import expects directory
 Key: CASSANDRA-19401
 URL: https://issues.apache.org/jira/browse/CASSANDRA-19401
 Project: Cassandra
  Issue Type: Bug
Reporter: Norbert Schultz








[jira] [Updated] (CASSANDRA-16656) Assertion Error on invalid ALTER TABLE Command

2021-05-05 Thread Norbert Schultz (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Norbert Schultz updated CASSANDRA-16656:

Description: 
If an ALTER TABLE statement is invalid (an extra comma), Cassandra responds 
with an assertion error.

This happens on 3.11.10 but not on 4.0-rc1.

This statement fails:
{code:java}
> cqlsh
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.11.10 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh> CREATE KEYSPACE foo WITH REPLICATION = { 'class' : 'SimpleStrategy', 
'replication_factor' : 1 };
cqlsh> use foo;
cqlsh:foo> create table test(id INT, PRIMARY KEY(id));
cqlsh:foo> alter table test ADD (x INT, y INT,);
ServerError: java.lang.AssertionError
{code}

The following can be found in the log:

{code}
java.lang.AssertionError: null
at 
org.apache.cassandra.cql3.statements.AlterTableStatementColumn.(AlterTableStatementColumn.java:36)
 ~[apache-cassandra-3.11.10.jar:3.11.10]
at 
org.apache.cassandra.cql3.Cql_Parser.alterTableStatement(Cql_Parser.java:5820) 
~[apache-cassandra-3.11.10.jar:3.11.10]
at 
org.apache.cassandra.cql3.Cql_Parser.cqlStatement(Cql_Parser.java:628) 
~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.cql3.CqlParser.cqlStatement(CqlParser.java:604) 
~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.cql3.CqlParser.query(CqlParser.java:344) 
~[apache-cassandra-3.11.10.jar:3.11.10]
at 
org.apache.cassandra.cql3.CQLFragmentParser.parseAnyUnhandled(CQLFragmentParser.java:76)
 ~[apache-cassandra-3.11.10.jar:3.11.10]
at 
org.apache.cassandra.cql3.QueryProcessor.parseStatement(QueryProcessor.java:589)
 ~[apache-cassandra-3.11.10.jar:3.11.10]
at 
org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:559) 
~[apache-cassandra-3.11.10.jar:3.11.10]
at 
org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:247) 
~[apache-cassandra-3.11.10.jar:3.11.10]
at 
org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:241) 
~[apache-cassandra-3.11.10.jar:3.11.10]
at 
org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:116)
 ~[apache-cassandra-3.11.10.jar:3.11.10]
at 
org.apache.cassandra.transport.Message$Dispatcher.processRequest(Message.java:685)
 [apache-cassandra-3.11.10.jar:3.11.10]
at 
org.apache.cassandra.transport.Message$Dispatcher.lambda$channelRead0$0(Message.java:591)
 [apache-cassandra-3.11.10.jar:3.11.10]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_292]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
 ~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:113) 
~[apache-cassandra-3.11.10.jar:3.11.10]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_292]
{code}

Cassandra 4.0-rc1 responds as expected:
{code}
cqlsh:foo> alter table test ADD (x INT, y INT,);
SyntaxException: line 1:35 no viable alternative at input ')' (...(x INT, y 
INT,[)]...)
{code}


[jira] [Updated] (CASSANDRA-16656) Assertion Error on invalid ALTER TABLE Command

2021-05-05 Thread Norbert Schultz (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Norbert Schultz updated CASSANDRA-16656:

   Severity: Low
Description: 
If there is an invalid ALTER TABLE statement (extra comma), then Cassandra 
responds with an assertion error.

 

This happens on 3.11.10 but not on 4.0-rc1

This statement fails:
{code:java}
> cqlsh
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.11.10 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh> CREATE KEYSPACE foo WITH REPLICATION = { 'class' : 'SimpleStrategy', 
'replication_factor' : 1 };
cqlsh> use foo;
cqlsh:foo> create table test(id INT, PRIMARY KEY(id));
cqlsh:foo> alter table test ADD (x INT, y INT,);
ServerError: java.lang.AssertionError
{code}

The following can be found inside the Log:

{code}
java.lang.AssertionError: null
at 
org.apache.cassandra.cql3.statements.AlterTableStatementColumn.&lt;init&gt;(AlterTableStatementColumn.java:36)
 ~[apache-cassandra-3.11.10.jar:3.11.10]
at 
org.apache.cassandra.cql3.Cql_Parser.alterTableStatement(Cql_Parser.java:5820) 
~[apache-cassandra-3.11.10.jar:3.11.10]
at 
org.apache.cassandra.cql3.Cql_Parser.cqlStatement(Cql_Parser.java:628) 
~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.cql3.CqlParser.cqlStatement(CqlParser.java:604) 
~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.cql3.CqlParser.query(CqlParser.java:344) 
~[apache-cassandra-3.11.10.jar:3.11.10]
at 
org.apache.cassandra.cql3.CQLFragmentParser.parseAnyUnhandled(CQLFragmentParser.java:76)
 ~[apache-cassandra-3.11.10.jar:3.11.10]
at 
org.apache.cassandra.cql3.QueryProcessor.parseStatement(QueryProcessor.java:589)
 ~[apache-cassandra-3.11.10.jar:3.11.10]
at 
org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:559) 
~[apache-cassandra-3.11.10.jar:3.11.10]
at 
org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:247) 
~[apache-cassandra-3.11.10.jar:3.11.10]
at 
org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:241) 
~[apache-cassandra-3.11.10.jar:3.11.10]
at 
org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:116)
 ~[apache-cassandra-3.11.10.jar:3.11.10]
at 
org.apache.cassandra.transport.Message$Dispatcher.processRequest(Message.java:685)
 [apache-cassandra-3.11.10.jar:3.11.10]
at 
org.apache.cassandra.transport.Message$Dispatcher.lambda$channelRead0$0(Message.java:591)
 [apache-cassandra-3.11.10.jar:3.11.10]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_292]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
 ~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:113) 
~[apache-cassandra-3.11.10.jar:3.11.10]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_292]
{code}
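The effect of the trailing comma can be sketched outside Cassandra: splitting the ADD (...) column list naively shows that it yields an empty trailing entry, i.e. a column definition with no name and no type, which is the kind of input the assertion at AlterTableStatementColumn.java:36 trips over instead of producing a clean syntax error. (Illustrative Python only, not Cassandra's actual ANTLR grammar.)

```python
# Minimal sketch (not Cassandra's parser): naively splitting the column
# list of "ALTER TABLE test ADD (x INT, y INT,)" on commas shows the
# trailing comma produces an empty trailing entry.
def split_add_list(column_list):
    """Split an ADD (...) column list on commas, keeping empty entries."""
    return [entry.strip() for entry in column_list.split(",")]

print(split_add_list("x INT, y INT"))   # ['x INT', 'y INT']
print(split_add_list("x INT, y INT,"))  # ['x INT', 'y INT', '']
```

The valid statement simply omits the trailing comma: {{ALTER TABLE test ADD (x INT, y INT);}}.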

  was:
If there is an invalid ALTER TABLE statement (extra comma), then Cassandra 
responds with an assertion error.

 

This happens on 3.11.10 but not on 4.0-rc1

This statement fails:
{code:java}
> cqlsh
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.11.10 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh> CREATE KEYSPACE foo WITH REPLICATION = { 'class' : 'SimpleStrategy', 
'replication_factor' : 1 };
cqlsh> use foo;
cqlsh:foo> create table test(id INT, PRIMARY KEY(id));
cqlsh:foo> alter table test ADD (x INT, y INT,);
ServerError: java.lang.AssertionError
{code}


> Assertion Error on invalid ALTER TABLE Command
> --
>
> Key: CASSANDRA-16656
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16656
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Syntax
>Reporter: Norbert Schultz
>Priority: Low
>
> If there is an invalid ALTER TABLE statement (extra comma), then Cassandra 
> responds with an assertion error.
>  
> This happens on 3.11.10 but not on 4.0-rc1
> This statement fails:
> {code:java}
> > cqlsh
> Connected to Test Cluster at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 3.11.10 | CQL spec 3.4.4 | Native protocol v4]
> Use HELP for help.
> cqlsh> CREATE KEYSPACE foo WITH REPLICATION = { 'class' : 'SimpleStrategy', 
> 'replication_factor' : 1 };
> cqlsh> use foo;
> cqlsh:foo> create table test(id INT, PRIMARY KEY(id));
> cqlsh:foo> alter table test ADD (x INT, y INT,);
> ServerError: java.lang.AssertionError
> {code}
> The following can be found inside the Log:
> {code}
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.cql3.statements.AlterTableStatementColumn.&lt;init&gt;(AlterTableStatementColumn.java:36)
>  ~[apache-cassandra-3.11.10.jar:3.

[jira] [Created] (CASSANDRA-16656) Assertion Error on invalid ALTER TABLE Command

2021-05-05 Thread Norbert Schultz (Jira)
Norbert Schultz created CASSANDRA-16656:
---

 Summary: Assertion Error on invalid ALTER TABLE Command
 Key: CASSANDRA-16656
 URL: https://issues.apache.org/jira/browse/CASSANDRA-16656
 Project: Cassandra
  Issue Type: Bug
  Components: CQL/Syntax
Reporter: Norbert Schultz


If there is an invalid ALTER TABLE statement (extra comma), then Cassandra 
responds with an assertion error.

 

This happens on 3.11.10 but not on 4.0-rc1

This statement fails:
{code:java}
> cqlsh
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.11.10 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh> CREATE KEYSPACE foo WITH REPLICATION = { 'class' : 'SimpleStrategy', 
'replication_factor' : 1 };
cqlsh> use foo;
cqlsh:foo> create table test(id INT, PRIMARY KEY(id));
cqlsh:foo> alter table test ADD (x INT, y INT,);
ServerError: java.lang.AssertionError
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12882) Deadlock in MemtableAllocator

2018-04-18 Thread Norbert Schultz (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Norbert Schultz updated CASSANDRA-12882:

Attachment: stacktrace_cassandra_12882.txt

> Deadlock in MemtableAllocator
> -
>
> Key: CASSANDRA-12882
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12882
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 14.40
> Cassandra 3.5
>Reporter: Nimi Wariboko Jr.
>Priority: Major
> Fix For: 3.11.x
>
> Attachments: cassandra.yaml, stacktrace_cassandra_12882.txt, 
> threaddump.txt
>
>
> I'm seeing an issue where a node will eventually lock up, along with its 
> thread pools. I looked into jstack, and a lot of threads are stuck in the 
> MemtableAllocator:
> {code}
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
>   at 
> org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUninterruptibly(WaitQueue.java:279)
>   at 
> org.apache.cassandra.utils.memory.MemtableAllocator$SubAllocator.allocate(MemtableAllocator.java:198)
>   at 
> org.apache.cassandra.utils.memory.SlabAllocator.allocate(SlabAllocator.java:89)
>   at 
> org.apache.cassandra.utils.memory.ContextAllocator.allocate(ContextAllocator.java:57)
>   at 
> org.apache.cassandra.utils.memory.ContextAllocator.clone(ContextAllocator.java:47)
>   at 
> org.apache.cassandra.utils.memory.MemtableBufferAllocator.clone(MemtableBufferAllocator.java:41)
> {code}
> I looked into the code, and it's not immediately apparent to me which thread 
> might hold the relevant lock.






[jira] [Commented] (CASSANDRA-12882) Deadlock in MemtableAllocator

2018-04-18 Thread Norbert Schultz (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16442109#comment-16442109
 ] 

Norbert Schultz commented on CASSANDRA-12882:
-

[~krummas] I can provide you with the thread dump part from a heap dump.

[~jjirsa] I have no physical access to the machines, but according to our admin 
staff the machines do not respond at all anymore, and they are restarted as soon 
as they become stuck. In their view there is no abnormally high number of sstables.

 

 

> Deadlock in MemtableAllocator
> -
>
> Key: CASSANDRA-12882
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12882
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 14.40
> Cassandra 3.5
>Reporter: Nimi Wariboko Jr.
>Priority: Major
> Fix For: 3.11.x
>
> Attachments: cassandra.yaml, threaddump.txt
>
>
> I'm seeing an issue where a node will eventually lock up, along with its 
> thread pools. I looked into jstack, and a lot of threads are stuck in the 
> MemtableAllocator:
> {code}
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
>   at 
> org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUninterruptibly(WaitQueue.java:279)
>   at 
> org.apache.cassandra.utils.memory.MemtableAllocator$SubAllocator.allocate(MemtableAllocator.java:198)
>   at 
> org.apache.cassandra.utils.memory.SlabAllocator.allocate(SlabAllocator.java:89)
>   at 
> org.apache.cassandra.utils.memory.ContextAllocator.allocate(ContextAllocator.java:57)
>   at 
> org.apache.cassandra.utils.memory.ContextAllocator.clone(ContextAllocator.java:47)
>   at 
> org.apache.cassandra.utils.memory.MemtableBufferAllocator.clone(MemtableBufferAllocator.java:41)
> {code}
> I looked into the code, and it's not immediately apparent to me which thread 
> might hold the relevant lock.






[jira] [Commented] (CASSANDRA-12882) Deadlock in MemtableAllocator

2018-04-12 Thread Norbert Schultz (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435522#comment-16435522
 ] 

Norbert Schultz commented on CASSANDRA-12882:
-

We have the same problem on fairly large machines with Cassandra 3.0.14 & DSE 
5.0.11.

A lot of threads are waiting for the MemtableAllocator; the whole instance 
blocks and does not respond to anything.

 
{code:java}
"SharedPool-Worker-19" daemon prio=5 tid=1542 WAITING
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
at 
org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUninterruptibly(WaitQueue.java:279)
Local Variable: org.apache.cassandra.utils.concurrent.WaitQueue$AnySignal#10
at 
org.apache.cassandra.utils.memory.MemtableAllocator$SubAllocator.allocate(MemtableAllocator.java:162)
at 
org.apache.cassandra.utils.memory.SlabAllocator.allocate(SlabAllocator.java:82)
at 
org.apache.cassandra.utils.memory.ContextAllocator.allocate(ContextAllocator.java:57)
at 
org.apache.cassandra.utils.memory.ContextAllocator.clone(ContextAllocator.java:47)
...{code}
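The pattern in the dump is that SubAllocator.allocate() parks the worker uninterruptibly until some other thread releases memory. A toy bounded allocator in Python (hypothetical names, not Cassandra code) captures why every writer stalls once the pool is exhausted and nothing ever calls release():

```python
import threading

class ToyAllocator:
    """Toy bounded allocator: allocate() parks until enough memory is
    free, mirroring SubAllocator.allocate()/awaitUninterruptibly().
    Hypothetical sketch, not Cassandra's implementation."""

    def __init__(self, capacity):
        self.free = capacity
        self._cond = threading.Condition()

    def allocate(self, size):
        with self._cond:
            # Park (like awaitUninterruptibly) until memory is released;
            # if no thread ever calls release(), we wait forever.
            while self.free < size:
                self._cond.wait()
            self.free -= size

    def release(self, size):
        with self._cond:
            self.free += size
            self._cond.notify_all()

pool = ToyAllocator(capacity=100)
pool.allocate(60)
pool.allocate(40)   # pool exhausted: the next allocate() would park
pool.release(100)   # until some other thread gives memory back
pool.allocate(100)
print(pool.free)    # 0
```

If the threads that would normally release memory (e.g. flush) are themselves blocked, no signal ever arrives and the whole worker pool hangs exactly as in the dump above.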

> Deadlock in MemtableAllocator
> -
>
> Key: CASSANDRA-12882
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12882
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 14.40
> Cassandra 3.5
>Reporter: Nimi Wariboko Jr.
>Priority: Major
> Fix For: 3.11.x
>
> Attachments: cassandra.yaml, threaddump.txt
>
>
> I'm seeing an issue where a node will eventually lock up, along with its 
> thread pools. I looked into jstack, and a lot of threads are stuck in the 
> MemtableAllocator:
> {code}
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
>   at 
> org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUninterruptibly(WaitQueue.java:279)
>   at 
> org.apache.cassandra.utils.memory.MemtableAllocator$SubAllocator.allocate(MemtableAllocator.java:198)
>   at 
> org.apache.cassandra.utils.memory.SlabAllocator.allocate(SlabAllocator.java:89)
>   at 
> org.apache.cassandra.utils.memory.ContextAllocator.allocate(ContextAllocator.java:57)
>   at 
> org.apache.cassandra.utils.memory.ContextAllocator.clone(ContextAllocator.java:47)
>   at 
> org.apache.cassandra.utils.memory.MemtableBufferAllocator.clone(MemtableBufferAllocator.java:41)
> {code}
> I looked into the code, and it's not immediately apparent to me which thread 
> might hold the relevant lock.






[jira] [Commented] (CASSANDRA-14245) SELECT JSON prints null on empty strings

2018-03-29 Thread Norbert Schultz (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16418535#comment-16418535
 ] 

Norbert Schultz commented on CASSANDRA-14245:
-

Thanks for fixing.

 

> SELECT JSON prints null on empty strings
> 
>
> Key: CASSANDRA-14245
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14245
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Cassandra 3.11.2, Ubuntu 16.04 LTS
>  
>Reporter: Norbert Schultz
>Assignee: Francisco Fernandez
>Priority: Major
> Fix For: 4.0, 3.11.3
>
>
> SELECT JSON reports an empty string as null.
>  
> Example:
> {code:java}
> cqlsh:unittest> create table test(id INT, name TEXT, PRIMARY KEY(id));
> cqlsh:unittest> insert into test (id, name) VALUES (1, 'Foo');
> cqlsh:unittest> insert into test (id, name) VALUES (2, '');
> cqlsh:unittest> insert into test (id, name) VALUES (3, null);
> cqlsh:unittest> select * from test;
> id | name
> +--
>   1 |  Foo
>   2 |     
>   3 | null
> (3 rows)
> cqlsh:unittest> select JSON * from test;
> [json]
> --
> {"id": 1, "name": "Foo"}
> {"id": 2, "name": null}
> {"id": 3, "name": null}
> (3 rows){code}
>  
> This even happens if the string is part of the primary key, which makes the 
> generated JSON string not insertable.
>  
> {code:java}
> cqlsh:unittest> create table test2 (id INT, name TEXT, age INT, PRIMARY 
> KEY(id, name));
> cqlsh:unittest> insert into test2 (id, name, age) VALUES (1, '', 42);
> cqlsh:unittest> select JSON * from test2;
> [json]
> 
> {"id": 1, "name": null, "age": 42}
> (1 rows)
> cqlsh:unittest> insert into test2 JSON '{"id": 1, "name": null, "age": 42}';
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Invalid 
> null value in condition for column name"{code}
>  
> An older version of Cassandra (3.0.8) does not have this problem.






[jira] [Created] (CASSANDRA-14245) SELECT JSON prints null on empty strings

2018-02-21 Thread Norbert Schultz (JIRA)
Norbert Schultz created CASSANDRA-14245:
---

 Summary: SELECT JSON prints null on empty strings
 Key: CASSANDRA-14245
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14245
 Project: Cassandra
  Issue Type: Bug
  Components: CQL
 Environment: Cassandra 3.11.2, Ubuntu 16.04 LTS

 
Reporter: Norbert Schultz


SELECT JSON reports an empty string as null.

 

Example:
{code:java}
cqlsh:unittest> create table test(id INT, name TEXT, PRIMARY KEY(id));
cqlsh:unittest> insert into test (id, name) VALUES (1, 'Foo');
cqlsh:unittest> insert into test (id, name) VALUES (2, '');
cqlsh:unittest> insert into test (id, name) VALUES (3, null);

cqlsh:unittest> select * from test;

id | name
+--
  1 |  Foo
  2 |     
  3 | null

(3 rows)

cqlsh:unittest> select JSON * from test;

[json]
--
{"id": 1, "name": "Foo"}
{"id": 2, "name": null}
{"id": 3, "name": null}

(3 rows){code}
 

This even happens if the string is part of the primary key, which makes the 
generated JSON string not insertable.

 
{code:java}
cqlsh:unittest> create table test2 (id INT, name TEXT, age INT, PRIMARY KEY(id, 
name));
cqlsh:unittest> insert into test2 (id, name, age) VALUES (1, '', 42);
cqlsh:unittest> select JSON * from test2;

[json]

{"id": 1, "name": null, "age": 42}

(1 rows)

cqlsh:unittest> insert into test2 JSON '{"id": 1, "name": null, "age": 42}';
InvalidRequest: Error from server: code=2200 [Invalid query] message="Invalid 
null value in condition for column name"{code}
 

An older version of Cassandra (3.0.8) does not have this problem.
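The underlying point is that JSON itself distinguishes the empty string from null, so SELECT JSON has all the information it needs to round-trip both values. A short illustration with Python's standard json module (not Cassandra's encoder):

```python
import json

# In JSON, "" and null are distinct values, so a text column holding an
# empty string should serialize as "" rather than null.
row_empty = {"id": 2, "name": ""}
row_null = {"id": 3, "name": None}

print(json.dumps(row_empty))  # {"id": 2, "name": ""}
print(json.dumps(row_null))   # {"id": 3, "name": null}
```

Because Cassandra 3.11.2 emits null for both rows, the SELECT JSON output for the empty-string row can no longer be fed back through INSERT JSON when the column is part of the primary key.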


