[jira] [Comment Edited] (CASSANDRA-10279) Inconsistent update/delete behavior for static lists vs lists

2015-09-07 Thread Lex Lythius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14733873#comment-14733873
 ] 

Lex Lythius edited comment on CASSANDRA-10279 at 9/7/15 4:10 PM:
-----------------------------------------------------------------

Most likely the underlying issue is the same as in 
https://issues.apache.org/jira/browse/CASSANDRA-9838. Removing elements by 
setting to null doesn't work either:
{code:sql}
update t_lists set sls[0]=null where id=1;
InvalidRequest: code=2200 [Invalid query] message="Attempted to set an element 
on a list which is null"
{code}
That issue has a patch available and is due to be fixed in 2.1.x; it is not 
fixed in 2.2.0, however.
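For anyone comparing behaviors, here is a minimal Python sketch (my own illustration of the expected semantics, not Cassandra internals) of the two element-level operations that work on a regular list column but currently fail on a static one:

```python
# Expected semantics of CQL list element operations, modeled on plain
# Python lists. `delete_at` mirrors `DELETE ls[index] FROM ...`;
# `discard` mirrors `UPDATE ... SET ls = ls - [...]`.

def delete_at(lst, index):
    """Drop the element at `index`, as `DELETE ls[index]` should."""
    if lst is None:
        # This is the error cqlsh reports for the static column,
        # even though the list clearly is not null.
        raise ValueError("Attempted to delete an element from a list which is null")
    return lst[:index] + lst[index + 1:]

def discard(lst, values):
    """Remove every occurrence of `values`, as `SET ls = ls - [...]` should."""
    return [x for x in lst if x not in values]

sls = ['chico', 'harpo', 'zeppo', 'groucho']
assert delete_at(sls, 2) == ['chico', 'harpo', 'groucho']
assert discard(sls, ['zeppo']) == ['chico', 'harpo', 'groucho']
```

Both operations succeed against the non-static column in the transcript below; against the static column the first raises the bogus "null list" error and the second is silently ignored.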


was (Author: lexlythius):
Maybe the underlying issue is the same as in 
https://issues.apache.org/jira/browse/CASSANDRA-9838.

> Inconsistent update/delete behavior for static lists vs lists
> -------------------------------------------------------------
>
> Key: CASSANDRA-10279
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10279
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: [cqlsh 5.0.1 | Cassandra 2.2.0 | CQL spec 3.3.0 | Native 
> protocol v4]
> Ubuntu 14.04 x64
>Reporter: Lex Lythius
>
> Partial list deletions (either {{UPDATE list = list - \[...]}} or 
> {{DELETE list\[index]}}) work fine in regular list columns, but do 
> nothing or throw a bad-index error when applied to *static* list columns.
> Example:
> {code:sql}
> create table t_lists (
>   id int,
>   dt int,
>   ls list<text>,
>   sls list<text> static,
>   primary key(id, dt) 
> );
> cqlsh:test> update t_lists set ls = ['foo', 'bar', 'baz'], sls = ['chico', 
> 'harpo', 'zeppo', 'groucho'] where id=1 and dt=1;
> cqlsh:test> select * from t_lists;
>  id | dt | sls                                    | ls
> ----+----+----------------------------------------+-----------------------
>   1 |  1 | ['chico', 'harpo', 'zeppo', 'groucho'] | ['foo', 'bar', 'baz']
> (1 rows)
> cqlsh:test> delete ls[2] from t_lists where id = 1 and dt = 1; -- works
> cqlsh:test> delete sls[2] from t_lists where id = 1; -- errors
> InvalidRequest: code=2200 [Invalid query] message="Attempted to delete an 
> element from a list which is null"
> cqlsh:test> select * from t_lists;
>  id | dt | sls                                    | ls
> ----+----+----------------------------------------+-----------------------
>   1 |  1 | ['chico', 'harpo', 'zeppo', 'groucho'] | ['foo', 'bar']
> (1 rows)
> cqlsh:test> update t_lists set ls = ls - ['bar'] where id=1 and dt=1; -- works
> cqlsh:test> update t_lists set sls = sls - ['zeppo'] where id=1; -- fails 
> silently
> cqlsh:test> select * from t_lists;
>  id | dt | sls                                    | ls
> ----+----+----------------------------------------+-----------------------
>   1 |  1 | ['chico', 'harpo', 'zeppo', 'groucho'] | ['foo']
> (1 rows)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Created] (CASSANDRA-10279) Inconsistent update/delete behavior for static lists vs lists

2015-09-07 Thread Lex Lythius (JIRA)
Lex Lythius created CASSANDRA-10279:
---

 Summary: Inconsistent update/delete behavior for static lists vs 
lists
 Key: CASSANDRA-10279
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10279
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: [cqlsh 5.0.1 | Cassandra 2.2.0 | CQL spec 3.3.0 | Native 
protocol v4]
Ubuntu 14.04 x64

Reporter: Lex Lythius


Partial list deletions (either {{UPDATE list = list - \[...]}} or 
{{DELETE list\[index]}}) work fine in regular list columns, but do 
nothing or throw a bad-index error when applied to *static* list columns.

Example:
{code:sql}
create table t_lists (
  id int,
  dt int,
  ls list<text>,
  sls list<text> static,
  primary key(id, dt) 
);
cqlsh:test> update t_lists set ls = ['foo', 'bar', 'baz'], sls = ['chico', 
'harpo', 'zeppo', 'groucho'] where id=1 and dt=1;
cqlsh:test> select * from t_lists;

 id | dt | sls                                    | ls
----+----+----------------------------------------+-----------------------
  1 |  1 | ['chico', 'harpo', 'zeppo', 'groucho'] | ['foo', 'bar', 'baz']

(1 rows)
cqlsh:test> delete ls[2] from t_lists where id = 1 and dt = 1; -- works
cqlsh:test> delete sls[2] from t_lists where id = 1; -- errors
InvalidRequest: code=2200 [Invalid query] message="Attempted to delete an 
element from a list which is null"
cqlsh:test> select * from t_lists;

 id | dt | sls                                    | ls
----+----+----------------------------------------+-----------------------
  1 |  1 | ['chico', 'harpo', 'zeppo', 'groucho'] | ['foo', 'bar']

(1 rows)

cqlsh:test> update t_lists set ls = ls - ['bar'] where id=1 and dt=1; -- works
cqlsh:test> update t_lists set sls = sls - ['zeppo'] where id=1; -- fails 
silently
cqlsh:test> select * from t_lists;

 id | dt | sls                                    | ls
----+----+----------------------------------------+-----------------------
  1 |  1 | ['chico', 'harpo', 'zeppo', 'groucho'] | ['foo']

(1 rows)

{code}






[jira] [Updated] (CASSANDRA-8675) COPY TO/FROM broken for newline characters

2015-01-23 Thread Lex Lythius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lex Lythius updated CASSANDRA-8675:
---
Description: 
Exporting/importing does not preserve contents when texts containing newline 
(and possibly other) characters are involved:

{code:sql}
cqlsh:test> create table if not exists copytest (id int primary key, t text);
cqlsh:test> insert into copytest (id, t) values (1, 'This has a newline
... character');
cqlsh:test> insert into copytest (id, t) values (2, 'This has a quote " 
character');
cqlsh:test> insert into copytest (id, t) values (3, 'This has a fake tab \t 
character (typed backslash, t)');
cqlsh:test> select * from copytest;

 id | t
----+---------------------------------------------------------
  1 |   This has a newline\ncharacter
  2 |This has a quote " character
  3 | This has a fake tab \t character (entered slash-t text)

(3 rows)

cqlsh:test> copy copytest to '/tmp/copytest.csv';

3 rows exported in 0.034 seconds.
cqlsh:test> copy copytest from '/tmp/copytest.csv';

3 rows imported in 0.005 seconds.
cqlsh:test> select * from copytest;

 id | t
----+-------------------------------------------------------
  1 |  This has a newlinencharacter
  2 |  This has a quote " character
  3 | This has a fake tab \t character (typed backslash, t)

(3 rows)
{code}

I tried replacing \n in the CSV file with \\n, which just expands to \n in the 
table; and with an actual newline character, which fails with an error since it 
prematurely terminates the record.

It seems backslashes are only used to take the following character as a literal.

Until this is fixed, what would be the best way to refactor an old table with a 
new, incompatible structure while keeping its content and name, given that 
tables can't be renamed?
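For comparison, a standard CSV implementation handles embedded newlines by quoting the field rather than escaping it, so the content survives a round trip. A small self-contained Python sketch (illustrative data only, not cqlsh's COPY code):

```python
# Round-trip text containing newlines, quotes and backslashes through CSV.
# Python's csv module quotes fields with embedded newlines, so nothing is
# mangled on re-import -- which is what COPY TO/FROM fails to do here.
import csv
import io

rows = [
    [1, 'This has a newline\ncharacter'],
    [2, 'This has a quote " character'],
    [3, 'This has a fake tab \\t character (typed backslash, t)'],
]

buf = io.StringIO()
csv.writer(buf).writerows(rows)  # newline fields come out quoted, not escaped

buf.seek(0)
back = [[int(r[0]), r[1]] for r in csv.reader(buf)]
assert back == rows  # content is preserved exactly
```

The key point is that RFC-4180-style quoting makes backslash escaping unnecessary, which avoids exactly the \n / \\n ambiguity described above.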


  was:
Exporting/importing does not preserve contents when texts containing newline 
(and possibly other) characters are involved:

{code:sql}
cqlsh:test> create table if not exists copytest (id int primary key, t text);
cqlsh:test> insert into copytest (id, t) values (1, 'This has a newline
... character');
cqlsh:test> insert into copytest (id, t) values (2, 'This has a quote " 
character');
cqlsh:test> insert into copytest (id, t) values (3, 'This has a fake tab \t 
character (typed backslash, t)');
cqlsh:test> select * from copytest;

 id | t
----+---------------------------------------------------------
  1 |   This has a newline\ncharacter
  2 |This has a quote " character
  3 | This has a fake tab \t character (entered slash-t text)

(3 rows)

cqlsh:test> copy copytest to '/tmp/copytest.csv';

3 rows exported in 0.034 seconds.
cqlsh:test> 

cqlsh:test> copy copytest to '/tmp/copytest.csv';

3 rows exported in 0.034 seconds.
cqlsh:test> copy copytest from '/tmp/copytest.csv';

3 rows imported in 0.005 seconds.
cqlsh:test> select * from copytest;

 id | t
----+-------------------------------------------------------
  1 |  This has a newlinencharacter
  2 |  This has a quote " character
  3 | This has a fake tab \t character (typed backslash, t)

(3 rows)
{code}

I tried replacing \n in the CSV file with \\n, which just expands to \n in the 
table; and with an actual newline character, which fails with error since it 
prematurely terminates the record.

It seems backslashes are only used to take the following character as a literal

Until this is fixed, what would be the best way to refactor an old table with a 
new, incompatible structure maintaining its content and name, since we can't 
rename tables?



> COPY TO/FROM broken for newline characters
> ------------------------------------------
>
> Key: CASSANDRA-8675
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8675
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native 
> protocol v3]
> Ubuntu 14.04 64-bit
>Reporter: Lex Lythius
>  Labels: cql
> Attachments: copytest.csv
>
>

[jira] [Updated] (CASSANDRA-8675) COPY TO/FROM broken for newline characters

2015-01-23 Thread Lex Lythius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lex Lythius updated CASSANDRA-8675:
---
Description: 
Exporting/importing does not preserve contents when texts containing newline 
(and possibly other) characters are involved:

{code:sql}
cqlsh:test> create table if not exists copytest (id int primary key, t text);
cqlsh:test> insert into copytest (id, t) values (1, 'This has a newline
... character');
cqlsh:test> insert into copytest (id, t) values (2, 'This has a quote " 
character');
cqlsh:test> insert into copytest (id, t) values (3, 'This has a fake tab \t 
character (typed backslash, t)');
cqlsh:test> select * from copytest;

 id | t
----+---------------------------------------------------------
  1 |   This has a newline\ncharacter
  2 |This has a quote " character
  3 | This has a fake tab \t character (entered slash-t text)

(3 rows)

cqlsh:test> copy copytest to '/tmp/copytest.csv';

3 rows exported in 0.034 seconds.
cqlsh:test> 

cqlsh:test> copy copytest to '/tmp/copytest.csv';

3 rows exported in 0.034 seconds.
cqlsh:test> copy copytest from '/tmp/copytest.csv';

3 rows imported in 0.005 seconds.
cqlsh:test> select * from copytest;

 id | t
----+-------------------------------------------------------
  1 |  This has a newlinencharacter
  2 |  This has a quote " character
  3 | This has a fake tab \t character (typed backslash, t)

(3 rows)
{code}

I tried replacing \n in the CSV file with \\n, which just expands to \n in the 
table; and with an actual newline character, which fails with error since it 
prematurely terminates the record.

It seems backslashes are only used to take the following character as a literal

Until this is fixed, what would be the best way to refactor an old table with a 
new, incompatible structure maintaining its content and name, since we can't 
rename tables?


  was:
Exporting/importing does not preserve contents when texts containing newline 
(and possibly other) characters are involved:

{code:sql}
cqlsh:test> create table if not exists copytest (id int primary key, t text);
cqlsh:test> insert into copytest (id, t) values (1, 'This has a newline
... character');
cqlsh:test> insert into copytest (id, t) values (2, 'This has a quote " 
character');
cqlsh:test> insert into copytest (id, t) values (3, 'This has a fake tab \t 
character (typed backslash, t)');
cqlsh:test> select * from copytest;

 id | t
----+---------------------------------------------------------
  1 |   This has a newline\ncharacter
  2 |This has a quote " character
  3 | This has a fake tab \t character (entered slash-t text)

(3 rows)

cqlsh:test> copy copytest to '/tmp/copytest.csv';

3 rows exported in 0.034 seconds.
cqlsh:test> 

cqlsh:test> copy copytest to '/tmp/copytest.csv';

3 rows exported in 0.034 seconds.
cqlsh:test> copy copytest from '/tmp/copytest.csv';

3 rows imported in 0.005 seconds.
cqlsh:test> select * from copytest;

 id | t
----+-------------------------------------------------------
  1 |  This has a newlinencharacter
  2 |  This has a quote " character
  3 | This has a fake tab \t character (typed backslash, t)

(3 rows)
{code}

I tried replacing \n in the CSV file with \\n, which just expands to \n in the 
table; and with an actual newline character, which fails with error since it 
prematurely terminates the record.

It seems backslashes are only used to take the following character as a literal

Until this is fixed, what would be the best way to refactor an old table with a 
new, incompatible structure maintaining its content and name?



> COPY TO/FROM broken for newline characters
> ------------------------------------------
>
> Key: CASSANDRA-8675
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8675
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native 
> protocol v3]
> Ubuntu 14.04 64-bit
>Reporter: Lex Lythius
>  Labels: cql
> Attachments: copytest.csv
>
>

[jira] [Updated] (CASSANDRA-8675) COPY TO/FROM broken for newline characters

2015-01-23 Thread Lex Lythius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lex Lythius updated CASSANDRA-8675:
---
Attachment: copytest.csv

> COPY TO/FROM broken for newline characters
> ------------------------------------------
>
> Key: CASSANDRA-8675
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8675
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native 
> protocol v3]
> Ubuntu 14.04 64-bit
>Reporter: Lex Lythius
>  Labels: cql
> Attachments: copytest.csv
>
>
> Exporting/importing does not preserve contents when texts containing newline 
> (and possibly other) characters are involved:
> {code:sql}
> cqlsh:test> create table if not exists copytest (id int primary key, t text);
> cqlsh:test> insert into copytest (id, t) values (1, 'This has a newline
> ... character');
> cqlsh:test> insert into copytest (id, t) values (2, 'This has a quote " 
> character');
> cqlsh:test> insert into copytest (id, t) values (3, 'This has a fake tab \t 
> character (typed backslash, t)');
> cqlsh:test> select * from copytest;
>  id | t
> ----+---------------------------------------------------------
>   1 |   This has a newline\ncharacter
>   2 |This has a quote " character
>   3 | This has a fake tab \t character (entered slash-t text)
> (3 rows)
> cqlsh:test> copy copytest to '/tmp/copytest.csv';
> 3 rows exported in 0.034 seconds.
> cqlsh:test> 
> cqlsh:test> copy copytest to '/tmp/copytest.csv';
> 3 rows exported in 0.034 seconds.
> cqlsh:test> copy copytest from '/tmp/copytest.csv';
> 3 rows imported in 0.005 seconds.
> cqlsh:test> select * from copytest;
>  id | t
> ----+-------------------------------------------------------
>   1 |  This has a newlinencharacter
>   2 |  This has a quote " character
>   3 | This has a fake tab \t character (typed backslash, t)
> (3 rows)
> {code}
> I tried replacing \n in the CSV file with \\n, which just expands to \n in 
> the table; and with an actual newline character, which fails with error since 
> it prematurely terminates the record.
> It seems backslashes are only used to take the following character as a 
> literal
> Until this is fixed, what would be the best way to refactor an old table with 
> a new, incompatible structure maintaining its content and name?





[jira] [Updated] (CASSANDRA-8675) COPY TO/FROM broken for newline characters

2015-01-23 Thread Lex Lythius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lex Lythius updated CASSANDRA-8675:
---
Description: 
Exporting/importing does not preserve contents when texts containing newline 
(and possibly other) characters are involved:

{code:sql}
cqlsh:test> create table if not exists copytest (id int primary key, t text);
cqlsh:test> insert into copytest (id, t) values (1, 'This has a newline
... character');
cqlsh:test> insert into copytest (id, t) values (2, 'This has a quote " 
character');
cqlsh:test> insert into copytest (id, t) values (3, 'This has a fake tab \t 
character (typed backslash, t)');
cqlsh:test> select * from copytest;

 id | t
----+---------------------------------------------------------
  1 |   This has a newline\ncharacter
  2 |This has a quote " character
  3 | This has a fake tab \t character (entered slash-t text)

(3 rows)

cqlsh:test> copy copytest to '/tmp/copytest.csv';

3 rows exported in 0.034 seconds.
cqlsh:test> 

cqlsh:test> copy copytest to '/tmp/copytest.csv';

3 rows exported in 0.034 seconds.
cqlsh:test> copy copytest from '/tmp/copytest.csv';

3 rows imported in 0.005 seconds.
cqlsh:test> select * from copytest;

 id | t
----+-------------------------------------------------------
  1 |  This has a newlinencharacter
  2 |  This has a quote " character
  3 | This has a fake tab \t character (typed backslash, t)

(3 rows)
{code}

I tried replacing \n in the CSV file with \\n, which just expands to \n in the 
table; and with an actual newline character, which fails with error since it 
prematurely terminates the record.

It seems backslashes are only used to take the following character as a literal

Until this is fixed, what would be the best way to refactor an old table with a 
new, incompatible structure maintaining its content and name?


  was:
Exporting/importing does not preserve contents when texts containing newline 
(and possibly other) characters are involved:

cqlsh:test> create table if not exists copytest (id int primary key, t text);
cqlsh:test> insert into copytest (id, t) values (1, 'This has a newline
... character');
cqlsh:test> insert into copytest (id, t) values (2, 'This has a quote " 
character');
cqlsh:test> insert into copytest (id, t) values (3, 'This has a fake tab \t 
character (typed backslash, t)');
cqlsh:test> select * from copytest;

 id | t
----+---------------------------------------------------------
  1 |   This has a newline\ncharacter
  2 |This has a quote " character
  3 | This has a fake tab \t character (entered slash-t text)

(3 rows)

cqlsh:test> copy copytest to '/tmp/copytest.csv';

3 rows exported in 0.034 seconds.
cqlsh:test> 

cqlsh:test> copy copytest to '/tmp/copytest.csv';

3 rows exported in 0.034 seconds.
cqlsh:test> copy copytest from '/tmp/copytest.csv';

3 rows imported in 0.005 seconds.
cqlsh:test> select * from copytest;

 id | t
----+-------------------------------------------------------
  1 |  This has a newlinencharacter
  2 |  This has a quote " character
  3 | This has a fake tab \t character (typed backslash, t)

(3 rows)


I tried replacing \n in the CSV file with \\n, which just expands to \n in the 
table; and with an actual newline character, which fails with error since it 
prematurely terminates the record.

It seems backslashes are only used to take the following character as a literal

Until this is fixed, what would be the best way to refactor an old table with a 
new, incompatible structure maintaining its content and name?



> COPY TO/FROM broken for newline characters
> ------------------------------------------
>
> Key: CASSANDRA-8675
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8675
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native 
> protocol v3]
> Ubuntu 14.04 64-bit
>Reporter: Lex Lythius
>  Labels: cql
>

[jira] [Updated] (CASSANDRA-8675) COPY TO/FROM broken for newline characters

2015-01-23 Thread Lex Lythius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lex Lythius updated CASSANDRA-8675:
---
Description: 
Exporting/importing does not preserve contents when texts containing newline 
(and possibly other) characters are involved:

cqlsh:test> create table if not exists copytest (id int primary key, t text);
cqlsh:test> insert into copytest (id, t) values (1, 'This has a newline
... character');
cqlsh:test> insert into copytest (id, t) values (2, 'This has a quote " 
character');
cqlsh:test> insert into copytest (id, t) values (3, 'This has a fake tab \t 
character (typed backslash, t)');
cqlsh:test> select * from copytest;

 id | t
----+---------------------------------------------------------
  1 |   This has a newline\ncharacter
  2 |This has a quote " character
  3 | This has a fake tab \t character (entered slash-t text)

(3 rows)

cqlsh:test> copy copytest to '/tmp/copytest.csv';

3 rows exported in 0.034 seconds.
cqlsh:test> 

cqlsh:test> copy copytest to '/tmp/copytest.csv';

3 rows exported in 0.034 seconds.
cqlsh:test> copy copytest from '/tmp/copytest.csv';

3 rows imported in 0.005 seconds.
cqlsh:test> select * from copytest;

 id | t
----+-------------------------------------------------------
  1 |  This has a newlinencharacter
  2 |  This has a quote " character
  3 | This has a fake tab \t character (typed backslash, t)

(3 rows)


I tried replacing \n in the CSV file with \\n, which just expands to \n in the 
table; and with an actual newline character, which fails with error since it 
prematurely terminates the record.

It seems backslashes are only used to take the following character as a literal

Until this is fixed, what would be the best way to refactor an old table with a 
new, incompatible structure maintaining its content and name?


  was:
Exporting/importing does not preserve contents when texts containing newline 
(and possibly other) characters are involved:

{{code}}
cqlsh:test> create table if not exists copytest (id int primary key, t text);
cqlsh:test> insert into copytest (id, t) values (1, 'This has a newline
... character');
cqlsh:test> insert into copytest (id, t) values (2, 'This has a quote " 
character');
cqlsh:test> insert into copytest (id, t) values (3, 'This has a fake tab \t 
character (typed backslash, t)');
cqlsh:test> select * from copytest;

 id | t
----+---------------------------------------------------------
  1 |   This has a newline\ncharacter
  2 |This has a quote " character
  3 | This has a fake tab \t character (entered slash-t text)

(3 rows)

cqlsh:test> copy copytest to '/tmp/copytest.csv';

3 rows exported in 0.034 seconds.
cqlsh:test> 

cqlsh:test> copy copytest to '/tmp/copytest.csv';

3 rows exported in 0.034 seconds.
cqlsh:test> copy copytest from '/tmp/copytest.csv';

3 rows imported in 0.005 seconds.
cqlsh:test> select * from copytest;

 id | t
----+-------------------------------------------------------
  1 |  This has a newlinencharacter
  2 |  This has a quote " character
  3 | This has a fake tab \t character (typed backslash, t)

(3 rows)
{{/code}}

I tried replacing \n in the CSV file with \\n, which just expands to \n in the 
table; and with an actual newline character, which fails with error since it 
prematurely terminates the record.

It seems backslashes are only used to take the following character as a literal

Until this is fixed, what would be the best way to refactor an old table with a 
new, incompatible structure maintaining its content and name?



> COPY TO/FROM broken for newline characters
> ------------------------------------------
>
> Key: CASSANDRA-8675
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8675
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native 
> protocol v3]
> Ubuntu 14.04 64-bit
>Reporter: Lex Lythius
>  Labels: cql
>

[jira] [Created] (CASSANDRA-8675) COPY TO/FROM broken for newline characters

2015-01-23 Thread Lex Lythius (JIRA)
Lex Lythius created CASSANDRA-8675:
--

 Summary: COPY TO/FROM broken for newline characters
 Key: CASSANDRA-8675
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8675
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native 
protocol v3]
Ubuntu 14.04 64-bit
Reporter: Lex Lythius


Exporting/importing does not preserve contents when texts containing newline 
(and possibly other) characters are involved:

{{code}}
cqlsh:test> create table if not exists copytest (id int primary key, t text);
cqlsh:test> insert into copytest (id, t) values (1, 'This has a newline
... character');
cqlsh:test> insert into copytest (id, t) values (2, 'This has a quote " 
character');
cqlsh:test> insert into copytest (id, t) values (3, 'This has a fake tab \t 
character (typed backslash, t)');
cqlsh:test> select * from copytest;

 id | t
----+---------------------------------------------------------
  1 |   This has a newline\ncharacter
  2 |This has a quote " character
  3 | This has a fake tab \t character (entered slash-t text)

(3 rows)

cqlsh:test> copy copytest to '/tmp/copytest.csv';

3 rows exported in 0.034 seconds.
cqlsh:test> 

cqlsh:test> copy copytest to '/tmp/copytest.csv';

3 rows exported in 0.034 seconds.
cqlsh:test> copy copytest from '/tmp/copytest.csv';

3 rows imported in 0.005 seconds.
cqlsh:test> select * from copytest;

 id | t
----+-------------------------------------------------------
  1 |  This has a newlinencharacter
  2 |  This has a quote " character
  3 | This has a fake tab \t character (typed backslash, t)

(3 rows)
{{/code}}

I tried replacing \n in the CSV file with \\n, which just expands to \n in the 
table; and with an actual newline character, which fails with error since it 
prematurely terminates the record.

It seems backslashes are only used to take the following character as a literal

Until this is fixed, what would be the best way to refactor an old table with a 
new, incompatible structure maintaining its content and name?






[jira] [Commented] (CASSANDRA-3025) PHP/PDO driver for Cassandra CQL

2015-01-06 Thread Lex Lythius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14266240#comment-14266240
 ] 

Lex Lythius commented on CASSANDRA-3025:


@Michael Yes, I've checked nearly all the drivers listed in that 
PlanetCassandra page.

While Cassandra JIRA != DataStax, there is considerable overlap between 
Apache and DataStax regarding Cassandra. My goal is to get some indication from 
either of the main driving forces behind Cassandra.

@Alex's reply sheds some light on this matter: it will be a C++-built wrapper 
around the official C++ driver, which is native protocol-based. Will it be 
PDO-based as well? (with several C*-specific added features, to be sure). The 
one I've been contributing to is, but it uses the deprecated Thrift interface 
and, apart from a few bugs, it has no support for user-defined types and tuples.

I will be happy to contribute in my pretty modest capacity as C++ developer to 
a C* PHP driver -- I would just like to know I work in the right direction.

If this JIRA is not the right place to bring the issue to the table, where 
would that be?


> PHP/PDO driver for Cassandra CQL
> 
>
> Key: CASSANDRA-3025
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3025
> Project: Cassandra
>  Issue Type: New Feature
>  Components: API
>Reporter: Mikko Koppanen
>Assignee: Mikko Koppanen
>  Labels: php
> Attachments: pdo_cassandra-0.1.0.tgz, pdo_cassandra-0.1.1.tgz, 
> pdo_cassandra-0.1.2.tgz, pdo_cassandra-0.1.3.tgz, pdo_cassandra-0.2.0.tgz, 
> pdo_cassandra-0.2.1.tgz, php_test_results_20110818_2317.txt
>
>
> Hello,
> attached is the initial version of the PDO driver for Cassandra CQL language. 
> This is a native PHP extension written in what I would call a combination of 
> C and C++, due to PHP being C. The thrift API used is the C++.
> The API looks roughly following:
> {code}
>  $db = new PDO('cassandra:host=127.0.0.1;port=9160');
> $db->exec ("CREATE KEYSPACE mytest with strategy_class = 'SimpleStrategy' and 
> strategy_options:replication_factor=1;");
> $db->exec ("USE mytest");
> $db->exec ("CREATE COLUMNFAMILY users (
>   my_key varchar PRIMARY KEY,
>   full_name varchar );");
>   
> $stmt = $db->prepare ("INSERT INTO users (my_key, full_name) VALUES (:key, 
> :full_name);");
> $stmt->execute (array (':key' => 'mikko', ':full_name' => 'Mikko K' ));
> {code}
> Currently prepared statements are emulated on the client side but I 
> understand that there is a plan to add prepared statements to Cassandra CQL 
> API as well. I will add this feature in to the extension as soon as they are 
> implemented.
> Additional documentation can be found in github 
> https://github.com/mkoppanen/php-pdo_cassandra, in the form of rendered 
> MarkDown file. Tests are currently not included in the package file and they 
> can be found in the github for now as well.
> I have created documentation in docbook format as well, but have not yet 
> rendered it.
> Comments and feedback are welcome.
> Thanks,
> Mikko
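The client-side prepared-statement emulation mentioned in the quoted 
description can be sketched as follows. This is a hypothetical illustration in 
Python (the extension itself is C/C++); `emulate_prepare` and its escaping 
rules are assumptions for illustration, not the extension's actual code.

```python
# Hypothetical sketch (Python, for illustration only) of client-side
# prepared-statement emulation: named placeholders are replaced with
# escaped CQL literals before the full statement is sent to the server.
import re

def emulate_prepare(template, params):
    """Substitute :name placeholders in a CQL template with literals."""
    def quote(value):
        # Numbers pass through unquoted; strings become quoted literals
        # with embedded single quotes doubled, as CQL requires.
        if isinstance(value, (int, float)) and not isinstance(value, bool):
            return str(value)
        return "'" + str(value).replace("'", "''") + "'"

    # \w covers letters, digits and underscores, so :full_name matches whole.
    return re.sub(r":\w+", lambda m: quote(params[m.group(0)]), template)

cql = emulate_prepare(
    "INSERT INTO users (my_key, full_name) VALUES (:key, :full_name);",
    {":key": "mikko", ":full_name": "Mikko K"},
)
# cql == "INSERT INTO users (my_key, full_name) VALUES ('mikko', 'Mikko K');"
```

Real drivers also need type-aware encoding (blobs, collections, timestamps); 
literal substitution with quote-doubling is just the minimal core of the 
technique.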



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-3025) PHP/PDO driver for Cassandra CQL

2015-01-05 Thread Lex Lythius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14265088#comment-14265088
 ] 

Lex Lythius edited comment on CASSANDRA-3025 at 1/5/15 9:10 PM:


Would you mind reopening this bug? cassandra-pdo (CQL | PHP) has been dead for 
two and a half years.

The lack of a clear official roadmap towards a PHP driver for Cassandra has 
resulted in a long list of, regrettably, suboptimal work(arounds), none of 
which (AFAIK) is production-ready:

- YACassandraPDO: forked from the deceased cassandra-pdo; a CQL3-only, C++, 
thrift-based PDO driver (I've contributed some patches to this project without 
really being a C++ developer). It still has some issues with parameter binding 
and value parsing, and it proves tricky to build from source (because of 
Thrift) and on Mac OS X. Works with C* 1.x, but has issues with C* 2.x.
- PHPCassa: probably the most stable solution currently, but it is not based 
around CQL and still uses Thrift.
- php-cassandra-binary: a pure-PHP, native-protocol driver that apparently has 
issues with return values when NULLs are involved.
- php-cassandra: a non-PDO wrapper around a deprecated branch of the official 
C++ native protocol driver. It reportedly has problems with collections.
- CQLSÍ: a showcase of what desperation for a CQL driver can produce: it wraps 
CQLSH and parses its output. Not great for performance, floats or timestamps, 
and probably not for NULL values either.
- Many of the others are pretty outdated.

I guess DataStax, understandably, lacks the resources to provide an official 
driver for PHP. But I think it would be greatly beneficial to everyone if it 
could provide *guidance and coordination of efforts* so there's not a bunch of 
us doing parallel, duplicate, incompatible and hardly production-ready attempts 
to remedy this.

PS: My apologies if I've made any erroneous statements about existing driver 
projects.


was (Author: lexlythius):
Would you mind reopening this bug? cassandra-pdo (CQL | PHP) has been dead for 
two and a half years.

The lack of a clear official roadmap towards a PHP driver for Cassandra has 
resulted in a long list of, regrettably, suboptimal work(arounds), none of 
which (AFAIK) is production-ready:

- YACassandraPDO: forked from the deceased cassandra-pdo; a CQL3-only, C++, 
thrift-based PDO driver (I've contributed some patches to this project without 
really being a C++ developer). It still has some issues with parameter binding 
and value parsing, and it proves tricky to build from source (because of 
Thrift) and on Mac OS X. Works with C* 1.x, but has issues with C* 2.x.
- PHPCassa: probably the most stable solution currently, but it is not based 
around CQL and still uses Thrift.
- php-cassandra-binary: a pure-PHP, native-protocol driver that apparently has 
issues with return values when NULLs are involved.
- php-cassandra: a non-PDO wrapper around a deprecated branch of the official 
C++ native protocol driver. It reportedly has problems with collections.
- CQLSÍ: a showcase of what desperation for a CQL driver can produce: it wraps 
CQLSH and parses its output. Not great for performance, floats or timestamps, 
and probably not for NULL values either.
- Many of the others are pretty outdated.

I guess DataStax, understandably, lacks the resources to provide an official 
driver for PHP. But I think it would be greatly beneficial to everyone if it 
could provide *guidance and coordination of efforts* so there's not a bunch of 
us doing parallel, duplicate, incompatible and hardly production-ready attempts 
to remedy this.


> PHP/PDO driver for Cassandra CQL
> 
>
> Key: CASSANDRA-3025
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3025
> Project: Cassandra
>  Issue Type: New Feature
>  Components: API
>Reporter: Mikko Koppanen
>Assignee: Mikko Koppanen
>  Labels: php
> Attachments: pdo_cassandra-0.1.0.tgz, pdo_cassandra-0.1.1.tgz, 
> pdo_cassandra-0.1.2.tgz, pdo_cassandra-0.1.3.tgz, pdo_cassandra-0.2.0.tgz, 
> pdo_cassandra-0.2.1.tgz, php_test_results_20110818_2317.txt
>
>
> Hello,
> attached is the initial version of the PDO driver for the Cassandra CQL 
> language. This is a native PHP extension written in what I would call a 
> combination of C and C++, due to PHP itself being written in C. The Thrift 
> API used is the C++ one.
> The API looks roughly like the following:
> {code}
>  $db = new PDO('cassandra:host=127.0.0.1;port=9160');
> $db->exec ("CREATE KEYSPACE mytest with strategy_class = 'SimpleStrategy' and 
> strategy_options:replication_factor=1;");
> $db->exec ("USE mytest");
> $db->exec ("CREATE COLUMNFAMILY users (
>   my_key varchar PRIMARY KEY,
>   full_name varchar );");
>   
> $stmt = $db->prepare ("INSERT INTO users (my_key

[jira] [Commented] (CASSANDRA-3025) PHP/PDO driver for Cassandra CQL

2015-01-05 Thread Lex Lythius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14265088#comment-14265088
 ] 

Lex Lythius commented on CASSANDRA-3025:


Would you mind reopening this bug? cassandra-pdo (CQL | PHP) has been dead for 
two and a half years.

The lack of a clear official roadmap towards a PHP driver for Cassandra has 
resulted in a long list of, regrettably, suboptimal work(arounds), none of 
which (AFAIK) is production-ready:

- YACassandraPDO: forked from the deceased cassandra-pdo; a CQL3-only, C++, 
thrift-based PDO driver (I've contributed some patches to this project without 
really being a C++ developer). It still has some issues with parameter binding 
and value parsing, and it proves tricky to build from source (because of 
Thrift) and on Mac OS X. Works with C* 1.x, but has issues with C* 2.x.
- PHPCassa: probably the most stable solution currently, but it is not based 
around CQL and still uses Thrift.
- php-cassandra-binary: a pure-PHP, native-protocol driver that apparently has 
issues with return values when NULLs are involved.
- php-cassandra: a non-PDO wrapper around a deprecated branch of the official 
C++ native protocol driver. It reportedly has problems with collections.
- CQLSÍ: a showcase of what desperation for a CQL driver can produce: it wraps 
CQLSH and parses its output. Not great for performance, floats or timestamps, 
and probably not for NULL values either.
- Many of the others are pretty outdated.

I guess DataStax, understandably, lacks the resources to provide an official 
driver for PHP. But I think it would be greatly beneficial to everyone if it 
could provide *guidance and coordination of efforts* so there's not a bunch of 
us doing parallel, duplicate, incompatible and hardly production-ready attempts 
to remedy this.


> PHP/PDO driver for Cassandra CQL
> 
>
> Key: CASSANDRA-3025
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3025
> Project: Cassandra
>  Issue Type: New Feature
>  Components: API
>Reporter: Mikko Koppanen
>Assignee: Mikko Koppanen
>  Labels: php
> Attachments: pdo_cassandra-0.1.0.tgz, pdo_cassandra-0.1.1.tgz, 
> pdo_cassandra-0.1.2.tgz, pdo_cassandra-0.1.3.tgz, pdo_cassandra-0.2.0.tgz, 
> pdo_cassandra-0.2.1.tgz, php_test_results_20110818_2317.txt
>
>
> Hello,
> attached is the initial version of the PDO driver for the Cassandra CQL 
> language. This is a native PHP extension written in what I would call a 
> combination of C and C++, due to PHP itself being written in C. The Thrift 
> API used is the C++ one.
> The API looks roughly like the following:
> {code}
>  $db = new PDO('cassandra:host=127.0.0.1;port=9160');
> $db->exec ("CREATE KEYSPACE mytest with strategy_class = 'SimpleStrategy' and 
> strategy_options:replication_factor=1;");
> $db->exec ("USE mytest");
> $db->exec ("CREATE COLUMNFAMILY users (
>   my_key varchar PRIMARY KEY,
>   full_name varchar );");
>   
> $stmt = $db->prepare ("INSERT INTO users (my_key, full_name) VALUES (:key, 
> :full_name);");
> $stmt->execute (array (':key' => 'mikko', ':full_name' => 'Mikko K' ));
> {code}
> Currently, prepared statements are emulated on the client side, but I 
> understand that there is a plan to add prepared statements to the Cassandra 
> CQL API as well. I will add this feature into the extension as soon as they 
> are implemented.
> Additional documentation can be found on GitHub at 
> https://github.com/mkoppanen/php-pdo_cassandra, in the form of a rendered 
> MarkDown file. Tests are currently not included in the package file; for now 
> they can be found in the GitHub repository as well.
> I have created documentation in docbook format as well, but have not yet 
> rendered it.
> Comments and feedback are welcome.
> Thanks,
> Mikko



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8424) Collection filtering not working when using PK

2014-12-05 Thread Lex Lythius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14235540#comment-14235540
 ] 

Lex Lythius commented on CASSANDRA-8424:


Thanks for the explanation, Sylvain. I'll wait for 3.0 then.
Just so that this doesn't get overlooked when refactoring takes place, I added 
the corresponding "#8099 blocks #8424" link.
Cheers!

> Collection filtering not working when using PK
> --
>
> Key: CASSANDRA-8424
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8424
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native 
> protocol v3]
> Ubuntu 14.04.5 64-bit
>Reporter: Lex Lythius
>Priority: Minor
>  Labels: collections
>
> I can do queries for collection keys/values as detailed in 
> http://www.datastax.com/dev/blog/cql-in-2-1 without problems. Even without 
> having a secondary index on the collection it will work (with {{ALLOW 
> FILTERING}}) but only as long as the query is performed through a *secondary* 
> index. If you go through PK it won't. Of course full-scan filtering query is 
> not allowed.
> As an example, I created this table:
> {code:SQL}
> CREATE TABLE test.uloc9 (
> usr int,
> type int,
> gb ascii,
> gb_q ascii,
> info map<text, text>,
> lat float,
> lng float,
> q int,
> traits set<text>,
> ts timestamp,
> PRIMARY KEY (usr, type)
> );
> CREATE INDEX uloc9_gb ON test.uloc9 (gb);
> CREATE INDEX uloc9_gb_q ON test.uloc9 (gb_q);
> CREATE INDEX uloc9_traits ON test.uloc9 (traits);
> {code}
> then added some data and queried:
> {code}
> cqlsh:test> select * from uloc9 where gb='/nw' and info contains 'argentina' 
> allow filtering;
>  usr | type | gb  | gb_q  | info | lat
>   | lng  | q | traits | ts
> -+--+-+---+--+--+--+---++--
>1 |0 | /nw | /nw:1 | {'ci': 'san antonio', 'co': 'argentina'} | 
> -40.74000168 | -65.8305 | 1 | {'r:photographer'} | 2014-11-04 
> 18:20:29-0300
>1 |1 | /nw | /nw:1 | {'ci': 'san antonio', 'co': 'argentina'} | 
> -40.75799942 | -66.00800323 | 1 | {'r:photographer'} | 2014-11-04 
> 18:20:29-0300
> (2 rows)
> cqlsh:test> select * from uloc9 where usr=1 and info contains 'argentina' 
> allow filtering;
> code=2200 [Invalid query] message="No indexed columns present in by-columns 
> clause with Equal operator"
> cqlsh:test> select * from uloc9 where usr=1 and type=0 and info contains 
> 'argentina' allow filtering;
> code=2200 [Invalid query] message="No indexed columns present in by-columns 
> clause with Equal operator"
> {code}
> Maybe I got things wrong, but I don't see any reasons why collection 
> filtering should fail when using PK while it succeeds using any secondary 
> index (related or otherwise).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8425) Add full entry indexing capability for maps

2014-12-04 Thread Lex Lythius (JIRA)
Lex Lythius created CASSANDRA-8425:
--

 Summary: Add full entry indexing capability for maps
 Key: CASSANDRA-8425
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8425
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Lex Lythius
Priority: Minor


Since C* 2.1 we're able to index map keys or map values and query them using 
{{CONTAINS KEY}} and {{CONTAINS}} respectively.

However, some use cases require being able to filter for a specific key/value 
combination. The syntax might be something along the lines of 
{code:sql}
SELECT * FROM table WHERE map['country'] = 'usa';
{code}
or
{code:sql}
SELECT * FROM table WHERE map CONTAINS ENTRY { 'country': 'usa' };
{code}

Of course, right now we can have the client refine the results from
{code:sql}
SELECT * FROM table WHERE map CONTAINS 'usa';
{code}
or
{code:sql}
SELECT * FROM table WHERE map CONTAINS KEY 'country';
{code}
but I believe this would add a good deal of flexibility.
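The client-side refinement described above can be sketched like this; a 
hypothetical Python helper (`refine_by_entry` is an assumption for 
illustration, not part of any driver) that keeps only the rows whose map 
column holds the exact key/value entry:

```python
# Hypothetical sketch of the client-side refinement described above:
# query CONTAINS / CONTAINS KEY on the server, then keep only the rows
# whose map column holds the exact key -> value entry.
def refine_by_entry(rows, column, key, value):
    """Keep rows whose map column contains the entry key -> value."""
    return [row for row in rows if row.get(column, {}).get(key) == value]

# Rows shaped as a driver might return them, with the map column as a dict.
rows = [
    {"id": 1, "info": {"country": "usa", "city": "nyc"}},
    {"id": 2, "info": {"country": "argentina"}},
    {"id": 3, "info": {"city": "usa"}},  # 'usa' present, but under another key
]
matches = refine_by_entry(rows, "info", "country", "usa")
# matches contains only the id=1 row
```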



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8424) Collection filtering not working when using PK

2014-12-04 Thread Lex Lythius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lex Lythius updated CASSANDRA-8424:
---
Labels: collections  (was: )

> Collection filtering not working when using PK
> --
>
> Key: CASSANDRA-8424
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8424
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native 
> protocol v3]
> Ubuntu 14.04.5 64-bit
>Reporter: Lex Lythius
>  Labels: collections
> Fix For: 2.1.3
>
>
> I can do queries for collection keys/values as detailed in 
> http://www.datastax.com/dev/blog/cql-in-2-1 without problems. Even without 
> having a secondary index on the collection it will work (with {{ALLOW 
> FILTERING}}) but only as long as the query is performed through a *secondary* 
> index. If you go through PK it won't. Of course full-scan filtering query is 
> not allowed.
> As an example, I created this table:
> {code:SQL}
> CREATE TABLE test.uloc9 (
> usr int,
> type int,
> gb ascii,
> gb_q ascii,
> info map<text, text>,
> lat float,
> lng float,
> q int,
> traits set<text>,
> ts timestamp,
> PRIMARY KEY (usr, type)
> );
> CREATE INDEX uloc9_gb ON test.uloc9 (gb);
> CREATE INDEX uloc9_gb_q ON test.uloc9 (gb_q);
> CREATE INDEX uloc9_traits ON test.uloc9 (traits);
> {code}
> then added some data and queried:
> {code}
> cqlsh:test> select * from uloc9 where gb='/nw' and info contains 'argentina' 
> allow filtering;
>  usr | type | gb  | gb_q  | info | lat
>   | lng  | q | traits | ts
> -+--+-+---+--+--+--+---++--
>1 |0 | /nw | /nw:1 | {'ci': 'san antonio', 'co': 'argentina'} | 
> -40.74000168 | -65.8305 | 1 | {'r:photographer'} | 2014-11-04 
> 18:20:29-0300
>1 |1 | /nw | /nw:1 | {'ci': 'san antonio', 'co': 'argentina'} | 
> -40.75799942 | -66.00800323 | 1 | {'r:photographer'} | 2014-11-04 
> 18:20:29-0300
> (2 rows)
> cqlsh:test> select * from uloc9 where usr=1 and info contains 'argentina' 
> allow filtering;
> code=2200 [Invalid query] message="No indexed columns present in by-columns 
> clause with Equal operator"
> cqlsh:test> select * from uloc9 where usr=1 and type=0 and info contains 
> 'argentina' allow filtering;
> code=2200 [Invalid query] message="No indexed columns present in by-columns 
> clause with Equal operator"
> {code}
> Maybe I got things wrong, but I don't see any reasons why collection 
> filtering should fail when using PK while it succeeds using any secondary 
> index (related or otherwise).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8424) Collection filtering not working when using PK

2014-12-04 Thread Lex Lythius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lex Lythius updated CASSANDRA-8424:
---
Summary: Collection filtering not working when using PK  (was: Secondary 
index on column not working when using PK)

> Collection filtering not working when using PK
> --
>
> Key: CASSANDRA-8424
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8424
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native 
> protocol v3]
> Ubuntu 14.04.5 64-bit
>Reporter: Lex Lythius
> Fix For: 2.1.3
>
>
> I can do queries for collection keys/values as detailed in 
> http://www.datastax.com/dev/blog/cql-in-2-1 without problems. Even without 
> having a secondary index on the collection it will work (with {{ALLOW 
> FILTERING}}) but only as long as the query is performed through a *secondary* 
> index. If you go through PK it won't. Of course full-scan filtering query is 
> not allowed.
> As an example, I created this table:
> {code:SQL}
> CREATE TABLE test.uloc9 (
> usr int,
> type int,
> gb ascii,
> gb_q ascii,
> info map<text, text>,
> lat float,
> lng float,
> q int,
> traits set<text>,
> ts timestamp,
> PRIMARY KEY (usr, type)
> );
> CREATE INDEX uloc9_gb ON test.uloc9 (gb);
> CREATE INDEX uloc9_gb_q ON test.uloc9 (gb_q);
> CREATE INDEX uloc9_traits ON test.uloc9 (traits);
> {code}
> then added some data and queried:
> {code}
> cqlsh:test> select * from uloc9 where gb='/nw' and info contains 'argentina' 
> allow filtering;
>  usr | type | gb  | gb_q  | info | lat
>   | lng  | q | traits | ts
> -+--+-+---+--+--+--+---++--
>1 |0 | /nw | /nw:1 | {'ci': 'san antonio', 'co': 'argentina'} | 
> -40.74000168 | -65.8305 | 1 | {'r:photographer'} | 2014-11-04 
> 18:20:29-0300
>1 |1 | /nw | /nw:1 | {'ci': 'san antonio', 'co': 'argentina'} | 
> -40.75799942 | -66.00800323 | 1 | {'r:photographer'} | 2014-11-04 
> 18:20:29-0300
> (2 rows)
> cqlsh:test> select * from uloc9 where usr=1 and info contains 'argentina' 
> allow filtering;
> code=2200 [Invalid query] message="No indexed columns present in by-columns 
> clause with Equal operator"
> cqlsh:test> select * from uloc9 where usr=1 and type=0 and info contains 
> 'argentina' allow filtering;
> code=2200 [Invalid query] message="No indexed columns present in by-columns 
> clause with Equal operator"
> {code}
> Maybe I got things wrong, but I don't see any reasons why collection 
> filtering should fail when using PK while it succeeds using any secondary 
> index (related or otherwise).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8424) Secondary index on column not working when using PK

2014-12-04 Thread Lex Lythius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234571#comment-14234571
 ] 

Lex Lythius commented on CASSANDRA-8424:


By the way, if this happens to be an unintended filtering capability rather 
than a bug, it is a very useful feature indeed.

> Secondary index on column not working when using PK
> ---
>
> Key: CASSANDRA-8424
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8424
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native 
> protocol v3]
> Ubuntu 14.04.5 64-bit
>Reporter: Lex Lythius
> Fix For: 2.1.3
>
>
> I can do queries for collection keys/values as detailed in 
> http://www.datastax.com/dev/blog/cql-in-2-1 without problems. Even without 
> having a secondary index on the collection it will work (with {{ALLOW 
> FILTERING}}) but only as long as the query is performed through a *secondary* 
> index. If you go through PK it won't. Of course full-scan filtering query is 
> not allowed.
> As an example, I created this table:
> {code:SQL}
> CREATE TABLE test.uloc9 (
> usr int,
> type int,
> gb ascii,
> gb_q ascii,
> info map<text, text>,
> lat float,
> lng float,
> q int,
> traits set<text>,
> ts timestamp,
> PRIMARY KEY (usr, type)
> );
> CREATE INDEX uloc9_gb ON test.uloc9 (gb);
> CREATE INDEX uloc9_gb_q ON test.uloc9 (gb_q);
> CREATE INDEX uloc9_traits ON test.uloc9 (traits);
> {code}
> then added some data and queried:
> {code}
> cqlsh:test> select * from uloc9 where gb='/nw' and info contains 'argentina' 
> allow filtering;
>  usr | type | gb  | gb_q  | info | lat
>   | lng  | q | traits | ts
> -+--+-+---+--+--+--+---++--
>1 |0 | /nw | /nw:1 | {'ci': 'san antonio', 'co': 'argentina'} | 
> -40.74000168 | -65.8305 | 1 | {'r:photographer'} | 2014-11-04 
> 18:20:29-0300
>1 |1 | /nw | /nw:1 | {'ci': 'san antonio', 'co': 'argentina'} | 
> -40.75799942 | -66.00800323 | 1 | {'r:photographer'} | 2014-11-04 
> 18:20:29-0300
> (2 rows)
> cqlsh:test> select * from uloc9 where usr=1 and info contains 'argentina' 
> allow filtering;
> code=2200 [Invalid query] message="No indexed columns present in by-columns 
> clause with Equal operator"
> cqlsh:test> select * from uloc9 where usr=1 and type=0 and info contains 
> 'argentina' allow filtering;
> code=2200 [Invalid query] message="No indexed columns present in by-columns 
> clause with Equal operator"
> {code}
> Maybe I got things wrong, but I don't see any reasons why collection 
> filtering should fail when using PK while it succeeds using any secondary 
> index (related or otherwise).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8424) Secondary index on column not working when using PK

2014-12-04 Thread Lex Lythius (JIRA)
Lex Lythius created CASSANDRA-8424:
--

 Summary: Secondary index on column not working when using PK
 Key: CASSANDRA-8424
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8424
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native 
protocol v3]
Ubuntu 14.04.5 64-bit
Reporter: Lex Lythius


I can do queries for collection keys/values as detailed in 
http://www.datastax.com/dev/blog/cql-in-2-1 without problems. Even without a 
secondary index on the collection itself, the query will work (with {{ALLOW 
FILTERING}}), but only as long as it goes through *some* secondary index. If 
you go through the PK, it won't. Of course, a full-scan filtering query is not 
allowed.

As an example, I created this table:

{code:SQL}
CREATE TABLE test.uloc9 (
usr int,
type int,
gb ascii,
gb_q ascii,
info map<text, text>,
lat float,
lng float,
q int,
traits set<text>,
ts timestamp,
PRIMARY KEY (usr, type)
);
CREATE INDEX uloc9_gb ON test.uloc9 (gb);
CREATE INDEX uloc9_gb_q ON test.uloc9 (gb_q);
CREATE INDEX uloc9_traits ON test.uloc9 (traits);
{code}
then added some data and queried:
{code}
cqlsh:test> select * from uloc9 where gb='/nw' and info contains 'argentina' 
allow filtering;

 usr | type | gb  | gb_q  | info | lat  
| lng  | q | traits | ts
-+--+-+---+--+--+--+---++--
   1 |0 | /nw | /nw:1 | {'ci': 'san antonio', 'co': 'argentina'} | 
-40.74000168 | -65.8305 | 1 | {'r:photographer'} | 2014-11-04 18:20:29-0300
   1 |1 | /nw | /nw:1 | {'ci': 'san antonio', 'co': 'argentina'} | 
-40.75799942 | -66.00800323 | 1 | {'r:photographer'} | 2014-11-04 18:20:29-0300

(2 rows)
cqlsh:test> select * from uloc9 where usr=1 and info contains 'argentina' allow 
filtering;
code=2200 [Invalid query] message="No indexed columns present in by-columns 
clause with Equal operator"
cqlsh:test> select * from uloc9 where usr=1 and type=0 and info contains 
'argentina' allow filtering;
code=2200 [Invalid query] message="No indexed columns present in by-columns 
clause with Equal operator"
{code}

Maybe I got things wrong, but I don't see any reason why collection filtering 
should fail when using the PK while it succeeds using any secondary index 
(related or otherwise).
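Until this is resolved, the rejection can be worked around client-side: fetch 
the rows by primary key and emulate {{CONTAINS}} locally. A minimal sketch in 
Python (`contains_filter` and the row shapes are assumptions for illustration, 
not driver code):

```python
# Hypothetical workaround sketch: since "WHERE usr=1 AND info CONTAINS ..."
# is rejected, fetch the rows by primary key and emulate CONTAINS on the
# collection column client-side.
def contains_filter(rows, column, needle):
    """Keep rows whose collection column contains needle (maps: values)."""
    hits = []
    for row in rows:
        coll = row.get(column)
        # For maps, CONTAINS matches values; sets/lists are scanned directly.
        values = coll.values() if isinstance(coll, dict) else (coll or ())
        if needle in values:
            hits.append(row)
    return hits

# A partition as fetched with "WHERE usr = 1" (shapes assumed).
partition = [
    {"usr": 1, "type": 0, "info": {"ci": "san antonio", "co": "argentina"}},
    {"usr": 1, "type": 1, "info": {"ci": "viedma", "co": "argentina"}},
    {"usr": 1, "type": 2, "info": {"co": "uruguay"}},
]
hits = contains_filter(partition, "info", "argentina")
# the two rows whose info map contains 'argentina' remain
```

This trades extra transferred rows for the missing server-side filter, which 
is acceptable here because the PK already narrows the scan to one partition.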




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-4511) Secondary index support for CQL3 collections

2014-11-06 Thread Lex Lythius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200569#comment-14200569
 ] 

Lex Lythius commented on CASSANDRA-4511:


Note that CQL for Cassandra 2.x documentation (page 48) still says:

{quote}Currently, you cannot create an index on a column of type map, set, or 
list.{quote}

whereas CREATE INDEX syntax is accurate:

{code}
CREATE CUSTOM INDEX IF NOT EXISTS index_name
ON keyspace_name.table_name ( KEYS ( column_name ) )
(USING class_name) (WITH OPTIONS = map)
{code}

> Secondary index support for CQL3 collections 
> -
>
> Key: CASSANDRA-4511
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4511
> Project: Cassandra
>  Issue Type: Improvement
>Affects Versions: 1.2.0 beta 1
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 2.1 beta1
>
> Attachments: 4511.txt
>
>
> We should allow to 2ndary index on collections. A typical use case would be 
> to add a 'tag set' to say a user profile and to query users based on 
> what tag they have.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7588) cqlsh error for query against collection index - list index out of range

2014-11-05 Thread Lex Lythius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14199150#comment-14199150
 ] 

Lex Lythius commented on CASSANDRA-7588:


This still happens in version 2.1.1, CQLSH 5.0.1.
{code}
> SELECT * from uloc8 where info2 CONTAINS KEY 'spd';
Traceback (most recent call last):
  File "/usr/bin/cqlsh", line 861, in onecmd
self.handle_statement(st, statementtext)
  File "/usr/bin/cqlsh", line 901, in handle_statement
return self.handle_parse_error(cmdword, tokens, parsed, srcstr)
  File "/usr/bin/cqlsh", line 910, in handle_parse_error
return self.perform_statement(cqlruleset.cql_extract_orig(tokens, srcstr))
  File "/usr/bin/cqlsh", line 935, in perform_statement
result = self.perform_simple_statement(stmt)
  File "/usr/bin/cqlsh", line 968, in perform_simple_statement
self.print_result(rows, self.parse_for_table_meta(statement.query_string))
  File "/usr/bin/cqlsh", line 946, in parse_for_table_meta
parsed = cqlruleset.cql_parse(query_string)[1]
IndexError: list index out of range
{code}

> cqlsh error for query against collection index - list index out of range
> 
>
> Key: CASSANDRA-7588
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7588
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: dan jatnieks
>Assignee: Robert Stupp
>  Labels: cqlsh
> Fix For: 2.1.1
>
> Attachments: 7588-cqlsh-6910-contains-allow-filtering.txt
>
>
> This worked in 2.1 RC1
> {noformat}
> Connected to Test Cluster at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 2.1.0-rc1-SNAPSHOT | CQL spec 3.1.7 | Native 
> protocol v3]
> Use HELP for help.
> cqlsh> use k1;
> cqlsh:k1> SELECT id, description FROM products WHERE categories CONTAINS 
> 'hdtv';
>  id| description
> ---+-
>  29412 |32-inch LED HDTV (black)
>  34134 | 120-inch 1080p 3D plasma TV
> (2 rows)
> {noformat}
> But fails on 2.1:
> {noformat}
> Connected to Test Cluster at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 2.1.0-rc4-SNAPSHOT | CQL spec 3.2.0 | Native 
> protocol v3]
> Use HELP for help.
> cqlsh> use k1;
> cqlsh:k1> SELECT id, description FROM products WHERE categories CONTAINS 
> 'hdtv';
> list index out of range
> cqlsh:k1>
> {noformat}
> This is using the example from the blog post 
> http://www.datastax.com/dev/blog/cql-in-2-1
> A more complete repro:
> {noformat}
> cqlsh:k1>
> cqlsh:k1> CREATE KEYSPACE cat_index_test
>   ... WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> cqlsh:k1> USE cat_index_test;
> cqlsh:cat_index_test>
> cqlsh:cat_index_test>  CREATE TABLE IF NOT EXISTS products (
>   ...   id int PRIMARY KEY,
>   ...   description text,
>   ...   price int,
>   ...   categories set<text>,
>   ...   features map<text, text>
>   ...   );
> cqlsh:cat_index_test>
> cqlsh:cat_index_test>   CREATE INDEX IF NOT EXISTS cat_index ON 
> products(categories);
> cqlsh:cat_index_test>   CREATE INDEX IF NOT EXISTS feat_index ON 
> products(features);
> cqlsh:cat_index_test>
> cqlsh:cat_index_test>   INSERT INTO products(id, description, price, 
> categories, features)
>   ...VALUES (34134,
>   ...'120-inch 1080p 3D plasma TV',
>   ...,
>   ...{'tv', '3D', 'hdtv'},
>   ...{'screen' : '120-inch', 'refresh-rate' : 
> '400hz', 'techno' : 'plasma'});
> cqlsh:cat_index_test>
> cqlsh:cat_index_test>   INSERT INTO products(id, description, price, 
> categories, features)
>   ...VALUES (29412,
>   ...'32-inch LED HDTV (black)',
>   ...929,
>   ...{'tv', 'hdtv'},
>   ...{'screen' : '32-inch', 'techno' : 
> 'LED'});
> cqlsh:cat_index_test>
> cqlsh:cat_index_test>   INSERT INTO products(id, description, price, 
> categories, features)
>   ...VALUES (38471,
>   ...'32-inch LCD TV',
>   ...110,
>   ...{'tv', 'used'},
>   ...{'screen' : '32-inch', 'techno' : 
> 'LCD'});
> cqlsh:cat_index_test>   SELECT id, description FROM products WHERE categories 
> CONTAINS 'hdtv';
> list index out of range
> cqlsh:cat_index_test>   SELECT id, description FROM products WHERE features 
> CONTAINS '32-inch';
> list index out of range
> cqlsh:cat_index_test> DROP INDEX feat_index;
> cqlsh:cat_index_test> CREATE INDEX feat

[jira] [Updated] (CASSANDRA-8258) SELECT ... TOKEN() function broken in C* 2.1.1

2014-11-05 Thread Lex Lythius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lex Lythius updated CASSANDRA-8258:
---
Labels: cqlsh  (was: )

{code:title=cqlsh -k blink --debug}
Using CQL driver: 
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 2.1.1 | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
cqlsh:blink> select token(id) from users limit 1;
Traceback (most recent call last):
  File "/usr/bin/cqlsh", line 861, in onecmd
self.handle_statement(st, statementtext)
  File "/usr/bin/cqlsh", line 901, in handle_statement
return self.handle_parse_error(cmdword, tokens, parsed, srcstr)
  File "/usr/bin/cqlsh", line 910, in handle_parse_error
return self.perform_statement(cqlruleset.cql_extract_orig(tokens, srcstr))
  File "/usr/bin/cqlsh", line 935, in perform_statement
result = self.perform_simple_statement(stmt)
  File "/usr/bin/cqlsh", line 968, in perform_simple_statement
self.print_result(rows, self.parse_for_table_meta(statement.query_string))
  File "/usr/bin/cqlsh", line 946, in parse_for_table_meta
parsed = cqlruleset.cql_parse(query_string)[1]
IndexError: list index out of range
{code}

> SELECT ... TOKEN() function broken in C* 2.1.1
> --
>
> Key: CASSANDRA-8258
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8258
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Ubuntu 14.04 64-bit, Oracle Java 1.7.0_72
>Reporter: Lex Lythius
>  Labels: cqlsh
>
> {code:title=Cassandra 2.1.1, Oracle Java 1.7.0_72, Ubuntu 14.04.1 64 bit}
> cqlsh:blink> show version;
> [cqlsh 5.0.1 | Cassandra 2.1.1 | CQL spec 3.2.0 | Native protocol v3]
> cqlsh:blink> select token(id) from users limit 1;
> list index out of range
> {code}
> versus
> {code:title=Cassandra 2.1.0, Oracle Java 1.7.0_67, Ubuntu 12.04.5 64 bit}
> cqlsh:blink> show version;
> [cqlsh 5.0.1 | Cassandra 2.1.0 | CQL spec 3.2.0 | Native protocol v3]
> cqlsh:blink> select token(id) from users limit 1;
>  token(id)
> --
>  -9223237793432919630
> (1 rows)
> {code}
> It also fails with C* 2.1.1, Java 1.7.0_72, Ubuntu 12.04.5.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8258) SELECT ... TOKEN() function broken in C* 2.1.1

2014-11-05 Thread Lex Lythius (JIRA)
Lex Lythius created CASSANDRA-8258:
--

 Summary: SELECT ... TOKEN() function broken in C* 2.1.1
 Key: CASSANDRA-8258
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8258
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 14.04 64-bit, Oracle Java 1.7.0_72
Reporter: Lex Lythius


{code:title=Cassandra 2.1.1, Oracle Java 1.7.0_72, Ubuntu 14.04.1 64 bit}
cqlsh:blink> show version;
[cqlsh 5.0.1 | Cassandra 2.1.1 | CQL spec 3.2.0 | Native protocol v3]
cqlsh:blink> select token(id) from users limit 1;
list index out of range
{code}

versus

{code:title=Cassandra 2.1.0, Oracle Java 1.7.0_67, Ubuntu 12.04.5 64 bit}
cqlsh:blink> show version;
[cqlsh 5.0.1 | Cassandra 2.1.0 | CQL spec 3.2.0 | Native protocol v3]
cqlsh:blink> select token(id) from users limit 1;

 token(id)
--
 -9223237793432919630

(1 rows)
{code}

It also fails with C* 2.1.1, Java 1.7.0_72, Ubuntu 12.04.5.





[jira] [Commented] (CASSANDRA-6098) NullPointerException causing query timeout

2013-09-25 Thread Lex Lythius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777971#comment-13777971
 ] 

Lex Lythius commented on CASSANDRA-6098:


Dropping the secondary index and creating it again solved the problem at hand.
I still think the issue is worth looking into, though.


> NullPointerException causing query timeout
> --
>
> Key: CASSANDRA-6098
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6098
> Project: Cassandra
>  Issue Type: Bug
> Environment: CQLSH 4.0.0
> Cassandra 2.0.0
> Oracle Java 1.7.0_40
> Ubuntu 12.04.3 x64
>Reporter: Lex Lythius
>
> A common SELECT query could not be completed, failing with:
> {noformat}
> Request did not complete within rpc_timeout.
> {noformat}
> output.log showed this:
> {noformat}
> ERROR 15:38:04,036 Exception in thread Thread[ReadStage:170,5,main]
> java.lang.RuntimeException: java.lang.NullPointerException
> at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1867)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
> Caused by: java.lang.NullPointerException
> at org.apache.cassandra.db.index.composites.CompositesIndexOnRegular.isStale(CompositesIndexOnRegular.java:97)
> at org.apache.cassandra.db.index.composites.CompositesSearcher$1.computeNext(CompositesSearcher.java:247)
> at org.apache.cassandra.db.index.composites.CompositesSearcher$1.computeNext(CompositesSearcher.java:102)
> at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1651)
> at org.apache.cassandra.db.index.composites.CompositesSearcher.search(CompositesSearcher.java:50)
> at org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:525)
> at org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1639)
> at org.apache.cassandra.db.RangeSliceCommand.executeLocally(RangeSliceCommand.java:135)
> at org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1358)
> at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1863)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-6098) NullPointerException causing query timeout

2013-09-25 Thread Lex Lythius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777943#comment-13777943
 ] 

Lex Lythius edited comment on CASSANDRA-6098 at 9/25/13 7:17 PM:
-

This happens only when querying for certain values on a secondary 
index. I cannot reproduce it when querying by the primary key.

None of the following helped:
{noformat}
Restarting Cassandra
nodetool rebuild_index DB TABLE INDEX
nodetool invalidaterowcache
nodetool invalidatekeycache
nodetool cleanup
nodetool repair
{noformat}

  was (Author: lexlythius):
This happens only when querying for some particular values on a secondary 
index. Cannot reproduce it querying by primary key.

Neither of the following did any good:
{noformat}
restarting cassandra
nodetool invalidaterowcache
nodetool invalidatekeycache
nodetool cleanup
nodetool repair
{noformat}
  



[jira] [Commented] (CASSANDRA-6098) NullPointerException causing query timeout

2013-09-25 Thread Lex Lythius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777943#comment-13777943
 ] 

Lex Lythius commented on CASSANDRA-6098:


This happens only when querying for certain values on a secondary 
index. I cannot reproduce it when querying by the primary key.

None of the following helped:
{noformat}
restarting cassandra
nodetool invalidaterowcache
nodetool invalidatekeycache
nodetool cleanup
nodetool repair
{noformat}




[jira] [Created] (CASSANDRA-6098) NullPointerException causing query timeout

2013-09-25 Thread Lex Lythius (JIRA)
Lex Lythius created CASSANDRA-6098:
--

 Summary: NullPointerException causing query timeout
 Key: CASSANDRA-6098
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6098
 Project: Cassandra
  Issue Type: Bug
 Environment: CQLSH 4.0.0
Cassandra 2.0.0
Oracle Java 1.7.0_40
Ubuntu 12.04.3 x64
Reporter: Lex Lythius


A common SELECT query could not be completed, failing with:

{noformat}
Request did not complete within rpc_timeout.
{noformat}

output.log showed this:
{noformat}
ERROR 15:38:04,036 Exception in thread Thread[ReadStage:170,5,main]
java.lang.RuntimeException: java.lang.NullPointerException
at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1867)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.lang.NullPointerException
at org.apache.cassandra.db.index.composites.CompositesIndexOnRegular.isStale(CompositesIndexOnRegular.java:97)
at org.apache.cassandra.db.index.composites.CompositesSearcher$1.computeNext(CompositesSearcher.java:247)
at org.apache.cassandra.db.index.composites.CompositesSearcher$1.computeNext(CompositesSearcher.java:102)
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1651)
at org.apache.cassandra.db.index.composites.CompositesSearcher.search(CompositesSearcher.java:50)
at org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:525)
at org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1639)
at org.apache.cassandra.db.RangeSliceCommand.executeLocally(RangeSliceCommand.java:135)
at org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1358)
at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1863)
{noformat}



[jira] [Commented] (CASSANDRA-3772) Evaluate Murmur3-based partitioner

2013-06-28 Thread Lex Lythius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695919#comment-13695919
 ] 

Lex Lythius commented on CASSANDRA-3772:


Right. I was thinking of some web scenarios using tables with non-UUID keys and 
little input sanitization.
Anyway...

> Evaluate Murmur3-based partitioner
> --
>
> Key: CASSANDRA-3772
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3772
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Jonathan Ellis
>Assignee: Pavel Yaskevich
> Fix For: 1.2.0 beta 1
>
> Attachments: 0001-CASSANDRA-3772.patch, 
> 0001-CASSANDRA-3772-Test.patch, CASSANDRA-3772-v2.patch, 
> CASSANDRA-3772-v3.patch, CASSANDRA-3772-v4.patch, hashed_partitioner_3.diff, 
> hashed_partitioner.diff, MumPartitionerTest.docx, try_murmur3_2.diff, 
> try_murmur3.diff
>
>
> MD5 is a relatively heavyweight hash to use when we don't need cryptographic 
> qualities, just a good output distribution.  Let's see how much overhead we 
> can save by using Murmur3 instead.



[jira] [Commented] (CASSANDRA-3772) Evaluate Murmur3-based partitioner

2013-06-28 Thread Lex Lythius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695905#comment-13695905
 ] 

Lex Lythius commented on CASSANDRA-3772:


@Brandon Yes, I understand that we're talking about map bucket (hence node) 
collisions here, not row collisions. Still, I wanted to bring this to your 
attention, if only to have it dismissed as a non-issue.

Would feeding masses of data into a particular bucket really be imperceptible? 
I figured it could saturate a node, cause row/key cache problems, etc.





[jira] [Commented] (CASSANDRA-3772) Evaluate Murmur3-based partitioner

2013-06-28 Thread Lex Lythius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695804#comment-13695804
 ] 

Lex Lythius commented on CASSANDRA-3772:


Sorry to bring a skeleton back with bad news from the day after it was buried.

Apparently, finding collisions for the Murmur3 hash is easy enough that an 
attacker could attempt to choke Cassandra:
http://emboss.github.io/blog/2012/12/14/breaking-murmur-hash-flooding-dos-reloaded/

Just wanted to make sure you are aware of this.
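The linked attack boils down to precomputing keys that all hash into the same region under a public, seedless hash function. A toy Python sketch of that precomputation step, using a stand-in hash (not actual Murmur3) purely to illustrate the principle:

```python
def toy_hash(key: bytes) -> int:
    # Stand-in for a public, seedless hash such as Murmur3 with a fixed
    # seed: an attacker can evaluate it offline exactly as the server does.
    h = 0
    for b in key:
        h = (h * 31 + b) & 0xFFFFFFFF
    return h


def find_colliding_keys(target_bucket: int, n_buckets: int, count: int):
    """Brute-force keys whose hashes all land in one bucket (hence one node)."""
    found = []
    i = 0
    while len(found) < count:
        key = str(i).encode()
        if toy_hash(key) % n_buckets == target_bucket:
            found.append(key)
        i += 1
    return found


# All of these keys would be routed to the same bucket:
keys = find_colliding_keys(target_bucket=0, n_buckets=16, count=5)
assert all(toy_hash(k) % 16 == 0 for k in keys)
```

With a secret, per-instance seed the attacker cannot run this search offline, which is the usual mitigation for hash-flooding attacks.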




[jira] [Created] (CASSANDRA-5710) COPY ... TO command does not work with collections

2013-06-27 Thread Lex Lythius (JIRA)
Lex Lythius created CASSANDRA-5710:
--

 Summary: COPY ... TO command does not work with collections
 Key: CASSANDRA-5710
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5710
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 1.2.5
 Environment: Ubuntu 12.04 LTS
Reporter: Lex Lythius


COPY TO does not quote set/list/map entries, which renders CSV unusable.

E.g., having a table tbl with a set column col:
{noformat}
INSERT INTO tbl (id, col) VALUES (1, {'}'});
{noformat}

COPY tbl TO ... produces this:
{noformat}
1,{}}
{noformat}

Then COPY FROM complains:
{noformat}
Bad Request: line 1:4 extraneous input '}' expecting ')'
{noformat}

CSV imports consistently fail when trying to import non-empty collection 
columns.

Actually, the effect is pretty much a CQL injection, although I wasn't able to 
exploit it using tainted values like '}; DROP TABLE users;--'.
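The underlying problem is ordinary CSV escaping: a value containing the collection delimiter characters must be quoted to survive a round trip. A minimal Python stdlib sketch (illustrative only, not cqlsh's COPY code) of the quoting that would keep the export parseable:

```python
import csv
import io

# The rendered set value from the report: a set containing the single
# character '}' comes out as "{}}", which is ambiguous when left unquoted.
value = "{}}"

buf = io.StringIO()
# Quoting every field keeps delimiter-bearing values intact.
csv.writer(buf, quoting=csv.QUOTE_ALL).writerow([1, value])
line = buf.getvalue()

# The value survives a read-back round trip:
row = next(csv.reader(io.StringIO(line)))
assert row == ["1", "{}}"]
```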





[jira] [Created] (CASSANDRA-5596) CREATE TABLE error message swaps table and keyspace

2013-05-28 Thread Lex Lythius (JIRA)
Lex Lythius created CASSANDRA-5596:
--

 Summary: CREATE TABLE error message swaps table and keyspace
 Key: CASSANDRA-5596
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5596
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.2.0
 Environment: CQLSH 3.0 on Ubuntu Linux 12.04 LTS
Reporter: Lex Lythius
Priority: Trivial


When trying to create an existing table, CQLSH rightly complains, but in a weird 
way:
{noformat}
USE blink;
CREATE TABLE tags ( tag ascii PRIMARY KEY, type ascii, label varchar, rel map )

Bad Request: Cannot add already existing column family "blink" to keyspace "tags"
{noformat}

The keyspace and table names in the error message are swapped.
