[jira] [Comment Edited] (CASSANDRA-14247) SASI tokenizer for simple delimiter based entries

2018-03-14 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399888#comment-16399888
 ] 

mck edited comment on CASSANDRA-14247 at 3/15/18 4:20 AM:
--

{quote}the only other thought i have is if the in-tree documentation needs to 
be updated given this is something people interact with via CQL and schema 
updates.{quote}

Yes, I'd better do that. Good catch!

EDIT: there are actually no CQL/schema docs that go down to the detail of SASI 
options, but I'll add the relevant section to {{doc/SASI.md}}.


was (Author: michaelsembwever):
{quote}the only other thought i have is if the in-tree documentation needs to 
be updated given this is something people interact with via CQL and schema 
updates.{quote}

Yes, I'd better do that. Good catch!

> SASI tokenizer for simple delimiter based entries
> -
>
> Key: CASSANDRA-14247
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14247
> Project: Cassandra
>  Issue Type: Improvement
>  Components: sasi
>Reporter: mck
>Assignee: mck
>Priority: Major
>  Labels: sasi
> Fix For: 4.0, 3.11.x
>
>
> Currently SASI offers only two tokenizer options:
>  - NonTokenizingAnalyzer
>  - StandardAnalyzer
> The latter is built upon Snowball, which is powerful for human languages but 
> overkill for simple tokenization.
> A simple tokenizer is proposed here. The need for this arose as a workaround 
> for CASSANDRA-11182, and to avoid the disk usage explosion when having to 
> resort to {{CONTAINS}}. See https://github.com/openzipkin/zipkin/issues/1861
> Example use of this would be:
> {code}
> CREATE CUSTOM INDEX span_annotation_query_idx 
> ON zipkin2.span (annotation_query) USING 
> 'org.apache.cassandra.index.sasi.SASIIndex' 
> WITH OPTIONS = {
> 'analyzer_class': 
> 'org.apache.cassandra.index.sasi.analyzer.DelimiterAnalyzer', 
> 'delimiter': '░',
> 'case_sensitive': 'true', 
> 'mode': 'prefix', 
> 'analyzed': 'true'};
> {code}
> Original credit for this work goes to https://github.com/zuochangan






[jira] [Comment Edited] (CASSANDRA-14247) SASI tokenizer for simple delimiter based entries

2018-03-14 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399884#comment-16399884
 ] 

mck edited comment on CASSANDRA-14247 at 3/15/18 3:56 AM:
--

[~mkjellman],
{quote}one thing that stuck out to me was this while loop that didn't actually 
"do" anything but it does do something... could you at least throw a comment in 
just to make it a bit more readable?{quote}

comment thrown in :-)

The byte buffer approach is pushed to the trunk_14247 and cassandra-3.11_14247 
branches. 

The rationale for also adding this patch to cassandra-3.11 is that it's an important 
stability workaround for {{\{mode:CONTAINS\}}}, and it is a standalone class, 
annotated as {{@Beta}}, that does not touch any other code.

The following patches have been submitted:

|| branch || testall || dtest ||
| [cassandra-3.11_14247|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.11_14247] | [testall|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_14247] | [dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/512] |
| [trunk_14247|https://github.com/thelastpickle/cassandra/tree/mck/trunk_14247] | [testall|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_14247] | [dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/513] |



was (Author: michaelsembwever):
[~mkjellman],
{quote}one thing that stuck out to me was this while loop that didn't actually 
"do" anything but it does do something... could you at least throw a comment in 
just to make it a bit more readable?{quote}

comment thrown in :-)

The byte buffer approach is pushed to the trunk_14247 and cassandra-3.11_14247 
branches. 

The rationale for also adding this patch to cassandra-3.11 is that it's an important 
stability workaround for {{mode: CONTAINS}}, and it is an additional standalone 
class, annotated as {{@Beta}}, that does not touch other code.

The following patches have been submitted:

|| branch || testall || dtest ||
| [cassandra-3.11_14247|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.11_14247] | [testall|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_14247] | [dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/512] |
| [trunk_14247|https://github.com/thelastpickle/cassandra/tree/mck/trunk_14247] | [testall|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_14247] | [dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/513] |








[jira] [Comment Edited] (CASSANDRA-14247) SASI tokenizer for simple delimiter based entries

2018-02-28 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381446#comment-16381446
 ] 

mck edited comment on CASSANDRA-14247 at 3/1/18 5:40 AM:
-

Approaches to (2) are found 
[here|https://github.com/thelastpickle/cassandra/commit/0d6c8117120ef444e1aa52e49ab66aafa159677e]
 and 
[here|https://github.com/thelastpickle/cassandra/commit/c1f66d7c389ab5816b36d7d02ca2b8043bab0ecf].

The former was just my first attempt at removing the overhead of the 
{{string.split(..)}} call. The second re-codes it to use nio buffers.
The latter is what I presume we are aiming for. Is it what you had in mind, 
[~mkjellman]?
A few quick stress tests showed that it was 60% (±5%) faster than the original 
patches above, working with {{world_cities_a.csv}} as input.
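
To make the contrast concrete, here is a minimal, self-contained sketch of the two 
approaches. It is illustrative only, not the patch itself: the class and method names 
are mine, and it assumes the delimiter encodes to a single byte.
{code:java}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class DelimiterSplitSketch
{
    // First attempt: materialise a String and call split(..),
    // paying for a full UTF-8 decode plus the regex machinery on every value.
    static List<ByteBuffer> splitViaString(ByteBuffer input, char delimiter)
    {
        List<ByteBuffer> tokens = new ArrayList<>();
        String text = StandardCharsets.UTF_8.decode(input.duplicate()).toString();
        for (String token : text.split(Pattern.quote(String.valueOf(delimiter))))
            if (!token.isEmpty())
                tokens.add(ByteBuffer.wrap(token.getBytes(StandardCharsets.UTF_8)));
        return tokens;
    }

    // nio re-coding: scan the bytes once and emit zero-copy slices of the
    // original buffer (assumes a single-byte delimiter such as ',').
    static List<ByteBuffer> splitViaByteBuffer(ByteBuffer input, byte delimiter)
    {
        List<ByteBuffer> tokens = new ArrayList<>();
        int start = input.position();
        for (int i = input.position(); i < input.limit(); i++)
        {
            if (input.get(i) == delimiter)
            {
                if (i > start)
                    tokens.add(slice(input, start, i));
                start = i + 1;
            }
        }
        if (start < input.limit())
            tokens.add(slice(input, start, input.limit()));
        return tokens;
    }

    // A duplicate view over [from, to) of the original bytes; nothing is copied.
    private static ByteBuffer slice(ByteBuffer buffer, int from, int to)
    {
        ByteBuffer token = buffer.duplicate();
        token.limit(to);
        token.position(from);
        return token;
    }

    public static void main(String[] args)
    {
        ByteBuffer csv = ByteBuffer.wrap("aachen,aalborg,aarhus".getBytes(StandardCharsets.UTF_8));
        System.out.println(splitViaString(csv, ',').size());            // 3
        System.out.println(splitViaByteBuffer(csv, (byte) ',').size()); // 3
    }
}
{code}
The win comes from skipping the per-value String allocation and regex split: the 
buffer variant only ever hands out duplicates pointing into the original bytes.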

{quote}iterate the text left to right or right to left{quote}
Can we put that in the too-hard basket for now? 
I would think a better next step (in a new ticket) would be to improve the 
other analysers to also use ByteBuffers, as there's an obvious performance win 
here.


was (Author: michaelsembwever):
Approaches to (2) are found 
[here|https://github.com/thelastpickle/cassandra/commit/0d6c8117120ef444e1aa52e49ab66aafa159677e]
 and 
[here|https://github.com/thelastpickle/cassandra/commit/c1f66d7c389ab5816b36d7d02ca2b8043bab0ecf].

The former was just my first attempt at removing the overhead of the 
{{string.split(..)}} call. The second re-codes it to use nio buffers.
The latter is what I presume we are aiming for. Is it what you had in mind, 
[~mkjellman]?
A few quick stress tests showed that it was 60% (±5%) faster than the original 
patches above, working with {{world_cities_a.csv}} as input.

{quote}iterate the text left to right or right to left{quote}
Can we put that in the too-hard basket for now? 
I would think a better next step (in a new ticket) would be to improve the 
other analysers to also use ByteBuffers, as there's an obvious win here.







[jira] [Comment Edited] (CASSANDRA-14247) SASI tokenizer for simple delimiter based entries

2018-02-28 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381446#comment-16381446
 ] 

mck edited comment on CASSANDRA-14247 at 3/1/18 3:10 AM:
-

Approaches to (2) are found 
[here|https://github.com/thelastpickle/cassandra/commit/0d6c8117120ef444e1aa52e49ab66aafa159677e]
 and 
[here|https://github.com/thelastpickle/cassandra/commit/c1f66d7c389ab5816b36d7d02ca2b8043bab0ecf].

The former was just my first attempt at removing the overhead of the 
{{string.split(..)}} call. The second re-codes it to use nio buffers.
The latter is what I presume we are aiming for. Is it what you had in mind, 
[~mkjellman]?
A few quick stress tests showed that it was 60% (±5%) faster than the original 
patches above, working with {{world_cities_a.csv}} as input.

{quote}iterate the text left to right or right to left{quote}
Can we put that in the too-hard basket for now? 
I would think a better next step (in a new ticket) would be to improve the 
other analysers to also use ByteBuffers, as there's an obvious win here.


was (Author: michaelsembwever):
Approaches to (2) are found 
[here|https://github.com/thelastpickle/cassandra/commit/0d6c8117120ef444e1aa52e49ab66aafa159677e]
 and 
[here|https://github.com/thelastpickle/cassandra/commit/c1f66d7c389ab5816b36d7d02ca2b8043bab0ecf].

The former was just my first attempt at removing the overhead of the 
{{string.split(..)}} call. The second re-codes it to use nio buffers.
The latter is what I presume we are aiming for. Is it what you had in mind, 
[~mkjellman]?
A few quick stress tests showed that it was 60% (±5%) faster than the original 
patches above, working with {{world_cities_a.csv}} as input.

{quote}iterate the text left to right or right to left{quote}
Can we put that in the too-hard basket for now? 
I think the next step would be to improve the other analysers to also use 
ByteBuffers, as there's an obvious win here.







[jira] [Comment Edited] (CASSANDRA-14247) SASI tokenizer for simple delimiter based entries

2018-02-26 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376253#comment-16376253
 ] 

mck edited comment on CASSANDRA-14247 at 2/26/18 9:29 AM:
--

[~mkjellman], have force-pushed the branch again. (Let me know if you want me to 
add checkpoint commits rather than overwriting the existing commit.)

This adds the test file {{test/resources/tokenization/world_cities_a.csv}}, and 
a unit test to match. The other unit test methods have been updated to use 
different delimiters as appropriate for the existing test data files.
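
For illustration, a rough sketch of the shape such a unit test takes. The 
{{DelimiterAnalyzer}} calls below are assumed from the existing SASI analyzer 
contract ({{init(..)}}, {{reset(..)}}, iteration over ByteBuffers), so the exact 
signatures may differ from what is on the branch.
{code:java}
import static org.junit.Assert.assertEquals;

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

import org.junit.Test;

import org.apache.cassandra.db.marshal.UTF8Type;
import org.apache.cassandra.index.sasi.analyzer.DelimiterAnalyzer;

public class DelimiterTokenizingSketchTest
{
    @Test
    public void testCommaDelimitedTokens()
    {
        // configure the analyzer the same way the CUSTOM INDEX options would
        Map<String, String> options = new HashMap<>();
        options.put("delimiter", ",");

        DelimiterAnalyzer analyzer = new DelimiterAnalyzer();
        analyzer.init(options, UTF8Type.instance);

        // a tiny comma-delimited value, standing in for a line of world_cities_a.csv
        analyzer.reset(ByteBuffer.wrap("aachen,aalborg,aarhus".getBytes(StandardCharsets.UTF_8)));

        int tokens = 0;
        while (analyzer.hasNext())
        {
            analyzer.next();
            tokens++;
        }
        assertEquals(3, tokens);
    }
}
{code}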

Example {{cqlsh}} corridor testing…
{code:java}
create table test ( one text, two int, three text, PRIMARY KEY (one,two) );

-- insert a new row, with the contents of 
-- test/resources/tokenization/world_cities_a.csv going into column 'three'.

create CUSTOM INDEX on test (three) USING 
'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS = { 'analyzer_class': 
'org.apache.cassandra.index.sasi.analyzer.DelimiterAnalyzer', 'delimiter': ',', 
'mode': 'prefix', 'analyzed': 'true'};

select one,two from test where three LIKE 'azzazl' ALLOW FILTERING;
{code}

Aside: this tokenizer raises the need for an 'exact' mode. Querying a CSV inside 
a column like this is one example where the user may never require a wildcarding 
LIKE clause (using %), and an 'exact' mode would be significantly more 
performant and use less disk. (Btw, I suspect that {{is_literal: false}} 
would have the same impact as an 'exact' mode…)


was (Author: michaelsembwever):
[~mkjellman], have force-pushed the branch again. (Let me know if you want me to 
add checkpoint commits rather than overwriting the existing commit.)

This adds the test file {{test/resources/tokenization/world_cities_a.csv}}, and 
a unit test to match. The other unit test methods have been updated to use 
different delimiters as appropriate for the existing test data files.

Example corridor testing…
{code:java}
create table test ( one text, two int, three text, PRIMARY KEY (one,two) );

-- insert a new row, with the contents of 
-- test/resources/tokenization/world_cities_a.csv going into column 'three'.

create CUSTOM INDEX on test (three) USING 
'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS = { 'analyzer_class': 
'org.apache.cassandra.index.sasi.analyzer.DelimiterAnalyzer', 'delimiter': ',', 
'mode': 'prefix', 'analyzed': 'true'};

select one,two from test where three LIKE 'azzazl' ALLOW FILTERING;
{code}

Aside: this tokenizer raises the need for an 'exact' mode. Querying a CSV inside 
a column like this is one example where the user may never require a wildcarding 
LIKE clause (using %), and an 'exact' mode would be significantly more 
performant and use less disk. (Btw, I suspect that {{is_literal: false}} 
would have the same impact as an 'exact' mode…)







[jira] [Comment Edited] (CASSANDRA-14247) SASI tokenizer for simple delimiter based entries

2018-02-25 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376253#comment-16376253
 ] 

mck edited comment on CASSANDRA-14247 at 2/26/18 12:35 AM:
---

[~mkjellman], have force-pushed the branch again. (Let me know if you want me to 
add checkpoint commits rather than overwriting the existing commit.)

This adds the test file {{test/resources/tokenization/world_cities_a.csv}}, and 
a unit test to match. The other unit test methods have been updated to use 
different delimiters as appropriate for the existing test data files.

Example corridor testing…
{code:java}
create table test ( one text, two int, three text, PRIMARY KEY (one,two) );

-- insert a new row, with the contents of 
-- test/resources/tokenization/world_cities_a.csv going into column 'three'.

create CUSTOM INDEX on test (three) USING 
'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS = { 'analyzer_class': 
'org.apache.cassandra.index.sasi.analyzer.DelimiterAnalyzer', 'delimiter': ',', 
'mode': 'prefix', 'analyzed': 'true'};

select one,two from test where three LIKE 'azzazl' ALLOW FILTERING;
{code}

Aside: this tokenizer raises the need for an 'exact' mode. Querying a CSV inside 
a column like this is one example where the user may never require a wildcarding 
LIKE clause (using %), and an 'exact' mode would be significantly more 
performant and use less disk. (Btw, I suspect that {{is_literal: false}} 
would have the same impact as an 'exact' mode…)


was (Author: michaelsembwever):
[~mkjellman], have force-pushed the branch again. (Let me know if you want me to 
add checkpoint commits rather than overwriting the existing commit.)

This adds the test file {{test/resources/tokenization/world_cities_a.csv}}, and 
a unit test to match. The other unit test methods have been updated to use 
different delimiters as appropriate for the existing test data files.

Example corridor testing…
{code:java}
create table test ( one text, two int, three text, PRIMARY KEY (one,two) );

-- insert a new row, with the contents of 
-- test/resources/tokenization/world_cities_a.csv going into column 'three'.

create CUSTOM INDEX on test (three) USING 
'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS = { 'analyzer_class': 
'org.apache.cassandra.index.sasi.analyzer.DelimiterAnalyzer', 'delimiter': ',', 
'mode': 'prefix', 'analyzed': 'true'};

select one,two from test where three LIKE 'azzazl' ALLOW FILTERING;
{code}

Aside: this tokenizer raises the need for an 'exact' mode. Querying a CSV inside 
a column like this is one example where the user may never require a wildcarding 
LIKE clause (using %), and an 'exact' mode would be significantly more 
performant and use less disk.







[jira] [Comment Edited] (CASSANDRA-14247) SASI tokenizer for simple delimiter based entries

2018-02-25 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376253#comment-16376253
 ] 

mck edited comment on CASSANDRA-14247 at 2/25/18 9:30 PM:
--

[~mkjellman], have force-pushed the branch again. (Let me know if you want me to 
add checkpoint commits rather than overwriting the existing commit.)

This adds the test file {{test/resources/tokenization/world_cities_a.csv}}, and 
a unit test to match. The other unit test methods have been updated to use 
different delimiters as appropriate for the existing test data files.

Example corridor testing…
{code:java}
create table test ( one text, two int, three text, PRIMARY KEY (one,two) );

-- insert a new row, with the contents of 
-- test/resources/tokenization/world_cities_a.csv going into column 'three'.

create CUSTOM INDEX on test (three) USING 
'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS = { 'analyzer_class': 
'org.apache.cassandra.index.sasi.analyzer.DelimiterAnalyzer', 'delimiter': ',', 
'mode': 'prefix', 'analyzed': 'true'};

select one,two from test where three LIKE 'azzazl' ALLOW FILTERING;
{code}

Aside: this tokenizer raises the need for an 'exact' mode. Querying a CSV inside 
a column like this is one example where the user may never require a wildcarding 
LIKE clause (using %), and an 'exact' mode would be significantly more 
performant and use less disk.


was (Author: michaelsembwever):
[~mkjellman], have force-pushed the branch again. (Let me know if you want me to 
add checkpoint commits rather than overwriting the existing commit.)

This adds the test file {{test/resources/tokenization/world_cities_a.csv}}, and 
a unit test to match. The other unit test methods have been updated to use 
different delimiters as appropriate for the existing test data files.

Example corridor testing…
{code:java}
create table test ( one text, two int, three text, PRIMARY KEY (one,two) );

-- insert a new row, with the contents of 
-- test/resources/tokenization/world_cities_a.csv going into column 'three'.

create CUSTOM INDEX on test (three) USING 
'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS = { 'analyzer_class': 
'org.apache.cassandra.index.sasi.analyzer.DelimiterAnalyzer', 'delimiter': ',', 
'mode': 'prefix', 'analyzed': 'true'};

select one,two from test where three LIKE 'azzazl' ALLOW FILTERING;
{code}



