[jira] [Commented] (CASSANDRA-6233) Authentication is broken for the protocol v1 on C* 2.0

2013-10-26 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806070#comment-13806070
 ] 

Sylvain Lebresne commented on CASSANDRA-6233:
-

I'm talking about the native protocol. cassandra-dtest uses CQL-over-thrift, so 
there is no way to reproduce this bug with it. To reproduce it, you would need, 
for example, to use the DataStax Java driver 1.0.4 against C* 2.0.1. The steps 
to reproduce are here: https://datastax-oss.atlassian.net/browse/JAVA-190
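
For reference, a minimal reproduction sketch along those lines (the contact 
point and credentials are placeholders; the node is assumed to be a 2.0.1 
instance configured with authenticator: PasswordAuthenticator):

{code:java}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class ReproduceJava190
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder()
                                 .addContactPoint("127.0.0.1")
                                 .withCredentials("cassandra", "cassandra")
                                 .build();
        try
        {
            // With the broken CredentialsMessage decoding, this connect step
            // fails with an authentication error even though the credentials
            // are valid.
            Session session = cluster.connect();
            System.out.println("authenticated successfully");
            session.shutdown();
        }
        finally
        {
            cluster.shutdown();
        }
    }
}
{code}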

 Authentication is broken for the protocol v1 on C* 2.0
 --

 Key: CASSANDRA-6233
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6233
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.0.3

 Attachments: 6233.txt


 CASSANDRA-5664 simplified the decoding method of CredentialsMessage by using 
 CBUtil.readStringMap (instead of duplicating the code). Unfortunately, that 
 latter method turns its keys to uppercase (to provide some form of case 
 insensitivity for keys), and in the case of CredentialsMessage this breaks 
 PasswordAuthenticator, which expects lowercased keys (besides, it's a bad idea 
 to mess with the case of the credentials map in general).
 Making CBUtil.readStringMap uppercase keys was probably a bad idea in the 
 first place (as nothing in the method name implies this), so attaching a patch 
 that removes this (and uppercases keys specifically in StartupMessage, where 
 that was done on purpose).
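
For illustration, a simplified sketch of the split described above (this is not 
the attached 6233.txt; the real CBUtil.readStringMap decodes from a 
ChannelBuffer, which is elided here):

{code:java}
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public class ReadStringMapSketch
{
    // A readStringMap-style helper should hand keys back exactly as sent...
    static Map<String, String> readStringMap(Map<String, String> decoded)
    {
        return new HashMap<String, String>(decoded); // no case change
    }

    // ...and a caller that genuinely wants case-insensitive keys (as
    // StartupMessage does for option names) can uppercase them explicitly.
    static Map<String, String> uppercaseKeys(Map<String, String> options)
    {
        Map<String, String> result = new HashMap<String, String>();
        for (Map.Entry<String, String> e : options.entrySet())
            result.put(e.getKey().toUpperCase(Locale.US), e.getValue());
        return result;
    }
}
{code}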



--
This message was sent by Atlassian JIRA
(v6.1#6144)


git commit: cqlsh: fix LIST USERS output

2013-10-26 Thread aleksey
Updated Branches:
  refs/heads/cassandra-2.0 1bd7ac3c4 -> 18260c5f2


cqlsh: fix LIST USERS output

patch by Mikhail Stepura; reviewed by Aleksey Yeschenko for
CASSANDRA-6242


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/18260c5f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/18260c5f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/18260c5f

Branch: refs/heads/cassandra-2.0
Commit: 18260c5f2c1d3056355bfd8c9c4bc1a1f6f5bc37
Parents: 1bd7ac3
Author: Aleksey Yeschenko alek...@apache.org
Authored: Sat Oct 26 15:31:47 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Sat Oct 26 15:31:47 2013 +0300

--
 CHANGES.txt | 1 +
 bin/cqlsh   | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/18260c5f/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 3c96770..62c3f52 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,6 @@
 2.0.3
  * Fix modifying column_metadata from thrift (CASSANDRA-6182)
+ * cqlsh: fix LIST USERS output (CASSANDRA-6242)
 
 
 2.0.2

http://git-wip-us.apache.org/repos/asf/cassandra/blob/18260c5f/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index 3382111..acfb1f6 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -917,7 +917,7 @@ class Shell(cmd.Cmd):
 self.printerr(traceback.format_exc())
 return False
 
-if statement[:6].lower() == 'select':
+if statement[:6].lower() == 'select' or statement.lower().startswith("list"):
 self.print_result(self.cursor, with_default_limit)
 elif self.cursor.rowcount == 1:
 # CAS INSERT/UPDATE



[2/2] git commit: Merge branch 'cassandra-2.0' into trunk

2013-10-26 Thread aleksey
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/55a77487
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/55a77487
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/55a77487

Branch: refs/heads/trunk
Commit: 55a7748752b5aa0445659d7227db459cda5eb1c0
Parents: db5c95c 18260c5
Author: Aleksey Yeschenko alek...@apache.org
Authored: Sat Oct 26 15:33:08 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Sat Oct 26 15:33:08 2013 +0300

--
 CHANGES.txt | 1 +
 bin/cqlsh   | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/55a77487/CHANGES.txt
--
diff --cc CHANGES.txt
index 549f318,62c3f52..b66534f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,14 -1,6 +1,15 @@@
 +2.1
 + * Parallelize fetching rows for low-cardinality indexes (CASSANDRA-1337)
 + * change logging from log4j to logback (CASSANDRA-5883)
 + * switch to LZ4 compression for internode communication (CASSANDRA-5887)
 + * Stop using Thrift-generated Index* classes internally (CASSANDRA-5971)
 + * Remove 1.2 network compatibility code (CASSANDRA-5960)
 + * Remove leveled json manifest migration code (CASSANDRA-5996)
 +
 +
  2.0.3
   * Fix modifying column_metadata from thrift (CASSANDRA-6182)
+  * cqlsh: fix LIST USERS output (CASSANDRA-6242)
  
  
  2.0.2



[1/2] git commit: cqlsh: fix LIST USERS output

2013-10-26 Thread aleksey
Updated Branches:
  refs/heads/trunk db5c95cdc -> 55a774875


cqlsh: fix LIST USERS output

patch by Mikhail Stepura; reviewed by Aleksey Yeschenko for
CASSANDRA-6242


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/18260c5f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/18260c5f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/18260c5f

Branch: refs/heads/trunk
Commit: 18260c5f2c1d3056355bfd8c9c4bc1a1f6f5bc37
Parents: 1bd7ac3
Author: Aleksey Yeschenko alek...@apache.org
Authored: Sat Oct 26 15:31:47 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Sat Oct 26 15:31:47 2013 +0300

--
 CHANGES.txt | 1 +
 bin/cqlsh   | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/18260c5f/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 3c96770..62c3f52 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,6 @@
 2.0.3
  * Fix modifying column_metadata from thrift (CASSANDRA-6182)
+ * cqlsh: fix LIST USERS output (CASSANDRA-6242)
 
 
 2.0.2

http://git-wip-us.apache.org/repos/asf/cassandra/blob/18260c5f/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index 3382111..acfb1f6 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -917,7 +917,7 @@ class Shell(cmd.Cmd):
 self.printerr(traceback.format_exc())
 return False
 
-if statement[:6].lower() == 'select':
+if statement[:6].lower() == 'select' or statement.lower().startswith("list"):
 self.print_result(self.cursor, with_default_limit)
 elif self.cursor.rowcount == 1:
 # CAS INSERT/UPDATE



[jira] [Updated] (CASSANDRA-6233) Authentication is broken for the protocol v1 on C* 2.0

2013-10-26 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6233:
-

Reviewer: Aleksey Yeschenko

 Authentication is broken for the protocol v1 on C* 2.0
 --

 Key: CASSANDRA-6233
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6233
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.0.3

 Attachments: 6233.txt


 CASSANDRA-5664 simplified the decoding method of CredentialsMessage by using 
 CBUtil.readStringMap (instead of duplicating the code). Unfortunately, that 
 latter method turns its keys to uppercase (to provide some form of case 
 insensitivity for keys), and in the case of CredentialsMessage this breaks 
 PasswordAuthenticator, which expects lowercased keys (besides, it's a bad idea 
 to mess with the case of the credentials map in general).
 Making CBUtil.readStringMap uppercase keys was probably a bad idea in the 
 first place (as nothing in the method name implies this), so attaching a patch 
 that removes this (and uppercases keys specifically in StartupMessage, where 
 that was done on purpose).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6233) Authentication is broken for the protocol v1 on C* 2.0

2013-10-26 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6233:
-

Fix Version/s: (was: 2.0.3)
   2.0.2

 Authentication is broken for the protocol v1 on C* 2.0
 --

 Key: CASSANDRA-6233
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6233
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.0.2

 Attachments: 6233.txt


 CASSANDRA-5664 simplified the decoding method of CredentialsMessage by using 
 CBUtil.readStringMap (instead of duplicating the code). Unfortunately, that 
 latter method turns its keys to uppercase (to provide some form of case 
 insensitivity for keys), and in the case of CredentialsMessage this breaks 
 PasswordAuthenticator, which expects lowercased keys (besides, it's a bad idea 
 to mess with the case of the credentials map in general).
 Making CBUtil.readStringMap uppercase keys was probably a bad idea in the 
 first place (as nothing in the method name implies this), so attaching a patch 
 that removes this (and uppercases keys specifically in StartupMessage, where 
 that was done on purpose).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6233) Authentication is broken for the protocol v1 on C* 2.0

2013-10-26 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806081#comment-13806081
 ] 

Aleksey Yeschenko commented on CASSANDRA-6233:
--

Committed by Sylvain in 86b26b67fe9dd804b84a56c2535726b966d28d13, so it is part 
of 2.0.2. 'Formal' +1 here (CHANGES.txt still needs updating).

 Authentication is broken for the protocol v1 on C* 2.0
 --

 Key: CASSANDRA-6233
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6233
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.0.2

 Attachments: 6233.txt


 CASSANDRA-5664 simplified the decoding method of CredentialsMessage by using 
 CBUtil.readStringMap (instead of duplicating the code). Unfortunately, that 
 latter method turns its keys to uppercase (to provide some form of case 
 insensitivity for keys), and in the case of CredentialsMessage this breaks 
 PasswordAuthenticator, which expects lowercased keys (besides, it's a bad idea 
 to mess with the case of the credentials map in general).
 Making CBUtil.readStringMap uppercase keys was probably a bad idea in the 
 first place (as nothing in the method name implies this), so attaching a patch 
 that removes this (and uppercases keys specifically in StartupMessage, where 
 that was done on purpose).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5981) Netty frame length exception when storing data to Cassandra using binary protocol

2013-10-26 Thread Daniel Norberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806141#comment-13806141
 ] 

Daniel Norberg commented on CASSANDRA-5981:
---

Looks to me like it might discard too much data if buffer.readableBytes() > 
MAX_FRAME_LENGTH. Unless I'm mistaken, this problem is also present in the 
original LengthFieldBasedFrameDecoder, though. [~norman], what do you say? 
Admittedly it's a corner case that's unlikely to be encountered in production.

Are there any tests for the dropping of too large requests?
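
For what it's worth, a hypothetical standalone check for the oversized-frame 
path might look roughly like this (plain Netty 3 DecoderEmbedder; the 
length-field layout and limits are arbitrary, and this is not taken from the 
Cassandra test suite):

{code:java}
import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.handler.codec.embedder.DecoderEmbedder;
import org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder;

public class OversizedFrameCheck
{
    public static void main(String[] args)
    {
        int maxFrameLength = 1024; // deliberately tiny limit for the check
        // 4-byte length field at offset 0, stripped from the decoded frame
        DecoderEmbedder<ChannelBuffer> decoder = new DecoderEmbedder<ChannelBuffer>(
                new LengthFieldBasedFrameDecoder(maxFrameLength, 0, 4, 0, 4));

        ChannelBuffer oversized = ChannelBuffers.dynamicBuffer();
        oversized.writeInt(10 * 1024 * 1024); // advertised length far above the limit
        oversized.writeBytes(new byte[16]);   // only a few payload bytes actually present

        try
        {
            decoder.offer(oversized);
            System.out.println("no failure surfaced; decoded frame: " + decoder.poll());
        }
        catch (Exception e)
        {
            System.out.println("oversized frame rejected: " + e);
        }
    }
}
{code}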

 Netty frame length exception when storing data to Cassandra using binary 
 protocol
 -

 Key: CASSANDRA-5981
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5981
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux, Java 7
Reporter: Justin Sweeney
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0.2

 Attachments: 0001-Correctly-catch-frame-too-long-exceptions.txt, 
 0002-Allow-to-configure-the-max-frame-length.txt, 5981-v2.txt


 Using Cassandra 1.2.8, I am running into an issue where, when I send a large 
 amount of data using the binary protocol, I get the following Netty exception 
 in the Cassandra log file:
 {quote}
 ERROR 09:08:35,845 Unexpected exception during request
 org.jboss.netty.handler.codec.frame.TooLongFrameException: Adjusted frame 
 length exceeds 268435456: 292413714 - discarded
 at 
 org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.fail(LengthFieldBasedFrameDecoder.java:441)
 at 
 org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.failIfNecessary(LengthFieldBasedFrameDecoder.java:412)
 at 
 org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:372)
 at org.apache.cassandra.transport.Frame$Decoder.decode(Frame.java:181)
 at 
 org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:422)
 at 
 org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
 at 
 org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
 at 
 org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
 at 
 org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:84)
 at 
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:472)
 at 
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:333)
 at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:722)
 {quote}
 I am using the DataStax driver and CQL to execute insert queries. The query 
 that is failing uses atomic batching to execute a large number of statements 
 (~55).
 Looking into the code a bit, I saw that in the 
 org.apache.cassandra.transport.Frame$Decoder class, MAX_FRAME_LENGTH is 
 hard-coded to 256 MB.
 Is this something that should be configurable or is this a hard limit that 
 will prevent batch statements of this size from executing for some reason?
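
On the configurability question, one possible shape (purely illustrative; the 
property name, default, and frame layout below are made up and are not the 
attached 0002 patch):

{code:java}
import org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder;

// Purely illustrative: a frame decoder whose limit comes from a (made-up)
// system property instead of a hard-coded 256 MB constant.
public class ConfigurableFrameDecoder extends LengthFieldBasedFrameDecoder
{
    private static final int DEFAULT_MAX_FRAME_MB = 256;

    public ConfigurableFrameDecoder()
    {
        // 4-byte length field at offset 0; these offsets are placeholders,
        // not the real native-protocol frame layout.
        super(maxFrameLengthBytes(), 0, 4, 0, 4);
    }

    private static int maxFrameLengthBytes()
    {
        int mb = Integer.getInteger("example.max_frame_size_in_mb", DEFAULT_MAX_FRAME_MB);
        return mb * 1024 * 1024;
    }
}
{code}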



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-5981) Netty frame length exception when storing data to Cassandra using binary protocol

2013-10-26 Thread Daniel Norberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806141#comment-13806141
 ] 

Daniel Norberg edited comment on CASSANDRA-5981 at 10/26/13 5:18 PM:
-

Looks to me like it might discard too much data if buffer.readableBytes() > 
MAX_FRAME_LENGTH. Unless I'm mistaken, this problem is also present in the 
original LengthFieldBasedFrameDecoder, though. [~norman], what do you say? 
Admittedly it's a corner case that's unlikely to be encountered in production.

Are there any tests for the dropping of too large requests?

Apart from this it looks good to me.


was (Author: danielnorberg):
Looks to me like it might discard too much data if buffer.readableBytes() > 
MAX_FRAME_LENGTH. Unless I'm mistaken, this problem is also present in the 
original LengthFieldBasedFrameDecoder, though. [~norman], what do you say? 
Admittedly it's a corner case that's unlikely to be encountered in production.

Are there any tests for the dropping of too large requests?

 Netty frame length exception when storing data to Cassandra using binary 
 protocol
 -

 Key: CASSANDRA-5981
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5981
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux, Java 7
Reporter: Justin Sweeney
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0.2

 Attachments: 0001-Correctly-catch-frame-too-long-exceptions.txt, 
 0002-Allow-to-configure-the-max-frame-length.txt, 5981-v2.txt


 Using Cassandra 1.2.8, I am running into an issue where, when I send a large 
 amount of data using the binary protocol, I get the following Netty exception 
 in the Cassandra log file:
 {quote}
 ERROR 09:08:35,845 Unexpected exception during request
 org.jboss.netty.handler.codec.frame.TooLongFrameException: Adjusted frame 
 length exceeds 268435456: 292413714 - discarded
 at 
 org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.fail(LengthFieldBasedFrameDecoder.java:441)
 at 
 org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.failIfNecessary(LengthFieldBasedFrameDecoder.java:412)
 at 
 org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:372)
 at org.apache.cassandra.transport.Frame$Decoder.decode(Frame.java:181)
 at 
 org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:422)
 at 
 org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
 at 
 org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
 at 
 org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
 at 
 org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:84)
 at 
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:472)
 at 
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:333)
 at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:722)
 {quote}
 I am using the DataStax driver and CQL to execute insert queries. The query 
 that is failing uses atomic batching to execute a large number of statements 
 (~55).
 Looking into the code a bit, I saw that in the 
 org.apache.cassandra.transport.Frame$Decoder class, MAX_FRAME_LENGTH is 
 hard-coded to 256 MB.
 Is this something that should be configurable or is this a hard limit that 
 will prevent batch statements of this size from executing for some reason?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6247) CAS updates should require P.MODIFY AND P.SELECT

2013-10-26 Thread Aleksey Yeschenko (JIRA)
Aleksey Yeschenko created CASSANDRA-6247:


 Summary: CAS updates should require P.MODIFY AND P.SELECT
 Key: CASSANDRA-6247
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6247
 Project: Cassandra
  Issue Type: Bug
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 2.0.3


With CAS it is possible to simulate a SELECT query using conditional UPDATE IF. 
Hence all CAS updates should require P.SELECT permission, and not just P.MODIFY.
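
To illustrate the concern with a sketch (DataStax Java driver; the keyspace, 
table, column, and values are made up): a failed conditional update returns 
the current value it checked, so a client holding only MODIFY permission can 
effectively read data.

{code:java}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class CasProbeSketch
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        try
        {
            Session session = cluster.connect("ks");
            // When the condition does not match, the result contains
            // [applied] = false plus the current value of the conditioned
            // column, which amounts to a read without SELECT permission.
            ResultSet rs = session.execute(
                    "UPDATE users SET name = 'x' WHERE id = 1 IF name = 'guess'");
            Row row = rs.one();
            if (!row.getBool("[applied]"))
                System.out.println("current value revealed: " + row.getString("name"));
        }
        finally
        {
            cluster.shutdown();
        }
    }
}
{code}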



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-3578) Multithreaded commitlog

2013-10-26 Thread Vijay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806193#comment-13806193
 ] 

Vijay commented on CASSANDRA-3578:
--

Hi Jonathan,

{quote}
 I must be missing where this gets persisted back to disk
{quote}
It's the first 4 bytes at the beginning of the file. Maybe we can get rid of it 
and stop when the size and checksum don't match?

But the header is pretty light, and it only needs one additional seek every 10 
seconds (it just marks the end of the file, at the beginning of the file, right 
before fsync).

{quote}
 I think allocate needs to write the length to the segment before returning
{quote}
The first thing a thread does after allocation is write the size and its 
checksum. Are we talking about synchronization in the allocation, so that only 
one thread writes the size and the end marker (-1)? Currently the only atomic 
operation is on the AtomicLong (position).

We might be able to do something similar to the current implementation, without 
headers, using a read/write lock: the write lock would ensure that we write the 
end marker (-1) just before fsync and that no one else overwrites it (though 
the OS can also flush the buffers before we force them)... That might not be 
desirable either, since it could stall the system like the current one does.

Not sure the header is that bad, though. Let me know what you think, thanks!
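
For context, a bare-bones sketch of the CAS-style space reservation being 
discussed (in the spirit of SlabAllocator.Region.allocate; the names and the 
size/checksum layout are illustrative, not the attached patch):

{code:java}
import java.util.concurrent.atomic.AtomicLong;
import java.util.zip.CRC32;

// Bare-bones sketch: multiple writer threads reserve space in a shared segment
// with a CAS on the tail position, then each writes its own size + checksum +
// payload into the reserved slice. Not the actual commit log code.
public class SegmentSketch
{
    private final byte[] buffer;          // stands in for the mmap'd segment
    private final AtomicLong position = new AtomicLong(0);

    public SegmentSketch(int capacity)
    {
        this.buffer = new byte[capacity];
    }

    /** Reserve 'size' bytes; returns the start offset, or -1 if the segment is full. */
    public long allocate(int size)
    {
        while (true)
        {
            long cur = position.get();
            if (cur + size > buffer.length)
                return -1; // caller would switch to a new segment
            if (position.compareAndSet(cur, cur + size))
                return cur;
        }
    }

    /** Each thread writes size, checksum and payload into its own reserved slice. */
    public void append(byte[] mutation)
    {
        int size = 4 + 8 + mutation.length;  // length + checksum + payload
        long offset = allocate(size);
        if (offset < 0)
            throw new IllegalStateException("segment full");
        CRC32 crc = new CRC32();
        crc.update(mutation);
        writeInt(offset, mutation.length);
        writeLong(offset + 4, crc.getValue());
        System.arraycopy(mutation, 0, buffer, (int) (offset + 12), mutation.length);
    }

    private void writeInt(long offset, int v)
    {
        for (int i = 0; i < 4; i++)
            buffer[(int) offset + i] = (byte) (v >>> (24 - 8 * i));
    }

    private void writeLong(long offset, long v)
    {
        for (int i = 0; i < 8; i++)
            buffer[(int) offset + i] = (byte) (v >>> (56 - 8 * i));
    }
}
{code}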

 Multithreaded commitlog
 ---

 Key: CASSANDRA-3578
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3578
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Vijay
Priority: Minor
  Labels: performance
 Attachments: 0001-CASSANDRA-3578.patch, ComitlogStress.java, 
 Current-CL.png, Multi-Threded-CL.png, parallel_commit_log_2.patch


 Brian Aker pointed out a while ago that allowing multiple threads to modify 
 the commitlog simultaneously (reserving space for each with a CAS first, the 
 way we do in the SlabAllocator.Region.allocate) can improve performance, 
 since you're not bottlenecking on a single thread to do all the copying and 
 CRC computation.
 Now that we use mmap'd CommitLog segments (CASSANDRA-3411) this becomes 
 doable.
 (moved from CASSANDRA-622, which was getting a bit muddled.)



--
This message was sent by Atlassian JIRA
(v6.1#6144)