[jira] [Comment Edited] (CASSANDRA-5605) Crash caused by insufficient disk space to flush

2013-07-10 Thread Ananth Gundabattula (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705442#comment-13705442
 ] 

Ananth Gundabattula edited comment on CASSANDRA-5605 at 7/11/13 5:54 AM:
-

I am not sure if the following information helps, but we too hit this issue in 
production today. We were running Cassandra 1.2.4 with two patches applied, 
CASSANDRA-5554 & CASSANDRA-5418. 

We were running with RF=3 and LCS. 

We ran into this issue while using sstableloader to push data from remote 
1.2.4 cluster nodes to another cluster.

We cross-checked via JMX whether blacklisting is the cause of this bug, and it 
looks like it is definitely not the case. 

We did, however, see a pile-up of pending compactions, roughly 1800 per node, 
when the node crashed. The surprising thing is that the "Insufficient disk 
space to write ... bytes" error appears well before the node crashes; for us it 
started appearing approximately 3 hours before the crash. 

The cluster that showed this behavior was taking a heavy write load (we were 
using multiple SSTableLoaders to stream data into it). We pushed in almost 
15 TB of data (including the RF=3 copies) in a matter of 16 hours. We were not 
serving any reads from this cluster, as we were still migrating data to it. 

Another interesting observation is that the affected nodes were neighbors most 
of the time. 

I am not sure if the above information helps, but I wanted to add it to the 
context of the ticket.  

  was (Author: agundabattula):
I am not sure if the following information helps, but we too hit this issue in 
production today. We were running Cassandra 1.2.4 with two patches applied, 
CASSANDRA-5554 & CASSANDRA-5418. 

We were running with RF=3 and LCS. 

We cross-checked via JMX whether blacklisting is the cause of this bug, and it 
looks like it is definitely not the case. 

We did, however, see a pile-up of pending compactions, roughly 1800 per node, 
when the node crashed. The surprising thing is that the "Insufficient disk 
space to write ... bytes" error appears well before the node crashes; for us it 
started appearing approximately 3 hours before the crash. 

The cluster that showed this behavior was taking a heavy write load (we were 
using multiple SSTableLoaders to stream data into it). We pushed in almost 
15 TB of data (including the RF=3 copies) in a matter of 16 hours. We were not 
serving any reads from this cluster, as we were still migrating data to it. 

Another interesting observation is that the affected nodes were neighbors most 
of the time. 

I am not sure if the above information helps, but I wanted to add it to the 
context of the ticket.  
  
> Crash caused by insufficient disk space to flush
> 
>
> Key: CASSANDRA-5605
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5605
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.2.3, 1.2.5
> Environment: java version "1.7.0_15"
>Reporter: Dan Hendry
>
> A few times now I have seen our Cassandra nodes crash by running themselves 
> out of memory. It starts with the following exception:
> {noformat}
> ERROR [FlushWriter:13000] 2013-05-31 11:32:02,350 CassandraDaemon.java (line 
> 164) Exception in thread Thread[FlushWriter:13000,5,main]
> java.lang.RuntimeException: Insufficient disk space to write 8042730 bytes
> at 
> org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:42)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:722)
> {noformat} 
> After which, it seems the MemtablePostFlusher stage gets stuck and no further 
> memtables get flushed: 
> {noformat} 
> INFO [ScheduledTasks:1] 2013-05-31 11:59:12,467 StatusLogger.java (line 68) 
> MemtablePostFlusher   132 0
> INFO [ScheduledTasks:1] 2013-05-31 11:59:12,469 StatusLogger.java (line 73) 
> CompactionManager 1 2
> {noformat} 
> What makes this ridiculous is that, at the time, the data directory on this 
> node had 981GB free disk space (as reported by du). We primarily use STCS and 
> at the time the aforementioned exception occurred, at least one compaction 
> task was executing which could have easily involved 981GB (or more) worth of 
> input SSTables. Correct me if I am wrong, but Cassandra counts data 
> currently being compacted against available disk space. In our case, this is 
> a significant overestimation of the space required by compaction since a 
> 

[jira] [Commented] (CASSANDRA-5605) Crash caused by insufficient disk space to flush

2013-07-10 Thread Ananth Gundabattula (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705442#comment-13705442
 ] 

Ananth Gundabattula commented on CASSANDRA-5605:


I am not sure if the following information helps, but we too hit this issue in 
production today. We were running Cassandra 1.2.4 with two patches applied, 
CASSANDRA-5554 & CASSANDRA-5418. 

We were running with RF=3 and LCS. 

We cross-checked via JMX whether blacklisting is the cause of this bug, and it 
looks like it is definitely not the case. 

We did, however, see a pile-up of pending compactions, roughly 1800 per node, 
when the node crashed. The surprising thing is that the "Insufficient disk 
space to write ... bytes" error appears well before the node crashes; for us it 
started appearing approximately 3 hours before the crash. 

The cluster that showed this behavior was taking a heavy write load (we were 
using multiple SSTableLoaders to stream data into it). We pushed in almost 
15 TB of data (including the RF=3 copies) in a matter of 16 hours. We were not 
serving any reads from this cluster, as we were still migrating data to it. 

Another interesting observation is that the affected nodes were neighbors most 
of the time. 

I am not sure if the above information helps, but I wanted to add it to the 
context of the ticket.  
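
The JMX cross-check mentioned above can be scripted. Below is a minimal sketch 
that reads the pending compaction count over JMX, assuming the default JMX port 
(7199) and an org.apache.cassandra.db:type=CompactionManager MBean with a 
PendingTasks attribute; both names are assumptions and should be verified 
against the running version (e.g. in jconsole) first.

{noformat}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class PendingCompactionCheck
{
    public static void main(String[] args) throws Exception
    {
        // Assumes the default Cassandra JMX endpoint; adjust host/port as needed.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try
        {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // MBean and attribute names are assumptions; check them before relying on this.
            ObjectName compactionManager =
                    new ObjectName("org.apache.cassandra.db:type=CompactionManager");
            Object pending = mbs.getAttribute(compactionManager, "PendingTasks");
            System.out.println("Pending compactions: " + pending);
        }
        finally
        {
            connector.close();
        }
    }
}
{noformat}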

> Crash caused by insufficient disk space to flush
> 
>
> Key: CASSANDRA-5605
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5605
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.2.3, 1.2.5
> Environment: java version "1.7.0_15"
>Reporter: Dan Hendry
>
> A few times now I have seen our Cassandra nodes crash by running themselves 
> out of memory. It starts with the following exception:
> {noformat}
> ERROR [FlushWriter:13000] 2013-05-31 11:32:02,350 CassandraDaemon.java (line 
> 164) Exception in thread Thread[FlushWriter:13000,5,main]
> java.lang.RuntimeException: Insufficient disk space to write 8042730 bytes
> at 
> org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:42)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:722)
> {noformat} 
> After which, it seems the MemtablePostFlusher stage gets stuck and no further 
> memtables get flushed: 
> {noformat} 
> INFO [ScheduledTasks:1] 2013-05-31 11:59:12,467 StatusLogger.java (line 68) 
> MemtablePostFlusher   132 0
> INFO [ScheduledTasks:1] 2013-05-31 11:59:12,469 StatusLogger.java (line 73) 
> CompactionManager 1 2
> {noformat} 
> What makes this ridiculous is that, at the time, the data directory on this 
> node had 981GB free disk space (as reported by du). We primarily use STCS and 
> at the time the aforementioned exception occurred, at least one compaction 
> task was executing which could have easily involved 981GB (or more) worth of 
> input SSTables. Correct me if I am wrong, but Cassandra counts data 
> currently being compacted against available disk space. In our case, this is 
> a significant overestimation of the space required by compaction since a 
> large portion of the data being compacted has expired or is an overwrite.
> More to the point though, Cassandra should not crash because it's out of disk 
> space unless it's really, actually out of disk space (i.e., don't consider 
> 'phantom' compaction disk usage when flushing). I have seen one of our nodes 
> die in this way before our alerts for disk space even went off.
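
The failure mode described above amounts to a flush-time free-space check that 
also reserves space for in-flight compactions. The sketch below is purely 
illustrative accounting, not Cassandra's actual DiskAwareRunnable code, and the 
numbers are hypothetical values mirroring the report.

{noformat}
public class FlushSpaceCheckSketch
{
    // Illustrative only: reject a flush when (free space - space reserved for
    // running compactions) is smaller than the flush size. If the reservation
    // overestimates the real compaction output (expired or overwritten data),
    // the flush is refused even though the disk has plenty of room.
    static void checkBeforeFlush(long freeBytes, long flushBytes, long reservedForCompactions)
    {
        if (freeBytes - reservedForCompactions < flushBytes)
            throw new RuntimeException("Insufficient disk space to write " + flushBytes + " bytes");
    }

    public static void main(String[] args)
    {
        long free = 981L << 30;     // ~981 GB genuinely free on disk
        long reserved = 980L << 30; // space "reserved" for a large running compaction
        checkBeforeFlush(free, 8_042_730L, reserved); // throws despite ample real free space
    }
}
{noformat}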

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-5733) json2sstable can not read from a pipe even if -n and -s are specified.

2013-07-10 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705266#comment-13705266
 ] 

Dave Brosius edited comment on CASSANDRA-5733 at 7/11/13 12:56 AM:
---

The patch is generally fine; however:

1) you should probably error out on "-" without -s -n
2) please add a close() on 'parser' before reopening it, and probably also a 
close at the end in a finally block... yes, not your issue, it's pre-existing 
(see the sketch after the quoted description below)
3) what's the point of using contentEquals over equals?

Formatting:

4) Cassandra puts curly braces on the next line down
5) no spaces between ) )
6) there appear to be indentation offset problems

  was (Author: dbrosius):
The patch is generally fine; however:

1) you should probably error out on "-" without -s -n
2) please add a close() on 'parser' before reopening it, and probably also a 
close at the end in a finally block... yes, not your issue, it's pre-existing.

Formatting:

3) Cassandra puts curly braces on the next line down
4) no spaces between ) )
5) there appear to be indentation offset problems
  
> json2sstable can not read from a pipe even if -n and -s are specified.
> --
>
> Key: CASSANDRA-5733
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5733
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Affects Versions: 1.2.6
>Reporter: Steven Lowenthal
>Assignee: Steven Lowenthal
>Priority: Minor
> Attachments: 5733.txt
>
>
> SSTableImport.importSorted always opens the file twice, even if the number of 
> keys is specified.  I changed this to open the file a second time only when 
> -n is not specified.
> I moved the second parser = getParser ... call inside the if 
> (keyCountToImport == null) block.
> if (keyCountToImport == null)
> {
> keyCountToImport = 0;
> System.out.println("Counting keys to import, please wait... 
> (NOTE: to skip this use -n )");
> parser.nextToken(); // START_ARRAY
> while (parser.nextToken() != null)
> {
> parser.skipChildren();
> if (parser.getCurrentToken() == JsonToken.END_ARRAY)
> break;
> keyCountToImport++;
> }
>   parser = getParser(jsonFile); // renewing parser only if we read 
> the file already - to support streaming.
> }
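
Regarding review point 2 above, here is a minimal sketch of the suggested 
change: close the parser before reopening it, and close it again in a finally 
block. This is a fragment rather than the actual patch; getParser(...) and 
keyCountToImport are assumed to behave as in the quoted description.

{noformat}
JsonParser parser = getParser(jsonFile);
try
{
    if (keyCountToImport == null)
    {
        keyCountToImport = 0;
        System.out.println("Counting keys to import, please wait... (NOTE: to skip this use -n)");
        parser.nextToken(); // START_ARRAY
        while (parser.nextToken() != null)
        {
            parser.skipChildren();
            if (parser.getCurrentToken() == JsonToken.END_ARRAY)
                break;
            keyCountToImport++;
        }
        parser.close();               // release the counting pass before reopening
        parser = getParser(jsonFile); // reopened only because we had to count keys
    }
    // ... import rows using 'parser' ...
}
finally
{
    parser.close(); // always release the underlying stream
}
{noformat}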

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5733) json2sstable can not read from a pipe even if -n and -s are specified.

2013-07-10 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705266#comment-13705266
 ] 

Dave Brosius commented on CASSANDRA-5733:
-

The patch is generally fine; however:

1) you should probably error out on "-" without -s -n
2) please add a close() on 'parser' before reopening it, and probably also a 
close at the end in a finally block... yes, not your issue, it's pre-existing.

Formatting:

3) Cassandra puts curly braces on the next line down
4) no spaces between ) )
5) there appear to be indentation offset problems

> json2sstable can not read from a pipe even if -n and -s are specified.
> --
>
> Key: CASSANDRA-5733
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5733
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Affects Versions: 1.2.6
>Reporter: Steven Lowenthal
>Assignee: Steven Lowenthal
>Priority: Minor
> Attachments: 5733.txt
>
>
> SSTableImport.importSorted always opens the file twice, even if the number of 
> keys is specified.  I changed this to open the file a second time only when 
> -n is not specified.
> I moved the second parser = getParser ... call inside the if 
> (keyCountToImport == null) block.
> if (keyCountToImport == null)
> {
> keyCountToImport = 0;
> System.out.println("Counting keys to import, please wait... 
> (NOTE: to skip this use -n )");
> parser.nextToken(); // START_ARRAY
> while (parser.nextToken() != null)
> {
> parser.skipChildren();
> if (parser.getCurrentToken() == JsonToken.END_ARRAY)
> break;
> keyCountToImport++;
> }
>   parser = getParser(jsonFile); // renewing parser only if we read 
> the file already - to support streaming.
> }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5520) Query tracing session info inconsistent with events info

2013-07-10 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705212#comment-13705212
 ] 

Jonathan Ellis commented on CASSANDRA-5520:
---

Can you also post a version against trunk?  Unfortunately the merge is not 
trivial.

> Query tracing session info inconsistent with events info
> 
>
> Key: CASSANDRA-5520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5520
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Affects Versions: 1.2.0
> Environment: Linux
>Reporter: Ilya Kirnos
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 1.2.7
>
> Attachments: 5520-v1.txt, 5520-v2.txt, 5520-v3.txt, 5520-v4.txt
>
>
> Session info for a trace is showing that a query took > 10 seconds (it timed 
> out).
> {noformat}
> cqlsh:system_traces> select session_id, duration, request from sessions where 
> session_id = c7e36a30-af3a-11e2-9ec9-772ec39805fe;
> session_id | duration | request
> 
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | 1230 | multiget_slice
> However, the event-level breakdown shows no such large duration:
> cqlsh:system_traces> select * from events where session_id = 
> c7e36a30-af3a-11e2-9ec9-772ec39805fe;
> session_id | event_id | activity | source | source_elapsed | thread
> -+---
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e36a30-af3a-11e2-9480-e9d811e0fc18 | 
> Message received from /xxx.xxx.90.147 | xxx.xxx.4.16 | 19 | Thread-57
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e36a31-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /xxx.xxx.153.16 | xxx.xxx.90.147 | 246 | 
> WRITE-/xxx.xxx.4.16
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39140-af3a-11e2-9480-e9d811e0fc18 | 
> Message received from /xxx.xxx.90.147 | xxx.xxx.4.16 | 259 | Thread-57
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39140-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /10.248.106.37 | xxx.xxx.90.147 | 253 | 
> WRITE-/xxx.xxx.79.52
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39140-af3a-11e2-b8dc-a7032a583115 | 
> Message received from /xxx.xxx.90.147 | xxx.xxx.213.136 | 25 | Thread-94
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39141-af3a-11e2-9480-e9d811e0fc18 | 
> Executing single-partition query on CardHash | xxx.xxx.4.16 | 421 | 
> ReadStage:5329
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39141-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /xxx.xxx.151.214 | xxx.xxx.90.147 | 310 | 
> WRITE-/xxx.xxx.213.136
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39141-af3a-11e2-b8dc-a7032a583115 | 
> Message received from /xxx.xxx.90.147 | xxx.xxx.213.136 | 106 | Thread-94
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39142-af3a-11e2-9480-e9d811e0fc18 | 
> Acquiring sstable references | xxx.xxx.4.16 | 444 | ReadStage:5329
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39142-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /10.248.106.37 | xxx.xxx.90.147 | 352 | 
> WRITE-/xxx.xxx.79.52
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39142-af3a-11e2-b8dc-a7032a583115 | 
> Executing single-partition query on CardHash | xxx.xxx.213.136 | 144 | 
> ReadStage:11
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39143-af3a-11e2-9480-e9d811e0fc18 | 
> Merging memtable contents | xxx.xxx.4.16 | 472 | ReadStage:5329
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39143-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /10.248.95.237 | xxx.xxx.90.147 | 362 | 
> WRITE-/xxx.xxx.201.218
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39143-af3a-11e2-b8dc-a7032a583115 | 
> Acquiring sstable references | xxx.xxx.213.136 | 164 | ReadStage:11
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39144-af3a-11e2-9480-e9d811e0fc18 | 
> Merging data from memtables and 0 sstables | xxx.xxx.4.16 | 510 | 
> ReadStage:5329
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39144-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /xxx.xxx.151.214 | xxx.xxx.90.147 | 376 | 
> WRITE-/xxx.xxx.213.136
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39144-af3a-11e2-b8dc-a7032a583115 | 
> Merging memtable contents | xxx.xxx.213.136 | 195 | ReadStage:11
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39145-af3a-11e2-9480-e9d811e0fc18 | 
> Read 0 live cells and 0 tombstoned | xxx.xxx.4.16 | 530 | ReadStage:5329
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39145-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /10.248.95.237 | xxx.xxx.90.147 | 401 | 
> WRITE-/xxx.xxx.201.218
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39145-af3a-11e2-b8dc-a7032a583115 | 
> Executing single-partition query on CardHash | xxx.xxx.213.136 | 202 | 
> ReadStage:4

[jira] [Updated] (CASSANDRA-5520) Query tracing session info inconsistent with events info

2013-07-10 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5520:
--

  Component/s: Tools
Affects Version/s: (was: 1.2.4)
   1.2.0
Fix Version/s: 1.2.7

> Query tracing session info inconsistent with events info
> 
>
> Key: CASSANDRA-5520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5520
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Affects Versions: 1.2.0
> Environment: Linux
>Reporter: Ilya Kirnos
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 1.2.7
>
> Attachments: 5520-v1.txt, 5520-v2.txt, 5520-v3.txt, 5520-v4.txt
>
>
> Session info for a trace is showing that a query took > 10 seconds (it timed 
> out).
> {noformat}
> cqlsh:system_traces> select session_id, duration, request from sessions where 
> session_id = c7e36a30-af3a-11e2-9ec9-772ec39805fe;
> session_id | duration | request
> 
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | 1230 | multiget_slice
> However, the event-level breakdown shows no such large duration:
> cqlsh:system_traces> select * from events where session_id = 
> c7e36a30-af3a-11e2-9ec9-772ec39805fe;
> session_id | event_id | activity | source | source_elapsed | thread
> -+---
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e36a30-af3a-11e2-9480-e9d811e0fc18 | 
> Message received from /xxx.xxx.90.147 | xxx.xxx.4.16 | 19 | Thread-57
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e36a31-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /xxx.xxx.153.16 | xxx.xxx.90.147 | 246 | 
> WRITE-/xxx.xxx.4.16
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39140-af3a-11e2-9480-e9d811e0fc18 | 
> Message received from /xxx.xxx.90.147 | xxx.xxx.4.16 | 259 | Thread-57
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39140-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /10.248.106.37 | xxx.xxx.90.147 | 253 | 
> WRITE-/xxx.xxx.79.52
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39140-af3a-11e2-b8dc-a7032a583115 | 
> Message received from /xxx.xxx.90.147 | xxx.xxx.213.136 | 25 | Thread-94
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39141-af3a-11e2-9480-e9d811e0fc18 | 
> Executing single-partition query on CardHash | xxx.xxx.4.16 | 421 | 
> ReadStage:5329
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39141-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /xxx.xxx.151.214 | xxx.xxx.90.147 | 310 | 
> WRITE-/xxx.xxx.213.136
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39141-af3a-11e2-b8dc-a7032a583115 | 
> Message received from /xxx.xxx.90.147 | xxx.xxx.213.136 | 106 | Thread-94
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39142-af3a-11e2-9480-e9d811e0fc18 | 
> Acquiring sstable references | xxx.xxx.4.16 | 444 | ReadStage:5329
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39142-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /10.248.106.37 | xxx.xxx.90.147 | 352 | 
> WRITE-/xxx.xxx.79.52
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39142-af3a-11e2-b8dc-a7032a583115 | 
> Executing single-partition query on CardHash | xxx.xxx.213.136 | 144 | 
> ReadStage:11
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39143-af3a-11e2-9480-e9d811e0fc18 | 
> Merging memtable contents | xxx.xxx.4.16 | 472 | ReadStage:5329
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39143-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /10.248.95.237 | xxx.xxx.90.147 | 362 | 
> WRITE-/xxx.xxx.201.218
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39143-af3a-11e2-b8dc-a7032a583115 | 
> Acquiring sstable references | xxx.xxx.213.136 | 164 | ReadStage:11
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39144-af3a-11e2-9480-e9d811e0fc18 | 
> Merging data from memtables and 0 sstables | xxx.xxx.4.16 | 510 | 
> ReadStage:5329
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39144-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /xxx.xxx.151.214 | xxx.xxx.90.147 | 376 | 
> WRITE-/xxx.xxx.213.136
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39144-af3a-11e2-b8dc-a7032a583115 | 
> Merging memtable contents | xxx.xxx.213.136 | 195 | ReadStage:11
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39145-af3a-11e2-9480-e9d811e0fc18 | 
> Read 0 live cells and 0 tombstoned | xxx.xxx.4.16 | 530 | ReadStage:5329
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39145-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /10.248.95.237 | xxx.xxx.90.147 | 401 | 
> WRITE-/xxx.xxx.201.218
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39145-af3a-11e2-b8dc-a7032a583115 | 
> Executing single-partition query on CardHash | xxx.xxx.213.136 | 202 | 
> ReadStage:41
> c7e36a30-af3a-11e2-9ec9

[jira] [Commented] (CASSANDRA-5746) HHOM.countPendingHints is a trap for the unwary

2013-07-10 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705205#comment-13705205
 ] 

Jonathan Ellis commented on CASSANDRA-5746:
---

If we want it to be fast enough for monitoring use, we need to denormalize the 
count.  So the question is, do we use a Counters table, a normal int table with 
manual lock-and-read-before-write, or just one-off it with AtomicInteger and 
sync to disk occasionally? 

Personally I'd be inclined towards the last; it's okay if we under- or over-count 
(because of periodic CL sync, for instance), as long as we reliably distinguish 
between zero and non-zero hints for a given target.  We could sanity-check that 
with an approach like listEndpointsPendingHints on startup, which would be 
fairly low-overhead.
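
Below is a minimal sketch of the third option (per-target AtomicInteger 
counters with an occasional sync), purely illustrative rather than the actual 
implementation; the persistence hook is a placeholder.

{noformat}
import java.net.InetAddress;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only: keep an in-memory pending-hint count per target endpoint,
// tolerate small drift, and persist the counts occasionally so a restart can
// still distinguish "zero hints" from "some hints" for a target.
public class HintCountSketch
{
    private final Map<InetAddress, AtomicInteger> pending = new ConcurrentHashMap<>();

    public void hintStored(InetAddress target)
    {
        AtomicInteger count = pending.get(target);
        if (count == null)
        {
            AtomicInteger fresh = new AtomicInteger();
            count = pending.putIfAbsent(target, fresh);
            if (count == null)
                count = fresh;
        }
        count.incrementAndGet();
    }

    public void hintDelivered(InetAddress target)
    {
        AtomicInteger count = pending.get(target);
        if (count != null)
            count.decrementAndGet();
    }

    public int pendingFor(InetAddress target)
    {
        AtomicInteger count = pending.get(target);
        return count == null ? 0 : Math.max(0, count.get());
    }

    // Called from a periodic task; persistence is a placeholder, the point is
    // only that an occasional sync keeps the counts approximately durable.
    public void syncToDisk()
    {
        // e.g. write pending.entrySet() to a small system table or local file
    }
}
{noformat}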

> HHOM.countPendingHints is a trap for the unwary
> ---
>
> Key: CASSANDRA-5746
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5746
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, Tools
>Reporter: Jonathan Ellis
>Assignee: Tyler Hobbs
> Fix For: 2.0.1
>
>
> countPendingHints can OOM the server fairly easily since it does a per-target 
> seq scan without paging.
> More generally, countPendingHints is far too slow to be useful for routine 
> monitoring.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5746) HHOM.countPendingHints is a trap for the unwary

2013-07-10 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-5746:
-

 Summary: HHOM.countPendingHints is a trap for the unwary
 Key: CASSANDRA-5746
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5746
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Tools
Reporter: Jonathan Ellis
Assignee: Tyler Hobbs
 Fix For: 2.0.1


countPendingHints can OOM the server fairly easily since it does a per-target 
seq scan without paging.

More generally, countPendingHints is far too slow to be useful for routine 
monitoring.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5745) Minor compaction tombstone-removal deadlock

2013-07-10 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705195#comment-13705195
 ] 

Jonathan Ellis commented on CASSANDRA-5745:
---

One solution occurs to me for LCS: if instead of doing no compactions when 
every level is under its quota, we continue pushing data from L1 higher as the 
lowest priority task, then our end state would be exactly one sstable per 
partition, in the highest level, which implicitly solves this problem.

Alternatively, we could give LCS users a "full compaction" lever, which would 
do a one-off compaction of everything into the highest level (still split into 
LCS-sized sstables, so it wouldn't be as evil as STCS full compaction).

WDYT [~krummas] [~yukim]?

> Minor compaction tombstone-removal deadlock
> ---
>
> Key: CASSANDRA-5745
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5745
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jonathan Ellis
> Fix For: 2.0.1
>
>
> From a discussion with Axel Liljencrantz,
> If you have two SSTables that have temporally overlapping data, you can get 
> lodged into a state where a compaction of SSTable A can't drop tombstones 
> because SSTable B contains older data *and vice versa*. Once that's happened, 
> Cassandra should be wedged into a state where CASSANDRA-4671 no longer helps 
> with tombstone removal. The only way to break the wedge would be to perform a 
> compaction containing both SSTable A and SSTable B. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5745) Minor compaction tombstone-removal deadlock

2013-07-10 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-5745:
-

 Summary: Minor compaction tombstone-removal deadlock
 Key: CASSANDRA-5745
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5745
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
 Fix For: 2.0.1


From a discussion with Axel Liljencrantz,

If you have two SSTables that have temporally overlapping data, you can get 
lodged into a state where a compaction of SSTable A can't drop tombstones 
because SSTable B contains older data *and vice versa*. Once that's happened, 
Cassandra should be wedged into a state where CASSANDRA-4671 no longer helps 
with tombstone removal. The only way to break the wedge would be to perform a 
compaction containing both SSTable A and SSTable B. 
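
Below is a small, self-contained illustration of the mutual blocking described 
above, using a deliberately simplified model in which an sstable is reduced to 
a token range plus its oldest write timestamp; this is not Cassandra's code.

{noformat}
import java.util.Arrays;
import java.util.List;

public class TombstoneDeadlockSketch
{
    // Simplified model of an sstable: the token range it covers and the oldest
    // write timestamp it contains.
    static final class SSTableSummary
    {
        final long firstToken, lastToken, minTimestamp;
        SSTableSummary(long firstToken, long lastToken, long minTimestamp)
        {
            this.firstToken = firstToken;
            this.lastToken = lastToken;
            this.minTimestamp = minTimestamp;
        }
    }

    // A tombstone written at tombstoneTimestamp can only be purged while compacting
    // 'compacting' if no sstable outside the compaction overlaps it with older data.
    static boolean canPurge(SSTableSummary compacting, List<SSTableSummary> others, long tombstoneTimestamp)
    {
        for (SSTableSummary other : others)
        {
            boolean overlaps = other.firstToken <= compacting.lastToken
                            && other.lastToken >= compacting.firstToken;
            if (overlaps && other.minTimestamp < tombstoneTimestamp)
                return false; // older overlapping data elsewhere: keep the tombstone
        }
        return true;
    }

    public static void main(String[] args)
    {
        SSTableSummary a = new SSTableSummary(0, 100, 10);   // A holds data from t=10
        SSTableSummary b = new SSTableSummary(50, 150, 20);  // B holds data from t=20
        // Compacting A alone: its tombstone written at t=30 is blocked by B's older data.
        System.out.println(canPurge(a, Arrays.asList(b), 30)); // false
        // Compacting B alone: its tombstone written at t=25 is blocked by A's older data.
        System.out.println(canPurge(b, Arrays.asList(a), 25)); // false
        // Neither minor compaction alone can purge; only compacting A and B together can.
    }
}
{noformat}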

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5733) json2sstable can not read from a pipe even if -n and -s are specified.

2013-07-10 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5733:
--

Assignee: Steven Lowenthal

> json2sstable can not read from a pipe even if -n and -s are specified.
> --
>
> Key: CASSANDRA-5733
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5733
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Affects Versions: 1.2.6
>Reporter: Steven Lowenthal
>Assignee: Steven Lowenthal
>Priority: Minor
> Attachments: 5733.txt
>
>
> SSTableImport.importSorted always opens the file twice, even if the number of 
> keys is specified.  I changed this to open the file a second time only when 
> -n is not specified.
> I moved the second parser = getParser ... call inside the if 
> (keyCountToImport == null) block.
> if (keyCountToImport == null)
> {
> keyCountToImport = 0;
> System.out.println("Counting keys to import, please wait... 
> (NOTE: to skip this use -n )");
> parser.nextToken(); // START_ARRAY
> while (parser.nextToken() != null)
> {
> parser.skipChildren();
> if (parser.getCurrentToken() == JsonToken.END_ARRAY)
> break;
> keyCountToImport++;
> }
>   parser = getParser(jsonFile); // renewing parser only if we read 
> the file already - to support streaming.
> }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5733) json2sstable can not read from a pipe even if -n and -s are specified.

2013-07-10 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5733:
--

Reviewer: dbrosius

(Adding [~dbrosius] as reviewer.)

> json2sstable can not read from a pipe even if -n and -s are specified.
> --
>
> Key: CASSANDRA-5733
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5733
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Affects Versions: 1.2.6
>Reporter: Steven Lowenthal
>Priority: Minor
> Attachments: 5733.txt
>
>
> SSTableImport.importSorted always opens the file twice, even if the number of 
> keys is specified.  I changed this to open the file a second time only when 
> -n is not specified.
> I moved the second parser = getParser ... call inside the if 
> (keyCountToImport == null) block.
> if (keyCountToImport == null)
> {
> keyCountToImport = 0;
> System.out.println("Counting keys to import, please wait... 
> (NOTE: to skip this use -n )");
> parser.nextToken(); // START_ARRAY
> while (parser.nextToken() != null)
> {
> parser.skipChildren();
> if (parser.getCurrentToken() == JsonToken.END_ARRAY)
> break;
> keyCountToImport++;
> }
>   parser = getParser(jsonFile); // renewing parser only if we read 
> the file already - to support streaming.
> }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5171) Save EC2Snitch topology information in system table

2013-07-10 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705018#comment-13705018
 ] 

Jason Brown commented on CASSANDRA-5171:


damn jira hotkeys - sorry this got reassigned (to me, and back), Vijay :)

> Save EC2Snitch topology information in system table
> ---
>
> Key: CASSANDRA-5171
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5171
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 0.7.1
> Environment: EC2
>Reporter: Vijay
>Assignee: Vijay
>Priority: Critical
> Fix For: 2.0
>
> Attachments: 0001-CASSANDRA-5171.patch, 0001-CASSANDRA-5171-v2.patch
>
>
> EC2Snitch currently waits for the Gossip information to understand the 
> cluster information every time we restart. It would be nice to use the already 
> available system table info, similar to GPFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (CASSANDRA-5171) Save EC2Snitch topology information in system table

2013-07-10 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown reassigned CASSANDRA-5171:
--

Assignee: Jason Brown  (was: Vijay)

> Save EC2Snitch topology information in system table
> ---
>
> Key: CASSANDRA-5171
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5171
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 0.7.1
> Environment: EC2
>Reporter: Vijay
>Assignee: Jason Brown
>Priority: Critical
> Fix For: 2.0
>
> Attachments: 0001-CASSANDRA-5171.patch, 0001-CASSANDRA-5171-v2.patch
>
>
> EC2Snitch currently waits for the Gossip information to understand the 
> cluster information every time we restart. It would be nice to use the already 
> available system table info, similar to GPFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5171) Save EC2Snitch topology information in system table

2013-07-10 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-5171:
---

Assignee: Vijay  (was: Jason Brown)

> Save EC2Snitch topology information in system table
> ---
>
> Key: CASSANDRA-5171
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5171
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 0.7.1
> Environment: EC2
>Reporter: Vijay
>Assignee: Vijay
>Priority: Critical
> Fix For: 2.0
>
> Attachments: 0001-CASSANDRA-5171.patch, 0001-CASSANDRA-5171-v2.patch
>
>
> EC2Snitch currently waits for the Gossip information to understand the 
> cluster information every time we restart. It would be nice to use the already 
> available system table info, similar to GPFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5733) json2sstable can not read from a pipe even if -n and -s are specified.

2013-07-10 Thread Steven Lowenthal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Lowenthal updated CASSANDRA-5733:


Attachment: 5733.txt

> json2sstable can not read from a pipe even if -n and -s are specified.
> --
>
> Key: CASSANDRA-5733
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5733
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Affects Versions: 1.2.6
>Reporter: Steven Lowenthal
>Priority: Minor
> Attachments: 5733.txt
>
>
> SSTableImport.importSorted always opens the file twice, even if the number of 
> keys is specified.  I changed this to open the file a second time only when 
> -n is not specified.
> I moved the second parser = getParser ... call inside the if 
> (keyCountToImport == null) block.
> if (keyCountToImport == null)
> {
> keyCountToImport = 0;
> System.out.println("Counting keys to import, please wait... 
> (NOTE: to skip this use -n )");
> parser.nextToken(); // START_ARRAY
> while (parser.nextToken() != null)
> {
> parser.skipChildren();
> if (parser.getCurrentToken() == JsonToken.END_ARRAY)
> break;
> keyCountToImport++;
> }
>   parser = getParser(jsonFile); // renewing parser only if we read 
> the file already - to support streaming.
> }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5733) json2sstable can not read from a pipe even if -n and -s are specified.

2013-07-10 Thread Steven Lowenthal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705016#comment-13705016
 ] 

Steven Lowenthal commented on CASSANDRA-5733:
-

Also, with these options, it's easy for the user to specify invalid 
combinations of options, like "-" for the file when -s -n isn't specified, etc.  
To what extent do we want to be a nanny?

> json2sstable can not read from a pipe even if -n and -s are specified.
> --
>
> Key: CASSANDRA-5733
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5733
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Affects Versions: 1.2.6
>Reporter: Steven Lowenthal
>Priority: Minor
> Attachments: 5733.txt
>
>
> SSTableImport.importSorted always opens the file twice, even if the number of 
> keys is specified.  I changed this to open the file a second time only when 
> -n is not specified.
> I moved the second parser = getParser ... call inside the if 
> (keyCountToImport == null) block.
> if (keyCountToImport == null)
> {
> keyCountToImport = 0;
> System.out.println("Counting keys to import, please wait... 
> (NOTE: to skip this use -n )");
> parser.nextToken(); // START_ARRAY
> while (parser.nextToken() != null)
> {
> parser.skipChildren();
> if (parser.getCurrentToken() == JsonToken.END_ARRAY)
> break;
> keyCountToImport++;
> }
>   parser = getParser(jsonFile); // renewing parser only if we read 
> the file already - to support streaming.
> }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5733) json2sstable can not read from a pipe even if -n and -s are specified.

2013-07-10 Thread Steven Lowenthal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705010#comment-13705010
 ] 

Steven Lowenthal commented on CASSANDRA-5733:
-

I also added a tiny bit of code so that if the filename is "-", it will read 
from stdin (following the convention of tar and some other tools).  I'm now 
spewing sstables like there's no tomorrow.
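
A minimal sketch of that convention, with a hypothetical helper name rather 
than the actual patch:

{noformat}
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;

public class JsonSourceSketch
{
    // Treat a file name of "-" as "read from standard input", like tar and
    // other command-line tools do; otherwise open the named file.
    static Reader openJsonSource(String jsonFile) throws IOException
    {
        if ("-".equals(jsonFile))
            return new BufferedReader(new InputStreamReader(System.in));
        return new BufferedReader(new FileReader(jsonFile));
    }
}
{noformat}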

> json2sstable can not read from a pipe even if -n and -s are specified.
> --
>
> Key: CASSANDRA-5733
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5733
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Affects Versions: 1.2.6
>Reporter: Steven Lowenthal
>Priority: Minor
>
> SSTableImport.importSorted always opens the file twice, even if the number of 
> keys is specified.  I changed this to open the file a second time only when 
> -n is not specified.
> I moved the second parser = getParser ... call inside the if 
> (keyCountToImport == null) block.
> if (keyCountToImport == null)
> {
> keyCountToImport = 0;
> System.out.println("Counting keys to import, please wait... 
> (NOTE: to skip this use -n )");
> parser.nextToken(); // START_ARRAY
> while (parser.nextToken() != null)
> {
> parser.skipChildren();
> if (parser.getCurrentToken() == JsonToken.END_ARRAY)
> break;
> keyCountToImport++;
> }
>   parser = getParser(jsonFile); // renewing parser only if we read 
> the file already - to support streaming.
> }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5171) Save EC2Snitch topology information in system table

2013-07-10 Thread Vijay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704784#comment-13704784
 ] 

Vijay commented on CASSANDRA-5171:
--

Thanks Jason!

> Save EC2Snitch topology information in system table
> ---
>
> Key: CASSANDRA-5171
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5171
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 0.7.1
> Environment: EC2
>Reporter: Vijay
>Assignee: Vijay
>Priority: Critical
> Fix For: 2.0
>
> Attachments: 0001-CASSANDRA-5171.patch, 0001-CASSANDRA-5171-v2.patch
>
>
> EC2Snitch currently waits for the Gossip information to understand the 
> cluster information every time we restart. It would be nice to use the already 
> available system table info, similar to GPFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5520) Query tracing session info inconsistent with events info

2013-07-10 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-5520:
---

Attachment: 5520-v4.txt

v4 patch removes debug logs from CassandraServer and includes the fixes from v3.

> Query tracing session info inconsistent with events info
> 
>
> Key: CASSANDRA-5520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5520
> Project: Cassandra
>  Issue Type: Improvement
>Affects Versions: 1.2.4
> Environment: Linux
>Reporter: Ilya Kirnos
>Assignee: Tyler Hobbs
>Priority: Minor
> Attachments: 5520-v1.txt, 5520-v2.txt, 5520-v3.txt, 5520-v4.txt
>
>
> Session info for a trace is showing that a query took > 10 seconds (it timed 
> out).
> {noformat}
> cqlsh:system_traces> select session_id, duration, request from sessions where 
> session_id = c7e36a30-af3a-11e2-9ec9-772ec39805fe;
> session_id | duration | request
> 
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | 1230 | multiget_slice
> However, the event-level breakdown shows no such large duration:
> cqlsh:system_traces> select * from events where session_id = 
> c7e36a30-af3a-11e2-9ec9-772ec39805fe;
> session_id | event_id | activity | source | source_elapsed | thread
> -+---
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e36a30-af3a-11e2-9480-e9d811e0fc18 | 
> Message received from /xxx.xxx.90.147 | xxx.xxx.4.16 | 19 | Thread-57
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e36a31-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /xxx.xxx.153.16 | xxx.xxx.90.147 | 246 | 
> WRITE-/xxx.xxx.4.16
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39140-af3a-11e2-9480-e9d811e0fc18 | 
> Message received from /xxx.xxx.90.147 | xxx.xxx.4.16 | 259 | Thread-57
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39140-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /10.248.106.37 | xxx.xxx.90.147 | 253 | 
> WRITE-/xxx.xxx.79.52
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39140-af3a-11e2-b8dc-a7032a583115 | 
> Message received from /xxx.xxx.90.147 | xxx.xxx.213.136 | 25 | Thread-94
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39141-af3a-11e2-9480-e9d811e0fc18 | 
> Executing single-partition query on CardHash | xxx.xxx.4.16 | 421 | 
> ReadStage:5329
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39141-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /xxx.xxx.151.214 | xxx.xxx.90.147 | 310 | 
> WRITE-/xxx.xxx.213.136
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39141-af3a-11e2-b8dc-a7032a583115 | 
> Message received from /xxx.xxx.90.147 | xxx.xxx.213.136 | 106 | Thread-94
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39142-af3a-11e2-9480-e9d811e0fc18 | 
> Acquiring sstable references | xxx.xxx.4.16 | 444 | ReadStage:5329
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39142-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /10.248.106.37 | xxx.xxx.90.147 | 352 | 
> WRITE-/xxx.xxx.79.52
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39142-af3a-11e2-b8dc-a7032a583115 | 
> Executing single-partition query on CardHash | xxx.xxx.213.136 | 144 | 
> ReadStage:11
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39143-af3a-11e2-9480-e9d811e0fc18 | 
> Merging memtable contents | xxx.xxx.4.16 | 472 | ReadStage:5329
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39143-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /10.248.95.237 | xxx.xxx.90.147 | 362 | 
> WRITE-/xxx.xxx.201.218
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39143-af3a-11e2-b8dc-a7032a583115 | 
> Acquiring sstable references | xxx.xxx.213.136 | 164 | ReadStage:11
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39144-af3a-11e2-9480-e9d811e0fc18 | 
> Merging data from memtables and 0 sstables | xxx.xxx.4.16 | 510 | 
> ReadStage:5329
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39144-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /xxx.xxx.151.214 | xxx.xxx.90.147 | 376 | 
> WRITE-/xxx.xxx.213.136
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39144-af3a-11e2-b8dc-a7032a583115 | 
> Merging memtable contents | xxx.xxx.213.136 | 195 | ReadStage:11
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39145-af3a-11e2-9480-e9d811e0fc18 | 
> Read 0 live cells and 0 tombstoned | xxx.xxx.4.16 | 530 | ReadStage:5329
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39145-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /10.248.95.237 | xxx.xxx.90.147 | 401 | 
> WRITE-/xxx.xxx.201.218
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39145-af3a-11e2-b8dc-a7032a583115 | 
> Executing single-partition query on CardHash | xxx.xxx.213.136 | 202 | 
> ReadStage:41
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39146-af3a-11e2-9480-e9d811e0fc18 | 
> Enqueuing res

[jira] [Resolved] (CASSANDRA-5276) Clean up StreamHeader/StreamOutSession

2013-07-10 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita resolved CASSANDRA-5276.
---

Resolution: Invalid

Closing as invalid since streaming 2.0 was introduced.

> Clean up StreamHeader/StreamOutSession
> --
>
> Key: CASSANDRA-5276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5276
> Project: Cassandra
>  Issue Type: Task
>  Components: Core
>Reporter: Jonathan Ellis
>Assignee: Yuki Morishita
>Priority: Minor
>  Labels: streaming
> Fix For: 2.0
>
>
> StreamHeader splits "first file" ({PendingFile file}) from "remaining files" 
> ({Collection pendingFiles}).  There doesn't seem to be a 
> compelling reason for this distinction.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5520) Query tracing session info inconsistent with events info

2013-07-10 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704744#comment-13704744
 ] 

Jonathan Ellis commented on CASSANDRA-5520:
---

bq. I rarely use debug logging (especially while looking into timeouts), so I 
can't say how useful it is, but I feel like some mention of the timeout 
happening at debug level would be good, especially considering it's a somewhat 
rare event.

In that case let's just clean it up so we're not logging redundant timeouts.

> Query tracing session info inconsistent with events info
> 
>
> Key: CASSANDRA-5520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5520
> Project: Cassandra
>  Issue Type: Improvement
>Affects Versions: 1.2.4
> Environment: Linux
>Reporter: Ilya Kirnos
>Assignee: Tyler Hobbs
>Priority: Minor
> Attachments: 5520-v1.txt, 5520-v2.txt, 5520-v3.txt
>
>
> Session info for a trace is showing that a query took > 10 seconds (it timed 
> out).
> {noformat}
> cqlsh:system_traces> select session_id, duration, request from sessions where 
> session_id = c7e36a30-af3a-11e2-9ec9-772ec39805fe;
> session_id | duration | request
> 
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | 1230 | multiget_slice
> However, the event-level breakdown shows no such large duration:
> cqlsh:system_traces> select * from events where session_id = 
> c7e36a30-af3a-11e2-9ec9-772ec39805fe;
> session_id | event_id | activity | source | source_elapsed | thread
> -+---
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e36a30-af3a-11e2-9480-e9d811e0fc18 | 
> Message received from /xxx.xxx.90.147 | xxx.xxx.4.16 | 19 | Thread-57
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e36a31-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /xxx.xxx.153.16 | xxx.xxx.90.147 | 246 | 
> WRITE-/xxx.xxx.4.16
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39140-af3a-11e2-9480-e9d811e0fc18 | 
> Message received from /xxx.xxx.90.147 | xxx.xxx.4.16 | 259 | Thread-57
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39140-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /10.248.106.37 | xxx.xxx.90.147 | 253 | 
> WRITE-/xxx.xxx.79.52
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39140-af3a-11e2-b8dc-a7032a583115 | 
> Message received from /xxx.xxx.90.147 | xxx.xxx.213.136 | 25 | Thread-94
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39141-af3a-11e2-9480-e9d811e0fc18 | 
> Executing single-partition query on CardHash | xxx.xxx.4.16 | 421 | 
> ReadStage:5329
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39141-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /xxx.xxx.151.214 | xxx.xxx.90.147 | 310 | 
> WRITE-/xxx.xxx.213.136
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39141-af3a-11e2-b8dc-a7032a583115 | 
> Message received from /xxx.xxx.90.147 | xxx.xxx.213.136 | 106 | Thread-94
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39142-af3a-11e2-9480-e9d811e0fc18 | 
> Acquiring sstable references | xxx.xxx.4.16 | 444 | ReadStage:5329
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39142-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /10.248.106.37 | xxx.xxx.90.147 | 352 | 
> WRITE-/xxx.xxx.79.52
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39142-af3a-11e2-b8dc-a7032a583115 | 
> Executing single-partition query on CardHash | xxx.xxx.213.136 | 144 | 
> ReadStage:11
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39143-af3a-11e2-9480-e9d811e0fc18 | 
> Merging memtable contents | xxx.xxx.4.16 | 472 | ReadStage:5329
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39143-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /10.248.95.237 | xxx.xxx.90.147 | 362 | 
> WRITE-/xxx.xxx.201.218
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39143-af3a-11e2-b8dc-a7032a583115 | 
> Acquiring sstable references | xxx.xxx.213.136 | 164 | ReadStage:11
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39144-af3a-11e2-9480-e9d811e0fc18 | 
> Merging data from memtables and 0 sstables | xxx.xxx.4.16 | 510 | 
> ReadStage:5329
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39144-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /xxx.xxx.151.214 | xxx.xxx.90.147 | 376 | 
> WRITE-/xxx.xxx.213.136
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39144-af3a-11e2-b8dc-a7032a583115 | 
> Merging memtable contents | xxx.xxx.213.136 | 195 | ReadStage:11
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39145-af3a-11e2-9480-e9d811e0fc18 | 
> Read 0 live cells and 0 tombstoned | xxx.xxx.4.16 | 530 | ReadStage:5329
> c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39145-af3a-11e2-9ec9-772ec39805fe | 
> Sending message to /10.248.95.237 | xxx.xxx.90.147 | 401 | 
> WRITE-/xxx.xxx.201.218
> c7

[jira] [Assigned] (CASSANDRA-5732) Can not query secondary index

2013-07-10 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-5732:
-

Assignee: Alex Zarutin

> Can not query secondary index
> -
>
> Key: CASSANDRA-5732
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5732
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.2.5
> Environment: Windows 8, Jre 1.6.0_45 32-bit
>Reporter: Tony Anecito
>Assignee: Alex Zarutin
>
> Noticed, after taking a column family that already existed and assigning an 
> IntegerType column an index_type of KEYS while caching was already set to 
> 'ALL', that the prepared statement does not return rows, nor does it throw an 
> exception. Here is the sequence.
> 1. Starting state: query running with caching off for a column family, with 
> the query using the secondary index in the WHERE clause.
> 2. Set column family caching to ALL using Cassandra-CLI and update CQL. 
> Cassandra-cli Describe shows column family caching set to ALL.
> 3. Rerun the query and it works.
> 4. Restart Cassandra and run the query: no rows returned. Cassandra-cli 
> Describe shows column family caching set to ALL.
> 5. Set column family caching to NONE using Cassandra-cli and update CQL. 
> Rerun the query and no rows are returned. Cassandra-cli Describe for the 
> column family shows caching set to NONE.
> 6. Restart Cassandra. Rerun the query and it is working again. We are now back 
> to the starting state.
> Best Regards,
> -Tony

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5744) Cleanup AbstractType/TypeSerializer classes

2013-07-10 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5744:
--

Reviewer: carlyeks

Can you review [~carlyeks]?

> Cleanup AbstractType/TypeSerializer classes
> ---
>
> Key: CASSANDRA-5744
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5744
> Project: Cassandra
>  Issue Type: Improvement
>Affects Versions: 2.0 beta 1
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
>Priority: Minor
> Fix For: 2.0
>
> Attachments: 0001-Inverse-serialize-deserialize.txt, 
> 0002-Make-sure-deseriaze-don-t-throw-on-empty-BB.txt, 
> 0003-Avoid-duplicating-code.txt
>
>
> This is somewhat a followup to CASSANDRA-4495. I'm attaching 3 patches that 
> do the following:
> # It inverts the serialize and deserialize methods in TypeSerializer. Sorry I 
> didn't see that earlier, but calling serialize the method that takes a byte 
> array to produce an object feels wrong to me (and wikipedia seems to agree 
> with me that this should be the other way around: 
> http://en.wikipedia.org/wiki/Serialization :))
> # For historical reasons (which imo were somewhat of a mistake in the first 
> place, but that's another story), we accept an empty byte buffer as a valid 
> value for any type. When I say "valid", I mean that validate() never throws 
> (except for InetAddressType as it happens, but that's more of an 
> inconsistency that the patch fixes). However, for some reason most 
> deserialize methods were just throwing a random exception on an empty byte 
> buffer. So I think we should be coherent here: if validate() passes, you should 
> be able to deserialize the value alright, and the 2nd patch makes sure of that 
> (returning null when nothing else makes sense).
> # The patch removes a bunch of code duplication. Namely, AbstractType has a 
> getSerializer() method that returns the corresponding TypeSerializer, but 
> despite that, every AbstractType subclass was redefining its compose, 
> decompose and validate methods, which were just calling the corresponding 
> method in their deserializer. So the patch makes those methods concrete in 
> AbstractType and removes the code duplication all over the place. Furthermore, 
> TypeSerializer had a getString(ByteBuffer) and a toString(T value) method.  
> But since we also have a deserialize(ByteBuffer), the former getString() is 
> really not useful, as it's just toString(deserialize()). So the patch also 
> removes that method.
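
Below is a rough sketch of the shape described in points 1 and 2 above, with 
simplified names rather than the actual patch: serialize goes from object to 
bytes, deserialize goes from bytes to object, and an empty buffer deserializes 
to null so that anything accepted by validate() can also be deserialized.

{noformat}
import java.nio.ByteBuffer;

// Sketch only: direction and empty-buffer handling as described in the ticket.
interface TypeSerializerSketch<T>
{
    ByteBuffer serialize(T value);
    T deserialize(ByteBuffer bytes);
    void validate(ByteBuffer bytes);
}

final class Int32SerializerSketch implements TypeSerializerSketch<Integer>
{
    public ByteBuffer serialize(Integer value)
    {
        return value == null
             ? ByteBuffer.allocate(0)
             : ByteBuffer.allocate(4).putInt(0, value);
    }

    public Integer deserialize(ByteBuffer bytes)
    {
        if (bytes == null || bytes.remaining() == 0)
            return null; // an empty buffer is a valid "no value", not an error
        return bytes.getInt(bytes.position());
    }

    public void validate(ByteBuffer bytes)
    {
        if (bytes.remaining() != 0 && bytes.remaining() != 4)
            throw new IllegalArgumentException("Expected 0 or 4 bytes for an int32, got " + bytes.remaining());
    }
}
{noformat}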

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5520) Query tracing session info inconsistent with events info

2013-07-10 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704673#comment-13704673
 ] 

Tyler Hobbs commented on CASSANDRA-5520:


bq. I was going to commit this but then I realized we still have a bunch of 
logger.debug for the timeouts in CassandraServer.  Inclined to say we should 
just rip out logger.debug on this path after all, it's too noisy to use in a 
live system and tracing is just better all around. WDYT?

I rarely use debug logging (especially while looking into timeouts), so I can't 
say how useful it is, but I feel like some mention of the timeout happening at 
debug level would be good, especially considering it's a somewhat rare event.

bq. Meant to comment on this earlier – it does seem useful to denormalize this 
into sessions, maybe even index it so you can easily query it for failures when 
you have the "trace N% of my requests" turned on. (Should we enable that by 
default in 2.0?) ... However if we are going to standardize tracing events soon 
(CASSANDRA-5672) then we could index the event type directly which would be 
almost as user-friendly, so it might not warrant its own sessions column.

Adding a sessions column seems a little more user-friendly to me, so perhaps 
having both would be good.

Regarding enabling tracing by default, I'm -1 on that.  Given that it incurs 
extra overhead, I say we follow the principle of least surprise and only enable 
it when requested.

bq. Either way I think it's worth pulling into a separate ticket.

Thanks, I agree.

> Query tracing session info inconsistent with events info
> 
>
> Key: CASSANDRA-5520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5520
> Project: Cassandra
>  Issue Type: Improvement
>Affects Versions: 1.2.4
> Environment: Linux
>Reporter: Ilya Kirnos
>Assignee: Tyler Hobbs
>Priority: Minor
> Attachments: 5520-v1.txt, 5520-v2.txt, 5520-v3.txt
>
>
> Session info for a trace is showing that a query took > 10 seconds (it timed 
> out).
> {noformat}
> cqlsh:system_traces> select session_id, duration, request from sessions where 
> session_id = c7e36a30-af3a-11e2-9ec9-772ec39805fe;
> 
>  session_id                           | duration | request
> --------------------------------------+----------+----------------
>  c7e36a30-af3a-11e2-9ec9-772ec39805fe |     1230 | multiget_slice
> {noformat}
> However, the event-level breakdown shows no such large duration:
> {noformat}
> cqlsh:system_traces> select * from events where session_id = 
> c7e36a30-af3a-11e2-9ec9-772ec39805fe;
> 
>  session_id | event_id | activity | source | source_elapsed | thread
> ------------+----------+----------+--------+----------------+--------
>  c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e36a30-af3a-11e2-9480-e9d811e0fc18 | Message received from /xxx.xxx.90.147 | xxx.xxx.4.16 | 19 | Thread-57
>  c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e36a31-af3a-11e2-9ec9-772ec39805fe | Sending message to /xxx.xxx.153.16 | xxx.xxx.90.147 | 246 | WRITE-/xxx.xxx.4.16
>  c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39140-af3a-11e2-9480-e9d811e0fc18 | Message received from /xxx.xxx.90.147 | xxx.xxx.4.16 | 259 | Thread-57
>  c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39140-af3a-11e2-9ec9-772ec39805fe | Sending message to /10.248.106.37 | xxx.xxx.90.147 | 253 | WRITE-/xxx.xxx.79.52
>  c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39140-af3a-11e2-b8dc-a7032a583115 | Message received from /xxx.xxx.90.147 | xxx.xxx.213.136 | 25 | Thread-94
>  c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39141-af3a-11e2-9480-e9d811e0fc18 | Executing single-partition query on CardHash | xxx.xxx.4.16 | 421 | ReadStage:5329
>  c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39141-af3a-11e2-9ec9-772ec39805fe | Sending message to /xxx.xxx.151.214 | xxx.xxx.90.147 | 310 | WRITE-/xxx.xxx.213.136
>  c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39141-af3a-11e2-b8dc-a7032a583115 | Message received from /xxx.xxx.90.147 | xxx.xxx.213.136 | 106 | Thread-94
>  c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39142-af3a-11e2-9480-e9d811e0fc18 | Acquiring sstable references | xxx.xxx.4.16 | 444 | ReadStage:5329
>  c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39142-af3a-11e2-9ec9-772ec39805fe | Sending message to /10.248.106.37 | xxx.xxx.90.147 | 352 | WRITE-/xxx.xxx.79.52
>  c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39142-af3a-11e2-b8dc-a7032a583115 | Executing single-partition query on CardHash | xxx.xxx.213.136 | 144 | ReadStage:11
>  c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39143-af3a-11e2-9480-e9d811e0fc18 | Merging memtable contents | xxx.xxx.4.16 | 472 | ReadStage:5329
>  c7e36a30-af3a-11e2-9ec9-772ec39805fe | c7e39143-af3a-11e2-9ec9-772ec39805fe | S

[jira] [Commented] (CASSANDRA-2698) Instrument repair to be able to assess it's efficiency (precision)

2013-07-10 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704655#comment-13704655
 ] 

Yuki Morishita commented on CASSANDRA-2698:
---

Hi Benedict,

Sorry for the late reply.
I think calculating the number of rows, and their size, per range is fine.
One thing to point out is that we don't need to serialize those and return them 
to the initiator; just logging them locally like you do is enough for now.

p.s. ActiveRepairService has been broken up into the o.a.c.repair package, so be 
careful when rebasing.
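
For illustration, a hedged sketch of that local-only instrumentation: count 
rows and bytes per merkle tree range while validating, then just log the 
totals instead of shipping them back to the repair initiator. The class and 
method names below are illustrative, not the actual repair code.

{code}
import java.util.LinkedHashMap;
import java.util.Map;

class ValidationStatsSketch
{
    private static final class RangeStats
    {
        long rows;
        long bytes;
    }

    private final Map<String, RangeStats> perRange = new LinkedHashMap<String, RangeStats>();

    // Called once per row while the merkle tree for a range is being built.
    void addRow(String rangeId, long rowSizeInBytes)
    {
        RangeStats stats = perRange.get(rangeId);
        if (stats == null)
        {
            stats = new RangeStats();
            perRange.put(rangeId, stats);
        }
        stats.rows++;
        stats.bytes += rowSizeInBytes;
    }

    // Called when validation finishes; log locally only, nothing is serialized.
    void logLocally()
    {
        for (Map.Entry<String, RangeStats> e : perRange.entrySet())
            System.out.println("range " + e.getKey() + ": " + e.getValue().rows
                               + " rows, " + e.getValue().bytes + " bytes");
    }
}
{code}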

> Instrument repair to be able to assess it's efficiency (precision)
> --
>
> Key: CASSANDRA-2698
> URL: https://issues.apache.org/jira/browse/CASSANDRA-2698
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Benedict
>Priority: Minor
>  Labels: lhf
> Attachments: nodetool_repair_and_cfhistogram.tar.gz, 
> patch_2698_v1.txt, patch.diff, patch-rebased.diff, patch.taketwo.alpha.diff
>
>
> Some reports indicate that repair sometimes transfers huge amounts of data. One 
> hypothesis is that the merkle tree precision may deteriorate too much at some 
> data size. To check this hypothesis, it would be reasonable to gather 
> statistics during merkle tree building on how many rows each merkle tree 
> range accounts for (and the size this represents). It is probably an 
> interesting statistic to have anyway.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5732) Can not query secondary index

2013-07-10 Thread Tony Anecito (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704647#comment-13704647
 ] 

Tony Anecito commented on CASSANDRA-5732:
-

Hi Janne,
 
There are similarities. Mine, though, is a solid failure, and I have narrowed it 
down to what I described, so the Cassandra team should be able to solve the issue.
 
Best Regards,
-Tony

From: Janne Jalkanen (JIRA) 
To: adanec...@yahoo.com 
Sent: Wednesday, July 10, 2013 9:05 AM
Subject: [jira] [Commented] (CASSANDRA-5732) Can not query secondary index



    [ 
https://issues.apache.org/jira/browse/CASSANDRA-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704633#comment-13704633
 ] 

Janne Jalkanen commented on CASSANDRA-5732:
---

Is this the same as CASSANDRA-4785?
                

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


> Can not query secondary index
> -
>
> Key: CASSANDRA-5732
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5732
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.2.5
> Environment: Windows 8, Jre 1.6.0_45 32-bit
>Reporter: Tony Anecito
>
> Noticed that after taking a column family that already existed and giving an 
> IntegerType column a secondary index (index_type: KEYS), with caching already 
> set to 'ALL', the prepared statement does not return rows, nor does it throw 
> an exception. Here is the sequence:
> 1. Starting state: the query runs with caching off for the column family, with 
> the query using the secondary index in the WHERE clause.
> 2. Set column family caching to ALL using Cassandra-CLI and update CQL. 
> Cassandra-cli Describe shows column family caching set to ALL.
> 3. Rerun the query and it works.
> 4. Restart Cassandra and run the query; no rows are returned. Cassandra-cli 
> Describe shows column family caching set to ALL.
> 5. Set column family caching to NONE using Cassandra-cli and update CQL. 
> Rerun the query; no rows are returned. Cassandra-cli Describe for the column 
> family shows caching set to NONE.
> 6. Restart Cassandra. Rerun the query and it works again. We are now back to 
> the starting state.
> Best Regards,
> -Tony
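
For illustration, a hedged Java repro sketch of steps 1, 3 and 4 above. It 
assumes the cassandra-jdbc driver; the driver class name, JDBC URL, keyspace, 
table and column names are all illustrative, not taken from the report.

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SecondaryIndexRepro
{
    public static void main(String[] args) throws Exception
    {
        Class.forName("org.apache.cassandra.cql.jdbc.CassandraDriver");
        Connection conn = DriverManager.getConnection("jdbc:cassandra://localhost:9160/demo_ks");

        // Prepared statement filtering on the KEYS-indexed IntegerType column.
        PreparedStatement stmt = conn.prepareStatement(
                "SELECT * FROM demo_cf WHERE indexed_col = ?");
        stmt.setInt(1, 42);

        ResultSet rs = stmt.executeQuery();
        int rows = 0;
        while (rs.next())
            rows++;

        // Per the sequence above, after toggling caching to ALL and restarting
        // the node, this count silently drops to zero with no exception thrown.
        System.out.println("rows returned: " + rows);

        conn.close();
    }
}
{code}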

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-5732) Can not query secondary index

2013-07-10 Thread Janne Jalkanen (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704633#comment-13704633
 ] 

Janne Jalkanen edited comment on CASSANDRA-5732 at 7/10/13 3:05 PM:


Is this the same as CASSANDRA-4785 and CASSANDRA-4973?

  was (Author: jalkanen):
Is this the same as CASSANDRA-4785?
  
> Can not query secondary index
> -
>
> Key: CASSANDRA-5732
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5732
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.2.5
> Environment: Windows 8, Jre 1.6.0_45 32-bit
>Reporter: Tony Anecito
>
> Noticed that after taking a column family that already existed and giving an 
> IntegerType column a secondary index (index_type: KEYS), with caching already 
> set to 'ALL', the prepared statement does not return rows, nor does it throw 
> an exception. Here is the sequence:
> 1. Starting state: the query runs with caching off for the column family, with 
> the query using the secondary index in the WHERE clause.
> 2. Set column family caching to ALL using Cassandra-CLI and update CQL. 
> Cassandra-cli Describe shows column family caching set to ALL.
> 3. Rerun the query and it works.
> 4. Restart Cassandra and run the query; no rows are returned. Cassandra-cli 
> Describe shows column family caching set to ALL.
> 5. Set column family caching to NONE using Cassandra-cli and update CQL. 
> Rerun the query; no rows are returned. Cassandra-cli Describe for the column 
> family shows caching set to NONE.
> 6. Restart Cassandra. Rerun the query and it works again. We are now back to 
> the starting state.
> Best Regards,
> -Tony

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5732) Can not query secondary index

2013-07-10 Thread Janne Jalkanen (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704633#comment-13704633
 ] 

Janne Jalkanen commented on CASSANDRA-5732:
---

Is this the same as CASSANDRA-4785?

> Can not query secondary index
> -
>
> Key: CASSANDRA-5732
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5732
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.2.5
> Environment: Windows 8, Jre 1.6.0_45 32-bit
>Reporter: Tony Anecito
>
> Noticed that after taking a column family that already existed and giving an 
> IntegerType column a secondary index (index_type: KEYS), with caching already 
> set to 'ALL', the prepared statement does not return rows, nor does it throw 
> an exception. Here is the sequence:
> 1. Starting state: the query runs with caching off for the column family, with 
> the query using the secondary index in the WHERE clause.
> 2. Set column family caching to ALL using Cassandra-CLI and update CQL. 
> Cassandra-cli Describe shows column family caching set to ALL.
> 3. Rerun the query and it works.
> 4. Restart Cassandra and run the query; no rows are returned. Cassandra-cli 
> Describe shows column family caching set to ALL.
> 5. Set column family caching to NONE using Cassandra-cli and update CQL. 
> Rerun the query; no rows are returned. Cassandra-cli Describe for the column 
> family shows caching set to NONE.
> 6. Restart Cassandra. Rerun the query and it works again. We are now back to 
> the starting state.
> Best Regards,
> -Tony

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5744) Cleanup AbstractType/TypeSerializer classes

2013-07-10 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-5744:


Attachment: 0003-Avoid-duplicating-code.txt
0002-Make-sure-deseriaze-don-t-throw-on-empty-BB.txt
0001-Inverse-serialize-deserialize.txt

> Cleanup AbstractType/TypeSerializer classes
> ---
>
> Key: CASSANDRA-5744
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5744
> Project: Cassandra
>  Issue Type: Improvement
>Affects Versions: 2.0 beta 1
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
>Priority: Minor
> Fix For: 2.0
>
> Attachments: 0001-Inverse-serialize-deserialize.txt, 
> 0002-Make-sure-deseriaze-don-t-throw-on-empty-BB.txt, 
> 0003-Avoid-duplicating-code.txt
>
>
> This is somewhat of a followup to CASSANDRA-4495. I'm attaching 3 patches that 
> do the following:
> # It inverts the serialize and deserialize methods in TypeSerializer. Sorry I 
> didn't see that earlier, but calling the method that takes a byte buffer and 
> produces an object "serialize" feels wrong to me (and Wikipedia seems to agree 
> with me that it should be the other way around: 
> http://en.wikipedia.org/wiki/Serialization :))
> # For historical reasons (which imo were somewhat of a mistake in the first 
> place, but that's another story), we accept an empty byte buffer as a valid 
> value for any type. When I say "valid", I mean that validate() never throws 
> (except for InetAddressType as it happens, but that's more of an 
> inconsistency that the patch fixes). However, for some reason most 
> deserialize methods were just throwing a random exception on an empty byte 
> buffer. So I think we should be coherent here: if validate() passes, you should 
> be able to deserialize the value, and the 2nd patch makes sure of that 
> (returning null when nothing else made sense).
> # The patch removes a bunch of code duplication. Namely, AbstractType has a 
> getSerializer() method that returns the corresponding TypeSerializer, but 
> despite that, every AbstractType subclass was redefining its compose, 
> decompose and validate methods to just call the corresponding method on 
> its serializer. So the patch makes those methods concrete in AbstractType 
> and removes the code duplication all over the place. Furthermore, 
> TypeSerializer had both a getString(ByteBuffer) and a toString(T value) method. 
> But since we also have deserialize(ByteBuffer), the former getString() is 
> really not useful, as it's just toString(deserialize()). So the patch also 
> removes that method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-5743) Create matching start and stop scripts

2013-07-10 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704627#comment-13704627
 ] 

Jonathan Ellis edited comment on CASSANDRA-5743 at 7/10/13 2:53 PM:


Start and stop scripts are distribution-specific.  We include an init script in 
the debian/ directory, for instance.

  was (Author: jbellis):
Start and stop scripts are distribution-specific.  We include an init in 
the debian/ directory, for instance.
  
> Create matching start and stop scripts
> --
>
> Key: CASSANDRA-5743
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5743
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
>Affects Versions: 1.2.1
>Reporter: Geert Schuring
>
> In my opinion there should be a matching set of start and stop scripts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5744) Cleanup AbstractType/TypeSerializer classes

2013-07-10 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-5744:
---

 Summary: Cleanup AbstractType/TypeSerializer classes
 Key: CASSANDRA-5744
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5744
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 2.0 beta 1
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0
 Attachments: 0001-Inverse-serialize-deserialize.txt, 
0002-Make-sure-deseriaze-don-t-throw-on-empty-BB.txt, 
0003-Avoid-duplicating-code.txt

This is somewhat of a followup to CASSANDRA-4495. I'm attaching 3 patches that do 
the following:
# It inverts the serialize and deserialize methods in TypeSerializer. Sorry I 
didn't see that earlier, but calling the method that takes a byte buffer and 
produces an object "serialize" feels wrong to me (and Wikipedia seems to agree with 
me that it should be the other way around: 
http://en.wikipedia.org/wiki/Serialization :))
# For historical reasons (which imo were somewhat of a mistake in the first 
place, but that's another story), we accept an empty byte buffer as a valid 
value for any type. When I say "valid", I mean that validate() never throws 
(except for InetAddressType as it happens, but that's more of an inconsistency 
that the patch fixes). However, for some reason most deserialize methods were 
just throwing a random exception on an empty byte buffer. So I think we should 
be coherent here: if validate() passes, you should be able to deserialize the 
value, and the 2nd patch makes sure of that (returning null when nothing else 
made sense).
# The patch removes a bunch of code duplication. Namely, AbstractType has a 
getSerializer() method that returns the corresponding TypeSerializer, but 
despite that, every AbstractType subclass was redefining its compose, decompose 
and validate methods to just call the corresponding method on its serializer. 
So the patch makes those methods concrete in AbstractType and removes the code 
duplication all over the place. Furthermore, TypeSerializer had both a 
getString(ByteBuffer) and a toString(T value) method. But since we also have 
deserialize(ByteBuffer), the former getString() is really not useful, as it's 
just toString(deserialize()). So the patch also removes that method.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-5743) Create matching start and stop scripts

2013-07-10 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-5743.
---

Resolution: Invalid

Start and stop scripts are distribution-specific.  We include an init script in 
the debian/ directory, for instance.

> Create matching start and stop scripts
> --
>
> Key: CASSANDRA-5743
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5743
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
>Affects Versions: 1.2.1
>Reporter: Geert Schuring
>
> In my opinion there should be a matching set of start and stop scripts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5719) Expire entries out of ThriftSessionManager

2013-07-10 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704594#comment-13704594
 ] 

Jason Brown commented on CASSANDRA-5719:


Pushed the ninja-fixed THsHaDisruptorServer & thrift-server-0.2.jar to trunk. 
Will come through in beta2.

> Expire entries out of ThriftSessionManager
> --
>
> Key: CASSANDRA-5719
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5719
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.2.0
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Minor
>  Labels: cache, thrift
> Fix For: 1.2.7, 2.0 beta 1
>
> Attachments: 5719-v1.patch, 5719-v2-cass1_2.patch
>
>
> TSM maintains a map of SocketAddress (IP address and the ephemeral port) to 
> ClientState. If the connection goes away, for whatever reason, entries are 
> not removed from the map. In most cases this is a tiny leakage. However, at 
> Netflix, we auto-scale services up and down every day, sometimes with client 
> instance lifetimes of around 36 hours. These clusters can add hundreds of 
> servers at peak time, and indiscriminately terminate them at the trough. 
> Thus, those IP addresses are never coming back (for us). The net effect for 
> Cassandra is that we'll leave thousands of dead entries in the 
> TSM.activeSocketSessions map. When I looked at an instance in a well-used 
> cluster yesterday, there were almost 400,000 entries in the map.
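
For illustration, a minimal sketch of the cleanup pattern applied by the commit 
further down in this digest: keep per-connection state in a concurrent map and 
remove the entry as soon as the transport closes, so addresses of terminated 
clients don't accumulate. SessionRegistry and its methods are illustrative 
stand-ins, not the actual ThriftSessionManager API.

{code}
import java.net.SocketAddress;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class SessionRegistry<S>
{
    private final ConcurrentMap<SocketAddress, S> sessions = new ConcurrentHashMap<SocketAddress, S>();

    // Called when a client connects (or first issues a request).
    void register(SocketAddress remote, S state)
    {
        sessions.put(remote, state);
    }

    // Called from the server's connection-closed callback, e.g. the
    // beforeClose() hook added to THsHaDisruptorServer in the commit below.
    void connectionComplete(SocketAddress remote)
    {
        sessions.remove(remote);
    }

    int activeSessions()
    {
        return sessions.size();
    }
}
{code}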

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


git commit: Expire entries out of ThriftSessionManager (round two, for THsHaDisruptorServer on trunk) patch by jasobrown; reviewed by jbellis for CASSANDRA-5719

2013-07-10 Thread jasobrown
Updated Branches:
  refs/heads/trunk 626b0783c -> 47ac42fdb


Expire entries out of ThriftSessionManager (round two, for THsHaDisruptorServer 
on trunk)
patch by jasobrown; reviewed by jbellis for CASSANDRA-5719


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/47ac42fd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/47ac42fd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/47ac42fd

Branch: refs/heads/trunk
Commit: 47ac42fdbbd0e2944f44beeb52a8881af98cc341
Parents: 626b078
Author: Jason Brown 
Authored: Wed Jul 3 09:56:28 2013 -0700
Committer: Jason Brown 
Committed: Wed Jul 10 07:00:15 2013 -0700

--
 build.xml |   2 +-
 lib/thrift-server-0.1.jar | Bin 122900 -> 0 bytes
 lib/thrift-server-0.2.jar | Bin 0 -> 123020 bytes
 .../cassandra/thrift/THsHaDisruptorServer.java|   7 +++
 4 files changed, 8 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/47ac42fd/build.xml
--
diff --git a/build.xml b/build.xml
index b6fb489..20d9b51 100644
--- a/build.xml
+++ b/build.xml
@@ -353,7 +353,7 @@
   
   
   
-  
+  
   
   
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/47ac42fd/lib/thrift-server-0.1.jar
--
diff --git a/lib/thrift-server-0.1.jar b/lib/thrift-server-0.1.jar
deleted file mode 100644
index 2c595a0..000
Binary files a/lib/thrift-server-0.1.jar and /dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/47ac42fd/lib/thrift-server-0.2.jar
--
diff --git a/lib/thrift-server-0.2.jar b/lib/thrift-server-0.2.jar
new file mode 100644
index 000..3fdedc6
Binary files /dev/null and b/lib/thrift-server-0.2.jar differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/47ac42fd/src/java/org/apache/cassandra/thrift/THsHaDisruptorServer.java
--
diff --git a/src/java/org/apache/cassandra/thrift/THsHaDisruptorServer.java 
b/src/java/org/apache/cassandra/thrift/THsHaDisruptorServer.java
index a757315..c363743 100644
--- a/src/java/org/apache/cassandra/thrift/THsHaDisruptorServer.java
+++ b/src/java/org/apache/cassandra/thrift/THsHaDisruptorServer.java
@@ -22,6 +22,7 @@ import java.net.InetSocketAddress;
 
 import com.thinkaurelius.thrift.Message;
 import com.thinkaurelius.thrift.TDisruptorServer;
+import org.apache.thrift.transport.TNonblockingTransport;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -54,6 +55,12 @@ public class THsHaDisruptorServer extends TDisruptorServer
 
ThriftSessionManager.instance.setCurrentSocket(socket.getSocketChannel().socket().getRemoteSocketAddress());
 }
 
+public void beforeClose(Message buffer)
+{
+TNonblockingSocket socket = (TNonblockingSocket) buffer.transport;
+
ThriftSessionManager.instance.connectionComplete(socket.getSocketChannel().socket().getRemoteSocketAddress());
+}
+
 public static class Factory implements TServerFactory
 {
 public TServer buildTServer(Args args)



[jira] [Comment Edited] (CASSANDRA-5171) Save EC2Snitch topology information in system table

2013-07-10 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704538#comment-13704538
 ] 

Jason Brown edited comment on CASSANDRA-5171 at 7/10/13 1:22 PM:
-

I thought the patch was reasonable enough for 1.2. If you give me a few days, I 
can test it out in our env, and let you know if it borks everything or not.

EDIT: Yeah, will definitely want to test to make sure it plays well with 
CASSANDRA-5669 (i.e. that the two don't collide and stop nodes from connecting 
at all).

  was (Author: jasobrown):
I thought the patch was reasonable enough for 1.2. If you give me a few 
days, I can test it out in our env, and let you know if it borks everything or 
not.
  
> Save EC2Snitch topology information in system table
> ---
>
> Key: CASSANDRA-5171
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5171
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 0.7.1
> Environment: EC2
>Reporter: Vijay
>Assignee: Vijay
>Priority: Critical
> Fix For: 2.0
>
> Attachments: 0001-CASSANDRA-5171.patch, 0001-CASSANDRA-5171-v2.patch
>
>
> EC2Snitch currently waits for gossip information to learn the cluster 
> topology every time we restart. It would be nice to use the already 
> available system table info, similar to GPFS.
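
For illustration, a hedged sketch of the idea: remember each peer's DC/rack 
locally when gossip (or the EC2 metadata lookup) provides it, and let the 
snitch consult that cache on restart instead of waiting for gossip to 
converge. TopologyCache and its methods are hypothetical, not Cassandra's API; 
the real patch persists the information in a system table.

{code}
import java.net.InetAddress;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class TopologyCache
{
    static final class DcRack
    {
        final String dc;
        final String rack;
        DcRack(String dc, String rack) { this.dc = dc; this.rack = rack; }
    }

    private final Map<InetAddress, DcRack> saved = new ConcurrentHashMap<InetAddress, DcRack>();

    // Called whenever gossip tells us a peer's datacenter and rack.
    void record(InetAddress peer, String dc, String rack)
    {
        saved.put(peer, new DcRack(dc, rack));
    }

    // Called by the snitch at startup, before gossip has converged;
    // may return null for peers we have never seen before.
    DcRack lookup(InetAddress peer)
    {
        return saved.get(peer);
    }
}
{code}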

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5171) Save EC2Snitch topology information in system table

2013-07-10 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704538#comment-13704538
 ] 

Jason Brown commented on CASSANDRA-5171:


I thought the patch was reasonable enough for 1.2. If you give me a few days, I 
can test it out in our env, and let you know if it borks everything or not.

> Save EC2Snitch topology information in system table
> ---
>
> Key: CASSANDRA-5171
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5171
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 0.7.1
> Environment: EC2
>Reporter: Vijay
>Assignee: Vijay
>Priority: Critical
> Fix For: 2.0
>
> Attachments: 0001-CASSANDRA-5171.patch, 0001-CASSANDRA-5171-v2.patch
>
>
> EC2Snitch currently waits for gossip information to learn the cluster 
> topology every time we restart. It would be nice to use the already 
> available system table info, similar to GPFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5743) Create matching start and stop scripts

2013-07-10 Thread Geert Schuring (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704344#comment-13704344
 ] 

Geert Schuring commented on CASSANDRA-5743:
---

This is what my start and stop scripts look like on MacOS:

Start Script:
{quote}
export 
JAVA_HOME=/System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home
./cassandra -p lock.pid
{quote}

Stop Script:
{quote}
kill `cat lock.pid` 
{quote}

Simple and effective.

> Create matching start and stop scripts
> --
>
> Key: CASSANDRA-5743
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5743
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
>Affects Versions: 1.2.1
>Reporter: Geert Schuring
>
> In my opinion there should be a matching set of start and stop scripts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5743) Create matching start and stop scripts

2013-07-10 Thread Geert Schuring (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geert Schuring updated CASSANDRA-5743:
--

  Component/s: Config
  Description: In my opinion there should be a matching set of start 
and stop scripts.
Affects Version/s: 1.2.1
  Summary: Create matching start and stop scripts  (was: X)

> Create matching start and stop scripts
> --
>
> Key: CASSANDRA-5743
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5743
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
>Affects Versions: 1.2.1
>Reporter: Geert Schuring
>
> In my opinion there should be a matching set of start and stop scripts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5743) X

2013-07-10 Thread Geert Schuring (JIRA)
Geert Schuring created CASSANDRA-5743:
-

 Summary: X
 Key: CASSANDRA-5743
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5743
 Project: Cassandra
  Issue Type: Improvement
Reporter: Geert Schuring




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5742) Add command "list snapshots" to nodetool

2013-07-10 Thread Geert Schuring (JIRA)
Geert Schuring created CASSANDRA-5742:
-

 Summary: Add command "list snapshots" to nodetool
 Key: CASSANDRA-5742
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5742
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Affects Versions: 1.2.1
Reporter: Geert Schuring
Priority: Minor


It would be nice if nodetool could tell me which snapshots are present on the 
system, instead of my having to browse the filesystem to fetch the snapshot 
names.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5234) Table created through CQL3 are not accessble to Pig 0.10

2013-07-10 Thread Shamim Ahmed (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704306#comment-13704306
 ] 

Shamim Ahmed commented on CASSANDRA-5234:
-

Added the patch as a temporary fix.

> Table created through CQL3 are not accessble to Pig 0.10
> 
>
> Key: CASSANDRA-5234
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5234
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
>Affects Versions: 1.2.1
> Environment: Red hat linux 5
>Reporter: Shamim Ahmed
>Assignee: Alex Liu
> Fix For: 1.2.7
>
> Attachments: 5234-1-1.2-patch.txt, 5234-1.2-patch.txt, 
> 5234-2-1.2branch.txt, 5234-3-1.2branch.txt, 5234-3-trunk.txt, 5234.tx, 
> fix_where_clause.patch
>
>
> Hi,
>   I have faced a bug when creating a table through CQL3 and trying to load 
> data through Pig 0.10, as follows:
> java.lang.RuntimeException: Column family 'abc' not found in keyspace 'XYZ'
>   at 
> org.apache.cassandra.hadoop.pig.CassandraStorage.initSchema(CassandraStorage.java:1112)
>   at 
> org.apache.cassandra.hadoop.pig.CassandraStorage.setLocation(CassandraStorage.java:615).
> This affects everything from simple tables to tables with compound keys.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5234) Table created through CQL3 are not accessble to Pig 0.10

2013-07-10 Thread Shamim Ahmed (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shamim Ahmed updated CASSANDRA-5234:


Attachment: fix_where_clause.patch

> Table created through CQL3 are not accessble to Pig 0.10
> 
>
> Key: CASSANDRA-5234
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5234
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
>Affects Versions: 1.2.1
> Environment: Red hat linux 5
>Reporter: Shamim Ahmed
>Assignee: Alex Liu
> Fix For: 1.2.7
>
> Attachments: 5234-1-1.2-patch.txt, 5234-1.2-patch.txt, 
> 5234-2-1.2branch.txt, 5234-3-1.2branch.txt, 5234-3-trunk.txt, 5234.tx, 
> fix_where_clause.patch
>
>
> Hi,
>   I have faced a bug when creating a table through CQL3 and trying to load 
> data through Pig 0.10, as follows:
> java.lang.RuntimeException: Column family 'abc' not found in keyspace 'XYZ'
>   at 
> org.apache.cassandra.hadoop.pig.CassandraStorage.initSchema(CassandraStorage.java:1112)
>   at 
> org.apache.cassandra.hadoop.pig.CassandraStorage.setLocation(CassandraStorage.java:615).
> This affects everything from simple tables to tables with compound keys.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira