[jira] [Commented] (CASSANDRA-5051) Allow automatic cleanup after gc_grace

2013-05-21 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13662952#comment-13662952
 ] 

Brandon Williams commented on CASSANDRA-5051:
-

bq. I think we might need to add a "tell the recipient about the new ranges 
he's going to have" step to streaming

This sounds like a good idea to me, since gossip can only guarantee eventual 
consistency, and there might be a partition you're not aware of at the time, 
causing data loss.

 Allow automatic cleanup after gc_grace
 --

 Key: CASSANDRA-5051
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5051
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Brandon Williams
Assignee: Vijay
  Labels: vnodes
 Fix For: 2.0

 Attachments: 0001-5051-v4.patch, 0001-5051-v6.patch, 
 0001-5051-with-test-fixes.patch, 0001-CASSANDRA-5051.patch, 
 0002-5051-remove-upgradesstable.patch, 
 0002-5051-remove-upgradesstable-v4.patch, 0004-5051-additional-test-v4.patch, 
 5051-v2.txt


 When using vnodes, after adding a new node you have to run cleanup on all the 
 machines, because you don't know which are affected and chances are it was 
 most if not all of them.  As an alternative to this intensive process, we 
 could allow cleanup during compaction if the data is older than gc_grace (or 
 perhaps some other time period since people tend to use gc_grace hacks to get 
 rid of tombstones.)
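
The idea above can be sketched as a single predicate (a hypothetical illustration, not the actual patch; the method name and parameters are invented): a partition that no longer falls in this node's ranges and whose data is older than gc_grace can simply be dropped during compaction, folding cleanup into work the node does anyway.

```java
// Hypothetical sketch of cleanup-during-compaction: drop data that is
// both unowned by the node's local ranges and older than gc_grace.
public class CompactionCleanupSketch
{
    static boolean shouldDrop(long rowAgeMillis, boolean ownedByLocalRanges, long gcGraceMillis)
    {
        // Only unowned data past gc_grace is safe to discard here.
        return !ownedByLocalRanges && rowAgeMillis > gcGraceMillis;
    }

    public static void main(String[] args)
    {
        long gcGrace = 864_000_000L; // default gc_grace of 10 days, in milliseconds
        System.out.println(shouldDrop(gcGrace + 1, false, gcGrace)); // true: old and unowned
        System.out.println(shouldDrop(1000L, false, gcGrace));       // false: still within gc_grace
    }
}
```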

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5583) CombineHiveInputFormat queries hang when table is empty and aggregation function is used

2013-05-21 Thread Edward Capriolo (JIRA)
Edward Capriolo created CASSANDRA-5583:
--

 Summary: CombineHiveInputFormat queries hang when table is empty 
and aggregation function is used
 Key: CASSANDRA-5583
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5583
 Project: Cassandra
  Issue Type: Bug
Reporter: Edward Capriolo
Priority: Blocker


Running hadoop 0.20.2, Hive 0.10. The new default is the combined input 
format. When you aggregate an empty table, or a non-empty table with an empty 
partition, the query produces 0 maps and 1 reduce and hangs forever. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-5583) CombineHiveInputFormat queries hang when table is empty and aggregation function is used

2013-05-21 Thread Edward Capriolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Capriolo resolved CASSANDRA-5583.


Resolution: Invalid

Wrong project. Sorry.

 CombineHiveInputFormat queries hang when table is empty and aggregation 
 function is used
 

 Key: CASSANDRA-5583
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5583
 Project: Cassandra
  Issue Type: Bug
Reporter: Edward Capriolo
Priority: Blocker

 Running hadoop 0.20.2, Hive 0.10. The new default is the combined input 
 format. When you aggregate an empty table, or a non-empty table with an empty 
 partition, the query produces 0 maps and 1 reduce and hangs forever. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5489) Fix 2.0 key and column aliases serialization and cqlsh DESC SCHEMA

2013-05-21 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13663062#comment-13663062
 ] 

Aleksey Yeschenko commented on CASSANDRA-5489:
--

(Also, we should remember to re-enable the rename dtest after this is fixed.)

 Fix 2.0 key and column aliases serialization and cqlsh DESC SCHEMA
 --

 Key: CASSANDRA-5489
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5489
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Tools
Affects Versions: 2.0
Reporter: Aleksey Yeschenko
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0

 Attachments: 5489-1.2.txt, 5489.txt


 CASSANDRA-5125 made a slight change to how key_aliases and column_aliases are 
 serialized in schema. Prior to that we never kept nulls in the json 
 pseudo-lists. This does break cqlsh and probably breaks 1.2 nodes receiving 
 such migrations as well. The patch reverts this behavior and also slightly 
 modifies cqlsh itself to ignore non-regular columns from 
 system.schema_columns table.
 This patch breaks nothing, since 2.0 already handles 1.2 non-null padded 
 alias lists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5273) Hanging system after OutOfMemory. Server cannot die due to uncaughtException handling

2013-05-21 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13663065#comment-13663065
 ] 

Marcus Eriksson commented on CASSANDRA-5273:


[~jbellis] you think the timeouts would be enough?

 Hanging system after OutOfMemory. Server cannot die due to uncaughtException 
 handling
 -

 Key: CASSANDRA-5273
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5273
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
 Environment: linux, 64 bit
Reporter: Ignace Desimpel
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.0

 Attachments: 
 0001-CASSANDRA-5273-add-timeouts-to-the-blocking-commitlo.patch, 
 0001-CASSANDRA-5273-add-timeouts-to-the-blocking-commitlo.patch, CassHangs.txt


 On an out-of-memory exception, there is an uncaughtException handler that 
 calls System.exit(). However, multiple threads call this handler, causing a 
 deadlock, and the server cannot stop. See 
 http://www.mail-archive.com/user@cassandra.apache.org/msg27898.html, and see 
 the stack trace in the attachment.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5545) Add SASL authentication to CQL native protocol

2013-05-21 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13663073#comment-13663073
 ] 

Aleksey Yeschenko commented on CASSANDRA-5545:
--

Can we do this without altering IAuthenticator? Why not make SASLAuthenticator 
extend IAuthenticator (and make the Legacy and Password authenticators extend 
SASLAuthenticator instead of IAuthenticator), and replace that null check with 
instanceof?

I don't like this trend of modifying IAuthenticator with every major release. 
Last time it was necessary, but this time we can avoid it.
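
The shape being suggested can be sketched roughly as follows (a minimal illustration only; the sub-interface name, method bodies, and the `supportsSasl` helper are invented for this sketch and are not the actual patch): SASL support becomes a sub-interface of IAuthenticator, and call sites branch on capability with instanceof instead of a null check.

```java
// Base contract stays untouched.
interface IAuthenticator
{
    boolean requireAuthentication();
}

// SASL capability expressed as a sub-interface rather than a contract change.
interface SaslCapableAuthenticator extends IAuthenticator
{
    byte[] evaluateResponse(byte[] clientResponse);
}

// Stand-in for an authenticator that opts in to SASL.
class PasswordAuthenticator implements SaslCapableAuthenticator
{
    public boolean requireAuthentication() { return true; }
    public byte[] evaluateResponse(byte[] clientResponse) { return new byte[0]; }
}

public class SaslDispatchSketch
{
    // Replaces the null check: branch on capability instead.
    static boolean supportsSasl(IAuthenticator auth)
    {
        return auth instanceof SaslCapableAuthenticator;
    }

    public static void main(String[] args)
    {
        System.out.println(supportsSasl(new PasswordAuthenticator())); // true
    }
}
```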

 Add SASL authentication to CQL native protocol
 --

 Key: CASSANDRA-5545
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5545
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
 Fix For: 2.0

 Attachments: 
 0001-Add-SASL-authentication-to-CQL-native-protocol.patch, 
 0001-Add-SASL-hooks-to-CQL-native-protocol.patch


 Adding hooks for SASL authentication would make it much easier to integrate 
 with external auth providers, such as Kerberos & NTLM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5545) Add SASL authentication to CQL native protocol

2013-05-21 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13663074#comment-13663074
 ] 

Sylvain Lebresne commented on CASSANDRA-5545:
-

It feels weird to have SaslAuthenticator return a string map and go through the 
old-style authenticate method, since true SASL mechanisms will do the actual 
authentication during the evaluateResponse() call (and so may have to hack the 
return of getCredentials() to make their authenticate() method work properly). 
Instead, I'd prefer changing the SaslAuthenticator API to:
{noformat}
public interface SaslAuthenticator
{
    public byte[] evaluateResponse(byte[] clientResponse) throws AuthenticationException;
    public boolean isComplete();
    public AuthenticatedUser getAuthenticatedUser();
}
{noformat}
We would then change ClientState.login to just take an AuthenticatedUser 
parameter, and the call to authenticate() would be moved to the thrift side 
(and into CredentialsMessage).

That way authenticate() is a thrift/protocol-v1-only method and can be made to 
throw an error for authenticators that don't care about it (of course, in the 
case of PlainTextSaslAuthenticator, it can just call authenticate() internally).

Other small remarks/nits:
* We really need authentication to throw AuthenticationException (as in my 
suggestion above), not SaslException, since the latter is not known by the 
protocol, which would send it to the client as a server error (i.e. a bug 
server side), which is not the case.
* We need to refuse SASL_RESPONSE messages in v1 and AUTHENTICATE messages in 
v2 (just throwing a ProtocolException in their respective decode methods would 
be fine).
* Might be worth resetting the saslAuthenticator to null in ServerConnection 
once authentication is complete, to have it garbage collected?
* Nit: a few minor code-style fixes (indentation for try in SaslResponse)
* Nit: I'd have moved SaslAuthenticator and PlainTextSaslAuthenticator to the 
org.apache.cassandra.auth package directly (and, in fact, would have made 
PlainTextSaslAuthenticator a private static inner class in 
PasswordAuthenticator).


 Add SASL authentication to CQL native protocol
 --

 Key: CASSANDRA-5545
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5545
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
 Fix For: 2.0

 Attachments: 
 0001-Add-SASL-authentication-to-CQL-native-protocol.patch, 
 0001-Add-SASL-hooks-to-CQL-native-protocol.patch


 Adding hooks for SASL authentication would make it much easier to integrate 
 with external auth providers, such as Kerberos & NTLM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5488) CassandraStorage throws NullPointerException (NPE) when widerows is set to 'true'

2013-05-21 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-5488:


Attachment: 5488-2.txt

An alternative way to do it: consolidate the two methods and check for null in 
that method.
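
In spirit, the consolidated helper looks like the following simplified sketch (java.util stand-ins replace Pig's Tuple and the Cassandra-specific parameters, which are omitted here): the null check lives in one place and the helper returns the tuple, so every caller just assigns `tuple = addKeyToTuple(...)`.

```java
import java.util.ArrayList;
import java.util.List;

public class ConsolidatedKeySketch
{
    // One helper handles a null tuple itself and returns the result,
    // instead of two variants relying on in-place mutation.
    static List<Object> addKeyToTuple(List<Object> tuple, Object key)
    {
        if (tuple == null)
            tuple = new ArrayList<>(); // the null check is consolidated here
        tuple.add(0, key);             // key always occupies the first slot
        return tuple;
    }

    public static void main(String[] args)
    {
        List<Object> t = addKeyToTuple(null, "rowkey");
        System.out.println(t); // [rowkey]
    }
}
```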

 CassandraStorage throws NullPointerException (NPE) when widerows is set to 
 'true'
 -

 Key: CASSANDRA-5488
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5488
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.1.9, 1.2.4
 Environment: Ubuntu 12.04.1 x64, Cassandra 1.2.4
Reporter: Sheetal Gosrani
Priority: Minor
  Labels: cassandra, hadoop, pig
 Fix For: 1.2.6

 Attachments: 5488-2.txt, 5488.txt


 CassandraStorage throws NPE when widerows is set to 'true'. 
 2 problems in getNextWide:
 1. Creation of tuple without specifying size
 2. Calling addKeyToTuple on lastKey instead of key
 java.lang.NullPointerException
 at 
 org.apache.cassandra.utils.ByteBufferUtil.string(ByteBufferUtil.java:167)
 at 
 org.apache.cassandra.utils.ByteBufferUtil.string(ByteBufferUtil.java:124)
 at org.apache.cassandra.cql.jdbc.JdbcUTF8.getString(JdbcUTF8.java:73)
 at org.apache.cassandra.cql.jdbc.JdbcUTF8.compose(JdbcUTF8.java:93)
 at org.apache.cassandra.db.marshal.UTF8Type.compose(UTF8Type.java:34)
 at org.apache.cassandra.db.marshal.UTF8Type.compose(UTF8Type.java:26)
 at 
 org.apache.cassandra.hadoop.pig.CassandraStorage.addKeyToTuple(CassandraStorage.java:313)
 at 
 org.apache.cassandra.hadoop.pig.CassandraStorage.getNextWide(CassandraStorage.java:196)
 at 
 org.apache.cassandra.hadoop.pig.CassandraStorage.getNext(CassandraStorage.java:224)
 at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:194)
 at 
 org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
 at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
 at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
 at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
 at org.apache.hadoop.mapred.Child.main(Child.java:249)
 2013-04-16 12:28:03,671 INFO org.apache.hadoop.mapred.Task: Runnning cleanup 
 for the task

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[6/6] git commit: Merge branch 'cassandra-1.2' into trunk

2013-05-21 Thread brandonwilliams
Merge branch 'cassandra-1.2' into trunk

Conflicts:
src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/44f178d1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/44f178d1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/44f178d1

Branch: refs/heads/trunk
Commit: 44f178d1e07eaa94f111ef7811c4de3b14484b85
Parents: 1d2c122 9d0eec2
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue May 21 11:14:23 2013 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue May 21 11:14:23 2013 -0500

--
 CHANGES.txt|2 +
 examples/pig/test/test_storage.pig |2 +-
 .../cassandra/hadoop/pig/CassandraStorage.java |   23 ++
 3 files changed, 13 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/44f178d1/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/44f178d1/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
--
diff --cc src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
index d4fb577,76feb5a..ba87c42
--- a/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
+++ b/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
@@@ -146,9 -147,9 +146,9 @@@ public class CassandraStorage extends L
                  if (tuple.size() == 0) // lastRow is a new one
                  {
                      key = (ByteBuffer)reader.getCurrentKey();
-                     addKeyToTuple(tuple, key, cfDef, parseType(cfDef.getKey_validation_class()));
+                     tuple = addKeyToTuple(tuple, key, cfDef, parseType(cfDef.getKey_validation_class()));
                  }
 -                for (Map.Entry<ByteBuffer, IColumn> entry : lastRow.entrySet())
 +                for (Map.Entry<ByteBuffer, Column> entry : lastRow.entrySet())
                  {
                      bag.add(columnToTuple(entry.getValue(), cfDef, parseType(cfDef.getComparator_type())));
                  }
@@@ -182,22 -183,22 +182,22 @@@
                  key = (ByteBuffer)reader.getCurrentKey();
                  if (lastKey != null && !(key.equals(lastKey))) // last key only had one value
                  {
-                     addKeyToTuple(tuple, lastKey, cfDef, parseType(cfDef.getKey_validation_class()));
+                     tuple = addKeyToTuple(tuple, lastKey, cfDef, parseType(cfDef.getKey_validation_class()));
 -                    for (Map.Entry<ByteBuffer, IColumn> entry : lastRow.entrySet())
 +                    for (Map.Entry<ByteBuffer, Column> entry : lastRow.entrySet())
                      {
                          bag.add(columnToTuple(entry.getValue(), cfDef, parseType(cfDef.getComparator_type())));
                      }
                      tuple.append(bag);
                      lastKey = key;
 -                    lastRow = (SortedMap<ByteBuffer,IColumn>)reader.getCurrentValue();
 +                    lastRow = (SortedMap<ByteBuffer,Column>)reader.getCurrentValue();
                      return tuple;
                  }
-                 addKeyToTuple(tuple, lastKey, cfDef, parseType(cfDef.getKey_validation_class()));
+                 tuple = addKeyToTuple(tuple, lastKey, cfDef, parseType(cfDef.getKey_validation_class()));
              }
 -            SortedMap<ByteBuffer,IColumn> row = (SortedMap<ByteBuffer,IColumn>)reader.getCurrentValue();
 +            SortedMap<ByteBuffer,Column> row = (SortedMap<ByteBuffer,Column>)reader.getCurrentValue();
              if (lastRow != null) // prepend what was read last time
              {
 -                for (Map.Entry<ByteBuffer, IColumn> entry : lastRow.entrySet())
 +                for (Map.Entry<ByteBuffer, Column> entry : lastRow.entrySet())
                  {
                      bag.add(columnToTuple(entry.getValue(), cfDef, parseType(cfDef.getComparator_type())));
                  }
@@@ -311,10 -309,10 +308,10 @@@
          {
              setTupleValue(tuple, 0, getDefaultMarshallers(cfDef).get(MarshallerType.KEY_VALIDATOR).compose(key));
          }
- 
+         return tuple;
      }
  
 -    private Tuple columnToTuple(IColumn col, CfDef cfDef, AbstractType comparator) throws IOException
 +    private Tuple columnToTuple(Column col, CfDef cfDef, AbstractType comparator) throws IOException
      {
  Tuple pair = 

[3/6] git commit: Fix NPE in Pig's widerow mode. Patch by Sheetal Gorsani and Jeremy Hanna, reviewed by brandonwilliams for CASSANDRA-5488

2013-05-21 Thread brandonwilliams
Fix NPE in Pig's widerow mode.
Patch by Sheetal Gorsani and Jeremy Hanna, reviewed by brandonwilliams
for CASSANDRA-5488


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7d2ce5f9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7d2ce5f9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7d2ce5f9

Branch: refs/heads/trunk
Commit: 7d2ce5f957b1fb392617c1ff05a561571eccd593
Parents: c5dc029
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue May 21 11:08:50 2013 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue May 21 11:08:50 2013 -0500

--
 CHANGES.txt|1 +
 examples/pig/test/test_storage.pig |2 +-
 .../cassandra/hadoop/pig/CassandraStorage.java |   23 ++
 3 files changed, 12 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d2ce5f9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7c89987..256e69a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -4,6 +4,7 @@
(CASSANDRA-5497)
  * fsync leveled manifest to avoid corruption (CASSANDRA-5535)
  * Fix Bound intersection computation (CASSANDRA-5551)
+ * Fix NPE in Pig's widerow mode (CASSANDRA-5488)
 
 1.1.11
  * Fix trying to load deleted row into row cache on startup (CASSANDRA-4463)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d2ce5f9/examples/pig/test/test_storage.pig
--
diff --git a/examples/pig/test/test_storage.pig 
b/examples/pig/test/test_storage.pig
index 026cb02..93dd91f 100644
--- a/examples/pig/test/test_storage.pig
+++ b/examples/pig/test/test_storage.pig
@@ -1,4 +1,4 @@
-rows = LOAD 'cassandra://PigTest/SomeApp' USING CassandraStorage();
+rows = LOAD 'cassandra://PigTest/SomeApp?widerows=true' USING CassandraStorage();
 -- full copy
 STORE rows INTO 'cassandra://PigTest/CopyOfSomeApp' USING CassandraStorage();
 -- single tuple

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d2ce5f9/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java 
b/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
index 55ccbb9..b681ee3 100644
--- a/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
+++ b/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
@@ -144,7 +144,7 @@ public class CassandraStorage extends LoadFunc implements StoreFuncInterface, Lo
             if (tuple.size() == 0) // lastRow is a new one
             {
                 key = (ByteBuffer)reader.getCurrentKey();
-                addKeyToTuple(tuple, key, cfDef, parseType(cfDef.getKey_validation_class()));
+                tuple = addKeyToTuple(tuple, key, cfDef, parseType(cfDef.getKey_validation_class()));
             }
             for (Map.Entry<ByteBuffer, IColumn> entry : lastRow.entrySet())
             {
@@ -180,7 +180,7 @@ public class CassandraStorage extends LoadFunc implements StoreFuncInterface, Lo
                 key = (ByteBuffer)reader.getCurrentKey();
                 if (lastKey != null && !(key.equals(lastKey))) // last key only had one value
                 {
-                    addKeyToTuple(tuple, lastKey, cfDef, parseType(cfDef.getKey_validation_class()));
+                    tuple = addKeyToTuple(tuple, lastKey, cfDef, parseType(cfDef.getKey_validation_class()));
                     for (Map.Entry<ByteBuffer, IColumn> entry : lastRow.entrySet())
                     {
                         bag.add(columnToTuple(entry.getValue(), cfDef, parseType(cfDef.getComparator_type())));
@@ -190,7 +190,7 @@ public class CassandraStorage extends LoadFunc implements StoreFuncInterface, Lo
                     lastRow = (SortedMap<ByteBuffer,IColumn>)reader.getCurrentValue();
                     return tuple;
                 }
-                addKeyToTuple(tuple, lastKey, cfDef, parseType(cfDef.getKey_validation_class()));
+                tuple = addKeyToTuple(tuple, lastKey, cfDef, parseType(cfDef.getKey_validation_class()));
             }
             SortedMap<ByteBuffer,IColumn> row = (SortedMap<ByteBuffer,IColumn>)reader.getCurrentValue();
             if (lastRow != null) // prepend what was read last time
@@ -233,7 +233,7 @@ public class CassandraStorage extends LoadFunc implements StoreFuncInterface, Lo
 // output tuple, will hold the key, each indexed column 

[1/6] git commit: Fix NPE in Pig's widerow mode. Patch by Sheetal Gorsani and Jeremy Hanna, reviewed by brandonwilliams for CASSANDRA-5488

2013-05-21 Thread brandonwilliams
Updated Branches:
  refs/heads/cassandra-1.1 c5dc0292e - 7d2ce5f95
  refs/heads/cassandra-1.2 b69c1aa4c - 9d0eec217
  refs/heads/trunk 1d2c12242 - 44f178d1e


Fix NPE in Pig's widerow mode.
Patch by Sheetal Gorsani and Jeremy Hanna, reviewed by brandonwilliams
for CASSANDRA-5488


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7d2ce5f9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7d2ce5f9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7d2ce5f9

Branch: refs/heads/cassandra-1.1
Commit: 7d2ce5f957b1fb392617c1ff05a561571eccd593
Parents: c5dc029
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue May 21 11:08:50 2013 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue May 21 11:08:50 2013 -0500

--
 CHANGES.txt|1 +
 examples/pig/test/test_storage.pig |2 +-
 .../cassandra/hadoop/pig/CassandraStorage.java |   23 ++
 3 files changed, 12 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d2ce5f9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7c89987..256e69a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -4,6 +4,7 @@
(CASSANDRA-5497)
  * fsync leveled manifest to avoid corruption (CASSANDRA-5535)
  * Fix Bound intersection computation (CASSANDRA-5551)
+ * Fix NPE in Pig's widerow mode (CASSANDRA-5488)
 
 1.1.11
  * Fix trying to load deleted row into row cache on startup (CASSANDRA-4463)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d2ce5f9/examples/pig/test/test_storage.pig
--
diff --git a/examples/pig/test/test_storage.pig 
b/examples/pig/test/test_storage.pig
index 026cb02..93dd91f 100644
--- a/examples/pig/test/test_storage.pig
+++ b/examples/pig/test/test_storage.pig
@@ -1,4 +1,4 @@
-rows = LOAD 'cassandra://PigTest/SomeApp' USING CassandraStorage();
+rows = LOAD 'cassandra://PigTest/SomeApp?widerows=true' USING CassandraStorage();
 -- full copy
 STORE rows INTO 'cassandra://PigTest/CopyOfSomeApp' USING CassandraStorage();
 -- single tuple

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d2ce5f9/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java 
b/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
index 55ccbb9..b681ee3 100644
--- a/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
+++ b/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
@@ -144,7 +144,7 @@ public class CassandraStorage extends LoadFunc implements StoreFuncInterface, Lo
             if (tuple.size() == 0) // lastRow is a new one
             {
                 key = (ByteBuffer)reader.getCurrentKey();
-                addKeyToTuple(tuple, key, cfDef, parseType(cfDef.getKey_validation_class()));
+                tuple = addKeyToTuple(tuple, key, cfDef, parseType(cfDef.getKey_validation_class()));
             }
             for (Map.Entry<ByteBuffer, IColumn> entry : lastRow.entrySet())
             {
@@ -180,7 +180,7 @@ public class CassandraStorage extends LoadFunc implements StoreFuncInterface, Lo
                 key = (ByteBuffer)reader.getCurrentKey();
                 if (lastKey != null && !(key.equals(lastKey))) // last key only had one value
                 {
-                    addKeyToTuple(tuple, lastKey, cfDef, parseType(cfDef.getKey_validation_class()));
+                    tuple = addKeyToTuple(tuple, lastKey, cfDef, parseType(cfDef.getKey_validation_class()));
                     for (Map.Entry<ByteBuffer, IColumn> entry : lastRow.entrySet())
                     {
                         bag.add(columnToTuple(entry.getValue(), cfDef, parseType(cfDef.getComparator_type())));
@@ -190,7 +190,7 @@ public class CassandraStorage extends LoadFunc implements StoreFuncInterface, Lo
                     lastRow = (SortedMap<ByteBuffer,IColumn>)reader.getCurrentValue();
                     return tuple;
                 }
-                addKeyToTuple(tuple, lastKey, cfDef, parseType(cfDef.getKey_validation_class()));
+                tuple = addKeyToTuple(tuple, lastKey, cfDef, parseType(cfDef.getKey_validation_class()));
             }
             SortedMap<ByteBuffer,IColumn> row = (SortedMap<ByteBuffer,IColumn>)reader.getCurrentValue();
             if (lastRow != null) // prepend what was read last time
@@ 

[4/6] git commit: Merge branch 'cassandra-1.1' into cassandra-1.2

2013-05-21 Thread brandonwilliams
Merge branch 'cassandra-1.1' into cassandra-1.2

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9d0eec21
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9d0eec21
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9d0eec21

Branch: refs/heads/trunk
Commit: 9d0eec217181e472faf4dcf720fe30f04296804a
Parents: b69c1aa 7d2ce5f
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue May 21 11:10:47 2013 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue May 21 11:10:47 2013 -0500

--
 CHANGES.txt|2 +
 examples/pig/test/test_storage.pig |2 +-
 .../cassandra/hadoop/pig/CassandraStorage.java |   23 ++
 3 files changed, 13 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9d0eec21/CHANGES.txt
--
diff --cc CHANGES.txt
index 619e415,256e69a..24e9163
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,113 -1,13 +1,115 @@@
 -1.1.12
 +1.2.6
 + * Write row markers when serializing schema (CASSANDRA-5572)
 + * Check only SSTables for the requested range when streaming (CASSANDRA-5569)
++Merged from 1.1
++ * Fix NPE in Pig's widerow mode (CASSANDRA-5488)
 +
 +
 +1.2.5
 + * make BytesToken.toString only return hex bytes (CASSANDRA-5566)
 + * Ensure that submitBackground enqueues at least one task (CASSANDRA-5554)
 + * fix 2i updates with identical values and timestamps (CASSANDRA-5540)
 + * fix compaction throttling bursty-ness (CASSANDRA-4316)
 + * reduce memory consumption of IndexSummary (CASSANDRA-5506)
 + * remove per-row column name bloom filters (CASSANDRA-5492)
 + * Include fatal errors in trace events (CASSANDRA-5447)
 + * Ensure that PerRowSecondaryIndex is notified of row-level deletes
 +   (CASSANDRA-5445)
 + * Allow empty blob literals in CQL3 (CASSANDRA-5452)
 + * Fix streaming RangeTombstones at column index boundary (CASSANDRA-5418)
 + * Fix preparing statements when current keyspace is not set (CASSANDRA-5468)
 + * Fix SemanticVersion.isSupportedBy minor/patch handling (CASSANDRA-5496)
 + * Don't provide oldCfId for post-1.1 system cfs (CASSANDRA-5490)
 + * Fix primary range ignores replication strategy (CASSANDRA-5424)
 + * Fix shutdown of binary protocol server (CASSANDRA-5507)
 + * Fix repair -snapshot not working (CASSANDRA-5512)
 + * Set isRunning flag later in binary protocol server (CASSANDRA-5467)
 + * Fix use of CQL3 functions with descending clustering order (CASSANDRA-5472)
 + * Disallow renaming columns one at a time for thrift table in CQL3
 +   (CASSANDRA-5531)
 + * cqlsh: add CLUSTERING ORDER BY support to DESCRIBE (CASSANDRA-5528)
 + * Add custom secondary index support to CQL3 (CASSANDRA-5484)
 + * Fix repair hanging silently on unexpected error (CASSANDRA-5229)
 + * Fix Ec2Snitch regression introduced by CASSANDRA-5171 (CASSANDRA-5432)
 + * Add nodetool enablebackup/disablebackup (CASSANDRA-5556)
 + * cqlsh: fix DESCRIBE after case insensitive USE (CASSANDRA-5567)
 +Merged from 1.1
   * Add retry mechanism to OTC for non-droppable_verbs (CASSANDRA-5393)
 - * Use allocator information to improve memtable memory usage estimate 
 + * Use allocator information to improve memtable memory usage estimate
 (CASSANDRA-5497)
 + * Fix trying to load deleted row into row cache on startup (CASSANDRA-4463)
   * fsync leveled manifest to avoid corruption (CASSANDRA-5535)
   * Fix Bound intersection computation (CASSANDRA-5551)
 - * Fix NPE in Pig's widerow mode (CASSANDRA-5488)
 + * sstablescrub now respects max memory size in cassandra.in.sh 
(CASSANDRA-5562)
 +
 +
 +1.2.4
 + * Ensure that PerRowSecondaryIndex updates see the most recent values
 +   (CASSANDRA-5397)
 + * avoid duplicate index entries in PrecompactedRow and 
 +   ParallelCompactionIterable (CASSANDRA-5395)
 + * remove the index entry on oldColumn when new column is a tombstone 
 +   (CASSANDRA-5395)
 + * Change default stream throughput from 400 to 200 mbps (CASSANDRA-5036)
 + * Gossiper logs DOWN for symmetry with UP (CASSANDRA-5187)
 + * Fix mixing prepared statements between keyspaces (CASSANDRA-5352)
 + * Fix consistency level during bootstrap - strike 3 (CASSANDRA-5354)
 + * Fix transposed arguments in AlreadyExistsException (CASSANDRA-5362)
 + * Improve asynchronous hint delivery (CASSANDRA-5179)
 + * Fix Guava dependency version (12.0 - 13.0.1) for Maven (CASSANDRA-5364)
 + * Validate that provided CQL3 collection value are < 64K (CASSANDRA-5355)
 + * Make upgradeSSTable skip current version sstables by default 
(CASSANDRA-5366)
 + * Optimize min/max timestamp collection (CASSANDRA-5373)
 + * Invalid streamId in cql binary protocol when using invalid CL 

[5/6] git commit: Merge branch 'cassandra-1.1' into cassandra-1.2

2013-05-21 Thread brandonwilliams
Merge branch 'cassandra-1.1' into cassandra-1.2

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9d0eec21
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9d0eec21
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9d0eec21

Branch: refs/heads/cassandra-1.2
Commit: 9d0eec217181e472faf4dcf720fe30f04296804a
Parents: b69c1aa 7d2ce5f
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue May 21 11:10:47 2013 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue May 21 11:10:47 2013 -0500

--
 CHANGES.txt|2 +
 examples/pig/test/test_storage.pig |2 +-
 .../cassandra/hadoop/pig/CassandraStorage.java |   23 ++
 3 files changed, 13 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9d0eec21/CHANGES.txt
--
diff --cc CHANGES.txt
index 619e415,256e69a..24e9163
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,113 -1,13 +1,115 @@@
 -1.1.12
 +1.2.6
 + * Write row markers when serializing schema (CASSANDRA-5572)
 + * Check only SSTables for the requested range when streaming (CASSANDRA-5569)
++Merged from 1.1
++ * Fix NPE in Pig's widerow mode (CASSANDRA-5488)
 +
 +
 +1.2.5
 + * make BytesToken.toString only return hex bytes (CASSANDRA-5566)
 + * Ensure that submitBackground enqueues at least one task (CASSANDRA-5554)
 + * fix 2i updates with identical values and timestamps (CASSANDRA-5540)
 + * fix compaction throttling bursty-ness (CASSANDRA-4316)
 + * reduce memory consumption of IndexSummary (CASSANDRA-5506)
 + * remove per-row column name bloom filters (CASSANDRA-5492)
 + * Include fatal errors in trace events (CASSANDRA-5447)
 + * Ensure that PerRowSecondaryIndex is notified of row-level deletes
 +   (CASSANDRA-5445)
 + * Allow empty blob literals in CQL3 (CASSANDRA-5452)
 + * Fix streaming RangeTombstones at column index boundary (CASSANDRA-5418)
 + * Fix preparing statements when current keyspace is not set (CASSANDRA-5468)
 + * Fix SemanticVersion.isSupportedBy minor/patch handling (CASSANDRA-5496)
 + * Don't provide oldCfId for post-1.1 system cfs (CASSANDRA-5490)
 + * Fix primary range ignores replication strategy (CASSANDRA-5424)
 + * Fix shutdown of binary protocol server (CASSANDRA-5507)
 + * Fix repair -snapshot not working (CASSANDRA-5512)
 + * Set isRunning flag later in binary protocol server (CASSANDRA-5467)
 + * Fix use of CQL3 functions with descending clustering order (CASSANDRA-5472)
 + * Disallow renaming columns one at a time for thrift table in CQL3
 +   (CASSANDRA-5531)
 + * cqlsh: add CLUSTERING ORDER BY support to DESCRIBE (CASSANDRA-5528)
 + * Add custom secondary index support to CQL3 (CASSANDRA-5484)
 + * Fix repair hanging silently on unexpected error (CASSANDRA-5229)
 + * Fix Ec2Snitch regression introduced by CASSANDRA-5171 (CASSANDRA-5432)
 + * Add nodetool enablebackup/disablebackup (CASSANDRA-5556)
 + * cqlsh: fix DESCRIBE after case insensitive USE (CASSANDRA-5567)
 +Merged from 1.1
   * Add retry mechanism to OTC for non-droppable_verbs (CASSANDRA-5393)
 - * Use allocator information to improve memtable memory usage estimate 
 + * Use allocator information to improve memtable memory usage estimate
 (CASSANDRA-5497)
 + * Fix trying to load deleted row into row cache on startup (CASSANDRA-4463)
   * fsync leveled manifest to avoid corruption (CASSANDRA-5535)
   * Fix Bound intersection computation (CASSANDRA-5551)
 - * Fix NPE in Pig's widerow mode (CASSANDRA-5488)
 + * sstablescrub now respects max memory size in cassandra.in.sh 
(CASSANDRA-5562)
 +
 +
 +1.2.4
 + * Ensure that PerRowSecondaryIndex updates see the most recent values
 +   (CASSANDRA-5397)
 + * avoid duplicate index entries in PrecompactedRow and 
 +   ParallelCompactionIterable (CASSANDRA-5395)
 + * remove the index entry on oldColumn when new column is a tombstone 
 +   (CASSANDRA-5395)
 + * Change default stream throughput from 400 to 200 mbps (CASSANDRA-5036)
 + * Gossiper logs DOWN for symmetry with UP (CASSANDRA-5187)
 + * Fix mixing prepared statements between keyspaces (CASSANDRA-5352)
 + * Fix consistency level during bootstrap - strike 3 (CASSANDRA-5354)
 + * Fix transposed arguments in AlreadyExistsException (CASSANDRA-5362)
 + * Improve asynchronous hint delivery (CASSANDRA-5179)
 + * Fix Guava dependency version (12.0 -> 13.0.1) for Maven (CASSANDRA-5364)
 + * Validate that provided CQL3 collection values are < 64K (CASSANDRA-5355)
 + * Make upgradeSSTable skip current version sstables by default 
(CASSANDRA-5366)
 + * Optimize min/max timestamp collection (CASSANDRA-5373)
 + * Invalid streamId in cql binary protocol when using 

[2/6] git commit: Fix NPE in Pig's widerow mode. Patch by Sheetal Gosrani and Jeremy Hanna, reviewed by brandonwilliams for CASSANDRA-5488

2013-05-21 Thread brandonwilliams
Fix NPE in Pig's widerow mode.
Patch by Sheetal Gosrani and Jeremy Hanna, reviewed by brandonwilliams
for CASSANDRA-5488


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7d2ce5f9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7d2ce5f9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7d2ce5f9

Branch: refs/heads/cassandra-1.2
Commit: 7d2ce5f957b1fb392617c1ff05a561571eccd593
Parents: c5dc029
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue May 21 11:08:50 2013 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue May 21 11:08:50 2013 -0500

--
 CHANGES.txt|1 +
 examples/pig/test/test_storage.pig |2 +-
 .../cassandra/hadoop/pig/CassandraStorage.java |   23 ++
 3 files changed, 12 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d2ce5f9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7c89987..256e69a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -4,6 +4,7 @@
(CASSANDRA-5497)
  * fsync leveled manifest to avoid corruption (CASSANDRA-5535)
  * Fix Bound intersection computation (CASSANDRA-5551)
+ * Fix NPE in Pig's widerow mode (CASSANDRA-5488)
 
 1.1.11
  * Fix trying to load deleted row into row cache on startup (CASSANDRA-4463)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d2ce5f9/examples/pig/test/test_storage.pig
--
diff --git a/examples/pig/test/test_storage.pig 
b/examples/pig/test/test_storage.pig
index 026cb02..93dd91f 100644
--- a/examples/pig/test/test_storage.pig
+++ b/examples/pig/test/test_storage.pig
@@ -1,4 +1,4 @@
-rows = LOAD 'cassandra://PigTest/SomeApp' USING CassandraStorage();
+rows = LOAD 'cassandra://PigTest/SomeApp?widerows=true' USING 
CassandraStorage();
 -- full copy
 STORE rows INTO 'cassandra://PigTest/CopyOfSomeApp' USING CassandraStorage();
 -- single tuple

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d2ce5f9/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java 
b/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
index 55ccbb9..b681ee3 100644
--- a/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
+++ b/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
@@ -144,7 +144,7 @@ public class CassandraStorage extends LoadFunc implements 
StoreFuncInterface, Lo
 if (tuple.size() == 0) // lastRow is a new one
 {
 key = (ByteBuffer)reader.getCurrentKey();
-addKeyToTuple(tuple, key, cfDef, 
parseType(cfDef.getKey_validation_class()));
+tuple = addKeyToTuple(tuple, key, cfDef, 
parseType(cfDef.getKey_validation_class()));
 }
  for (Map.Entry<ByteBuffer, IColumn> entry : 
lastRow.entrySet())
 {
@@ -180,7 +180,7 @@ public class CassandraStorage extends LoadFunc implements 
StoreFuncInterface, Lo
 key = (ByteBuffer)reader.getCurrentKey();
  if (lastKey != null && !(key.equals(lastKey))) // last key 
only had one value
 {
-addKeyToTuple(tuple, lastKey, cfDef, 
parseType(cfDef.getKey_validation_class()));
+tuple = addKeyToTuple(tuple, lastKey, cfDef, 
parseType(cfDef.getKey_validation_class()));
  for (Map.Entry<ByteBuffer, IColumn> entry : 
lastRow.entrySet())
 {
  bag.add(columnToTuple(entry.getValue(), cfDef, 
parseType(cfDef.getComparator_type())));
@@ -190,7 +190,7 @@ public class CassandraStorage extends LoadFunc implements 
StoreFuncInterface, Lo
  lastRow = 
(SortedMap<ByteBuffer,IColumn>)reader.getCurrentValue();
 return tuple;
 }
-addKeyToTuple(tuple, lastKey, cfDef, 
parseType(cfDef.getKey_validation_class()));
+tuple = addKeyToTuple(tuple, lastKey, cfDef, 
parseType(cfDef.getKey_validation_class()));
 }
  SortedMap<ByteBuffer,IColumn> row = 
(SortedMap<ByteBuffer,IColumn>)reader.getCurrentValue();
 if (lastRow != null) // prepend what was read last time
@@ -233,7 +233,7 @@ public class CassandraStorage extends LoadFunc implements 
StoreFuncInterface, Lo
 // output tuple, will hold the key, each indexed 

[jira] [Commented] (CASSANDRA-5545) Add SASL authentication to CQL native protocol

2013-05-21 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13663081#comment-13663081
 ] 

Sylvain Lebresne commented on CASSANDRA-5545:
-

bq. Why not make SASLAuthenticator extend IAuthenticator

The problem is that SASLAuthenticator is intrinsically stateful, i.e. we need a 
new one for each new authentication. And IAuthenticator has methods like setup() 
that somewhat suggest there is just one IAuthenticator.

 Add SASL authentication to CQL native protocol
 --

 Key: CASSANDRA-5545
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5545
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
 Fix For: 2.0

 Attachments: 
 0001-Add-SASL-authentication-to-CQL-native-protocol.patch, 
 0001-Add-SASL-hooks-to-CQL-native-protocol.patch


 Adding hooks for SASL authentication would make it much easier to integrate 
 with external auth providers, such as Kerberos & NTLM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (CASSANDRA-5488) CassandraStorage throws NullPointerException (NPE) when widerows is set to 'true'

2013-05-21 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-5488:
---

Assignee: Sheetal Gosrani

 CassandraStorage throws NullPointerException (NPE) when widerows is set to 
 'true'
 -

 Key: CASSANDRA-5488
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5488
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.1.9, 1.2.4
 Environment: Ubuntu 12.04.1 x64, Cassandra 1.2.4
Reporter: Sheetal Gosrani
Assignee: Sheetal Gosrani
Priority: Minor
  Labels: cassandra, hadoop, pig
 Fix For: 1.1.12, 1.2.6

 Attachments: 5488-2.txt, 5488.txt


 CassandraStorage throws NPE when widerows is set to 'true'. 
 2 problems in getNextWide:
 1. Creation of tuple without specifying size
 2. Calling addKeyToTuple on lastKey instead of key
 java.lang.NullPointerException
 at 
 org.apache.cassandra.utils.ByteBufferUtil.string(ByteBufferUtil.java:167)
 at 
 org.apache.cassandra.utils.ByteBufferUtil.string(ByteBufferUtil.java:124)
 at org.apache.cassandra.cql.jdbc.JdbcUTF8.getString(JdbcUTF8.java:73)
 at org.apache.cassandra.cql.jdbc.JdbcUTF8.compose(JdbcUTF8.java:93)
 at org.apache.cassandra.db.marshal.UTF8Type.compose(UTF8Type.java:34)
 at org.apache.cassandra.db.marshal.UTF8Type.compose(UTF8Type.java:26)
 at 
 org.apache.cassandra.hadoop.pig.CassandraStorage.addKeyToTuple(CassandraStorage.java:313)
 at 
 org.apache.cassandra.hadoop.pig.CassandraStorage.getNextWide(CassandraStorage.java:196)
 at 
 org.apache.cassandra.hadoop.pig.CassandraStorage.getNext(CassandraStorage.java:224)
 at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:194)
 at 
 org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
 at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
 at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
 at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
 at org.apache.hadoop.mapred.Child.main(Child.java:249)
 2013-04-16 12:28:03,671 INFO org.apache.hadoop.mapred.Task: Runnning cleanup 
 for the task

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5529) ColumnFamilyRecordReader fails for large datasets

2013-05-21 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13663104#comment-13663104
 ] 

T Jake Luciani commented on CASSANDRA-5529:
---

Looks fine.  I filed THRIFT-1975 to get this issue fixed in general.


 ColumnFamilyRecordReader fails for large datasets
 -

 Key: CASSANDRA-5529
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5529
 Project: Cassandra
  Issue Type: Bug
  Components: API, Hadoop
Affects Versions: 0.6
Reporter: Rob Timpe
Assignee: Jonathan Ellis
 Fix For: 1.2.6

 Attachments: 5529-1.1.txt, 5529.txt


 When running mapreduce jobs that read directly from cassandra, the job will 
 sometimes fail with an exception like this:
 java.lang.RuntimeException: com.rockmelt.org.apache.thrift.TException: 
 Message length exceeded: 40
   at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:400)
   at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:406)
   at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:329)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordReader.getProgress(ColumnFamilyRecordReader.java:109)
   at 
 org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.getProgress(MapTask.java:522)
   at 
 org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:547)
   at 
 org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:771)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:375)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
   at org.apache.hadoop.mapred.Child.main(Child.java:249)
 Caused by: com.rockmelt.org.apache.thrift.TException: Message length 
 exceeded: 40
   at 
 com.rockmelt.org.apache.thrift.protocol.TBinaryProtocol.checkReadLength(TBinaryProtocol.java:393)
   at 
 com.rockmelt.org.apache.thrift.protocol.TBinaryProtocol.readBinary(TBinaryProtocol.java:363)
   at org.apache.cassandra.thrift.Column.read(Column.java:528)
   at 
 org.apache.cassandra.thrift.ColumnOrSuperColumn.read(ColumnOrSuperColumn.java:507)
   at org.apache.cassandra.thrift.KeySlice.read(KeySlice.java:408)
   at 
 org.apache.cassandra.thrift.Cassandra$get_range_slices_result.read(Cassandra.java:12422)
   at 
 com.rockmelt.org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:696)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:680)
   at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:362)
   ... 16 more
 In ColumnFamilyRecordReader#initialize, a TBinaryProtocol is created as 
 follows:
 TTransport transport = 
 ConfigHelper.getInputTransportFactory(conf).openTransport(socket, conf);
 TBinaryProtocol binaryProtocol = new TBinaryProtocol(transport, 
 ConfigHelper.getThriftMaxMessageLength(conf));
 client = new Cassandra.Client(binaryProtocol);
 But each time a call to cassandra is made, checkReadLength(int length) is 
 called in TBinaryProtocol, which includes this:
  readLength_ -= length;
  if (readLength_ < 0) {
 throw new TException("Message length exceeded: " + length);
  }
  The result is that readLength_ is decreased each time, until it goes negative 
  and an exception is thrown.  This will only happen if you're reading a lot of 
  data and your split size is large (which is maybe why people haven't noticed 
  it earlier).  This happens regardless of whether you use wide row support.
 I'm not sure what the right fix is.  It seems like you could either reset the 
 length of TBinaryProtocol after each call or just use a new TBinaryProtocol 
 each time.
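
 A toy model of that bookkeeping (a hypothetical class, not the real Thrift
 source) makes the cumulative-decrement bug easy to see: the budget is set
 once at construction and shrinks on every read, so a long run of small
 messages eventually exhausts it even though no single message is large.

```java
// Toy model of TBinaryProtocol's readLength_ bookkeeping (illustrative only).
public class ReadLengthModel {
    private long readLength; // analogous to TBinaryProtocol.readLength_

    public ReadLengthModel(long maxLength) {
        this.readLength = maxLength;
    }

    // Mirrors the checkReadLength(int) logic quoted in the report:
    // subtract first, then fail once the running total goes negative.
    public void checkReadLength(int length) {
        readLength -= length;
        if (readLength < 0)
            throw new IllegalStateException("Message length exceeded: " + length);
    }

    // Returns the 1-based index of the first read that fails, or -1 if none do.
    public static int firstFailingRead(long budget, int messageLength, int reads) {
        ReadLengthModel protocol = new ReadLengthModel(budget);
        for (int i = 1; i <= reads; i++) {
            try {
                protocol.checkReadLength(messageLength);
            } catch (IllegalStateException e) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // Budget of 100 "bytes", ten reads of 15 bytes each: the 7th read
        // pushes the counter negative and fails, even though every message
        // is far below the limit -- matching jobs that die only on large splits.
        System.out.println(firstFailingRead(100, 15, 10)); // prints 7
    }
}
```

 Either proposed remedy (resetting the counter per call, or a fresh protocol
 per call) amounts to restoring the budget before each message.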

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5545) Add SASL authentication to CQL native protocol

2013-05-21 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13663122#comment-13663122
 ] 

Aleksey Yeschenko commented on CASSANDRA-5545:
--

That still doesn't mean that IAuthenticator modification is necessary. At worst 
this means two new interfaces: one extending IAuthenticator, with 
newAuthenticator(), and another one, SaslAuthenticator from the patch (that 
one should be nested inside ISaslAwareAuthenticator).

 Add SASL authentication to CQL native protocol
 --

 Key: CASSANDRA-5545
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5545
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
 Fix For: 2.0

 Attachments: 
 0001-Add-SASL-authentication-to-CQL-native-protocol.patch, 
 0001-Add-SASL-hooks-to-CQL-native-protocol.patch


 Adding hooks for SASL authentication would make it much easier to integrate 
 with external auth providers, such as Kerberos & NTLM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5545) Add SASL authentication to CQL native protocol

2013-05-21 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13663129#comment-13663129
 ] 

Sylvain Lebresne commented on CASSANDRA-5545:
-

Fair enough, I'm fine doing that.

 Add SASL authentication to CQL native protocol
 --

 Key: CASSANDRA-5545
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5545
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
 Fix For: 2.0

 Attachments: 
 0001-Add-SASL-authentication-to-CQL-native-protocol.patch, 
 0001-Add-SASL-hooks-to-CQL-native-protocol.patch


 Adding hooks for SASL authentication would make it much easier to integrate 
 with external auth providers, such as Kerberos & NTLM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5584) Incorrect use of System.nanoTime()

2013-05-21 Thread Mikhail Mazursky (JIRA)
Mikhail Mazursky created CASSANDRA-5584:
---

 Summary: Incorrect use of System.nanoTime()
 Key: CASSANDRA-5584
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5584
 Project: Cassandra
  Issue Type: Bug
Reporter: Mikhail Mazursky
Priority: Trivial


From System.nanoTime() JavaDoc:
{noformat}
For example, to measure how long some code takes to execute:
 long startTime = System.nanoTime();
 // ... the code being measured ...
 long estimatedTime = System.nanoTime() - startTime; 

To compare two nanoTime values
 long t0 = System.nanoTime();
 ...
 long t1 = System.nanoTime();
one should use t1 - t0 < 0, not t1 < t0, because of the possibility of 
numerical overflow.
{noformat}
I found one place with such incorrect use that can result in overflow and in 
incorrect timeout handling. See attached patch.
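
The overflow hazard can be shown concretely. The sketch below is illustrative
code, not the patched Cassandra source: a deadline computed near
Long.MAX_VALUE is compared against a nanoTime reading that has wrapped past
it; the direct comparison misses the timeout while the subtraction-based
comparison catches it.

```java
// Demonstrates why elapsed-time checks must compare differences,
// not raw System.nanoTime() values (illustrative class and names).
public class NanoTimeDemo {
    // Broken: direct comparison fails when the counter wraps around.
    static boolean timedOutWrong(long deadline, long now) {
        return now > deadline;
    }

    // Correct: the difference wraps too, so its sign stays meaningful
    // as long as the real elapsed time is under ~292 years.
    static boolean timedOutRight(long deadline, long now) {
        return now - deadline > 0;
    }

    public static void main(String[] args) {
        long deadline = Long.MAX_VALUE - 10; // deadline just before the wrap
        long now = Long.MIN_VALUE + 10;      // counter has wrapped past it
        System.out.println(timedOutWrong(deadline, now)); // false -- misses the timeout
        System.out.println(timedOutRight(deadline, now)); // true  -- detects it
    }
}
```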

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5584) Incorrect use of System.nanoTime()

2013-05-21 Thread Mikhail Mazursky (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Mazursky updated CASSANDRA-5584:


Attachment: trunk-5584.txt

 Incorrect use of System.nanoTime()
 --

 Key: CASSANDRA-5584
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5584
 Project: Cassandra
  Issue Type: Bug
Reporter: Mikhail Mazursky
Priority: Trivial
 Attachments: trunk-5584.txt


 From System.nanoTime() JavaDoc:
 {noformat}
 For example, to measure how long some code takes to execute:
  long startTime = System.nanoTime();
  // ... the code being measured ...
  long estimatedTime = System.nanoTime() - startTime; 
 To compare two nanoTime values
  long t0 = System.nanoTime();
  ...
  long t1 = System.nanoTime();
  one should use t1 - t0 < 0, not t1 < t0, because of the possibility of 
  numerical overflow.
 {noformat}
 I found one place with such incorrect use that can result in overflow and in 
 incorrect timeout handling. See attached patch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5314) Replaying old batches can 'undo' deletes

2013-05-21 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13663220#comment-13663220
 ] 

Jonathan Ellis commented on CASSANDRA-5314:
---

Why do we add this?

{code}
cf.addColumn(new Column(columnName(), 
ByteBufferUtil.EMPTY_BYTE_BUFFER, timestamp));
{code}

Nit: CopyOnWriteArraySet is a bit of an odd choice, since we expect to mutate 
it once for each entry.

 Replaying old batches can 'undo' deletes
 

 Key: CASSANDRA-5314
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5314
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 1.2.6


 Batchlog manager does not subtract the time spent in the batchlog from hints' 
 ttls and this may cause undoing deletes. The attached patch fixes it.
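
 The fix amounts to ageing each hint's TTL by the time the mutation already
 spent in the batchlog, and dropping the hint once that adjusted TTL reaches
 zero; a minimal sketch with illustrative names (not the actual
 BatchlogManager code):

```java
// Hedged sketch of the TTL adjustment described above; method and field
// names are hypothetical, not the real Cassandra source.
public class HintTtl {
    /** TTL remaining after subtracting the seconds spent in the batchlog. */
    static int adjustedTtl(int originalTtlSeconds, long writtenAtMillis, long nowMillis) {
        int elapsedSeconds = (int) ((nowMillis - writtenAtMillis) / 1000);
        // If the batch sat longer than the TTL, the hint is already expired
        // and must not be replayed: replaying it could resurrect data whose
        // tombstone has since been garbage-collected, i.e. "undo" a delete.
        return Math.max(0, originalTtlSeconds - elapsedSeconds);
    }

    public static void main(String[] args) {
        // A hint with a 10-hour TTL that waited 2 hours in the batchlog
        // should be replayed with only 8 hours (28800 s) left.
        System.out.println(adjustedTtl(36000, 0L, 7200_000L)); // prints 28800
    }
}
```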

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5314) Replaying old batches can 'undo' deletes

2013-05-21 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13663245#comment-13663245
 ] 

Aleksey Yeschenko commented on CASSANDRA-5314:
--

bq. Why do we add this?

It's the CQL3 row marker. Technically we don't have to do this, but it's just 
the right way to do it and I would sleep better with it in place. Could prevent 
something like CASSANDRA-5572 from happening again in the future (although it's 
true that we only remove the whole rows in the batchlog so the probability of 
something like this is low).

bq. Nit: CopyOnWriteArraySet is a bit of an odd choice, since we expect to 
mutate it once for each entry.

It's mostly irrelevant here - just used the first thread-safe set that came to 
mind.

 Replaying old batches can 'undo' deletes
 

 Key: CASSANDRA-5314
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5314
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 1.2.6


 Batchlog manager does not subtract the time spent in the batchlog from hints' 
 ttls and this may cause undoing deletes. The attached patch fixes it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[02/11] git commit: Remove buggy thrift max message length option patch by jbellis; reviewed by tjake for CASSANDRA-5529

2013-05-21 Thread jbellis
Remove buggy thrift max message length option
patch by jbellis; reviewed by tjake for CASSANDRA-5529


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ac19c121
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ac19c121
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ac19c121

Branch: refs/heads/trunk
Commit: ac19c121524c928ff6f3237e12a26e42766ae836
Parents: c5dc029
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue May 21 13:36:26 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue May 21 13:36:26 2013 -0500

--
 CHANGES.txt|2 +
 conf/cassandra.yaml|6 +
 src/java/org/apache/cassandra/config/Config.java   |2 +
 .../cassandra/config/DatabaseDescriptor.java   |   10 
 .../cassandra/hadoop/ColumnFamilyOutputFormat.java |4 +-
 .../cassandra/hadoop/ColumnFamilyRecordReader.java |2 +-
 .../org/apache/cassandra/hadoop/ConfigHelper.java  |   15 ++-
 .../apache/cassandra/thrift/CassandraDaemon.java   |2 +-
 .../apache/cassandra/thrift/TBinaryProtocol.java   |   19 ---
 9 files changed, 12 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ac19c121/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7c89987..501a68f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,10 +1,12 @@
 1.1.12
+ * Remove buggy thrift max message length option (CASSANDRA-5529)
  * Add retry mechanism to OTC for non-droppable_verbs (CASSANDRA-5393)
  * Use allocator information to improve memtable memory usage estimate 
(CASSANDRA-5497)
  * fsync leveled manifest to avoid corruption (CASSANDRA-5535)
  * Fix Bound intersection computation (CASSANDRA-5551)
 
+
 1.1.11
  * Fix trying to load deleted row into row cache on startup (CASSANDRA-4463)
  * Update offline scrub for 1.0 - 1.1 directory structure (CASSANDRA-5195)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ac19c121/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 37f41fb..027479d 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -330,15 +330,11 @@ rpc_server_type: sync
 # rpc_send_buff_size_in_bytes:
 # rpc_recv_buff_size_in_bytes:
 
-# Frame size for thrift (maximum field length).
+# Frame size for thrift (maximum message length).
 # 0 disables TFramedTransport in favor of TSocket. This option
 # is deprecated; we strongly recommend using Framed mode.
 thrift_framed_transport_size_in_mb: 15
 
-# The max length of a thrift message, including all fields and
-# internal thrift overhead.
-thrift_max_message_length_in_mb: 16
-
 # Set to true to have Cassandra create a hard link to each sstable
 # flushed or streamed locally in a backups/ subdirectory of the
 # Keyspace data.  Removing these links is the operator's

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ac19c121/src/java/org/apache/cassandra/config/Config.java
--
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index a08a694..11beea6 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -77,7 +77,9 @@ public class Config
 public Integer rpc_send_buff_size_in_bytes;
 public Integer rpc_recv_buff_size_in_bytes;
 
+@Deprecated
 public Integer thrift_max_message_length_in_mb = 16;
+
 public Integer thrift_framed_transport_size_in_mb = 15;
 public Boolean snapshot_before_compaction = false;
 public Boolean auto_snapshot = true;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ac19c121/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 0c460dc..f55c89a 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -316,11 +316,6 @@ public class DatabaseDescriptor
 if (conf.thrift_framed_transport_size_in_mb <= 0)
 throw new 
ConfigurationException("thrift_framed_transport_size_in_mb must be positive");
 
-if (conf.thrift_framed_transport_size_in_mb > 0 && 
conf.thrift_max_message_length_in_mb < conf.thrift_framed_transport_size_in_mb)
-{
-throw new 
ConfigurationException("thrift_max_message_length_in_mb must be greater 

[04/11] git commit: merge from 1.1

2013-05-21 Thread jbellis
merge from 1.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/40669a33
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/40669a33
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/40669a33

Branch: refs/heads/trunk
Commit: 40669a33027d241ceda76e842afb5de0bcada5b6
Parents: b69c1aa ac19c12
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue May 21 13:41:46 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue May 21 13:41:46 2013 -0500

--
 CHANGES.txt|   11 
 conf/cassandra.yaml|6 +
 src/java/org/apache/cassandra/config/Config.java   |2 +
 .../cassandra/config/DatabaseDescriptor.java   |8 --
 .../cassandra/hadoop/ColumnFamilyOutputFormat.java |4 +-
 .../cassandra/hadoop/ColumnFamilyRecordReader.java |2 +-
 .../org/apache/cassandra/hadoop/ConfigHelper.java  |   15 ++-
 .../apache/cassandra/thrift/TBinaryProtocol.java   |   19 ---
 .../org/apache/cassandra/thrift/ThriftServer.java  |2 +-
 9 files changed, 21 insertions(+), 48 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/40669a33/CHANGES.txt
--
diff --cc CHANGES.txt
index 619e415,501a68f..103f659
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,113 -1,14 +1,124 @@@
 -1.1.12
 +1.2.6
 + * Write row markers when serializing schema (CASSANDRA-5572)
 + * Check only SSTables for the requested range when streaming (CASSANDRA-5569)
++Merged from 1.1:
+  * Remove buggy thrift max message length option (CASSANDRA-5529)
 +
 +
 +1.2.5
 + * make BytesToken.toString only return hex bytes (CASSANDRA-5566)
 + * Ensure that submitBackground enqueues at least one task (CASSANDRA-5554)
 + * fix 2i updates with identical values and timestamps (CASSANDRA-5540)
 + * fix compaction throttling bursty-ness (CASSANDRA-4316)
 + * reduce memory consumption of IndexSummary (CASSANDRA-5506)
 + * remove per-row column name bloom filters (CASSANDRA-5492)
 + * Include fatal errors in trace events (CASSANDRA-5447)
 + * Ensure that PerRowSecondaryIndex is notified of row-level deletes
 +   (CASSANDRA-5445)
 + * Allow empty blob literals in CQL3 (CASSANDRA-5452)
 + * Fix streaming RangeTombstones at column index boundary (CASSANDRA-5418)
 + * Fix preparing statements when current keyspace is not set (CASSANDRA-5468)
 + * Fix SemanticVersion.isSupportedBy minor/patch handling (CASSANDRA-5496)
 + * Don't provide oldCfId for post-1.1 system cfs (CASSANDRA-5490)
 + * Fix primary range ignores replication strategy (CASSANDRA-5424)
 + * Fix shutdown of binary protocol server (CASSANDRA-5507)
 + * Fix repair -snapshot not working (CASSANDRA-5512)
 + * Set isRunning flag later in binary protocol server (CASSANDRA-5467)
 + * Fix use of CQL3 functions with descending clustering order (CASSANDRA-5472)
 + * Disallow renaming columns one at a time for thrift table in CQL3
 +   (CASSANDRA-5531)
 + * cqlsh: add CLUSTERING ORDER BY support to DESCRIBE (CASSANDRA-5528)
 + * Add custom secondary index support to CQL3 (CASSANDRA-5484)
 + * Fix repair hanging silently on unexpected error (CASSANDRA-5229)
 + * Fix Ec2Snitch regression introduced by CASSANDRA-5171 (CASSANDRA-5432)
 + * Add nodetool enablebackup/disablebackup (CASSANDRA-5556)
 + * cqlsh: fix DESCRIBE after case insensitive USE (CASSANDRA-5567)
 +Merged from 1.1
   * Add retry mechanism to OTC for non-droppable_verbs (CASSANDRA-5393)
 - * Use allocator information to improve memtable memory usage estimate 
 + * Use allocator information to improve memtable memory usage estimate
 (CASSANDRA-5497)
 + * Fix trying to load deleted row into row cache on startup (CASSANDRA-4463)
   * fsync leveled manifest to avoid corruption (CASSANDRA-5535)
   * Fix Bound intersection computation (CASSANDRA-5551)
 + * sstablescrub now respects max memory size in cassandra.in.sh 
(CASSANDRA-5562)
 +
 +
 +1.2.4
 + * Ensure that PerRowSecondaryIndex updates see the most recent values
 +   (CASSANDRA-5397)
 + * avoid duplicate index entries ind PrecompactedRow and 
 +   ParallelCompactionIterable (CASSANDRA-5395)
 + * remove the index entry on oldColumn when new column is a tombstone 
 +   (CASSANDRA-5395)
 + * Change default stream throughput from 400 to 200 mbps (CASSANDRA-5036)
 + * Gossiper logs DOWN for symmetry with UP (CASSANDRA-5187)
 + * Fix mixing prepared statements between keyspaces (CASSANDRA-5352)
 + * Fix consistency level during bootstrap - strike 3 (CASSANDRA-5354)
 + * Fix transposed arguments in AlreadyExistsException (CASSANDRA-5362)
 + * Improve asynchronous hint delivery (CASSANDRA-5179)
 + * Fix Guava dependency version (12.0 -> 13.0.1) for Maven (CASSANDRA-5364)
 + * 

[03/11] git commit: merge from 1.1

2013-05-21 Thread jbellis
merge from 1.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/40669a33
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/40669a33
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/40669a33

Branch: refs/heads/cassandra-1.2
Commit: 40669a33027d241ceda76e842afb5de0bcada5b6
Parents: b69c1aa ac19c12
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue May 21 13:41:46 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue May 21 13:41:46 2013 -0500

--
 CHANGES.txt|   11 
 conf/cassandra.yaml|6 +
 src/java/org/apache/cassandra/config/Config.java   |2 +
 .../cassandra/config/DatabaseDescriptor.java   |8 --
 .../cassandra/hadoop/ColumnFamilyOutputFormat.java |4 +-
 .../cassandra/hadoop/ColumnFamilyRecordReader.java |2 +-
 .../org/apache/cassandra/hadoop/ConfigHelper.java  |   15 ++-
 .../apache/cassandra/thrift/TBinaryProtocol.java   |   19 ---
 .../org/apache/cassandra/thrift/ThriftServer.java  |2 +-
 9 files changed, 21 insertions(+), 48 deletions(-)
--



[01/11] git commit: Remove buggy thrift max message length option patch by jbellis; reviewed by tjake for CASSANDRA-5529

2013-05-21 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.1 7d2ce5f95 -> 9879fa612
  refs/heads/cassandra-1.2 9d0eec217 -> 950efdef9
  refs/heads/trunk 44f178d1e -> f3b42d2a6


Remove buggy thrift max message length option
patch by jbellis; reviewed by tjake for CASSANDRA-5529


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ac19c121
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ac19c121
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ac19c121

Branch: refs/heads/cassandra-1.2
Commit: ac19c121524c928ff6f3237e12a26e42766ae836
Parents: c5dc029
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue May 21 13:36:26 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue May 21 13:36:26 2013 -0500

--
 CHANGES.txt|2 +
 conf/cassandra.yaml|6 +
 src/java/org/apache/cassandra/config/Config.java   |2 +
 .../cassandra/config/DatabaseDescriptor.java   |   10 
 .../cassandra/hadoop/ColumnFamilyOutputFormat.java |4 +-
 .../cassandra/hadoop/ColumnFamilyRecordReader.java |2 +-
 .../org/apache/cassandra/hadoop/ConfigHelper.java  |   15 ++-
 .../apache/cassandra/thrift/CassandraDaemon.java   |2 +-
 .../apache/cassandra/thrift/TBinaryProtocol.java   |   19 ---
 9 files changed, 12 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ac19c121/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7c89987..501a68f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,10 +1,12 @@
 1.1.12
+ * Remove buggy thrift max message length option (CASSANDRA-5529)
  * Add retry mechanism to OTC for non-droppable_verbs (CASSANDRA-5393)
  * Use allocator information to improve memtable memory usage estimate 
(CASSANDRA-5497)
  * fsync leveled manifest to avoid corruption (CASSANDRA-5535)
  * Fix Bound intersection computation (CASSANDRA-5551)
 
+
 1.1.11
  * Fix trying to load deleted row into row cache on startup (CASSANDRA-4463)
  * Update offline scrub for 1.0 - 1.1 directory structure (CASSANDRA-5195)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ac19c121/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 37f41fb..027479d 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -330,15 +330,11 @@ rpc_server_type: sync
 # rpc_send_buff_size_in_bytes:
 # rpc_recv_buff_size_in_bytes:
 
-# Frame size for thrift (maximum field length).
+# Frame size for thrift (maximum message length).
 # 0 disables TFramedTransport in favor of TSocket. This option
 # is deprecated; we strongly recommend using Framed mode.
 thrift_framed_transport_size_in_mb: 15
 
-# The max length of a thrift message, including all fields and
-# internal thrift overhead.
-thrift_max_message_length_in_mb: 16
-
 # Set to true to have Cassandra create a hard link to each sstable
 # flushed or streamed locally in a backups/ subdirectory of the
 # Keyspace data.  Removing these links is the operator's

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ac19c121/src/java/org/apache/cassandra/config/Config.java
--
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index a08a694..11beea6 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -77,7 +77,9 @@ public class Config
 public Integer rpc_send_buff_size_in_bytes;
 public Integer rpc_recv_buff_size_in_bytes;
 
+@Deprecated
 public Integer thrift_max_message_length_in_mb = 16;
+
 public Integer thrift_framed_transport_size_in_mb = 15;
 public Boolean snapshot_before_compaction = false;
 public Boolean auto_snapshot = true;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ac19c121/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 0c460dc..f55c89a 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -316,11 +316,6 @@ public class DatabaseDescriptor
 if (conf.thrift_framed_transport_size_in_mb <= 0)
 throw new 
ConfigurationException("thrift_framed_transport_size_in_mb must be positive");
 
-if (conf.thrift_framed_transport_size_in_mb > 0 && 

[05/11] git commit: merge from 1.2

2013-05-21 Thread jbellis
merge from 1.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8aa6222b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8aa6222b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8aa6222b

Branch: refs/heads/trunk
Commit: 8aa6222bf16b1087f3b9a245ac3fef424269cac8
Parents: 1d2c122 40669a3
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue May 21 13:45:22 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue May 21 13:45:22 2013 -0500

--
 CHANGES.txt|   11 
 conf/cassandra.yaml|6 +
 src/java/org/apache/cassandra/config/Config.java   |2 +
 .../cassandra/config/DatabaseDescriptor.java   |8 --
 .../cassandra/hadoop/ColumnFamilyOutputFormat.java |2 +-
 .../cassandra/hadoop/ColumnFamilyRecordReader.java |2 +-
 .../org/apache/cassandra/hadoop/ConfigHelper.java  |   17 +
 .../apache/cassandra/thrift/TBinaryProtocol.java   |   19 ---
 .../org/apache/cassandra/thrift/ThriftServer.java  |2 +-
 9 files changed, 18 insertions(+), 51 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8aa6222b/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8aa6222b/conf/cassandra.yaml
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8aa6222b/src/java/org/apache/cassandra/config/Config.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8aa6222b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --cc src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 775e884,a5bfcf2..b33f6fd
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@@ -85,392 -91,422 +85,389 @@@ public class DatabaseDescripto
  private static String localDC;
 private static Comparator<InetAddress> localComparator;
  
 -/**
 - * Inspect the classpath to find storage configuration file
 - */
 -static URL getStorageConfigURL() throws ConfigurationException
 +static
  {
 -String configUrl = System.getProperty("cassandra.config");
 -if (configUrl == null)
 -configUrl = DEFAULT_CONFIGURATION;
 -
 -URL url;
 -try
 -{
 -url = new URL(configUrl);
 -url.openStream().close(); // catches well-formed but bogus URLs
 -}
 -catch (Exception e)
 +// In client mode, we use a default configuration. Note that the 
fields of this class will be
 +// left unconfigured however (the partitioner or localDC will be null 
for instance) so this
 +// should be used with care.
 +if (Config.isClientMode())
  {
 -ClassLoader loader = DatabaseDescriptor.class.getClassLoader();
 -url = loader.getResource(configUrl);
 -if (url == null)
 -throw new ConfigurationException("Cannot locate " + 
configUrl);
 +conf = new Config();
  }
 -
 -return url;
 -}
 -
 -static
 -{
 -if (Config.getLoadYaml())
 -loadYaml();
  else
 -conf = new Config();
 -}
 -static void loadYaml()
 -{
 -try
  {
 -URL url = getStorageConfigURL();
 -logger.info("Loading settings from " + url);
 -InputStream input;
  try
  {
 -input = url.openStream();
 +applyConfig(loadConfig());
  }
 -catch (IOException e)
 +catch (ConfigurationException e)
  {
 -// getStorageConfigURL should have ruled this out
 -throw new AssertionError(e);
 +logger.error("Fatal configuration error", e);
 +System.err.println(e.getMessage() + "\nFatal configuration 
error; unable to start server. See log for stacktrace.");
 +System.exit(1);
  }
 -org.yaml.snakeyaml.constructor.Constructor constructor = new 
org.yaml.snakeyaml.constructor.Constructor(Config.class);
 -TypeDescription seedDesc = new 
TypeDescription(SeedProviderDef.class);
 -seedDesc.putMapPropertyType("parameters", String.class, 
String.class);
 -constructor.addTypeDescription(seedDesc);
 -Yaml yaml = new Yaml(new Loader(constructor));
 -conf = (Config)yaml.load(input);
 -
 - 

[06/11] git commit: r/m local copy of TBinaryProtocol

2013-05-21 Thread jbellis
r/m local copy of TBinaryProtocol


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e66ec49f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e66ec49f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e66ec49f

Branch: refs/heads/trunk
Commit: e66ec49f810c009ba2e6bf98fbaebb76f9de622e
Parents: 8aa6222
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue May 21 13:50:29 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue May 21 13:50:29 2013 -0500

--
 .../cassandra/hadoop/ColumnFamilyOutputFormat.java |1 +
 .../cassandra/hadoop/ColumnFamilyRecordReader.java |3 +-
 .../org/apache/cassandra/hadoop/ConfigHelper.java  |1 +
 .../cassandra/hadoop/pig/CassandraStorage.java |2 +
 .../apache/cassandra/thrift/TBinaryProtocol.java   |   88 ---
 .../org/apache/cassandra/thrift/ThriftServer.java  |1 +
 6 files changed, 7 insertions(+), 89 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e66ec49f/src/java/org/apache/cassandra/hadoop/ColumnFamilyOutputFormat.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyOutputFormat.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilyOutputFormat.java
index 6ed9f80..d727a20 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyOutputFormat.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyOutputFormat.java
@@ -32,6 +32,7 @@ import org.apache.cassandra.thrift.*;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.mapreduce.*;
 import org.apache.thrift.TException;
+import org.apache.thrift.protocol.TBinaryProtocol;
 import org.apache.thrift.transport.TSocket;
 import org.apache.thrift.transport.TTransport;
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e66ec49f/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
index bbf4dca..2b258b2 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
@@ -24,6 +24,8 @@ import java.nio.ByteBuffer;
 import java.util.*;
 
 import com.google.common.collect.*;
+
+import org.apache.thrift.protocol.TBinaryProtocol;
 import org.apache.thrift.transport.TTransport;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -49,7 +51,6 @@ import org.apache.cassandra.thrift.KeySlice;
 import org.apache.cassandra.thrift.KsDef;
 import org.apache.cassandra.thrift.SlicePredicate;
 import org.apache.cassandra.thrift.SuperColumn;
-import org.apache.cassandra.thrift.TBinaryProtocol;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.Pair;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e66ec49f/src/java/org/apache/cassandra/hadoop/ConfigHelper.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/ConfigHelper.java 
b/src/java/org/apache/cassandra/hadoop/ConfigHelper.java
index 71e6634..90f5045 100644
--- a/src/java/org/apache/cassandra/hadoop/ConfigHelper.java
+++ b/src/java/org/apache/cassandra/hadoop/ConfigHelper.java
@@ -40,6 +40,7 @@ import org.apache.thrift.TBase;
 import org.apache.thrift.TDeserializer;
 import org.apache.thrift.TException;
 import org.apache.thrift.TSerializer;
+import org.apache.thrift.protocol.TBinaryProtocol;
 import org.apache.thrift.transport.TSocket;
 import org.apache.thrift.transport.TTransport;
 import org.apache.thrift.transport.TTransportException;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e66ec49f/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java 
b/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
index d4fb577..16b6fdb 100644
--- a/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
+++ b/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
@@ -48,6 +48,8 @@ import org.apache.pig.ResourceSchema.ResourceFieldSchema;
 import org.apache.thrift.TDeserializer;
 import org.apache.thrift.TException;
 import org.apache.thrift.TSerializer;
+import org.apache.thrift.protocol.TBinaryProtocol;
+
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e66ec49f/src/java/org/apache/cassandra/thrift/TBinaryProtocol.java

[08/11] git commit: merge

2013-05-21 Thread jbellis
merge


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/950efdef
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/950efdef
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/950efdef

Branch: refs/heads/cassandra-1.2
Commit: 950efdef9c6406d80268aed2ce7713ef5d396a9a
Parents: 40669a3 9d0eec2
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue May 21 13:53:00 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue May 21 13:53:00 2013 -0500

--
 CHANGES.txt|1 +
 examples/pig/test/test_storage.pig |2 +-
 .../cassandra/hadoop/pig/CassandraStorage.java |   23 ++
 3 files changed, 12 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/950efdef/CHANGES.txt
--
diff --cc CHANGES.txt
index 103f659,24e9163..25290cd
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,8 -1,8 +1,9 @@@
  1.2.6
   * Write row markers when serializing schema (CASSANDRA-5572)
   * Check only SSTables for the requested range when streaming (CASSANDRA-5569)
 -Merged from 1.1
 +Merged from 1.1:
 + * Remove buggy thrift max message length option (CASSANDRA-5529)
+  * Fix NPE in Pig's widerow mode (CASSANDRA-5488)
  
  
  1.2.5



[09/11] git commit: merge

2013-05-21 Thread jbellis
merge


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/950efdef
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/950efdef
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/950efdef

Branch: refs/heads/trunk
Commit: 950efdef9c6406d80268aed2ce7713ef5d396a9a
Parents: 40669a3 9d0eec2
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue May 21 13:53:00 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue May 21 13:53:00 2013 -0500

--
 CHANGES.txt|1 +
 examples/pig/test/test_storage.pig |2 +-
 .../cassandra/hadoop/pig/CassandraStorage.java |   23 ++
 3 files changed, 12 insertions(+), 14 deletions(-)
--





[07/11] git commit: Remove buggy thrift max message length option patch by jbellis; reviewed by tjake for CASSANDRA-5529

2013-05-21 Thread jbellis
Remove buggy thrift max message length option
patch by jbellis; reviewed by tjake for CASSANDRA-5529


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9879fa61
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9879fa61
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9879fa61

Branch: refs/heads/cassandra-1.1
Commit: 9879fa6122d325951d98f8bc601ff64dd04c2c67
Parents: 7d2ce5f
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue May 21 13:36:26 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue May 21 13:50:45 2013 -0500

--
 CHANGES.txt|2 +
 conf/cassandra.yaml|6 +
 src/java/org/apache/cassandra/config/Config.java   |2 +
 .../cassandra/config/DatabaseDescriptor.java   |   10 
 .../cassandra/hadoop/ColumnFamilyOutputFormat.java |4 +-
 .../cassandra/hadoop/ColumnFamilyRecordReader.java |2 +-
 .../org/apache/cassandra/hadoop/ConfigHelper.java  |   15 ++-
 .../apache/cassandra/thrift/CassandraDaemon.java   |2 +-
 .../apache/cassandra/thrift/TBinaryProtocol.java   |   19 ---
 9 files changed, 12 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9879fa61/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 256e69a..69df3de 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 1.1.12
+ * Remove buggy thrift max message length option (CASSANDRA-5529)
  * Add retry mechanism to OTC for non-droppable_verbs (CASSANDRA-5393)
  * Use allocator information to improve memtable memory usage estimate 
(CASSANDRA-5497)
@@ -6,6 +7,7 @@
  * Fix Bound intersection computation (CASSANDRA-5551)
  * Fix NPE in Pig's widerow mode (CASSANDRA-5488)
 
+
 1.1.11
  * Fix trying to load deleted row into row cache on startup (CASSANDRA-4463)
  * Update offline scrub for 1.0 - 1.1 directory structure (CASSANDRA-5195)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9879fa61/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 37f41fb..027479d 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -330,15 +330,11 @@ rpc_server_type: sync
 # rpc_send_buff_size_in_bytes:
 # rpc_recv_buff_size_in_bytes:
 
-# Frame size for thrift (maximum field length).
+# Frame size for thrift (maximum message length).
 # 0 disables TFramedTransport in favor of TSocket. This option
 # is deprecated; we strongly recommend using Framed mode.
 thrift_framed_transport_size_in_mb: 15
 
-# The max length of a thrift message, including all fields and
-# internal thrift overhead.
-thrift_max_message_length_in_mb: 16
-
 # Set to true to have Cassandra create a hard link to each sstable
 # flushed or streamed locally in a backups/ subdirectory of the
 # Keyspace data.  Removing these links is the operator's

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9879fa61/src/java/org/apache/cassandra/config/Config.java
--
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index a08a694..11beea6 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -77,7 +77,9 @@ public class Config
 public Integer rpc_send_buff_size_in_bytes;
 public Integer rpc_recv_buff_size_in_bytes;
 
+@Deprecated
 public Integer thrift_max_message_length_in_mb = 16;
+
 public Integer thrift_framed_transport_size_in_mb = 15;
 public Boolean snapshot_before_compaction = false;
 public Boolean auto_snapshot = true;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9879fa61/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 0c460dc..f55c89a 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -316,11 +316,6 @@ public class DatabaseDescriptor
 if (conf.thrift_framed_transport_size_in_mb <= 0)
 throw new 
ConfigurationException("thrift_framed_transport_size_in_mb must be positive");
 
-if (conf.thrift_framed_transport_size_in_mb > 0 && 
conf.thrift_max_message_length_in_mb < conf.thrift_framed_transport_size_in_mb)
-{
-throw new 
ConfigurationException("thrift_max_message_length_in_mb must be 

[10/11] git commit: merge

2013-05-21 Thread jbellis
merge


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f32c988a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f32c988a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f32c988a

Branch: refs/heads/trunk
Commit: f32c988abff8887a43fcbb21177cb7e9b0a0183e
Parents: e66ec49 44f178d
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue May 21 13:53:58 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue May 21 13:53:58 2013 -0500

--
 CHANGES.txt|1 +
 examples/pig/test/test_storage.pig |2 +-
 .../cassandra/hadoop/pig/CassandraStorage.java |   23 ++
 3 files changed, 12 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f32c988a/CHANGES.txt
--
diff --cc CHANGES.txt
index 96c9247,fe19895..151c9f1
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -53,8 -53,8 +53,9 @@@
  1.2.6
   * Write row markers when serializing schema (CASSANDRA-5572)
   * Check only SSTables for the requested range when streaming (CASSANDRA-5569)
 -Merged from 1.1
 +Merged from 1.1:
 + * Remove buggy thrift max message length option (CASSANDRA-5529)
+  * Fix NPE in Pig's widerow mode (CASSANDRA-5488)
  
  
  1.2.5

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f32c988a/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
--



[jira] [Updated] (CASSANDRA-5529) thrift_max_message_length_in_mb makes long-lived connections error out

2013-05-21 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5529:
--

Fix Version/s: 1.1.12
  Summary: thrift_max_message_length_in_mb makes long-lived connections 
error out  (was: ColumnFamilyRecordReader fails for large datasets)

 thrift_max_message_length_in_mb makes long-lived connections error out
 --

 Key: CASSANDRA-5529
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5529
 Project: Cassandra
  Issue Type: Bug
  Components: API, Hadoop
Affects Versions: 0.6
Reporter: Rob Timpe
Assignee: Jonathan Ellis
 Fix For: 1.1.12, 1.2.6

 Attachments: 5529-1.1.txt, 5529.txt


 When running mapreduce jobs that read directly from cassandra, the job will 
 sometimes fail with an exception like this:
 java.lang.RuntimeException: com.rockmelt.org.apache.thrift.TException: 
 Message length exceeded: 40
   at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:400)
   at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:406)
   at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:329)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordReader.getProgress(ColumnFamilyRecordReader.java:109)
   at 
 org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.getProgress(MapTask.java:522)
   at 
 org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:547)
   at 
 org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:771)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:375)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
   at org.apache.hadoop.mapred.Child.main(Child.java:249)
 Caused by: com.rockmelt.org.apache.thrift.TException: Message length 
 exceeded: 40
   at 
 com.rockmelt.org.apache.thrift.protocol.TBinaryProtocol.checkReadLength(TBinaryProtocol.java:393)
   at 
 com.rockmelt.org.apache.thrift.protocol.TBinaryProtocol.readBinary(TBinaryProtocol.java:363)
   at org.apache.cassandra.thrift.Column.read(Column.java:528)
   at 
 org.apache.cassandra.thrift.ColumnOrSuperColumn.read(ColumnOrSuperColumn.java:507)
   at org.apache.cassandra.thrift.KeySlice.read(KeySlice.java:408)
   at 
 org.apache.cassandra.thrift.Cassandra$get_range_slices_result.read(Cassandra.java:12422)
   at 
 com.rockmelt.org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:696)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:680)
   at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:362)
   ... 16 more
 In ColumnFamilyRecordReader#initialize, a TBinaryProtocol is created as 
 follows:
 TTransport transport = 
 ConfigHelper.getInputTransportFactory(conf).openTransport(socket, conf);
 TBinaryProtocol binaryProtocol = new TBinaryProtocol(transport, 
 ConfigHelper.getThriftMaxMessageLength(conf));
 client = new Cassandra.Client(binaryProtocol);
 But each time a call to cassandra is made, checkReadLength(int length) is 
 called in TBinaryProtocol, which includes this:
  readLength_ -= length;
  if (readLength_ < 0) {
 throw new TException("Message length exceeded: " + length);
  }
The result is that readLength_ is decreased each time, until it goes negative 
and an exception is thrown.  This will only happen if you're reading a lot of 
 data and your split size is large (which is maybe why people haven't noticed 
 it earlier).  This happens regardless of whether you use wide row support.
 I'm not sure what the right fix is.  It seems like you could either reset the 
 length of TBinaryProtocol after each call or just use a new TBinaryProtocol 
 each time.
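The running-budget behavior described above can be sketched in miniature (the class and names below, like `ReadBudget`, are illustrative, not Thrift's actual code): the protocol keeps one mutable counter for the whole connection, so every call shrinks it until it goes negative. The `reset()` method corresponds to the first fix the reporter suggests (resetting the length after each call).

```java
// Minimal sketch of TBinaryProtocol's read-length accounting, under the
// assumption described in the report: readLength_ is a single running budget
// shared across ALL reads on the connection, not a per-message limit.
class ReadBudget
{
    private final int limit;
    private int remaining;          // analogous to TBinaryProtocol.readLength_

    ReadBudget(int limit)
    {
        this.limit = limit;
        this.remaining = limit;
    }

    // analogous to checkReadLength(int): subtracts from the shared budget,
    // so enough successful reads eventually push it negative
    void check(int length)
    {
        remaining -= length;
        if (remaining < 0)
            throw new IllegalStateException("Message length exceeded: " + length);
    }

    // the proposed workaround: restore the budget before each RPC instead of
    // letting consumption accumulate across calls
    void reset()
    {
        remaining = limit;
    }
}
```

With a budget of 10, two reads of 6 bytes fail without a reset between them but succeed with one, which is exactly the long-scan failure mode described above.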

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Commented] (CASSANDRA-5314) Replaying old batches can 'undo' deletes

2013-05-21 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13663255#comment-13663255
 ] 

Jonathan Ellis commented on CASSANDRA-5314:
---

+1

 Replaying old batches can 'undo' deletes
 

 Key: CASSANDRA-5314
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5314
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 1.2.6


 Batchlog manager does not subtract the time spent in the batchlog from hints' 
 ttls and this may cause undoing deletes. The attached patch fixes it.



[2/3] git commit: fix overflow possibility patch by Mikhail Mazursky; reviewed by jbellis for CASSANDRA-5584

2013-05-21 Thread jbellis
fix overflow possibility
patch by Mikhail Mazursky; reviewed by jbellis for CASSANDRA-5584


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dac69926
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dac69926
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dac69926

Branch: refs/heads/trunk
Commit: dac69926613f96fdbacd49ada869bded21d0e3ab
Parents: 950efde
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue May 21 14:18:49 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue May 21 14:18:49 2013 -0500

--
 .../commitlog/BatchCommitLogExecutorService.java   |5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dac69926/src/java/org/apache/cassandra/db/commitlog/BatchCommitLogExecutorService.java
--
diff --git 
a/src/java/org/apache/cassandra/db/commitlog/BatchCommitLogExecutorService.java 
b/src/java/org/apache/cassandra/db/commitlog/BatchCommitLogExecutorService.java
index 4434532..39c33b2 100644
--- 
a/src/java/org/apache/cassandra/db/commitlog/BatchCommitLogExecutorService.java
+++ 
b/src/java/org/apache/cassandra/db/commitlog/BatchCommitLogExecutorService.java
@@ -76,7 +76,8 @@ class BatchCommitLogExecutorService extends 
AbstractCommitLogExecutorService
 //  so we have to break it into firstTask / extra tasks)
 incompleteTasks.clear();
 taskValues.clear();
-                long end = System.nanoTime() + (long)(100 * DatabaseDescriptor.getCommitLogSyncBatchWindow());
+                long start = System.nanoTime();
+                long window = (long)(100 * DatabaseDescriptor.getCommitLogSyncBatchWindow());
 
 // it doesn't seem worth bothering future-izing the exception
 // since if a commitlog op throws, we're probably screwed anyway
@@ -84,7 +85,7 @@ class BatchCommitLogExecutorService extends 
AbstractCommitLogExecutorService
 taskValues.add(firstTask.getRawCallable().call());
                while (!queue.isEmpty()
                       && queue.peek().getRawCallable() instanceof CommitLog.LogRecordAdder
-                      && System.nanoTime() < end)
+                      && System.nanoTime() - start < window)
 {
 CheaterFutureTask task = queue.remove();
 incompleteTasks.add(task);

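The patch above applies the standard `System.nanoTime()` idiom: `nanoTime()` readings may be arbitrarily large longs, so a precomputed deadline `start + window` can overflow, while comparing the elapsed difference `now - start` is well-defined under two's-complement wraparound. A minimal sketch (the `Deadline` class is illustrative, not part of the patch):

```java
// Why CASSANDRA-5584 rewrites "now < end" as "now - start < window":
// the first form breaks when start + window overflows Long.MAX_VALUE.
final class Deadline
{
    // BROKEN form: end = start + window can overflow to a large negative
    // value, so the comparison reports expiry even when no time has passed.
    static boolean brokenExpired(long now, long start, long window)
    {
        long end = start + window;      // may overflow
        return now >= end;
    }

    // SAFE form (what the patch does): subtracting two nanoTime readings
    // yields the true elapsed interval even across wraparound.
    static boolean safeExpired(long now, long start, long window)
    {
        return now - start >= window;
    }
}
```

With `start` two nanoseconds below `Long.MAX_VALUE` and a 10 ns window, the broken form reports the window as already expired while the safe form correctly does not.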


[1/3] git commit: fix overflow possibility patch by Mikhail Mazursky; reviewed by jbellis for CASSANDRA-5584

2013-05-21 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.2 950efdef9 -> dac699266
  refs/heads/trunk f3b42d2a6 -> c609f27cd


fix overflow possibility
patch by Mikhail Mazursky; reviewed by jbellis for CASSANDRA-5584


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dac69926
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dac69926
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dac69926

Branch: refs/heads/cassandra-1.2
Commit: dac69926613f96fdbacd49ada869bded21d0e3ab
Parents: 950efde
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue May 21 14:18:49 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue May 21 14:18:49 2013 -0500

--
 .../commitlog/BatchCommitLogExecutorService.java   |5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dac69926/src/java/org/apache/cassandra/db/commitlog/BatchCommitLogExecutorService.java
--
diff --git 
a/src/java/org/apache/cassandra/db/commitlog/BatchCommitLogExecutorService.java 
b/src/java/org/apache/cassandra/db/commitlog/BatchCommitLogExecutorService.java
index 4434532..39c33b2 100644
--- 
a/src/java/org/apache/cassandra/db/commitlog/BatchCommitLogExecutorService.java
+++ 
b/src/java/org/apache/cassandra/db/commitlog/BatchCommitLogExecutorService.java
@@ -76,7 +76,8 @@ class BatchCommitLogExecutorService extends 
AbstractCommitLogExecutorService
 //  so we have to break it into firstTask / extra tasks)
 incompleteTasks.clear();
 taskValues.clear();
-                long end = System.nanoTime() + (long)(100 * DatabaseDescriptor.getCommitLogSyncBatchWindow());
+                long start = System.nanoTime();
+                long window = (long)(100 * DatabaseDescriptor.getCommitLogSyncBatchWindow());
 
 // it doesn't seem worth bothering future-izing the exception
 // since if a commitlog op throws, we're probably screwed anyway
@@ -84,7 +85,7 @@ class BatchCommitLogExecutorService extends 
AbstractCommitLogExecutorService
 taskValues.add(firstTask.getRawCallable().call());
                while (!queue.isEmpty()
                       && queue.peek().getRawCallable() instanceof CommitLog.LogRecordAdder
-                      && System.nanoTime() < end)
+                      && System.nanoTime() - start < window)
 {
 CheaterFutureTask task = queue.remove();
 incompleteTasks.add(task);



[3/3] git commit: Merge branch 'cassandra-1.2' into trunk

2013-05-21 Thread jbellis
Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c609f27c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c609f27c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c609f27c

Branch: refs/heads/trunk
Commit: c609f27cd33eb78d29255424954da7f32b204230
Parents: f3b42d2 dac6992
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue May 21 14:18:54 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue May 21 14:18:54 2013 -0500

--
 .../commitlog/BatchCommitLogExecutorService.java   |5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c609f27c/src/java/org/apache/cassandra/db/commitlog/BatchCommitLogExecutorService.java
--



[jira] [Commented] (CASSANDRA-4905) Repair should exclude gcable tombstones from merkle-tree computation

2013-05-21 Thread Robert Coli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13663282#comment-13663282
 ] 

Robert Coli commented on CASSANDRA-4905:


Have a 1.1 era cluster with TTL where repair turns ~500gb of actual data into 
1.5TB. Would love this merged into 1.1. :)

 Repair should exclude gcable tombstones from merkle-tree computation
 

 Key: CASSANDRA-4905
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4905
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Christian Spriegel
Assignee: Sylvain Lebresne
 Fix For: 1.2.0 beta 3

 Attachments: 4905.txt


 Currently gcable tombstones get repaired if some replicas compacted already, 
 but some are not compacted.
 This could be avoided by ignoring all gcable tombstones during merkle tree 
 calculation.
 This was discussed with Sylvain on the mailing list:
 http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/repair-compaction-and-tombstone-rows-td7583481.html



[jira] [Commented] (CASSANDRA-4905) Repair should exclude gcable tombstones from merkle-tree computation

2013-05-21 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13663286#comment-13663286
 ] 

Jonathan Ellis commented on CASSANDRA-4905:
---

1.1 is over a year old; we're really shooting for stability over new 
functionality there now.

The good news is, by now 1.2.x should be about as stable as 
1.1-with-everything-people-want-backported would be.

 Repair should exclude gcable tombstones from merkle-tree computation
 

 Key: CASSANDRA-4905
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4905
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Christian Spriegel
Assignee: Sylvain Lebresne
 Fix For: 1.2.0 beta 3

 Attachments: 4905.txt


 Currently gcable tombstones get repaired if some replicas compacted already, 
 but some are not compacted.
 This could be avoided by ignoring all gcable tombstones during merkle tree 
 calculation.
 This was discussed with Sylvain on the mailing list:
 http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/repair-compaction-and-tombstone-rows-td7583481.html



[jira] [Commented] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's

2013-05-21 Thread Robert Coli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13663297#comment-13663297
 ] 

Robert Coli commented on CASSANDRA-5424:


{quote}
get rid of token selection on bootstrap and force people to either use vnodes 
or specify token manually
{quote}

This has seemed operationally sane to me since approximately 0.6 series. We 
gain almost nothing (noobs will really be discouraged by having to set a token 
manually?) and expose ourselves to unnecessary complexity and edge cases like 
this. +1


 nodetool repair -pr on all nodes won't repair the full range when a Keyspace 
 isn't in all DC's
 --

 Key: CASSANDRA-5424
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5424
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.7
Reporter: Jeremiah Jordan
Assignee: Yuki Morishita
Priority: Critical
 Fix For: 1.2.5

 Attachments: 5424-1.1.txt, 5424-v2-1.2.txt, 5424-v3-1.2.txt


 nodetool repair -pr on all nodes won't repair the full range when a Keyspace 
 isn't in all DC's
 Commands follow, but the TL;DR of it, range 
 (127605887595351923798765477786913079296,0] doesn't get repaired between .38 
 node and .236 node until I run a repair, no -pr, on .38
 It seems like primary range calculation doesn't take schema into account, but 
 deciding who to ask for merkle trees from does.
 {noformat}
 Address DC  RackStatus State   LoadOwns   
  Token   
   
  127605887595351923798765477786913079296 
 10.72.111.225   Cassandra   rack1   Up Normal  455.87 KB   25.00% 
  0   
 10.2.29.38  Analytics   rack1   Up Normal  40.74 MB25.00% 
  42535295865117307932921825928971026432  
 10.46.113.236   Analytics   rack1   Up Normal  20.65 MB50.00% 
  127605887595351923798765477786913079296 
 create keyspace Keyspace1
   with placement_strategy = 'NetworkTopologyStrategy'
   and strategy_options = {Analytics : 2}
   and durable_writes = true;
 ---
 # nodetool -h 10.2.29.38 repair -pr Keyspace1 Standard1
 [2013-04-03 15:46:58,000] Starting repair command #1, repairing 1 ranges for 
 keyspace Keyspace1
 [2013-04-03 15:47:00,881] Repair session b79b4850-9c75-11e2--8b5bf6ebea9e 
 for range (0,42535295865117307932921825928971026432] finished
 [2013-04-03 15:47:00,881] Repair command #1 finished
 root@ip-10-2-29-38:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
 /var/log/cassandra/system.log
  INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,009 AntiEntropyService.java 
 (line 676) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] new session: will 
 sync a1/10.2.29.38, /10.46.113.236 on range 
 (0,42535295865117307932921825928971026432] for Keyspace1.[Standard1]
  INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,015 AntiEntropyService.java 
 (line 881) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] requesting merkle 
 trees for Standard1 (to [/10.46.113.236, a1/10.2.29.38])
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,202 AntiEntropyService.java 
 (line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle 
 tree for Standard1 from /10.46.113.236
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,697 AntiEntropyService.java 
 (line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle 
 tree for Standard1 from a1/10.2.29.38
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,879 AntiEntropyService.java 
 (line 1015) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Endpoints 
 /10.46.113.236 and a1/10.2.29.38 are consistent for Standard1
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,880 AntiEntropyService.java 
 (line 788) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Standard1 is fully 
 synced
  INFO [AntiEntropySessions:1] 2013-04-03 15:47:00,880 AntiEntropyService.java 
 (line 722) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] session completed 
 successfully
 root@ip-10-46-113-236:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
 /var/log/cassandra/system.log
  INFO [AntiEntropyStage:1] 2013-04-03 15:46:59,944 AntiEntropyService.java 
 (line 244) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Sending completed 
 merkle tree to /10.2.29.38 for (Keyspace1,Standard1)
 root@ip-10-72-111-225:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
 /var/log/cassandra/system.log
 root@ip-10-72-111-225:/home/ubuntu# 
 ---
 # nodetool -h 10.46.113.236  repair -pr Keyspace1 Standard1
 [2013-04-03 15:48:00,274] Starting repair command 

[jira] [Commented] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's

2013-05-21 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13663298#comment-13663298
 ] 

Jonathan Ellis commented on CASSANDRA-5424:
---

Done in CASSANDRA-5518.

 nodetool repair -pr on all nodes won't repair the full range when a Keyspace 
 isn't in all DC's
 --

 Key: CASSANDRA-5424
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5424
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.7
Reporter: Jeremiah Jordan
Assignee: Yuki Morishita
Priority: Critical
 Fix For: 1.2.5

 Attachments: 5424-1.1.txt, 5424-v2-1.2.txt, 5424-v3-1.2.txt


 nodetool repair -pr on all nodes won't repair the full range when a Keyspace 
 isn't in all DC's
 Commands follow, but the TL;DR of it, range 
 (127605887595351923798765477786913079296,0] doesn't get repaired between .38 
 node and .236 node until I run a repair, no -pr, on .38
 It seems like primary range calculation doesn't take schema into account, but 
 deciding who to ask for merkle trees from does.
 {noformat}
 Address DC  RackStatus State   LoadOwns   
  Token   
   
  127605887595351923798765477786913079296 
 10.72.111.225   Cassandra   rack1   Up Normal  455.87 KB   25.00% 
  0   
 10.2.29.38  Analytics   rack1   Up Normal  40.74 MB25.00% 
  42535295865117307932921825928971026432  
 10.46.113.236   Analytics   rack1   Up Normal  20.65 MB50.00% 
  127605887595351923798765477786913079296 
 create keyspace Keyspace1
   with placement_strategy = 'NetworkTopologyStrategy'
   and strategy_options = {Analytics : 2}
   and durable_writes = true;
 ---
 # nodetool -h 10.2.29.38 repair -pr Keyspace1 Standard1
 [2013-04-03 15:46:58,000] Starting repair command #1, repairing 1 ranges for 
 keyspace Keyspace1
 [2013-04-03 15:47:00,881] Repair session b79b4850-9c75-11e2--8b5bf6ebea9e 
 for range (0,42535295865117307932921825928971026432] finished
 [2013-04-03 15:47:00,881] Repair command #1 finished
 root@ip-10-2-29-38:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
 /var/log/cassandra/system.log
  INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,009 AntiEntropyService.java 
 (line 676) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] new session: will 
 sync a1/10.2.29.38, /10.46.113.236 on range 
 (0,42535295865117307932921825928971026432] for Keyspace1.[Standard1]
  INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,015 AntiEntropyService.java 
 (line 881) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] requesting merkle 
 trees for Standard1 (to [/10.46.113.236, a1/10.2.29.38])
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,202 AntiEntropyService.java 
 (line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle 
 tree for Standard1 from /10.46.113.236
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,697 AntiEntropyService.java 
 (line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle 
 tree for Standard1 from a1/10.2.29.38
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,879 AntiEntropyService.java 
 (line 1015) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Endpoints 
 /10.46.113.236 and a1/10.2.29.38 are consistent for Standard1
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,880 AntiEntropyService.java 
 (line 788) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Standard1 is fully 
 synced
  INFO [AntiEntropySessions:1] 2013-04-03 15:47:00,880 AntiEntropyService.java 
 (line 722) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] session completed 
 successfully
 root@ip-10-46-113-236:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
 /var/log/cassandra/system.log
  INFO [AntiEntropyStage:1] 2013-04-03 15:46:59,944 AntiEntropyService.java 
 (line 244) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Sending completed 
 merkle tree to /10.2.29.38 for (Keyspace1,Standard1)
 root@ip-10-72-111-225:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
 /var/log/cassandra/system.log
 root@ip-10-72-111-225:/home/ubuntu# 
 ---
 # nodetool -h 10.46.113.236  repair -pr Keyspace1 Standard1
 [2013-04-03 15:48:00,274] Starting repair command #1, repairing 1 ranges for 
 keyspace Keyspace1
 [2013-04-03 15:48:02,032] Repair session dcb91540-9c75-11e2--a839ee2ccbef 
 for range 
 (42535295865117307932921825928971026432,127605887595351923798765477786913079296]
  finished
 [2013-04-03 15:48:02,033] Repair command #1 finished
 root@ip-10-46-113-236:/home/ubuntu# grep 

[jira] [Commented] (CASSANDRA-5398) Remove localTimestamp from merkle-tree calculation (for tombstones)

2013-05-21 Thread Christian Spriegel (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13663303#comment-13663303
 ] 

Christian Spriegel commented on CASSANDRA-5398:
---

[~jbellis]: Would this ticket be a candidate for 1.2? It seems repair is 
getting a lot of attention lately, so there might be interest in reducing 
overrepair whereever possible.

 Remove localTimestamp from merkle-tree calculation (for tombstones)
 ---

 Key: CASSANDRA-5398
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5398
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Christian Spriegel
Priority: Trivial
 Attachments: V1.patch


 DeletedColumn and RangeTombstone use the local-timestamp to update the digest 
 during repair.
 Even though it's only a second-precision timestamp, I think it still causes 
 some differences in the merkle tree, therefore causing overrepair.
 I attached a patch on trunk that adds a modified updateDigest() to 
 DeletedColumn, which does not use the value field for its calculation.
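The over-repair mechanism described in this ticket can be sketched as follows (class and field names are hypothetical, not Cassandra's actual DeletedColumn code): including the node-local deletion time in the repair digest makes otherwise identical tombstones hash differently across replicas, forcing needless streaming, while omitting it restores agreement.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Illustrative sketch: digest a tombstone with or without the node-local
// deletion time, to show why including it causes merkle-tree mismatches.
final class TombstoneDigest
{
    static byte[] digest(String name, long markedForDeleteAt,
                         int localDeletionTime, boolean includeLocalTime) throws Exception
    {
        MessageDigest md = MessageDigest.getInstance("MD5");
        md.update(name.getBytes(StandardCharsets.UTF_8));
        // markedForDeleteAt is the client-supplied timestamp: identical on
        // every replica that received the delete
        md.update(ByteBuffer.allocate(8).putLong(0, markedForDeleteAt));
        if (includeLocalTime)
            // localDeletionTime records when *this* node applied the delete;
            // it can differ by a second or more between replicas, so digests
            // diverge even though the tombstones are logically the same
            md.update(ByteBuffer.allocate(4).putInt(0, localDeletionTime));
        return md.digest();
    }
}
```

Two replicas holding the same tombstone but different local deletion times produce unequal digests when the local time is included, and equal digests when it is excluded, which is the patch's intent.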



[jira] [Created] (CASSANDRA-5585) Drop CQL2/CQL3-beta support from cqlsh

2013-05-21 Thread Aleksey Yeschenko (JIRA)
Aleksey Yeschenko created CASSANDRA-5585:


 Summary: Drop CQL2/CQL3-beta support from cqlsh
 Key: CASSANDRA-5585
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5585
 Project: Cassandra
  Issue Type: Task
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 2.0


Drop CQL2/CQL3-beta support from cqlsh in 2.0. (If somebody really needs that 
for some reason in 2.0, they'd still be able to use cqlsh from 1.2).



[jira] [Updated] (CASSANDRA-5585) Drop CQL2/CQL3-beta support from cqlsh

2013-05-21 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-5585:
-

Component/s: Tools
 Labels: cql cql3 cqlsh  (was: )

 Drop CQL2/CQL3-beta support from cqlsh
 --

 Key: CASSANDRA-5585
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5585
 Project: Cassandra
  Issue Type: Task
  Components: Tools
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
Priority: Minor
  Labels: cql, cql3, cqlsh
 Fix For: 2.0


 Drop CQL2/CQL3-beta support from cqlsh in 2.0. (If somebody really needs that 
 for some reason in 2.0, they'd still be able to use cqlsh from 1.2).



[jira] [Updated] (CASSANDRA-5585) Drop CQL2/CQL3-beta support from cqlsh

2013-05-21 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-5585:
-

Attachment: 5585.txt

 Drop CQL2/CQL3-beta support from cqlsh
 --

 Key: CASSANDRA-5585
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5585
 Project: Cassandra
  Issue Type: Task
  Components: Tools
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
Priority: Minor
  Labels: cql, cql3, cqlsh
 Fix For: 2.0

 Attachments: 5585.txt


 Drop CQL2/CQL3-beta support from cqlsh in 2.0. (If somebody really needs that 
 for some reason in 2.0, they'd still be able to use cqlsh from 1.2).



[jira] [Commented] (CASSANDRA-5585) Drop CQL2/CQL3-beta support from cqlsh

2013-05-21 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13663405#comment-13663405
 ] 

Brandon Williams commented on CASSANDRA-5585:
-

+1

 Drop CQL2/CQL3-beta support from cqlsh
 --

 Key: CASSANDRA-5585
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5585
 Project: Cassandra
  Issue Type: Task
  Components: Tools
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
Priority: Minor
  Labels: cql, cql3, cqlsh
 Fix For: 2.0

 Attachments: 5585.txt


 Drop CQL2/CQL3-beta support from cqlsh in 2.0. (If somebody really needs that 
 for some reason in 2.0, they'd still be able to use cqlsh from 1.2).



[jira] [Created] (CASSANDRA-5586) Remove cli usage from dtests

2013-05-21 Thread Brandon Williams (JIRA)
Brandon Williams created CASSANDRA-5586:
---

 Summary: Remove cli usage from dtests
 Key: CASSANDRA-5586
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5586
 Project: Cassandra
  Issue Type: Improvement
Reporter: Brandon Williams
Assignee: Ryan McGuire
Priority: Minor


The dtests in some situations fork the cli.  With the cli essentially stagnant 
now, there's no need to do this when the same thing can be accomplished with a 
thrift or cql call. (ccm's convenience api for invoking the cli could probably 
also be removed at this point)



git commit: Improve batchlog replay behavior and hint ttl handling

2013-05-21 Thread aleksey
Updated Branches:
  refs/heads/cassandra-1.2 dac699266 -> 3a51ccf2d


Improve batchlog replay behavior and hint ttl handling

patch by Aleksey Yeschenko; reviewed by Jonathan Ellis for
CASSANDRA-5314


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3a51ccf2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3a51ccf2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3a51ccf2

Branch: refs/heads/cassandra-1.2
Commit: 3a51ccf2d12a5fcfaa1378eff0209526c9a33278
Parents: dac6992
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed May 22 00:25:27 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed May 22 00:25:27 2013 +0300

--
 CHANGES.txt|1 +
 .../apache/cassandra/cql3/UntypedResultSet.java|5 +
 .../org/apache/cassandra/db/BatchlogManager.java   |  173 ---
 src/java/org/apache/cassandra/db/RowMutation.java  |   38 ++-
 .../org/apache/cassandra/service/StorageProxy.java |   26 ++-
 .../org/apache/cassandra/db/HintedHandOffTest.java |2 +-
 6 files changed, 145 insertions(+), 100 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a51ccf2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 25290cd..3902dec 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,6 +1,7 @@
 1.2.6
  * Write row markers when serializing schema (CASSANDRA-5572)
  * Check only SSTables for the requested range when streaming (CASSANDRA-5569)
+ * Improve batchlog replay behavior and hint ttl handling (CASSANDRA-5314)
 Merged from 1.1:
  * Remove buggy thrift max message length option (CASSANDRA-5529)
  * Fix NPE in Pig's widerow mode (CASSANDRA-5488)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a51ccf2/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
--
diff --git a/src/java/org/apache/cassandra/cql3/UntypedResultSet.java 
b/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
index b6fcb55..9bee563 100644
--- a/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
+++ b/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
@@ -131,6 +131,11 @@ public class UntypedResultSet implements Iterable<UntypedResultSet.Row>
 return DateType.instance.compose(data.get(column));
 }
 
+public long getLong(String column)
+{
+return LongType.instance.compose(data.get(column));
+}
+
    public <T> Set<T> getSet(String column, AbstractType<T> type)
 {
 ByteBuffer raw = data.get(column);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a51ccf2/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java 
b/src/java/org/apache/cassandra/db/BatchlogManager.java
index 9da9b2d..c56e106 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -22,6 +22,7 @@ import java.lang.management.ManagementFactory;
 import java.net.InetAddress;
 import java.nio.ByteBuffer;
 import java.util.*;
+import java.util.concurrent.CopyOnWriteArraySet;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
@@ -29,31 +30,29 @@ import java.util.concurrent.atomic.AtomicLong;
 import javax.management.MBeanServer;
 import javax.management.ObjectName;
 
-import com.google.common.collect.ImmutableSortedSet;
 import com.google.common.collect.Iterables;
+import com.google.common.collect.Lists;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.config.DatabaseDescriptor;
+import org.apache.cassandra.cql3.QueryProcessor;
+import org.apache.cassandra.cql3.UntypedResultSet;
 import org.apache.cassandra.db.compaction.CompactionManager;
-import org.apache.cassandra.db.filter.IDiskAtomFilter;
-import org.apache.cassandra.db.filter.NamesQueryFilter;
-import org.apache.cassandra.db.filter.QueryFilter;
-import org.apache.cassandra.db.filter.QueryPath;
 import org.apache.cassandra.db.marshal.LongType;
 import org.apache.cassandra.db.marshal.UTF8Type;
 import org.apache.cassandra.db.marshal.UUIDType;
-import org.apache.cassandra.dht.AbstractBounds;
-import org.apache.cassandra.dht.IPartitioner;
-import org.apache.cassandra.dht.Range;
 import org.apache.cassandra.dht.Token;
+import org.apache.cassandra.exceptions.WriteTimeoutException;
+import org.apache.cassandra.gms.FailureDetector;
 import org.apache.cassandra.io.sstable.Descriptor;
 import 

[jira] [Updated] (CASSANDRA-5582) Replace CustomHsHaServer with better optimized solution based on LMAX Disruptor

2013-05-21 Thread Pavel Yaskevich (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Yaskevich updated CASSANDRA-5582:
---

Attachment: (was: CASSANDRA-5582.patch)

 Replace CustomHsHaServer with better optimized solution based on LMAX 
 Disruptor
 ---

 Key: CASSANDRA-5582
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5582
 Project: Cassandra
  Issue Type: Improvement
  Components: API, Core
Reporter: Pavel Yaskevich
Assignee: Pavel Yaskevich
 Fix For: 2.0

 Attachments: CASSANDRA-5530-invoker-fix.patch, 
 disruptor-3.0.0.beta3.jar, Pavel's Patch.rtf


 I have been working on https://github.com/xedin/disruptor_thrift_server and 
 consider it as stable and performant enough for integration with Cassandra. 
 Proposed replacement can work in both on/off Heap modes (depending if JNA is 
 available) and doesn't blindly reallocate things, which allows us to resolve 
 CASSANDRA-4265 as Won't Fix.



[jira] [Updated] (CASSANDRA-5582) Replace CustomHsHaServer with better optimized solution based on LMAX Disruptor

2013-05-21 Thread Pavel Yaskevich (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Yaskevich updated CASSANDRA-5582:
---

Attachment: (was: disruptor-thrift-0.1-SNAPSHOT.jar)

 Replace CustomHsHaServer with better optimized solution based on LMAX 
 Disruptor
 ---

 Key: CASSANDRA-5582
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5582
 Project: Cassandra
  Issue Type: Improvement
  Components: API, Core
Reporter: Pavel Yaskevich
Assignee: Pavel Yaskevich
 Fix For: 2.0

 Attachments: CASSANDRA-5530-invoker-fix.patch, 
 disruptor-3.0.0.beta3.jar, Pavel's Patch.rtf


 I have been working on https://github.com/xedin/disruptor_thrift_server and 
 consider it stable and performant enough for integration with Cassandra. 
 The proposed replacement can work in both on- and off-heap modes (depending on 
 whether JNA is available) and doesn't blindly reallocate things, which allows 
 us to resolve CASSANDRA-4265 as "Won't Fix".



[jira] [Updated] (CASSANDRA-5582) Replace CustomHsHaServer with better optimized solution based on LMAX Disruptor

2013-05-21 Thread Pavel Yaskevich (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Yaskevich updated CASSANDRA-5582:
---

Attachment: CASSANDRA-5582.patch
disruptor-thrift-0.1-SNAPSHOT.jar

changed package name from tinkerpop to thinkaurelius.

 Replace CustomHsHaServer with better optimized solution based on LMAX 
 Disruptor
 ---

 Key: CASSANDRA-5582
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5582
 Project: Cassandra
  Issue Type: Improvement
  Components: API, Core
Reporter: Pavel Yaskevich
Assignee: Pavel Yaskevich
 Fix For: 2.0

 Attachments: CASSANDRA-5530-invoker-fix.patch, CASSANDRA-5582.patch, 
 disruptor-3.0.0.beta3.jar, disruptor-thrift-0.1-SNAPSHOT.jar, Pavel's 
 Patch.rtf


 I have been working on https://github.com/xedin/disruptor_thrift_server and 
 consider it stable and performant enough for integration with Cassandra. 
 The proposed replacement can work in both on- and off-heap modes (depending on 
 whether JNA is available) and doesn't blindly reallocate things, which allows 
 us to resolve CASSANDRA-4265 as "Won't Fix".



[1/2] git commit: Improve batchlog replay behavior and hint ttl handling

2013-05-21 Thread aleksey
Updated Branches:
  refs/heads/trunk c609f27cd -> 2ee90305c


Improve batchlog replay behavior and hint ttl handling

patch by Aleksey Yeschenko; reviewed by Jonathan Ellis for
CASSANDRA-5314


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3a51ccf2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3a51ccf2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3a51ccf2

Branch: refs/heads/trunk
Commit: 3a51ccf2d12a5fcfaa1378eff0209526c9a33278
Parents: dac6992
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed May 22 00:25:27 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed May 22 00:25:27 2013 +0300

--
 CHANGES.txt|1 +
 .../apache/cassandra/cql3/UntypedResultSet.java|5 +
 .../org/apache/cassandra/db/BatchlogManager.java   |  173 ---
 src/java/org/apache/cassandra/db/RowMutation.java  |   38 ++-
 .../org/apache/cassandra/service/StorageProxy.java |   26 ++-
 .../org/apache/cassandra/db/HintedHandOffTest.java |2 +-
 6 files changed, 145 insertions(+), 100 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a51ccf2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 25290cd..3902dec 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,6 +1,7 @@
 1.2.6
  * Write row markers when serializing schema (CASSANDRA-5572)
  * Check only SSTables for the requested range when streaming (CASSANDRA-5569)
+ * Improve batchlog replay behavior and hint ttl handling (CASSANDRA-5314)
 Merged from 1.1:
  * Remove buggy thrift max message length option (CASSANDRA-5529)
  * Fix NPE in Pig's widerow mode (CASSANDRA-5488)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a51ccf2/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
--
diff --git a/src/java/org/apache/cassandra/cql3/UntypedResultSet.java 
b/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
index b6fcb55..9bee563 100644
--- a/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
+++ b/src/java/org/apache/cassandra/cql3/UntypedResultSet.java
@@ -131,6 +131,11 @@ public class UntypedResultSet implements Iterable<UntypedResultSet.Row>
 return DateType.instance.compose(data.get(column));
 }
 
+public long getLong(String column)
+{
+return LongType.instance.compose(data.get(column));
+}
+
 public <T> Set<T> getSet(String column, AbstractType<T> type)
 {
 ByteBuffer raw = data.get(column);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a51ccf2/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java 
b/src/java/org/apache/cassandra/db/BatchlogManager.java
index 9da9b2d..c56e106 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -22,6 +22,7 @@ import java.lang.management.ManagementFactory;
 import java.net.InetAddress;
 import java.nio.ByteBuffer;
 import java.util.*;
+import java.util.concurrent.CopyOnWriteArraySet;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
@@ -29,31 +30,29 @@ import java.util.concurrent.atomic.AtomicLong;
 import javax.management.MBeanServer;
 import javax.management.ObjectName;
 
-import com.google.common.collect.ImmutableSortedSet;
 import com.google.common.collect.Iterables;
+import com.google.common.collect.Lists;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.config.DatabaseDescriptor;
+import org.apache.cassandra.cql3.QueryProcessor;
+import org.apache.cassandra.cql3.UntypedResultSet;
 import org.apache.cassandra.db.compaction.CompactionManager;
-import org.apache.cassandra.db.filter.IDiskAtomFilter;
-import org.apache.cassandra.db.filter.NamesQueryFilter;
-import org.apache.cassandra.db.filter.QueryFilter;
-import org.apache.cassandra.db.filter.QueryPath;
 import org.apache.cassandra.db.marshal.LongType;
 import org.apache.cassandra.db.marshal.UTF8Type;
 import org.apache.cassandra.db.marshal.UUIDType;
-import org.apache.cassandra.dht.AbstractBounds;
-import org.apache.cassandra.dht.IPartitioner;
-import org.apache.cassandra.dht.Range;
 import org.apache.cassandra.dht.Token;
+import org.apache.cassandra.exceptions.WriteTimeoutException;
+import org.apache.cassandra.gms.FailureDetector;
 import org.apache.cassandra.io.sstable.Descriptor;
 import 

[2/2] git commit: Merge branch 'cassandra-1.2' into trunk

2013-05-21 Thread aleksey
Merge branch 'cassandra-1.2' into trunk

Conflicts:
src/java/org/apache/cassandra/db/BatchlogManager.java
src/java/org/apache/cassandra/db/RowMutation.java
src/java/org/apache/cassandra/service/StorageProxy.java
test/unit/org/apache/cassandra/db/HintedHandOffTest.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2ee90305
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2ee90305
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2ee90305

Branch: refs/heads/trunk
Commit: 2ee90305cd1e62033d2b78269487a73819a20c21
Parents: c609f27 3a51ccf
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed May 22 00:41:50 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed May 22 00:41:50 2013 +0300

--
 CHANGES.txt|1 +
 .../apache/cassandra/cql3/UntypedResultSet.java|5 +
 .../org/apache/cassandra/db/BatchlogManager.java   |  172 ---
 .../apache/cassandra/db/HintedHandOffManager.java  |   25 ++-
 .../org/apache/cassandra/service/StorageProxy.java |   27 ++-
 .../org/apache/cassandra/db/HintedHandOffTest.java |2 +-
 6 files changed, 136 insertions(+), 96 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2ee90305/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2ee90305/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --cc src/java/org/apache/cassandra/db/BatchlogManager.java
index 9b0c334,c56e106..6f9cb35
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@@ -134,11 -120,14 +120,12 @@@ public class BatchlogManager implement
  ByteBuffer writtenAt = LongType.instance.decompose(timestamp / 1000);
  ByteBuffer data = serializeRowMutations(mutations);
  
 -ColumnFamily cf = ColumnFamily.create(CFMetaData.BatchlogCf);
 +ColumnFamily cf = 
ArrayBackedSortedColumns.factory.create(CFMetaData.BatchlogCf);
- cf.addColumn(new Column(DATA, data, timestamp));
- cf.addColumn(new Column(WRITTEN_AT, writtenAt, timestamp));
+ cf.addColumn(new Column(columnName(""), 
ByteBufferUtil.EMPTY_BYTE_BUFFER, timestamp));
+ cf.addColumn(new Column(columnName("written_at"), writtenAt, 
timestamp));
+ cf.addColumn(new Column(columnName("data"), data, timestamp));
 -RowMutation rm = new RowMutation(Table.SYSTEM_KS, 
UUIDType.instance.decompose(uuid));
 -rm.add(cf);
  
 -return rm;
 +return new RowMutation(Table.SYSTEM_KS, 
UUIDType.instance.decompose(uuid), cf);
  }
  
  private static ByteBuffer serializeRowMutations(Collection<RowMutation> 
mutations)
@@@ -222,47 -198,90 +196,90 @@@
  DataInputStream in = new 
DataInputStream(ByteBufferUtil.inputStream(data));
  int size = in.readInt();
  for (int i = 0; i < size; i++)
- writeHintsForMutation(RowMutation.serializer.deserialize(in, 
VERSION));
+ replaySerializedMutation(RowMutation.serializer.deserialize(in, 
VERSION), writtenAt);
  }
  
- private static void writeHintsForMutation(RowMutation mutation)
+ /*
+  * We try to deliver the mutations to the replicas ourselves if they are 
alive and only resort to writing hints
+  * when a replica is down or a write request times out.
+  */
+ private void replaySerializedMutation(RowMutation mutation, long 
writtenAt) throws IOException
  {
- String table = mutation.getTable();
+ int ttl = calculateHintTTL(mutation, writtenAt);
+ if (ttl <= 0)
+ return; // the mutation isn't safe to replay.
+ 
+ Set<InetAddress> liveEndpoints = new HashSet<InetAddress>();
+ String ks = mutation.getTable();
  Token tk = StorageService.getPartitioner().getToken(mutation.key());
- List<InetAddress> naturalEndpoints = 
StorageService.instance.getNaturalEndpoints(table, tk);
- Collection<InetAddress> pendingEndpoints = 
StorageService.instance.getTokenMetadata().pendingEndpointsFor(tk, table);
- for (InetAddress target : Iterables.concat(naturalEndpoints, 
pendingEndpoints))
+ for (InetAddress endpoint : 
Iterables.concat(StorageService.instance.getNaturalEndpoints(ks, tk),
+  
StorageService.instance.getTokenMetadata().pendingEndpointsFor(tk, ks)))
  {
- if (target.equals(FBUtilities.getBroadcastAddress()))
+ if (endpoint.equals(FBUtilities.getBroadcastAddress()))
  mutation.apply();
+ else if 

[jira] [Updated] (CASSANDRA-5398) Remove localTimestamp from merkle-tree calculation (for tombstones)

2013-05-21 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5398:
--

 Reviewer: yukim
 Priority: Minor  (was: Trivial)
Fix Version/s: 1.2.6
 Assignee: Christian Spriegel

 Remove localTimestamp from merkle-tree calculation (for tombstones)
 ---

 Key: CASSANDRA-5398
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5398
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Christian Spriegel
Assignee: Christian Spriegel
Priority: Minor
 Fix For: 1.2.6

 Attachments: V1.patch


 DeletedColumn and RangeTombstone use the local-timestamp to update the digest 
 during repair.
 Even though it's only a second-precision timestamp, I think it still causes 
 some differences in the merkle tree, thereby causing overrepair.
 I attached a patch on trunk that adds a modified updateDigest() to 
 DeletedColumn, which does not use the value field for its calculation.
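The idea can be illustrated with a small, self-contained sketch (class and method names here, and the MD5 choice, are illustrative assumptions, not Cassandra's actual classes): a deleted column's value holds the node-local deletion time, which can differ between replicas even when the deletes are logically identical, so the digest hashes only the column name and the client-supplied timestamp.

```java
import java.nio.ByteBuffer;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

// Hypothetical sketch of the ticket's idea, not Cassandra's real code:
// exclude the replica-local deletion time from the repair digest so that
// replicas that recorded the same delete at slightly different local times
// still hash identically.
public class DeletedColumnDigest
{
    public static byte[] updateDigest(String name, long timestamp, int localDeletionTime)
    {
        try
        {
            MessageDigest digest = MessageDigest.getInstance("MD5");
            digest.update(name.getBytes());
            // localDeletionTime is deliberately NOT mixed in: it is a
            // replica-local, second-precision wall-clock value and would cause
            // spurious merkle-tree differences between equal replicas.
            digest.update(ByteBuffer.allocate(8).putLong(timestamp).array());
            return digest.digest();
        }
        catch (NoSuchAlgorithmException e)
        {
            throw new AssertionError(e);
        }
    }

    public static void main(String[] args)
    {
        // Two replicas applied the same delete but recorded it locally one
        // second apart; their digests still match.
        byte[] a = updateDigest("col", 42L, 1000);
        byte[] b = updateDigest("col", 42L, 1001);
        System.out.println(Arrays.equals(a, b)); // prints "true"
    }
}
```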



[1/2] cqlsh: drop CQL2/CQL3-beta support

2013-05-21 Thread aleksey
Updated Branches:
  refs/heads/trunk 2ee90305c -> 7f6ac19ef


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f6ac19e/pylib/cqlshlib/test/test_cqlsh_completion.py
--
diff --git a/pylib/cqlshlib/test/test_cqlsh_completion.py 
b/pylib/cqlshlib/test/test_cqlsh_completion.py
index 321ee5f..ea4f77c 100644
--- a/pylib/cqlshlib/test/test_cqlsh_completion.py
+++ b/pylib/cqlshlib/test/test_cqlsh_completion.py
@@ -90,29 +90,22 @@ class CqlshCompletionCase(BaseTestCase):
 def strategies(self):
 return self.module.CqlRuleSet.replication_strategies
 
-class TestCqlshCompletion_CQL2(CqlshCompletionCase):
-cqlver = 2
-module = cqlsh.cqlhandling
+class TestCqlshCompletion(CqlshCompletionCase):
+cqlver = '3.1.0'
+module = cqlsh.cql3handling
 
 def test_complete_on_empty_string(self):
 self.trycompletions('', choices=('?', 'ALTER', 'BEGIN', 'CAPTURE', 
'CONSISTENCY',
  'COPY', 'CREATE', 'DEBUG', 'DELETE', 
'DESC', 'DESCRIBE',
- 'DROP', 'HELP', 'INSERT', 'SELECT', 
'SHOW', 'SOURCE',
- 'TRACING', 'TRUNCATE', 'UPDATE', 
'USE', 'exit', 'quit'))
+ 'DROP', 'GRANT', 'HELP', 'INSERT', 
'LIST', 'REVOKE',
+ 'SELECT', 'SHOW', 'SOURCE', 
'TRACING', 'TRUNCATE', 'UPDATE',
+ 'USE', 'exit', 'quit'))
 
 def test_complete_command_words(self):
 self.trycompletions('alt', '\b\b\bALTER ')
 self.trycompletions('I', 'NSERT INTO ')
 self.trycompletions('exit', ' ')
 
-def test_complete_in_string_literals(self):
-# would be great if we could get a space after this sort of completion,
-# but readline really wants to make things difficult for us
-self.trycompletions("insert into system.'NodeId", "', Info'")
-self.trycompletions("USE '", choices=('system', self.cqlsh.keyspace), 
other_choices_ok=True)
-self.trycompletions("create keyspace blah with strategy_class = 'Sim",
-"pleStrategy'")
-
 def test_complete_in_uuid(self):
 pass
 
@@ -132,59 +125,6 @@ class TestCqlshCompletion_CQL2(CqlshCompletionCase):
 pass
 
 def test_complete_in_create_keyspace(self):
-self.trycompletions('create keyspace ', '', 
choices=('new_keyspace_name',))
-self.trycompletions('create keyspace moo ', "WITH strategy_class = '")
-self.trycompletions("create keyspace '12SomeName' with ", 
"strategy_class = '")
-self.trycompletions("create keyspace moo with strategy_class", " = '")
-self.trycompletions("create keyspace moo with strategy_class='",
-choices=self.strategies())
-self.trycompletions("create keySPACE 123 with 
strategy_class='SimpleStrategy' A",
-"ND strategy_options:replication_factor = ")
-self.trycompletions("create keyspace fish with 
strategy_class='SimpleStrategy'"
-"  and strategy_options:replication_factor = 
", '',
choices=('option_value',))
-self.trycompletions("create keyspace 'PB and J' with strategy_class="
-"'NetworkTopologyStrategy' AND", ' ')
-self.trycompletions("create keyspace 'PB and J' with strategy_class="
-"'NetworkTopologyStrategy' AND ", '',
-choices=('strategy_option_name',))
-
-def test_complete_in_drop_keyspace(self):
-pass
-
-def test_complete_in_create_columnfamily(self):
-pass
-
-def test_complete_in_drop_columnfamily(self):
-pass
-
-def test_complete_in_truncate(self):
-pass
-
-def test_complete_in_alter_columnfamily(self):
-pass
-
-def test_complete_in_use(self):
-pass
-
-def test_complete_in_create_index(self):
-pass
-
-def test_complete_in_drop_index(self):
-pass
-
-class TestCqlshCompletion_CQL3final(TestCqlshCompletion_CQL2):
-cqlver = '3.0.0'
-module = cqlsh.cql3handling
-
-def test_complete_on_empty_string(self):
-self.trycompletions('', choices=('?', 'ALTER', 'BEGIN', 'CAPTURE', 
'CONSISTENCY',
- 'COPY', 'CREATE', 'DEBUG', 'DELETE', 
'DESC', 'DESCRIBE',
- 'DROP', 'GRANT', 'HELP', 'INSERT', 
'LIST', 'REVOKE',
- 'SELECT', 'SHOW', 'SOURCE', 
'TRACING', 'TRUNCATE', 'UPDATE',
- 'USE', 'exit', 'quit'))
-
-def test_complete_in_create_keyspace(self):
 self.trycompletions('create keyspace ', '', choices=('identifier', 
'quotedName'))
 self.trycompletions('create keyspace moo ',
 "WITH replication = {'class': '")
@@ 

[jira] [Commented] (CASSANDRA-4905) Repair should exclude gcable tombstones from merkle-tree computation

2013-05-21 Thread Michael Theroux (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13663449#comment-13663449
 ] 

Michael Theroux commented on CASSANDRA-4905:


Can anyone comment on the risk of a user (such as myself) backporting this fix 
and patching locally?  The code that was changed in the patch looks identical 
in 1.1.11.  

We have a situation where a column family with lots of deletes is running under 
leveled compaction.  The validation doesn't take too long, but afterwards we 
get 2k compaction tasks that take several hours to run, when really there 
shouldn't be any inconsistency.  What I suspect is happening is that as 
tombstones pass gc_grace they are compacted away on some nodes and not others 
at the time repair is run.  I suspect the majority of the 2k compactions are 
gc_graced tombstones getting back in sync.

I'm setting up a test environment with baseline data, going to reproduce the 
repair, reset to baseline, and re-run the repair with this patch to see if this 
is indeed the issue. This might take a few days to set up and run.

Cassandra is mission- and business-critical for us.  Moving to 1.2 will take 
some time, as we need to set up a test environment, practice migrations, and 
test.  We also use the ByteOrderedPartitioner, which in general concerns me as 
it's not the most popular use of Cassandra, and may be a source of issues as 
it's pounded on less by the general user community.

 Repair should exclude gcable tombstones from merkle-tree computation
 

 Key: CASSANDRA-4905
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4905
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Christian Spriegel
Assignee: Sylvain Lebresne
 Fix For: 1.2.0 beta 3

 Attachments: 4905.txt


 Currently gcable tombstones get repaired if some replicas have already 
 compacted them away, but others have not.
 This could be avoided by ignoring all gcable tombstones during merkle tree 
 calculation.
 This was discussed with Sylvain on the mailing list:
 http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/repair-compaction-and-tombstone-rows-td7583481.html
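The approach can be sketched outside of Cassandra's classes as follows (a simplified, hypothetical model: `Tombstone`, `digest`, and the MD5 choice are illustrative assumptions, not the actual implementation). During validation, any tombstone whose local deletion time falls before the gc_grace horizon is simply skipped, so replicas that have or have not compacted it away produce the same merkle-tree hash.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

// Hypothetical sketch of the ticket's idea: exclude gcable tombstones from
// the merkle-tree digest so compacted and uncompacted replicas agree.
public class GcableTombstoneDigest
{
    public static class Tombstone
    {
        public final String name;
        public final int localDeletionTime; // seconds since epoch

        public Tombstone(String name, int localDeletionTime)
        {
            this.name = name;
            this.localDeletionTime = localDeletionTime;
        }
    }

    // gcBefore = now - gc_grace_seconds; anything deleted before it is gcable
    public static byte[] digest(Tombstone[] tombstones, int gcBefore)
    {
        try
        {
            MessageDigest md = MessageDigest.getInstance("MD5");
            for (Tombstone t : tombstones)
            {
                if (t.localDeletionTime < gcBefore)
                    continue; // gcable: leave it out of the merkle-tree hash
                md.update(t.name.getBytes());
            }
            return md.digest();
        }
        catch (NoSuchAlgorithmException e)
        {
            throw new AssertionError(e);
        }
    }

    public static void main(String[] args)
    {
        int gcBefore = 1000;
        // Replica A still holds a gcable tombstone ("c1"); replica B has
        // already compacted it away. With gcable tombstones excluded, the
        // digests agree and no repair streaming is triggered.
        Tombstone[] replicaA = { new Tombstone("c1", 500), new Tombstone("c2", 2000) };
        Tombstone[] replicaB = { new Tombstone("c2", 2000) };
        System.out.println(Arrays.equals(digest(replicaA, gcBefore),
                                         digest(replicaB, gcBefore))); // prints "true"
    }
}
```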



[jira] [Commented] (CASSANDRA-4693) CQL Protocol should allow multiple PreparedStatements to be atomically executed

2013-05-21 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13663492#comment-13663492
 ] 

Aleksey Yeschenko commented on CASSANDRA-4693:
--

Could you rebase? It no longer applies because of trigger changes to 
BatchStatement.

 CQL Protocol should allow multiple PreparedStatements to be atomically 
 executed
 ---

 Key: CASSANDRA-4693
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4693
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Michaël Figuière
Assignee: Sylvain Lebresne
  Labels: cql, protocol
 Fix For: 2.0

 Attachments: 
 0001-Binary-protocol-adds-message-to-batch-prepared-or-not-.txt


 Currently the only way to insert multiple records on the same partition key, 
 atomically and using PreparedStatements is to use a CQL BATCH command. 
 Unfortunately when doing so the number of records to be inserted must be 
 known prior to preparing the statement, which is rarely the case. Thus the only 
 workaround if one wants to keep atomicity is currently to use unprepared 
 statements, which send a bulk of CQL strings and are fairly inefficient.
 Therefore CQL Protocol should allow clients to send multiple 
 PreparedStatements to be executed with similar guarantees and semantic as CQL 
 BATCH command.
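The inefficient workaround the ticket describes can be sketched as follows (a hypothetical client-side helper, not any actual driver API): lacking a protocol-level batch of prepared statements, an atomic multi-row insert is sent as one unprepared BATCH, re-shipping the full CQL text on every call.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the current workaround: concatenate plain CQL into a
// single unprepared BATCH string. Every execution re-sends all query strings,
// with no prepared-statement reuse -- which is what this ticket wants to fix
// at the protocol level.
public class UnpreparedBatch
{
    public static String buildBatch(List<String> statements)
    {
        StringBuilder sb = new StringBuilder("BEGIN BATCH\n");
        for (String stmt : statements)
            sb.append("  ").append(stmt).append(";\n");
        return sb.append("APPLY BATCH;").toString();
    }

    public static void main(String[] args)
    {
        // ks.cf is a placeholder keyspace/table name for illustration.
        System.out.println(buildBatch(Arrays.asList(
            "INSERT INTO ks.cf (k, v) VALUES ('a', 1)",
            "INSERT INTO ks.cf (k, v) VALUES ('a', 2)")));
    }
}
```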



git commit: Ninja-correct NEWS.txt typo (uses to -> used to)

2013-05-21 Thread aleksey
Updated Branches:
  refs/heads/cassandra-1.2 3a51ccf2d -> d8f0ed555


Ninja-correct NEWS.txt typo (uses to -> used to)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d8f0ed55
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d8f0ed55
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d8f0ed55

Branch: refs/heads/cassandra-1.2
Commit: d8f0ed555f6fa5e94d0d5bf6b9a94eb13d5c231a
Parents: 3a51ccf
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed May 22 02:04:37 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed May 22 02:04:37 2013 +0300

--
 NEWS.txt |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d8f0ed55/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 63d1e58..706cd29 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -42,7 +42,7 @@ Features
 
 Upgrading
 -
-- CQL3 uses to be case-insensitive for property map key in ALTER and CREATE
+- CQL3 used to be case-insensitive for property map key in ALTER and CREATE
   statements. In other words:
 CREATE KEYSPACE test WITH replication = { 'CLASS' : 'SimpleStrategy',
   'REPLICATION_FACTOR' : '1' }



[1/2] git commit: Ninja-correct NEWS.txt typo (uses to -> used to)

2013-05-21 Thread aleksey
Updated Branches:
  refs/heads/trunk 7f6ac19ef -> 288e503d2


Ninja-correct NEWS.txt typo (uses to -> used to)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d8f0ed55
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d8f0ed55
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d8f0ed55

Branch: refs/heads/trunk
Commit: d8f0ed555f6fa5e94d0d5bf6b9a94eb13d5c231a
Parents: 3a51ccf
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed May 22 02:04:37 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed May 22 02:04:37 2013 +0300

--
 NEWS.txt |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d8f0ed55/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 63d1e58..706cd29 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -42,7 +42,7 @@ Features
 
 Upgrading
 -
-- CQL3 uses to be case-insensitive for property map key in ALTER and CREATE
+- CQL3 used to be case-insensitive for property map key in ALTER and CREATE
   statements. In other words:
 CREATE KEYSPACE test WITH replication = { 'CLASS' : 'SimpleStrategy',
   'REPLICATION_FACTOR' : '1' }



git commit: cqlsh: ninja-fix DROP INDEX autocompletion for CQL3 tables

2013-05-21 Thread aleksey
Updated Branches:
  refs/heads/cassandra-1.2 d8f0ed555 -> 687ac710c


cqlsh: ninja-fix DROP INDEX autocompletion for CQL3 tables


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/687ac710
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/687ac710
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/687ac710

Branch: refs/heads/cassandra-1.2
Commit: 687ac710cc9ec010eab6ff00dd8b6f1a3136c635
Parents: d8f0ed5
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed May 22 02:34:44 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed May 22 02:34:44 2013 +0300

--
 bin/cqlsh |   14 --
 1 files changed, 8 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/687ac710/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index 974c916..2f6568a 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -669,12 +669,14 @@ class Shell(cmd.Cmd):
 return [c.name for c in self.get_columnfamilies(ksname)]
 
 def get_index_names(self, ksname=None):
-indnames = []
-for c in self.get_columnfamilies(ksname):
-for md in c.column_metadata:
-if md.index_name is not None:
-indnames.append(md.index_name)
-return indnames
+cols = []
+if self.cqlver_atleast(3) and not self.is_cql3_beta():
+for cfname in self.get_columnfamily_names_cql3(ksname=ksname):
+cols.extend(self.get_columnfamily_layout(ksname, 
cfname).columns)
+else:
+for cf in self.get_columnfamilies(ksname):
+cols.extend(cf.column_metadata)
+return [col.index_name for col in cols if col.index_name is not None]
 
 def filterable_column_names(self, cfdef):
 filterable = set()



[1/3] git commit: cqlsh: ninja-fix DROP INDEX autocompletion for CQL3 tables

2013-05-21 Thread aleksey
Updated Branches:
  refs/heads/trunk 288e503d2 -> f3d2a52b9


cqlsh: ninja-fix DROP INDEX autocompletion for CQL3 tables


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/687ac710
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/687ac710
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/687ac710

Branch: refs/heads/trunk
Commit: 687ac710cc9ec010eab6ff00dd8b6f1a3136c635
Parents: d8f0ed5
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed May 22 02:34:44 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed May 22 02:34:44 2013 +0300

--
 bin/cqlsh |   14 --
 1 files changed, 8 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/687ac710/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index 974c916..2f6568a 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -669,12 +669,14 @@ class Shell(cmd.Cmd):
 return [c.name for c in self.get_columnfamilies(ksname)]
 
 def get_index_names(self, ksname=None):
-indnames = []
-for c in self.get_columnfamilies(ksname):
-for md in c.column_metadata:
-if md.index_name is not None:
-indnames.append(md.index_name)
-return indnames
+cols = []
+if self.cqlver_atleast(3) and not self.is_cql3_beta():
+for cfname in self.get_columnfamily_names_cql3(ksname=ksname):
+cols.extend(self.get_columnfamily_layout(ksname, 
cfname).columns)
+else:
+for cf in self.get_columnfamilies(ksname):
+cols.extend(cf.column_metadata)
+return [col.index_name for col in cols if col.index_name is not None]
 
 def filterable_column_names(self, cfdef):
 filterable = set()



[2/3] git commit: Merge branch 'cassandra-1.2' into trunk

2013-05-21 Thread aleksey
Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b53d3c45
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b53d3c45
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b53d3c45

Branch: refs/heads/trunk
Commit: b53d3c453b7b5b912140c3a54108057e2741c964
Parents: 288e503 687ac71
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed May 22 02:35:34 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed May 22 02:35:34 2013 +0300

--
 bin/cqlsh |   14 --
 1 files changed, 8 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b53d3c45/bin/cqlsh
--
diff --cc bin/cqlsh
index 8dcc69f,2f6568a..c7ac533
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@@ -581,24 -664,31 +581,26 @@@ class Shell(cmd.Cmd)
  raise ColumnFamilyNotFound("Unconfigured column family %r" % 
(cfname,))
  
  def get_columnfamily_names(self, ksname=None):
 -if self.cqlver_atleast(3) and not self.is_cql3_beta():
 -return self.get_columnfamily_names_cql3(ksname=ksname)
 -return [c.name for c in self.get_columnfamilies(ksname)]
 +if ksname is None:
 +ksname = self.current_keyspace
 +cf_q = """select columnfamily_name from system.schema_columnfamilies
 +   where keyspace_name=:ks"""
 +self.cursor.execute(cf_q,
 +{'ks': self.cql_unprotect_name(ksname)},
 +consistency_level='ONE')
 +return [str(row[0]) for row in self.cursor.fetchall()]
  
 +# TODO: FIXME
  def get_index_names(self, ksname=None):
- indnames = []
- for c in self.get_columnfamilies(ksname):
- for md in c.column_metadata:
- if md.index_name is not None:
- indnames.append(md.index_name)
- return indnames
+ cols = []
+ if self.cqlver_atleast(3) and not self.is_cql3_beta():
+ for cfname in self.get_columnfamily_names_cql3(ksname=ksname):
+ cols.extend(self.get_columnfamily_layout(ksname, 
cfname).columns)
+ else:
+ for cf in self.get_columnfamilies(ksname):
+ cols.extend(cf.column_metadata)
+ return [col.index_name for col in cols if col.index_name is not None]
  
 -def filterable_column_names(self, cfdef):
 -filterable = set()
 -if cfdef.key_alias is not None and cfdef.key_alias != 'KEY':
 -filterable.add(cfdef.key_alias)
 -else:
 -filterable.add('KEY')
 -for cm in cfdef.column_metadata:
 -if cm.index_name is not None:
 -filterable.add(cm.name)
 -return filterable
 -
  def get_column_names(self, ksname, cfname):
  if ksname is None:
  ksname = self.current_keyspace



[3/3] git commit: cqlsh: ninja-fix DROP INDEX autocompletion for CQL3 tables (2.0)

2013-05-21 Thread aleksey
cqlsh: ninja-fix DROP INDEX autocompletion for CQL3 tables (2.0)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f3d2a52b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f3d2a52b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f3d2a52b

Branch: refs/heads/trunk
Commit: f3d2a52b99b789586363a5966a766629710bcb40
Parents: b53d3c4
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed May 22 02:53:18 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed May 22 02:53:18 2013 +0300

--
 bin/cqlsh |   31 ++-
 1 files changed, 6 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f3d2a52b/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index c7ac533..4a4fced 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -564,22 +564,6 @@ class Shell(cmd.Cmd):
 def get_keyspace_names(self):
 return [k.name for k in self.get_keyspaces()]
 
-def get_columnfamilies(self, ksname=None):
-if ksname is None:
-ksname = self.current_keyspace
-if ksname is None:
-raise NoKeyspaceError("Not in any keyspace.")
-return self.get_keyspace(ksname).cf_defs
-
-def get_columnfamily(self, cfname, ksname=None):
-if ksname is None:
-ksname = self.current_keyspace
-cf_defs = self.get_columnfamilies(ksname)
-for c in cf_defs:
-if c.name == cfname:
-return c
-raise ColumnFamilyNotFound("Unconfigured column family %r" % (cfname,))
-
 def get_columnfamily_names(self, ksname=None):
 if ksname is None:
 ksname = self.current_keyspace
@@ -590,16 +574,13 @@ class Shell(cmd.Cmd):
 consistency_level='ONE')
 return [str(row[0]) for row in self.cursor.fetchall()]
 
-# TODO: FIXME
 def get_index_names(self, ksname=None):
-cols = []
-if self.cqlver_atleast(3) and not self.is_cql3_beta():
-for cfname in self.get_columnfamily_names_cql3(ksname=ksname):
-cols.extend(self.get_columnfamily_layout(ksname, 
cfname).columns)
-else:
-for cf in self.get_columnfamilies(ksname):
-cols.extend(cf.column_metadata)
-return [col.index_name for col in cols if col.index_name is not None]
+idxnames = []
+for cfname in self.get_columnfamily_names(ksname=ksname):
+for col in self.get_columnfamily_layout(ksname, cfname).columns:
+if col.index_name is not None:
+idxnames.append(col.index_name)
+return idxnames
 
 def get_column_names(self, ksname, cfname):
 if ksname is None:



git commit: spilleng

2013-05-21 Thread dbrosius
Updated Branches:
  refs/heads/cassandra-1.2 687ac710c -> de2ee6e28


spilleng


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/de2ee6e2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/de2ee6e2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/de2ee6e2

Branch: refs/heads/cassandra-1.2
Commit: de2ee6e28fc65340c1702583fef343d7d81bbc3b
Parents: 687ac71
Author: Dave Brosius dbros...@apache.org
Authored: Tue May 21 20:49:20 2013 -0400
Committer: Dave Brosius dbros...@apache.org
Committed: Tue May 21 20:49:20 2013 -0400

--
 .../org/apache/cassandra/tools/SSTableExport.java  |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/de2ee6e2/src/java/org/apache/cassandra/tools/SSTableExport.java
--
diff --git a/src/java/org/apache/cassandra/tools/SSTableExport.java 
b/src/java/org/apache/cassandra/tools/SSTableExport.java
index 90274d1..1d898ef 100644
--- a/src/java/org/apache/cassandra/tools/SSTableExport.java
+++ b/src/java/org/apache/cassandra/tools/SSTableExport.java
@@ -505,7 +505,7 @@ public class SSTableExport
 Descriptor descriptor = Descriptor.fromFilename(ssTableFileName);
 if (Schema.instance.getCFMetaData(descriptor) == null)
 {
-            System.err.println(String.format("The provided column family is not part of this cassandra database: keysapce = %s, column family = %s",
+            System.err.println(String.format("The provided column family is not part of this cassandra database: keyspace = %s, column family = %s",
                                              descriptor.ksname, descriptor.cfname));
 System.exit(1);
 }



git commit: spilleng

2013-05-21 Thread dbrosius
Updated Branches:
  refs/heads/cassandra-1.1 9879fa612 -> 93cfbc187


spilleng


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/93cfbc18
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/93cfbc18
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/93cfbc18

Branch: refs/heads/cassandra-1.1
Commit: 93cfbc187133d90d88a28826ad5989a53bde3e2f
Parents: 9879fa6
Author: Dave Brosius dbros...@apache.org
Authored: Tue May 21 20:51:04 2013 -0400
Committer: Dave Brosius dbros...@apache.org
Committed: Tue May 21 20:51:04 2013 -0400

--
 .../org/apache/cassandra/tools/SSTableExport.java  |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/93cfbc18/src/java/org/apache/cassandra/tools/SSTableExport.java
--
diff --git a/src/java/org/apache/cassandra/tools/SSTableExport.java 
b/src/java/org/apache/cassandra/tools/SSTableExport.java
index 9ac2123..37f158b 100644
--- a/src/java/org/apache/cassandra/tools/SSTableExport.java
+++ b/src/java/org/apache/cassandra/tools/SSTableExport.java
@@ -401,7 +401,7 @@ public class SSTableExport
 Descriptor descriptor = Descriptor.fromFilename(ssTableFileName);
 if (Schema.instance.getCFMetaData(descriptor) == null)
 {
-            System.err.println(String.format("The provided column family is not part of this cassandra database: keysapce = %s, column family = %s",
+            System.err.println(String.format("The provided column family is not part of this cassandra database: keyspace = %s, column family = %s",
                                              descriptor.ksname, descriptor.cfname));
 System.exit(1);
 }



[1/2] git commit: spilleng

2013-05-21 Thread dbrosius
Updated Branches:
  refs/heads/trunk f3d2a52b9 -> bac41da61


spilleng


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/de2ee6e2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/de2ee6e2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/de2ee6e2

Branch: refs/heads/trunk
Commit: de2ee6e28fc65340c1702583fef343d7d81bbc3b
Parents: 687ac71
Author: Dave Brosius dbros...@apache.org
Authored: Tue May 21 20:49:20 2013 -0400
Committer: Dave Brosius dbros...@apache.org
Committed: Tue May 21 20:49:20 2013 -0400

--
 .../org/apache/cassandra/tools/SSTableExport.java  |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/de2ee6e2/src/java/org/apache/cassandra/tools/SSTableExport.java
--
diff --git a/src/java/org/apache/cassandra/tools/SSTableExport.java 
b/src/java/org/apache/cassandra/tools/SSTableExport.java
index 90274d1..1d898ef 100644
--- a/src/java/org/apache/cassandra/tools/SSTableExport.java
+++ b/src/java/org/apache/cassandra/tools/SSTableExport.java
@@ -505,7 +505,7 @@ public class SSTableExport
 Descriptor descriptor = Descriptor.fromFilename(ssTableFileName);
 if (Schema.instance.getCFMetaData(descriptor) == null)
 {
-            System.err.println(String.format("The provided column family is not part of this cassandra database: keysapce = %s, column family = %s",
+            System.err.println(String.format("The provided column family is not part of this cassandra database: keyspace = %s, column family = %s",
                                              descriptor.ksname, descriptor.cfname));
 System.exit(1);
 }



[2/2] git commit: Merge branch 'cassandra-1.2' into trunk

2013-05-21 Thread dbrosius
Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bac41da6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bac41da6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bac41da6

Branch: refs/heads/trunk
Commit: bac41da6118518a99161979d035d8a8f3cde585f
Parents: f3d2a52 de2ee6e
Author: Dave Brosius dbros...@apache.org
Authored: Tue May 21 20:52:35 2013 -0400
Committer: Dave Brosius dbros...@apache.org
Committed: Tue May 21 20:52:35 2013 -0400

--
 .../org/apache/cassandra/tools/SSTableExport.java  |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bac41da6/src/java/org/apache/cassandra/tools/SSTableExport.java
--



[jira] [Commented] (CASSANDRA-3741) OOMs because delete operations are not accounted

2013-05-21 Thread Radim Kolar (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13663802#comment-13663802
 ] 

Radim Kolar commented on CASSANDRA-3741:


I retested it. The bug from 1.0 does not exist in 1.1 and 1.2. But it is still
not optimal and can lead to OOM, because it does not add enough bytes per
tombstone to the live data count.

If I remember correctly, some hardcoded constant was used; it needs to be raised.
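The failure mode described above can be illustrated with a small sketch (hypothetical names, constants, and thresholds; not Cassandra's actual implementation): if each tombstone is credited with a fixed, too-small byte count, a delete-heavy workload under-reports memtable size and the flush threshold is never reached, so heap usage grows unchecked.

```python
TOMBSTONE_OVERHEAD = 8   # hypothetical hardcoded per-tombstone credit (bytes)
FLUSH_THRESHOLD = 1024   # flush once the *accounted* size exceeds this

class Memtable:
    def __init__(self):
        self.accounted = 0   # what the flush logic sees
        self.actual = 0      # real heap cost of the buffered tombstones

    def delete(self, key):
        # Real per-tombstone cost: key bytes plus bookkeeping structures
        # (hypothetical 64-byte overhead for cell metadata, pointers, etc.).
        self.actual += len(key) + 64
        # But the accounting only credits the fixed constant, so the
        # memtable's reported size badly lags its real footprint.
        self.accounted += TOMBSTONE_OVERHEAD

    def needs_flush(self):
        return self.accounted >= FLUSH_THRESHOLD

mt = Memtable()
for i in range(100):
    mt.delete("row-%05d" % i)

# Accounted size lags far behind actual heap usage, so no flush happens.
print(mt.accounted, mt.actual, mt.needs_flush())  # → 800 7300 False
```

Raising the per-tombstone constant (or charging the real serialized size) closes the gap between accounted and actual, letting flushes trigger before the heap fills.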

 OOMs because delete operations are not accounted
 

 Key: CASSANDRA-3741
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3741
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.0
 Environment: FreeBSD
Reporter: Vitalii Tymchyshyn
Assignee: Andriy Kolyadenko
 Fix For: 1.1.1


 Currently we are moving to a new data format, where the new format is written
 into new CFs and the old one is deleted key-by-key.
 I have started getting OOMs and found out that delete operations are not
 accounted, and so column families are not flushed (changed == 0 with
 delete-only operations) by the storage manager.
 This is the pull request that fixed this problem for me:
 https://github.com/apache/cassandra/pull/5

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5587) BulkLoader fails with NoSuchElementException

2013-05-21 Thread JIRA
Julien Aymé created CASSANDRA-5587:
--

 Summary: BulkLoader fails with NoSuchElementException
 Key: CASSANDRA-5587
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5587
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.2.5, 1.2.4
Reporter: Julien Aymé


When using the BulkLoader tool (sstableloader command) to transfer data from
one cluster to another, a java.util.NoSuchElementException is thrown whenever
the directory contains a snapshots sub-directory, and the bulk load fails.

The fix should be quite simple:
Catch any NoSuchElementException thrown in {{SSTableLoader#openSSTables()}}.
The directory structure:
{noformat}
user@cassandrasrv01:~$ ls /var/lib/cassandra/data/Keyspace1/CF1/
Keyspace1-CF1-ib-1872-CompressionInfo.db
Keyspace1-CF1-ib-1872-Data.db
Keyspace1-CF1-ib-1872-Filter.db
Keyspace1-CF1-ib-1872-Index.db
Keyspace1-CF1-ib-1872-Statistics.db
Keyspace1-CF1-ib-1872-Summary.db
Keyspace1-CF1-ib-1872-TOC.txt
Keyspace1-CF1-ib-2166-CompressionInfo.db
Keyspace1-CF1-ib-2166-Data.db
Keyspace1-CF1-ib-2166-Filter.db
Keyspace1-CF1-ib-2166-Index.db
Keyspace1-CF1-ib-2166-Statistics.db
Keyspace1-CF1-ib-2166-Summary.db
Keyspace1-CF1-ib-2166-TOC.txt
Keyspace1-CF1-ib-5-CompressionInfo.db
Keyspace1-CF1-ib-5-Data.db
Keyspace1-CF1-ib-5-Filter.db
Keyspace1-CF1-ib-5-Index.db
Keyspace1-CF1-ib-5-Statistics.db
Keyspace1-CF1-ib-5-Summary.db
Keyspace1-CF1-ib-5-TOC.txt
...
snapshots
{noformat}


The stacktrace: 
{noformat}
user@cassandrasrv01:~$ ./cassandra/bin/sstableloader -v --debug -d 
cassandrabck01 /var/lib/cassandra/data/Keyspace1/CF1/
null
java.util.NoSuchElementException
at java.util.StringTokenizer.nextToken(StringTokenizer.java:349)
at 
org.apache.cassandra.io.sstable.Descriptor.fromFilename(Descriptor.java:265)
at 
org.apache.cassandra.io.sstable.Component.fromFilename(Component.java:122)
at 
org.apache.cassandra.io.sstable.SSTable.tryComponentFromFilename(SSTable.java:194)
at 
org.apache.cassandra.io.sstable.SSTableLoader$1.accept(SSTableLoader.java:71)
at java.io.File.list(File.java:1087)
at 
org.apache.cassandra.io.sstable.SSTableLoader.openSSTables(SSTableLoader.java:67)
at 
org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:119)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:67)
{noformat}
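The proposed fix amounts to a defensive filename filter. The sketch below (hypothetical helper names; Cassandra's real `Descriptor.fromFilename` parsing is more involved) shows the idea: directory entries such as `snapshots` that do not parse as sstable component names are skipped instead of aborting the whole load.

```python
def parse_component(filename):
    # Loosely mimics Descriptor.fromFilename, which tokenizes names of the
    # form keyspace-cf-version-generation-Component and throws when a
    # token is missing (the NoSuchElementException in the stacktrace).
    parts = filename.split("-")
    if len(parts) < 5:
        raise ValueError("not an sstable component: %r" % filename)
    return parts[-1]

def open_sstables(entries):
    # Collect parseable sstable components; silently skip anything else
    # (e.g. the "snapshots" sub-directory) rather than failing the load.
    components = []
    for name in entries:
        try:
            components.append(parse_component(name))
        except ValueError:
            continue
    return components

entries = ["Keyspace1-CF1-ib-5-Data.db",
           "Keyspace1-CF1-ib-5-Index.db",
           "snapshots"]
print(open_sstables(entries))  # → ['Data.db', 'Index.db']
```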

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[Cassandra Wiki] Update of ClientOptionsThrift by DaveBrosius

2013-05-21 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The ClientOptionsThrift page has been changed by DaveBrosius:
https://wiki.apache.org/cassandra/ClientOptionsThrift?action=diff&rev1=1&rev2=2

* Feedly-Cassandra (ORM library): https://github.com/kireet/feedly-cassandra
   * Scala
* Cascal: https://github.com/Shimi/cascal
+   * Cassie: https://github.com/twitter/cassie 
   * Node.js
* Helenus: https://github.com/simplereach/helenus
   * Clojure