[jira] [Commented] (CASSANDRA-2319) Promote row index

2012-04-11 Thread Sylvain Lebresne (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13252243#comment-13252243
 ] 

Sylvain Lebresne commented on CASSANDRA-2319:
-

bq. What if we dropped the "main" index and just kept the "sample" index of 
every 1/128 columns? Seems like we'd trade a little more seq i/o to do less 
random i/o, and we'd be able to get rid of the index sampling phase on startup...

I am not sure I follow. 

> Promote row index
> -
>
> Key: CASSANDRA-2319
> URL: https://issues.apache.org/jira/browse/CASSANDRA-2319
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Stu Hood
>Assignee: Sylvain Lebresne
>  Labels: index, timeseries
> Fix For: 1.2
>
> Attachments: 2319-v1.tgz, 2319-v2.tgz, promotion.pdf, version-f.txt, 
> version-g-lzf.txt, version-g.txt
>
>
> The row index contains entries for configurably sized blocks of a wide row. 
> For a row of appreciable size, the row index ends up directing the third seek 
> (1. index, 2. row index, 3. content) to nearby the first column of a scan.
> Since the row index is always used for wide rows, and since it contains 
> information that tells us whether or not the 3rd seek is necessary (the 
> column range or name we are trying to slice may not exist in a given 
> sstable), promoting the row index into the sstable index would allow us to 
> drop the maximum number of seeks for wide rows back to 2, and, more 
> importantly, would allow sstables to be eliminated using only the index.
> An example usecase that benefits greatly from this change is time series data 
> in wide rows, where data is appended to the beginning or end of the row. Our 
> existing compaction strategy gets lucky and clusters the oldest data in the 
> oldest sstables: for queries to recently appended data, we would be able to 
> eliminate wide rows using only the sstable index, rather than needing to seek 
> into the data file to determine that it isn't interesting. For narrow rows, 
> this change would have no effect, as they will not reach the threshold for 
> indexing anyway.
> A first cut design for this change would look very similar to the file format 
> design proposed on #674: 
> http://wiki.apache.org/cassandra/FileFormatDesignDoc: row keys clustered, 
> column names clustered, and offsets clustered and delta encoded.





[jira] [Commented] (CASSANDRA-4052) Add way to force the cassandra-cli to refresh its schema

2012-04-11 Thread Jonathan Ellis (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13252203#comment-13252203
 ] 

Jonathan Ellis commented on CASSANDRA-4052:
---

Ah, I bet you're right.

> Add way to force the cassandra-cli to refresh its schema
> -
>
> Key: CASSANDRA-4052
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4052
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Affects Versions: 1.0.8
>Reporter: Tupshin Harper
>Priority: Minor
>
> By design, the cassandra-cli caches the schema and doesn't refresh it when 
> various commands like "describe keyspaces" are run. This is reasonable, and 
> it is easy enough to restart the cli if necessary. However, this does lead 
> to confusion since a new user can reasonably assume that describe keyspaces 
> will always show an accurate current representation of the ring. We should find 
> a way to reduce the surprise (and lack of easy discoverability) of this 
> behaviour.
> I propose any one of the following (#1 is probably the easiest and most 
> likely):
> 1) Add a command (that would be documented in the cli's help) to explicitly 
> refresh the schema ("schema refresh", "refresh schema", or anything similar).
> 2) Always force a refresh of the schema when performing at least the 
> "describe keyspaces" command.
> 3) Add a flag to cassandra-cli to explicitly enable schema caching. If that 
> flag is not passed, then schema caching will be disabled for that session. 
> This suggestion assumes that for simple deployments (few CFs, etc), schema 
> caching isn't very important to the performance of the cli.
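
A minimal sketch of what proposal #1 could look like, assuming a plain Thrift 
Cassandra.Client handle; the class and field names are illustrative rather than 
the actual cassandra-cli code (only describe_keyspaces() is the real Thrift call):

{code}
import java.util.List;

import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.KsDef;
import org.apache.thrift.TException;

// Hypothetical holder for the cli's cached schema, for illustration only.
class CliSchemaCache
{
    private final Cassandra.Client thriftClient;
    private List<KsDef> cachedKeyspaces;

    CliSchemaCache(Cassandra.Client thriftClient)
    {
        this.thriftClient = thriftClient;
    }

    // What "describe keyspaces" does today: fetch once, then keep serving the cached copy.
    List<KsDef> describeKeyspaces() throws TException
    {
        if (cachedKeyspaces == null)
            cachedKeyspaces = thriftClient.describe_keyspaces();
        return cachedKeyspaces;
    }

    // What an explicit "refresh schema" command would invoke: re-read the live
    // schema from the cluster and replace the cached copy.
    void refreshSchema() throws TException
    {
        cachedKeyspaces = thriftClient.describe_keyspaces();
    }
}
{code}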





[jira] [Commented] (CASSANDRA-4046) when creating keyspace with simple strategy, it should only accept "replication_factor" as an option

2012-04-11 Thread Dave Brosius (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13252198#comment-13252198
 ] 

Dave Brosius commented on CASSANDRA-4046:
-

Implementations of AbstractReplicationStrategy.validateOptions currently only do 
positive validation; negative validation (rejecting options the strategy does not 
recognize) would need to be implemented for this issue.
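
A rough sketch of the negative validation being described (illustrative only; the 
standalone class and the exception type are assumptions, not the actual 
SimpleStrategy/AbstractReplicationStrategy code):

{code}
import java.util.Map;

class SimpleStrategyOptionCheck
{
    static void validateOptions(Map<String, String> options)
    {
        // Existing positive validation: replication_factor must be present and numeric.
        String rf = options.get("replication_factor");
        if (rf == null)
            throw new IllegalArgumentException("SimpleStrategy requires a replication_factor option");
        Integer.parseInt(rf);

        // Negative validation needed for this ticket: reject anything else,
        // e.g. the "DC : testdc" option from the report below.
        for (String key : options.keySet())
        {
            if (!key.equals("replication_factor"))
                throw new IllegalArgumentException(key + " is not a valid option for SimpleStrategy");
        }
    }
}
{code}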

> when creating keyspace with simple strategy, it should only accept 
> "replication_factor" as an option
> ---
>
> Key: CASSANDRA-4046
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4046
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jackson Chung
>Priority: Minor
>
> currently I could do this:
> {panel}
> [default@unknown] create keyspace test
> ... with placement_strategy = 'SimpleStrategy'
> ... and strategy_options = \{DC : testdc, replication_factor :1\};
> ebc5f430-6d47-11e1--edee3ea2cbff
> Waiting for schema agreement...
> ... schemas agree across the cluster
> [default@unknown] 
> {panel}
> While I don't think this creates any "problem" in terms of the actual 
> replication being used for the CL, we probably should acknowledge to the 
> user that "DC : testdc" is not a valid option for the SimpleStrategy.





[jira] [Commented] (CASSANDRA-4052) Add way to force the cassandra-cli to refresh its schema

2012-04-11 Thread Dave Brosius (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13252196#comment-13252196
 ] 

Dave Brosius commented on CASSANDRA-4052:
-

Is this schema cached to allow for the assumption of cf attributes?

i.e., assume cf validator as utf8;

?

> Add way to force the cassandra-cli to refresh its schema
> -
>
> Key: CASSANDRA-4052
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4052
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Affects Versions: 1.0.8
>Reporter: Tupshin Harper
>Priority: Minor
>
> By design, the cassandra-cli caches the schema and doesn't refresh it when 
> various commands like "describe keyspaces" are run. This is reasonable, and 
> it is easy enough to restart the cli if necessary. However, this does lead 
> to confusion since a new user can reasonably assume that describe keyspaces 
> will always show an accurate current representation of the ring. We should find 
> a way to reduce the surprise (and lack of easy discoverability) of this 
> behaviour.
> I propose any one of the following (#1 is probably the easiest and most 
> likely):
> 1) Add a command (that would be documented in the cli's help) to explicitly 
> refresh the schema ("schema refresh", "refresh schema", or anything similar).
> 2) Always force a refresh of the schema when performing at least the 
> "describe keyspaces" command.
> 3) Add a flag to cassandra-cli to explicitly enable schema caching. If that 
> flag is not passed, then schema caching will be disabled for that session. 
> This suggestion assumes that for simple deployments (few CFs, etc), schema 
> caching isn't very important to the performance of the cli.





[jira] [Updated] (CASSANDRA-4140) Build stress classes in a location that allows tools/stress/bin/stress to find them

2012-04-11 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4140:
-

Attachment: 0001-CASSANDRA-4140.patch

Alright, the attached patch will make the following possible...

Build from Source -> 
#ant stress-build
#tools/stress/bin/stress

Installed Binary ->
#tools/stress/bin/stress

NOTE: executing from a source checkout only works on Unix-like systems; I don't 
have a machine to test stress.bat, so I left it untouched.

> Build stress classes in a location that allows tools/stress/bin/stress to 
> find them
> ---
>
> Key: CASSANDRA-4140
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4140
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Affects Versions: 1.2
>Reporter: Nick Bailey
>Assignee: Vijay
>Priority: Trivial
> Fix For: 1.2
>
> Attachments: 0001-CASSANDRA-4140.patch
>
>
> Right now it's hard to run stress from a checkout of trunk. You need to do 
> 'ant artifacts' and then run the stress tool in the generated artifacts.
> A discussion on irc came up with the proposal to just move stress to the main 
> jar, put the stress/stressd bash scripts in bin/, and drop the tools 
> directory altogether. It will be easier for users to find that way and will 
> make running stress from a checkout much easier.





[jira] [Updated] (CASSANDRA-4141) Looks like Serializing cache broken in 1.1

2012-04-11 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4141:
--

Reviewer: xedin

> Looks like Serializing cache broken in 1.1
> --
>
> Key: CASSANDRA-4141
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4141
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.1.0
>Reporter: Vijay
>Assignee: Vijay
> Fix For: 1.1.0
>
> Attachments: 0001-CASSANDRA-4141.patch
>
>
> I get the following error while setting the row cache to be 1500 MB
> INFO 23:27:25,416 Initializing row cache with capacity of 1500 MBs and 
> provider org.apache.cassandra.cache.SerializingCacheProvider
> java.lang.OutOfMemoryError: Java heap space
> Dumping heap to java_pid26402.hprof ...
> I haven't spent a lot of time looking into the issue, but it looks like the 
> SerializingCache constructor has 
> .initialCapacity(capacity)
> .maximumWeightedCapacity(capacity)
> where capacity is 1500 MB





[jira] [Assigned] (CASSANDRA-4140) Build stress classes in a location that allows tools/stress/bin/stress to find them

2012-04-11 Thread Jonathan Ellis (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-4140:
-

Assignee: Vijay

> Build stress classes in a location that allows tools/stress/bin/stress to 
> find them
> ---
>
> Key: CASSANDRA-4140
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4140
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Affects Versions: 1.2
>Reporter: Nick Bailey
>Assignee: Vijay
>Priority: Trivial
> Fix For: 1.2
>
>
> Right now it's hard to run stress from a checkout of trunk. You need to do 
> 'ant artifacts' and then run the stress tool in the generated artifacts.
> A discussion on irc came up with the proposal to just move stress to the main 
> jar, put the stress/stressd bash scripts in bin/, and drop the tools 
> directory altogether. It will be easier for users to find that way and will 
> make running stress from a checkout much easier.





[jira] [Assigned] (CASSANDRA-4141) Looks like Serializing cache broken in 1.1

2012-04-11 Thread Vijay (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay reassigned CASSANDRA-4141:


Assignee: Vijay

> Looks like Serializing cache broken in 1.1
> --
>
> Key: CASSANDRA-4141
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4141
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.1.0
>Reporter: Vijay
>Assignee: Vijay
> Fix For: 1.1.0
>
> Attachments: 0001-CASSANDRA-4141.patch
>
>
> I get the following error while setting the row cache to be 1500 MB
> INFO 23:27:25,416 Initializing row cache with capacity of 1500 MBs and 
> provider org.apache.cassandra.cache.SerializingCacheProvider
> java.lang.OutOfMemoryError: Java heap space
> Dumping heap to java_pid26402.hprof ...
> I haven't spent a lot of time looking into the issue, but it looks like the 
> SerializingCache constructor has 
> .initialCapacity(capacity)
> .maximumWeightedCapacity(capacity)
> where capacity is 1500 MB





[jira] [Updated] (CASSANDRA-4141) Looks like Serializing cache broken in 1.1

2012-04-11 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4141:
-

Attachment: 0001-CASSANDRA-4141.patch

The attached patch fixes this issue... the changes to ConcurrentLinkedHashCache are 
not needed, but I thought keeping the default was better than setting it to 0.
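
A minimal sketch of the suspected problem and the shape of the fix, assuming the 
concurrentlinkedhashmap-lru Builder API named in the report (initialCapacity / 
maximumWeightedCapacity); the generic types and the wrapper class are 
illustrative, not the actual SerializingCache code:

{code}
import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;

public class SerializingCacheCapacitySketch
{
    public static void main(String[] args)
    {
        int capacityInBytes = 1500 * 1024 * 1024; // a 1500 MB row cache, expressed in bytes

        // Suspected bug: initialCapacity is an expected *entry count*, so reusing the
        // byte-based capacity pre-sizes the map for ~1.5 billion entries and the JVM
        // hits OutOfMemoryError while the cache is still being initialized.
        ConcurrentLinkedHashMap<String, byte[]> broken =
                new ConcurrentLinkedHashMap.Builder<String, byte[]>()
                        .initialCapacity(capacityInBytes)
                        .maximumWeightedCapacity(capacityInBytes)
                        .build();

        // Shape of the fix: leave initialCapacity at its small default and bound the
        // cache only by weighted capacity (the real cache also supplies a weigher so
        // that the weighted capacity is measured in bytes).
        ConcurrentLinkedHashMap<String, byte[]> fixed =
                new ConcurrentLinkedHashMap.Builder<String, byte[]>()
                        .maximumWeightedCapacity(capacityInBytes)
                        .build();
    }
}
{code}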

> Looks like Serializing cache broken in 1.1
> --
>
> Key: CASSANDRA-4141
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4141
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.1.0
>Reporter: Vijay
> Fix For: 1.1.0
>
> Attachments: 0001-CASSANDRA-4141.patch
>
>
> I get the following error while setting the row cache to be 1500 MB
> INFO 23:27:25,416 Initializing row cache with capacity of 1500 MBs and 
> provider org.apache.cassandra.cache.SerializingCacheProvider
> java.lang.OutOfMemoryError: Java heap space
> Dumping heap to java_pid26402.hprof ...
> I haven't spent a lot of time looking into the issue, but it looks like the 
> SerializingCache constructor has 
> .initialCapacity(capacity)
> .maximumWeightedCapacity(capacity)
> where capacity is 1500 MB





[jira] [Created] (CASSANDRA-4141) Looks like Serializing cache broken in 1.1

2012-04-11 Thread Vijay (Created) (JIRA)
Looks like Serializing cache broken in 1.1
--

 Key: CASSANDRA-4141
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4141
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
Reporter: Vijay
 Fix For: 1.1.0


I get the following error while setting the row cache to be 1500 MB

INFO 23:27:25,416 Initializing row cache with capacity of 1500 MBs and provider 
org.apache.cassandra.cache.SerializingCacheProvider
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid26402.hprof ...

I haven't spent a lot of time looking into the issue, but it looks like the 
SerializingCache constructor has 

.initialCapacity(capacity)
.maximumWeightedCapacity(capacity)

where capacity is 1500 MB





[jira] [Updated] (CASSANDRA-4140) Build stress classes in a location that allows tools/stress/bin/stress to find them

2012-04-11 Thread Nick Bailey (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Bailey updated CASSANDRA-4140:
---

Summary: Build stress classes in a location that allows 
tools/stress/bin/stress to find them  (was: Move stress to main jar)

You've convinced me. Updated title to reflect what I really would like this 
ticket to accomplish.

> Build stress classes in a location that allows tools/stress/bin/stress to 
> find them
> ---
>
> Key: CASSANDRA-4140
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4140
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Affects Versions: 1.2
>Reporter: Nick Bailey
>Priority: Trivial
> Fix For: 1.2
>
>
> Right now it's hard to run stress from a checkout of trunk. You need to do 
> 'ant artifacts' and then run the stress tool in the generated artifacts.
> A discussion on irc came up with the proposal to just move stress to the main 
> jar, put the stress/stressd bash scripts in bin/, and drop the tools 
> directory altogether. It will be easier for users to find that way and will 
> make running stress from a checkout much easier.





[jira] [Commented] (CASSANDRA-2319) Promote row index

2012-04-11 Thread Jonathan Ellis (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13252058#comment-13252058
 ] 

Jonathan Ellis commented on CASSANDRA-2319:
---

bq. I don't see an easy way to merge those 2 settings into 1 if that was what 
you were hinting to.

Yes, that's where I was going.

What if we dropped the "main" index and just kept the "sample" index of every 
1/128 columns?  Seems like we'd trade a little more seq i/o to do less random 
i/o, and we'd be able to get rid of the index sampling phase on startup...
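
For illustration, a toy sketch (plain Java, not Cassandra code) of what that 
lookup could look like: keep only every 128th entry of the per-row column index, 
binary-search the sample, and scan forward sequentially from the nearest 
preceding sample:

{code}
import java.util.ArrayList;
import java.util.List;

class SampledColumnIndex
{
    static final int SAMPLE_EVERY = 128;

    static final class Entry
    {
        final String columnName;
        final long offset; // position of this column in the data file

        Entry(String columnName, long offset)
        {
            this.columnName = columnName;
            this.offset = offset;
        }
    }

    private final List<Entry> samples = new ArrayList<Entry>();

    // Keep every 128th entry of the full, sorted per-row index.
    SampledColumnIndex(List<Entry> fullIndex)
    {
        for (int i = 0; i < fullIndex.size(); i += SAMPLE_EVERY)
            samples.add(fullIndex.get(i));
    }

    // Where to start a sequential scan for 'name': the offset of the greatest
    // sampled column <= name, or the start of the row if no sample precedes it.
    long scanStartOffset(String name, long rowStartOffset)
    {
        long start = rowStartOffset;
        int lo = 0, hi = samples.size() - 1;
        while (lo <= hi)
        {
            int mid = (lo + hi) >>> 1;
            if (samples.get(mid).columnName.compareTo(name) <= 0)
            {
                start = samples.get(mid).offset;
                lo = mid + 1;
            }
            else
            {
                hi = mid - 1;
            }
        }
        return start; // the reader seeks here once and scans at most ~128 columns
    }
}
{code}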

> Promote row index
> -
>
> Key: CASSANDRA-2319
> URL: https://issues.apache.org/jira/browse/CASSANDRA-2319
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Stu Hood
>Assignee: Sylvain Lebresne
>  Labels: index, timeseries
> Fix For: 1.2
>
> Attachments: 2319-v1.tgz, 2319-v2.tgz, promotion.pdf, version-f.txt, 
> version-g-lzf.txt, version-g.txt
>
>
> The row index contains entries for configurably sized blocks of a wide row. 
> For a row of appreciable size, the row index ends up directing the third seek 
> (1. index, 2. row index, 3. content) to nearby the first column of a scan.
> Since the row index is always used for wide rows, and since it contains 
> information that tells us whether or not the 3rd seek is necessary (the 
> column range or name we are trying to slice may not exist in a given 
> sstable), promoting the row index into the sstable index would allow us to 
> drop the maximum number of seeks for wide rows back to 2, and, more 
> importantly, would allow sstables to be eliminated using only the index.
> An example usecase that benefits greatly from this change is time series data 
> in wide rows, where data is appended to the beginning or end of the row. Our 
> existing compaction strategy gets lucky and clusters the oldest data in the 
> oldest sstables: for queries to recently appended data, we would be able to 
> eliminate wide rows using only the sstable index, rather than needing to seek 
> into the data file to determine that it isn't interesting. For narrow rows, 
> this change would have no effect, as they will not reach the threshold for 
> indexing anyway.
> A first cut design for this change would look very similar to the file format 
> design proposed on #674: 
> http://wiki.apache.org/cassandra/FileFormatDesignDoc: row keys clustered, 
> column names clustered, and offsets clustered and delta encoded.





[jira] [Commented] (CASSANDRA-4140) Move stress to main jar

2012-04-11 Thread Jonathan Ellis (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13252032#comment-13252032
 ] 

Jonathan Ellis commented on CASSANDRA-4140:
---

bq. I do think moving things like you mentioned into tools makes them less 
'discoverable'

I'm totally fine with that.  In fact I think it's a feature: 99% of people 
using sstable2json are Doing It Wrong; it's meant to be a debugging tool, end 
of story.

> Move stress to main jar
> ---
>
> Key: CASSANDRA-4140
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4140
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Affects Versions: 1.2
>Reporter: Nick Bailey
>Priority: Trivial
> Fix For: 1.2
>
>
> Right now it's hard to run stress from a checkout of trunk. You need to do 
> 'ant artifacts' and then run the stress tool in the generated artifacts.
> A discussion on irc came up with the proposal to just move stress to the main 
> jar, put the stress/stressd bash scripts in bin/, and drop the tools 
> directory altogether. It will be easier for users to find that way and will 
> make running stress from a checkout much easier.





[jira] [Commented] (CASSANDRA-4140) Move stress to main jar

2012-04-11 Thread Nick Bailey (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13252029#comment-13252029
 ] 

Nick Bailey commented on CASSANDRA-4140:


bq. Wow, that's a definition of "hard" I'm unfamiliar with.

Hmm maybe I should have gone with confusing :). Plus I'm lazy. I don't want to 
wait for artifacts to build the javadoc AND then make 
build/dist/tools/stress/bin/stress executable so I can run it. C'mon, two whole 
extra steps!!??

Really though fixing that ^ was the spirit of this ticket. Moving stress was 
just a suggestion from irc. I would be fine if 'ant stress-build' put the built 
files in a place where tools/stress/bin/stress can find them.

Having said that I do think moving things like you mentioned into tools makes 
them less 'discoverable'. And moving stress would make it more 'discoverable'. 

> Move stress to main jar
> ---
>
> Key: CASSANDRA-4140
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4140
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Affects Versions: 1.2
>Reporter: Nick Bailey
>Priority: Trivial
> Fix For: 1.2
>
>
> Right now it's hard to run stress from a checkout of trunk. You need to do 
> 'ant artifacts' and then run the stress tool in the generated artifacts.
> A discussion on irc came up with the proposal to just move stress to the main 
> jar, put the stress/stressd bash scripts in bin/, and drop the tools 
> directory altogether. It will be easier for users to find that way and will 
> make running stress from a checkout much easier.





[jira] [Commented] (CASSANDRA-4140) Move stress to main jar

2012-04-11 Thread Jonathan Ellis (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13252028#comment-13252028
 ] 

Jonathan Ellis commented on CASSANDRA-4140:
---

To clarify, stress build is broken post-CASSANDRA-4103, but "let's stuff 
everything in one big jar" is not my preferred solution.

> Move stress to main jar
> ---
>
> Key: CASSANDRA-4140
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4140
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Affects Versions: 1.2
>Reporter: Nick Bailey
>Priority: Trivial
> Fix For: 1.2
>
>
> Right now it's hard to run stress from a checkout of trunk. You need to do 
> 'ant artifacts' and then run the stress tool in the generated artifacts.
> A discussion on irc came up with the proposal to just move stress to the main 
> jar, put the stress/stressd bash scripts in bin/, and drop the tools 
> directory altogether. It will be easier for users to find that way and will 
> make running stress from a checkout much easier.





[jira] [Commented] (CASSANDRA-4140) Move stress to main jar

2012-04-11 Thread Jonathan Ellis (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13252027#comment-13252027
 ] 

Jonathan Ellis commented on CASSANDRA-4140:
---

bq. Right now its hard to run stress from a checkout of trunk

Wow, that's a definition of "hard" I'm unfamiliar with. :)

I would rather move *more* non-core things to tools/ (sstable export/import, 
sstablekeys, sstableloader), than the other way around.

> Move stress to main jar
> ---
>
> Key: CASSANDRA-4140
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4140
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Affects Versions: 1.2
>Reporter: Nick Bailey
>Priority: Trivial
> Fix For: 1.2
>
>
> Right now it's hard to run stress from a checkout of trunk. You need to do 
> 'ant artifacts' and then run the stress tool in the generated artifacts.
> A discussion on irc came up with the proposal to just move stress to the main 
> jar, put the stress/stressd bash scripts in bin/, and drop the tools 
> directory altogether. It will be easier for users to find that way and will 
> make running stress from a checkout much easier.





[jira] [Created] (CASSANDRA-4140) Move stress to main jar

2012-04-11 Thread Nick Bailey (Created) (JIRA)
Move stress to main jar
---

 Key: CASSANDRA-4140
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4140
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Affects Versions: 1.2
Reporter: Nick Bailey
Priority: Trivial
 Fix For: 1.2


Right now it's hard to run stress from a checkout of trunk. You need to do 'ant 
artifacts' and then run the stress tool in the generated artifacts.

A discussion on irc came up with the proposal to just move stress to the main 
jar, put the stress/stressd bash scripts in bin/, and drop the tools directory 
altogether. It will be easier for users to find that way and will make running 
stress from a checkout much easier.





[jira] [Issue Comment Edited] (CASSANDRA-3974) Per-CF TTL

2012-04-11 Thread Kirk True (Issue Comment Edited) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13251966#comment-13251966
 ] 

Kirk True edited comment on CASSANDRA-3974 at 4/11/12 9:57 PM:
---

In the initial patch, I had made changes to both 
{{UpdateStatement.addToMutation}} and {{ColumnFamily.addColumn}} to use the 
larger of the column's TTL or the column family default TTL. I tested against 
the {{cassandra-cli}} and {{cqlsh}} tools and both show the default TTL being 
used if none is specified.

This is all to say that it _looks_ like both the Thrift and CQL paths are 
working as expected. Perhaps it's high time I found the unit tests and added 
some...

  was (Author: kirktrue):
I made changes to both {{UpdateStatement.addToMutation}} and 
{{ColumnFamily.addColumn}} to use the larger of the column's TTL or the column 
family default TTL. I tested against the {{cassandra-cli}} and {{cqlsh}} tools 
and both show the default TTL being used if none is specified.

This is all to say that it _looks_ like both the Thrift and CQL paths are 
working as expected. Perhaps it's high time I found the unit tests and added 
some...
  
> Per-CF TTL
> --
>
> Key: CASSANDRA-3974
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3974
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jonathan Ellis
>Assignee: Kirk True
>Priority: Minor
> Fix For: 1.2
>
> Attachments: trunk-3974.txt
>
>
> Per-CF TTL would allow compaction optimizations ("drop an entire sstable's 
> worth of expired data") that we can't do with per-column.





[jira] [Commented] (CASSANDRA-3974) Per-CF TTL

2012-04-11 Thread Kirk True (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13251966#comment-13251966
 ] 

Kirk True commented on CASSANDRA-3974:
--

I made changes to both {{UpdateStatement.addToMutation}} and 
{{ColumnFamily.addColumn}} to use the larger of the column's TTL or the column 
family default TTL. I tested against the {{cassandra-cli}} and {{cqlsh}} tools 
and both show the default TTL being used if none is specified.

This is all to say that it _looks_ like both the Thrift and CQL paths are 
working as expected. Perhaps it's high time I found the unit tests and added 
some...
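
A minimal sketch of the rule described above (not the actual patch; the class and 
method names are illustrative):

{code}
class DefaultTtlSketch
{
    // Use the larger of the column's own TTL and the CF default, so a column
    // written with no TTL (0) picks up the column family's default TTL while an
    // explicit, larger TTL is kept as-is.
    static int resolveTtl(int columnTtl, int cfDefaultTtl)
    {
        return Math.max(columnTtl, cfDefaultTtl);
    }

    public static void main(String[] args)
    {
        System.out.println(resolveTtl(0, 86400));      // no explicit TTL -> 86400 (CF default)
        System.out.println(resolveTtl(604800, 86400)); // explicit one-week TTL wins
    }
}
{code}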

> Per-CF TTL
> --
>
> Key: CASSANDRA-3974
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3974
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jonathan Ellis
>Assignee: Kirk True
>Priority: Minor
> Fix For: 1.2
>
> Attachments: trunk-3974.txt
>
>
> Per-CF TTL would allow compaction optimizations ("drop an entire sstable's 
> worth of expired data") that we can't do with per-column.





git commit: underscores.

2012-04-11 Thread brandonwilliams
Updated Branches:
  refs/heads/trunk 0cf35dc62 -> b289de1a7


underscores.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b289de1a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b289de1a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b289de1a

Branch: refs/heads/trunk
Commit: b289de1a7cf3ed491ee952f1c177e11d13cd04be
Parents: 0cf35dc
Author: Brandon Williams 
Authored: Wed Apr 11 15:14:27 2012 -0500
Committer: Brandon Williams 
Committed: Wed Apr 11 15:14:27 2012 -0500

--
 .../apache/cassandra/service/StorageService.java   |6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b289de1a/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 4ec5de3..e668c1c 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -1530,7 +1530,7 @@ public class StorageService implements 
IEndpointStateChangeSubscriber, StorageSe
 
 public void onFailure()
 {
-logger_.warn("Streaming from " + source + " failed");
+logger.warn("Streaming from " + source + " failed");
 onSuccess(); // calling onSuccess to send notification
 }
 };
@@ -2846,7 +2846,7 @@ public class StorageService implements 
IEndpointStateChangeSubscriber, StorageSe
 
 public void onFailure()
 {
-logger_.warn("Streaming to " + endPointEntry + " 
failed");
+logger.warn("Streaming to " + endPointEntry + " 
failed");
 onSuccess(); // calling onSuccess for latch countdown
 }
 };
@@ -2902,7 +2902,7 @@ public class StorageService implements 
IEndpointStateChangeSubscriber, StorageSe
 
 public void onFailure()
 {
-logger_.warn("Streaming from " + source + " failed");
+logger.warn("Streaming from " + source + " failed");
 onSuccess(); // calling onSuccess for latch countdown
 }
 };



[5/5] git commit: Add failure callbacks for outgoing streams. Patch by Yuki Morishita, reviewed by brandonwilliams for CASSANDRA-4051

2012-04-11 Thread brandonwilliams
Add failure callbacks for outgoing streams.
Patch by Yuki Morishita, reviewed by brandonwilliams for CASSANDRA-4051


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/34c1fc0b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/34c1fc0b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/34c1fc0b

Branch: refs/heads/cassandra-1.1
Commit: 34c1fc0b7cbdf568ec7869a564fe614f96dcbe9b
Parents: 97aa922
Author: Brandon Williams 
Authored: Wed Apr 11 15:06:45 2012 -0500
Committer: Brandon Williams 
Committed: Wed Apr 11 15:06:45 2012 -0500

--
 .../org/apache/cassandra/dht/RangeStreamer.java|6 -
 .../apache/cassandra/io/sstable/SSTableLoader.java |6 -
 .../apache/cassandra/service/StorageService.java   |   19 --
 .../apache/cassandra/streaming/FileStreamTask.java |7 +
 .../cassandra/streaming/StreamInSession.java   |   12 -
 5 files changed, 43 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/34c1fc0b/src/java/org/apache/cassandra/dht/RangeStreamer.java
--
diff --git a/src/java/org/apache/cassandra/dht/RangeStreamer.java 
b/src/java/org/apache/cassandra/dht/RangeStreamer.java
index dac05cf..6f7beb0 100644
--- a/src/java/org/apache/cassandra/dht/RangeStreamer.java
+++ b/src/java/org/apache/cassandra/dht/RangeStreamer.java
@@ -230,7 +230,11 @@ public class RangeStreamer
  source, table, opType, latch.getCount()));
 }
 
-public void onFailure() {}
+public void onFailure()
+{
+logger.warn("Streaming from " + source + " failed");
+onSuccess(); // calling onSuccess for latch countdown
+}
 };
 if (logger.isDebugEnabled())
 logger.debug("" + opType + "ing from " + source + " ranges " + 
StringUtils.join(ranges, ", "));

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34c1fc0b/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
index 85b5146..79259ec 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
@@ -227,7 +227,11 @@ public class SSTableLoader
 client.stop();
 }
 
-public void onFailure() {}
+public void onFailure()
+{
+outputHandler.output(String.format("Streaming session to %s 
failed", endpoint));
+onSuccess(); // call onSuccess for latch countdown
+}
 }
 
 public interface OutputHandler

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34c1fc0b/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 84c0096..88b9c19 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -1502,7 +1502,11 @@ public class StorageService implements 
IEndpointStateChangeSubscriber, StorageSe
 }
 }
 
-public void onFailure() {}
+public void onFailure()
+{
+logger_.warn("Streaming from " + source + " failed");
+onSuccess(); // calling onSuccess to send notification
+}
 };
 if (logger_.isDebugEnabled())
 logger_.debug("Requesting from " + source + " ranges " + 
StringUtils.join(ranges, ", "));
@@ -2813,7 +2817,12 @@ public class StorageService implements 
IEndpointStateChangeSubscriber, StorageSe
 latch.countDown();
 }
 }
-public void onFailure() {}
+
+public void onFailure()
+{
+logger_.warn("Streaming to " + endPointEntry + " 
failed");
+onSuccess(); // calling onSuccess for latch countdown
+}
 };
 
 StageManager.getStage(Stage.STREAM).execute(new Runnable()
@@ -2865,7 +2874,11 @@ public class StorageService implements 
IEndpointStateChangeSubscriber, StorageSe
 latch.countDown();
 }

[3/5] git commit: Merge branch 'cassandra-1.1.0' into cassandra-1.1

2012-04-11 Thread brandonwilliams
Merge branch 'cassandra-1.1.0' into cassandra-1.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8909e1a9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8909e1a9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8909e1a9

Branch: refs/heads/trunk
Commit: 8909e1a92bf729d7644beb01f5189f050c8a
Parents: bf4d6fb 34c1fc0
Author: Brandon Williams 
Authored: Wed Apr 11 15:07:48 2012 -0500
Committer: Brandon Williams 
Committed: Wed Apr 11 15:07:48 2012 -0500

--
 .../org/apache/cassandra/dht/RangeStreamer.java|6 -
 .../apache/cassandra/io/sstable/SSTableLoader.java |6 -
 .../apache/cassandra/service/StorageService.java   |   19 --
 .../apache/cassandra/streaming/FileStreamTask.java |7 +
 .../cassandra/streaming/StreamInSession.java   |   12 -
 5 files changed, 43 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8909e1a9/src/java/org/apache/cassandra/service/StorageService.java
--



[4/5] git commit: Add failure callbacks for outgoing streams. Patch by Yuki Morishita, reviewed by brandonwilliams for CASSANDRA-4051

2012-04-11 Thread brandonwilliams
Add failure callbacks for outgoing streams.
Patch by Yuki Morishita, reviewed by brandonwilliams for CASSANDRA-4051


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/34c1fc0b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/34c1fc0b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/34c1fc0b

Branch: refs/heads/trunk
Commit: 34c1fc0b7cbdf568ec7869a564fe614f96dcbe9b
Parents: 97aa922
Author: Brandon Williams 
Authored: Wed Apr 11 15:06:45 2012 -0500
Committer: Brandon Williams 
Committed: Wed Apr 11 15:06:45 2012 -0500

--
 .../org/apache/cassandra/dht/RangeStreamer.java|6 -
 .../apache/cassandra/io/sstable/SSTableLoader.java |6 -
 .../apache/cassandra/service/StorageService.java   |   19 --
 .../apache/cassandra/streaming/FileStreamTask.java |7 +
 .../cassandra/streaming/StreamInSession.java   |   12 -
 5 files changed, 43 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/34c1fc0b/src/java/org/apache/cassandra/dht/RangeStreamer.java
--
diff --git a/src/java/org/apache/cassandra/dht/RangeStreamer.java 
b/src/java/org/apache/cassandra/dht/RangeStreamer.java
index dac05cf..6f7beb0 100644
--- a/src/java/org/apache/cassandra/dht/RangeStreamer.java
+++ b/src/java/org/apache/cassandra/dht/RangeStreamer.java
@@ -230,7 +230,11 @@ public class RangeStreamer
  source, table, opType, latch.getCount()));
 }
 
-public void onFailure() {}
+public void onFailure()
+{
+logger.warn("Streaming from " + source + " failed");
+onSuccess(); // calling onSuccess for latch countdown
+}
 };
 if (logger.isDebugEnabled())
 logger.debug("" + opType + "ing from " + source + " ranges " + 
StringUtils.join(ranges, ", "));

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34c1fc0b/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
index 85b5146..79259ec 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
@@ -227,7 +227,11 @@ public class SSTableLoader
 client.stop();
 }
 
-public void onFailure() {}
+public void onFailure()
+{
+outputHandler.output(String.format("Streaming session to %s 
failed", endpoint));
+onSuccess(); // call onSuccess for latch countdown
+}
 }
 
 public interface OutputHandler

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34c1fc0b/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 84c0096..88b9c19 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -1502,7 +1502,11 @@ public class StorageService implements 
IEndpointStateChangeSubscriber, StorageSe
 }
 }
 
-public void onFailure() {}
+public void onFailure()
+{
+logger_.warn("Streaming from " + source + " failed");
+onSuccess(); // calling onSuccess to send notification
+}
 };
 if (logger_.isDebugEnabled())
 logger_.debug("Requesting from " + source + " ranges " + 
StringUtils.join(ranges, ", "));
@@ -2813,7 +2817,12 @@ public class StorageService implements 
IEndpointStateChangeSubscriber, StorageSe
 latch.countDown();
 }
 }
-public void onFailure() {}
+
+public void onFailure()
+{
+logger_.warn("Streaming to " + endPointEntry + " 
failed");
+onSuccess(); // calling onSuccess for latch countdown
+}
 };
 
 StageManager.getStage(Stage.STREAM).execute(new Runnable()
@@ -2865,7 +2874,11 @@ public class StorageService implements 
IEndpointStateChangeSubscriber, StorageSe
 latch.countDown();
 }
 
- 

[1/5] git commit: Merge branch 'cassandra-1.1' into trunk

2012-04-11 Thread brandonwilliams
Updated Branches:
  refs/heads/cassandra-1.1 bf4d6fbbd -> 8909e1a92
  refs/heads/trunk 8cd1792e1 -> 0cf35dc62


Merge branch 'cassandra-1.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0cf35dc6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0cf35dc6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0cf35dc6

Branch: refs/heads/trunk
Commit: 0cf35dc624eed8541dbc928595a7e96d94ead2bd
Parents: 8cd1792 8909e1a
Author: Brandon Williams 
Authored: Wed Apr 11 15:08:31 2012 -0500
Committer: Brandon Williams 
Committed: Wed Apr 11 15:08:31 2012 -0500

--
 .../org/apache/cassandra/dht/RangeStreamer.java|6 -
 .../apache/cassandra/io/sstable/SSTableLoader.java |6 -
 .../apache/cassandra/service/StorageService.java   |   19 --
 .../apache/cassandra/streaming/FileStreamTask.java |7 +
 .../cassandra/streaming/StreamInSession.java   |   12 -
 5 files changed, 43 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0cf35dc6/src/java/org/apache/cassandra/dht/RangeStreamer.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0cf35dc6/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0cf35dc6/src/java/org/apache/cassandra/service/StorageService.java
--
diff --cc src/java/org/apache/cassandra/service/StorageService.java
index 4b12383,5e12364..4ec5de3
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@@ -1528,10 -1508,14 +1528,14 @@@ public class StorageService implements 
  }
  }
  
- public void onFailure() {}
+ public void onFailure()
+ {
+ logger_.warn("Streaming from " + source + " failed");
+ onSuccess(); // calling onSuccess to send notification
+ }
  };
 -if (logger_.isDebugEnabled())
 -logger_.debug("Requesting from " + source + " ranges " + 
StringUtils.join(ranges, ", "));
 +if (logger.isDebugEnabled())
 +logger.debug("Requesting from " + source + " ranges " + 
StringUtils.join(ranges, ", "));
  StreamIn.requestRanges(source, table, ranges, callback, 
OperationType.RESTORE_REPLICA_COUNT);
  }
  }
@@@ -2891,11 -2880,15 +2900,15 @@@
  latch.countDown();
  }
  
- public void onFailure() {}
+ public void onFailure()
+ {
+ logger_.warn("Streaming from " + source + " failed");
+ onSuccess(); // calling onSuccess for latch countdown
+ }
  };
  
 -if (logger_.isDebugEnabled())
 -logger_.debug("Requesting from " + source + " ranges " + 
StringUtils.join(toFetch, ", "));
 +if (logger.isDebugEnabled())
 +logger.debug("Requesting from " + source + " ranges " + 
StringUtils.join(toFetch, ", "));
  
  // sending actual request
  StreamIn.requestRanges(source, table, toFetch, callback, 
OperationType.BOOTSTRAP);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0cf35dc6/src/java/org/apache/cassandra/streaming/FileStreamTask.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0cf35dc6/src/java/org/apache/cassandra/streaming/StreamInSession.java
--



[2/5] git commit: Merge branch 'cassandra-1.1.0' into cassandra-1.1

2012-04-11 Thread brandonwilliams
Merge branch 'cassandra-1.1.0' into cassandra-1.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8909e1a9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8909e1a9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8909e1a9

Branch: refs/heads/cassandra-1.1
Commit: 8909e1a92bf729d7644beb01f5189f050c8a
Parents: bf4d6fb 34c1fc0
Author: Brandon Williams 
Authored: Wed Apr 11 15:07:48 2012 -0500
Committer: Brandon Williams 
Committed: Wed Apr 11 15:07:48 2012 -0500

--
 .../org/apache/cassandra/dht/RangeStreamer.java|6 -
 .../apache/cassandra/io/sstable/SSTableLoader.java |6 -
 .../apache/cassandra/service/StorageService.java   |   19 --
 .../apache/cassandra/streaming/FileStreamTask.java |7 +
 .../cassandra/streaming/StreamInSession.java   |   12 -
 5 files changed, 43 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8909e1a9/src/java/org/apache/cassandra/service/StorageService.java
--



git commit: Add failure callbacks for outgoing streams. Patch by Yuki Morishita, reviewed by brandonwilliams for CASSANDRA-4051

2012-04-11 Thread brandonwilliams
Updated Branches:
  refs/heads/cassandra-1.1.0 97aa922a7 -> 34c1fc0b7


Add failure callbacks for outgoing streams.
Patch by Yuki Morishita, reviewed by brandonwilliams for CASSANDRA-4051


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/34c1fc0b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/34c1fc0b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/34c1fc0b

Branch: refs/heads/cassandra-1.1.0
Commit: 34c1fc0b7cbdf568ec7869a564fe614f96dcbe9b
Parents: 97aa922
Author: Brandon Williams 
Authored: Wed Apr 11 15:06:45 2012 -0500
Committer: Brandon Williams 
Committed: Wed Apr 11 15:06:45 2012 -0500

--
 .../org/apache/cassandra/dht/RangeStreamer.java|6 -
 .../apache/cassandra/io/sstable/SSTableLoader.java |6 -
 .../apache/cassandra/service/StorageService.java   |   19 --
 .../apache/cassandra/streaming/FileStreamTask.java |7 +
 .../cassandra/streaming/StreamInSession.java   |   12 -
 5 files changed, 43 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/34c1fc0b/src/java/org/apache/cassandra/dht/RangeStreamer.java
--
diff --git a/src/java/org/apache/cassandra/dht/RangeStreamer.java 
b/src/java/org/apache/cassandra/dht/RangeStreamer.java
index dac05cf..6f7beb0 100644
--- a/src/java/org/apache/cassandra/dht/RangeStreamer.java
+++ b/src/java/org/apache/cassandra/dht/RangeStreamer.java
@@ -230,7 +230,11 @@ public class RangeStreamer
  source, table, opType, latch.getCount()));
 }
 
-public void onFailure() {}
+public void onFailure()
+{
+logger.warn("Streaming from " + source + " failed");
+onSuccess(); // calling onSuccess for latch countdown
+}
 };
 if (logger.isDebugEnabled())
 logger.debug("" + opType + "ing from " + source + " ranges " + 
StringUtils.join(ranges, ", "));

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34c1fc0b/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
index 85b5146..79259ec 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
@@ -227,7 +227,11 @@ public class SSTableLoader
 client.stop();
 }
 
-public void onFailure() {}
+public void onFailure()
+{
+outputHandler.output(String.format("Streaming session to %s 
failed", endpoint));
+onSuccess(); // call onSuccess for latch countdown
+}
 }
 
 public interface OutputHandler

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34c1fc0b/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 84c0096..88b9c19 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -1502,7 +1502,11 @@ public class StorageService implements 
IEndpointStateChangeSubscriber, StorageSe
 }
 }
 
-public void onFailure() {}
+public void onFailure()
+{
+logger_.warn("Streaming from " + source + " failed");
+onSuccess(); // calling onSuccess to send notification
+}
 };
 if (logger_.isDebugEnabled())
 logger_.debug("Requesting from " + source + " ranges " + 
StringUtils.join(ranges, ", "));
@@ -2813,7 +2817,12 @@ public class StorageService implements 
IEndpointStateChangeSubscriber, StorageSe
 latch.countDown();
 }
 }
-public void onFailure() {}
+
+public void onFailure()
+{
+logger_.warn("Streaming to " + endPointEntry + " 
failed");
+onSuccess(); // calling onSuccess for latch countdown
+}
 };
 
 StageManager.getStage(Stage.STREAM).execute(new Runnable()
@@ -2865,7 +2874,11 @@ public class StorageService implements 
IEndpointStateChangeSubscriber, Storage

[jira] [Created] (CASSANDRA-4138) Add varint encoding to Serializing Cache

2012-04-11 Thread Vijay (Created) (JIRA)
Add varint encoding to Serializing Cache


 Key: CASSANDRA-4138
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4138
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Vijay
Assignee: Vijay
Priority: Minor








[jira] [Created] (CASSANDRA-4139) Add varint encoding to Messaging service

2012-04-11 Thread Vijay (Created) (JIRA)
Add varint encoding to Messaging service


 Key: CASSANDRA-4139
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4139
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Vijay
Assignee: Vijay








[jira] [Resolved] (CASSANDRA-3883) CFIF WideRowIterator only returns batch size columns

2012-04-11 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3883.
---

Resolution: Fixed

committed

> CFIF WideRowIterator only returns batch size columns
> 
>
> Key: CASSANDRA-3883
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3883
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
>Affects Versions: 1.1.0
>Reporter: Brandon Williams
>Assignee: Jonathan Ellis
> Fix For: 1.1.0
>
> Attachments: 3883-v1.txt, 3883-v2.txt, 3883-v3.txt
>
>
> Most evident with the word count, where there are 1250 'word1' items in two 
> rows (1000 in one, 250 in another) and it counts 198 with the batch size set 
> to 99.





[21/21] git commit: Fix get_paged_slice

2012-04-11 Thread jbellis
Fix get_paged_slice

patch by slebresne; reviewed by jbellis for CASSANDRA-4136


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fc7e8640
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fc7e8640
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fc7e8640

Branch: refs/heads/trunk
Commit: fc7e86404a27963071e416ff4deb0c7143e68bfc
Parents: c14e266
Author: Sylvain Lebresne 
Authored: Wed Apr 11 16:31:36 2012 +0200
Committer: Sylvain Lebresne 
Committed: Wed Apr 11 16:31:36 2012 +0200

--
 CHANGES.txt|1 +
 .../cassandra/cql3/statements/SelectStatement.java |3 +-
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   10 +-
 .../org/apache/cassandra/db/RangeSliceCommand.java |   23 +++--
 .../apache/cassandra/db/filter/ExtendedFilter.java |   30 --
 .../cassandra/db/filter/SliceQueryFilter.java  |3 +-
 .../cassandra/db/index/keys/KeysSearcher.java  |2 +-
 .../cassandra/service/RangeSliceVerbHandler.java   |2 +-
 .../org/apache/cassandra/service/StorageProxy.java |3 +-
 .../apache/cassandra/thrift/CassandraServer.java   |2 +-
 .../apache/cassandra/db/ColumnFamilyStoreTest.java |   92 +--
 11 files changed, 133 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fc7e8640/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 26315be..df030b9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -12,6 +12,7 @@
  * fix terminination of the stress.java when errors were encountered
(CASSANDRA-4128)
  * Move CfDef and KsDef validation out of thrift (CASSANDRA-4037)
+ * Fix get_paged_slice (CASSANDRA-4136)
 Merged from 1.0:
  * add auto_snapshot option allowing disabling snapshot before drop/truncate
(CASSANDRA-3710)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fc7e8640/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index b95d6ba..5bcd37a 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -285,7 +285,8 @@ public class SelectStatement implements CQLStatement
 bounds,
 
expressions,
 getLimit(),
-true), // limit by columns, not keys
+true, // limit by columns, not keys
+false),
   parameters.consistencyLevel);
 }
 catch (IOException e)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fc7e8640/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index cea2fee..a4e2e51 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1353,12 +1353,12 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 public List getRangeSlice(ByteBuffer superColumn, final AbstractBounds range, int maxResults, IFilter columnFilter, List rowFilter)
 {
-return getRangeSlice(superColumn, range, maxResults, columnFilter, rowFilter, false);
+return getRangeSlice(superColumn, range, maxResults, columnFilter, rowFilter, false, false);
 }
 
-public List getRangeSlice(ByteBuffer superColumn, final AbstractBounds range, int maxResults, IFilter columnFilter, List rowFilter, boolean maxIsColumns)
+public List getRangeSlice(ByteBuffer superColumn, final AbstractBounds range, int maxResults, IFilter columnFilter, List rowFilter, boolean maxIsColumns, boolean isPaging)
 {
-return filter(getSequentialIterator(superColumn, range, columnFilter), ExtendedFilter.create(this, columnFilter, rowFilter, maxResults, maxIsColumns));
+return filter(getSequentialIterator(superColumn, range, columnFilter), ExtendedFilter.create(this, columnFilter, rowFilter, maxResults, maxIsColumns, isPaging));
 }
 
 public List search(List clause, 
AbstractBounds range, int max
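
Unwrapped, the ColumnFamilyStore part of this patch is the standard way to add a behavioural flag without touching existing callers: the widest getRangeSlice overload grows a new boolean (isPaging) that is threaded into ExtendedFilter.create, and the narrower overload simply passes false. A stripped-down illustration of that shape, with hypothetical simplified types (the real signatures carry the generics the mangled diff dropped, and presumably only the get_paged_slice path passes true):

    import java.util.Collections;
    import java.util.List;

    // Sketch of widening an overload chain with a defaulted flag.
    class RangeSliceFacade
    {
        // Pre-existing entry point keeps its signature; existing callers are untouched.
        public List<Object> getRangeSlice(Object range, int maxResults, boolean maxIsColumns)
        {
            return getRangeSlice(range, maxResults, maxIsColumns, false); // isPaging defaults to false
        }

        // New widest overload; only the paged-slice path would pass true.
        public List<Object> getRangeSlice(Object range, int maxResults, boolean maxIsColumns, boolean isPaging)
        {
            // The real method forwards isPaging into ExtendedFilter.create(...); elided here.
            return Collections.emptyList();
        }
    }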

[13/21] git commit: add log4j properties

2012-04-11 Thread jbellis
add log4j properties


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7321adf4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7321adf4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7321adf4

Branch: refs/heads/trunk
Commit: 7321adf4326381ab7ce89346dbd0703a56af0c4e
Parents: fd3bfac
Author: Jonathan Ellis 
Authored: Tue Apr 10 15:58:00 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 13:24:32 2012 -0500

--
 examples/hadoop_word_count/README.txt|   13 +
 examples/hadoop_word_count/bin/word_count|1 +
 examples/hadoop_word_count/conf/log4j.properties |   15 +++
 examples/hadoop_word_count/src/WordCount.java|3 ++-
 4 files changed, 31 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7321adf4/examples/hadoop_word_count/README.txt
--
diff --git a/examples/hadoop_word_count/README.txt 
b/examples/hadoop_word_count/README.txt
index 1ec13f7..cf8a344 100644
--- a/examples/hadoop_word_count/README.txt
+++ b/examples/hadoop_word_count/README.txt
@@ -1,9 +1,16 @@
+Introduction
+
+
 WordCount hadoop example: Inserts a bunch of words across multiple rows,
 and counts them, with RandomPartitioner. The word_count_counters example sums
 the value of counter columns for a key.
 
 The scripts in bin/ assume you are running with cwd of contrib/word_count.
 
+
+Running
+===
+
 First build and start a Cassandra server with the default configuration*, 
 then run
 
@@ -32,3 +39,9 @@ is written to a text file in /tmp/word_count_counters.
 
 *If you want to point wordcount at a real cluster, modify the seed
 and listenaddress settings accordingly.
+
+
+Troubleshooting
+===
+
+word_count uses conf/log4j.properties to log to wc.out.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7321adf4/examples/hadoop_word_count/bin/word_count
--
diff --git a/examples/hadoop_word_count/bin/word_count 
b/examples/hadoop_word_count/bin/word_count
index 58f325e..a0c5aa0 100755
--- a/examples/hadoop_word_count/bin/word_count
+++ b/examples/hadoop_word_count/bin/word_count
@@ -30,6 +30,7 @@ if [ ! -e $cwd/../build/word_count.jar ]; then
 exit 1
 fi
 
+CLASSPATH=$CLASSPATH:$cwd/../conf
 CLASSPATH=$CLASSPATH:$cwd/../build/word_count.jar
 CLASSPATH=$CLASSPATH:$cwd/../../../build/classes/main
 CLASSPATH=$CLASSPATH:$cwd/../../../build/classes/thrift

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7321adf4/examples/hadoop_word_count/conf/log4j.properties
--
diff --git a/examples/hadoop_word_count/conf/log4j.properties 
b/examples/hadoop_word_count/conf/log4j.properties
new file mode 100644
index 000..070d21e
--- /dev/null
+++ b/examples/hadoop_word_count/conf/log4j.properties
@@ -0,0 +1,15 @@
+log4j.rootLogger=DEBUG,stdout,F
+
+#stdout
+log4j.appender.stdout=org.apache.log4j.ConsoleAppender
+log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
+log4j.appender.stdout.layout.ConversionPattern=%5p %d{HH:mm:ss,SSS} %m%n
+
+# log file
+log4j.appender.F=org.apache.log4j.FileAppender
+log4j.appender.F.Append=false
+log4j.appender.F.layout=org.apache.log4j.PatternLayout
+log4j.appender.F.layout.ConversionPattern=%5p [%t] %d{ISO8601} %F (line %L) 
%m%n
+# Edit the next line to point to your logs directory
+log4j.appender.F.File=wc.out
+

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7321adf4/examples/hadoop_word_count/src/WordCount.java
--
diff --git a/examples/hadoop_word_count/src/WordCount.java 
b/examples/hadoop_word_count/src/WordCount.java
index d3cee0e..96bcb1b 100644
--- a/examples/hadoop_word_count/src/WordCount.java
+++ b/examples/hadoop_word_count/src/WordCount.java
@@ -93,7 +93,8 @@ public class WordCount extends Configured implements Tool
 else
 value = ByteBufferUtil.string(column.value());

-System.err.println("read " + ByteBufferUtil.string(key) + ":" 
+name + ":" + value + " from " + context.getInputSplit());
+logger.debug("read {}:{}={} from {}",
+ new Object[] {ByteBufferUtil.string(key), name, 
value, context.getInputSplit()});
 
 StringTokenizer itr = new StringTokenizer(value);
 while (itr.hasMoreTokens())
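
One detail worth calling out in the WordCount change: the eager System.err concatenation is replaced by slf4j parameterized logging, with the arguments wrapped in an Object[] because more than two placeholders are involved. A small standalone sketch of the same pattern (hypothetical class name; assumes the slf4j API on the classpath, and the array form works whether or not the logger version has varargs overloads):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    // Sketch: parameterized slf4j logging with more than two arguments.
    class ReadLogger
    {
        private static final Logger logger = LoggerFactory.getLogger(ReadLogger.class);

        void logRead(String key, String name, String value, Object source)
        {
            // The message is only formatted if debug logging is enabled, unlike eager
            // string concatenation; the Object[] wrapper supplies all four placeholders.
            logger.debug("read {}:{}={} from {}", new Object[] {key, name, value, source});
        }
    }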



[14/21] git commit: add log4j properties

2012-04-11 Thread jbellis
add log4j properties


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7321adf4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7321adf4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7321adf4

Branch: refs/heads/cassandra-1.1.0
Commit: 7321adf4326381ab7ce89346dbd0703a56af0c4e
Parents: fd3bfac
Author: Jonathan Ellis 
Authored: Tue Apr 10 15:58:00 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 13:24:32 2012 -0500

--
 examples/hadoop_word_count/README.txt|   13 +
 examples/hadoop_word_count/bin/word_count|1 +
 examples/hadoop_word_count/conf/log4j.properties |   15 +++
 examples/hadoop_word_count/src/WordCount.java|3 ++-
 4 files changed, 31 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7321adf4/examples/hadoop_word_count/README.txt
--
diff --git a/examples/hadoop_word_count/README.txt 
b/examples/hadoop_word_count/README.txt
index 1ec13f7..cf8a344 100644
--- a/examples/hadoop_word_count/README.txt
+++ b/examples/hadoop_word_count/README.txt
@@ -1,9 +1,16 @@
+Introduction
+
+
 WordCount hadoop example: Inserts a bunch of words across multiple rows,
 and counts them, with RandomPartitioner. The word_count_counters example sums
 the value of counter columns for a key.
 
 The scripts in bin/ assume you are running with cwd of contrib/word_count.
 
+
+Running
+===
+
 First build and start a Cassandra server with the default configuration*, 
 then run
 
@@ -32,3 +39,9 @@ is written to a text file in /tmp/word_count_counters.
 
 *If you want to point wordcount at a real cluster, modify the seed
 and listenaddress settings accordingly.
+
+
+Troubleshooting
+===
+
+word_count uses conf/log4j.properties to log to wc.out.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7321adf4/examples/hadoop_word_count/bin/word_count
--
diff --git a/examples/hadoop_word_count/bin/word_count 
b/examples/hadoop_word_count/bin/word_count
index 58f325e..a0c5aa0 100755
--- a/examples/hadoop_word_count/bin/word_count
+++ b/examples/hadoop_word_count/bin/word_count
@@ -30,6 +30,7 @@ if [ ! -e $cwd/../build/word_count.jar ]; then
 exit 1
 fi
 
+CLASSPATH=$CLASSPATH:$cwd/../conf
 CLASSPATH=$CLASSPATH:$cwd/../build/word_count.jar
 CLASSPATH=$CLASSPATH:$cwd/../../../build/classes/main
 CLASSPATH=$CLASSPATH:$cwd/../../../build/classes/thrift

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7321adf4/examples/hadoop_word_count/conf/log4j.properties
--
diff --git a/examples/hadoop_word_count/conf/log4j.properties 
b/examples/hadoop_word_count/conf/log4j.properties
new file mode 100644
index 000..070d21e
--- /dev/null
+++ b/examples/hadoop_word_count/conf/log4j.properties
@@ -0,0 +1,15 @@
+log4j.rootLogger=DEBUG,stdout,F
+
+#stdout
+log4j.appender.stdout=org.apache.log4j.ConsoleAppender
+log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
+log4j.appender.stdout.layout.ConversionPattern=%5p %d{HH:mm:ss,SSS} %m%n
+
+# log file
+log4j.appender.F=org.apache.log4j.FileAppender
+log4j.appender.F.Append=false
+log4j.appender.F.layout=org.apache.log4j.PatternLayout
+log4j.appender.F.layout.ConversionPattern=%5p [%t] %d{ISO8601} %F (line %L) 
%m%n
+# Edit the next line to point to your logs directory
+log4j.appender.F.File=wc.out
+

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7321adf4/examples/hadoop_word_count/src/WordCount.java
--
diff --git a/examples/hadoop_word_count/src/WordCount.java 
b/examples/hadoop_word_count/src/WordCount.java
index d3cee0e..96bcb1b 100644
--- a/examples/hadoop_word_count/src/WordCount.java
+++ b/examples/hadoop_word_count/src/WordCount.java
@@ -93,7 +93,8 @@ public class WordCount extends Configured implements Tool
 else
 value = ByteBufferUtil.string(column.value());

-System.err.println("read " + ByteBufferUtil.string(key) + ":" 
+name + ":" + value + " from " + context.getInputSplit());
+logger.debug("read {}:{}={} from {}",
+ new Object[] {ByteBufferUtil.string(key), name, 
value, context.getInputSplit()});
 
 StringTokenizer itr = new StringTokenizer(value);
 while (itr.hasMoreTokens())



[19/21] git commit: Merge branch 'cassandra-1.1.0' into cassandra-1.1

2012-04-11 Thread jbellis
Merge branch 'cassandra-1.1.0' into cassandra-1.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/de80c6c6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/de80c6c6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/de80c6c6

Branch: refs/heads/trunk
Commit: de80c6c6d1d0ff11837bb1f3467b2902c5b841ff
Parents: 6564b33 d49113f
Author: Sylvain Lebresne 
Authored: Wed Apr 11 17:23:54 2012 +0200
Committer: Sylvain Lebresne 
Committed: Wed Apr 11 17:23:54 2012 +0200

--
 CHANGES.txt|2 +
 .../cassandra/cql3/statements/SelectStatement.java |   64 +--
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   10 +-
 .../org/apache/cassandra/db/RangeSliceCommand.java |   23 +++--
 .../apache/cassandra/db/filter/ExtendedFilter.java |   30 --
 .../cassandra/db/filter/SliceQueryFilter.java  |3 +-
 .../cassandra/db/index/keys/KeysSearcher.java  |2 +-
 .../cassandra/service/RangeSliceVerbHandler.java   |2 +-
 .../org/apache/cassandra/service/StorageProxy.java |3 +-
 .../apache/cassandra/thrift/CassandraServer.java   |2 +-
 .../apache/cassandra/db/ColumnFamilyStoreTest.java |   92 +--
 11 files changed, 186 insertions(+), 47 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/de80c6c6/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de80c6c6/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index b0d067c,0485857..b7c12dd
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@@ -600,11 -604,22 +605,22 @@@ public class SelectStatement implement
  if (c.isMarkedForDelete())
  continue;
  
 -thriftColumns = new ArrayList();
 +thriftColumns = new ArrayList(selection.size());
  
- ByteBuffer[] components = cfDef.isComposite
- ? 
((CompositeType)cfDef.cfm.comparator).split(c.name())
- : null;
+ ByteBuffer[] components = null;
+ 
+ if (cfDef.isComposite)
+ {
+ components = 
((CompositeType)cfDef.cfm.comparator).split(c.name());
+ }
+ else if (sliceRestriction != null)
+ {
+ // For dynamic CF, the column could be out of the 
requested bounds, filter here
+ if (!sliceRestriction.isInclusive(Bound.START) && 
c.name().equals(sliceRestriction.bound(Bound.START).getByteBuffer(cfDef.cfm.comparator,
 variables)))
+ continue;
+ if (!sliceRestriction.isInclusive(Bound.END) && 
c.name().equals(sliceRestriction.bound(Bound.END).getByteBuffer(cfDef.cfm.comparator,
 variables)))
+ continue;
+ }
  
  // Respect selection order
  for (Pair p : 
selection)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de80c6c6/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de80c6c6/src/java/org/apache/cassandra/service/RangeSliceVerbHandler.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de80c6c6/src/java/org/apache/cassandra/service/StorageProxy.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de80c6c6/src/java/org/apache/cassandra/thrift/CassandraServer.java
--



[17/21] git commit: make memory metering use an unbounded queue to avoid blocking the write path patch by pschuller and jbellis; reviewed by slebresne for CASSANDRA-4032

2012-04-11 Thread jbellis
make memory metering use an unbounded queue to avoid blocking the write path
patch by pschuller and jbellis; reviewed by slebresne for CASSANDRA-4032


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fd3bfac6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fd3bfac6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fd3bfac6

Branch: refs/heads/cassandra-1.1.0
Commit: fd3bfac6cbc487e36ac1c39740c5897e350d0d16
Parents: d49113f
Author: Jonathan Ellis 
Authored: Tue Apr 10 10:59:35 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 13:24:09 2012 -0500

--
 src/java/org/apache/cassandra/db/Memtable.java |   86 ++
 1 files changed, 48 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd3bfac6/src/java/org/apache/cassandra/db/Memtable.java
--
diff --git a/src/java/org/apache/cassandra/db/Memtable.java 
b/src/java/org/apache/cassandra/db/Memtable.java
index 81dac7c..d9e9570 100644
--- a/src/java/org/apache/cassandra/db/Memtable.java
+++ b/src/java/org/apache/cassandra/db/Memtable.java
@@ -42,6 +42,7 @@ import org.apache.cassandra.io.sstable.SSTableWriter;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.SlabAllocator;
 import org.apache.cassandra.utils.WrappedRunnable;
+import org.cliffc.high_scale_lib.NonBlockingHashSet;
 import org.github.jamm.MemoryMeter;
 
 public class Memtable
@@ -53,14 +54,17 @@ public class Memtable
 // max liveratio seen w/ 1-byte columns on a 64-bit jvm was 19. If it gets higher than 64 something is probably broken.
 private static final double MAX_SANE_LIVE_RATIO = 64.0;
 
-// we're careful to only allow one count to run at a time because counting is slow
-// (can be minutes, for a large memtable and a busy server), so we could keep memtables
-// alive after they're flushed and would otherwise be GC'd.
+// we want to limit the amount of concurrently running and/or queued meterings, because counting is slow (can be
+// minutes, for a large memtable and a busy server). so we could keep memtables
+// alive after they're flushed and would otherwise be GC'd. the approach we take is to bound the number of
+// outstanding/running meterings to a maximum of one per CFS using this set; the executor's queue is unbounded but
+// will implicitly be bounded by the number of CFS:s.
+private static final Set meteringInProgress = new NonBlockingHashSet();
 private static final ExecutorService meterExecutor = new DebuggableThreadPoolExecutor(1,
                                                                                        1,
                                                                                        Integer.MAX_VALUE,
                                                                                        TimeUnit.MILLISECONDS,
-                                                                                       new SynchronousQueue(),
+                                                                                       new LinkedBlockingQueue(),
                                                                                        new NamedThreadFactory("MemoryMeter"))
 {
 @Override
@@ -152,7 +156,7 @@ public class Memtable
 resolve(key, columnFamily);
 }
 
-public void updateLiveRatio()
+public void updateLiveRatio() throws RuntimeException
 {
 if (!MemoryMeter.isInitialized())
 {
@@ -162,50 +166,56 @@ public class Memtable
 return;
 }
 
+if (!meteringInProgress.add(cfs))
+{
+logger.debug("Metering already pending or active for {}; skipping 
liveRatio update", cfs);
+return;
+}
+
 Runnable runnable = new Runnable()
 {
 public void run()
 {
-activelyMeasuring = Memtable.this;
-
-long start = System.currentTimeMillis();
-// ConcurrentSkipListMap has cycles, so measureDeep will have 
to track a reference to EACH object it visits.
-// So to reduce the memory overhead of doing a measurement, we 
break it up to row-at-a-time.
-long deepSize = meter.measure(columnFamilies);
-int objects = 0;
-for (Map.Entry entry : 
columnFamilies.entrySet())
-{
-deepSize += meter.measureDeep(entry.getKey()) + 
meter.measureDeep(entry.getValue());
-objects += entry.getValue().getColumnCount();
-}
-double newRatio = (double) deepSize / currentThroughput.get();
-
-
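
The design described in the new comment block, restated outside the patch: counting is slow, so submissions are gated by a concurrent set keyed on the column family, which makes an unbounded executor queue safe because at most one task per CFS can ever be outstanding. Below is a minimal sketch of that guard pattern using plain JDK types; the class and method names are made up for illustration, and the real patch uses NonBlockingHashSet and DebuggableThreadPoolExecutor instead:

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    // Sketch only: at most one pending-or-running measurement task per key,
    // so an unbounded executor queue cannot grow past the number of keys.
    class MeteringGuard<K>
    {
        // Concurrent set standing in for the NonBlockingHashSet used in the patch.
        private final Set<K> inProgress = ConcurrentHashMap.newKeySet();

        // Single worker with an unbounded queue, as in the patch's meterExecutor.
        private final ExecutorService executor =
                new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());

        public void submit(final K key, final Runnable measurement)
        {
            if (!inProgress.add(key))
                return; // a measurement for this key is already queued or running; skip

            executor.execute(new Runnable()
            {
                public void run()
                {
                    try
                    {
                        measurement.run();
                    }
                    finally
                    {
                        // Presumed counterpart of the add() above; the quoted diff is cut off
                        // before the point where the real patch clears the entry.
                        inProgress.remove(key);
                    }
                }
            });
        }
    }

The "if (!meteringInProgress.add(cfs)) return;" hunk in the diff is exactly this guard, and the LinkedBlockingQueue replaces the SynchronousQueue whose saturation is presumably what previously stalled the write path.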

[15/21] git commit: add log4j properties

2012-04-11 Thread jbellis
add log4j properties


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7321adf4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7321adf4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7321adf4

Branch: refs/heads/cassandra-1.1
Commit: 7321adf4326381ab7ce89346dbd0703a56af0c4e
Parents: fd3bfac
Author: Jonathan Ellis 
Authored: Tue Apr 10 15:58:00 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 13:24:32 2012 -0500

--
 examples/hadoop_word_count/README.txt|   13 +
 examples/hadoop_word_count/bin/word_count|1 +
 examples/hadoop_word_count/conf/log4j.properties |   15 +++
 examples/hadoop_word_count/src/WordCount.java|3 ++-
 4 files changed, 31 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7321adf4/examples/hadoop_word_count/README.txt
--
diff --git a/examples/hadoop_word_count/README.txt 
b/examples/hadoop_word_count/README.txt
index 1ec13f7..cf8a344 100644
--- a/examples/hadoop_word_count/README.txt
+++ b/examples/hadoop_word_count/README.txt
@@ -1,9 +1,16 @@
+Introduction
+
+
 WordCount hadoop example: Inserts a bunch of words across multiple rows,
 and counts them, with RandomPartitioner. The word_count_counters example sums
 the value of counter columns for a key.
 
 The scripts in bin/ assume you are running with cwd of contrib/word_count.
 
+
+Running
+===
+
 First build and start a Cassandra server with the default configuration*, 
 then run
 
@@ -32,3 +39,9 @@ is written to a text file in /tmp/word_count_counters.
 
 *If you want to point wordcount at a real cluster, modify the seed
 and listenaddress settings accordingly.
+
+
+Troubleshooting
+===
+
+word_count uses conf/log4j.properties to log to wc.out.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7321adf4/examples/hadoop_word_count/bin/word_count
--
diff --git a/examples/hadoop_word_count/bin/word_count 
b/examples/hadoop_word_count/bin/word_count
index 58f325e..a0c5aa0 100755
--- a/examples/hadoop_word_count/bin/word_count
+++ b/examples/hadoop_word_count/bin/word_count
@@ -30,6 +30,7 @@ if [ ! -e $cwd/../build/word_count.jar ]; then
 exit 1
 fi
 
+CLASSPATH=$CLASSPATH:$cwd/../conf
 CLASSPATH=$CLASSPATH:$cwd/../build/word_count.jar
 CLASSPATH=$CLASSPATH:$cwd/../../../build/classes/main
 CLASSPATH=$CLASSPATH:$cwd/../../../build/classes/thrift

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7321adf4/examples/hadoop_word_count/conf/log4j.properties
--
diff --git a/examples/hadoop_word_count/conf/log4j.properties 
b/examples/hadoop_word_count/conf/log4j.properties
new file mode 100644
index 000..070d21e
--- /dev/null
+++ b/examples/hadoop_word_count/conf/log4j.properties
@@ -0,0 +1,15 @@
+log4j.rootLogger=DEBUG,stdout,F
+
+#stdout
+log4j.appender.stdout=org.apache.log4j.ConsoleAppender
+log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
+log4j.appender.stdout.layout.ConversionPattern=%5p %d{HH:mm:ss,SSS} %m%n
+
+# log file
+log4j.appender.F=org.apache.log4j.FileAppender
+log4j.appender.F.Append=false
+log4j.appender.F.layout=org.apache.log4j.PatternLayout
+log4j.appender.F.layout.ConversionPattern=%5p [%t] %d{ISO8601} %F (line %L) 
%m%n
+# Edit the next line to point to your logs directory
+log4j.appender.F.File=wc.out
+

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7321adf4/examples/hadoop_word_count/src/WordCount.java
--
diff --git a/examples/hadoop_word_count/src/WordCount.java 
b/examples/hadoop_word_count/src/WordCount.java
index d3cee0e..96bcb1b 100644
--- a/examples/hadoop_word_count/src/WordCount.java
+++ b/examples/hadoop_word_count/src/WordCount.java
@@ -93,7 +93,8 @@ public class WordCount extends Configured implements Tool
 else
 value = ByteBufferUtil.string(column.value());

-System.err.println("read " + ByteBufferUtil.string(key) + ":" 
+name + ":" + value + " from " + context.getInputSplit());
+logger.debug("read {}:{}={} from {}",
+ new Object[] {ByteBufferUtil.string(key), name, 
value, context.getInputSplit()});
 
 StringTokenizer itr = new StringTokenizer(value);
 while (itr.hasMoreTokens())



[18/21] git commit: make memory metering use an unbounded queue to avoid blocking the write path patch by pschuller and jbellis; reviewed by slebresne for CASSANDRA-4032

2012-04-11 Thread jbellis
make memory metering use an unbounded queue to avoid blocking the write path
patch by pschuller and jbellis; reviewed by slebresne for CASSANDRA-4032


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fd3bfac6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fd3bfac6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fd3bfac6

Branch: refs/heads/cassandra-1.1
Commit: fd3bfac6cbc487e36ac1c39740c5897e350d0d16
Parents: d49113f
Author: Jonathan Ellis 
Authored: Tue Apr 10 10:59:35 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 13:24:09 2012 -0500

--
 src/java/org/apache/cassandra/db/Memtable.java |   86 ++
 1 files changed, 48 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd3bfac6/src/java/org/apache/cassandra/db/Memtable.java
--
diff --git a/src/java/org/apache/cassandra/db/Memtable.java 
b/src/java/org/apache/cassandra/db/Memtable.java
index 81dac7c..d9e9570 100644
--- a/src/java/org/apache/cassandra/db/Memtable.java
+++ b/src/java/org/apache/cassandra/db/Memtable.java
@@ -42,6 +42,7 @@ import org.apache.cassandra.io.sstable.SSTableWriter;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.SlabAllocator;
 import org.apache.cassandra.utils.WrappedRunnable;
+import org.cliffc.high_scale_lib.NonBlockingHashSet;
 import org.github.jamm.MemoryMeter;
 
 public class Memtable
@@ -53,14 +54,17 @@ public class Memtable
 // max liveratio seen w/ 1-byte columns on a 64-bit jvm was 19. If it gets 
higher than 64 something is probably broken.
 private static final double MAX_SANE_LIVE_RATIO = 64.0;
 
-// we're careful to only allow one count to run at a time because counting 
is slow
-// (can be minutes, for a large memtable and a busy server), so we could 
keep memtables
-// alive after they're flushed and would otherwise be GC'd.
+// we want to limit the amount of concurrently running and/or queued 
meterings, because counting is slow (can be
+// minutes, for a large memtable and a busy server). so we could keep 
memtables
+// alive after they're flushed and would otherwise be GC'd. the approach 
we take is to bound the number of
+// outstanding/running meterings to a maximum of one per CFS using this 
set; the executor's queue is unbounded but
+// will implicitly be bounded by the number of CFS:s.
+private static final Set meteringInProgress = new 
NonBlockingHashSet();
 private static final ExecutorService meterExecutor = new 
DebuggableThreadPoolExecutor(1,

   1,

   Integer.MAX_VALUE,

   TimeUnit.MILLISECONDS,
-   
   new SynchronousQueue(),
+   
   new LinkedBlockingQueue(),

   new NamedThreadFactory("MemoryMeter"))
 {
 @Override
@@ -152,7 +156,7 @@ public class Memtable
 resolve(key, columnFamily);
 }
 
-public void updateLiveRatio()
+public void updateLiveRatio() throws RuntimeException
 {
 if (!MemoryMeter.isInitialized())
 {
@@ -162,50 +166,56 @@ public class Memtable
 return;
 }
 
+if (!meteringInProgress.add(cfs))
+{
+logger.debug("Metering already pending or active for {}; skipping 
liveRatio update", cfs);
+return;
+}
+
 Runnable runnable = new Runnable()
 {
 public void run()
 {
-activelyMeasuring = Memtable.this;
-
-long start = System.currentTimeMillis();
-// ConcurrentSkipListMap has cycles, so measureDeep will have 
to track a reference to EACH object it visits.
-// So to reduce the memory overhead of doing a measurement, we 
break it up to row-at-a-time.
-long deepSize = meter.measure(columnFamilies);
-int objects = 0;
-for (Map.Entry entry : 
columnFamilies.entrySet())
-{
-deepSize += meter.measureDeep(entry.getKey()) + 
meter.measureDeep(entry.getValue());
-objects += entry.getValue().getColumnCount();
-}
-double newRatio = (double) deepSize / currentThroughput.get();
-
-  

[20/21] git commit: CQL3: Support slice with exclusive start and stop

2012-04-11 Thread jbellis
CQL3: Support slice with exclusive start and stop

patch by slebresne; reviewed by jbellis for CASSANDRA-3785


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d49113fa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d49113fa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d49113fa

Branch: refs/heads/trunk
Commit: d49113fad1bf7a15ca052156b872b9bdc01b6d73
Parents: fc7e864
Author: Sylvain Lebresne 
Authored: Thu Jan 26 10:49:56 2012 +0100
Committer: Sylvain Lebresne 
Committed: Wed Apr 11 17:17:59 2012 +0200

--
 CHANGES.txt|1 +
 .../cassandra/cql3/statements/SelectStatement.java |   61 --
 2 files changed, 53 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d49113fa/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index df030b9..4ef47c5 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -13,6 +13,7 @@
(CASSANDRA-4128)
  * Move CfDef and KsDef validation out of thrift (CASSANDRA-4037)
  * Fix get_paged_slice (CASSANDRA-4136)
+ * CQL3: Support slice with exclusive start and stop (CASSANDRA-3785)
 Merged from 1.0:
  * add auto_snapshot option allowing disabling snapshot before drop/truncate
(CASSANDRA-3710)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d49113fa/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 5bcd37a..0485857 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -91,6 +91,7 @@ public class SelectStatement implements CQLStatement
 private Restriction keyRestriction;
 private final Restriction[] columnRestrictions;
 private final Map metadataRestrictions = 
new HashMap();
+private Restriction sliceRestriction;
 
 private static enum Bound
 {
@@ -323,10 +324,13 @@ public class SelectStatement implements CQLStatement
 
 private int getLimit()
 {
+// Internally, we don't support exclusive bounds for slices. Instead,
+// we query one more element if necessary and exclude
+int limit = sliceRestriction != null && !sliceRestriction.isInclusive(Bound.START) ? parameters.limit + 1 : parameters.limit;
 // For sparse, we'll end up merging all defined colums into the same CqlRow. Thus we should query up
 // to 'defined columns' * 'asked limit' to be sure to have enough columns. We'll trim after query if
 // this end being too much.
-return cfDef.isCompact ? parameters.limit : cfDef.metadata.size() * parameters.limit;
+return cfDef.isCompact ? limit : cfDef.metadata.size() * limit;
 }
 
 private boolean isKeyRange()
@@ -602,9 +606,20 @@ public class SelectStatement implements CQLStatement
 
 thriftColumns = new ArrayList();
 
-ByteBuffer[] components = cfDef.isComposite
-                          ? ((CompositeType)cfDef.cfm.comparator).split(c.name())
-                          : null;
+ByteBuffer[] components = null;
+
+if (cfDef.isComposite)
+{
+    components = ((CompositeType)cfDef.cfm.comparator).split(c.name());
+}
+else if (sliceRestriction != null)
+{
+    // For dynamic CF, the column could be out of the requested bounds, filter here
+    if (!sliceRestriction.isInclusive(Bound.START) && c.name().equals(sliceRestriction.bound(Bound.START).getByteBuffer(cfDef.cfm.comparator, variables)))
+        continue;
+    if (!sliceRestriction.isInclusive(Bound.END) && c.name().equals(sliceRestriction.bound(Bound.END).getByteBuffer(cfDef.cfm.comparator, variables)))
+        continue;
+}
 
 // Respect selection order
 for (Pair p : 
selection)
@@ -711,9 +726,9 @@ public class SelectStatement implements CQLStatement
 if (parameters.isColumnsReversed)
 Collections.reverse(cqlRows);
 
+// Trim result if needed to respect the limit
 cqlRows = cqlRows.size() > parameters.limit ? cqlRows.subList(0, parameters.limit) : cqlRows;
 
-// Trim result if needed to respect the limit
 return cqlRows;
 }
 
@@ -880,14 +895,26 @@ public 
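
The trick this patch uses can be stated in a few lines: the underlying slice machinery only knows inclusive bounds, so an exclusive bound is emulated by fetching one extra column and then dropping any column whose name equals the excluded endpoint, trimming back to the requested limit afterwards. A toy sketch of the idea over a sorted list of strings (hypothetical names, not the SelectStatement API):

    import java.util.ArrayList;
    import java.util.List;

    // Sketch: emulate an exclusive start bound on top of an inclusive-only slice primitive.
    class ExclusiveSliceExample
    {
        // Pretend this is the inclusive-only primitive (a slice query in the real system).
        static List<String> inclusiveSlice(List<String> sorted, String start, int limit)
        {
            List<String> out = new ArrayList<String>();
            for (String s : sorted)
                if (s.compareTo(start) >= 0 && out.size() < limit)
                    out.add(s);
            return out;
        }

        // Exclusive start: over-fetch by one, then drop the boundary value if it shows up.
        static List<String> exclusiveStartSlice(List<String> sorted, String start, int limit)
        {
            List<String> fetched = inclusiveSlice(sorted, start, limit + 1); // query one more element
            List<String> out = new ArrayList<String>();
            for (String s : fetched)
            {
                if (s.equals(start))
                    continue;           // exclude the bound itself
                if (out.size() < limit)
                    out.add(s);         // trim back down to the requested limit
            }
            return out;
        }
    }

The diff applies the same two steps: getLimit() asks for parameters.limit + 1 when the start bound is exclusive, and the row-building loop skips a column equal to the excluded START or END bound before the final trim to parameters.limit.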

[16/21] git commit: make memory metering use an unbounded queue to avoid blocking the write path patch by pschuller and jbellis; reviewed by slebresne for CASSANDRA-4032

2012-04-11 Thread jbellis
make memory metering use an unbounded queue to avoid blocking the write path
patch by pschuller and jbellis; reviewed by slebresne for CASSANDRA-4032


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fd3bfac6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fd3bfac6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fd3bfac6

Branch: refs/heads/trunk
Commit: fd3bfac6cbc487e36ac1c39740c5897e350d0d16
Parents: d49113f
Author: Jonathan Ellis 
Authored: Tue Apr 10 10:59:35 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 13:24:09 2012 -0500

--
 src/java/org/apache/cassandra/db/Memtable.java |   86 ++
 1 files changed, 48 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd3bfac6/src/java/org/apache/cassandra/db/Memtable.java
--
diff --git a/src/java/org/apache/cassandra/db/Memtable.java 
b/src/java/org/apache/cassandra/db/Memtable.java
index 81dac7c..d9e9570 100644
--- a/src/java/org/apache/cassandra/db/Memtable.java
+++ b/src/java/org/apache/cassandra/db/Memtable.java
@@ -42,6 +42,7 @@ import org.apache.cassandra.io.sstable.SSTableWriter;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.SlabAllocator;
 import org.apache.cassandra.utils.WrappedRunnable;
+import org.cliffc.high_scale_lib.NonBlockingHashSet;
 import org.github.jamm.MemoryMeter;
 
 public class Memtable
@@ -53,14 +54,17 @@ public class Memtable
 // max liveratio seen w/ 1-byte columns on a 64-bit jvm was 19. If it gets 
higher than 64 something is probably broken.
 private static final double MAX_SANE_LIVE_RATIO = 64.0;
 
-// we're careful to only allow one count to run at a time because counting 
is slow
-// (can be minutes, for a large memtable and a busy server), so we could 
keep memtables
-// alive after they're flushed and would otherwise be GC'd.
+// we want to limit the amount of concurrently running and/or queued 
meterings, because counting is slow (can be
+// minutes, for a large memtable and a busy server). so we could keep 
memtables
+// alive after they're flushed and would otherwise be GC'd. the approach 
we take is to bound the number of
+// outstanding/running meterings to a maximum of one per CFS using this 
set; the executor's queue is unbounded but
+// will implicitly be bounded by the number of CFS:s.
+private static final Set meteringInProgress = new 
NonBlockingHashSet();
 private static final ExecutorService meterExecutor = new 
DebuggableThreadPoolExecutor(1,

   1,

   Integer.MAX_VALUE,

   TimeUnit.MILLISECONDS,
-   
   new SynchronousQueue(),
+   
   new LinkedBlockingQueue(),

   new NamedThreadFactory("MemoryMeter"))
 {
 @Override
@@ -152,7 +156,7 @@ public class Memtable
 resolve(key, columnFamily);
 }
 
-public void updateLiveRatio()
+public void updateLiveRatio() throws RuntimeException
 {
 if (!MemoryMeter.isInitialized())
 {
@@ -162,50 +166,56 @@ public class Memtable
 return;
 }
 
+if (!meteringInProgress.add(cfs))
+{
+logger.debug("Metering already pending or active for {}; skipping 
liveRatio update", cfs);
+return;
+}
+
 Runnable runnable = new Runnable()
 {
 public void run()
 {
-activelyMeasuring = Memtable.this;
-
-long start = System.currentTimeMillis();
-// ConcurrentSkipListMap has cycles, so measureDeep will have 
to track a reference to EACH object it visits.
-// So to reduce the memory overhead of doing a measurement, we 
break it up to row-at-a-time.
-long deepSize = meter.measure(columnFamilies);
-int objects = 0;
-for (Map.Entry entry : 
columnFamilies.entrySet())
-{
-deepSize += meter.measureDeep(entry.getKey()) + 
meter.measureDeep(entry.getValue());
-objects += entry.getValue().getColumnCount();
-}
-double newRatio = (double) deepSize / currentThroughput.get();
-
-if (ne

[6/21] git commit: update get_paged_slice to allow starting with a key; fixes for WideRowIterator patch by jbellis; reviewed by brandonwilliams for CASSANDRA-3883

2012-04-11 Thread jbellis
update get_paged_slice to allow starting with a key; fixes for WideRowIterator
patch by jbellis; reviewed by brandonwilliams for CASSANDRA-3883


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/97aa922a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/97aa922a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/97aa922a

Branch: refs/heads/cassandra-1.1.0
Commit: 97aa922a7476dce06121ae289877abccf161afae
Parents: dbc0f59
Author: Jonathan Ellis 
Authored: Tue Apr 10 16:06:39 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 13:24:56 2012 -0500

--
 .../cassandra/hadoop/ColumnFamilyInputFormat.java  |   25 ++-
 .../cassandra/hadoop/ColumnFamilyRecordReader.java |  136 --
 .../apache/cassandra/thrift/CassandraServer.java   |5 +-
 .../apache/cassandra/thrift/ThriftValidation.java  |   19 +--
 4 files changed, 108 insertions(+), 77 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/97aa922a/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java
index ef56678..354903d 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java
@@ -34,6 +34,8 @@ import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
 
+import com.google.common.collect.ImmutableList;
+
 import org.apache.cassandra.db.IColumn;
 import org.apache.cassandra.dht.IPartitioner;
 import org.apache.cassandra.dht.Range;
@@ -87,6 +89,7 @@ public class ColumnFamilyInputFormat extends 
InputFormat getSplits(JobContext context) throws IOException
@@ -115,6 +118,8 @@ public class ColumnFamilyInputFormat extends 
InputFormat>> splitfutures = new 
ArrayList>>();
 KeyRange jobKeyRange = ConfigHelper.getInputKeyRange(conf);
-IPartitioner partitioner = null;
 Range jobRange = null;
 if (jobKeyRange != null && jobKeyRange.start_token != null)
 {
-partitioner = 
ConfigHelper.getInputPartitioner(context.getConfiguration());
 assert partitioner.preservesOrder() : 
"ConfigHelper.setInputKeyRange(..) can only be used with a order preserving 
paritioner";
 assert jobKeyRange.start_key == null : "only start_token 
supported";
 assert jobKeyRange.end_key == null : "only end_token 
supported";
@@ -219,11 +222,19 @@ public class ColumnFamilyInputFormat extends 
InputFormat range = new Range(left, right, 
partitioner);
+List> ranges = range.isWrapAround() ? 
range.unwrap() : ImmutableList.of(range);
+for (Range subrange : ranges)
+{
+ColumnFamilySplit split = new 
ColumnFamilySplit(factory.toString(subrange.left), 
factory.toString(subrange.right), endpoints);
+logger.debug("adding " + split);
+splits.add(split);
+}
 }
 return splits;
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/97aa922a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
index 483c040..600cf13 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
@@ -29,10 +29,9 @@ import java.net.UnknownHostException;
 import java.nio.ByteBuffer;
 import java.util.*;
 
-import com.google.common.collect.AbstractIterator;
-import com.google.common.collect.ImmutableSortedMap;
-import com.google.common.collect.Iterables;
-import org.apache.commons.lang.ArrayUtils;
+import com.google.common.collect.*;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.auth.IAuthenticator;
 import org.apache.cassandra.config.ConfigurationException;
@@ -55,6 +54,8 @@ import org.apache.thrift.transport.TSocket;
 public class ColumnFamilyRecordReader extends RecordReader>
 implements org.apache.hadoop.mapred.RecordReader>
 {
+private static final Logger logger = 
LoggerFactory.getLogger(ColumnFamilyRecordReader.class);
+
 public static final int CASSANDRA_HADOOP_MAX_KEY_SIZE_DEFAULT = 8192;
 
 private ColumnFamilySplit split;
@@ -179,6 +180,7 @@ public class ColumnFamilyRecordReader extends 
RecordReader>>
 {
 prot
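
The ColumnFamilyInputFormat change boils down to one rule: a job's token range may wrap around the ring, but a Hadoop split must not, so a wrapping range is first unwrapped into non-wrapping pieces (that is what "range.isWrapAround() ? range.unwrap() : ImmutableList.of(range)" does) and each piece becomes its own ColumnFamilySplit. A toy model of the unwrap step on a long-valued ring follows; the class is hypothetical and only approximates the real dht.Range semantics of (left, right] with a partitioner minimum token:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Toy model: a (left, right] range on a circular token space of longs.
    class ToyRange
    {
        static final long MIN_TOKEN = Long.MIN_VALUE; // stand-in for the partitioner's minimum token

        final long left;   // exclusive
        final long right;  // inclusive

        ToyRange(long left, long right)
        {
            this.left = left;
            this.right = right;
        }

        // A range "wraps" when it crosses the minimum token, e.g. (90, 10] on a 0..100 ring.
        boolean isWrapAround()
        {
            return left >= right;
        }

        // Split a wrapping range into two contiguous, non-wrapping pieces.
        List<ToyRange> unwrap()
        {
            if (!isWrapAround())
                return Collections.singletonList(this);
            List<ToyRange> pieces = new ArrayList<ToyRange>();
            // In the real Range semantics a right bound equal to the minimum token
            // means "up to the end of the ring", so these two pieces cover the wrap.
            pieces.add(new ToyRange(left, MIN_TOKEN));
            pieces.add(new ToyRange(MIN_TOKEN, right));
            return pieces;
        }
    }

Each resulting piece is handed to new ColumnFamilySplit(...), which is why the loop in the diff adds one split per subrange rather than assuming one split per token range.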

[11/21] git commit: clean up toString

2012-04-11 Thread jbellis
clean up toString


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a3fe2975
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a3fe2975
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a3fe2975

Branch: refs/heads/trunk
Commit: a3fe297510acdaefbb40b0a91aaf8c38843f9a8b
Parents: 7321adf
Author: Jonathan Ellis 
Authored: Tue Apr 10 16:03:43 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 13:24:34 2012 -0500

--
 .../apache/cassandra/hadoop/ColumnFamilySplit.java |9 -
 1 files changed, 4 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a3fe2975/src/java/org/apache/cassandra/hadoop/ColumnFamilySplit.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilySplit.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilySplit.java
index 71a1c34..bd2e487 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilySplit.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilySplit.java
@@ -100,11 +100,10 @@ public class ColumnFamilySplit extends InputSplit 
implements Writable, org.apach
 @Override
 public String toString()
 {
-return "ColumnFamilySplit{" +
-   "startToken='" + startToken + '\'' +
-   ", endToken='" + endToken + '\'' +
-   ", dataNodes=" + (dataNodes == null ? null : 
Arrays.asList(dataNodes)) +
-   '}';
+return "ColumnFamilySplit(" +
+   "(" + startToken
+   + ", '" + endToken + ']'
+   + " @" + (dataNodes == null ? null : Arrays.asList(dataNodes)) 
+ ')';
 }
 
 public static ColumnFamilySplit read(DataInput in) throws IOException



[10/21] git commit: clean up toString

2012-04-11 Thread jbellis
clean up toString


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a3fe2975
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a3fe2975
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a3fe2975

Branch: refs/heads/cassandra-1.1.0
Commit: a3fe297510acdaefbb40b0a91aaf8c38843f9a8b
Parents: 7321adf
Author: Jonathan Ellis 
Authored: Tue Apr 10 16:03:43 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 13:24:34 2012 -0500

--
 .../apache/cassandra/hadoop/ColumnFamilySplit.java |9 -
 1 files changed, 4 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a3fe2975/src/java/org/apache/cassandra/hadoop/ColumnFamilySplit.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilySplit.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilySplit.java
index 71a1c34..bd2e487 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilySplit.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilySplit.java
@@ -100,11 +100,10 @@ public class ColumnFamilySplit extends InputSplit 
implements Writable, org.apach
 @Override
 public String toString()
 {
-return "ColumnFamilySplit{" +
-   "startToken='" + startToken + '\'' +
-   ", endToken='" + endToken + '\'' +
-   ", dataNodes=" + (dataNodes == null ? null : 
Arrays.asList(dataNodes)) +
-   '}';
+return "ColumnFamilySplit(" +
+   "(" + startToken
+   + ", '" + endToken + ']'
+   + " @" + (dataNodes == null ? null : Arrays.asList(dataNodes)) 
+ ')';
 }
 
 public static ColumnFamilySplit read(DataInput in) throws IOException



[12/21] git commit: clean up toString

2012-04-11 Thread jbellis
clean up toString


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a3fe2975
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a3fe2975
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a3fe2975

Branch: refs/heads/cassandra-1.1
Commit: a3fe297510acdaefbb40b0a91aaf8c38843f9a8b
Parents: 7321adf
Author: Jonathan Ellis 
Authored: Tue Apr 10 16:03:43 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 13:24:34 2012 -0500

--
 .../apache/cassandra/hadoop/ColumnFamilySplit.java |9 -
 1 files changed, 4 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a3fe2975/src/java/org/apache/cassandra/hadoop/ColumnFamilySplit.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilySplit.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilySplit.java
index 71a1c34..bd2e487 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilySplit.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilySplit.java
@@ -100,11 +100,10 @@ public class ColumnFamilySplit extends InputSplit 
implements Writable, org.apach
 @Override
 public String toString()
 {
-return "ColumnFamilySplit{" +
-   "startToken='" + startToken + '\'' +
-   ", endToken='" + endToken + '\'' +
-   ", dataNodes=" + (dataNodes == null ? null : 
Arrays.asList(dataNodes)) +
-   '}';
+return "ColumnFamilySplit(" +
+   "(" + startToken
+   + ", '" + endToken + ']'
+   + " @" + (dataNodes == null ? null : Arrays.asList(dataNodes)) 
+ ')';
 }
 
 public static ColumnFamilySplit read(DataInput in) throws IOException



[8/21] git commit: inline widerows field

2012-04-11 Thread jbellis
inline widerows field


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dbc0f599
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dbc0f599
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dbc0f599

Branch: refs/heads/cassandra-1.1
Commit: dbc0f599fe83651369802b98b8349331d1f617b8
Parents: a3fe297
Author: Jonathan Ellis 
Authored: Tue Apr 10 16:05:58 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 13:24:35 2012 -0500

--
 .../cassandra/hadoop/ColumnFamilyRecordReader.java |3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dbc0f599/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
index 89cf840..483c040 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
@@ -71,7 +71,6 @@ public class ColumnFamilyRecordReader extends 
RecordReader filter;
-private boolean widerows;
 
 public ColumnFamilyRecordReader()
 {
@@ -140,7 +139,7 @@ public class ColumnFamilyRecordReader extends 
RecordReader

[7/21] git commit: inline widerows field

2012-04-11 Thread jbellis
inline widerows field


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dbc0f599
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dbc0f599
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dbc0f599

Branch: refs/heads/cassandra-1.1.0
Commit: dbc0f599fe83651369802b98b8349331d1f617b8
Parents: a3fe297
Author: Jonathan Ellis 
Authored: Tue Apr 10 16:05:58 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 13:24:35 2012 -0500

--
 .../cassandra/hadoop/ColumnFamilyRecordReader.java |3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dbc0f599/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
index 89cf840..483c040 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
@@ -71,7 +71,6 @@ public class ColumnFamilyRecordReader extends 
RecordReader filter;
-private boolean widerows;
 
 public ColumnFamilyRecordReader()
 {
@@ -140,7 +139,7 @@ public class ColumnFamilyRecordReader extends 
RecordReader

[5/21] git commit: update get_paged_slice to allow starting with a key; fixes for WideRowIterator patch by jbellis; reviewed by brandonwilliams for CASSANDRA-3883

2012-04-11 Thread jbellis
update get_paged_slice to allow starting with a key; fixes for WideRowIterator
patch by jbellis; reviewed by brandonwilliams for CASSANDRA-3883


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/97aa922a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/97aa922a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/97aa922a

Branch: refs/heads/cassandra-1.1
Commit: 97aa922a7476dce06121ae289877abccf161afae
Parents: dbc0f59
Author: Jonathan Ellis 
Authored: Tue Apr 10 16:06:39 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 13:24:56 2012 -0500

--
 .../cassandra/hadoop/ColumnFamilyInputFormat.java  |   25 ++-
 .../cassandra/hadoop/ColumnFamilyRecordReader.java |  136 --
 .../apache/cassandra/thrift/CassandraServer.java   |5 +-
 .../apache/cassandra/thrift/ThriftValidation.java  |   19 +--
 4 files changed, 108 insertions(+), 77 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/97aa922a/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java
index ef56678..354903d 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java
@@ -34,6 +34,8 @@ import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
 
+import com.google.common.collect.ImmutableList;
+
 import org.apache.cassandra.db.IColumn;
 import org.apache.cassandra.dht.IPartitioner;
 import org.apache.cassandra.dht.Range;
@@ -87,6 +89,7 @@ public class ColumnFamilyInputFormat extends 
InputFormat getSplits(JobContext context) throws IOException
@@ -115,6 +118,8 @@ public class ColumnFamilyInputFormat extends 
InputFormat>> splitfutures = new 
ArrayList>>();
 KeyRange jobKeyRange = ConfigHelper.getInputKeyRange(conf);
-IPartitioner partitioner = null;
 Range jobRange = null;
 if (jobKeyRange != null && jobKeyRange.start_token != null)
 {
-partitioner = 
ConfigHelper.getInputPartitioner(context.getConfiguration());
 assert partitioner.preservesOrder() : 
"ConfigHelper.setInputKeyRange(..) can only be used with a order preserving 
paritioner";
 assert jobKeyRange.start_key == null : "only start_token 
supported";
 assert jobKeyRange.end_key == null : "only end_token 
supported";
@@ -219,11 +222,19 @@ public class ColumnFamilyInputFormat extends 
InputFormat range = new Range(left, right, 
partitioner);
+List> ranges = range.isWrapAround() ? 
range.unwrap() : ImmutableList.of(range);
+for (Range subrange : ranges)
+{
+ColumnFamilySplit split = new 
ColumnFamilySplit(factory.toString(subrange.left), 
factory.toString(subrange.right), endpoints);
+logger.debug("adding " + split);
+splits.add(split);
+}
 }
 return splits;
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/97aa922a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
index 483c040..600cf13 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
@@ -29,10 +29,9 @@ import java.net.UnknownHostException;
 import java.nio.ByteBuffer;
 import java.util.*;
 
-import com.google.common.collect.AbstractIterator;
-import com.google.common.collect.ImmutableSortedMap;
-import com.google.common.collect.Iterables;
-import org.apache.commons.lang.ArrayUtils;
+import com.google.common.collect.*;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.auth.IAuthenticator;
 import org.apache.cassandra.config.ConfigurationException;
@@ -55,6 +54,8 @@ import org.apache.thrift.transport.TSocket;
 public class ColumnFamilyRecordReader extends RecordReader>
 implements org.apache.hadoop.mapred.RecordReader>
 {
+private static final Logger logger = 
LoggerFactory.getLogger(ColumnFamilyRecordReader.class);
+
 public static final int CASSANDRA_HADOOP_MAX_KEY_SIZE_DEFAULT = 8192;
 
 private ColumnFamilySplit split;
@@ -179,6 +180,7 @@ public class ColumnFamilyRecordReader extends 
RecordReader>>
 {
 protec

[9/21] git commit: inline widerows field

2012-04-11 Thread jbellis
inline widerows field


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dbc0f599
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dbc0f599
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dbc0f599

Branch: refs/heads/trunk
Commit: dbc0f599fe83651369802b98b8349331d1f617b8
Parents: a3fe297
Author: Jonathan Ellis 
Authored: Tue Apr 10 16:05:58 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 13:24:35 2012 -0500

--
 .../cassandra/hadoop/ColumnFamilyRecordReader.java |3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dbc0f599/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
index 89cf840..483c040 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
@@ -71,7 +71,6 @@ public class ColumnFamilyRecordReader extends 
RecordReader filter;
-private boolean widerows;
 
 public ColumnFamilyRecordReader()
 {
@@ -140,7 +139,7 @@ public class ColumnFamilyRecordReader extends 
RecordReader

[2/21] git commit: Merge branch 'cassandra-1.1.0' into cassandra-1.1

2012-04-11 Thread jbellis
Merge branch 'cassandra-1.1.0' into cassandra-1.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bf4d6fbb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bf4d6fbb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bf4d6fbb

Branch: refs/heads/cassandra-1.1
Commit: bf4d6fbbd928c4be823a8451e9b3d99d0aa001f0
Parents: de80c6c 97aa922
Author: Jonathan Ellis 
Authored: Wed Apr 11 13:25:19 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 13:25:19 2012 -0500

--
 examples/hadoop_word_count/README.txt  |   13 ++
 examples/hadoop_word_count/bin/word_count  |1 +
 examples/hadoop_word_count/conf/log4j.properties   |   15 ++
 examples/hadoop_word_count/src/WordCount.java  |3 +-
 src/java/org/apache/cassandra/db/Memtable.java |   86 +
 .../cassandra/hadoop/ColumnFamilyInputFormat.java  |   25 ++-
 .../cassandra/hadoop/ColumnFamilyRecordReader.java |  139 --
 .../apache/cassandra/hadoop/ColumnFamilySplit.java |9 +-
 .../apache/cassandra/thrift/CassandraServer.java   |5 +-
 .../apache/cassandra/thrift/ThriftValidation.java  |   19 +--
 10 files changed, 192 insertions(+), 123 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bf4d6fbb/src/java/org/apache/cassandra/thrift/CassandraServer.java
--



[3/21] git commit: Merge branch 'cassandra-1.1.0' into cassandra-1.1

2012-04-11 Thread jbellis
Merge branch 'cassandra-1.1.0' into cassandra-1.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bf4d6fbb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bf4d6fbb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bf4d6fbb

Branch: refs/heads/trunk
Commit: bf4d6fbbd928c4be823a8451e9b3d99d0aa001f0
Parents: de80c6c 97aa922
Author: Jonathan Ellis 
Authored: Wed Apr 11 13:25:19 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 13:25:19 2012 -0500

--
 examples/hadoop_word_count/README.txt  |   13 ++
 examples/hadoop_word_count/bin/word_count  |1 +
 examples/hadoop_word_count/conf/log4j.properties   |   15 ++
 examples/hadoop_word_count/src/WordCount.java  |3 +-
 src/java/org/apache/cassandra/db/Memtable.java |   86 +
 .../cassandra/hadoop/ColumnFamilyInputFormat.java  |   25 ++-
 .../cassandra/hadoop/ColumnFamilyRecordReader.java |  139 --
 .../apache/cassandra/hadoop/ColumnFamilySplit.java |9 +-
 .../apache/cassandra/thrift/CassandraServer.java   |5 +-
 .../apache/cassandra/thrift/ThriftValidation.java  |   19 +--
 10 files changed, 192 insertions(+), 123 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bf4d6fbb/src/java/org/apache/cassandra/thrift/CassandraServer.java
--



[4/21] git commit: update get_paged_slice to allow starting with a key; fixes for WideRowIterator patch by jbellis; reviewed by brandonwilliams for CASSANDRA-3883

2012-04-11 Thread jbellis
update get_paged_slice to allow starting with a key; fixes for WideRowIterator
patch by jbellis; reviewed by brandonwilliams for CASSANDRA-3883


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/97aa922a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/97aa922a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/97aa922a

Branch: refs/heads/trunk
Commit: 97aa922a7476dce06121ae289877abccf161afae
Parents: dbc0f59
Author: Jonathan Ellis 
Authored: Tue Apr 10 16:06:39 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 13:24:56 2012 -0500

--
 .../cassandra/hadoop/ColumnFamilyInputFormat.java  |   25 ++-
 .../cassandra/hadoop/ColumnFamilyRecordReader.java |  136 --
 .../apache/cassandra/thrift/CassandraServer.java   |5 +-
 .../apache/cassandra/thrift/ThriftValidation.java  |   19 +--
 4 files changed, 108 insertions(+), 77 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/97aa922a/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java
index ef56678..354903d 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java
@@ -34,6 +34,8 @@ import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
 
+import com.google.common.collect.ImmutableList;
+
 import org.apache.cassandra.db.IColumn;
 import org.apache.cassandra.dht.IPartitioner;
 import org.apache.cassandra.dht.Range;
@@ -87,6 +89,7 @@ public class ColumnFamilyInputFormat extends 
InputFormat getSplits(JobContext context) throws IOException
@@ -115,6 +118,8 @@ public class ColumnFamilyInputFormat extends 
InputFormat>> splitfutures = new 
ArrayList>>();
 KeyRange jobKeyRange = ConfigHelper.getInputKeyRange(conf);
-IPartitioner partitioner = null;
 Range jobRange = null;
 if (jobKeyRange != null && jobKeyRange.start_token != null)
 {
-partitioner = 
ConfigHelper.getInputPartitioner(context.getConfiguration());
 assert partitioner.preservesOrder() : 
"ConfigHelper.setInputKeyRange(..) can only be used with a order preserving 
paritioner";
 assert jobKeyRange.start_key == null : "only start_token 
supported";
 assert jobKeyRange.end_key == null : "only end_token 
supported";
@@ -219,11 +222,19 @@ public class ColumnFamilyInputFormat extends 
InputFormat range = new Range(left, right, 
partitioner);
+List> ranges = range.isWrapAround() ? 
range.unwrap() : ImmutableList.of(range);
+for (Range subrange : ranges)
+{
+ColumnFamilySplit split = new 
ColumnFamilySplit(factory.toString(subrange.left), 
factory.toString(subrange.right), endpoints);
+logger.debug("adding " + split);
+splits.add(split);
+}
 }
 return splits;
 }
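
The hunk above handles token ranges that wrap around the ring: a wrapping range cannot
become a single Hadoop split, so the patch unwraps it with Range.unwrap() into
non-wrapping subranges before building one ColumnFamilySplit per subrange. A minimal
standalone sketch of the same idea (illustrative only, with plain long tokens instead of
Cassandra's Range/Token classes, and ignoring the special-casing of the minimum token):

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class UnwrapSketch
{
    /** A (left, right] interval on a token ring; it wraps when right <= left. */
    static class TokenRange
    {
        final long left, right;
        TokenRange(long left, long right) { this.left = left; this.right = right; }
        boolean isWrapAround() { return right <= left; }
        public String toString() { return "(" + left + ", " + right + "]"; }
    }

    /** Split a wrapping range into (left, MAX] and (MIN, right]; pass other ranges through. */
    static List<TokenRange> unwrap(TokenRange r)
    {
        if (!r.isWrapAround())
            return Collections.singletonList(r);
        return Arrays.asList(new TokenRange(r.left, Long.MAX_VALUE),
                             new TokenRange(Long.MIN_VALUE, r.right));
    }

    public static void main(String[] args)
    {
        System.out.println(unwrap(new TokenRange(9000, 1000))); // wraps: becomes two subranges
        System.out.println(unwrap(new TokenRange(1000, 9000))); // already non-wrapping
    }
}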

http://git-wip-us.apache.org/repos/asf/cassandra/blob/97aa922a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
index 483c040..600cf13 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
@@ -29,10 +29,9 @@ import java.net.UnknownHostException;
 import java.nio.ByteBuffer;
 import java.util.*;
 
-import com.google.common.collect.AbstractIterator;
-import com.google.common.collect.ImmutableSortedMap;
-import com.google.common.collect.Iterables;
-import org.apache.commons.lang.ArrayUtils;
+import com.google.common.collect.*;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.auth.IAuthenticator;
 import org.apache.cassandra.config.ConfigurationException;
@@ -55,6 +54,8 @@ import org.apache.thrift.transport.TSocket;
 public class ColumnFamilyRecordReader extends RecordReader>
 implements org.apache.hadoop.mapred.RecordReader>
 {
+private static final Logger logger = 
LoggerFactory.getLogger(ColumnFamilyRecordReader.class);
+
 public static final int CASSANDRA_HADOOP_MAX_KEY_SIZE_DEFAULT = 8192;
 
 private ColumnFamilySplit split;
@@ -179,6 +180,7 @@ public class ColumnFamilyRecordReader extends 
RecordReader>>
 {
 protected List

[1/21] git commit: Merge branch 'cassandra-1.1' into trunk

2012-04-11 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.1 de80c6c6d -> bf4d6fbbd
  refs/heads/cassandra-1.1.0 d49113fad -> 97aa922a7
  refs/heads/trunk 8f9b37c3d -> 8cd1792e1


Merge branch 'cassandra-1.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8cd1792e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8cd1792e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8cd1792e

Branch: refs/heads/trunk
Commit: 8cd1792e1e40b418b2a232d00150cb78afe87b1c
Parents: 8f9b37c bf4d6fb
Author: Jonathan Ellis 
Authored: Wed Apr 11 13:25:28 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 13:25:28 2012 -0500

--
 CHANGES.txt|2 +
 examples/hadoop_word_count/README.txt  |   13 ++
 examples/hadoop_word_count/bin/word_count  |1 +
 examples/hadoop_word_count/conf/log4j.properties   |   15 ++
 examples/hadoop_word_count/src/WordCount.java  |3 +-
 .../cassandra/cql3/statements/SelectStatement.java |   64 ++-
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   10 +-
 src/java/org/apache/cassandra/db/Memtable.java |   86 +
 .../org/apache/cassandra/db/RangeSliceCommand.java |   23 ++-
 .../apache/cassandra/db/filter/ExtendedFilter.java |   30 +++-
 .../cassandra/db/filter/SliceQueryFilter.java  |3 +-
 .../cassandra/db/index/keys/KeysSearcher.java  |2 +-
 .../cassandra/hadoop/ColumnFamilyInputFormat.java  |   25 ++-
 .../cassandra/hadoop/ColumnFamilyRecordReader.java |  139 --
 .../apache/cassandra/hadoop/ColumnFamilySplit.java |9 +-
 .../cassandra/service/RangeSliceVerbHandler.java   |2 +-
 .../org/apache/cassandra/service/StorageProxy.java |3 +-
 .../apache/cassandra/thrift/CassandraServer.java   |7 +-
 .../apache/cassandra/thrift/ThriftValidation.java  |   19 +--
 .../apache/cassandra/db/ColumnFamilyStoreTest.java |   92 +-
 20 files changed, 378 insertions(+), 170 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cd1792e/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cd1792e/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cd1792e/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cd1792e/src/java/org/apache/cassandra/db/Memtable.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cd1792e/src/java/org/apache/cassandra/db/RangeSliceCommand.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cd1792e/src/java/org/apache/cassandra/db/filter/ExtendedFilter.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cd1792e/src/java/org/apache/cassandra/db/filter/SliceQueryFilter.java
--
diff --cc src/java/org/apache/cassandra/db/filter/SliceQueryFilter.java
index d688d14,1a4a912..9901130
--- a/src/java/org/apache/cassandra/db/filter/SliceQueryFilter.java
+++ b/src/java/org/apache/cassandra/db/filter/SliceQueryFilter.java
@@@ -37,9 -41,10 +37,10 @@@ import org.apache.cassandra.io.util.Fil
  
  public class SliceQueryFilter implements IFilter
  {
 -private static Logger logger = 
LoggerFactory.getLogger(SliceQueryFilter.class);
 +private static final Logger logger = 
LoggerFactory.getLogger(SliceQueryFilter.class);
  
- public final ByteBuffer start; public final ByteBuffer finish;
+ public volatile ByteBuffer start;
+ public volatile ByteBuffer finish;
  public final boolean reversed;
  public volatile int count;
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cd1792e/src/java/org/apache/cassandra/db/index/keys/KeysSearcher.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cd1792e/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java
--
diff --cc src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java
index dc137a5,354903d..03a61f0
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java
@@@ -210,16 -217,24 +213,24 @@@ public class ColumnFamilyInputFormat ex
  

[jira] [Commented] (CASSANDRA-3710) Add a configuration option to disable snapshots

2012-04-11 Thread Dave Brosius (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13251796#comment-13251796
 ] 

Dave Brosius commented on CASSANDRA-3710:
-

Generated the same way; I'm guessing it was just fat fingers during edit/review. Don't 
know.

> Add a configuration option to disable snapshots
> ---
>
> Key: CASSANDRA-3710
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3710
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Brandon Williams
>Assignee: Dave Brosius
>Priority: Minor
> Fix For: 1.0.10, 1.1.0
>
> Attachments: Cassandra107Patch_TestModeV1.txt, auto_snapshot.diff, 
> auto_snapshot_2.diff, auto_snapshot_3.diff
>
>
> Let me first say, I hate this idea.  It gives Cassandra the ability to 
> permanently delete data at a large scale without any means of recovery.  
> However, I've seen this requested multiple times, and it is in fact useful in 
> some scenarios, such as when your application is using an embedded Cassandra 
> instance for testing and needs to truncate, which without JNA will time out 
> more often than not.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4004) Add support for ReversedType

2012-04-11 Thread Sylvain Lebresne (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-4004:


Attachment: 4004.txt

Attached is a simple patch implementing the syntax above (a simple test has been 
pushed in the dtests).

> Add support for ReversedType
> 
>
> Key: CASSANDRA-4004
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4004
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: API
>Reporter: Sylvain Lebresne
>Priority: Trivial
> Fix For: 1.2
>
> Attachments: 4004.txt
>
>
> It would be nice to add a native syntax for the use of ReversedType. I'm not sure 
> there is anything in SQL that we could take inspiration from, so I would propose 
> something like:
> {noformat}
> CREATE TABLE timeseries (
>   key text,
>   time uuid,
>   value text,
>   PRIMARY KEY (key, time DESC)
> )
> {noformat}
> Alternatively, the DESC could also be put after the column name definition, 
> but one argument for putting it in the PK instead is that this only applies to 
> keys.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4137) QUORUM Multiget RangeSliceQuery causes unnecessary writes to read entries

2012-04-11 Thread Thibaut (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thibaut updated CASSANDRA-4137:
---

Priority: Major  (was: Critical)

> QUORUM Multiget RangeSliceQuery causes unnecessary writes to read entries
> -
>
> Key: CASSANDRA-4137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4137
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 0.8.9
>Reporter: Thibaut
>
> From the mailing list:
> I created a new test keyspace and added 10 000 keys to it. The cluster has 3 
> machines, RF=3, read repair disabled (enabling it didn't change anything). 
> The keyspace doesn't contain any tombstones. No keys were deleted.
> When I fetch a rangeslice through hector and set the consistency level to 
> quorum, according to cfstats (and also to the output files on the hd), 
> cassandra seems to execute a write request for each read I execute. The write 
> count in cfstats is increased when I execute the rangeslice function over the 
> same range again and again (without saving anything at all).
> If I set the consistency level to ONE or ALL, no writes are executed.
> I checked the writes on one machine. They increased by 2300 for each 
> iteration over the 10 000 keys. I didn't check, but this probably corresponds 
> to the number of keys for which the machine is responsible.
> Code:
> Keyspace ks = getConnection(cluster, 
> consistencylevel);
>   RangeSlicesQuery 
> rangeSlicesQuery = HFactory.createRangeSlicesQuery(ks, 
> StringSerializer.get(), StringSerializer.get(), s);
>   rangeSlicesQuery.setColumnFamily(columnFamily);
>   rangeSlicesQuery.setColumnNames(column);
>   rangeSlicesQuery.setKeys(start, end);
>   rangeSlicesQuery.setRowCount(maxrows);
>   QueryResult> 
> result = rangeSlicesQuery.execute();
>   return result.get();

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-4137) QUORUM Multiget RangeSliceQuery causes unnecessary writes to read entries

2012-04-11 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4137.
---

   Resolution: Duplicate
Fix Version/s: (was: 0.8.11)

fixed in 1.0.8 for CASSANDRA-3843

> QUORUM Multiget RangeSliceQuery causes unnecessary writes to read entries
> -
>
> Key: CASSANDRA-4137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4137
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 0.8.9
>Reporter: Thibaut
>
> From the mailing list:
> I created a new test keyspace and added 10 000 keys to it. The cluster has 3 
> machines, RF=3, read repair disabled (enabling it didn't change anything). 
> The keyspace doesn't contain any tombstones. No keys were deleted.
> When I fetch a rangeslice through hector and set the consistency level to 
> quorum, according to cfstats (and also to the output files on the hd), 
> cassandra seems to execute a write request for each read I execute. The write 
> count in cfstats is increased when I execute the rangeslice function over the 
> same range again and again (without saving anything at all).
> If I set the consistency level to ONE or ALL, no writes are executed.
> I checked the writes on one machine. They increased by 2300 for each 
> iteration over the 10 000 keys. I didn't check, but this probably corresponds 
> to the number of keys for which the machine is responsible.
> Code:
> Keyspace ks = getConnection(cluster, 
> consistencylevel);
>   RangeSlicesQuery 
> rangeSlicesQuery = HFactory.createRangeSlicesQuery(ks, 
> StringSerializer.get(), StringSerializer.get(), s);
>   rangeSlicesQuery.setColumnFamily(columnFamily);
>   rangeSlicesQuery.setColumnNames(column);
>   rangeSlicesQuery.setKeys(start, end);
>   rangeSlicesQuery.setRowCount(maxrows);
>   QueryResult> 
> result = rangeSlicesQuery.execute();
>   return result.get();

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-4137) QUORUM Multiget RangeSliceQuery causes unnecessary writes to read entries

2012-04-11 Thread Thibaut (Created) (JIRA)
QUORUM Multiget RangeSliceQuery causes unnecessary writes to read entries
-

 Key: CASSANDRA-4137
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4137
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.9
Reporter: Thibaut
Priority: Critical
 Fix For: 0.8.11


From the mailing list:

I created a new test keyspace and added 10 000 keys to it. The cluster has 3 
machines, RF=3, read repair disabled (enabling it didn't change anything). The 
keyspace doesn't contain any tombstones. No keys were deleted.

When I fetch a rangeslice through hector and set the consistency level to 
quorum, according to cfstats (and also to the output files on the hd), 
cassandra seems to execute a write request for each read I execute. The write 
count in cfstats is increased when I execute the rangeslice function over the 
same range again and again (without saving anything at all).

If I set the consistency level to ONE or ALL, no writes are executed.

I checked the writes on one machine. They increased by 2300 for each iteration 
over the 10 000 keys. I didn't check, but this probably corresponds to the 
number of keys for which the machine is responsible.

Code:
Keyspace ks = getConnection(cluster, 
consistencylevel);

RangeSlicesQuery 
rangeSlicesQuery = HFactory.createRangeSlicesQuery(ks, StringSerializer.get(), 
StringSerializer.get(), s);

rangeSlicesQuery.setColumnFamily(columnFamily);
rangeSlicesQuery.setColumnNames(column);

rangeSlicesQuery.setKeys(start, end);
rangeSlicesQuery.setRowCount(maxrows);

QueryResult> 
result = rangeSlicesQuery.execute();
return result.get();
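
A self-contained version of the range-slice query sketched above, assuming Hector's
standard API with String keys, column names and values (the keyspace, column family,
column name and paging bounds are hypothetical parameters, not values from the report):

{code}
import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.beans.OrderedRows;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.query.QueryResult;
import me.prettyprint.hector.api.query.RangeSlicesQuery;

public class RangeSliceExample
{
    // Fetch a single column for a range of row keys, mirroring the reporter's code.
    static OrderedRows<String, String, String> fetchRange(Keyspace ks, String columnFamily,
                                                          String column, String start,
                                                          String end, int maxRows)
    {
        RangeSlicesQuery<String, String, String> rangeSlicesQuery =
                HFactory.createRangeSlicesQuery(ks, StringSerializer.get(),
                                                StringSerializer.get(), StringSerializer.get());
        rangeSlicesQuery.setColumnFamily(columnFamily);
        rangeSlicesQuery.setColumnNames(column);
        rangeSlicesQuery.setKeys(start, end);
        rangeSlicesQuery.setRowCount(maxRows);

        QueryResult<OrderedRows<String, String, String>> result = rangeSlicesQuery.execute();
        return result.get();
    }
}
{code}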




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3883) CFIF WideRowIterator only returns batch size columns

2012-04-11 Thread Brandon Williams (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13251703#comment-13251703
 ] 

Brandon Williams commented on CASSANDRA-3883:
-

LGTM, +1

> CFIF WideRowIterator only returns batch size columns
> 
>
> Key: CASSANDRA-3883
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3883
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
>Affects Versions: 1.1.0
>Reporter: Brandon Williams
>Assignee: Jonathan Ellis
> Fix For: 1.1.0
>
> Attachments: 3883-v1.txt, 3883-v2.txt, 3883-v3.txt
>
>
> Most evident with the word count, where there are 1250 'word1' items in two 
> rows (1000 in one, 250 in another) and it counts 198 with the batch size set 
> to 99.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[3/3] git commit: Fix get_paged_slice

2012-04-11 Thread slebresne
Fix get_paged_slice

patch by slebresne; reviewed by jbellis for CASSANDRA-4136


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fc7e8640
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fc7e8640
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fc7e8640

Branch: refs/heads/cassandra-1.1
Commit: fc7e86404a27963071e416ff4deb0c7143e68bfc
Parents: c14e266
Author: Sylvain Lebresne 
Authored: Wed Apr 11 16:31:36 2012 +0200
Committer: Sylvain Lebresne 
Committed: Wed Apr 11 16:31:36 2012 +0200

--
 CHANGES.txt|1 +
 .../cassandra/cql3/statements/SelectStatement.java |3 +-
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   10 +-
 .../org/apache/cassandra/db/RangeSliceCommand.java |   23 +++--
 .../apache/cassandra/db/filter/ExtendedFilter.java |   30 --
 .../cassandra/db/filter/SliceQueryFilter.java  |3 +-
 .../cassandra/db/index/keys/KeysSearcher.java  |2 +-
 .../cassandra/service/RangeSliceVerbHandler.java   |2 +-
 .../org/apache/cassandra/service/StorageProxy.java |3 +-
 .../apache/cassandra/thrift/CassandraServer.java   |2 +-
 .../apache/cassandra/db/ColumnFamilyStoreTest.java |   92 +--
 11 files changed, 133 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fc7e8640/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 26315be..df030b9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -12,6 +12,7 @@
  * fix terminination of the stress.java when errors were encountered
(CASSANDRA-4128)
  * Move CfDef and KsDef validation out of thrift (CASSANDRA-4037)
+ * Fix get_paged_slice (CASSANDRA-4136)
 Merged from 1.0:
  * add auto_snapshot option allowing disabling snapshot before drop/truncate
(CASSANDRA-3710)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fc7e8640/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index b95d6ba..5bcd37a 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -285,7 +285,8 @@ public class SelectStatement implements CQLStatement
 bounds,
 
expressions,
 getLimit(),
-true), // 
limit by columns, not keys
+true, // 
limit by columns, not keys
+false),
   parameters.consistencyLevel);
 }
 catch (IOException e)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fc7e8640/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index cea2fee..a4e2e51 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1353,12 +1353,12 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 public List getRangeSlice(ByteBuffer superColumn, final 
AbstractBounds range, int maxResults, IFilter columnFilter, 
List rowFilter)
 {
-return getRangeSlice(superColumn, range, maxResults, columnFilter, 
rowFilter, false);
+return getRangeSlice(superColumn, range, maxResults, columnFilter, 
rowFilter, false, false);
 }
 
-public List getRangeSlice(ByteBuffer superColumn, final 
AbstractBounds range, int maxResults, IFilter columnFilter, 
List rowFilter, boolean maxIsColumns)
+public List getRangeSlice(ByteBuffer superColumn, final 
AbstractBounds range, int maxResults, IFilter columnFilter, 
List rowFilter, boolean maxIsColumns, boolean isPaging)
 {
-return filter(getSequentialIterator(superColumn, range, columnFilter), 
ExtendedFilter.create(this, columnFilter, rowFilter, maxResults, maxIsColumns));
+return filter(getSequentialIterator(superColumn, range, columnFilter), 
ExtendedFilter.create(this, columnFilter, rowFilter, maxResults, maxIsColumns, 
isPaging));
 }
 
 public List search(List clause, 
AbstractBounds range,

[2/3] git commit: CQL3: Support slice with exclusive start and stop

2012-04-11 Thread slebresne
CQL3: Support slice with exclusive start and stop

patch by slebresne; reviewed by jbellis for CASSANDRA-3785


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d49113fa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d49113fa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d49113fa

Branch: refs/heads/cassandra-1.1
Commit: d49113fad1bf7a15ca052156b872b9bdc01b6d73
Parents: fc7e864
Author: Sylvain Lebresne 
Authored: Thu Jan 26 10:49:56 2012 +0100
Committer: Sylvain Lebresne 
Committed: Wed Apr 11 17:17:59 2012 +0200

--
 CHANGES.txt|1 +
 .../cassandra/cql3/statements/SelectStatement.java |   61 --
 2 files changed, 53 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d49113fa/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index df030b9..4ef47c5 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -13,6 +13,7 @@
(CASSANDRA-4128)
  * Move CfDef and KsDef validation out of thrift (CASSANDRA-4037)
  * Fix get_paged_slice (CASSANDRA-4136)
+ * CQL3: Support slice with exclusive start and stop (CASSANDRA-3785)
 Merged from 1.0:
  * add auto_snapshot option allowing disabling snapshot before drop/truncate
(CASSANDRA-3710)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d49113fa/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 5bcd37a..0485857 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -91,6 +91,7 @@ public class SelectStatement implements CQLStatement
 private Restriction keyRestriction;
 private final Restriction[] columnRestrictions;
 private final Map metadataRestrictions = 
new HashMap();
+private Restriction sliceRestriction;
 
 private static enum Bound
 {
@@ -323,10 +324,13 @@ public class SelectStatement implements CQLStatement
 
 private int getLimit()
 {
+// Internally, we don't support exclusive bounds for slices. Instead,
+// we query one more element if necessary and exclude
+int limit = sliceRestriction != null && 
!sliceRestriction.isInclusive(Bound.START) ? parameters.limit + 1 : 
parameters.limit;
 // For sparse, we'll end up merging all defined colums into the same 
CqlRow. Thus we should query up
 // to 'defined columns' * 'asked limit' to be sure to have enough 
columns. We'll trim after query if
 // this end being too much.
-return cfDef.isCompact ? parameters.limit : cfDef.metadata.size() * 
parameters.limit;
+return cfDef.isCompact ? limit : cfDef.metadata.size() * limit;
 }
 
 private boolean isKeyRange()
@@ -602,9 +606,20 @@ public class SelectStatement implements CQLStatement
 
 thriftColumns = new ArrayList();
 
-ByteBuffer[] components = cfDef.isComposite
-? 
((CompositeType)cfDef.cfm.comparator).split(c.name())
-: null;
+ByteBuffer[] components = null;
+
+if (cfDef.isComposite)
+{
+components = 
((CompositeType)cfDef.cfm.comparator).split(c.name());
+}
+else if (sliceRestriction != null)
+{
+// For dynamic CF, the column could be out of the 
requested bounds, filter here
+if (!sliceRestriction.isInclusive(Bound.START) && 
c.name().equals(sliceRestriction.bound(Bound.START).getByteBuffer(cfDef.cfm.comparator,
 variables)))
+continue;
+if (!sliceRestriction.isInclusive(Bound.END) && 
c.name().equals(sliceRestriction.bound(Bound.END).getByteBuffer(cfDef.cfm.comparator,
 variables)))
+continue;
+}
 
 // Respect selection order
 for (Pair p : 
selection)
@@ -711,9 +726,9 @@ public class SelectStatement implements CQLStatement
 if (parameters.isColumnsReversed)
 Collections.reverse(cqlRows);
 
+// Trim result if needed to respect the limit
 cqlRows = cqlRows.size() > parameters.limit ? cqlRows.subList(0, 
parameters.limit) : cqlRows;
 
-// Trim result if needed to respect the limit
 return cqlRows;
 }
 
@@ -880,14 +895,26 @@

[1/3] git commit: Merge branch 'cassandra-1.1.0' into cassandra-1.1

2012-04-11 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.1 6564b33c7 -> de80c6c6d


Merge branch 'cassandra-1.1.0' into cassandra-1.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/de80c6c6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/de80c6c6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/de80c6c6

Branch: refs/heads/cassandra-1.1
Commit: de80c6c6d1d0ff11837bb1f3467b2902c5b841ff
Parents: 6564b33 d49113f
Author: Sylvain Lebresne 
Authored: Wed Apr 11 17:23:54 2012 +0200
Committer: Sylvain Lebresne 
Committed: Wed Apr 11 17:23:54 2012 +0200

--
 CHANGES.txt|2 +
 .../cassandra/cql3/statements/SelectStatement.java |   64 +--
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   10 +-
 .../org/apache/cassandra/db/RangeSliceCommand.java |   23 +++--
 .../apache/cassandra/db/filter/ExtendedFilter.java |   30 --
 .../cassandra/db/filter/SliceQueryFilter.java  |3 +-
 .../cassandra/db/index/keys/KeysSearcher.java  |2 +-
 .../cassandra/service/RangeSliceVerbHandler.java   |2 +-
 .../org/apache/cassandra/service/StorageProxy.java |3 +-
 .../apache/cassandra/thrift/CassandraServer.java   |2 +-
 .../apache/cassandra/db/ColumnFamilyStoreTest.java |   92 +--
 11 files changed, 186 insertions(+), 47 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/de80c6c6/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de80c6c6/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index b0d067c,0485857..b7c12dd
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@@ -600,11 -604,22 +605,22 @@@ public class SelectStatement implement
  if (c.isMarkedForDelete())
  continue;
  
 -thriftColumns = new ArrayList();
 +thriftColumns = new ArrayList(selection.size());
  
- ByteBuffer[] components = cfDef.isComposite
- ? 
((CompositeType)cfDef.cfm.comparator).split(c.name())
- : null;
+ ByteBuffer[] components = null;
+ 
+ if (cfDef.isComposite)
+ {
+ components = 
((CompositeType)cfDef.cfm.comparator).split(c.name());
+ }
+ else if (sliceRestriction != null)
+ {
+ // For dynamic CF, the column could be out of the 
requested bounds, filter here
+ if (!sliceRestriction.isInclusive(Bound.START) && 
c.name().equals(sliceRestriction.bound(Bound.START).getByteBuffer(cfDef.cfm.comparator,
 variables)))
+ continue;
+ if (!sliceRestriction.isInclusive(Bound.END) && 
c.name().equals(sliceRestriction.bound(Bound.END).getByteBuffer(cfDef.cfm.comparator,
 variables)))
+ continue;
+ }
  
  // Respect selection order
  for (Pair p : 
selection)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de80c6c6/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de80c6c6/src/java/org/apache/cassandra/service/RangeSliceVerbHandler.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de80c6c6/src/java/org/apache/cassandra/service/StorageProxy.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de80c6c6/src/java/org/apache/cassandra/thrift/CassandraServer.java
--



git commit: CQL3: Support slice with exclusive start and stop

2012-04-11 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.1.0 fc7e86404 -> d49113fad


CQL3: Support slice with exclusive start and stop

patch by slebresne; reviewed by jbellis for CASSANDRA-3785


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d49113fa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d49113fa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d49113fa

Branch: refs/heads/cassandra-1.1.0
Commit: d49113fad1bf7a15ca052156b872b9bdc01b6d73
Parents: fc7e864
Author: Sylvain Lebresne 
Authored: Thu Jan 26 10:49:56 2012 +0100
Committer: Sylvain Lebresne 
Committed: Wed Apr 11 17:17:59 2012 +0200

--
 CHANGES.txt|1 +
 .../cassandra/cql3/statements/SelectStatement.java |   61 --
 2 files changed, 53 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d49113fa/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index df030b9..4ef47c5 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -13,6 +13,7 @@
(CASSANDRA-4128)
  * Move CfDef and KsDef validation out of thrift (CASSANDRA-4037)
  * Fix get_paged_slice (CASSANDRA-4136)
+ * CQL3: Support slice with exclusive start and stop (CASSANDRA-3785)
 Merged from 1.0:
  * add auto_snapshot option allowing disabling snapshot before drop/truncate
(CASSANDRA-3710)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d49113fa/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 5bcd37a..0485857 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -91,6 +91,7 @@ public class SelectStatement implements CQLStatement
 private Restriction keyRestriction;
 private final Restriction[] columnRestrictions;
 private final Map metadataRestrictions = 
new HashMap();
+private Restriction sliceRestriction;
 
 private static enum Bound
 {
@@ -323,10 +324,13 @@ public class SelectStatement implements CQLStatement
 
 private int getLimit()
 {
+// Internally, we don't support exclusive bounds for slices. Instead,
+// we query one more element if necessary and exclude
+int limit = sliceRestriction != null && 
!sliceRestriction.isInclusive(Bound.START) ? parameters.limit + 1 : 
parameters.limit;
 // For sparse, we'll end up merging all defined colums into the same 
CqlRow. Thus we should query up
 // to 'defined columns' * 'asked limit' to be sure to have enough 
columns. We'll trim after query if
 // this end being too much.
-return cfDef.isCompact ? parameters.limit : cfDef.metadata.size() * 
parameters.limit;
+return cfDef.isCompact ? limit : cfDef.metadata.size() * limit;
 }
 
 private boolean isKeyRange()
@@ -602,9 +606,20 @@ public class SelectStatement implements CQLStatement
 
 thriftColumns = new ArrayList();
 
-ByteBuffer[] components = cfDef.isComposite
-? 
((CompositeType)cfDef.cfm.comparator).split(c.name())
-: null;
+ByteBuffer[] components = null;
+
+if (cfDef.isComposite)
+{
+components = 
((CompositeType)cfDef.cfm.comparator).split(c.name());
+}
+else if (sliceRestriction != null)
+{
+// For dynamic CF, the column could be out of the 
requested bounds, filter here
+if (!sliceRestriction.isInclusive(Bound.START) && 
c.name().equals(sliceRestriction.bound(Bound.START).getByteBuffer(cfDef.cfm.comparator,
 variables)))
+continue;
+if (!sliceRestriction.isInclusive(Bound.END) && 
c.name().equals(sliceRestriction.bound(Bound.END).getByteBuffer(cfDef.cfm.comparator,
 variables)))
+continue;
+}
 
 // Respect selection order
 for (Pair p : 
selection)
@@ -711,9 +726,9 @@ public class SelectStatement implements CQLStatement
 if (parameters.isColumnsReversed)
 Collections.reverse(cqlRows);
 
+// Trim result if needed to respect the limit
 cqlRows = cqlRows.size() > parameters.limit ? cqlRows.subList(0, 
parameters.limit) : cqlRows;
 
-// Trim result if needed to

git commit: Fix get_paged_slice

2012-04-11 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.1.0 c14e266eb -> fc7e86404


Fix get_paged_slice

patch by slebresne; reviewed by jbellis for CASSANDRA-4136


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fc7e8640
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fc7e8640
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fc7e8640

Branch: refs/heads/cassandra-1.1.0
Commit: fc7e86404a27963071e416ff4deb0c7143e68bfc
Parents: c14e266
Author: Sylvain Lebresne 
Authored: Wed Apr 11 16:31:36 2012 +0200
Committer: Sylvain Lebresne 
Committed: Wed Apr 11 16:31:36 2012 +0200

--
 CHANGES.txt|1 +
 .../cassandra/cql3/statements/SelectStatement.java |3 +-
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   10 +-
 .../org/apache/cassandra/db/RangeSliceCommand.java |   23 +++--
 .../apache/cassandra/db/filter/ExtendedFilter.java |   30 --
 .../cassandra/db/filter/SliceQueryFilter.java  |3 +-
 .../cassandra/db/index/keys/KeysSearcher.java  |2 +-
 .../cassandra/service/RangeSliceVerbHandler.java   |2 +-
 .../org/apache/cassandra/service/StorageProxy.java |3 +-
 .../apache/cassandra/thrift/CassandraServer.java   |2 +-
 .../apache/cassandra/db/ColumnFamilyStoreTest.java |   92 +--
 11 files changed, 133 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fc7e8640/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 26315be..df030b9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -12,6 +12,7 @@
  * fix terminination of the stress.java when errors were encountered
(CASSANDRA-4128)
  * Move CfDef and KsDef validation out of thrift (CASSANDRA-4037)
+ * Fix get_paged_slice (CASSANDRA-4136)
 Merged from 1.0:
  * add auto_snapshot option allowing disabling snapshot before drop/truncate
(CASSANDRA-3710)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fc7e8640/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index b95d6ba..5bcd37a 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -285,7 +285,8 @@ public class SelectStatement implements CQLStatement
 bounds,
 
expressions,
 getLimit(),
-true), // 
limit by columns, not keys
+true, // 
limit by columns, not keys
+false),
   parameters.consistencyLevel);
 }
 catch (IOException e)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fc7e8640/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index cea2fee..a4e2e51 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1353,12 +1353,12 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 public List getRangeSlice(ByteBuffer superColumn, final 
AbstractBounds range, int maxResults, IFilter columnFilter, 
List rowFilter)
 {
-return getRangeSlice(superColumn, range, maxResults, columnFilter, 
rowFilter, false);
+return getRangeSlice(superColumn, range, maxResults, columnFilter, 
rowFilter, false, false);
 }
 
-public List getRangeSlice(ByteBuffer superColumn, final 
AbstractBounds range, int maxResults, IFilter columnFilter, 
List rowFilter, boolean maxIsColumns)
+public List getRangeSlice(ByteBuffer superColumn, final 
AbstractBounds range, int maxResults, IFilter columnFilter, 
List rowFilter, boolean maxIsColumns, boolean isPaging)
 {
-return filter(getSequentialIterator(superColumn, range, columnFilter), 
ExtendedFilter.create(this, columnFilter, rowFilter, maxResults, maxIsColumns));
+return filter(getSequentialIterator(superColumn, range, columnFilter), 
ExtendedFilter.create(this, columnFilter, rowFilter, maxResults, maxIsColumns, 
isPagi

[jira] [Issue Comment Edited] (CASSANDRA-3883) CFIF WideRowIterator only returns batch size columns

2012-04-11 Thread Jonathan Ellis (Issue Comment Edited) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13251625#comment-13251625
 ] 

Jonathan Ellis edited comment on CASSANDRA-3883 at 4/11/12 2:29 PM:


https://github.com/jbellis/cassandra/branches/3883-6 is up, with CASSANDRA-4136 
incorporated.  the results look good for the word_count test, as posted on 4136.

(the other minor change with -6 is adding conf/ to the classpath for log4j.)

  was (Author: jbellis):
https://github.com/jbellis/cassandra/branches/3883-6 is up, with 
CASSANDRA-4136 incorporated.  the results look good for the word_count test, as 
posted on 4136.
  
> CFIF WideRowIterator only returns batch size columns
> 
>
> Key: CASSANDRA-3883
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3883
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
>Affects Versions: 1.1.0
>Reporter: Brandon Williams
>Assignee: Jonathan Ellis
> Fix For: 1.1.0
>
> Attachments: 3883-v1.txt, 3883-v2.txt, 3883-v3.txt
>
>
> Most evident with the word count, where there are 1250 'word1' items in two 
> rows (1000 in one, 250 in another) and it counts 198 with the batch size set 
> to 99.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3883) CFIF WideRowIterator only returns batch size columns

2012-04-11 Thread Jonathan Ellis (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13251625#comment-13251625
 ] 

Jonathan Ellis commented on CASSANDRA-3883:
---

https://github.com/jbellis/cassandra/branches/3883-6 is up, with CASSANDRA-4136 
incorporated.  the results look good for the word_count test, as posted on 4136.

> CFIF WideRowIterator only returns batch size columns
> 
>
> Key: CASSANDRA-3883
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3883
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
>Affects Versions: 1.1.0
>Reporter: Brandon Williams
>Assignee: Jonathan Ellis
> Fix For: 1.1.0
>
> Attachments: 3883-v1.txt, 3883-v2.txt, 3883-v3.txt
>
>
> Most evident with the word count, where there are 1250 'word1' items in two 
> rows (1000 in one, 250 in another) and it counts 198 with the batch size set 
> to 99.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-4032) memtable.updateLiveRatio() is blocking, causing insane latencies for writes

2012-04-11 Thread Sylvain Lebresne (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13251622#comment-13251622
 ] 

Sylvain Lebresne commented on CASSANDRA-4032:
-

+1 with two nits:
* In the comment "to a maximum of one per CFS using this map" could have a 
s/map/set/
* Not sure if there was an intent in changing the maximumPoolSize of the 
meterExecutor to Integer.MAX_VALUE. As it stands, with an unbounded queue the 
maximumPoolSize is ignored, so that doesn't really matter, but just wanted to 
mention it.
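
On the second nit, the reason it doesn't matter: ThreadPoolExecutor only creates threads
beyond corePoolSize when the work queue rejects a task, and an unbounded
LinkedBlockingQueue never rejects, so maximumPoolSize is effectively ignored. A small
standalone illustration (not Cassandra code, names are hypothetical):

{code}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class UnboundedQueueDemo
{
    public static void main(String[] args) throws InterruptedException
    {
        // corePoolSize = 1, maximumPoolSize = Integer.MAX_VALUE, unbounded work queue
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                1, Integer.MAX_VALUE, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());

        // Pile up tasks that each block for a second.
        for (int i = 0; i < 100; i++)
        {
            executor.execute(new Runnable()
            {
                public void run()
                {
                    try { Thread.sleep(1000); } catch (InterruptedException e) { }
                }
            });
        }

        Thread.sleep(500);
        // The unbounded queue accepts every task, so no thread beyond the single
        // core thread is ever created: this prints "pool size = 1".
        System.out.println("pool size = " + executor.getPoolSize());
        executor.shutdownNow();
    }
}
{code}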

> memtable.updateLiveRatio() is blocking, causing insane latencies for writes
> ---
>
> Key: CASSANDRA-4032
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4032
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Peter Schuller
>Assignee: Peter Schuller
> Fix For: 1.1.0
>
> Attachments: 4032-v3.txt, 4032-v4.txt, CASSANDRA-4032-1.1.0-v1.txt, 
> CASSANDRA-4032-1.1.0-v2.txt
>
>
> Reproduce by just starting a fresh cassandra with a heap large enough for 
> live ratio calculation (which is {{O(n)}}) to be insanely slow, and then 
> running {{./bin/stress -d host -n1 -t10}}. With a large enough heap 
> and default flushing behavior this is bad enough that stress gets timeouts.
> Example ("blocked for" is my debug log added around submit()):
> {code}
>  INFO [MemoryMeter:1] 2012-03-09 15:07:30,857 Memtable.java (line 198) 
> CFS(Keyspace='Keyspace1', ColumnFamily='Standard1') liveRatio is 
> 8.89014894083727 (just-counted was 8.89014894083727).  calculation took 
> 28273ms for 1320245 columns
>  WARN [MutationStage:8] 2012-03-09 15:07:30,857 Memtable.java (line 209) 
> submit() blocked for: 231135
> {code}
> The calling code was written assuming a RejectedExecutionException is thrown, 
> but it's not, because {{DebuggableThreadPoolExecutor}} installs a blocking 
> rejection handler.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-4136) get_paged_slices doesn't reset startColumn after first row

2012-04-11 Thread Jonathan Ellis (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13251619#comment-13251619
 ] 

Jonathan Ellis commented on CASSANDRA-4136:
---

With this and the 3883 patches I get

{noformat}
$ cat /tmp/word_count5/part-r-0
0   250
1   250
2   250
3   250
word1   2002
word2   1
{noformat}

which is the expected result.

+1

> get_paged_slices doesn't reset startColumn after first row
> --
>
> Key: CASSANDRA-4136
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4136
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.1.0
>Reporter: Jonathan Ellis
>Assignee: Sylvain Lebresne
>Priority: Critical
> Fix For: 1.1.0
>
> Attachments: 4136.txt
>
>
> As an example, consider the WordCount example (see CASSANDRA-3883).  
> WordCountSetup inserts 1000 rows, each with three columns: text3, text4, 
> int1.  (Some other miscellaneous columns are inserted in a few rows, but we 
> can ignore them here.)
> Paging through with get_paged_slice calls with a count of 99, CFRecordReader 
> will first retrieve 33 rows, the last of which we will call K.  Then it will 
> attempt to fetch 99 more columns, starting with row K column text4.
> The bug is that it will only fetch text4 for *each* subsequent row K+i, 
> instead of returning (K, text4), (K+1, int1), (K+1, text3), (K+1, text4), etc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4136) get_paged_slices doesn't reset startColumn after first row

2012-04-11 Thread Sylvain Lebresne (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-4136:


Attachment: 4136.txt

Attached patch adds the ability to do paging through multiple rows. The support 
added by the patch is limited to what get_paged_slices requires (in particular 
using a SliceQueryFilter where finish != "" is not supported with that new 
option). The patch contains a unit test.
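
To illustrate the paging behavior this enables: only the first row of a page honors the
caller's start column, and every subsequent row has to be read from its first column
again. A tiny standalone sketch of that reset (hypothetical names, plain sorted strings
standing in for the real key, column and filter types):

{code}
import java.util.Arrays;
import java.util.List;

public class PagedSliceSketch
{
    // Emit up to 'count' (row, column) pairs, resetting the start column to ""
    // (i.e. the beginning of the row) after the first row of the page.
    static void page(List<String> rowKeys, List<String> columns, String startColumn, int count)
    {
        int fetched = 0;
        String start = startColumn;
        for (String key : rowKeys)
        {
            for (String column : columns)
            {
                if (column.compareTo(start) < 0)
                    continue;            // before this row's start column
                if (fetched == count)
                    return;              // page is full
                System.out.println("(" + key + ", " + column + ")");
                fetched++;
            }
            start = "";                  // reset: later rows start from their first column
        }
    }

    public static void main(String[] args)
    {
        // Column names as in the WordCount example from the issue description:
        // prints (K, text4), (K+1, int1), (K+1, text3), (K+1, text4), (K+2, int1)
        page(Arrays.asList("K", "K+1", "K+2"), Arrays.asList("int1", "text3", "text4"), "text4", 5);
    }
}
{code}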

> get_paged_slices doesn't reset startColumn after first row
> --
>
> Key: CASSANDRA-4136
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4136
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.1.0
>Reporter: Jonathan Ellis
>Assignee: Sylvain Lebresne
>Priority: Critical
> Fix For: 1.1.0
>
> Attachments: 4136.txt
>
>
> As an example, consider the WordCount example (see CASSANDRA-3883).  
> WordCountSetup inserts 1000 rows, each with three columns: text3, text4, 
> int1.  (Some other miscellaneous columns are inserted in a few rows, but we 
> can ignore them here.)
> Paging through with get_paged_slice calls with a count of 99, CFRecordReader 
> will first retrieve 33 rows, the last of which we will call K.  Then it will 
> attempt to fetch 99 more columns, starting with row K column text4.
> The bug is that it will only fetch text4 for *each* subsequent row K+i, 
> instead of returning (K, text4), (K+1, int1), (K+1, text3), (K+1, text4), etc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[16/20] Improve picking of comparator for composite column with CQL3

2012-04-11 Thread jbellis
http://git-wip-us.apache.org/repos/asf/cassandra/blob/0d1d3bca/src/java/org/apache/cassandra/config/ColumnDefinition.java
--
diff --git a/src/java/org/apache/cassandra/config/ColumnDefinition.java 
b/src/java/org/apache/cassandra/config/ColumnDefinition.java
index f6d8209..bfd9481 100644
--- a/src/java/org/apache/cassandra/config/ColumnDefinition.java
+++ b/src/java/org/apache/cassandra/config/ColumnDefinition.java
@@ -48,39 +48,51 @@ public class ColumnDefinition
 private Map index_options;
 private String index_name;
 
-public ColumnDefinition(ByteBuffer name, AbstractType validator, 
IndexType index_type, Map index_options, String index_name)
+/*
+ * If the column comparator is a composite type, indicates to which
+ * component this definition refers to. If null, the definition refers to
+ * the full column name.
+ */
+public final Integer componentIndex;
+
+public ColumnDefinition(ByteBuffer name, AbstractType validator, 
IndexType index_type, Map index_options, String index_name, 
Integer componentIndex)
 {
 assert name != null && validator != null;
 this.name = name;
 this.index_name = index_name;
 this.validator = validator;
-
+this.componentIndex = componentIndex;
 this.setIndexType(index_type, index_options);
 }
 
-public static ColumnDefinition ascii(String name)
+public static ColumnDefinition ascii(String name, Integer cidx)
+{
+return new ColumnDefinition(ByteBufferUtil.bytes(name), 
AsciiType.instance, null, null, null, cidx);
+}
+
+public static ColumnDefinition bool(String name, Integer cidx)
 {
-return new ColumnDefinition(ByteBufferUtil.bytes(name), 
AsciiType.instance, null, null, null);
+return new ColumnDefinition(ByteBufferUtil.bytes(name), 
BooleanType.instance, null, null, null, cidx);
 }
 
-public static ColumnDefinition bool(String name)
+public static ColumnDefinition utf8(String name, Integer cidx)
 {
-return new ColumnDefinition(ByteBufferUtil.bytes(name), 
BooleanType.instance, null, null, null);
+return new ColumnDefinition(ByteBufferUtil.bytes(name), 
UTF8Type.instance, null, null, null, cidx);
 }
 
-public static ColumnDefinition utf8(String name)
+public static ColumnDefinition int32(String name, Integer cidx)
 {
-return new ColumnDefinition(ByteBufferUtil.bytes(name), 
UTF8Type.instance, null, null, null);
+return new ColumnDefinition(ByteBufferUtil.bytes(name), 
Int32Type.instance, null, null, null, cidx);
 }
 
-public static ColumnDefinition int32(String name)
+public static ColumnDefinition double_(String name, Integer cidx)
 {
-return new ColumnDefinition(ByteBufferUtil.bytes(name), 
Int32Type.instance, null, null, null);
+return new ColumnDefinition(ByteBufferUtil.bytes(name), 
DoubleType.instance, null, null, null, cidx);
 }
 
-public static ColumnDefinition double_(String name)
+public ColumnDefinition clone()
 {
-return new ColumnDefinition(ByteBufferUtil.bytes(name), 
DoubleType.instance, null, null, null);
+return new ColumnDefinition(name, validator, index_type, 
index_options, index_name, componentIndex);
 }
 
 @Override
@@ -100,6 +112,8 @@ public class ColumnDefinition
 return false;
 if (!name.equals(that.name))
 return false;
+if (componentIndex != null ? 
!componentIndex.equals(that.componentIndex) : that.componentIndex != null)
+return false;
 return !(validator != null ? !validator.equals(that.validator) : 
that.validator != null);
 }
 
@@ -111,6 +125,7 @@ public class ColumnDefinition
 result = 31 * result + (index_type != null ? index_type.hashCode() : 
0);
 result = 31 * result + (index_options != null ? 
index_options.hashCode() : 0);
 result = 31 * result + (index_name != null ? index_name.hashCode() : 
0);
+result = 31 * result + (componentIndex != null ? 
componentIndex.hashCode() : 0);
 return result;
 }
 
@@ -136,7 +151,8 @@ public class ColumnDefinition
 
TypeParser.parse(thriftColumnDef.validation_class),
 thriftColumnDef.index_type,
 thriftColumnDef.index_options,
-thriftColumnDef.index_name);
+thriftColumnDef.index_name,
+null);
 }
 
 public static Map fromThrift(List 
thriftDefs) throws ConfigurationException
@@ -181,6 +197,8 @@ public class ColumnDefinition
: 
Column.create(json(index_options), timestamp, cfName, 
comparator.getString(name), "index_options"));
 cf.addColumn(index_name == null ? DeletedColumn.create(ldt, t

[9/20] git commit: add auto_snapshot option allowing disabling snapshot before drop/truncate patch by dbrosius; reviewed by jbellis for CASSANDRA-3710

2012-04-11 Thread jbellis
add auto_snapshot option allowing disabling snapshot before drop/truncate
patch by dbrosius; reviewed by jbellis for CASSANDRA-3710


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/142e8c1a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/142e8c1a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/142e8c1a

Branch: refs/heads/cassandra-1.0
Commit: 142e8c1a2967947a3ed5314e4577f4be3cfe4bd3
Parents: 9b746da
Author: Jonathan Ellis 
Authored: Wed Apr 11 09:01:56 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 09:01:56 2012 -0500

--
 CHANGES.txt|2 ++
 conf/cassandra.yaml|6 ++
 src/java/org/apache/cassandra/config/Config.java   |1 +
 .../cassandra/config/DatabaseDescriptor.java   |4 
 .../org/apache/cassandra/db/ColumnFamilyStore.java |3 ++-
 .../cassandra/db/migration/DropColumnFamily.java   |3 ++-
 .../cassandra/db/migration/DropKeyspace.java   |4 +++-
 7 files changed, 20 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/142e8c1a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1349299..e2fc4de 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 1.0.10
+ * add auto_snapshot option allowing disabling snapshot before drop/truncate
+   (CASSANDRA-3710)
  * allow short snitch names (CASSANDRA-4130)
  * cqlsh: guess correct version of Python for Arch Linux (CASSANDRA-4090)
  * (CLI) properly handle quotes in create/update keyspace commands 
(CASSANDRA-4129)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/142e8c1a/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 207d8f9..3249747 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -256,6 +256,12 @@ incremental_backups: false
 # is a data format change.
 snapshot_before_compaction: false
 
+# Whether or not a snapshot is taken of the data before keyspace truncation
+# or dropping of column families. The STRONGLY advised default of true 
+# should be used to provide data safety. If you set this flag to false, you 
will
+# lose data on truncation or drop.
+auto_snapshot: true
+
 # Add column indexes to a row after its contents reach this size.
 # Increase if your column values are large, or if you have a very large
 # number of columns.  The competing causes are, Cassandra has to

http://git-wip-us.apache.org/repos/asf/cassandra/blob/142e8c1a/src/java/org/apache/cassandra/config/Config.java
--
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index 7cff37f..e079590 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -80,6 +80,7 @@ public class Config
 public Integer thrift_max_message_length_in_mb = 16;
 public Integer thrift_framed_transport_size_in_mb = 15;
 public Boolean snapshot_before_compaction = false;
+public Boolean auto_snapshot = true;
 
 /* if the size of columns or super-columns are more than this, indexing 
will kick in */
 public Integer column_index_size_in_kb = 64;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/142e8c1a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 667c90a..09686d2 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -900,6 +900,10 @@ public class DatabaseDescriptor
 return conf.snapshot_before_compaction;
 }
 
+public static boolean isAutoSnapshot() {
+return conf.auto_snapshot;
+}
+
 public static boolean isAutoBootstrap()
 {
 return conf.auto_bootstrap;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/142e8c1a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 9c790ae..b7d74bc 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1653,7 +1653,8 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 throw new AssertionError(e);
   

[2/20] git commit: merge from 1.1.0

2012-04-11 Thread jbellis
merge from 1.1.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6564b33c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6564b33c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6564b33c

Branch: refs/heads/cassandra-1.1
Commit: 6564b33c76df99152f48f2408938c4af439f9344
Parents: 092dc58 c14e266
Author: Jonathan Ellis 
Authored: Wed Apr 11 09:08:06 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 09:10:37 2012 -0500

--
 CHANGES.txt|3 +++
 conf/cassandra.yaml|6 ++
 .../org/apache/cassandra/thrift/Cassandra.java |4 
 .../org/apache/cassandra/thrift/Constants.java |2 +-
 src/java/org/apache/cassandra/config/Config.java   |1 +
 .../cassandra/config/DatabaseDescriptor.java   |4 
 .../org/apache/cassandra/db/ColumnFamilyStore.java |3 ++-
 src/java/org/apache/cassandra/db/DefsTable.java|6 --
 8 files changed, 25 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6564b33c/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6564b33c/conf/cassandra.yaml
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6564b33c/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java
--
diff --cc interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java
index d68f1ca,d9b51a5..a10f936
--- a/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java
+++ b/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java
@@@ -17817,6 -17733,6 +17817,8 @@@ public class Cassandra 
  
  private void readObject(java.io.ObjectInputStream in) throws 
java.io.IOException, ClassNotFoundException {
try {
++// it doesn't seem like you should have to do this, but java 
serialization is wacky, and doesn't call the default constructor.
++__isset_bit_vector = new BitSet(1);
  read(new org.apache.thrift.protocol.TCompactProtocol(new 
org.apache.thrift.transport.TIOStreamTransport(in)));
} catch (org.apache.thrift.TException te) {
  throw new java.io.IOException(te);
@@@ -34876,6 -34146,6 +34878,8 @@@
  
  private void readObject(java.io.ObjectInputStream in) throws 
java.io.IOException, ClassNotFoundException {
try {
++// it doesn't seem like you should have to do this, but java 
serialization is wacky, and doesn't call the default constructor.
++__isset_bit_vector = new BitSet(1);
  read(new org.apache.thrift.protocol.TCompactProtocol(new 
org.apache.thrift.transport.TIOStreamTransport(in)));
} catch (org.apache.thrift.TException te) {
  throw new java.io.IOException(te);
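
The in-line comment in the generated code is the whole explanation: Java's default
deserialization does not run a Serializable class's constructor or field
initializers, so a transient field such as __isset_bit_vector comes back null
unless readObject() rebuilds it by hand. A standalone sketch of the same pattern
(illustrative only, not part of the Thrift-generated class):

    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.Serializable;
    import java.util.BitSet;

    class Holder implements Serializable
    {
        // transient fields are not written to the stream, and this initializer
        // does not run again when the object is deserialized
        private transient BitSet flags = new BitSet(1);

        private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException
        {
            flags = new BitSet(1);   // re-create the field by hand, as the generated code does
            in.defaultReadObject();  // then restore the ordinary (non-transient) state
        }
    }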

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6564b33c/interface/thrift/gen-java/org/apache/cassandra/thrift/Constants.java
--
diff --cc interface/thrift/gen-java/org/apache/cassandra/thrift/Constants.java
index 270976b,270976b..723562b
--- a/interface/thrift/gen-java/org/apache/cassandra/thrift/Constants.java
+++ b/interface/thrift/gen-java/org/apache/cassandra/thrift/Constants.java
@@@ -44,6 -44,6 +44,6 @@@ import org.slf4j.LoggerFactory
  
  public class Constants {
  
--  public static final String VERSION = "19.30.0";
++  public static final String VERSION = "19.31.0";
  
  }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6564b33c/src/java/org/apache/cassandra/config/Config.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6564b33c/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6564b33c/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6564b33c/src/java/org/apache/cassandra/db/DefsTable.java
--



[5/20] git commit: merge from 1.0

2012-04-11 Thread jbellis
merge from 1.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c14e266e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c14e266e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c14e266e

Branch: refs/heads/trunk
Commit: c14e266eb383a4e10bb945b864ad7d24ca5eb98b
Parents: 0d1d3bc 142e8c1
Author: Jonathan Ellis 
Authored: Wed Apr 11 09:05:33 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 09:05:33 2012 -0500

--
 CHANGES.txt|3 +
 build.xml  |   49 ++-
 conf/cassandra.yaml|6 ++
 src/java/org/apache/cassandra/config/Config.java   |1 +
 .../cassandra/config/DatabaseDescriptor.java   |4 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java |3 +-
 src/java/org/apache/cassandra/db/DefsTable.java|6 +-
 tools/stress/bin/stress|   16 +---
 tools/stress/bin/stress.bat|4 +-
 tools/stress/bin/stressd   |   16 +---
 tools/stress/build.xml |   70 ---
 11 files changed, 69 insertions(+), 109 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c14e266e/CHANGES.txt
--
diff --cc CHANGES.txt
index 43ee218,e2fc4de..26315be
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,54 -1,8 +1,57 @@@
 -1.0.10
 +1.1-dev
 + * fix read_repair_chance to really default to 0.1 in the cli (CASSANDRA-4114)
 + * Adds caching and bloomFilterFpChange to CQL options (CASSANDRA-4042)
 + * Adds posibility to autoconfigure size of the KeyCache (CASSANDRA-4087)
 + * fix KEYS index from skipping results (CASSANDRA-3996)
 + * Remove sliced_buffer_size_in_kb dead option (CASSANDRA-4076)
 + * make loadNewSStable preserve sstable version (CASSANDRA-4077)
 + * Respect 1.0 cache settings as much as possible when upgrading 
 +   (CASSANDRA-4088)
 + * relax path length requirement for sstable files when upgrading on 
 +   non-Windows platforms (CASSANDRA-4110)
 + * fix terminination of the stress.java when errors were encountered
 +   (CASSANDRA-4128)
 + * Move CfDef and KsDef validation out of thrift (CASSANDRA-4037)
 +Merged from 1.0:
+  * add auto_snapshot option allowing disabling snapshot before drop/truncate
+(CASSANDRA-3710)
   * allow short snitch names (CASSANDRA-4130)
 +
++
 +1.1-beta2
 + * rename loaded sstables to avoid conflicts with local snapshots
 +   (CASSANDRA-3967)
 + * start hint replay as soon as FD notifies that the target is back up
 +   (CASSANDRA-3958)
 + * avoid unproductive deserializing of cached rows during compaction
 +   (CASSANDRA-3921)
 + * fix concurrency issues with CQL keyspace creation (CASSANDRA-3903)
 + * Show Effective Owership via Nodetool ring  (CASSANDRA-3412)
 + * Update ORDER BY syntax for CQL3 (CASSANDRA-3925)
 + * Fix BulkRecordWriter to not throw NPE if reducer gets no map data from 
Hadoop (CASSANDRA-3944)
 + * Fix bug with counters in super columns (CASSANDRA-3821)
 + * Remove deprecated merge_shard_chance (CASSANDRA-3940)
 + * add a convenient way to reset a node's schema (CASSANDRA-2963)
 + * fix for intermittent SchemaDisagreementException (CASSANDRA-3884)
 + * ignore deprecated KsDef/CfDef/ColumnDef fields in native schema 
(CASSANDRA-3963)
 + * CLI to report when unsupported column_metadata pair was given 
(CASSANDRA-3959)
 + * reincarnate removed and deprecated KsDef/CfDef attributes (CASSANDRA-3953)
 + * Fix race between writes and read for cache (CASSANDRA-3862)
 + * perform static initialization of StorageProxy on start-up (CASSANDRA-3797)
 + * support trickling fsync() on writes (CASSANDRA-3950)
 + * expose counters for unavailable/timeout exceptions given to thrift clients 
(CASSANDRA-3671)
 + * avoid quadratic startup time in LeveledManifest (CASSANDRA-3952)
 + * Add type information to new schema_ columnfamilies and remove thrift
 +   serialization for schema (CASSANDRA-3792)
 + * add missing column validator options to the CLI help (CASSANDRA-3926)
 + * skip reading saved key cache if CF's caching strategy is NONE or ROWS_ONLY 
(CASSANDRA-3954)
 + * Unify migration code (CASSANDRA-4017)
 +Merged from 1.0:
   * cqlsh: guess correct version of Python for Arch Linux (CASSANDRA-4090)
 +
 +
 +1.0.9
 +===
   * (CLI) properly handle quotes in create/update keyspace commands 
(CASSANDRA-4129)
  
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c14e266e/build.xml
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c14e266e/conf/cassandra.yaml
--

http://git-wip

[13/20] git commit: Add stress tool to binaries patch by Vijay; reviewed by driftx for CASSANDRA-4103

2012-04-11 Thread jbellis
Add stress tool to binaries
patch by Vijay; reviewed by driftx for CASSANDRA-4103


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9b746daf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9b746daf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9b746daf

Branch: refs/heads/cassandra-1.1.0
Commit: 9b746daf7f2d0725b6ebe678dfbedb303b71bf57
Parents: 0162447
Author: Vijay Parthasarathy 
Authored: Mon Apr 9 17:09:53 2012 -0700
Committer: Vijay Parthasarathy 
Committed: Mon Apr 9 17:09:53 2012 -0700

--
 build.xml   |   49 +--
 tools/stress/bin/stress |   16 +
 tools/stress/bin/stress.bat |4 +--
 tools/stress/bin/stressd|   16 +
 tools/stress/build.xml  |   70 --
 5 files changed, 49 insertions(+), 106 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9b746daf/build.xml
--
diff --git a/build.xml b/build.xml
index ebd80f6..5cf96cb 100644
--- a/build.xml
+++ b/build.xml
[The build.xml hunks in this message (@@ -641,7 +641,7 @@, @@ -670,12 +670,34 @@
and @@ -737,6 +759,20 @@) are unreadable in the archive: the XML element markup
was stripped, leaving only bare +/- markers and blank lines. The one surviving
fragment shows the "Compile Cassandra classes" target's depends attribute
gaining "stress-build":
    depends="maven-ant-tasks-retrieve-build,avro-generate,build-subprojects,build-project,stress-build"
    description="Compile Cassandra classes"/>
A long run of removed lines follows whose content was stripped as well.]



[12/20] git commit: Add stress tool to binaries patch by Vijay; reviewed by driftx for CASSANDRA-4103

2012-04-11 Thread jbellis
Add stress tool to binaries
patch by Vijay; reviewed by driftx for CASSANDRA-4103


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9b746daf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9b746daf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9b746daf

Branch: refs/heads/trunk
Commit: 9b746daf7f2d0725b6ebe678dfbedb303b71bf57
Parents: 0162447
Author: Vijay Parthasarathy 
Authored: Mon Apr 9 17:09:53 2012 -0700
Committer: Vijay Parthasarathy 
Committed: Mon Apr 9 17:09:53 2012 -0700

--
 build.xml   |   49 +--
 tools/stress/bin/stress |   16 +
 tools/stress/bin/stress.bat |4 +--
 tools/stress/bin/stressd|   16 +
 tools/stress/build.xml  |   70 --
 5 files changed, 49 insertions(+), 106 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9b746daf/build.xml
--
diff --git a/build.xml b/build.xml
index ebd80f6..5cf96cb 100644
--- a/build.xml
+++ b/build.xml
[Same stripped build.xml diff as in [13/20] above: the XML markup was removed by
the archive, leaving only bare +/- markers.]



[14/20] git commit: Add stress tool to binaries patch by Vijay; reviewed by driftx for CASSANDRA-4103

2012-04-11 Thread jbellis
Add stress tool to binaries
patch by Vijay; reviewed by driftx for CASSANDRA-4103


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9b746daf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9b746daf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9b746daf

Branch: refs/heads/cassandra-1.1
Commit: 9b746daf7f2d0725b6ebe678dfbedb303b71bf57
Parents: 0162447
Author: Vijay Parthasarathy 
Authored: Mon Apr 9 17:09:53 2012 -0700
Committer: Vijay Parthasarathy 
Committed: Mon Apr 9 17:09:53 2012 -0700

--
 build.xml   |   49 +--
 tools/stress/bin/stress |   16 +
 tools/stress/bin/stress.bat |4 +--
 tools/stress/bin/stressd|   16 +
 tools/stress/build.xml  |   70 --
 5 files changed, 49 insertions(+), 106 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9b746daf/build.xml
--
diff --git a/build.xml b/build.xml
index ebd80f6..5cf96cb 100644
--- a/build.xml
+++ b/build.xml
[Same stripped build.xml diff as in [13/20] above: the XML markup was removed by
the archive, leaving only bare +/- markers.]



[18/20] Improve picking of comparator for composite column with CQL3

2012-04-11 Thread jbellis
http://git-wip-us.apache.org/repos/asf/cassandra/blob/0d1d3bca/src/java/org/apache/cassandra/config/ColumnDefinition.java
--
diff --git a/src/java/org/apache/cassandra/config/ColumnDefinition.java 
b/src/java/org/apache/cassandra/config/ColumnDefinition.java
index f6d8209..bfd9481 100644
--- a/src/java/org/apache/cassandra/config/ColumnDefinition.java
+++ b/src/java/org/apache/cassandra/config/ColumnDefinition.java
@@ -48,39 +48,51 @@ public class ColumnDefinition
 private Map index_options;
 private String index_name;
 
-public ColumnDefinition(ByteBuffer name, AbstractType validator, 
IndexType index_type, Map index_options, String index_name)
+/*
+ * If the column comparator is a composite type, indicates to which
+ * component this definition refers to. If null, the definition refers to
+ * the full column name.
+ */
+public final Integer componentIndex;
+
+public ColumnDefinition(ByteBuffer name, AbstractType validator, 
IndexType index_type, Map index_options, String index_name, 
Integer componentIndex)
 {
 assert name != null && validator != null;
 this.name = name;
 this.index_name = index_name;
 this.validator = validator;
-
+this.componentIndex = componentIndex;
 this.setIndexType(index_type, index_options);
 }
 
-public static ColumnDefinition ascii(String name)
+public static ColumnDefinition ascii(String name, Integer cidx)
+{
+return new ColumnDefinition(ByteBufferUtil.bytes(name), 
AsciiType.instance, null, null, null, cidx);
+}
+
+public static ColumnDefinition bool(String name, Integer cidx)
 {
-return new ColumnDefinition(ByteBufferUtil.bytes(name), 
AsciiType.instance, null, null, null);
+return new ColumnDefinition(ByteBufferUtil.bytes(name), 
BooleanType.instance, null, null, null, cidx);
 }
 
-public static ColumnDefinition bool(String name)
+public static ColumnDefinition utf8(String name, Integer cidx)
 {
-return new ColumnDefinition(ByteBufferUtil.bytes(name), 
BooleanType.instance, null, null, null);
+return new ColumnDefinition(ByteBufferUtil.bytes(name), 
UTF8Type.instance, null, null, null, cidx);
 }
 
-public static ColumnDefinition utf8(String name)
+public static ColumnDefinition int32(String name, Integer cidx)
 {
-return new ColumnDefinition(ByteBufferUtil.bytes(name), 
UTF8Type.instance, null, null, null);
+return new ColumnDefinition(ByteBufferUtil.bytes(name), 
Int32Type.instance, null, null, null, cidx);
 }
 
-public static ColumnDefinition int32(String name)
+public static ColumnDefinition double_(String name, Integer cidx)
 {
-return new ColumnDefinition(ByteBufferUtil.bytes(name), 
Int32Type.instance, null, null, null);
+return new ColumnDefinition(ByteBufferUtil.bytes(name), 
DoubleType.instance, null, null, null, cidx);
 }
 
-public static ColumnDefinition double_(String name)
+public ColumnDefinition clone()
 {
-return new ColumnDefinition(ByteBufferUtil.bytes(name), 
DoubleType.instance, null, null, null);
+return new ColumnDefinition(name, validator, index_type, 
index_options, index_name, componentIndex);
 }
 
 @Override
@@ -100,6 +112,8 @@ public class ColumnDefinition
 return false;
 if (!name.equals(that.name))
 return false;
+if (componentIndex != null ? 
!componentIndex.equals(that.componentIndex) : that.componentIndex != null)
+return false;
 return !(validator != null ? !validator.equals(that.validator) : 
that.validator != null);
 }
 
@@ -111,6 +125,7 @@ public class ColumnDefinition
 result = 31 * result + (index_type != null ? index_type.hashCode() : 
0);
 result = 31 * result + (index_options != null ? 
index_options.hashCode() : 0);
 result = 31 * result + (index_name != null ? index_name.hashCode() : 
0);
+result = 31 * result + (componentIndex != null ? 
componentIndex.hashCode() : 0);
 return result;
 }
 
@@ -136,7 +151,8 @@ public class ColumnDefinition
 
TypeParser.parse(thriftColumnDef.validation_class),
 thriftColumnDef.index_type,
 thriftColumnDef.index_options,
-thriftColumnDef.index_name);
+thriftColumnDef.index_name,
+null);
 }
 
 public static Map fromThrift(List 
thriftDefs) throws ConfigurationException
@@ -181,6 +197,8 @@ public class ColumnDefinition
: 
Column.create(json(index_options), timestamp, cfName, 
comparator.getString(name), "index_options"));
 cf.addColumn(index_name == null ? DeletedColumn.create(ldt, t
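
For the componentIndex field added above: with a composite comparator it records
which component of the column name a definition describes, while null keeps the
old meaning of "the full column name". A hedged illustration of the new
signatures (names and values invented, Cassandra imports assumed):

    // Comparator assumed to be CompositeType(UTF8Type, Int32Type):
    // this definition describes the second component of every column name.
    ColumnDefinition second = ColumnDefinition.int32("seq", 1);

    // componentIndex == null keeps the pre-patch behaviour: the definition
    // applies to the full, undecomposed column name.
    ColumnDefinition whole = new ColumnDefinition(ByteBufferUtil.bytes("raw"),
                                                  UTF8Type.instance,
                                                  null,   // no index type
                                                  null,   // no index options
                                                  null,   // no index name
                                                  null);  // whole column name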

[7/20] git commit: add auto_snapshot option allowing disabling snapshot before drop/truncate patch by dbrosius; reviewed by jbellis for CASSANDRA-3710

2012-04-11 Thread jbellis
add auto_snapshot option allowing disabling snapshot before drop/truncate
patch by dbrosius; reviewed by jbellis for CASSANDRA-3710


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/142e8c1a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/142e8c1a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/142e8c1a

Branch: refs/heads/cassandra-1.1.0
Commit: 142e8c1a2967947a3ed5314e4577f4be3cfe4bd3
Parents: 9b746da
Author: Jonathan Ellis 
Authored: Wed Apr 11 09:01:56 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 09:01:56 2012 -0500

--
 CHANGES.txt|2 ++
 conf/cassandra.yaml|6 ++
 src/java/org/apache/cassandra/config/Config.java   |1 +
 .../cassandra/config/DatabaseDescriptor.java   |4 
 .../org/apache/cassandra/db/ColumnFamilyStore.java |3 ++-
 .../cassandra/db/migration/DropColumnFamily.java   |3 ++-
 .../cassandra/db/migration/DropKeyspace.java   |4 +++-
 7 files changed, 20 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/142e8c1a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1349299..e2fc4de 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 1.0.10
+ * add auto_snapshot option allowing disabling snapshot before drop/truncate
+   (CASSANDRA-3710)
  * allow short snitch names (CASSANDRA-4130)
  * cqlsh: guess correct version of Python for Arch Linux (CASSANDRA-4090)
  * (CLI) properly handle quotes in create/update keyspace commands 
(CASSANDRA-4129)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/142e8c1a/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 207d8f9..3249747 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -256,6 +256,12 @@ incremental_backups: false
 # is a data format change.
 snapshot_before_compaction: false
 
+# Whether or not a snapshot is taken of the data before keyspace truncation
+# or dropping of column families. The STRONGLY advised default of true 
+# should be used to provide data safety. If you set this flag to false, you will
+# lose data on truncation or drop.
+auto_snapshot: true
+
 # Add column indexes to a row after its contents reach this size.
 # Increase if your column values are large, or if you have a very large
 # number of columns.  The competing causes are, Cassandra has to

http://git-wip-us.apache.org/repos/asf/cassandra/blob/142e8c1a/src/java/org/apache/cassandra/config/Config.java
--
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index 7cff37f..e079590 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -80,6 +80,7 @@ public class Config
 public Integer thrift_max_message_length_in_mb = 16;
 public Integer thrift_framed_transport_size_in_mb = 15;
 public Boolean snapshot_before_compaction = false;
+public Boolean auto_snapshot = true;
 
 /* if the size of columns or super-columns are more than this, indexing 
will kick in */
 public Integer column_index_size_in_kb = 64;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/142e8c1a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 667c90a..09686d2 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -900,6 +900,10 @@ public class DatabaseDescriptor
 return conf.snapshot_before_compaction;
 }
 
+public static boolean isAutoSnapshot() {
+return conf.auto_snapshot;
+}
+
 public static boolean isAutoBootstrap()
 {
 return conf.auto_bootstrap;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/142e8c1a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 9c790ae..b7d74bc 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1653,7 +1653,8 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 throw new AssertionError(e);
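
The ColumnFamilyStore hunk is cut off here, but the shape of the change is a
guard around the safety snapshot taken before destructive operations. A hedged
sketch (the helper and the snapshot call are stand-ins, not the actual patch;
DatabaseDescriptor.isAutoSnapshot() is the accessor added above):

    // Illustration only: skip the pre-truncate/pre-drop snapshot when the
    // operator has explicitly opted out via auto_snapshot: false.
    void maybeSnapshotBeforeDestroy(ColumnFamilyStore cfs)
    {
        if (DatabaseDescriptor.isAutoSnapshot())
            cfs.snapshot("pre-drop-" + System.currentTimeMillis()); // stand-in for the real snapshot call
        // ... then proceed to truncate or drop ...
    }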
 

[11/20] git commit: Make scrub and cleanup operations throttled patch by Vijay; reviewed by yukim for CASSANDRA-4100

2012-04-11 Thread jbellis
Make scrub and cleanup operations throttled
patch by Vijay; reviewed by yukim for CASSANDRA-4100


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/092dc586
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/092dc586
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/092dc586

Branch: refs/heads/trunk
Commit: 092dc586f556b1c2bef048d9ee8672240a31f442
Parents: eef93e7
Author: Vijay Parthasarathy 
Authored: Tue Apr 10 17:34:18 2012 -0700
Committer: Vijay Parthasarathy 
Committed: Tue Apr 10 17:34:18 2012 -0700

--
 .../db/compaction/AbstractCompactionIterable.java  |   20 --
 .../db/compaction/CompactionController.java|   21 +++
 .../db/compaction/CompactionIterable.java  |2 +-
 .../cassandra/db/compaction/CompactionManager.java |6 
 .../db/compaction/ParallelCompactionIterable.java  |2 +-
 5 files changed, 29 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/092dc586/src/java/org/apache/cassandra/db/compaction/AbstractCompactionIterable.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionIterable.java 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionIterable.java
index e05a64c..95e6590 100644
--- 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionIterable.java
+++ 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionIterable.java
@@ -28,12 +28,9 @@ import java.util.List;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.io.sstable.SSTableReader;
 import org.apache.cassandra.io.sstable.SSTableScanner;
-import org.apache.cassandra.service.StorageService;
 import org.apache.cassandra.utils.CloseableIterator;
-import org.apache.cassandra.utils.Throttle;
 
 public abstract class AbstractCompactionIterable extends CompactionInfo.Holder 
implements Iterable
 {
@@ -45,8 +42,6 @@ public abstract class AbstractCompactionIterable extends 
CompactionInfo.Holder i
 protected volatile long bytesRead;
 protected final List scanners;
 
-protected final Throttle throttle;
-
 public AbstractCompactionIterable(CompactionController controller, 
OperationType type, List scanners)
 {
 this.controller = controller;
@@ -58,21 +53,6 @@ public abstract class AbstractCompactionIterable extends 
CompactionInfo.Holder i
 for (SSTableScanner scanner : scanners)
 bytes += scanner.getFileLength();
 this.totalBytes = bytes;
-
-this.throttle = new Throttle(toString(), new 
Throttle.ThroughputFunction()
-{
-/** @return Instantaneous throughput target in bytes per 
millisecond. */
-public int targetThroughput()
-{
-if (DatabaseDescriptor.getCompactionThroughputMbPerSec() < 1 
|| StorageService.instance.isBootstrapMode())
-// throttling disabled
-return 0;
-// total throughput
-int totalBytesPerMS = 
DatabaseDescriptor.getCompactionThroughputMbPerSec() * 1024 * 1024 / 1000;
-// per stream throughput (target bytes per MS)
-return totalBytesPerMS / Math.max(1, 
CompactionManager.instance.getActiveCompactions());
-}
-});
 }
 
 protected static List getScanners(Iterable 
sstables) throws IOException
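
The ThroughputFunction being removed here (it reappears inside
CompactionController in the next file, which is how scrub and cleanup pick up
the same throttle) works in bytes per millisecond. A quick worked example with
invented numbers:

    // Self-contained arithmetic check of the formula in the removed block.
    class ThrottleMath
    {
        public static void main(String[] args)
        {
            int mbPerSec = 16;                                    // compaction_throughput_mb_per_sec
            int active = 2;                                       // concurrently running compactions/scrubs
            int totalBytesPerMs = mbPerSec * 1024 * 1024 / 1000;  // 16777 bytes/ms across all streams
            int perStream = totalBytesPerMs / Math.max(1, active);
            System.out.println(perStream);                        // prints 8388 (bytes/ms per stream);
                                                                  // the function returns 0 to disable throttling
        }
    }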

http://git-wip-us.apache.org/repos/asf/cassandra/blob/092dc586/src/java/org/apache/cassandra/db/compaction/CompactionController.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/CompactionController.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionController.java
index 1da6f9c..9eaefe7 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionController.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionController.java
@@ -30,6 +30,8 @@ import org.apache.cassandra.db.*;
 import org.apache.cassandra.io.sstable.SSTableIdentityIterator;
 import org.apache.cassandra.io.sstable.SSTableReader;
 import org.apache.cassandra.service.CacheService;
+import org.apache.cassandra.service.StorageService;
+import org.apache.cassandra.utils.Throttle;
 import org.apache.cassandra.utils.IntervalTree.Interval;
 import org.apache.cassandra.utils.IntervalTree.IntervalTree;
 
@@ -47,6 +49,20 @@ public class CompactionController
 public final int gcBefore;
 public boolean keyExistenceIsExpensive;
 public final int mergeShardBefore;
+private final Throttle throttle = new Throttle("Cassandra_Throttle", new 
Throttle.ThroughputFunction()
+

[8/20] git commit: add auto_snapshot option allowing disabling snapshot before drop/truncate patch by dbrosius; reviewed by jbellis for CASSANDRA-3710

2012-04-11 Thread jbellis
add auto_snapshot option allowing disabling snapshot before drop/truncate
patch by dbrosius; reviewed by jbellis for CASSANDRA-3710


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/142e8c1a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/142e8c1a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/142e8c1a

Branch: refs/heads/cassandra-1.1
Commit: 142e8c1a2967947a3ed5314e4577f4be3cfe4bd3
Parents: 9b746da
Author: Jonathan Ellis 
Authored: Wed Apr 11 09:01:56 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 09:01:56 2012 -0500

--
 CHANGES.txt|2 ++
 conf/cassandra.yaml|6 ++
 src/java/org/apache/cassandra/config/Config.java   |1 +
 .../cassandra/config/DatabaseDescriptor.java   |4 
 .../org/apache/cassandra/db/ColumnFamilyStore.java |3 ++-
 .../cassandra/db/migration/DropColumnFamily.java   |3 ++-
 .../cassandra/db/migration/DropKeyspace.java   |4 +++-
 7 files changed, 20 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/142e8c1a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1349299..e2fc4de 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 1.0.10
+ * add auto_snapshot option allowing disabling snapshot before drop/truncate
+   (CASSANDRA-3710)
  * allow short snitch names (CASSANDRA-4130)
  * cqlsh: guess correct version of Python for Arch Linux (CASSANDRA-4090)
  * (CLI) properly handle quotes in create/update keyspace commands 
(CASSANDRA-4129)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/142e8c1a/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 207d8f9..3249747 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -256,6 +256,12 @@ incremental_backups: false
 # is a data format change.
 snapshot_before_compaction: false
 
+# Whether or not a snapshot is taken of the data before keyspace truncation
+# or dropping of column families. The STRONGLY advised default of true 
+# should be used to provide data safety. If you set this flag to false, you will
+# lose data on truncation or drop.
+auto_snapshot: true
+
 # Add column indexes to a row after its contents reach this size.
 # Increase if your column values are large, or if you have a very large
 # number of columns.  The competing causes are, Cassandra has to

http://git-wip-us.apache.org/repos/asf/cassandra/blob/142e8c1a/src/java/org/apache/cassandra/config/Config.java
--
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index 7cff37f..e079590 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -80,6 +80,7 @@ public class Config
 public Integer thrift_max_message_length_in_mb = 16;
 public Integer thrift_framed_transport_size_in_mb = 15;
 public Boolean snapshot_before_compaction = false;
+public Boolean auto_snapshot = true;
 
 /* if the size of columns or super-columns are more than this, indexing 
will kick in */
 public Integer column_index_size_in_kb = 64;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/142e8c1a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 667c90a..09686d2 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -900,6 +900,10 @@ public class DatabaseDescriptor
 return conf.snapshot_before_compaction;
 }
 
+public static boolean isAutoSnapshot() {
+return conf.auto_snapshot;
+}
+
 public static boolean isAutoBootstrap()
 {
 return conf.auto_bootstrap;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/142e8c1a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 9c790ae..b7d74bc 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1653,7 +1653,8 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 throw new AssertionError(e);
   

[1/20] git commit: merge from 1.1

2012-04-11 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.0 9b746daf7 -> 142e8c1a2
  refs/heads/cassandra-1.1 092dc586f -> 6564b33c7
  refs/heads/cassandra-1.1.0 0d1d3bca1 -> c14e266eb
  refs/heads/trunk f774b7fc3 -> 8f9b37c3d


merge from 1.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8f9b37c3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8f9b37c3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8f9b37c3

Branch: refs/heads/trunk
Commit: 8f9b37c3dd73fde832b865843ae447a2451e6e70
Parents: f774b7f 6564b33
Author: Jonathan Ellis 
Authored: Wed Apr 11 09:12:55 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 09:12:55 2012 -0500

--
 CHANGES.txt|3 +++
 conf/cassandra.yaml|6 ++
 .../org/apache/cassandra/thrift/Cassandra.java |4 
 .../org/apache/cassandra/thrift/Constants.java |2 +-
 src/java/org/apache/cassandra/config/Config.java   |1 +
 .../cassandra/config/DatabaseDescriptor.java   |4 
 .../org/apache/cassandra/db/ColumnFamilyStore.java |3 ++-
 src/java/org/apache/cassandra/db/DefsTable.java|6 --
 .../db/compaction/CompactionController.java|2 +-
 9 files changed, 26 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8f9b37c3/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8f9b37c3/src/java/org/apache/cassandra/config/Config.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8f9b37c3/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8f9b37c3/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8f9b37c3/src/java/org/apache/cassandra/db/DefsTable.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8f9b37c3/src/java/org/apache/cassandra/db/compaction/CompactionController.java
--
diff --cc src/java/org/apache/cassandra/db/compaction/CompactionController.java
index f7d5354,9eaefe7..402deaa
--- a/src/java/org/apache/cassandra/db/compaction/CompactionController.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionController.java
@@@ -26,15 -26,14 +26,15 @@@ import org.slf4j.Logger
  import org.slf4j.LoggerFactory;
  
  import org.apache.cassandra.config.DatabaseDescriptor;
 -import org.apache.cassandra.db.*;
 +import org.apache.cassandra.db.ColumnFamilyStore;
 +import org.apache.cassandra.db.DataTracker;
 +import org.apache.cassandra.db.DecoratedKey;
  import org.apache.cassandra.io.sstable.SSTableIdentityIterator;
  import org.apache.cassandra.io.sstable.SSTableReader;
 -import org.apache.cassandra.service.CacheService;
  import org.apache.cassandra.service.StorageService;
--import org.apache.cassandra.utils.Throttle;
  import org.apache.cassandra.utils.IntervalTree.Interval;
  import org.apache.cassandra.utils.IntervalTree.IntervalTree;
++import org.apache.cassandra.utils.Throttle;
  
  /**
   * Manage compaction options.



[3/20] git commit: merge from 1.1.0

2012-04-11 Thread jbellis
merge from 1.1.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6564b33c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6564b33c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6564b33c

Branch: refs/heads/trunk
Commit: 6564b33c76df99152f48f2408938c4af439f9344
Parents: 092dc58 c14e266
Author: Jonathan Ellis 
Authored: Wed Apr 11 09:08:06 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 09:10:37 2012 -0500

--
 CHANGES.txt|3 +++
 conf/cassandra.yaml|6 ++
 .../org/apache/cassandra/thrift/Cassandra.java |4 
 .../org/apache/cassandra/thrift/Constants.java |2 +-
 src/java/org/apache/cassandra/config/Config.java   |1 +
 .../cassandra/config/DatabaseDescriptor.java   |4 
 .../org/apache/cassandra/db/ColumnFamilyStore.java |3 ++-
 src/java/org/apache/cassandra/db/DefsTable.java|6 --
 8 files changed, 25 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6564b33c/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6564b33c/conf/cassandra.yaml
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6564b33c/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java
--
diff --cc interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java
index d68f1ca,d9b51a5..a10f936
--- a/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java
+++ b/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java
@@@ -17817,6 -17733,6 +17817,8 @@@ public class Cassandra 
  
  private void readObject(java.io.ObjectInputStream in) throws 
java.io.IOException, ClassNotFoundException {
try {
++// it doesn't seem like you should have to do this, but java 
serialization is wacky, and doesn't call the default constructor.
++__isset_bit_vector = new BitSet(1);
  read(new org.apache.thrift.protocol.TCompactProtocol(new 
org.apache.thrift.transport.TIOStreamTransport(in)));
} catch (org.apache.thrift.TException te) {
  throw new java.io.IOException(te);
@@@ -34876,6 -34146,6 +34878,8 @@@
  
  private void readObject(java.io.ObjectInputStream in) throws 
java.io.IOException, ClassNotFoundException {
try {
++// it doesn't seem like you should have to do this, but java 
serialization is wacky, and doesn't call the default constructor.
++__isset_bit_vector = new BitSet(1);
  read(new org.apache.thrift.protocol.TCompactProtocol(new 
org.apache.thrift.transport.TIOStreamTransport(in)));
} catch (org.apache.thrift.TException te) {
  throw new java.io.IOException(te);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6564b33c/interface/thrift/gen-java/org/apache/cassandra/thrift/Constants.java
--
diff --cc interface/thrift/gen-java/org/apache/cassandra/thrift/Constants.java
index 270976b,270976b..723562b
--- a/interface/thrift/gen-java/org/apache/cassandra/thrift/Constants.java
+++ b/interface/thrift/gen-java/org/apache/cassandra/thrift/Constants.java
@@@ -44,6 -44,6 +44,6 @@@ import org.slf4j.LoggerFactory
  
  public class Constants {
  
--  public static final String VERSION = "19.30.0";
++  public static final String VERSION = "19.31.0";
  
  }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6564b33c/src/java/org/apache/cassandra/config/Config.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6564b33c/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6564b33c/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6564b33c/src/java/org/apache/cassandra/db/DefsTable.java
--



[6/20] git commit: merge from 1.0

2012-04-11 Thread jbellis
merge from 1.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c14e266e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c14e266e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c14e266e

Branch: refs/heads/cassandra-1.1
Commit: c14e266eb383a4e10bb945b864ad7d24ca5eb98b
Parents: 0d1d3bc 142e8c1
Author: Jonathan Ellis 
Authored: Wed Apr 11 09:05:33 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 09:05:33 2012 -0500

--
 CHANGES.txt|3 +
 build.xml  |   49 ++-
 conf/cassandra.yaml|6 ++
 src/java/org/apache/cassandra/config/Config.java   |1 +
 .../cassandra/config/DatabaseDescriptor.java   |4 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java |3 +-
 src/java/org/apache/cassandra/db/DefsTable.java|6 +-
 tools/stress/bin/stress|   16 +---
 tools/stress/bin/stress.bat|4 +-
 tools/stress/bin/stressd   |   16 +---
 tools/stress/build.xml |   70 ---
 11 files changed, 69 insertions(+), 109 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c14e266e/CHANGES.txt
--
diff --cc CHANGES.txt
index 43ee218,e2fc4de..26315be
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,54 -1,8 +1,57 @@@
 -1.0.10
 +1.1-dev
 + * fix read_repair_chance to really default to 0.1 in the cli (CASSANDRA-4114)
 + * Adds caching and bloomFilterFpChange to CQL options (CASSANDRA-4042)
 + * Adds posibility to autoconfigure size of the KeyCache (CASSANDRA-4087)
 + * fix KEYS index from skipping results (CASSANDRA-3996)
 + * Remove sliced_buffer_size_in_kb dead option (CASSANDRA-4076)
 + * make loadNewSStable preserve sstable version (CASSANDRA-4077)
 + * Respect 1.0 cache settings as much as possible when upgrading 
 +   (CASSANDRA-4088)
 + * relax path length requirement for sstable files when upgrading on 
 +   non-Windows platforms (CASSANDRA-4110)
 + * fix terminination of the stress.java when errors were encountered
 +   (CASSANDRA-4128)
 + * Move CfDef and KsDef validation out of thrift (CASSANDRA-4037)
 +Merged from 1.0:
+  * add auto_snapshot option allowing disabling snapshot before drop/truncate
+(CASSANDRA-3710)
   * allow short snitch names (CASSANDRA-4130)
 +
++
 +1.1-beta2
 + * rename loaded sstables to avoid conflicts with local snapshots
 +   (CASSANDRA-3967)
 + * start hint replay as soon as FD notifies that the target is back up
 +   (CASSANDRA-3958)
 + * avoid unproductive deserializing of cached rows during compaction
 +   (CASSANDRA-3921)
 + * fix concurrency issues with CQL keyspace creation (CASSANDRA-3903)
 + * Show Effective Owership via Nodetool ring  (CASSANDRA-3412)
 + * Update ORDER BY syntax for CQL3 (CASSANDRA-3925)
 + * Fix BulkRecordWriter to not throw NPE if reducer gets no map data from 
Hadoop (CASSANDRA-3944)
 + * Fix bug with counters in super columns (CASSANDRA-3821)
 + * Remove deprecated merge_shard_chance (CASSANDRA-3940)
 + * add a convenient way to reset a node's schema (CASSANDRA-2963)
 + * fix for intermittent SchemaDisagreementException (CASSANDRA-3884)
 + * ignore deprecated KsDef/CfDef/ColumnDef fields in native schema 
(CASSANDRA-3963)
 + * CLI to report when unsupported column_metadata pair was given 
(CASSANDRA-3959)
 + * reincarnate removed and deprecated KsDef/CfDef attributes (CASSANDRA-3953)
 + * Fix race between writes and read for cache (CASSANDRA-3862)
 + * perform static initialization of StorageProxy on start-up (CASSANDRA-3797)
 + * support trickling fsync() on writes (CASSANDRA-3950)
 + * expose counters for unavailable/timeout exceptions given to thrift clients 
(CASSANDRA-3671)
 + * avoid quadratic startup time in LeveledManifest (CASSANDRA-3952)
 + * Add type information to new schema_ columnfamilies and remove thrift
 +   serialization for schema (CASSANDRA-3792)
 + * add missing column validator options to the CLI help (CASSANDRA-3926)
 + * skip reading saved key cache if CF's caching strategy is NONE or ROWS_ONLY 
(CASSANDRA-3954)
 + * Unify migration code (CASSANDRA-4017)
 +Merged from 1.0:
   * cqlsh: guess correct version of Python for Arch Linux (CASSANDRA-4090)
 +
 +
 +1.0.9
 +===
   * (CLI) properly handle quotes in create/update keyspace commands 
(CASSANDRA-4129)
  
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c14e266e/build.xml
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c14e266e/conf/cassandra.yaml
--

http:/

[10/20] git commit: add auto_snapshot option allowing disabling snapshot before drop/truncate patch by dbrosius; reviewed by jbellis for CASSANDRA-3710

2012-04-11 Thread jbellis
add auto_snapshot option allowing disabling snapshot before drop/truncate
patch by dbrosius; reviewed by jbellis for CASSANDRA-3710


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/142e8c1a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/142e8c1a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/142e8c1a

Branch: refs/heads/trunk
Commit: 142e8c1a2967947a3ed5314e4577f4be3cfe4bd3
Parents: 9b746da
Author: Jonathan Ellis 
Authored: Wed Apr 11 09:01:56 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 09:01:56 2012 -0500

--
 CHANGES.txt|2 ++
 conf/cassandra.yaml|6 ++
 src/java/org/apache/cassandra/config/Config.java   |1 +
 .../cassandra/config/DatabaseDescriptor.java   |4 
 .../org/apache/cassandra/db/ColumnFamilyStore.java |3 ++-
 .../cassandra/db/migration/DropColumnFamily.java   |3 ++-
 .../cassandra/db/migration/DropKeyspace.java   |4 +++-
 7 files changed, 20 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/142e8c1a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1349299..e2fc4de 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 1.0.10
+ * add auto_snapshot option allowing disabling snapshot before drop/truncate
+   (CASSANDRA-3710)
  * allow short snitch names (CASSANDRA-4130)
  * cqlsh: guess correct version of Python for Arch Linux (CASSANDRA-4090)
  * (CLI) properly handle quotes in create/update keyspace commands 
(CASSANDRA-4129)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/142e8c1a/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 207d8f9..3249747 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -256,6 +256,12 @@ incremental_backups: false
 # is a data format change.
 snapshot_before_compaction: false
 
+# Whether or not a snapshot is taken of the data before keyspace truncation
+# or dropping of column families. The STRONGLY advised default of true 
+# should be used to provide data safety. If you set this flag to false, you will
+# lose data on truncation or drop.
+auto_snapshot: true
+
 # Add column indexes to a row after its contents reach this size.
 # Increase if your column values are large, or if you have a very large
 # number of columns.  The competing causes are, Cassandra has to

http://git-wip-us.apache.org/repos/asf/cassandra/blob/142e8c1a/src/java/org/apache/cassandra/config/Config.java
--
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index 7cff37f..e079590 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -80,6 +80,7 @@ public class Config
 public Integer thrift_max_message_length_in_mb = 16;
 public Integer thrift_framed_transport_size_in_mb = 15;
 public Boolean snapshot_before_compaction = false;
+public Boolean auto_snapshot = true;
 
 /* if the size of columns or super-columns are more than this, indexing 
will kick in */
 public Integer column_index_size_in_kb = 64;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/142e8c1a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 667c90a..09686d2 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -900,6 +900,10 @@ public class DatabaseDescriptor
 return conf.snapshot_before_compaction;
 }
 
+public static boolean isAutoSnapshot() {
+return conf.auto_snapshot;
+}
+
 public static boolean isAutoBootstrap()
 {
 return conf.auto_bootstrap;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/142e8c1a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 9c790ae..b7d74bc 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1653,7 +1653,8 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 throw new AssertionError(e);
 }

[4/20] git commit: merge from 1.0

2012-04-11 Thread jbellis
merge from 1.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c14e266e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c14e266e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c14e266e

Branch: refs/heads/cassandra-1.1.0
Commit: c14e266eb383a4e10bb945b864ad7d24ca5eb98b
Parents: 0d1d3bc 142e8c1
Author: Jonathan Ellis 
Authored: Wed Apr 11 09:05:33 2012 -0500
Committer: Jonathan Ellis 
Committed: Wed Apr 11 09:05:33 2012 -0500

--
 CHANGES.txt|3 +
 build.xml  |   49 ++-
 conf/cassandra.yaml|6 ++
 src/java/org/apache/cassandra/config/Config.java   |1 +
 .../cassandra/config/DatabaseDescriptor.java   |4 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java |3 +-
 src/java/org/apache/cassandra/db/DefsTable.java|6 +-
 tools/stress/bin/stress|   16 +---
 tools/stress/bin/stress.bat|4 +-
 tools/stress/bin/stressd   |   16 +---
 tools/stress/build.xml |   70 ---
 11 files changed, 69 insertions(+), 109 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c14e266e/CHANGES.txt
--
diff --cc CHANGES.txt
index 43ee218,e2fc4de..26315be
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,54 -1,8 +1,57 @@@
 -1.0.10
 +1.1-dev
 + * fix read_repair_chance to really default to 0.1 in the cli (CASSANDRA-4114)
 + * Adds caching and bloomFilterFpChange to CQL options (CASSANDRA-4042)
 + * Adds posibility to autoconfigure size of the KeyCache (CASSANDRA-4087)
 + * fix KEYS index from skipping results (CASSANDRA-3996)
 + * Remove sliced_buffer_size_in_kb dead option (CASSANDRA-4076)
 + * make loadNewSStable preserve sstable version (CASSANDRA-4077)
 + * Respect 1.0 cache settings as much as possible when upgrading 
 +   (CASSANDRA-4088)
 + * relax path length requirement for sstable files when upgrading on 
 +   non-Windows platforms (CASSANDRA-4110)
 + * fix terminination of the stress.java when errors were encountered
 +   (CASSANDRA-4128)
 + * Move CfDef and KsDef validation out of thrift (CASSANDRA-4037)
 +Merged from 1.0:
+  * add auto_snapshot option allowing disabling snapshot before drop/truncate
+(CASSANDRA-3710)
   * allow short snitch names (CASSANDRA-4130)
 +
++
 +1.1-beta2
 + * rename loaded sstables to avoid conflicts with local snapshots
 +   (CASSANDRA-3967)
 + * start hint replay as soon as FD notifies that the target is back up
 +   (CASSANDRA-3958)
 + * avoid unproductive deserializing of cached rows during compaction
 +   (CASSANDRA-3921)
 + * fix concurrency issues with CQL keyspace creation (CASSANDRA-3903)
 + * Show Effective Owership via Nodetool ring  (CASSANDRA-3412)
 + * Update ORDER BY syntax for CQL3 (CASSANDRA-3925)
 + * Fix BulkRecordWriter to not throw NPE if reducer gets no map data from 
Hadoop (CASSANDRA-3944)
 + * Fix bug with counters in super columns (CASSANDRA-3821)
 + * Remove deprecated merge_shard_chance (CASSANDRA-3940)
 + * add a convenient way to reset a node's schema (CASSANDRA-2963)
 + * fix for intermittent SchemaDisagreementException (CASSANDRA-3884)
 + * ignore deprecated KsDef/CfDef/ColumnDef fields in native schema 
(CASSANDRA-3963)
 + * CLI to report when unsupported column_metadata pair was given 
(CASSANDRA-3959)
 + * reincarnate removed and deprecated KsDef/CfDef attributes (CASSANDRA-3953)
 + * Fix race between writes and read for cache (CASSANDRA-3862)
 + * perform static initialization of StorageProxy on start-up (CASSANDRA-3797)
 + * support trickling fsync() on writes (CASSANDRA-3950)
 + * expose counters for unavailable/timeout exceptions given to thrift clients 
(CASSANDRA-3671)
 + * avoid quadratic startup time in LeveledManifest (CASSANDRA-3952)
 + * Add type information to new schema_ columnfamilies and remove thrift
 +   serialization for schema (CASSANDRA-3792)
 + * add missing column validator options to the CLI help (CASSANDRA-3926)
 + * skip reading saved key cache if CF's caching strategy is NONE or ROWS_ONLY 
(CASSANDRA-3954)
 + * Unify migration code (CASSANDRA-4017)
 +Merged from 1.0:
   * cqlsh: guess correct version of Python for Arch Linux (CASSANDRA-4090)
 +
 +
 +1.0.9
 +===
   * (CLI) properly handle quotes in create/update keyspace commands 
(CASSANDRA-4129)
  
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c14e266e/build.xml
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c14e266e/conf/cassandra.yaml
--

http

[jira] [Commented] (CASSANDRA-3974) Per-CF TTL

2012-04-11 Thread Jonathan Ellis (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13251561#comment-13251561
 ] 

Jonathan Ellis commented on CASSANDRA-3974:
---

bq. Part of the code I changed was in CFMetaData's toThrift and fromThrift 
methods

Let me back up.  I can see two main approaches towards respecting the per-CF 
ttl:

# Set the column TTL to the max(column, CF) ttl on insert; then the rest of the 
code doesn't have to know anything changed
# Take max(column, CF) ttl during operations like compaction, and leave column 
ttl to specify *only* the column TTL

The code in UpdateStatement led me to believe you're going with option 1.  So 
what I meant by my comment was, you need to make a similar change for inserts 
done over Thrift RPC, as well.  (to/from Thrift methods are used for telling 
Thrift clients about the schema, but are not used for insert/update operations.)

Does that help?

bq. Sorry, I'm not sure to which part of the code you're referring

CFMetadata.getTimeToLive.  Sounds like you addressed this anyway.
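
A minimal sketch of approach #1, just to pin down the wording (helper name and
the zero-means-no-TTL handling are assumptions, not from any patch):

    // Resolve the TTL once at insert time; everything downstream then sees an
    // ordinary ExpiringColumn and needs no per-CF knowledge.
    static int effectiveTtl(int requestedTtl, int cfDefaultTtl)
    {
        if (requestedTtl == 0)                            // caller supplied no TTL
            return cfDefaultTtl;
        if (cfDefaultTtl == 0)                            // CF defines no default
            return requestedTtl;
        return Math.max(requestedTtl, cfDefaultTtl);      // the "max(column, CF) ttl" above
    }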

> Per-CF TTL
> --
>
> Key: CASSANDRA-3974
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3974
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jonathan Ellis
>Assignee: Kirk True
>Priority: Minor
> Fix For: 1.2
>
> Attachments: trunk-3974.txt
>
>
> Per-CF TTL would allow compaction optimizations ("drop an entire sstable's 
> worth of expired data") that we can't do with per-column.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Issue Comment Edited] (CASSANDRA-3974) Per-CF TTL

2012-04-11 Thread Jonathan Ellis (Issue Comment Edited) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13251561#comment-13251561
 ] 

Jonathan Ellis edited comment on CASSANDRA-3974 at 4/11/12 1:42 PM:


bq. Part of the code I changed was in CFMetaData's toThrift and fromThrift 
methods

Let me back up.  I can see two main approaches towards respecting the per-CF 
ttl:

# Set the column TTL to the max(column, CF) ttl on insert; then the rest of the 
code doesn't have to know anything changed
# Take max(column, CF) ttl during operations like compaction, and leave column 
ttl (which is to say, ExpiringColumn objects) to specify *only* the column TTL

The code in UpdateStatement led me to believe you're going with option 1.  So 
what I meant by my comment was, you need to make a similar change for inserts 
done over Thrift RPC, as well.  (to/from Thrift methods are used for telling 
Thrift clients about the schema, but are not used for insert/update operations.)

Does that help?

bq. Sorry, I'm not sure to which part of the code you're referring

CFMetadata.getTimeToLive.  Sounds like you addressed this anyway.

  was (Author: jbellis):
bq. Part of the code I changed was in CFMetaData's toThrift and fromThrift 
methods

Let me back up.  I can see two main approaches towards respecting the per-CF 
ttl:

# Set the column TTL to the max(column, CF) ttl on insert; then the rest of the 
code doesn't have to know anything changed
# Take max(column, CF) ttl during operations like compaction, and leave column 
ttl to specify *only* the column TTL

The code in UpdateStatement led me to believe you're going with option 1.  So 
what I meant by my comment was, you need to make a similar change for inserts 
done over Thrift RPC, as well.  (to/from Thrift methods are used for telling 
Thrift clients about the schema, but are not used for insert/update operations.)

Does that help?

bq. Sorry, I'm not sure to which part of the code you're referring

CFMetadata.getTimeToLive.  Sounds like you addressed this anyway.
  
> Per-CF TTL
> --
>
> Key: CASSANDRA-3974
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3974
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jonathan Ellis
>Assignee: Kirk True
>Priority: Minor
> Fix For: 1.2
>
> Attachments: trunk-3974.txt
>
>
> Per-CF TTL would allow compaction optimizations ("drop an entire sstable's 
> worth of expired data") that we can't do with per-column.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira