[jira] [Updated] (CASSANDRA-6538) Provide a read-time CQL function to display the data size of columns and rows

2015-02-10 Thread Anuja Mandlecha (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuja Mandlecha updated CASSANDRA-6538:
---
Attachment: (was: 6538.patch)

> Provide a read-time CQL function to display the data size of columns and rows
> -
>
> Key: CASSANDRA-6538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6538
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Johnny Miller
>Priority: Minor
>  Labels: cql
>
> It would be extremely useful to be able to work out the size of rows and 
> columns via CQL. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-6538) Provide a read-time CQL function to display the data size of columns and rows

2015-02-10 Thread Anuja Mandlecha (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuja Mandlecha updated CASSANDRA-6538:
---
Attachment: (was: sizeFzt.PNG)

> Provide a read-time CQL function to display the data size of columns and rows
> -
>
> Key: CASSANDRA-6538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6538
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Johnny Miller
>Priority: Minor
>  Labels: cql
>
> It would be extremely useful to be able to work out the size of rows and 
> columns via CQL. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8771) Remove commit log segment recycling

2015-02-10 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8771:

Labels: commitlog  (was: )

> Remove commit log segment recycling
> ---
>
> Key: CASSANDRA-8771
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8771
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Ariel Weisberg
>  Labels: commitlog
>
> For discussion
> Commit log segment recycling introduces a lot of complexity in the existing 
> code.
> CASSANDRA-8729 is a side effect of commit log segment recycling and 
> addressing it will require memory management code and thread coordination for 
> memory that the filesystem will no longer handle for us.
> There is some discussion about what storage configurations actually benefit 
> from preallocated files. Fast random-access devices like SSDs, non-volatile 
> write caches, etc., make the distinction much less significant. 
> I haven't measured any difference in throughput for bulk appending vs 
> overwriting although it was pointed out that I didn't test with concurrent IO 
> streams.
> What would it take to make removing commit log segment recycling acceptable? 
> Maybe a benchmark on a spinning disk that measures the performance impact of 
> preallocation when there are other IO streams?
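
For readers unfamiliar with what segment preallocation amounts to, a minimal Java 
sketch follows (class name, segment size and method names here are hypothetical, 
not the actual CommitLogSegmentManager code): the file is filled with zeros up 
front so later appends overwrite existing blocks rather than extending the file.

{code}
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class SegmentPreallocation
{
    // Hypothetical segment size; the real value is configurable in cassandra.yaml.
    static final int SEGMENT_SIZE = 32 * 1024 * 1024;

    // Preallocate by writing zeros so that subsequent appends overwrite blocks
    // that already exist on disk (the behaviour recycling tries to preserve).
    static void preallocate(String path) throws IOException
    {
        try (RandomAccessFile raf = new RandomAccessFile(path, "rw"))
        {
            FileChannel channel = raf.getChannel();
            ByteBuffer zeros = ByteBuffer.allocate(1024 * 1024);
            for (int written = 0; written < SEGMENT_SIZE; written += zeros.capacity())
            {
                zeros.clear();
                channel.write(zeros);
            }
            channel.force(true); // ensure the blocks are actually allocated on disk
        }
    }
}
{code}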



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8707) Move SegmentedFile, IndexSummary and BloomFilter to utilising RefCounted

2015-02-10 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313868#comment-14313868
 ] 

Marcus Eriksson commented on CASSANDRA-8707:


ok, lgtm, +1

pushed 3 small comments to 
https://github.com/krummas/cassandra/commits/bes/8707-2 (with a rebase)

I'll fill out the details about compaction in the comment in SSTableReader in 
CASSANDRA-8764

> Move SegmentedFile, IndexSummary and BloomFilter to utilising RefCounted
> 
>
> Key: CASSANDRA-8707
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8707
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benedict
>Assignee: Benedict
>Priority: Critical
> Fix For: 2.1.3
>
>
> There are still a few bugs with resource management, especially around 
> SSTableReader cleanup, esp. when intermixing with compaction. This migration 
> should help. We can simultaneously "simplify" the logic in SSTableReader to 
> not track the replacement chain, only to take a new reference to each of the 
> underlying resources.
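
For context, the RefCounted pattern referred to here boils down to counting the
holders of a shared resource and releasing it exactly once, when the last
reference is dropped. A minimal sketch, assuming a generic resource type
(illustrative only, not Cassandra's actual Ref/RefCounted API):

{code}
import java.util.concurrent.atomic.AtomicInteger;

// Minimal reference-counted wrapper: the resource is released exactly once,
// when the last holder calls release().
final class RefCountedResource<T extends AutoCloseable>
{
    private final T resource;
    private final AtomicInteger refs = new AtomicInteger(1);

    RefCountedResource(T resource) { this.resource = resource; }

    // Take an additional reference; fails if the resource was already released.
    RefCountedResource<T> ref()
    {
        int count;
        do
        {
            count = refs.get();
            if (count <= 0)
                throw new IllegalStateException("already released");
        }
        while (!refs.compareAndSet(count, count + 1));
        return this;
    }

    void release() throws Exception
    {
        if (refs.decrementAndGet() == 0)
            resource.close();
    }

    T get() { return resource; }
}
{code}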



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8729) Commitlog causes read before write when overwriting

2015-02-10 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8729:

Labels: commitlog  (was: )

> Commitlog causes read before write when overwriting
> ---
>
> Key: CASSANDRA-8729
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8729
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Ariel Weisberg
>  Labels: commitlog
>
> The memory mapped commit log implementation writes directly to the page 
> cache. If a page is not in the cache the kernel will read it in even though 
> we are going to overwrite.
> The way to avoid this is to write to private memory, and then pad the write 
> with 0s at the end so it is page (4k) aligned before writing to a file.
> The commit log would benefit from being refactored into something that looks 
> more like a pipeline with incoming requests receiving private memory to write 
> in, completed buffers being submitted to a  parallelized compression/checksum 
> step, followed by submission to another thread for writing to a file that 
> preserves the order.
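
A minimal sketch of the padding idea in the description (illustrative only; the
class and method names are hypothetical, and the private buffer is assumed to
have page-aligned capacity): write into private memory, round the used length up
to the next 4k boundary with zeros, and only then write to the file, so the
kernel never needs to read the tail page in before the write.

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class PageAlignedWriter
{
    static final int PAGE_SIZE = 4096;

    // Pads the used portion of a private buffer with zeros up to the next page
    // boundary and writes it, so the final partial page is fully overwritten and
    // the kernel has no reason to read it in first. Assumes the buffer was
    // allocated with a capacity that is a multiple of PAGE_SIZE.
    static void writePageAligned(FileChannel channel, ByteBuffer privateBuffer) throws IOException
    {
        int used = privateBuffer.position();
        int padded = (used + PAGE_SIZE - 1) / PAGE_SIZE * PAGE_SIZE;
        while (privateBuffer.position() < padded)
            privateBuffer.put((byte) 0);
        privateBuffer.flip();
        channel.write(privateBuffer);
    }
}
{code}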



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8707) Move SegmentedFile, IndexSummary and BloomFilter to utilising RefCounted

2015-02-10 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313887#comment-14313887
 ] 

Benedict commented on CASSANDRA-8707:
-

Thanks. I'll commit once 2.1.3 is out. I've actually pushed a really tiny 
update to your version 
[here|https://github.com/belliottsmith/cassandra/commits/bes/8707-2] that makes 
SharedCloseableImpl.close() idempotent; this isn't important here, but will be 
useful later, and is encouraged by the API in AutoCloseable.
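
For context, an idempotent close() in the spirit described usually just guards
the release with an atomic flag; a minimal sketch (not the actual
SharedCloseableImpl code):

{code}
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of an idempotent close(): calling close() more than once is harmless,
// as encouraged (though not required) by the AutoCloseable contract.
abstract class IdempotentCloseable implements AutoCloseable
{
    private final AtomicBoolean closed = new AtomicBoolean(false);

    protected abstract void doClose();

    @Override
    public final void close()
    {
        if (closed.compareAndSet(false, true))
            doClose(); // runs at most once, no matter how many times close() is called
    }
}
{code}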

> Move SegmentedFile, IndexSummary and BloomFilter to utilising RefCounted
> 
>
> Key: CASSANDRA-8707
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8707
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benedict
>Assignee: Benedict
>Priority: Critical
> Fix For: 2.1.3
>
>
> There are still a few bugs with resource management, especially around 
> SSTableReader cleanup, esp. when intermixing with compaction. This migration 
> should help. We can simultaneously "simplify" the logic in SSTableReader to 
> not track the replacement chain, only to take a new reference to each of the 
> underlying resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-7554) Make CommitLogSegment sync/close asynchronous wrt each other

2015-02-10 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict resolved CASSANDRA-7554.
-
Resolution: Won't Fix

Closing this for now since I hope we can simplify the CL with CASSANDRA-8771 
and CASSANDRA-8729 and some followups so that this is not necessary.

> Make CommitLogSegment sync/close asynchronous wrt each other
> 
>
> Key: CASSANDRA-7554
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7554
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
> Fix For: 3.0
>
>
> There are a few minor issues with CLS I wanted to tidy up after working on 
> nearby code a bit recently, namely:
> 1) We use synchronized() for sync() and for various minor accessors, meaning 
> either can block on the other, which is bad since sync() is lengthy
> 2) Currently close() (and hence recycle()) must wait for a sync() to 
> complete, which means even if we have room available in segments waiting to 
> be recycled an ongoing sync might prevent us from reclaiming the space, 
> prematurely bottlenecking on the disk here
> 3) recycle() currently depends on close(), which depends on sync(); if we've 
> decided to recycle/close a file before it is synced, this means we do not 
> care about the contents so can actually _avoid_ syncing to disk (which is 
> great in cases where the flush writers get ahead of the CL sync)
> To solve these problems I've introduced a new fairly simple concurrency 
> primitive called AsyncLock, which only supports tryLock(), or 
> tryLock(Runnable) - with the latter executing the provided runnable on the 
> thread _currently owning the lock_ after it relinquishes it. I've used this 
> to make close() take a Runnable to be executed _when the segment is actually 
> ready to be disposed of_ - which is either immediately, or once any in 
> progress sync has completed. This means the manager thread never blocks on a 
> sync.
> There is a knock on effect here, which is that we are even less inclined to 
> obey the CL limit (which has always been a soft limit), so I will file a 
> separate minor ticket to introduce a hard limit for CL size in case users 
> want to control this. A concrete sketch of the AsyncLock idea follows below.
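
To make the AsyncLock idea concrete, here is a minimal sketch of a lock whose
tryLock(Runnable) hands the runnable to whichever thread currently owns the
lock, to run once that thread unlocks (illustrative only; the actual primitive
in the patch may differ in detail):

{code}
import java.util.ArrayDeque;
import java.util.Queue;

final class AsyncLock
{
    private boolean held;
    private final Queue<Runnable> deferred = new ArrayDeque<>();

    // Acquire if free; never blocks.
    synchronized boolean tryLock()
    {
        if (held)
            return false;
        held = true;
        return true;
    }

    // Acquire if free; otherwise hand the task to the current owner to run when
    // it unlocks. Either way the caller returns immediately.
    synchronized boolean tryLock(Runnable ifBusy)
    {
        if (!held)
        {
            held = true;
            return true;
        }
        deferred.add(ifBusy);
        return false;
    }

    void unlock()
    {
        while (true)
        {
            Runnable task;
            synchronized (this)
            {
                task = deferred.poll();
                if (task == null)
                {
                    held = false;
                    return;
                }
            }
            // Runs on the thread that owned the lock, after its own critical section.
            task.run();
        }
    }
}
{code}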



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8749) Cleanup SegmentedFile

2015-02-10 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313928#comment-14313928
 ] 

Benedict commented on CASSANDRA-8749:
-

Force-pushed a version with some of the MmappedSegmentedFile changes rolled 
back, since they were a mistake (I've added some comments to explain their 
behaviour, since it wasn't clear before).

> Cleanup SegmentedFile
> -
>
> Key: CASSANDRA-8749
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8749
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
>Priority: Trivial
> Fix For: 2.1.4
>
>
> As a follow up to 8707 (building upon it for ease, since that edits these 
> files), and a precursor to another follow up, this ticket cleans up the 
> SegmentedFile hierarchy a little, and makes it encapsulate the construction 
> of a new reader, so that implementation details don't leak into SSTableReader.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8749) Cleanup SegmentedFile

2015-02-10 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313932#comment-14313932
 ] 

Marcus Eriksson commented on CASSANDRA-8749:


+1

> Cleanup SegmentedFile
> -
>
> Key: CASSANDRA-8749
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8749
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
>Priority: Trivial
> Fix For: 2.1.4
>
>
> As a follow up to 8707 (building upon it for ease, since that edits these 
> files), and a precursor to another follow up, this ticket cleans up the 
> SegmentedFile hierarchy a little, and makes it encapsulate the construction 
> of a new reader, so that implementation details don't leak into SSTableReader.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8692) Coalesce intra-cluster network messages

2015-02-10 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314029#comment-14314029
 ] 

Benedict commented on CASSANDRA-8692:
-

I've pushed a quick sketch of the approach I'm suggesting for time horizon 
based moving average calculation 
[here|https://github.com/belliottsmith/cassandra/tree/C-8692]. Note for 
lurkers: This is only the moving average calculation part of the suggestion, 
not the decision making off the back of that calculation.
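
For lurkers wanting the gist of a time horizon based moving average: samples are
weighted by how recently they arrived, so the average effectively forgets data
older than the horizon. A minimal exponential-decay sketch (illustrative only,
not the code in the linked branch):

{code}
// Exponentially decayed average with a configurable time horizon: samples older
// than roughly the horizon contribute very little to the reported value.
final class TimeHorizonMovingAverage
{
    private final double horizonNanos;
    private double average;
    private long lastUpdateNanos;
    private boolean initialized;

    TimeHorizonMovingAverage(long horizonMillis)
    {
        this.horizonNanos = horizonMillis * 1_000_000.0;
    }

    synchronized void update(double sample, long nowNanos)
    {
        if (!initialized)
        {
            average = sample;
            lastUpdateNanos = nowNanos;
            initialized = true;
            return;
        }
        // Weight the old average by how much of the horizon has elapsed since the last sample.
        double elapsed = nowNanos - lastUpdateNanos;
        double alpha = 1 - Math.exp(-elapsed / horizonNanos);
        average += alpha * (sample - average);
        lastUpdateNanos = nowNanos;
    }

    synchronized double get() { return average; }
}
{code}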

> Coalesce intra-cluster network messages
> ---
>
> Key: CASSANDRA-8692
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8692
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
> Fix For: 2.1.4
>
> Attachments: batching-benchmark.png
>
>
> While researching CASSANDRA-8457 we found that it is effective and can be 
> done without introducing additional latency at low concurrency/throughput.
> The patch from that was used and found to be useful in a real life scenario 
> so I propose we implement this in 2.1 in addition to 3.0.
> The change set is a single file and is small enough to be reviewable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8654) Data validation test

2015-02-10 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314058#comment-14314058
 ] 

Benedict commented on CASSANDRA-8654:
-

I think I'm leaning toward this being perhaps an extension to the current 
stress featureset. Stress already can cope with schema population and querying; 
we need to add validation to this anyway. To begin with we could perhaps just 
run a validating stress workload in parallel with some chaos generation that 
should in no way affect the results produced, and see what happens. We will 
have to think about what features need to be added to stress to maximise the 
utility, of course. Things like LWT will need to be supported, as will function 
calls etc. We will probably want to introduce some randomness to the kind of 
behaviour stress undertakes as well (e.g. introduce non-client actions, 
including sleeping, flushing, repair, etc, with some random incidence) but 
these can perhaps be introduced as an overlay. We will also want to think about 
creating some more deterministic partitions, so that edge cases can more 
directly be exercised. The advantage, of course, is that as we improve it we 
also expand the use cases stress can validate for users as well. The main guts 
of the client are also already there and ready to go.

Originally I thought the featureset would be too different, but really it's not 
_so_ different. We already track inside of stress an idea of what the partition 
looks like on the server, it's just not as complete as it could be. We're also 
not in any way trying to create failure cases, but the randomness may be enough 
to start, and over time we can introduce tweaks to that randomness that favour 
breakage.

I'm not super keen on the idea of two parallel clusters; for one this halves 
our bandwidth (we could instead run two parallel validations, which means twice 
the chance of hitting something useful), and for another it means we only 
really catch regressions. I'm also not so keen on a python client, because I'd 
like to see as much validation as possible performed in a given time horizon. 
One of the reasons we moved away from python for stress was because it simply 
wasn't exercising the cluster enough.
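
As a sketch of what a validating stress workload means in practice (hypothetical
names, not existing stress code): the client keeps its own model of every
partition it has written and compares what it reads back against that model,
while chaos actions that should not affect results run concurrently.

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Sketch: track the expected value of each (partition, column) locally and
// verify that whatever the cluster returns matches it.
final class ValidatingWorkload
{
    private final Map<String, Map<String, String>> expected = new HashMap<>();

    void recordWrite(String partition, String column, String value)
    {
        expected.computeIfAbsent(partition, k -> new HashMap<>()).put(column, value);
    }

    void verifyRead(String partition, String column, String observed)
    {
        Map<String, String> row = expected.get(partition);
        String want = row == null ? null : row.get(column);
        if (!Objects.equals(want, observed))
            throw new AssertionError("partition " + partition + ", column " + column
                                     + ": expected " + want + " but read " + observed);
    }
}
{code}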

> Data validation test
> 
>
> Key: CASSANDRA-8654
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8654
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Russ Hatch
>
> There was a recent discussion about the utility of data validation testing.
> The goal here would be a harness of some kind that can mix operations and 
> track its own notion of what the DB state should look like, and verify it in  
> detail, or perhaps a sampling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8772) cassandra-stress should support workload logging and replay

2015-02-10 Thread Benedict (JIRA)
Benedict created CASSANDRA-8772:
---

 Summary: cassandra-stress should support workload logging and 
replay
 Key: CASSANDRA-8772
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8772
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Benedict
Priority: Minor


cassandra-stress produces its workloads procedurally, meaning only a minimal 
amount of information needs to be retained in order to reproduce a consistent 
workload. If we start using stress for validation purposes, this could also be 
used to reproduce a failure.
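
To illustrate why so little needs to be logged: a procedurally generated
workload is fully determined by its seed and the operation index, so replaying
any operation only requires those two values. A minimal sketch (hypothetical
names, not stress internals):

{code}
import java.util.Random;

// Sketch: a procedurally generated workload is fully determined by (seed, index),
// so "logging" a run only needs those two values to replay any operation exactly.
final class ProceduralWorkload
{
    private final long seed;

    ProceduralWorkload(long seed) { this.seed = seed; }

    // Deterministically derive the i-th operation from the seed.
    String operation(long index)
    {
        Random rng = new Random(seed ^ (index * 0x9E3779B97F4A7C15L));
        long partition = rng.nextInt(10_000);
        return (rng.nextBoolean() ? "INSERT" : "READ") + " partition=" + partition;
    }

    public static void main(String[] args)
    {
        ProceduralWorkload run = new ProceduralWorkload(42L);
        // Replaying operation 1234 later yields exactly the same operation:
        System.out.println(run.operation(1234));
    }
}
{code}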



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8773) cassandra-stress should validate its results in "user" mode

2015-02-10 Thread Benedict (JIRA)
Benedict created CASSANDRA-8773:
---

 Summary: cassandra-stress should validate its results in "user" 
mode
 Key: CASSANDRA-8773
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8773
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Benedict






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8769) Extend cassandra-stress to be slightly more configurable

2015-02-10 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8769:

 Reviewer: Benedict
  Component/s: Tools
Fix Version/s: (was: 2.1.3)
   2.1.4

> Extend cassandra-stress to be slightly more configurable
> 
>
> Key: CASSANDRA-8769
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8769
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Anthony Cozzie
>Assignee: Anthony Cozzie
>Priority: Minor
> Fix For: 2.1.4
>
> Attachments: stress-extensions-patch.txt
>
>
> Some simple extensions to cassandra stress:
>   * Configurable warm up iterations
>   * Output results by command type for USER (e.g. 5000 ops/sec, 1000 inserts, 
> 1000 reads, 3000 range reads)
>   * Count errors when ignore flag is set
>   * Configurable truncate for more consistent results
> Patch attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8298) cassandra-stress legacy

2015-02-10 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314062#comment-14314062
 ] 

Benedict commented on CASSANDRA-8298:
-

I'm kind of tempted to drop support for the legacy 'legacy' mode. I know it was 
there for 2.0 users, but we're already heading towards having a legacy mode 
within 2.1 (i.e. non-user mode).

>  cassandra-stress legacy
> 
>
> Key: CASSANDRA-8298
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8298
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Centos 6.5 Cassandra 2.1.1
>Reporter: Edgardo Vega
>Assignee: T Jake Luciani
>
> Running cassandra-stress legacy failed immediately with a error.
> Running in legacy support mode. Translating command to:
> stress write n=100 -col n=fixed(5) size=fixed(34) data=repeat(1) -rate 
> threads=50 -log interval=10 -mode thrift
> Invalid parameter data=repeat(1)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-8500) Improve cassandra-stress help pages

2015-02-10 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict reassigned CASSANDRA-8500:
---

Assignee: Benedict

> Improve cassandra-stress help pages
> ---
>
> Key: CASSANDRA-8500
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8500
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Benedict
>Assignee: Benedict
>
> cassandra-stress is flummoxing a lot of people. As well as rewriting its 
> README, we should improve the help pages so that they're more legible. 
> We should offer an "all" option that prints every sub-page, so it can be 
> scanned like a README (and perhaps make the basis of said file), and we 
> should at least stop printing all of the distribution parameter options every 
> time they appear, as they're very common now. 
> Offering some help about how to make the best out of the help might itself be 
> a good idea, as well as perhaps printing what all of the options within each 
> subgroup are in the summary page, so there is no pecking at them to be done.
> There should be a dedicated distribution help page that can explain all of 
> the parameters that are currently just given names we hope are sufficiently 
> descriptive.
> Finally, we should make sure all of the descriptions of each option are clear.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8300) cassandra-stress insert documentation

2015-02-10 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict resolved CASSANDRA-8300.
-
Resolution: Duplicate

> cassandra-stress insert documentation
> -
>
> Key: CASSANDRA-8300
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8300
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
> Environment: Centos 6.5 Cassandra 2.1.1
>Reporter: Edgardo Vega
>
> The help documentation for the tool would benefit greatly from some additional 
> examples. It would be very nice to have some examples of how to do common 
> operations, such as performing inserts in batches of 1000. It would also be 
> nice to see some practical uses of the options and how they affect each other. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8299) cassandra-stress unique keys

2015-02-10 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict resolved CASSANDRA-8299.
-
Resolution: Not a Problem

59K keys is 10x the keys specified?

There isn't, admittedly, a way to absolutely guarantee uniqueness of keys. But 
with n=1 you will almost certainly get 10k unique keys. I suspect the 
problem is CASSANDRA-8524, i.e. that prior to 2.1.3 you need to manually 
specify that your population distribution is sequential with -pop seq=1..1, 
as it defaults to uniform otherwise (i.e. you're uniformly sampling 10k items 
from a population of 10k, so you visit the same key multiple times).
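
For reference, uniformly sampling N values from a population of N yields on
average 1 - (1 - 1/N)^N, roughly 63% distinct values, so revisiting keys is
expected rather than a bug. A quick simulation, purely illustrative:

{code}
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

public class UniformSamplingDemo
{
    public static void main(String[] args)
    {
        int population = 10_000;
        Random rng = new Random(1);
        Set<Integer> seen = new HashSet<>();
        // Draw 'population' samples uniformly from [0, population).
        for (int i = 0; i < population; i++)
            seen.add(rng.nextInt(population));
        // Prints roughly 6300, i.e. ~63% distinct keys rather than all of them.
        System.out.println("distinct keys: " + seen.size());
    }
}
{code}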

> cassandra-stress unique keys
> 
>
> Key: CASSANDRA-8299
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8299
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Centos 6.5 Cassandra version 2.1.1
>Reporter: Edgardo Vega
>Assignee: T Jake Luciani
>
> In the old stress tool you could use -n 1 and get 1 unique keys in 
> the keyspace. 
> In the new stress tool there doesn't seem to be a way to do this. For example 
> if I have the following definition:
> table_definition: |
>   CREATE TABLE table(
> key uuid PRIMARY KEY,
> col1 text,
> col2 text,
> col3 text
>   ) WITH comment='A table'
> ### Column Distribution Specifications ###
> columnspec:
>   - name: key
> size: fixed(36)   
> population: uniform(1..100B)   
>   - name: col1
> size: fixed(100)
>   - name: col2
> size: fixed(100)
>   - name: col3
> size: fixed(100)
> and then run 
> cassandra-stress user n=1 profile=stress.yaml ops\(insert=1\)
> If you look at the keyspace, there were only 59000 keys. The new tool needs to 
> be able to generate unique ids. In our testing we want to see how the number of 
> keys affects the cluster when doing queries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8768) Using a Cassandra 2.0 seed doesn't allow a new Cassandra 2.1 node to bootstrap

2015-02-10 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314073#comment-14314073
 ] 

Aleksey Yeschenko commented on CASSANDRA-8768:
--

You aren't supposed to mix upgrades and topology changes; it's not explicitly 
supported (it may or may not work).

Upgrade all nodes to 2.1, then add your new 2.1 node. Or add that new node as 
2.0, and then upgrade them all to 2.1.

> Using a Cassandra 2.0 seed doesn't allow a new Cassandra 2.1 node to bootstrap
> --
>
> Key: CASSANDRA-8768
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8768
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Ron Kuris
> Fix For: 2.1.3
>
> Attachments: gossip-with-2.0-patch.txt
>
>
> If you spin up a Cassandra 2.0 cluster with some seeds, and then attempt to 
> attach a Cassandra 2.1 node to it, you get the following message:
> {code}OutboundTcpConnection.java:429 - Handshaking version with 
> /10.24.0.10{code}
> Turning on debug, you get a few additional messages:
> {code}DEBUG [WRITE-/(ip)] MessagingService.java:789 - Setting version 7 for 
> /10.24.0.10
> DEBUG [WRITE-/(ip)] OutboundTcpConnection.java:369 - Target max version is 7; 
> will reconnect with that version{code}
> However, the code never reconnects. See the comments as to why.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8683) Ensure early reopening has no overlap with replaced files

2015-02-10 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314155#comment-14314155
 ] 

Benedict commented on CASSANDRA-8683:
-

[~aboudrealt] do you have an easier reproduction test by any chance? I couldn't 
get that test to run on my box, and besides it's a difficult one to debug. I 
tried running a basic test to see if there were any obvious screwups and didn't 
see any.

> Ensure early reopening has no overlap with replaced files
> -
>
> Key: CASSANDRA-8683
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8683
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Benedict
>Priority: Critical
> Fix For: 2.1.3
>
> Attachments: 0001-avoid-NPE-in-getPositionsForRanges.patch, system.log
>
>
> When introducing CASSANDRA-6916 we permitted the early opened files to 
> overlap with the files they were replacing by one DecoratedKey, as this 
> permitted a few minor simplifications. Unfortunately this breaks assumptions 
> in LeveledCompactionScanner, that are causing the intermittent unit test 
> failures: 
> http://cassci.datastax.com/job/trunk_utest/1330/testReport/junit/org.apache.cassandra.db.compaction/LeveledCompactionStrategyTest/testValidationMultipleSSTablePerLevel/
> This patch by itself does not fix the bug, but fixes the described aspect of 
> it, by ensuring the replaced and replacing files never overlap. This is 
> achieved first by always selecting the replaced file start as the next key 
> present in the file greater than the last key in the new file(s).  If there 
> is no such key, however, there is no data to return for the reader, but to 
> permit abort and atomic replacement at the end of a macro compaction action, 
> we must keep the file in the DataTracker for replacement purposes, but not 
> return it to consumers (esp. as many assume a non-empty range). For this I 
> have introduced a new OpenReason called SHADOWED, and a 
> DataTracker.View.shadowed collection of sstables, that tracks those we still 
> consider to be in the live set, but from which we no longer answer any 
> queries.
> CASSANDRA-8744 (and then CASSANDRA-8750) then ensures that these bounds are 
> honoured, so that we never break the assumption that files in LCS never 
> overlap.
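
To illustrate the key-selection rule in the description above (a sketch over a
plain sorted set of longs, not the actual SSTableReader/DecoratedKey code): the
replaced file's new start is its first key strictly greater than the last key
covered by the new file(s); if no such key exists, the reader has nothing left
to serve, which is the SHADOWED case.

{code}
import java.util.NavigableSet;
import java.util.TreeSet;

public class ReplacedFileStart
{
    // Returns the first key in 'replacedFileKeys' strictly greater than
    // 'lastKeyInNewFiles', or null if the replaced file no longer covers any
    // keys (the SHADOWED case described above). Keys are simplified to longs.
    static Long newStart(NavigableSet<Long> replacedFileKeys, long lastKeyInNewFiles)
    {
        return replacedFileKeys.higher(lastKeyInNewFiles);
    }

    public static void main(String[] args)
    {
        NavigableSet<Long> keys = new TreeSet<>();
        keys.add(10L); keys.add(20L); keys.add(30L);
        System.out.println(newStart(keys, 20L)); // 30: no overlap with the new files
        System.out.println(newStart(keys, 35L)); // null: nothing left, file is shadowed
    }
}
{code}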



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8774) BulkOutputFormat never completes if streaming have errors

2015-02-10 Thread Erik Forsberg (JIRA)
Erik Forsberg created CASSANDRA-8774:


 Summary: BulkOutputFormat never completes if streaming have errors
 Key: CASSANDRA-8774
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8774
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Erik Forsberg


With BulkOutputFormat in Cassandra 1.2.18, if any streaming errors occurred, the 
Hadoop task would fail. This doesn't seem to happen with 2.0.12.

I have a Hadoop map task that uses BulkOutputFormat. If one of the Cassandra 
nodes I'm writing to is down, I'm getting the following syslog output from the 
map task:

{noformat}
2015-02-10 10:54:15,162 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded 
the native-hadoop library
2015-02-10 10:54:15,601 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: 
Initializing JVM Metrics with processName=MAP, sessionId=
2015-02-10 10:54:15,901 INFO org.apache.hadoop.util.ProcessTree: setsid exited 
with exit code 0
2015-02-10 10:54:15,907 INFO org.apache.hadoop.mapred.Task:  Using 
ResourceCalculatorPlugin : 
org.apache.hadoop.util.LinuxResourceCalculatorPlugin@4984451e
2015-02-10 10:54:16,110 INFO org.apache.hadoop.mapred.MapTask: Processing 
split: 
hdfs://hdpmt01.osp-hadoop.osa:9000/user/jenkins/syst/5ef13_osp/tvstore/sumcombinations/hourly/2015021002/per_period-5ba2faa4b1e4aa21fa163e82bc46-sumcombinations/0/data/part-00047:0+462
2015-02-10 10:54:16,739 INFO org.apache.hadoop.io.compress.zlib.ZlibFactory: 
Successfully loaded & initialized native-zlib library
2015-02-10 10:54:16,740 INFO org.apache.hadoop.io.compress.CodecPool: Got 
brand-new decompressor
2015-02-10 10:54:16,741 INFO org.apache.hadoop.io.compress.CodecPool: Got 
brand-new decompressor
2015-02-10 10:54:16,741 INFO org.apache.hadoop.io.compress.CodecPool: Got 
brand-new decompressor
2015-02-10 10:54:16,741 INFO org.apache.hadoop.io.compress.CodecPool: Got 
brand-new decompressor
2015-02-10 10:54:16,927 ERROR org.apache.cassandra.cql3.QueryProcessor: Unable 
to initialize MemoryMeter (jamm not specified as javaagent).  This means 
Cassandra will be unable to measure object sizes accurately and may 
consequently OOM.
2015-02-10 10:54:17,780 INFO org.apache.cassandra.utils.CLibrary: JNA not 
found. Native methods will be disabled.
2015-02-10 10:54:19,446 INFO org.apache.cassandra.io.sstable.SSTableReader: 
Opening 
/opera/log1/hadoop/mapred/local/taskTracker/jenkins/jobcache/job_201502041226_13903/attempt_201502041226_13903_m_00_0/work/tmp/syst5ef13osp/Data_hourly/syst5ef13osp-Data_hourly-jb-1
 (1018 bytes)
2015-02-10 10:54:20,713 INFO org.apache.cassandra.streaming.StreamResultFuture: 
[Stream #29f27cd0-b113-11e4-a465-91cc09fc46f1] Executing streaming plan for 
Bulk Load
2015-02-10 10:54:20,713 INFO org.apache.cassandra.streaming.StreamResultFuture: 
[Stream #29f27cd0-b113-11e4-a465-91cc09fc46f1] Beginning stream session with 
/ipv6:prefix:1:441:0:0:0:7
2015-02-10 10:54:20,714 INFO org.apache.cassandra.streaming.StreamResultFuture: 
[Stream #29f27cd0-b113-11e4-a465-91cc09fc46f1] Beginning stream session with 
/ipv6:prefix:1:441:0:0:0:8
2015-02-10 10:54:20,715 INFO org.apache.cassandra.streaming.StreamSession: 
[Stream #29f27cd0-b113-11e4-a465-91cc09fc46f1] Starting streaming to 
/ipv6:prefix:1:441:0:0:0:7
2015-02-10 10:54:20,730 INFO org.apache.cassandra.streaming.StreamResultFuture: 
[Stream #29f27cd0-b113-11e4-a465-91cc09fc46f1] Beginning stream session with 
/ipv6:prefix:1:441:0:0:0:4
2015-02-10 10:54:20,750 INFO org.apache.cassandra.streaming.StreamResultFuture: 
[Stream #29f27cd0-b113-11e4-a465-91cc09fc46f1] Beginning stream session with 
/ipv6:prefix:1:441:0:0:0:3
2015-02-10 10:54:20,731 INFO org.apache.cassandra.streaming.StreamSession: 
[Stream #29f27cd0-b113-11e4-a465-91cc09fc46f1] Starting streaming to 
/ipv6:prefix:1:441:0:0:0:8
2015-02-10 10:54:20,750 INFO org.apache.cassandra.streaming.StreamSession: 
[Stream #29f27cd0-b113-11e4-a465-91cc09fc46f1] Starting streaming to 
/ipv6:prefix:1:441:0:0:0:4
2015-02-10 10:54:20,770 INFO org.apache.cassandra.streaming.StreamResultFuture: 
[Stream #29f27cd0-b113-11e4-a465-91cc09fc46f1] Beginning stream session with 
/ipv6:prefix:1:441:0:0:0:6
2015-02-10 10:54:20,778 INFO org.apache.cassandra.streaming.StreamResultFuture: 
[Stream #29f27cd0-b113-11e4-a465-91cc09fc46f1] Beginning stream session with 
/ipv6:prefix:1:441:0:0:0:5
2015-02-10 10:54:20,786 INFO org.apache.cassandra.streaming.StreamSession: 
[Stream #29f27cd0-b113-11e4-a465-91cc09fc46f1] Starting streaming to 
/ipv6:prefix:1:441:0:0:0:3
2015-02-10 10:54:20,790 INFO org.apache.cassandra.streaming.StreamSession: 
[Stream #29f27cd0-b113-11e4-a465-91cc09fc46f1] Starting streaming to 
/ipv6:prefix:1:441:0:0:0:6
2015-02-10 10:54:20,867 INFO org.apache.cassandra.streaming.StreamSession: 
[Stream #29f27cd0-b113-11e4-a465-91cc09fc46f1] Starting streaming to 
/ipv6:prefix:1:441:0:0:0:5
2015-02-10 10:54:20,897 INF

[jira] [Created] (CASSANDRA-8775) Memtables do not track the on-heap overheads of DecoratedKey

2015-02-10 Thread Benedict (JIRA)
Benedict created CASSANDRA-8775:
---

 Summary: Memtables do not track the on-heap overheads of 
DecoratedKey
 Key: CASSANDRA-8775
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8775
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 2.1.4


Looks like a slight oversight; I had some code to fix this intermingled with 
CASSANDRA-7282. I will split it out shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8775) Memtables do not track the on-heap overheads of DecoratedKey

2015-02-10 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict resolved CASSANDRA-8775.
-
Resolution: Not a Problem

My mistake. This is dealt with in the ROW_OVERHEAD constant.

> Memtables do not track the on-heap overheads of DecoratedKey
> 
>
> Key: CASSANDRA-8775
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8775
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
>Priority: Minor
> Fix For: 2.1.4
>
>
> Looks like a slight oversight; I had some code to fix this intermingled with 
> CASSANDRA-7282. I will split it out shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-6538) Provide a read-time CQL function to display the data size of columns and rows

2015-02-10 Thread Anuja Mandlecha (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuja Mandlecha updated CASSANDRA-6538:
---
Attachment: sizeFzt.PNG

> Provide a read-time CQL function to display the data size of columns and rows
> -
>
> Key: CASSANDRA-6538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6538
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Johnny Miller
>Priority: Minor
>  Labels: cql
> Attachments: 6538.patch, sizeFzt.PNG
>
>
> It would be extremely useful to be able to work out the size of rows and 
> columns via CQL. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-6538) Provide a read-time CQL function to display the data size of columns and rows

2015-02-10 Thread Anuja Mandlecha (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuja Mandlecha updated CASSANDRA-6538:
---
Attachment: 6538.patch

> Provide a read-time CQL function to display the data size of columns and rows
> -
>
> Key: CASSANDRA-6538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6538
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Johnny Miller
>Priority: Minor
>  Labels: cql
> Attachments: 6538.patch
>
>
> It would be extremely useful to be able to work out the size of rows and 
> columns via CQL. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6538) Provide a read-time CQL function to display the data size of columns and rows

2015-02-10 Thread Anuja Mandlecha (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314180#comment-14314180
 ] 

Anuja Mandlecha commented on CASSANDRA-6538:


I am taking a step back and reuploading the patch. Now I am treating everything 
as a BLOB (ByteBuffer), and sizeOf() calculates the size for all data types 
(including standard data types like int and float).
On second thought, though, calculating the size of data types like integers and 
floats is of little use since their sizes are fixed; sizeOf() should only 
calculate the size of variable-length data types like text, BLOB and collection 
types. Let me know your views and comments on the same.
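
The core computation under discussion is small: treat every value as a
ByteBuffer and report its serialized length. A minimal sketch of just that part
(illustrative only; wiring it into CQL as in the attached 6538.patch is not
shown):

{code}
import java.nio.ByteBuffer;

public class SizeOfSketch
{
    // Sketch of the core computation: the "size" of a column value is just the
    // number of serialized bytes, regardless of its declared CQL type.
    static int sizeOf(ByteBuffer serializedValue)
    {
        return serializedValue == null ? 0 : serializedValue.remaining();
    }

    public static void main(String[] args)
    {
        System.out.println(sizeOf(ByteBuffer.wrap("hello".getBytes()))); // 5
        System.out.println(sizeOf(ByteBuffer.allocate(4)));              // 4, e.g. a serialized int
    }
}
{code}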

> Provide a read-time CQL function to display the data size of columns and rows
> -
>
> Key: CASSANDRA-6538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6538
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Johnny Miller
>Priority: Minor
>  Labels: cql
> Attachments: 6538.patch
>
>
> It would be extremely useful to be able to work out the size of rows and 
> columns via CQL. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7662) Implement templated CREATE TABLE functionality (CREATE TABLE LIKE)

2015-02-10 Thread Pramod Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pramod Nair updated CASSANDRA-7662:
---
Attachment: 7662.patch

Attaching the patch for the CREATE TABLE LIKE functionality.

> Implement templated CREATE TABLE functionality (CREATE TABLE LIKE)
> --
>
> Key: CASSANDRA-7662
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7662
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>Priority: Minor
> Fix For: 2.1.3
>
> Attachments: 7662.patch
>
>
> Implement templated CREATE TABLE functionality (CREATE TABLE LIKE) to 
> simplify creating new tables duplicating existing ones (see parent_table part 
> of  http://www.postgresql.org/docs/9.1/static/sql-createtable.html).
> CREATE TABLE <new_table> LIKE <existing_table>; - would create a new table with 
> the same columns and options as <existing_table>.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8750) Ensure SSTableReader.last corresponds exactly with the file end

2015-02-10 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314221#comment-14314221
 ] 

Marcus Eriksson commented on CASSANDRA-8750:


+1

added 2 comments and an assert to make things a bit more clear;
{code}
diff --git a/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java b/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
index ea0d785..ad53e83 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
@@ -150,6 +150,8 @@ public class CompressedSequentialWriter extends SequentialWriter
     {
         if (overrideLength <= 0)
             return metadataWriter.open(originalSize, chunkOffset, isFinal ? FINAL : SHARED_FINAL);
+        // we are early opening the file, make sure we open metadata with the correct size
+        assert !isFinal;
         return metadataWriter.open(overrideLength, chunkOffset, SHARED);
     }

diff --git a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
index ad087c7..fd8248e 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
@@ -351,6 +351,7 @@ public class CompressionMetadata
                 this.offsets.unreference();
             }
             // null out our reference to the original shared data to catch accidental reuse
+            // note that since noone is writing to this Writer while we open it, null:ing out this.offsets is safe
             this.offsets = null;
             if (type == OpenType.SHARED_FINAL)
             {
{code}

> Ensure SSTableReader.last corresponds exactly with the file end
> ---
>
> Key: CASSANDRA-8750
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8750
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
>Priority: Minor
> Fix For: 2.1.4
>
>
> Following on from CASSANDRA-8744, CASSANDRA-8749 and CASSANDRA-8747, this 
> patch attempts to make the whole opening early of compaction results more 
> robust and with more clearly understood behaviour. The improvements of 
> CASSANDRA-8747 permit is to easily align the last key with a summary 
> boundary, and an index and data file end position. This patch modifies 
> SegmentedFile to permit the provision of an explicit length, which is then 
> provided to any readers, which enforce it, ensuring no code may accidentally 
> see an end inconsistent with the one advertised. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8224) Checksum Gossip state

2015-02-10 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314229#comment-14314229
 ] 

Brandon Williams commented on CASSANDRA-8224:
-

This is a prime example of why you should have ECC memory, so we don't have to 
exert ourselves in software to accomplish what hardware can already provide.  
That said, the way I see this working is to provide the checksum as a gossip 
state itself (that way older nodes can just ignore it) which is a checksum of 
everything except the checksum state itself.  But again, it does feel like a 
problem better solved elsewhere.
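
A minimal sketch of the scheme described, checksumming every gossip application
state except the checksum state itself in a deterministic order (illustrative
only, with a hypothetical state name; not actual Gossiper code):

{code}
import java.util.Map;
import java.util.TreeMap;
import java.util.zip.CRC32;

public class GossipStateChecksum
{
    static final String CHECKSUM_STATE = "STATE_CHECKSUM"; // hypothetical state name

    // Checksums all application states except the checksum state itself, in a
    // deterministic (sorted) order so every node computes the same value.
    static long checksum(Map<String, String> applicationStates)
    {
        CRC32 crc = new CRC32();
        for (Map.Entry<String, String> e : new TreeMap<>(applicationStates).entrySet())
        {
            if (CHECKSUM_STATE.equals(e.getKey()))
                continue;
            crc.update(e.getKey().getBytes());
            crc.update(e.getValue().getBytes());
        }
        return crc.getValue();
    }
}
{code}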

> Checksum Gossip state
> -
>
> Key: CASSANDRA-8224
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8224
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Minor
>
>  We have seen that a single machine with bad memory can corrupt the gossip of 
> other nodes and cause entire cluster to be affected. If we store and pass the 
> checksum of the entire state, we can detect corruption. If a bad machine 
> tries to bump the generation number or other things, it will be detected and 
> ignored.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8776) nodetool status reports success for missing keyspace

2015-02-10 Thread Stuart Bishop (JIRA)
Stuart Bishop created CASSANDRA-8776:


 Summary: nodetool status reports success for missing keyspace
 Key: CASSANDRA-8776
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8776
 Project: Cassandra
  Issue Type: Bug
Reporter: Stuart Bishop
Priority: Minor


'nodetool status somethinginvalid' will correctly output an error message that 
the keyspace does not exist, but still returns a 'success' code of 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8683) Ensure early reopening has no overlap with replaced files

2015-02-10 Thread Alan Boudreault (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Boudreault updated CASSANDRA-8683:
---
Attachment: test.sh

[~benedict] Can you try this simple one and let me know if you can reproduce 
the issue? I do see the message on each run.

> Ensure early reopening has no overlap with replaced files
> -
>
> Key: CASSANDRA-8683
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8683
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Benedict
>Priority: Critical
> Fix For: 2.1.3
>
> Attachments: 0001-avoid-NPE-in-getPositionsForRanges.patch, 
> system.log, test.sh
>
>
> When introducing CASSANDRA-6916 we permitted the early opened files to 
> overlap with the files they were replacing by one DecoratedKey, as this 
> permitted a few minor simplifications. Unfortunately this breaks assumptions 
> in LeveledCompactionScanner, that are causing the intermittent unit test 
> failures: 
> http://cassci.datastax.com/job/trunk_utest/1330/testReport/junit/org.apache.cassandra.db.compaction/LeveledCompactionStrategyTest/testValidationMultipleSSTablePerLevel/
> This patch by itself does not fix the bug, but fixes the described aspect of 
> it, by ensuring the replaced and replacing files never overlap. This is 
> achieved first by always selecting the replaced file start as the next key 
> present in the file greater than the last key in the new file(s).  If there 
> is no such key, however, there is no data to return for the reader, but to 
> permit abort and atomic replacement at the end of a macro compaction action, 
> we must keep the file in the DataTracker for replacement purposes, but not 
> return it to consumers (esp. as many assume a non-empty range). For this I 
> have introduced a new OpenReason called SHADOWED, and a 
> DataTracker.View.shadowed collection of sstables, that tracks those we still 
> consider to be in the live set, but from which we no longer answer any 
> queries.
> CASSANDRA-8744 (and then CASSANDRA-8750) then ensures that these bounds are 
> honoured, so that we never break the assumption that files in LCS never 
> overlap.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8776) nodetool status reports success for missing keyspace

2015-02-10 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-8776:

Labels: lhf  (was: )

> nodetool status reports success for missing keyspace
> 
>
> Key: CASSANDRA-8776
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8776
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stuart Bishop
>Priority: Minor
>  Labels: lhf
>
> 'nodetool status somethinginvalid' will correctly output an error message 
> that the keyspace does not exist, but still returns a 'success' code of 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8683) Ensure early reopening has no overlap with replaced files

2015-02-10 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314256#comment-14314256
 ] 

Alan Boudreault commented on CASSANDRA-8683:


If you can rebase your branch with latest 2.1, I can retry it too. 

> Ensure early reopening has no overlap with replaced files
> -
>
> Key: CASSANDRA-8683
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8683
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Benedict
>Priority: Critical
> Fix For: 2.1.3
>
> Attachments: 0001-avoid-NPE-in-getPositionsForRanges.patch, 
> system.log, test.sh
>
>
> When introducing CASSANDRA-6916 we permitted the early opened files to 
> overlap with the files they were replacing by one DecoratedKey, as this 
> permitted a few minor simplifications. Unfortunately this breaks assumptions 
> in LeveledCompactionScanner, that are causing the intermittent unit test 
> failures: 
> http://cassci.datastax.com/job/trunk_utest/1330/testReport/junit/org.apache.cassandra.db.compaction/LeveledCompactionStrategyTest/testValidationMultipleSSTablePerLevel/
> This patch by itself does not fix the bug, but fixes the described aspect of 
> it, by ensuring the replaced and replacing files never overlap. This is 
> achieved first by always selecting the replaced file start as the next key 
> present in the file greater than the last key in the new file(s).  If there 
> is no such key, however, there is no data to return for the reader, but to 
> permit abort and atomic replacement at the end of a macro compaction action, 
> we must keep the file in the DataTracker for replacement purposes, but not 
> return it to consumers (esp. as many assume a non-empty range). For this I 
> have introduced a new OpenReason called SHADOWED, and a 
> DataTracker.View.shadowed collection of sstables, that tracks those we still 
> consider to be in the live set, but from which we no longer answer any 
> queries.
> CASSANDRA-8744 (and then CASSANDRA-8750) then ensures that these bounds are 
> honoured, so that we never break the assumption that files in LCS never 
> overlap.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8768) Using a Cassandra 2.0 seed doesn't allow a new Cassandra 2.1 node to bootstrap

2015-02-10 Thread Ron Kuris (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314264#comment-14314264
 ] 

Ron Kuris edited comment on CASSANDRA-8768 at 2/10/15 2:59 PM:
---

This works fine, though, when going the other direction already (that is, a 2.1 
seed and a 2.0 node bootstrapping). It also works fine if the node happens to 
be in the cached list; it only fails when the node doesn't have the cached 
entry for this IP.

The schema was fetched just fine from the 2.0 nodes. The code drops down to the 
older version, and there is a lot of code to support this already due to the 
reverse case working fine.

Even if you decide this should not be fixed, the error message is terrible, and 
requires the user to turn it up to debug before getting a possible clue as to 
the problem. I'd suggest at least:
{code}logger.warn("Seed gossip version is {}; will not connect with that 
version", maxTargetVersion);{code}


was (Author: rkuris):
This works fine, though, when going the other direction already (that is, a 2.1 
seed and a 2.0 node bootstrapping). It also works fine if the node happens to 
be in the cached list; it only fails when the node doesn't have the cached 
entry for this IP.

The schema was fetched just fine from the 2.0 nodes. The code drops down to the 
older version, and there is a lot of code to support this already due to the 
reverse case working fine.

Even if you decide this should not be fixed, the error message is terrible, and 
requires the user to turn it up to debug before getting a possible clue as to 
the problem. I'd suggest at least:
{code}logger.warn("Seed gossip version is {}; will not connect with that 
version", maxTargetVersion);

> Using a Cassandra 2.0 seed doesn't allow a new Cassandra 2.1 node to bootstrap
> --
>
> Key: CASSANDRA-8768
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8768
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Ron Kuris
> Fix For: 2.1.3
>
> Attachments: gossip-with-2.0-patch.txt
>
>
> If you spin up a Cassandra 2.0 cluster with some seeds, and then attempt to 
> attach a Cassandra 2.1 node to it, you get the following message:
> {code}OutboundTcpConnection.java:429 - Handshaking version with 
> /10.24.0.10{code}
> Turning on debug, you get a few additional messages:
> {code}DEBUG [WRITE-/(ip)] MessagingService.java:789 - Setting version 7 for 
> /10.24.0.10
> DEBUG [WRITE-/(ip)] OutboundTcpConnection.java:369 - Target max version is 7; 
> will reconnect with that version{code}
> However, the code never reconnects. See the comments as to why.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8768) Using a Cassandra 2.0 seed doesn't allow a new Cassandra 2.1 node to bootstrap

2015-02-10 Thread Ron Kuris (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314264#comment-14314264
 ] 

Ron Kuris commented on CASSANDRA-8768:
--

This works fine, though, when going the other direction already (that is, a 2.1 
seed and a 2.0 node bootstrapping). It also works fine if the node happens to 
be in the cached list; it only fails when the node doesn't have the cached 
entry for this IP.

The schema was fetched just fine from the 2.0 nodes. The code drops down to the 
older version, and there is a lot of code to support this already due to the 
reverse case working fine.

Even if you decide this should not be fixed, the error message is terrible, and 
requires the user to turn it up to debug before getting a possible clue as to 
the problem. I'd suggest at least:
{code}logger.warn("Seed gossip version is {}; will not connect with that 
version", maxTargetVersion);

> Using a Cassandra 2.0 seed doesn't allow a new Cassandra 2.1 node to bootstrap
> --
>
> Key: CASSANDRA-8768
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8768
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Ron Kuris
> Fix For: 2.1.3
>
> Attachments: gossip-with-2.0-patch.txt
>
>
> If you spin up a Cassandra 2.0 cluster with some seeds, and then attempt to 
> attach a Cassandra 2.1 node to it, you get the following message:
> {code}OutboundTcpConnection.java:429 - Handshaking version with 
> /10.24.0.10{code}
> Turning on debug, you get a few additional messages:
> {code}DEBUG [WRITE-/(ip)] MessagingService.java:789 - Setting version 7 for 
> /10.24.0.10
> DEBUG [WRITE-/(ip)] OutboundTcpConnection.java:369 - Target max version is 7; 
> will reconnect with that version{code}
> However, the code never reconnects. See the comments as to why.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8535) java.lang.RuntimeException: Failed to rename XXX to YYY

2015-02-10 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314267#comment-14314267
 ] 

Marcus Eriksson commented on CASSANDRA-8535:


+1

and with CASSANDRA-8766 in mind, we should probably do it this way always

> java.lang.RuntimeException: Failed to rename XXX to YYY
> ---
>
> Key: CASSANDRA-8535
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8535
> Project: Cassandra
>  Issue Type: Bug
> Environment: Windows 2008 X64
>Reporter: Leonid Shalupov
>Assignee: Joshua McKenzie
> Fix For: 2.1.3
>
> Attachments: 8535_v1.txt, 8535_v2.txt
>
>
> {code}
> java.lang.RuntimeException: Failed to rename 
> build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
>  to 
> build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db
>   at 
> org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:170) 
> ~[main/:na]
>   at 
> org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:154) 
> ~[main/:na]
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:569) 
> ~[main/:na]
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:561) 
> ~[main/:na]
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter.close(SSTableWriter.java:535) 
> ~[main/:na]
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter.finish(SSTableWriter.java:470) 
> ~[main/:na]
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.finishAndMaybeThrow(SSTableRewriter.java:349)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:324)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:304)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:200)
>  ~[main/:na]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
>  ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_45]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_45]
>   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
> Caused by: java.nio.file.FileSystemException: 
> build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
>  -> 
> build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db:
>  The process cannot access the file because it is being used by another 
> process.
>   at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86) 
> ~[na:1.7.0_45]
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) 
> ~[na:1.7.0_45]
>   at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301) 
> ~[na:1.7.0_45]
>   at 
> sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287) 
> ~[na:1.7.0_45]
>   at java.nio.file.Files.move(Files.java:1345) ~[na:1.7.0_45]
>   at 
> org.apache.cassandra.io.util.FileUtils.atomicMoveWithFallback(FileUtils.java:184)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:166) 
> ~[main/:na]
>   ... 18 common frames omitted
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8719) Using thrift HSHA with offheap_objects appears to corrupt data

2015-02-10 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314304#comment-14314304
 ] 

Marcus Eriksson commented on CASSANDRA-8719:


Could you post your schema etc.?

Also, it would be extremely helpful if you could post a small script (or just 
the inserts you do, etc.) that reproduces this.

> Using thrift HSHA with offheap_objects appears to corrupt data
> --
>
> Key: CASSANDRA-8719
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8719
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Randy Fradin
>Assignee: Marcus Eriksson
> Fix For: 2.1.3
>
>
> Copying my comment from CASSANDRA-6285 to a new issue since that issue is 
> long closed and I'm not sure if they are related...
> I am getting this exception using Thrift HSHA in 2.1.0:
> {quote}
>  INFO [CompactionExecutor:8] 2015-01-26 13:32:51,818 CompactionTask.java 
> (line 138) Compacting 
> [SSTableReader(path='/tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-ka-2-Data.db'),
>  
> SSTableReader(path='/tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-ka-1-Data.db')]
>  INFO [CompactionExecutor:8] 2015-01-26 13:32:51,890 ColumnFamilyStore.java 
> (line 856) Enqueuing flush of compactions_in_progress: 212 (0%) on-heap, 20 
> (0%) off-heap
>  INFO [MemtableFlushWriter:8] 2015-01-26 13:32:51,892 Memtable.java (line 
> 326) Writing Memtable-compactions_in_progress@1155018639(0 serialized bytes, 
> 1 ops, 0%/0% of on/off-heap limit)
>  INFO [MemtableFlushWriter:8] 2015-01-26 13:32:51,896 Memtable.java (line 
> 360) Completed flushing 
> /tmp/cass_test/cassandra/TestCassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-2-Data.db
>  (42 bytes) for commitlog position ReplayPosition(segmentId=1422296630707, 
> position=430226)
> ERROR [CompactionExecutor:8] 2015-01-26 13:32:51,906 CassandraDaemon.java 
> (line 166) Exception in thread Thread[CompactionExecutor:8,1,RMI Runtime]
> java.lang.RuntimeException: Last written key 
> DecoratedKey(131206587314004820534098544948237170809, 
> 80010001000c62617463685f6d757461746500) >= current key 
> DecoratedKey(14775611966645399672119169777260659240, 
> 726f776b65793030385f31343232323937313537353835) writing into 
> /tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-tmp-ka-3-Data.db
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:172)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:196) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:110)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:177)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:235)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_40]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_40]
> at java.lang.Thread.run(Thread.java:724) [na:1.7.0_40]
> {quote}
> I don't think it's caused by CASSANDRA-8211, because it happens during the 
> first compaction that takes place between the first 2 SSTables to get flushed 
> from an initially empty column family.
> Also, I've only been able to reproduce it when using both *hsha* for the rpc 
> server and *offheap_objects* for memtable allocation. If I switch either to 
> sync or to offheap_buffers or heap_buffers then I cannot reproduce the 
> problem. Also under the same 

[jira] [Commented] (CASSANDRA-8298) cassandra-stress legacy

2015-02-10 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314406#comment-14314406
 ] 

Jonathan Ellis commented on CASSANDRA-8298:
---

stress is really more for developers than end users, so I'm fine with that.

>  cassandra-stress legacy
> 
>
> Key: CASSANDRA-8298
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8298
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Centos 6.5 Cassandra 2.1.1
>Reporter: Edgardo Vega
>Assignee: T Jake Luciani
>
> Running cassandra-stress legacy failed immediately with a error.
> Running in legacy support mode. Translating command to:
> stress write n=100 -col n=fixed(5) size=fixed(34) data=repeat(1) -rate 
> threads=50 -log interval=10 -mode thrift
> Invalid parameter data=repeat(1)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8729) Commitlog causes read before write when overwriting

2015-02-10 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314409#comment-14314409
 ] 

T Jake Luciani commented on CASSANDRA-8729:
---

I found a workaround: set the commit log segment size larger than the total 
commit log space size. Each segment is then dropped as soon as it is full, 
which resolves this issue.
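
For reference, a minimal cassandra.yaml sketch of that workaround, assuming the 
standard commitlog_total_space_in_mb / commitlog_segment_size_in_mb settings; 
the sizes below are only illustrative:

{noformat}
# Make each segment larger than the total commit log space, so a full
# segment is dropped rather than recycled (illustrative sizes only).
commitlog_total_space_in_mb: 512
commitlog_segment_size_in_mb: 1024
{noformat}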

> Commitlog causes read before write when overwriting
> ---
>
> Key: CASSANDRA-8729
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8729
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Ariel Weisberg
>  Labels: commitlog
>
> The memory mapped commit log implementation writes directly to the page 
> cache. If a page is not in the cache the kernel will read it in even though 
> we are going to overwrite.
> The way to avoid this is to write to private memory, and then pad the write 
> with 0s at the end so it is page (4k) aligned before writing to a file.
> The commit log would benefit from being refactored into something that looks 
> more like a pipeline with incoming requests receiving private memory to write 
> in, completed buffers being submitted to a  parallelized compression/checksum 
> step, followed by submission to another thread for writing to a file that 
> preserves the order.
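
Not part of the ticket, but as a rough sketch of the zero-padding idea 
described above (plain NIO; the file name and payload are made up):

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class PaddedAppendSketch
{
    private static final int PAGE_SIZE = 4096;

    // Copy the payload into private (heap) memory, zero-pad it to a 4k
    // boundary, and write the whole page-multiple buffer, so the kernel
    // never has to read a partially covered page back in before the write.
    public static void writeAligned(FileChannel channel, byte[] payload) throws IOException
    {
        int padded = ((payload.length + PAGE_SIZE - 1) / PAGE_SIZE) * PAGE_SIZE;
        ByteBuffer buf = ByteBuffer.allocate(padded); // zero-filled on allocation
        buf.put(payload);
        buf.rewind();
        while (buf.hasRemaining())
            channel.write(buf);
    }

    public static void main(String[] args) throws IOException
    {
        try (FileChannel channel = FileChannel.open(Paths.get("/tmp/padded-append.bin"),
                                                    StandardOpenOption.CREATE,
                                                    StandardOpenOption.WRITE,
                                                    StandardOpenOption.APPEND))
        {
            writeAligned(channel, "example mutation bytes".getBytes());
        }
    }
}
{code}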



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8771) Remove commit log segment recycling

2015-02-10 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314414#comment-14314414
 ] 

T Jake Luciani commented on CASSANDRA-8771:
---

+1 dump it.

> Remove commit log segment recycling
> ---
>
> Key: CASSANDRA-8771
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8771
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Ariel Weisberg
>  Labels: commitlog
>
> For discussion
> Commit log segment recycling introduces a lot of complexity in the existing 
> code.
> CASSANDRA-8729 is a side effect of commit log segment recycling and 
> addressing it will require memory management code and thread coordination for 
> memory that the filesystem will no longer handle for us.
> There is some discussion about what storage configurations actually benefit 
> from preallocated files. Fast random access devices like SSDs, or 
> non-volatile write caches etc. make the distinction not that great. 
> I haven't measured any difference in throughput for bulk appending vs 
> overwriting although it was pointed out that I didn't test with concurrent IO 
> streams.
> What would it take to make removing commit log segment recycling acceptable? 
> Maybe a benchmark on a spinning disk that measures the performance impact of 
> preallocation when there are other IO streams?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-7304) Ability to distinguish between NULL and UNSET values in Prepared Statements

2015-02-10 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp reopened CASSANDRA-7304:
-

Sorry for the status change - the commit belongs to CASSANDRA-7886 (which I 
meant - got confused in my browser that day).

> Ability to distinguish between NULL and UNSET values in Prepared Statements
> ---
>
> Key: CASSANDRA-7304
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7304
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Drew Kutcharian
>Assignee: Oded Peer
>  Labels: cql, protocolv4
> Fix For: 3.0
>
> Attachments: 7304-03.patch, 7304-04.patch, 7304-2.patch, 7304.patch
>
>
> Currently Cassandra inserts tombstones when a value of a column is bound to 
> NULL in a prepared statement. At higher insert rates managing all these 
> tombstones becomes an unnecessary overhead. This limits the usefulness of the 
> prepared statements since developers have to either create multiple prepared 
> statements (each with a different combination of column names, which at times 
> is just unfeasible because of the sheer number of possible combinations) or 
> fall back to using regular (non-prepared) statements.
> This JIRA is here to explore the possibility of either:
> A. Have a flag on prepared statements that once set, tells Cassandra to 
> ignore null columns
> or
> B. Have an "UNSET" value which makes Cassandra skip the null columns and not 
> tombstone them
> Basically, in the context of a prepared statement, a null value means delete, 
> but we don’t have anything that means "ignore" (besides creating a new 
> prepared statement without the ignored column).
> Please refer to the original conversation on DataStax Java Driver mailing 
> list for more background:
> https://groups.google.com/a/lists.datastax.com/d/topic/java-driver-user/cHE3OOSIXBU/discussion
> *EDIT 18/12/14 - [~odpeer] Implementation Notes:*
> The motivation hasn't changed.
> Protocol version 4 specifies that bind variables do not require having a 
> value when executing a statement. Bind variables without a value are called 
> 'unset'. The 'unset' bind variable is serialized as the int value '-2' 
> without following bytes.
> \\
> \\
> * An unset bind variable in an EXECUTE or BATCH request
> ** On a {{value}} does not modify the value and does not create a tombstone
> ** On the {{ttl}} clause is treated as 'unlimited'
> ** On the {{timestamp}} clause is treated as 'now'
> ** On a map key or a list index throws {{InvalidRequestException}}
> ** On a {{counter}} increment or decrement operation does not change the 
> counter value, e.g. {{UPDATE my_tab SET c = c - ? WHERE k = 1}} does not 
> change the value of counter {{c}}
> ** On a tuple field or UDT field throws {{InvalidRequestException}}
> * An unset bind variable in a QUERY request
> ** On a partition column, clustering column or index column in the {{WHERE}} 
> clause throws {{InvalidRequestException}}
> ** On the {{limit}} clause is treated as 'unlimited'
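
To make the wire format described above concrete, a minimal sketch of the three 
cases for a bound value, assuming the usual [bytes] encoding (a signed 32-bit 
length followed by that many bytes); this is illustration only, not driver or 
server code:

{code}
import java.nio.ByteBuffer;

public class BindValueEncodingSketch
{
    // Regular value: its length, then the bytes themselves.
    static ByteBuffer encodeValue(ByteBuffer value)
    {
        ByteBuffer out = ByteBuffer.allocate(4 + value.remaining());
        out.putInt(value.remaining());
        out.put(value.duplicate());
        out.flip();
        return out;
    }

    // null has conventionally been a negative length with no bytes following.
    static ByteBuffer encodeNull()  { return lengthOnly(-1); }

    // Protocol v4 'unset': the int value -2, again with no bytes following.
    static ByteBuffer encodeUnset() { return lengthOnly(-2); }

    private static ByteBuffer lengthOnly(int marker)
    {
        ByteBuffer out = ByteBuffer.allocate(4);
        out.putInt(marker);
        out.flip();
        return out;
    }
}
{code}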



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8252) dtests that involve topology changes should verify system.peers on all nodes

2015-02-10 Thread Shawn Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314492#comment-14314492
 ] 

Shawn Kumar commented on CASSANDRA-8252:


While trying to modify replace_address_test.py to check system.peers, I ran 
into some unexpected results. I committed a modified version of 
replace_address_test with some debug statements 
[here|https://github.com/riptano/cassandra-dtest/blob/topology/replace_address_test.py] 
to illustrate how the table changes through the process. The issues I am having 
are: the system.peers table varies across runs after the replace (although the 
replaced address does not appear, the replacing node is only rarely noticed by 
nodes 1 and 2), and after truncating the table and restarting the nodes I can 
occasionally still see the replaced address (in this case 127.0.0.3). 

> dtests that involve topology changes should verify system.peers on all nodes
> 
>
> Key: CASSANDRA-8252
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8252
> Project: Cassandra
>  Issue Type: Test
>  Components: Tests
>Reporter: Brandon Williams
>Assignee: Shawn Kumar
> Fix For: 2.1.3, 2.0.13
>
>
> This is especially true for replace where I've discovered it's wrong in 
> 1.2.19, which is sad because now it's too late to fix.  We've had a lot of 
> problems with incorrect/null system.peers, so after any topology change we 
> should verify it on every live node when everything is finished.
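
As a sketch of that check: once the topology change settles, run something like 
the query below against every live node and compare the results with the 
expected peer set (the replaced or removed address should not appear anywhere, 
and every other live node should):

{code}
SELECT peer, host_id, rpc_address, tokens FROM system.peers;
{code}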



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8719) Using thrift HSHA with offheap_objects appears to corrupt data

2015-02-10 Thread Randy Fradin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314498#comment-14314498
 ] 

Randy Fradin commented on CASSANDRA-8719:
-

From cassandra-cli on a fresh 1-node cluster running 2.1.0 with 
offheap_objects and hsha enabled:

{quote}
create keyspace ks1 with placement_strategy = 
'org.apache.cassandra.locator.SimpleStrategy' and strategy_options = 
{replication_factor:1};
use ks1;
create column family cf1
with comparator = 'AsciiType'
and key_validation_class = 'AsciiType'
and default_validation_class = 'AsciiType'
and column_type = 'Standard'
and gc_grace = 864000
and read_repair_chance = 1.0
and compaction_strategy = 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
and compaction_strategy_options = {sstable_size_in_mb:160}
and compression_options = {sstable_compression:SnappyCompressor, 
chunk_length_kb:64};
set cf1['111']['1'] = 123;
set cf1['111']['2'] = 456;
set cf1['222']['1'] = 321;
set cf1['222']['2'] = 654;
{quote}

At this point I run nodetool flush. Then back in cassandra-cli:

{quote}
set cf1['333']['1'] = 789;
set cf1['333']['2'] = abc;
set cf1['444']['1'] = 987;
set cf1['444']['2'] = cba;
{quote}

Then nodetool flush again.

Here are the files in the ks1/cf1-63901340b13f11e48e36730c68dbc38c directory:
ks1-cf1-ka-1-CompressionInfo.db
ks1-cf1-ka-1-Data.db
ks1-cf1-ka-1-Digest.sha1
ks1-cf1-ka-1-Filter.db
ks1-cf1-ka-1-Index.db
ks1-cf1-ka-1-Statistics.db
ks1-cf1-ka-1-Summary.db
ks1-cf1-ka-1-TOC.txt
ks1-cf1-ka-2-CompressionInfo.db
ks1-cf1-ka-2-Data.db
ks1-cf1-ka-2-Digest.sha1
ks1-cf1-ka-2-Filter.db
ks1-cf1-ka-2-Index.db
ks1-cf1-ka-2-Statistics.db
ks1-cf1-ka-2-Summary.db
ks1-cf1-ka-2-TOC.txt

And I see this in the log from the second flush onward:
{quote}
 INFO [RMI TCP Connection(6)-45.10.96.124] 2015-02-10 11:12:37,596 
ColumnFamilyStore.java (line 856) Enqueuing flush of cf1: 584 (0%) on-heap, 166 
(0%) off-heap
 INFO [MemtableFlushWriter:9] 2015-02-10 11:12:37,597 Memtable.java (line 326) 
Writing Memtable-cf1@114791717(72 serialized bytes, 4 ops, 0%/0% of on/off-heap 
limit)
 INFO [MemtableFlushWriter:9] 2015-02-10 11:12:37,600 Memtable.java (line 360) 
Completed flushing 
/tmp/cassandra/data/ks1/cf1-63901340b13f11e48e36730c68dbc38c/ks1-cf1-ka-2-Data.db
 (67 bytes) for commitlog position ReplayPosition(segmentId=1423583589486, 
position=4428)
 INFO [RMI TCP Connection(6)-45.10.96.124] 2015-02-10 11:12:37,603 
ColumnFamilyStore.java (line 856) Enqueuing flush of compaction_history: 412 
(0%) on-heap, 328 (0%) off-heap
 INFO [CompactionExecutor:10] 2015-02-10 11:12:37,605 ColumnFamilyStore.java 
(line 856) Enqueuing flush of compactions_in_progress: 356 (0%) on-heap, 175 
(0%) off-heap
 INFO [MemtableFlushWriter:8] 2015-02-10 11:12:37,605 Memtable.java (line 326) 
Writing Memtable-compaction_history@1602711153(253 serialized bytes, 9 ops, 
0%/0% of on/off-heap limit)
 INFO [MemtableFlushWriter:9] 2015-02-10 11:12:37,606 Memtable.java (line 326) 
Writing Memtable-compactions_in_progress@468406472(117 serialized bytes, 7 ops, 
0%/0% of on/off-heap limit)
 INFO [MemtableFlushWriter:9] 2015-02-10 11:12:37,609 Memtable.java (line 360) 
Completed flushing 
/tmp/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-1-Data.db
 (145 bytes) for commitlog position ReplayPosition(segmentId=1423583589486, 
position=4685)
 INFO [MemtableFlushWriter:8] 2015-02-10 11:12:37,611 Memtable.java (line 360) 
Completed flushing 
/tmp/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/system-compaction_history-ka-2-Data.db
 (239 bytes) for commitlog position ReplayPosition(segmentId=1423583589486, 
position=4428)
 INFO [CompactionExecutor:10] 2015-02-10 11:12:37,612 CompactionTask.java (line 
138) Compacting 
[SSTableReader(path='/tmp/cassandra/data/ks1/cf1-63901340b13f11e48e36730c68dbc38c/ks1-cf1-ka-1-Data.db'),
 
SSTableReader(path='/tmp/cassandra/data/ks1/cf1-63901340b13f11e48e36730c68dbc38c/ks1-cf1-ka-2-Data.db')]
 INFO [CompactionExecutor:10] 2015-02-10 11:12:37,647 ColumnFamilyStore.java 
(line 856) Enqueuing flush of compactions_in_progress: 212 (0%) on-heap, 20 
(0%) off-heap
 INFO [MemtableFlushWriter:9] 2015-02-10 11:12:37,648 Memtable.java (line 326) 
Writing Memtable-compactions_in_progress@359614352(0 serialized bytes, 1 ops, 
0%/0% of on/off-heap limit)
 INFO [MemtableFlushWriter:9] 2015-02-10 11:12:37,650 Memtable.java (line 360) 
Completed flushing 
/tmp/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-2-Data.db
 (42 bytes) for commitlog position ReplayPosition(segmentId=1423583589486, 
position=4756)
ERROR [CompactionExecutor:10] 2015-02-10 11:12:37,655 CassandraDaemon.java 
(line 166) Exception in thread Thread[CompactionExecutor:10,1,RMI Runtime]
java.lang.RuntimeExceptio

[jira] [Comment Edited] (CASSANDRA-8719) Using thrift HSHA with offheap_objects appears to corrupt data

2015-02-10 Thread Randy Fradin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314498#comment-14314498
 ] 

Randy Fradin edited comment on CASSANDRA-8719 at 2/10/15 5:34 PM:
--

From cassandra-cli on a fresh 1-node cluster running 2.1.0 with 
offheap_objects and hsha enabled:

{noformat}
create keyspace ks1 with placement_strategy = 
'org.apache.cassandra.locator.SimpleStrategy' and strategy_options = 
{replication_factor:1};
use ks1;
create column family cf1
with comparator = 'AsciiType'
and key_validation_class = 'AsciiType'
and default_validation_class = 'AsciiType'
and column_type = 'Standard'
and gc_grace = 864000
and read_repair_chance = 1.0
and compaction_strategy = 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
and compaction_strategy_options = {sstable_size_in_mb:160}
and compression_options = {sstable_compression:SnappyCompressor, 
chunk_length_kb:64};
set cf1['111']['1'] = 123;
set cf1['111']['2'] = 456;
set cf1['222']['1'] = 321;
set cf1['222']['2'] = 654;
{noformat}

At this point I run nodetool flush. Then back in cassandra-cli:

{noformat}
set cf1['333']['1'] = 789;
set cf1['333']['2'] = abc;
set cf1['444']['1'] = 987;
set cf1['444']['2'] = cba;
{noformat}

Then nodetool flush again.

Here are the files in the ks1/cf1-63901340b13f11e48e36730c68dbc38c directory:
ks1-cf1-ka-1-CompressionInfo.db
ks1-cf1-ka-1-Data.db
ks1-cf1-ka-1-Digest.sha1
ks1-cf1-ka-1-Filter.db
ks1-cf1-ka-1-Index.db
ks1-cf1-ka-1-Statistics.db
ks1-cf1-ka-1-Summary.db
ks1-cf1-ka-1-TOC.txt
ks1-cf1-ka-2-CompressionInfo.db
ks1-cf1-ka-2-Data.db
ks1-cf1-ka-2-Digest.sha1
ks1-cf1-ka-2-Filter.db
ks1-cf1-ka-2-Index.db
ks1-cf1-ka-2-Statistics.db
ks1-cf1-ka-2-Summary.db
ks1-cf1-ka-2-TOC.txt

And I see this in the log from the second flush onward:
{noformat}
 INFO [RMI TCP Connection(6)-45.10.96.124] 2015-02-10 11:12:37,596 
ColumnFamilyStore.java (line 856) Enqueuing flush of cf1: 584 (0%) on-heap, 166 
(0%) off-heap
 INFO [MemtableFlushWriter:9] 2015-02-10 11:12:37,597 Memtable.java (line 326) 
Writing Memtable-cf1@114791717(72 serialized bytes, 4 ops, 0%/0% of on/off-heap 
limit)
 INFO [MemtableFlushWriter:9] 2015-02-10 11:12:37,600 Memtable.java (line 360) 
Completed flushing 
/tmp/cassandra/data/ks1/cf1-63901340b13f11e48e36730c68dbc38c/ks1-cf1-ka-2-Data.db
 (67 bytes) for commitlog position ReplayPosition(segmentId=1423583589486, 
position=4428)
 INFO [RMI TCP Connection(6)-45.10.96.124] 2015-02-10 11:12:37,603 
ColumnFamilyStore.java (line 856) Enqueuing flush of compaction_history: 412 
(0%) on-heap, 328 (0%) off-heap
 INFO [CompactionExecutor:10] 2015-02-10 11:12:37,605 ColumnFamilyStore.java 
(line 856) Enqueuing flush of compactions_in_progress: 356 (0%) on-heap, 175 
(0%) off-heap
 INFO [MemtableFlushWriter:8] 2015-02-10 11:12:37,605 Memtable.java (line 326) 
Writing Memtable-compaction_history@1602711153(253 serialized bytes, 9 ops, 
0%/0% of on/off-heap limit)
 INFO [MemtableFlushWriter:9] 2015-02-10 11:12:37,606 Memtable.java (line 326) 
Writing Memtable-compactions_in_progress@468406472(117 serialized bytes, 7 ops, 
0%/0% of on/off-heap limit)
 INFO [MemtableFlushWriter:9] 2015-02-10 11:12:37,609 Memtable.java (line 360) 
Completed flushing 
/tmp/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-1-Data.db
 (145 bytes) for commitlog position ReplayPosition(segmentId=1423583589486, 
position=4685)
 INFO [MemtableFlushWriter:8] 2015-02-10 11:12:37,611 Memtable.java (line 360) 
Completed flushing 
/tmp/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/system-compaction_history-ka-2-Data.db
 (239 bytes) for commitlog position ReplayPosition(segmentId=1423583589486, 
position=4428)
 INFO [CompactionExecutor:10] 2015-02-10 11:12:37,612 CompactionTask.java (line 
138) Compacting 
[SSTableReader(path='/tmp/cassandra/data/ks1/cf1-63901340b13f11e48e36730c68dbc38c/ks1-cf1-ka-1-Data.db'),
 
SSTableReader(path='/tmp/cassandra/data/ks1/cf1-63901340b13f11e48e36730c68dbc38c/ks1-cf1-ka-2-Data.db')]
 INFO [CompactionExecutor:10] 2015-02-10 11:12:37,647 ColumnFamilyStore.java 
(line 856) Enqueuing flush of compactions_in_progress: 212 (0%) on-heap, 20 
(0%) off-heap
 INFO [MemtableFlushWriter:9] 2015-02-10 11:12:37,648 Memtable.java (line 326) 
Writing Memtable-compactions_in_progress@359614352(0 serialized bytes, 1 ops, 
0%/0% of on/off-heap limit)
 INFO [MemtableFlushWriter:9] 2015-02-10 11:12:37,650 Memtable.java (line 360) 
Completed flushing 
/tmp/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-2-Data.db
 (42 bytes) for commitlog position ReplayPosition(segmentId=1423583589486, 
position=4756)
ERROR [CompactionExecutor:10] 2015-02-10 11:12:37,655 CassandraDaemon.java 
(line 166) Exception in thread Threa

[jira] [Commented] (CASSANDRA-8339) Reading columns marked as type different than default validation class from CQL causes errors

2015-02-10 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314515#comment-14314515
 ] 

Tyler Hobbs commented on CASSANDRA-8339:


[~iamaleksey] did you happen to make any progress on this?

> Reading columns marked as type different than default validation class from 
> CQL causes errors
> -
>
> Key: CASSANDRA-8339
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8339
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Erik Forsberg
>Assignee: Aleksey Yeschenko
>
> As [discussed on users mailing 
> list|http://www.mail-archive.com/user%40cassandra.apache.org/msg39251.html] 
> I'm having trouble reading data from a table created via thrift, where some 
> columns are marked as having a validator different than the default one.
> Minimal working example:
> {noformat}
> #!/usr/bin/env python
> # Run this in virtualenv with pycassa and cassandra-driver installed via pip
> import pycassa
> import cassandra
> import calendar
> import traceback
> import time
> from uuid import uuid4
> keyspace = "badcql"
> sysmanager = pycassa.system_manager.SystemManager("localhost")
> sysmanager.create_keyspace(keyspace, 
> strategy_options={'replication_factor':'1'})
> sysmanager.create_column_family(keyspace, "Users", 
> key_validation_class=pycassa.system_manager.LEXICAL_UUID_TYPE,
> 
> comparator_type=pycassa.system_manager.ASCII_TYPE,
> 
> default_validation_class=pycassa.system_manager.UTF8_TYPE)
> sysmanager.create_index(keyspace, "Users", "username", 
> pycassa.system_manager.UTF8_TYPE)
> sysmanager.create_index(keyspace, "Users", "email", 
> pycassa.system_manager.UTF8_TYPE)
> sysmanager.alter_column(keyspace, "Users", "default_account_id", 
> pycassa.system_manager.LEXICAL_UUID_TYPE)
> sysmanager.create_index(keyspace, "Users", "active", 
> pycassa.system_manager.INT_TYPE)
> sysmanager.alter_column(keyspace, "Users", "date_created", 
> pycassa.system_manager.LONG_TYPE)
> pool = pycassa.pool.ConnectionPool(keyspace, ['localhost:9160'])
> cf = pycassa.ColumnFamily(pool, "Users")
> user_uuid = uuid4()
> cf.insert(user_uuid, {'username':'test_username', 'auth_method':'ldap', 
> 'email':'t...@example.com', 'active':1, 
>   'date_created':long(calendar.timegm(time.gmtime())), 
> 'default_account_id':uuid4()})
> from cassandra.cluster import Cluster
> cassandra_cluster = Cluster(["localhost"])
> cassandra_session = cassandra_cluster.connect(keyspace)
> print "username", cassandra_session.execute('SELECT value from "Users" where 
> key = %s and column1 = %s', (user_uuid, 'username',))
> print "email", cassandra_session.execute('SELECT value from "Users" where key 
> = %s and column1 = %s', (user_uuid, 'email',))
> try:
> print "default_account_id", cassandra_session.execute('SELECT value from 
> "Users" where key = %s and column1 = %s', (user_uuid, 'default_account_id',))
> except Exception as e:
> print "Exception trying to get default_account_id", traceback.format_exc()
> cassandra_session = cassandra_cluster.connect(keyspace)
> try:
> print "active", cassandra_session.execute('SELECT value from "Users" 
> where key = %s and column1 = %s', (user_uuid, 'active',))
> except Exception as e:
> print "Exception trying to get active", traceback.format_exc()
> cassandra_session = cassandra_cluster.connect(keyspace)
> try:
> print "date_created", cassandra_session.execute('SELECT value from 
> "Users" where key = %s and column1 = %s', (user_uuid, 'date_created',))
> except Exception as e:
> print "Exception trying to get date_created", traceback.format_exc()
> 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8719) Using thrift HSHA with offheap_objects appears to corrupt data

2015-02-10 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314521#comment-14314521
 ] 

Marcus Eriksson commented on CASSANDRA-8719:


great, thanks (and it reproduces in 2.1 HEAD)

> Using thrift HSHA with offheap_objects appears to corrupt data
> --
>
> Key: CASSANDRA-8719
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8719
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Randy Fradin
>Assignee: Marcus Eriksson
> Fix For: 2.1.3
>
>
> Copying my comment from CASSANDRA-6285 to a new issue since that issue is 
> long closed and I'm not sure if they are related...
> I am getting this exception using Thrift HSHA in 2.1.0:
> {quote}
>  INFO [CompactionExecutor:8] 2015-01-26 13:32:51,818 CompactionTask.java 
> (line 138) Compacting 
> [SSTableReader(path='/tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-ka-2-Data.db'),
>  
> SSTableReader(path='/tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-ka-1-Data.db')]
>  INFO [CompactionExecutor:8] 2015-01-26 13:32:51,890 ColumnFamilyStore.java 
> (line 856) Enqueuing flush of compactions_in_progress: 212 (0%) on-heap, 20 
> (0%) off-heap
>  INFO [MemtableFlushWriter:8] 2015-01-26 13:32:51,892 Memtable.java (line 
> 326) Writing Memtable-compactions_in_progress@1155018639(0 serialized bytes, 
> 1 ops, 0%/0% of on/off-heap limit)
>  INFO [MemtableFlushWriter:8] 2015-01-26 13:32:51,896 Memtable.java (line 
> 360) Completed flushing 
> /tmp/cass_test/cassandra/TestCassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-2-Data.db
>  (42 bytes) for commitlog position ReplayPosition(segmentId=1422296630707, 
> position=430226)
> ERROR [CompactionExecutor:8] 2015-01-26 13:32:51,906 CassandraDaemon.java 
> (line 166) Exception in thread Thread[CompactionExecutor:8,1,RMI Runtime]
> java.lang.RuntimeException: Last written key 
> DecoratedKey(131206587314004820534098544948237170809, 
> 80010001000c62617463685f6d757461746500) >= current key 
> DecoratedKey(14775611966645399672119169777260659240, 
> 726f776b65793030385f31343232323937313537353835) writing into 
> /tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-tmp-ka-3-Data.db
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:172)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:196) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:110)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:177)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:235)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_40]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_40]
> at java.lang.Thread.run(Thread.java:724) [na:1.7.0_40]
> {quote}
> I don't think it's caused by CASSANDRA-8211, because it happens during the 
> first compaction that takes place between the first 2 SSTables to get flushed 
> from an initially empty column family.
> Also, I've only been able to reproduce it when using both *hsha* for the rpc 
> server and *offheap_objects* for memtable allocation. If I switch either to 
> sync or to offheap_buffers or heap_buffers then I cannot reproduce the 
> problem. Also under the same circumstances I'm pretty sure I've seen 
> incorrect data being returned to a client multiget_slice request bef

[jira] [Updated] (CASSANDRA-8719) Using thrift HSHA with offheap_objects appears to corrupt data

2015-02-10 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-8719:
---
Reproduced In: 2.1.0, 2.1.3  (was: 2.1.0)

> Using thrift HSHA with offheap_objects appears to corrupt data
> --
>
> Key: CASSANDRA-8719
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8719
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Randy Fradin
>Assignee: Marcus Eriksson
> Fix For: 2.1.3
>
>
> Copying my comment from CASSANDRA-6285 to a new issue since that issue is 
> long closed and I'm not sure if they are related...
> I am getting this exception using Thrift HSHA in 2.1.0:
> {quote}
>  INFO [CompactionExecutor:8] 2015-01-26 13:32:51,818 CompactionTask.java 
> (line 138) Compacting 
> [SSTableReader(path='/tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-ka-2-Data.db'),
>  
> SSTableReader(path='/tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-ka-1-Data.db')]
>  INFO [CompactionExecutor:8] 2015-01-26 13:32:51,890 ColumnFamilyStore.java 
> (line 856) Enqueuing flush of compactions_in_progress: 212 (0%) on-heap, 20 
> (0%) off-heap
>  INFO [MemtableFlushWriter:8] 2015-01-26 13:32:51,892 Memtable.java (line 
> 326) Writing Memtable-compactions_in_progress@1155018639(0 serialized bytes, 
> 1 ops, 0%/0% of on/off-heap limit)
>  INFO [MemtableFlushWriter:8] 2015-01-26 13:32:51,896 Memtable.java (line 
> 360) Completed flushing 
> /tmp/cass_test/cassandra/TestCassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-2-Data.db
>  (42 bytes) for commitlog position ReplayPosition(segmentId=1422296630707, 
> position=430226)
> ERROR [CompactionExecutor:8] 2015-01-26 13:32:51,906 CassandraDaemon.java 
> (line 166) Exception in thread Thread[CompactionExecutor:8,1,RMI Runtime]
> java.lang.RuntimeException: Last written key 
> DecoratedKey(131206587314004820534098544948237170809, 
> 80010001000c62617463685f6d757461746500) >= current key 
> DecoratedKey(14775611966645399672119169777260659240, 
> 726f776b65793030385f31343232323937313537353835) writing into 
> /tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-tmp-ka-3-Data.db
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:172)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:196) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:110)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:177)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:235)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_40]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_40]
> at java.lang.Thread.run(Thread.java:724) [na:1.7.0_40]
> {quote}
> I don't think it's caused by CASSANDRA-8211, because it happens during the 
> first compaction that takes place between the first 2 SSTables to get flushed 
> from an initially empty column family.
> Also, I've only been able to reproduce it when using both *hsha* for the rpc 
> server and *offheap_objects* for memtable allocation. If I switch either to 
> sync or to offheap_buffers or heap_buffers then I cannot reproduce the 
> problem. Also under the same circumstances I'm pretty sure I've seen 
> incorrect data being returned to a client multiget_slice request before any 
> SSTables had been flushed yet, so I presume thi

[jira] [Commented] (CASSANDRA-6487) Log WARN on large batch sizes

2015-02-10 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314524#comment-14314524
 ] 

Constance Eustace commented on CASSANDRA-6487:
--

What happens if a single-statement BATCH exceeds the limit?

I ask this because the batch size limit will impact setting the timestamp on a 
statement. If we have a collection of updates, the decision to batch or not 
batch them happens further downstream, when the collection of statements is 
analyzed.

HOWEVER, the UPDATE statement only supports the USING TIMESTAMP clause in the 
middle of the statement.

The BATCH statement allows you to defer the timestamp decision.

If a BATCH is encountered with a SINGLE STATEMENT, can the limit be ignored and 
the statement treated as a normal update?

I ask because there is discussion of making this a hard limit.
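
For concreteness, the two statement forms being contrasted above look roughly 
like this (table name, key, value and timestamp are made up):

{code}
-- UPDATE: the timestamp has to be supplied up front, in the middle of the statement
UPDATE my_table USING TIMESTAMP 1423594357000000 SET val = 'x' WHERE key = 'k1';

-- BATCH: the timestamp can be attached once, after the statements have been collected
BEGIN UNLOGGED BATCH USING TIMESTAMP 1423594357000000
    UPDATE my_table SET val = 'x' WHERE key = 'k1';
APPLY BATCH;
{code}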

> Log WARN on large batch sizes
> -
>
> Key: CASSANDRA-6487
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6487
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Patrick McFadin
>Assignee: Lyuben Todorov
>Priority: Minor
> Fix For: 2.0.8, 2.1 beta2
>
> Attachments: 6487-cassandra-2.0.patch, 6487-cassandra-2.0_v2.patch
>
>
> Large batches on a coordinator can cause a lot of node stress. I propose 
> adding a WARN log entry if batch sizes go beyond a configurable size. This 
> will give more visibility to operators on something that can happen on the 
> developer side. 
> New yaml setting with 5k default.
> {{# Log WARN on any batch size exceeding this value. 5k by default.}}
> {{# Caution should be taken on increasing the size of this threshold as it 
> can lead to node instability.}}
> {{batch_size_warn_threshold: 5k}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[6/6] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-02-10 Thread brandonwilliams
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8f5435d0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8f5435d0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8f5435d0

Branch: refs/heads/trunk
Commit: 8f5435d0630c048569eee8f7a88d7f56487bcbc6
Parents: 307e4f2 69d07fb
Author: Brandon Williams 
Authored: Tue Feb 10 12:07:49 2015 -0600
Committer: Brandon Williams 
Committed: Tue Feb 10 12:07:49 2015 -0600

--
 conf/cassandra-env.sh | 1 +
 conf/cassandra.yaml   | 3 +++
 2 files changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8f5435d0/conf/cassandra-env.sh
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8f5435d0/conf/cassandra.yaml
--



[1/6] cassandra git commit: State the obvious.

2015-02-10 Thread brandonwilliams
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 08deff70a -> ce8501b77
  refs/heads/cassandra-2.1 97da271b2 -> 69d07fb03
  refs/heads/trunk 307e4f242 -> 8f5435d06


State the obvious.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ce8501b7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ce8501b7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ce8501b7

Branch: refs/heads/cassandra-2.0
Commit: ce8501b77d3afa900d477855a0f1aa33bbe0a853
Parents: 08deff7
Author: Brandon Williams 
Authored: Tue Feb 10 11:48:59 2015 -0600
Committer: Brandon Williams 
Committed: Tue Feb 10 11:48:59 2015 -0600

--
 conf/cassandra-env.sh | 1 +
 conf/cassandra.yaml   | 3 +++
 2 files changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce8501b7/conf/cassandra-env.sh
--
diff --git a/conf/cassandra-env.sh b/conf/cassandra-env.sh
index 3544426..dfe8184 100644
--- a/conf/cassandra-env.sh
+++ b/conf/cassandra-env.sh
@@ -157,6 +157,7 @@ fi
 
 # Specifies the default port over which Cassandra will be available for
 # JMX connections.
+# For security reasons, you should not expose this port to the internet.  
Firewall it if needed.
 JMX_PORT="7199"
 
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce8501b7/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 45290aa..99f13a6 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -318,6 +318,7 @@ listen_address: localhost
 # same as the rpc_address. The port however is different and specified below.
 start_native_transport: true
 # port for the CQL native transport to listen for clients on
+# For security reasons, you should not expose this port to the internet.  
Firewall it if needed.
 native_transport_port: 9042
 # The maximum threads for handling requests when the native transport is used.
 # This is similar to rpc_max_threads though the default differs slightly (and
@@ -341,6 +342,8 @@ start_rpc: true
 # Note that unlike ListenAddress above, it is allowed to specify 0.0.0.0
 # here if you want to listen on all interfaces, but that will break clients 
 # that rely on node auto-discovery.
+#
+# For security reasons, you should not expose this port to the internet.  
Firewall it if needed.
 rpc_address: localhost
 # port for Thrift to listen for clients on
 rpc_port: 9160



[2/6] cassandra git commit: State the obvious.

2015-02-10 Thread brandonwilliams
State the obvious.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ce8501b7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ce8501b7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ce8501b7

Branch: refs/heads/cassandra-2.1
Commit: ce8501b77d3afa900d477855a0f1aa33bbe0a853
Parents: 08deff7
Author: Brandon Williams 
Authored: Tue Feb 10 11:48:59 2015 -0600
Committer: Brandon Williams 
Committed: Tue Feb 10 11:48:59 2015 -0600

--
 conf/cassandra-env.sh | 1 +
 conf/cassandra.yaml   | 3 +++
 2 files changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce8501b7/conf/cassandra-env.sh
--
diff --git a/conf/cassandra-env.sh b/conf/cassandra-env.sh
index 3544426..dfe8184 100644
--- a/conf/cassandra-env.sh
+++ b/conf/cassandra-env.sh
@@ -157,6 +157,7 @@ fi
 
 # Specifies the default port over which Cassandra will be available for
 # JMX connections.
+# For security reasons, you should not expose this port to the internet.  
Firewall it if needed.
 JMX_PORT="7199"
 
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce8501b7/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 45290aa..99f13a6 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -318,6 +318,7 @@ listen_address: localhost
 # same as the rpc_address. The port however is different and specified below.
 start_native_transport: true
 # port for the CQL native transport to listen for clients on
+# For security reasons, you should not expose this port to the internet.  
Firewall it if needed.
 native_transport_port: 9042
 # The maximum threads for handling requests when the native transport is used.
 # This is similar to rpc_max_threads though the default differs slightly (and
@@ -341,6 +342,8 @@ start_rpc: true
 # Note that unlike ListenAddress above, it is allowed to specify 0.0.0.0
 # here if you want to listen on all interfaces, but that will break clients 
 # that rely on node auto-discovery.
+#
+# For security reasons, you should not expose this port to the internet.  
Firewall it if needed.
 rpc_address: localhost
 # port for Thrift to listen for clients on
 rpc_port: 9160



[4/6] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-02-10 Thread brandonwilliams
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
conf/cassandra.yaml


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/69d07fb0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/69d07fb0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/69d07fb0

Branch: refs/heads/trunk
Commit: 69d07fb03f4357bcb0eb9d138af317c1172f28ca
Parents: 97da271 ce8501b
Author: Brandon Williams 
Authored: Tue Feb 10 12:07:38 2015 -0600
Committer: Brandon Williams 
Committed: Tue Feb 10 12:07:38 2015 -0600

--
 conf/cassandra-env.sh | 1 +
 conf/cassandra.yaml   | 3 +++
 2 files changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/69d07fb0/conf/cassandra-env.sh
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/69d07fb0/conf/cassandra.yaml
--
diff --cc conf/cassandra.yaml
index 0b114aa,99f13a6..f337067
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@@ -398,20 -333,18 +399,22 @@@ native_transport_port: 904
  # Whether to start the thrift rpc server.
  start_rpc: true
  
 -# The address to bind the Thrift RPC service and native transport
 -# server -- clients connect here.
 +# The address or interface to bind the Thrift RPC service and native transport
 +# server to.
 +#
 +# Set rpc_address OR rpc_interface, not both. Interfaces must correspond
 +# to a single address, IP aliasing is not supported.
  #
 -# Leaving this blank has the same effect it does for ListenAddress,
 +# Leaving rpc_address blank has the same effect as on listen_address
  # (i.e. it will be based on the configured hostname of the node).
  #
 -# Note that unlike ListenAddress above, it is allowed to specify 0.0.0.0
 -# here if you want to listen on all interfaces, but that will break clients 
 -# that rely on node auto-discovery.
 +# Note that unlike listen_address, you can specify 0.0.0.0, but you must also
 +# set broadcast_rpc_address to a value other than 0.0.0.0.
+ #
+ # For security reasons, you should not expose this port to the internet.  
Firewall it if needed.
  rpc_address: localhost
 +# rpc_interface: eth1
 +
  # port for Thrift to listen for clients on
  rpc_port: 9160
  



[3/6] cassandra git commit: State the obvious.

2015-02-10 Thread brandonwilliams
State the obvious.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ce8501b7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ce8501b7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ce8501b7

Branch: refs/heads/trunk
Commit: ce8501b77d3afa900d477855a0f1aa33bbe0a853
Parents: 08deff7
Author: Brandon Williams 
Authored: Tue Feb 10 11:48:59 2015 -0600
Committer: Brandon Williams 
Committed: Tue Feb 10 11:48:59 2015 -0600

--
 conf/cassandra-env.sh | 1 +
 conf/cassandra.yaml   | 3 +++
 2 files changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce8501b7/conf/cassandra-env.sh
--
diff --git a/conf/cassandra-env.sh b/conf/cassandra-env.sh
index 3544426..dfe8184 100644
--- a/conf/cassandra-env.sh
+++ b/conf/cassandra-env.sh
@@ -157,6 +157,7 @@ fi
 
 # Specifies the default port over which Cassandra will be available for
 # JMX connections.
+# For security reasons, you should not expose this port to the internet.  
Firewall it if needed.
 JMX_PORT="7199"
 
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce8501b7/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 45290aa..99f13a6 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -318,6 +318,7 @@ listen_address: localhost
 # same as the rpc_address. The port however is different and specified below.
 start_native_transport: true
 # port for the CQL native transport to listen for clients on
+# For security reasons, you should not expose this port to the internet.  
Firewall it if needed.
 native_transport_port: 9042
 # The maximum threads for handling requests when the native transport is used.
 # This is similar to rpc_max_threads though the default differs slightly (and
@@ -341,6 +342,8 @@ start_rpc: true
 # Note that unlike ListenAddress above, it is allowed to specify 0.0.0.0
 # here if you want to listen on all interfaces, but that will break clients 
 # that rely on node auto-discovery.
+#
+# For security reasons, you should not expose this port to the internet.  
Firewall it if needed.
 rpc_address: localhost
 # port for Thrift to listen for clients on
 rpc_port: 9160



[5/6] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-02-10 Thread brandonwilliams
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
conf/cassandra.yaml


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/69d07fb0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/69d07fb0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/69d07fb0

Branch: refs/heads/cassandra-2.1
Commit: 69d07fb03f4357bcb0eb9d138af317c1172f28ca
Parents: 97da271 ce8501b
Author: Brandon Williams 
Authored: Tue Feb 10 12:07:38 2015 -0600
Committer: Brandon Williams 
Committed: Tue Feb 10 12:07:38 2015 -0600

--
 conf/cassandra-env.sh | 1 +
 conf/cassandra.yaml   | 3 +++
 2 files changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/69d07fb0/conf/cassandra-env.sh
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/69d07fb0/conf/cassandra.yaml
--
diff --cc conf/cassandra.yaml
index 0b114aa,99f13a6..f337067
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@@ -398,20 -333,18 +399,22 @@@ native_transport_port: 904
  # Whether to start the thrift rpc server.
  start_rpc: true
  
 -# The address to bind the Thrift RPC service and native transport
 -# server -- clients connect here.
 +# The address or interface to bind the Thrift RPC service and native transport
 +# server to.
 +#
 +# Set rpc_address OR rpc_interface, not both. Interfaces must correspond
 +# to a single address, IP aliasing is not supported.
  #
 -# Leaving this blank has the same effect it does for ListenAddress,
 +# Leaving rpc_address blank has the same effect as on listen_address
  # (i.e. it will be based on the configured hostname of the node).
  #
 -# Note that unlike ListenAddress above, it is allowed to specify 0.0.0.0
 -# here if you want to listen on all interfaces, but that will break clients 
 -# that rely on node auto-discovery.
 +# Note that unlike listen_address, you can specify 0.0.0.0, but you must also
 +# set broadcast_rpc_address to a value other than 0.0.0.0.
+ #
+ # For security reasons, you should not expose this port to the internet.  
Firewall it if needed.
  rpc_address: localhost
 +# rpc_interface: eth1
 +
  # port for Thrift to listen for clients on
  rpc_port: 9160
  



[jira] [Commented] (CASSANDRA-8768) Using a Cassandra 2.0 seed doesn't allow a new Cassandra 2.1 node to bootstrap

2015-02-10 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314552#comment-14314552
 ] 

Tyler Hobbs commented on CASSANDRA-8768:


[~brandon.williams] could you ninja commit a better error message, or open a 
ticket for this if needed?

> Using a Cassandra 2.0 seed doesn't allow a new Cassandra 2.1 node to bootstrap
> --
>
> Key: CASSANDRA-8768
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8768
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Ron Kuris
> Fix For: 2.1.3
>
> Attachments: gossip-with-2.0-patch.txt
>
>
> If you spin up a Cassandra 2.0 cluster with some seeds, and then attempt to 
> attach a Cassandra 2.1 node to it, you get the following message:
> {code}OutboundTcpConnection.java:429 - Handshaking version with 
> /10.24.0.10{code}
> Turning on debug, you get a few additional messages:
> {code}DEBUG [WRITE-/(ip)] MessagingService.java:789 - Setting version 7 for 
> /10.24.0.10
> DEBUG [WRITE-/(ip)] OutboundTcpConnection.java:369 - Target max version is 7; 
> will reconnect with that version{code}
> However, the code never reconnects. See the comments as to why.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7662) Implement templated CREATE TABLE functionality (CREATE TABLE LIKE)

2015-02-10 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314560#comment-14314560
 ] 

Tyler Hobbs commented on CASSANDRA-7662:


[~pramod_nair] it looks like the majority of your patch is changing whitespace 
and imports.  Can you attach a patch without all of those extra changes?

> Implement templated CREATE TABLE functionality (CREATE TABLE LIKE)
> --
>
> Key: CASSANDRA-7662
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7662
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>Priority: Minor
> Fix For: 2.1.3
>
> Attachments: 7662.patch
>
>
> Implement templated CREATE TABLE functionality (CREATE TABLE LIKE) to 
> simplify creating new tables duplicating existing ones (see parent_table part 
> of  http://www.postgresql.org/docs/9.1/static/sql-createtable.html).
> CREATE TABLE  LIKE ; - would create a new table with 
> the same columns and options as 
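
A hypothetical example of the proposed syntax (it does not exist yet; the table 
names are invented):

{code}
-- would create users_archive with the same columns and options as users
CREATE TABLE users_archive LIKE users;
{code}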



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-8768) Using a Cassandra 2.0 seed doesn't allow a new Cassandra 2.1 node to bootstrap

2015-02-10 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reopened CASSANDRA-8768:
-
  Assignee: Brandon Williams

Reopening to fix the message.

> Using a Cassandra 2.0 seed doesn't allow a new Cassandra 2.1 node to bootstrap
> --
>
> Key: CASSANDRA-8768
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8768
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Ron Kuris
>Assignee: Brandon Williams
> Fix For: 2.1.3
>
> Attachments: gossip-with-2.0-patch.txt
>
>
> If you spin up a Cassandra 2.0 cluster with some seeds, and then attempt to 
> attach a Cassandra 2.1 node to it, you get the following message:
> {code}OutboundTcpConnection.java:429 - Handshaking version with 
> /10.24.0.10{code}
> Turning on debug, you get a few additional messages:
> {code}DEBUG [WRITE-/(ip)] MessagingService.java:789 - Setting version 7 for 
> /10.24.0.10
> DEBUG [WRITE-/(ip)] OutboundTcpConnection.java:369 - Target max version is 7; 
> will reconnect with that version{code}
> However, the code never reconnects. See the comments as to why.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6538) Provide a read-time CQL function to display the data size of columns and rows

2015-02-10 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314573#comment-14314573
 ] 

Tyler Hobbs commented on CASSANDRA-6538:


[~anuja_mandlecha] only calculating for variable-length types makes sense to me.
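
For illustration, a sizeof()-style function can simply report the serialized length of the value it is handed; this is a hypothetical sketch (class and names are made up, not the attached 6538.patch):

{code}
import java.nio.ByteBuffer;

/** Hypothetical sketch: report the serialized size in bytes of a single CQL value. */
public final class SizeOfSketch
{
    public static int sizeOf(ByteBuffer serializedValue)
    {
        // A null cell has no payload; an empty ByteBuffer reports 0 bytes.
        return serializedValue == null ? 0 : serializedValue.remaining();
    }

    public static void main(String[] args)
    {
        System.out.println(sizeOf(ByteBuffer.wrap(new byte[]{ 1, 2, 3, 4, 5 }))); // 5
        System.out.println(sizeOf(ByteBuffer.allocate(0)));                       // 0 (empty bb)
    }
}
{code}

For fixed-length types the result is effectively a constant, which is why restricting the calculation to variable-length types is the interesting case.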

> Provide a read-time CQL function to display the data size of columns and rows
> -
>
> Key: CASSANDRA-6538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6538
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Johnny Miller
>Priority: Minor
>  Labels: cql
> Attachments: 6538.patch, sizeFzt.PNG
>
>
> It would be extremely useful to be able to work out the size of rows and 
> columns via CQL. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/6] cassandra git commit: Mention storage port should not be exposed, either

2015-02-10 Thread brandonwilliams
Mention storage port should not be exposed, either


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/28c380c9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/28c380c9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/28c380c9

Branch: refs/heads/trunk
Commit: 28c380c96a363eb1a2b43109ee40f8cf2ee681f3
Parents: ce8501b
Author: Brandon Williams 
Authored: Tue Feb 10 12:19:43 2015 -0600
Committer: Brandon Williams 
Committed: Tue Feb 10 12:19:43 2015 -0600

--
 conf/cassandra.yaml | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/28c380c9/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 99f13a6..163ae9e 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -287,10 +287,12 @@ trickle_fsync: false
 trickle_fsync_interval_in_kb: 10240
 
 # TCP port, for commands and data
+# For security reasons, you should not expose this port to the internet.  Firewall it if needed.
 storage_port: 7000
 
 # SSL port, for encrypted communication.  Unused unless enabled in
 # encryption_options
+# For security reasons, you should not expose this port to the internet.  Firewall it if needed.
 ssl_storage_port: 7001
 
 # Address to bind to and tell other Cassandra nodes to connect to. You



[2/6] cassandra git commit: Mention storage port should not be exposed, either

2015-02-10 Thread brandonwilliams
Mention storage port should not be exposed, either


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/28c380c9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/28c380c9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/28c380c9

Branch: refs/heads/cassandra-2.1
Commit: 28c380c96a363eb1a2b43109ee40f8cf2ee681f3
Parents: ce8501b
Author: Brandon Williams 
Authored: Tue Feb 10 12:19:43 2015 -0600
Committer: Brandon Williams 
Committed: Tue Feb 10 12:19:43 2015 -0600

--
 conf/cassandra.yaml | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/28c380c9/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 99f13a6..163ae9e 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -287,10 +287,12 @@ trickle_fsync: false
 trickle_fsync_interval_in_kb: 10240
 
 # TCP port, for commands and data
+# For security reasons, you should not expose this port to the internet.  Firewall it if needed.
 storage_port: 7000
 
 # SSL port, for encrypted communication.  Unused unless enabled in
 # encryption_options
+# For security reasons, you should not expose this port to the internet.  Firewall it if needed.
 ssl_storage_port: 7001
 
 # Address to bind to and tell other Cassandra nodes to connect to. You



[4/6] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-02-10 Thread brandonwilliams
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/88b226b1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/88b226b1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/88b226b1

Branch: refs/heads/trunk
Commit: 88b226b1186f2b94c0e3d4611defe4da7a60cc80
Parents: 69d07fb 28c380c
Author: Brandon Williams 
Authored: Tue Feb 10 12:19:51 2015 -0600
Committer: Brandon Williams 
Committed: Tue Feb 10 12:19:51 2015 -0600

--
 conf/cassandra.yaml | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/88b226b1/conf/cassandra.yaml
--
diff --cc conf/cassandra.yaml
index f337067,163ae9e..094a196
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@@ -354,14 -292,13 +355,15 @@@ storage_port: 700
  
  # SSL port, for encrypted communication.  Unused unless enabled in
  # encryption_options
+ # For security reasons, you should not expose this port to the internet.  Firewall it if needed.
  ssl_storage_port: 7001
  
 -# Address to bind to and tell other Cassandra nodes to connect to. You
 -# _must_ change this if you want multiple nodes to be able to
 -# communicate!
 -# 
 +# Address or interface to bind to and tell other Cassandra nodes to connect to.
 +# You _must_ change this if you want multiple nodes to be able to communicate!
 +#
 +# Set listen_address OR listen_interface, not both. Interfaces must correspond
 +# to a single address, IP aliasing is not supported.
 +#
  # Leaving it blank leaves it up to InetAddress.getLocalHost(). This
  # will always do the Right Thing _if_ the node is properly configured
  # (hostname, name resolution, etc), and the Right Thing is to use the



[6/6] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-02-10 Thread brandonwilliams
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/51057f4c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/51057f4c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/51057f4c

Branch: refs/heads/trunk
Commit: 51057f4cf7674363c5a65383315dadbe431c70fb
Parents: 8f5435d 88b226b
Author: Brandon Williams 
Authored: Tue Feb 10 12:19:58 2015 -0600
Committer: Brandon Williams 
Committed: Tue Feb 10 12:19:58 2015 -0600

--
 conf/cassandra.yaml | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/51057f4c/conf/cassandra.yaml
--



[5/6] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-02-10 Thread brandonwilliams
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/88b226b1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/88b226b1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/88b226b1

Branch: refs/heads/cassandra-2.1
Commit: 88b226b1186f2b94c0e3d4611defe4da7a60cc80
Parents: 69d07fb 28c380c
Author: Brandon Williams 
Authored: Tue Feb 10 12:19:51 2015 -0600
Committer: Brandon Williams 
Committed: Tue Feb 10 12:19:51 2015 -0600

--
 conf/cassandra.yaml | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/88b226b1/conf/cassandra.yaml
--
diff --cc conf/cassandra.yaml
index f337067,163ae9e..094a196
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@@ -354,14 -292,13 +355,15 @@@ storage_port: 700
  
  # SSL port, for encrypted communication.  Unused unless enabled in
  # encryption_options
+ # For security reasons, you should not expose this port to the internet.  Firewall it if needed.
  ssl_storage_port: 7001
  
 -# Address to bind to and tell other Cassandra nodes to connect to. You
 -# _must_ change this if you want multiple nodes to be able to
 -# communicate!
 -# 
 +# Address or interface to bind to and tell other Cassandra nodes to connect to.
 +# You _must_ change this if you want multiple nodes to be able to communicate!
 +#
 +# Set listen_address OR listen_interface, not both. Interfaces must correspond
 +# to a single address, IP aliasing is not supported.
 +#
  # Leaving it blank leaves it up to InetAddress.getLocalHost(). This
  # will always do the Right Thing _if_ the node is properly configured
  # (hostname, name resolution, etc), and the Right Thing is to use the



[jira] [Commented] (CASSANDRA-6538) Provide a read-time CQL function to display the data size of columns and rows

2015-02-10 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314580#comment-14314580
 ] 

Aleksey Yeschenko commented on CASSANDRA-6538:
--

But they are all variable-length (any of them can be an empty ByteBuffer).

> Provide a read-time CQL function to display the data size of columns and rows
> -
>
> Key: CASSANDRA-6538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6538
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Johnny Miller
>Priority: Minor
>  Labels: cql
> Attachments: 6538.patch, sizeFzt.PNG
>
>
> It would be extremely useful to be able to work out the size of rows and 
> columns via CQL. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/6] cassandra git commit: Mention storage port should not be exposed, either

2015-02-10 Thread brandonwilliams
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 ce8501b77 -> 28c380c96
  refs/heads/cassandra-2.1 69d07fb03 -> 88b226b11
  refs/heads/trunk 8f5435d06 -> 51057f4cf


Mention storage port should not be exposed, either


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/28c380c9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/28c380c9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/28c380c9

Branch: refs/heads/cassandra-2.0
Commit: 28c380c96a363eb1a2b43109ee40f8cf2ee681f3
Parents: ce8501b
Author: Brandon Williams 
Authored: Tue Feb 10 12:19:43 2015 -0600
Committer: Brandon Williams 
Committed: Tue Feb 10 12:19:43 2015 -0600

--
 conf/cassandra.yaml | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/28c380c9/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 99f13a6..163ae9e 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -287,10 +287,12 @@ trickle_fsync: false
 trickle_fsync_interval_in_kb: 10240
 
 # TCP port, for commands and data
+# For security reasons, you should not expose this port to the internet.  Firewall it if needed.
 storage_port: 7000
 
 # SSL port, for encrypted communication.  Unused unless enabled in
 # encryption_options
+# For security reasons, you should not expose this port to the internet.  Firewall it if needed.
 ssl_storage_port: 7001
 
 # Address to bind to and tell other Cassandra nodes to connect to. You



[jira] [Updated] (CASSANDRA-8719) Using thrift HSHA with offheap_objects appears to corrupt data

2015-02-10 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-8719:
---
Attachment: repro8719.sh

Attaching a script that reproduces this.

> Using thrift HSHA with offheap_objects appears to corrupt data
> --
>
> Key: CASSANDRA-8719
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8719
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Randy Fradin
>Assignee: Marcus Eriksson
> Fix For: 2.1.3
>
> Attachments: repro8719.sh
>
>
> Copying my comment from CASSANDRA-6285 to a new issue since that issue is 
> long closed and I'm not sure if they are related...
> I am getting this exception using Thrift HSHA in 2.1.0:
> {quote}
>  INFO [CompactionExecutor:8] 2015-01-26 13:32:51,818 CompactionTask.java 
> (line 138) Compacting 
> [SSTableReader(path='/tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-ka-2-Data.db'),
>  
> SSTableReader(path='/tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-ka-1-Data.db')]
>  INFO [CompactionExecutor:8] 2015-01-26 13:32:51,890 ColumnFamilyStore.java 
> (line 856) Enqueuing flush of compactions_in_progress: 212 (0%) on-heap, 20 
> (0%) off-heap
>  INFO [MemtableFlushWriter:8] 2015-01-26 13:32:51,892 Memtable.java (line 
> 326) Writing Memtable-compactions_in_progress@1155018639(0 serialized bytes, 
> 1 ops, 0%/0% of on/off-heap limit)
>  INFO [MemtableFlushWriter:8] 2015-01-26 13:32:51,896 Memtable.java (line 
> 360) Completed flushing 
> /tmp/cass_test/cassandra/TestCassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-2-Data.db
>  (42 bytes) for commitlog position ReplayPosition(segmentId=1422296630707, 
> position=430226)
> ERROR [CompactionExecutor:8] 2015-01-26 13:32:51,906 CassandraDaemon.java 
> (line 166) Exception in thread Thread[CompactionExecutor:8,1,RMI Runtime]
> java.lang.RuntimeException: Last written key 
> DecoratedKey(131206587314004820534098544948237170809, 
> 80010001000c62617463685f6d757461746500) >= current key 
> DecoratedKey(14775611966645399672119169777260659240, 
> 726f776b65793030385f31343232323937313537353835) writing into 
> /tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-tmp-ka-3-Data.db
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:172)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:196) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:110)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:177)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:235)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_40]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_40]
> at java.lang.Thread.run(Thread.java:724) [na:1.7.0_40]
> {quote}
> I don't think it's caused by CASSANDRA-8211, because it happens during the 
> first compaction that takes place between the first 2 SSTables to get flushed 
> from an initially empty column family.
> Also, I've only been able to reproduce it when using both *hsha* for the rpc 
> server and *offheap_objects* for memtable allocation. If I switch either to 
> sync or to offheap_buffers or heap_buffers then I cannot reproduce the 
> problem. Also under the same circumstances I'm pretty sure I've seen 
> incorrect data being returned to a client multiget_slice request befor

[jira] [Commented] (CASSANDRA-8719) Using thrift HSHA with offheap_objects appears to corrupt data

2015-02-10 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314587#comment-14314587
 ] 

Marcus Eriksson commented on CASSANDRA-8719:


I'll dig some more, but I suspect I will need [~xedin]'s or [~benedict]'s help here.

> Using thrift HSHA with offheap_objects appears to corrupt data
> --
>
> Key: CASSANDRA-8719
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8719
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Randy Fradin
>Assignee: Marcus Eriksson
> Fix For: 2.1.3
>
> Attachments: repro8719.sh
>
>
> Copying my comment from CASSANDRA-6285 to a new issue since that issue is 
> long closed and I'm not sure if they are related...
> I am getting this exception using Thrift HSHA in 2.1.0:
> {quote}
>  INFO [CompactionExecutor:8] 2015-01-26 13:32:51,818 CompactionTask.java 
> (line 138) Compacting 
> [SSTableReader(path='/tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-ka-2-Data.db'),
>  
> SSTableReader(path='/tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-ka-1-Data.db')]
>  INFO [CompactionExecutor:8] 2015-01-26 13:32:51,890 ColumnFamilyStore.java 
> (line 856) Enqueuing flush of compactions_in_progress: 212 (0%) on-heap, 20 
> (0%) off-heap
>  INFO [MemtableFlushWriter:8] 2015-01-26 13:32:51,892 Memtable.java (line 
> 326) Writing Memtable-compactions_in_progress@1155018639(0 serialized bytes, 
> 1 ops, 0%/0% of on/off-heap limit)
>  INFO [MemtableFlushWriter:8] 2015-01-26 13:32:51,896 Memtable.java (line 
> 360) Completed flushing 
> /tmp/cass_test/cassandra/TestCassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-2-Data.db
>  (42 bytes) for commitlog position ReplayPosition(segmentId=1422296630707, 
> position=430226)
> ERROR [CompactionExecutor:8] 2015-01-26 13:32:51,906 CassandraDaemon.java 
> (line 166) Exception in thread Thread[CompactionExecutor:8,1,RMI Runtime]
> java.lang.RuntimeException: Last written key 
> DecoratedKey(131206587314004820534098544948237170809, 
> 80010001000c62617463685f6d757461746500) >= current key 
> DecoratedKey(14775611966645399672119169777260659240, 
> 726f776b65793030385f31343232323937313537353835) writing into 
> /tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-tmp-ka-3-Data.db
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:172)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:196) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:110)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:177)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:235)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_40]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_40]
> at java.lang.Thread.run(Thread.java:724) [na:1.7.0_40]
> {quote}
> I don't think it's caused by CASSANDRA-8211, because it happens during the 
> first compaction that takes place between the first 2 SSTables to get flushed 
> from an initially empty column family.
> Also, I've only been able to reproduce it when using both *hsha* for the rpc 
> server and *offheap_objects* for memtable allocation. If I switch either to 
> sync or to offheap_buffers or heap_buffers then I cannot reproduce the 
> problem. Also under the same circumstances I'm pretty sure I've seen

[jira] [Commented] (CASSANDRA-8714) row-cache: use preloaded jemalloc w/ Unsafe

2015-02-10 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314595#comment-14314595
 ] 

Robert Stupp commented on CASSANDRA-8714:
-

Regarding the first point: it wasn't JNA - it was jemalloc itself. I don't really 
understand why, but using a self-compiled, recent version of jemalloc (3.6.0) 
works fine. I have updated the JNA ticket accordingly.

> row-cache: use preloaded jemalloc w/ Unsafe
> ---
>
> Key: CASSANDRA-8714
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8714
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.0
>
> Attachments: 8714.txt
>
>
> Using jemalloc via Java's {{Unsafe}} is a better alternative on Linux than 
> using jemalloc via JNA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8308) Windows: Commitlog access violations on unit tests

2015-02-10 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314597#comment-14314597
 ] 

Robert Stupp commented on CASSANDRA-8308:
-

[~JoshuaMcKenzie] branch LGTM - except that there's a change in 
{{test/data/corrupt-sstables/Keyspace1-Standard3-jb-1-Summary.db}}.
I like the idea of testing it from the branch on cassci :)

> Windows: Commitlog access violations on unit tests
> --
>
> Key: CASSANDRA-8308
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8308
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>Priority: Minor
>  Labels: Windows
> Fix For: 3.0
>
> Attachments: 8308-post-fix.txt, 8308_v1.txt, 8308_v2.txt, 8308_v3.txt
>
>
> We have four unit tests failing on trunk on Windows, all with 
> FileSystemException's related to the SchemaLoader:
> {noformat}
> [junit] Test 
> org.apache.cassandra.db.compaction.DateTieredCompactionStrategyTest FAILED
> [junit] Test org.apache.cassandra.cql3.ThriftCompatibilityTest FAILED
> [junit] Test org.apache.cassandra.io.sstable.SSTableRewriterTest FAILED
> [junit] Test org.apache.cassandra.repair.LocalSyncTaskTest FAILED
> {noformat}
> Example error:
> {noformat}
> [junit] Caused by: java.nio.file.FileSystemException: 
> build\test\cassandra\commitlog;0\CommitLog-5-1415908745965.log: The process 
> cannot access the file because it is being used by another process.
> [junit]
> [junit] at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
> [junit] at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
> [junit] at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
> [junit] at 
> sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
> [junit] at 
> sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
> [junit] at java.nio.file.Files.delete(Files.java:1079)
> [junit] at 
> org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:125)
> {noformat}
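
The underlying failure mode is easy to reproduce in isolation (a generic sketch, unrelated to the SchemaLoader specifics): on Windows, Files.delete() fails with exactly this FileSystemException while another handle to the file is still open, which is what switching CommitLogSegment from RandomAccessFile to nio is meant to address.

{code}
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public final class WindowsDeleteSketch
{
    public static void main(String[] args) throws Exception
    {
        Path log = Paths.get("CommitLog-demo.log");
        Files.write(log, new byte[]{ 1, 2, 3 });

        RandomAccessFile raf = new RandomAccessFile(log.toFile(), "rw");
        try
        {
            // On Windows this throws FileSystemException ("The process cannot access
            // the file because it is being used by another process") since the
            // RandomAccessFile handle above is still open.
            Files.delete(log);
        }
        finally
        {
            raf.close();
            Files.deleteIfExists(log);
        }
    }
}
{code}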



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8714) row-cache: use preloaded jemalloc w/ Unsafe

2015-02-10 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314610#comment-14314610
 ] 

Ariel Weisberg commented on CASSANDRA-8714:
---

OK, so it doesn't matter which access path you use to get to jemalloc? Unsafe 
vs JNA? Both ultimately go through JNI, if I understood Aleksey correctly?

> row-cache: use preloaded jemalloc w/ Unsafe
> ---
>
> Key: CASSANDRA-8714
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8714
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.0
>
> Attachments: 8714.txt
>
>
> Using jemalloc via Java's {{Unsafe}} is a better alternative on Linux than 
> using jemalloc via JNA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8714) row-cache: use preloaded jemalloc w/ Unsafe

2015-02-10 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314626#comment-14314626
 ] 

Robert Stupp commented on CASSANDRA-8714:
-

To clarify my last comment: the microbenchmark only failed for jemalloc-via-the-JNA-library 
(for whatever reason, only with the old jemalloc version).

Regarding performance: JNA's Native.malloc() is up to 3x faster than 
Unsafe.allocateMemory().
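
As a rough sketch of the two access paths being compared (assuming jemalloc is preloaded via LD_PRELOAD so it backs native malloc; the reflection lookup of Unsafe below is the usual workaround, not Cassandra's actual allocator code):

{code}
import com.sun.jna.Native;            // JNA path: Native.malloc()/Native.free()
import sun.misc.Unsafe;
import java.lang.reflect.Field;

public final class AllocPathsSketch
{
    private static final Unsafe UNSAFE = loadUnsafe();

    private static Unsafe loadUnsafe()
    {
        try
        {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            return (Unsafe) f.get(null);
        }
        catch (Exception e)
        {
            throw new AssertionError(e);
        }
    }

    public static void main(String[] args)
    {
        // Path 1: Unsafe.allocateMemory -> the JVM's native malloc -> preloaded jemalloc
        long p1 = UNSAFE.allocateMemory(1024);
        UNSAFE.freeMemory(p1);

        // Path 2: JNA's Native.malloc -> libc malloc symbol -> preloaded jemalloc
        long p2 = Native.malloc(1024);
        Native.free(p2);
    }
}
{code}

Either way the allocation ends up in the preloaded jemalloc; the difference under discussion is only the overhead of the call path itself.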

> row-cache: use preloaded jemalloc w/ Unsafe
> ---
>
> Key: CASSANDRA-8714
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8714
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.0
>
> Attachments: 8714.txt
>
>
> Using jemalloc via Java's {{Unsafe}} is a better alternative on Linux than 
> using jemalloc via JNA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8719) Using thrift HSHA with offheap_objects appears to corrupt data

2015-02-10 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314635#comment-14314635
 ] 

Benedict commented on CASSANDRA-8719:
-

Are we absolutely certain this *only* occurs with *both* options enabled? Is it 
possible to elicit the bug with only the sync server and offheap_objects? It could 
be a straight-up deterministic bug in the comparison implementation of 
offheap_objects.
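
For context, the error in the description comes from a guard of roughly this shape (a simplified sketch, not the real SSTableWriter.beforeAppend): the writer remembers the last appended key and rejects anything that does not sort strictly after it, so a corrupted or reused key buffer surfaces as "Last written key ... >= current key ...".

{code}
import java.math.BigInteger;

/** Simplified sketch of an append-order guard; keys are reduced to their token here. */
final class OrderedAppendSketch
{
    private BigInteger lastToken;

    void append(BigInteger token)
    {
        if (lastToken != null && lastToken.compareTo(token) >= 0)
            throw new RuntimeException("Last written key " + lastToken + " >= current key " + token);
        lastToken = token;
        // ... append the partition to the data file ...
    }

    public static void main(String[] args)
    {
        OrderedAppendSketch writer = new OrderedAppendSketch();
        writer.append(BigInteger.valueOf(10));
        writer.append(BigInteger.valueOf(5)); // throws: out-of-order key, as in the report
    }
}
{code}

Whether the bad ordering comes from the off-heap comparison itself or from a recycled request buffer is exactly what the question above is trying to narrow down.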

> Using thrift HSHA with offheap_objects appears to corrupt data
> --
>
> Key: CASSANDRA-8719
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8719
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Randy Fradin
>Assignee: Marcus Eriksson
> Fix For: 2.1.3
>
> Attachments: repro8719.sh
>
>
> Copying my comment from CASSANDRA-6285 to a new issue since that issue is 
> long closed and I'm not sure if they are related...
> I am getting this exception using Thrift HSHA in 2.1.0:
> {quote}
>  INFO [CompactionExecutor:8] 2015-01-26 13:32:51,818 CompactionTask.java 
> (line 138) Compacting 
> [SSTableReader(path='/tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-ka-2-Data.db'),
>  
> SSTableReader(path='/tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-ka-1-Data.db')]
>  INFO [CompactionExecutor:8] 2015-01-26 13:32:51,890 ColumnFamilyStore.java 
> (line 856) Enqueuing flush of compactions_in_progress: 212 (0%) on-heap, 20 
> (0%) off-heap
>  INFO [MemtableFlushWriter:8] 2015-01-26 13:32:51,892 Memtable.java (line 
> 326) Writing Memtable-compactions_in_progress@1155018639(0 serialized bytes, 
> 1 ops, 0%/0% of on/off-heap limit)
>  INFO [MemtableFlushWriter:8] 2015-01-26 13:32:51,896 Memtable.java (line 
> 360) Completed flushing 
> /tmp/cass_test/cassandra/TestCassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-2-Data.db
>  (42 bytes) for commitlog position ReplayPosition(segmentId=1422296630707, 
> position=430226)
> ERROR [CompactionExecutor:8] 2015-01-26 13:32:51,906 CassandraDaemon.java 
> (line 166) Exception in thread Thread[CompactionExecutor:8,1,RMI Runtime]
> java.lang.RuntimeException: Last written key 
> DecoratedKey(131206587314004820534098544948237170809, 
> 80010001000c62617463685f6d757461746500) >= current key 
> DecoratedKey(14775611966645399672119169777260659240, 
> 726f776b65793030385f31343232323937313537353835) writing into 
> /tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-tmp-ka-3-Data.db
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:172)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:196) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:110)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:177)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:235)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_40]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_40]
> at java.lang.Thread.run(Thread.java:724) [na:1.7.0_40]
> {quote}
> I don't think it's caused by CASSANDRA-8211, because it happens during the 
> first compaction that takes place between the first 2 SSTables to get flushed 
> from an initially empty column family.
> Also, I've only been able to reproduce it when using both *hsha* for the rpc 
> server and *offheap_objects* for memtable allocation. If I switch ei

cassandra git commit: Upgrade bundled python driver to 077b876fdd38

2015-02-10 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 88b226b11 -> f5380de5e


Upgrade bundled python driver to 077b876fdd38

Patch by Tyler Hobbs for CASSANDRA-8154


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f5380de5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f5380de5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f5380de5

Branch: refs/heads/cassandra-2.1
Commit: f5380de5e008eae6830f399a758dafb836c1ae2f
Parents: 88b226b
Author: Tyler Hobbs 
Authored: Tue Feb 10 12:55:01 2015 -0600
Committer: Tyler Hobbs 
Committed: Tue Feb 10 12:55:01 2015 -0600

--
 CHANGES.txt   |   2 ++
 lib/cassandra-driver-internal-only-2.1.3.post.zip | Bin 138474 -> 0 bytes
 lib/cassandra-driver-internal-only-2.1.4.post.zip | Bin 0 -> 140181 bytes
 3 files changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f5380de5/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c5cff48..92ee5d1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,7 @@
 2.1.4
  * Write partition size estimates into a system table (CASSANDRA-7688)
+ * cqlsh: Fix keys() and full() collection indexes in DESCRIBE output
+   (CASSANDRA-8154)
 
 
 2.1.3

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f5380de5/lib/cassandra-driver-internal-only-2.1.3.post.zip
--
diff --git a/lib/cassandra-driver-internal-only-2.1.3.post.zip 
b/lib/cassandra-driver-internal-only-2.1.3.post.zip
deleted file mode 100644
index 3cedfc4..000
Binary files a/lib/cassandra-driver-internal-only-2.1.3.post.zip and /dev/null 
differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f5380de5/lib/cassandra-driver-internal-only-2.1.4.post.zip
--
diff --git a/lib/cassandra-driver-internal-only-2.1.4.post.zip 
b/lib/cassandra-driver-internal-only-2.1.4.post.zip
new file mode 100644
index 000..56b7162
Binary files /dev/null and b/lib/cassandra-driver-internal-only-2.1.4.post.zip 
differ



[1/2] cassandra git commit: Upgrade bundled python driver to 077b876fdd38

2015-02-10 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 51057f4cf -> 0af528b94


Upgrade bundled python driver to 077b876fdd38

Patch by Tyler Hobbs for CASSANDRA-8154


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f5380de5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f5380de5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f5380de5

Branch: refs/heads/trunk
Commit: f5380de5e008eae6830f399a758dafb836c1ae2f
Parents: 88b226b
Author: Tyler Hobbs 
Authored: Tue Feb 10 12:55:01 2015 -0600
Committer: Tyler Hobbs 
Committed: Tue Feb 10 12:55:01 2015 -0600

--
 CHANGES.txt   |   2 ++
 lib/cassandra-driver-internal-only-2.1.3.post.zip | Bin 138474 -> 0 bytes
 lib/cassandra-driver-internal-only-2.1.4.post.zip | Bin 0 -> 140181 bytes
 3 files changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f5380de5/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c5cff48..92ee5d1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,7 @@
 2.1.4
  * Write partition size estimates into a system table (CASSANDRA-7688)
+ * cqlsh: Fix keys() and full() collection indexes in DESCRIBE output
+   (CASSANDRA-8154)
 
 
 2.1.3

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f5380de5/lib/cassandra-driver-internal-only-2.1.3.post.zip
--
diff --git a/lib/cassandra-driver-internal-only-2.1.3.post.zip 
b/lib/cassandra-driver-internal-only-2.1.3.post.zip
deleted file mode 100644
index 3cedfc4..000
Binary files a/lib/cassandra-driver-internal-only-2.1.3.post.zip and /dev/null 
differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f5380de5/lib/cassandra-driver-internal-only-2.1.4.post.zip
--
diff --git a/lib/cassandra-driver-internal-only-2.1.4.post.zip 
b/lib/cassandra-driver-internal-only-2.1.4.post.zip
new file mode 100644
index 000..56b7162
Binary files /dev/null and b/lib/cassandra-driver-internal-only-2.1.4.post.zip 
differ



[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-02-10 Thread tylerhobbs
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0af528b9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0af528b9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0af528b9

Branch: refs/heads/trunk
Commit: 0af528b942f8f061958f1736ac27da44ace4a723
Parents: 51057f4 f5380de
Author: Tyler Hobbs 
Authored: Tue Feb 10 12:56:35 2015 -0600
Committer: Tyler Hobbs 
Committed: Tue Feb 10 12:56:35 2015 -0600

--
 CHANGES.txt   |   2 ++
 lib/cassandra-driver-internal-only-2.1.3.post.zip | Bin 138474 -> 0 bytes
 lib/cassandra-driver-internal-only-2.1.4.post.zip | Bin 0 -> 140181 bytes
 3 files changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0af528b9/CHANGES.txt
--
diff --cc CHANGES.txt
index 9f1bda5,92ee5d1..c722e94
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,64 -1,7 +1,66 @@@
 +3.0
 + * Add role based access control (CASSANDRA-7653, 8650)
 + * Avoid accessing partitioner through StorageProxy (CASSANDRA-8244, 8268)
 + * Upgrade Metrics library and remove depricated metrics (CASSANDRA-5657)
 + * Serializing Row cache alternative, fully off heap (CASSANDRA-7438)
 + * Duplicate rows returned when in clause has repeated values (CASSANDRA-6707)
 + * Make CassandraException unchecked, extend RuntimeException (CASSANDRA-8560)
 + * Support direct buffer decompression for reads (CASSANDRA-8464)
 + * DirectByteBuffer compatible LZ4 methods (CASSANDRA-7039)
 + * Group sstables for anticompaction correctly (CASSANDRA-8578)
 + * Add ReadFailureException to native protocol, respond
 +   immediately when replicas encounter errors while handling
 +   a read request (CASSANDRA-7886)
 + * Switch CommitLogSegment from RandomAccessFile to nio (CASSANDRA-8308)
 + * Allow mixing token and partition key restrictions (CASSANDRA-7016)
 + * Support index key/value entries on map collections (CASSANDRA-8473)
 + * Modernize schema tables (CASSANDRA-8261)
 + * Support for user-defined aggregation functions (CASSANDRA-8053)
 + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419)
 + * Refactor SelectStatement, return IN results in natural order instead
 +   of IN value list order and ignore duplicate values in partition key IN 
restrictions (CASSANDRA-7981)
 + * Support UDTs, tuples, and collections in user-defined
 +   functions (CASSANDRA-7563)
 + * Fix aggregate fn results on empty selection, result column name,
 +   and cqlsh parsing (CASSANDRA-8229)
 + * Mark sstables as repaired after full repair (CASSANDRA-7586)
 + * Extend Descriptor to include a format value and refactor reader/writer
 +   APIs (CASSANDRA-7443)
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any partition key column (CASSANDRA-7855)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * Support user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 7781, 
7929,
 +   7924, 7812, 8063, 7813, 7708)
 + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416)
 + * Move sstable RandomAccessReader to nio2, which allows using the
 +   FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050)
 + * Remove CQL2 (CASSANDRA-5918)
 + * Add Thrift get_multi_slice call (CASSANDRA-6757)
 + * Optimize fetching multiple cells by name (CASSANDRA-6933)
 + * Allow compilation in java 8 (CASSANDRA-7028)
 + * Make incremental repair default (CASSANDRA-7250)
 + * Enable code coverage thru JaCoCo (CASSANDRA-7226)
 + * Switch external naming of 'column families' to 'tables' (CASSANDRA-4369) 
 + * Shorten SSTable path (CASSANDRA-6962)
 + * Use unsafe mutations for most unit tests (CASSANDRA-6969)
 + * Fix race condition during calculation of pending ranges (CASSANDRA-7390)
 + * Fail on very large batch sizes (CASSANDRA-8011)
 + * Improve concurrency of repair (CASSANDRA-6455, 8208)
 +
 +
  2.1.4
   * Write partition size estimates into a system table (CASSANDRA-7688)
+  * cqlsh: Fix keys() and full() collection indexes in DESCRIBE output
+(CASSANDRA-8154)
  
  
  2.1.3



[jira] [Comment Edited] (CASSANDRA-8154) desc table output shows key-only index ambiguously

2015-02-10 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314663#comment-14314663
 ] 

Tyler Hobbs edited comment on CASSANDRA-8154 at 2/10/15 6:58 PM:
-

Thanks, the bundled python driver has been upgraded to commit {{077b876fdd38}} 
(2.1.4.post) as commit {{f5380de}}.


was (Author: thobbs):
Thanks, the bundled python driver has been upgraded {{077b876fdd38}} as commit 
{{f5380de}}.

> desc table output shows key-only index ambiguously
> --
>
> Key: CASSANDRA-8154
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8154
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Tyler Hobbs
>Priority: Minor
>  Labels: cqlsh
> Fix For: 2.1.4
>
>
> When creating a secondary index on a map type, for keys, the DESC TABLE 
> output does not create correct DDL (it omits the keys part). So if someone 
> uses describe to recreate a schema they could end up with a values index 
> instead of a keys index.
> First, create a table and add an index:
> {noformat}
> CREATE TABLE test.foo (
> id1 text,
> id2 text,
> categories map<text, text>,
> PRIMARY KEY (id1, id2));
> create index on foo(keys(categories));
> {noformat}
> Now DESC TABLE and you'll see the incomplete index DDL:
> {noformat}
> CREATE TABLE test.foo (
> id1 text,
> id2 text,
> categories map<text, text>,
> PRIMARY KEY (id1, id2)
> ) WITH CLUSTERING ORDER BY (id2 ASC)
> ...snip..
> CREATE INDEX foo_categories_idx ON test.foo (categories);
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8154) desc table output shows key-only index ambiguously

2015-02-10 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs resolved CASSANDRA-8154.

   Resolution: Fixed
Fix Version/s: 2.1.4

Thanks, the bundled python driver has been upgraded to {{077b876fdd38}} as commit 
{{f5380de}}.

> desc table output shows key-only index ambiguously
> --
>
> Key: CASSANDRA-8154
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8154
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Tyler Hobbs
>Priority: Minor
>  Labels: cqlsh
> Fix For: 2.1.4
>
>
> When creating a secondary index on a map type, for keys, the DESC TABLE 
> output does not create correct DDL (it omits the keys part). So if someone 
> uses describe to recreate a schema they could end up with a values index 
> instead of a keys index.
> First, create a table and add an index:
> {noformat}
> CREATE TABLE test.foo (
> id1 text,
> id2 text,
> categories map<text, text>,
> PRIMARY KEY (id1, id2));
> create index on foo(keys(categories));
> {noformat}
> Now DESC TABLE and you'll see the incomplete index DDL:
> {noformat}
> CREATE TABLE test.foo (
> id1 text,
> id2 text,
> categories map<text, text>,
> PRIMARY KEY (id1, id2)
> ) WITH CLUSTERING ORDER BY (id2 ASC)
> ...snip..
> CREATE INDEX foo_categories_idx ON test.foo (categories);
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-5791) A nodetool command to validate all sstables in a node

2015-02-10 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-5791:
--
Attachment: cassandra-5791.patch.txt

- Implements bin/nodetool verify
- Implements bin/sstableverify
- Changes Digest.sha1 to Digest.adler32
- Fixes calculation of adler32 for compressed files
- Allows for walking all atoms with -e (extended) option flag

Will notify of errors, but does NOT currently mark the sstable as needing to be 
repaired. 
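
Roughly what per-chunk verification looks like with java.util.zip.Adler32 (a generic sketch; the chunk size and digest layout here are placeholders, not the actual sstable or compression metadata format):

{code}
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.zip.Adler32;

public final class Adler32VerifySketch
{
    /** Recomputes Adler-32 over fixed-size chunks and compares against expected checksums. */
    public static boolean verify(String dataFile, int chunkSize, long[] expected) throws IOException
    {
        DataInputStream in = new DataInputStream(new FileInputStream(dataFile));
        try
        {
            byte[] buf = new byte[chunkSize];
            Adler32 adler = new Adler32();
            for (long expectedChecksum : expected)
            {
                int read = in.read(buf);
                if (read <= 0)
                    return false;                 // file shorter than the digest claims
                adler.reset();
                adler.update(buf, 0, read);
                if (adler.getValue() != expectedChecksum)
                    return false;                 // notify of the error; do not mark for repair
            }
            return true;
        }
        finally
        {
            in.close();
        }
    }
}
{code}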

> A nodetool command to validate all sstables in a node
> -
>
> Key: CASSANDRA-5791
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5791
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: sankalp kohli
>Assignee: Jeff Jirsa
>Priority: Minor
> Attachments: cassandra-5791.patch.txt
>
>
> Currently there is no nodetool command to validate all sstables on disk. The 
> only way to do this is to run a repair and see if it succeeds. But we cannot 
> repair the system keyspace. 
> Also, we can run upgradesstables, but that rewrites all the sstables. 
> This command should check the hash of all sstables and return whether all 
> data is readable or not. This should NOT care about consistency. 
> The compressed sstables do not have a hash, so not sure how it will work there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-5791) A nodetool command to validate all sstables in a node

2015-02-10 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5791:
--
Reviewer: Branimir Lambov  (was: Jonathan Ellis)

[~blambov] to review

> A nodetool command to validate all sstables in a node
> -
>
> Key: CASSANDRA-5791
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5791
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: sankalp kohli
>Assignee: Jeff Jirsa
>Priority: Minor
> Attachments: cassandra-5791.patch.txt
>
>
> Currently there is no nodetool command to validate all sstables on disk. The 
> only way to do this is to run a repair and see if it succeeds. But we cannot 
> repair the system keyspace. 
> Also, we can run upgradesstables, but that rewrites all the sstables. 
> This command should check the hash of all sstables and return whether all 
> data is readable or not. This should NOT care about consistency. 
> The compressed sstables do not have a hash, so not sure how it will work there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8595) Emit timeouts per endpoint

2015-02-10 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314711#comment-14314711
 ] 

sankalp kohli commented on CASSANDRA-8595:
--

During reads, we block on a latch to count the number of responses. You need to 
make some code changes to figure out which endpoint did not reply. You also need 
to account for speculative retry. 
You can start from the SP.fetchRows method to see how reads happen.
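
A minimal sketch of the bookkeeping being suggested (a hypothetical class, not existing code): bump a per-endpoint counter for every replica that had not responded when the coordinator's latch timed out, and expose a snapshot that an MBean could publish.

{code}
import java.net.InetAddress;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

/** Hypothetical per-endpoint timeout counter kept on the coordinator. */
public final class EndpointTimeoutsSketch
{
    private static final ConcurrentHashMap<InetAddress, AtomicLong> TIMEOUTS =
            new ConcurrentHashMap<InetAddress, AtomicLong>();

    /** Call once per replica that did not reply before the request timed out. */
    public static void recordTimeout(InetAddress endpoint)
    {
        AtomicLong counter = TIMEOUTS.get(endpoint);
        if (counter == null)
        {
            AtomicLong fresh = new AtomicLong();
            AtomicLong existing = TIMEOUTS.putIfAbsent(endpoint, fresh);
            counter = existing == null ? fresh : existing;
        }
        counter.incrementAndGet();
    }

    /** Snapshot suitable for exposing through JMX. */
    public static Map<InetAddress, Long> snapshot()
    {
        Map<InetAddress, Long> copy = new HashMap<InetAddress, Long>();
        for (Map.Entry<InetAddress, AtomicLong> e : TIMEOUTS.entrySet())
            copy.put(e.getKey(), e.getValue().get());
        return copy;
    }
}
{code}

The tricky parts called out above (speculative retry, knowing which replica a missing response belongs to) live in how recordTimeout() gets called, not in the map itself.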

> Emit timeouts per endpoint
> --
>
> Key: CASSANDRA-8595
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8595
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Priority: Minor
>
> We currently emit number of timeouts experienced by a co-ordinator while 
> doing reads and writes. This does not tell us which replica or endpoint is 
> responsible for the timeouts. 
> We can keep a map of endpoint to number of timeouts which could be emitted 
> via JMX.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/2] cassandra git commit: ninja: fix cqlsh completion for full() collection indexes

2015-02-10 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 0af528b94 -> 187624b11


ninja: fix cqlsh completion for full() collection indexes


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ad91d416
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ad91d416
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ad91d416

Branch: refs/heads/trunk
Commit: ad91d41628e6e2ff89d552fc3630682cd1c29f3f
Parents: f5380de
Author: Tyler Hobbs 
Authored: Tue Feb 10 13:20:51 2015 -0600
Committer: Tyler Hobbs 
Committed: Tue Feb 10 13:20:51 2015 -0600

--
 pylib/cqlshlib/cql3handling.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad91d416/pylib/cqlshlib/cql3handling.py
--
diff --git a/pylib/cqlshlib/cql3handling.py b/pylib/cqlshlib/cql3handling.py
index 31f0de2..f089cd7 100644
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@ -1001,7 +1001,7 @@ syntax_rules += r'''
cf= "(" (
col= |
"keys(" col= ")" |
-   "fullCollection(" col= ")"
+   "full(" col= ")"
) ")"
( "USING"  ( "WITH" "OPTIONS" 
"="  )? )?
  ;



cassandra git commit: ninja: fix cqlsh completion for full() collection indexes

2015-02-10 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 f5380de5e -> ad91d4162


ninja: fix cqlsh completion for full() collection indexes


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ad91d416
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ad91d416
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ad91d416

Branch: refs/heads/cassandra-2.1
Commit: ad91d41628e6e2ff89d552fc3630682cd1c29f3f
Parents: f5380de
Author: Tyler Hobbs 
Authored: Tue Feb 10 13:20:51 2015 -0600
Committer: Tyler Hobbs 
Committed: Tue Feb 10 13:20:51 2015 -0600

--
 pylib/cqlshlib/cql3handling.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad91d416/pylib/cqlshlib/cql3handling.py
--
diff --git a/pylib/cqlshlib/cql3handling.py b/pylib/cqlshlib/cql3handling.py
index 31f0de2..f089cd7 100644
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@ -1001,7 +1001,7 @@ syntax_rules += r'''
cf= "(" (
col= |
"keys(" col= ")" |
-   "fullCollection(" col= ")"
+   "full(" col= ")"
) ")"
( "USING"  ( "WITH" "OPTIONS" 
"="  )? )?
  ;



[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-02-10 Thread tylerhobbs
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/187624b1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/187624b1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/187624b1

Branch: refs/heads/trunk
Commit: 187624b114711febc452ab70362014e986a1b24e
Parents: 0af528b ad91d41
Author: Tyler Hobbs 
Authored: Tue Feb 10 13:21:37 2015 -0600
Committer: Tyler Hobbs 
Committed: Tue Feb 10 13:21:37 2015 -0600

--
 pylib/cqlshlib/cql3handling.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/187624b1/pylib/cqlshlib/cql3handling.py
--



[jira] [Updated] (CASSANDRA-8719) Using thrift HSHA with offheap_objects appears to corrupt data

2015-02-10 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-8719:
---
Assignee: Benedict  (was: Marcus Eriksson)

Yes, hsha/offheap_objects seems to be the only combo this reproduces with (unless 
it is very non-deterministic with other setups).

> Using thrift HSHA with offheap_objects appears to corrupt data
> --
>
> Key: CASSANDRA-8719
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8719
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Randy Fradin
>Assignee: Benedict
> Fix For: 2.1.3
>
> Attachments: repro8719.sh
>
>
> Copying my comment from CASSANDRA-6285 to a new issue since that issue is 
> long closed and I'm not sure if they are related...
> I am getting this exception using Thrift HSHA in 2.1.0:
> {quote}
>  INFO [CompactionExecutor:8] 2015-01-26 13:32:51,818 CompactionTask.java 
> (line 138) Compacting 
> [SSTableReader(path='/tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-ka-2-Data.db'),
>  
> SSTableReader(path='/tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-ka-1-Data.db')]
>  INFO [CompactionExecutor:8] 2015-01-26 13:32:51,890 ColumnFamilyStore.java 
> (line 856) Enqueuing flush of compactions_in_progress: 212 (0%) on-heap, 20 
> (0%) off-heap
>  INFO [MemtableFlushWriter:8] 2015-01-26 13:32:51,892 Memtable.java (line 
> 326) Writing Memtable-compactions_in_progress@1155018639(0 serialized bytes, 
> 1 ops, 0%/0% of on/off-heap limit)
>  INFO [MemtableFlushWriter:8] 2015-01-26 13:32:51,896 Memtable.java (line 
> 360) Completed flushing 
> /tmp/cass_test/cassandra/TestCassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-2-Data.db
>  (42 bytes) for commitlog position ReplayPosition(segmentId=1422296630707, 
> position=430226)
> ERROR [CompactionExecutor:8] 2015-01-26 13:32:51,906 CassandraDaemon.java 
> (line 166) Exception in thread Thread[CompactionExecutor:8,1,RMI Runtime]
> java.lang.RuntimeException: Last written key 
> DecoratedKey(131206587314004820534098544948237170809, 
> 80010001000c62617463685f6d757461746500) >= current key 
> DecoratedKey(14775611966645399672119169777260659240, 
> 726f776b65793030385f31343232323937313537353835) writing into 
> /tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-tmp-ka-3-Data.db
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:172)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:196) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:110)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:177)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:235)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_40]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_40]
> at java.lang.Thread.run(Thread.java:724) [na:1.7.0_40]
> {quote}
> I don't think it's caused by CASSANDRA-8211, because it happens during the 
> first compaction that takes place between the first 2 SSTables to get flushed 
> from an initially empty column family.
> Also, I've only been able to reproduce it when using both *hsha* for the rpc 
> server and *offheap_objects* for memtable allocation. If I switch either to 
> sync or to offheap_buffers or heap_buffers then I cannot reproduce the 
> problem. Also under the same circums

[jira] [Commented] (CASSANDRA-8719) Using thrift HSHA with offheap_objects appears to corrupt data

2015-02-10 Thread Randy Fradin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314723#comment-14314723
 ] 

Randy Fradin commented on CASSANDRA-8719:
-

The steps used to reproduce definitely do not elicit the bug when sync is used. 
Purely anecdotal, but I've done a lot of testing with sync + offheap_objects 
for a couple of applications on 2.1.0 and never had this problem until we tried 
to change the cluster to hsha.

> Using thrift HSHA with offheap_objects appears to corrupt data
> --
>
> Key: CASSANDRA-8719
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8719
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Randy Fradin
>Assignee: Marcus Eriksson
> Fix For: 2.1.3
>
> Attachments: repro8719.sh
>
>
> Copying my comment from CASSANDRA-6285 to a new issue since that issue is 
> long closed and I'm not sure if they are related...
> I am getting this exception using Thrift HSHA in 2.1.0:
> {quote}
>  INFO [CompactionExecutor:8] 2015-01-26 13:32:51,818 CompactionTask.java 
> (line 138) Compacting 
> [SSTableReader(path='/tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-ka-2-Data.db'),
>  
> SSTableReader(path='/tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-ka-1-Data.db')]
>  INFO [CompactionExecutor:8] 2015-01-26 13:32:51,890 ColumnFamilyStore.java 
> (line 856) Enqueuing flush of compactions_in_progress: 212 (0%) on-heap, 20 
> (0%) off-heap
>  INFO [MemtableFlushWriter:8] 2015-01-26 13:32:51,892 Memtable.java (line 
> 326) Writing Memtable-compactions_in_progress@1155018639(0 serialized bytes, 
> 1 ops, 0%/0% of on/off-heap limit)
>  INFO [MemtableFlushWriter:8] 2015-01-26 13:32:51,896 Memtable.java (line 
> 360) Completed flushing 
> /tmp/cass_test/cassandra/TestCassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-2-Data.db
>  (42 bytes) for commitlog position ReplayPosition(segmentId=1422296630707, 
> position=430226)
> ERROR [CompactionExecutor:8] 2015-01-26 13:32:51,906 CassandraDaemon.java 
> (line 166) Exception in thread Thread[CompactionExecutor:8,1,RMI Runtime]
> java.lang.RuntimeException: Last written key 
> DecoratedKey(131206587314004820534098544948237170809, 
> 80010001000c62617463685f6d757461746500) >= current key 
> DecoratedKey(14775611966645399672119169777260659240, 
> 726f776b65793030385f31343232323937313537353835) writing into 
> /tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-tmp-ka-3-Data.db
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:172)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:196) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:110)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:177)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:235)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_40]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_40]
> at java.lang.Thread.run(Thread.java:724) [na:1.7.0_40]
> {quote}
> I don't think it's caused by CASSANDRA-8211, because it happens during the 
> first compaction that takes place between the first 2 SSTables to get flushed 
> from an initially empty column family.
> Also, I've only been able to reproduce it when using both *hsha* for the rpc 
> server and *offheap_objects* for mem

[jira] [Updated] (CASSANDRA-8768) Using a Cassandra 2.0 seed doesn't allow a new Cassandra 2.1 node to bootstrap

2015-02-10 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-8768:
-
Priority: Minor  (was: Major)

> Using a Cassandra 2.0 seed doesn't allow a new Cassandra 2.1 node to bootstrap
> --
>
> Key: CASSANDRA-8768
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8768
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Ron Kuris
>Assignee: Brandon Williams
>Priority: Minor
> Fix For: 2.1.3
>
> Attachments: gossip-with-2.0-patch.txt
>
>
> If you spin up a Cassandra 2.0 cluster with some seeds, and then attempt to 
> attach a Cassandra 2.1 node to it, you get the following message:
> {code}OutboundTcpConnection.java:429 - Handshaking version with 
> /10.24.0.10{code}
> Turning on debug, you get a few additional messages:
> {code}DEBUG [WRITE-/(ip)] MessagingService.java:789 - Setting version 7 for 
> /10.24.0.10
> DEBUG [WRITE-/(ip)] OutboundTcpConnection.java:369 - Target max version is 7; 
> will reconnect with that version{code}
> However, the code never reconnects. See the comments as to why.
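
To make the intended behaviour concrete, here is a minimal, self-contained sketch of version fallback on reconnect. All names are hypothetical and this is not the actual OutboundTcpConnection logic; it only illustrates the step the report says never happens.

{code}
public final class VersionNegotiation
{
    /** Pretend handshake: returns the highest version both sides accept. */
    static int handshake(int proposedVersion, int peerMaxVersion)
    {
        return Math.min(proposedVersion, peerMaxVersion);
    }

    public static void main(String[] args)
    {
        int localVersion = 8;   // e.g. a 2.1 node
        int peerMaxVersion = 7; // e.g. a 2.0 seed
        int agreed = handshake(localVersion, peerMaxVersion);
        if (agreed < localVersion)
        {
            // The step the report says never happens: drop the connection and
            // re-open it speaking the lower, mutually supported version.
            System.out.println("reconnecting with version " + agreed);
        }
    }
}
{code}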



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-5791) A nodetool command to validate all sstables in a node

2015-02-10 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14310025#comment-14310025
 ] 

Jeff Jirsa edited comment on CASSANDRA-5791 at 2/10/15 7:26 PM:


Duplicating my comment from 8703 here since it's a dupe and prone to closure:

-I've got a version at ... that follows the scrub read path and implements 
nodetool verify / sstableverify. This works for both compressed and 
uncompressed sstables, but requires walking the entire sstable and verifying 
each on-disk atom. It just isn't very fast (though it is thorough).
The faster method will be checking against the Digest.sha1 file (which actually 
contains an adler32 hash) and skipping the full iteration. I'll rebase and work 
that in, using the 'walk all atoms' approach above as an optional extended 
verify or similar, unless someone objects. I'm also going to rename the DIGEST 
sstable component to Digest.adler32, since it's definitely not sha1 anymore.- 
(New patch attached)


was (Author: jjirsa):
Duplicating my comment from 8703 here since its a dupe and prone to closure :

-I've got a version at 
https://github.com/jeffjirsa/cassandra/commits/cassandra-8703 that follows the 
scrub read path and implements nodetool verify / sstableverify. This works, for 
both compressed and uncompressed, but requires walking the entire sstable and 
verifies each on disk atom. This works, it just isn't very fast (though it is 
thorough).
The faster method will be checking against the Digest.sha1 file (which actually 
contains an adler32 hash), and skipping the full iteration. I'll rebase and 
work that in, using the 'walk all atoms' approach above as an optional extended 
verify or similar, unless someone objects. Also going to rename the DIGEST 
sstable component to Digest.adler32 since it's definitely not sha1 anymore.- 
(New patch attached)

> A nodetool command to validate all sstables in a node
> -
>
> Key: CASSANDRA-5791
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5791
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: sankalp kohli
>Assignee: Jeff Jirsa
>Priority: Minor
> Attachments: cassandra-5791.patch.txt
>
>
> Currently there is no nodetool command to validate all sstables on disk. The 
> only way to do this is to run a repair and see if it succeeds, but we cannot 
> repair the system keyspace. 
> We can also run upgradesstables, but that rewrites all the sstables. 
> This command should check the hash of all sstables and return whether all 
> data is readable or not. It should NOT care about consistency. 
> Compressed sstables do not have a hash, so it is not clear how this will work for them.
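
As a rough illustration of the digest-based fast path Jeff describes above: checksum the data file with Adler32 and compare it with the stored digest, skipping the per-atom walk. The file names, digest layout, and class names here are assumptions for illustration, not the actual verifier code.

{code}
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.zip.Adler32;

public final class SSTableDigestCheck
{
    // Assumes the digest file stores the checksum as a decimal string.
    public static boolean matchesDigest(Path dataFile, Path digestFile) throws IOException
    {
        Adler32 adler = new Adler32();
        byte[] buf = new byte[64 * 1024];
        try (InputStream in = Files.newInputStream(dataFile))
        {
            int n;
            while ((n = in.read(buf)) != -1)
                adler.update(buf, 0, n);
        }
        long expected = Long.parseLong(new String(Files.readAllBytes(digestFile)).trim());
        return adler.getValue() == expected;
    }

    public static void main(String[] args) throws IOException
    {
        Path data = Paths.get(args[0]);   // e.g. a -Data.db file
        Path digest = Paths.get(args[1]); // e.g. the matching digest file
        System.out.println(matchesDigest(data, digest) ? "OK" : "MISMATCH");
    }
}
{code}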



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-5791) A nodetool command to validate all sstables in a node

2015-02-10 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14310025#comment-14310025
 ] 

Jeff Jirsa edited comment on CASSANDRA-5791 at 2/10/15 7:25 PM:


Duplicating my comment from 8703 here since it's a dupe and prone to closure:

-I've got a version at 
https://github.com/jeffjirsa/cassandra/commits/cassandra-8703 that follows the 
scrub read path and implements nodetool verify / sstableverify. This works for 
both compressed and uncompressed sstables, but requires walking the entire 
sstable and verifying each on-disk atom. It just isn't very fast (though it is 
thorough).
The faster method will be checking against the Digest.sha1 file (which actually 
contains an adler32 hash) and skipping the full iteration. I'll rebase and work 
that in, using the 'walk all atoms' approach above as an optional extended 
verify (-e) or similar, unless someone objects. I'm also going to rename the 
DIGEST sstable component to Digest.adler32, since it's definitely not sha1 
anymore.- 
(New patch attached)


was (Author: jjirsa):
Duplicating my comment from 8703 here since its a dupe and prone to closure :

I've got a version at 
https://github.com/jeffjirsa/cassandra/commits/cassandra-8703 that follows the 
scrub read path and implements nodetool verify / sstableverify. This works, for 
both compressed and uncompressed, but requires walking the entire sstable and 
verifies each on disk atom. This works, it just isn't very fast (though it is 
thorough).
The faster method will be checking against the Digest.sha1 file (which actually 
contains an adler32 hash), and skipping the full iteration. I'll rebase and 
work that in, using the 'walk all atoms' approach above as an optional extended 
verify (-e) or similar, unless someone objects. Also going to rename the DIGEST 
sstable component to Digest.adler32 since it's definitely not sha1 anymore.

> A nodetool command to validate all sstables in a node
> -
>
> Key: CASSANDRA-5791
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5791
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: sankalp kohli
>Assignee: Jeff Jirsa
>Priority: Minor
> Attachments: cassandra-5791.patch.txt
>
>
> Currently there is no nodetool command to validate all sstables on disk. The 
> only way to do this is to run a repair and see if it succeeds, but we cannot 
> repair the system keyspace. 
> We can also run upgradesstables, but that rewrites all the sstables. 
> This command should check the hash of all sstables and return whether all 
> data is readable or not. It should NOT care about consistency. 
> Compressed sstables do not have a hash, so it is not clear how this will work for them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-5791) A nodetool command to validate all sstables in a node

2015-02-10 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14310025#comment-14310025
 ] 

Jeff Jirsa edited comment on CASSANDRA-5791 at 2/10/15 7:25 PM:


Duplicating my comment from 8703 here since it's a dupe and prone to closure:

-I've got a version at 
https://github.com/jeffjirsa/cassandra/commits/cassandra-8703 that follows the 
scrub read path and implements nodetool verify / sstableverify. This works for 
both compressed and uncompressed sstables, but requires walking the entire 
sstable and verifying each on-disk atom. It just isn't very fast (though it is 
thorough).
The faster method will be checking against the Digest.sha1 file (which actually 
contains an adler32 hash) and skipping the full iteration. I'll rebase and work 
that in, using the 'walk all atoms' approach above as an optional extended 
verify or similar, unless someone objects. I'm also going to rename the DIGEST 
sstable component to Digest.adler32, since it's definitely not sha1 anymore.- 
(New patch attached)


was (Author: jjirsa):
Duplicating my comment from 8703 here since its a dupe and prone to closure :

-I've got a version at 
https://github.com/jeffjirsa/cassandra/commits/cassandra-8703 that follows the 
scrub read path and implements nodetool verify / sstableverify. This works, for 
both compressed and uncompressed, but requires walking the entire sstable and 
verifies each on disk atom. This works, it just isn't very fast (though it is 
thorough).
The faster method will be checking against the Digest.sha1 file (which actually 
contains an adler32 hash), and skipping the full iteration. I'll rebase and 
work that in, using the 'walk all atoms' approach above as an optional extended 
verify (-e) or similar, unless someone objects. Also going to rename the DIGEST 
sstable component to Digest.adler32 since it's definitely not sha1 anymore.- 
(New patch attached)

> A nodetool command to validate all sstables in a node
> -
>
> Key: CASSANDRA-5791
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5791
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: sankalp kohli
>Assignee: Jeff Jirsa
>Priority: Minor
> Attachments: cassandra-5791.patch.txt
>
>
> Currently there is no nodetool command to validate all sstables on disk. The 
> only way to do this is to run a repair and see if it succeeds, but we cannot 
> repair the system keyspace. 
> We can also run upgradesstables, but that rewrites all the sstables. 
> This command should check the hash of all sstables and return whether all 
> data is readable or not. It should NOT care about consistency. 
> Compressed sstables do not have a hash, so it is not clear how this will work for them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8771) Remove commit log segment recycling

2015-02-10 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314731#comment-14314731
 ] 

Aleksey Yeschenko commented on CASSANDRA-8771:
--

Another +1 in favour of dumping it.

> Remove commit log segment recycling
> ---
>
> Key: CASSANDRA-8771
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8771
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Ariel Weisberg
>  Labels: commitlog
>
> For discussion
> Commit log segment recycling introduces a lot of complexity in the existing 
> code.
> CASSANDRA-8729 is a side effect of commit log segment recycling and 
> addressing it will require memory management code and thread coordination for 
> memory that the filesystem will no longer handle for us.
> There is some discussion about what storage configurations actually benefit 
> from preallocated files. Fast random access devices like SSDs, or 
> non-volatile write caches etc. make the distinction not that great. 
> I haven't measured any difference in throughput for bulk appending vs 
> overwriting although it was pointed out that I didn't test with concurrent IO 
> streams.
> What would it take to make removing commit log segment recycling acceptable? 
> Maybe a benchmark on a spinning disk that measures the performance impact of 
> preallocation when there are other IO streams?
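
A rough micro-benchmark of the question raised above (bulk-appending to a fresh file vs overwriting a preallocated one, with an fsync per chunk) could look like the sketch below. The sizes, file handling, and single-stream setup are assumptions; it deliberately ignores the concurrent-IO case called out in the description.

{code}
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public final class SegmentWriteBench
{
    static final int SEGMENT_SIZE = 8 * 1024 * 1024;
    static final int CHUNK = 64 * 1024;

    static long timedWrite(FileChannel ch) throws IOException
    {
        ByteBuffer buf = ByteBuffer.allocate(CHUNK);
        long start = System.nanoTime();
        for (int written = 0; written < SEGMENT_SIZE; written += CHUNK)
        {
            buf.clear();
            ch.write(buf);
            ch.force(false); // fsync per chunk, roughly what a commit log does
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws IOException
    {
        Path fresh = Files.createTempFile("bulk-append", ".log");
        Path recycled = Files.createTempFile("preallocated", ".log");

        // Preallocate one file up front, as segment recycling effectively does.
        try (RandomAccessFile raf = new RandomAccessFile(recycled.toFile(), "rw"))
        {
            raf.setLength(SEGMENT_SIZE);
        }

        try (FileChannel a = FileChannel.open(fresh, StandardOpenOption.WRITE);
             FileChannel b = FileChannel.open(recycled, StandardOpenOption.WRITE))
        {
            System.out.println("append to new file:     " + timedWrite(a) + " ns");
            System.out.println("overwrite preallocated: " + timedWrite(b) + " ns");
        }
    }
}
{code}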



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CASSANDRA-5791) A nodetool command to validate all sstables in a node

2015-02-10 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-5791:
--
Comment: was deleted

(was: Duplicating my comment from 8703 here since its a dupe and prone to 
closure :

-I've got a version at ... that follows the scrub read path and implements 
nodetool verify / sstableverify. This works, for both compressed and 
uncompressed, but requires walking the entire sstable and verifies each on disk 
atom. This works, it just isn't very fast (though it is thorough).
The faster method will be checking against the Digest.sha1 file (which actually 
contains an adler32 hash), and skipping the full iteration. I'll rebase and 
work that in, using the 'walk all atoms' approach above as an optional extended 
verify or similar, unless someone objects. Also going to rename the DIGEST 
sstable component to Digest.adler32 since it's definitely not sha1 anymore.- 
(New patch attached))

> A nodetool command to validate all sstables in a node
> -
>
> Key: CASSANDRA-5791
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5791
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: sankalp kohli
>Assignee: Jeff Jirsa
>Priority: Minor
> Attachments: cassandra-5791.patch.txt
>
>
> Currently there is no nodetool command to validate all sstables on disk. The 
> only way to do this is to run a repair and see if it succeeds, but we cannot 
> repair the system keyspace. 
> We can also run upgradesstables, but that rewrites all the sstables. 
> This command should check the hash of all sstables and return whether all 
> data is readable or not. It should NOT care about consistency. 
> Compressed sstables do not have a hash, so it is not clear how this will work for them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8308) Windows: Commitlog access violations on unit tests

2015-02-10 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314814#comment-14314814
 ] 

Joshua McKenzie commented on CASSANDRA-8308:


Reverted the test/data that snuck in - thanks for the heads up.

> Windows: Commitlog access violations on unit tests
> --
>
> Key: CASSANDRA-8308
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8308
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>Priority: Minor
>  Labels: Windows
> Fix For: 3.0
>
> Attachments: 8308-post-fix.txt, 8308_v1.txt, 8308_v2.txt, 8308_v3.txt
>
>
> We have four unit tests failing on trunk on Windows, all with 
> FileSystemExceptions related to the SchemaLoader:
> {noformat}
> [junit] Test 
> org.apache.cassandra.db.compaction.DateTieredCompactionStrategyTest FAILED
> [junit] Test org.apache.cassandra.cql3.ThriftCompatibilityTest FAILED
> [junit] Test org.apache.cassandra.io.sstable.SSTableRewriterTest FAILED
> [junit] Test org.apache.cassandra.repair.LocalSyncTaskTest FAILED
> {noformat}
> Example error:
> {noformat}
> [junit] Caused by: java.nio.file.FileSystemException: 
> build\test\cassandra\commitlog;0\CommitLog-5-1415908745965.log: The process 
> cannot access the file because it is being used by another process.
> [junit]
> [junit] at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
> [junit] at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
> [junit] at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
> [junit] at 
> sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
> [junit] at 
> sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
> [junit] at java.nio.file.Files.delete(Files.java:1079)
> [junit] at 
> org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:125)
> {noformat}
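
The usual workaround pattern for this kind of Windows sharing violation on delete is a retry loop that gives the JVM a chance to release lingering mapped buffers or unclosed channels. A minimal sketch (hypothetical helper, not the actual FileUtils code):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public final class WindowsDelete
{
    public static void deleteWithRetry(Path path, int attempts) throws IOException
    {
        IOException last = null;
        for (int i = 0; i < attempts; i++)
        {
            try
            {
                Files.deleteIfExists(path);
                return;
            }
            catch (IOException e)
            {
                last = e;
                // On Windows a mapped or still-open file cannot be deleted;
                // encourage cleanup and try again shortly.
                System.gc();
                try { Thread.sleep(100); }
                catch (InterruptedException ie) { Thread.currentThread().interrupt(); return; }
            }
        }
        if (last != null)
            throw last;
    }
}
{code}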



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Fix multicolumn relations with indexes on some clustering cols

2015-02-10 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 28c380c96 -> 9649594c7


Fix multicolumn relations with indexes on some clustering cols

Patch by Benjamin Lerer; reviewed by Tyler Hobbs for CASSANDRA-8275


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9649594c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9649594c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9649594c

Branch: refs/heads/cassandra-2.0
Commit: 9649594c761dbb72e58ddd71a10f0794378337ca
Parents: 28c380c
Author: blerer 
Authored: Tue Feb 10 15:07:02 2015 -0600
Committer: Tyler Hobbs 
Committed: Tue Feb 10 15:07:02 2015 -0600

--
 CHANGES.txt |   2 +
 .../cql3/statements/SelectStatement.java|  46 --
 .../cassandra/cql3/MultiColumnRelationTest.java | 122 
 .../cql3/SingleColumnRelationTest.java  | 145 +++
 4 files changed, 303 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9649594c/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fa9c77d..861730f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.0.13:
+ * Fix some multi-column relations with indexes on some clustering
+   columns (CASSANDRA-8275)
  * Fix IllegalArgumentException in dynamic snitch (CASSANDRA-8448)
  * Add support for UPDATE ... IF EXISTS (CASSANDRA-8610)
  * Fix reversal of list prepends (CASSANDRA-8733)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9649594c/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 19615b6..2fa57b9 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -35,7 +35,6 @@ import org.apache.cassandra.cql3.CFDefinition.Name.Kind;
 import org.apache.cassandra.transport.messages.ResultMessage;
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.db.*;
-import org.apache.cassandra.db.context.CounterContext;
 import org.apache.cassandra.db.filter.*;
 import org.apache.cassandra.db.marshal.*;
 import org.apache.cassandra.dht.*;
@@ -83,8 +82,10 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 /** Restrictions on non-primary key columns (i.e. secondary index 
restrictions) */
 private final Map metadataRestrictions = 
new HashMap();
 
-// The name of all restricted names not covered by the key or index filter
-private final Set restrictedNames = new 
HashSet();
+// The map keys are the name of the columns that must be converted into 
IndexExpressions if a secondary index need
+// to be used. The value specify if the column has an index that can be 
used to for the relation in which the column
+// is specified.
+private final Map restrictedNames = new 
HashMap();
 private Restriction.Slice sliceRestriction;
 
 private boolean isReversed;
@@ -1027,7 +1028,7 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 return Collections.emptyList();
 
 List expressions = new ArrayList();
-for (CFDefinition.Name name : restrictedNames)
+for (CFDefinition.Name name : restrictedNames.keySet())
 {
 Restriction restriction;
 switch (name.kind)
@@ -1068,12 +1069,21 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 }
 else
 {
-List values = restriction.values(variables);
+ByteBuffer value;
+if (restriction.isMultiColumn())
+{
+List values = restriction.values(variables);
+value = values.get(name.position);
+}
+else
+{
+List values = restriction.values(variables);
+if (values.size() != 1)
+throw new InvalidRequestException("IN restrictions are 
not supported on indexed columns");
 
-if (values.size() != 1)
-throw new InvalidRequestException("IN restrictions are not 
supported on indexed columns");
+value = values.get(0);
+}
 
-ByteBuffer value = values.get(0);
 validateIndexExpressionValue(value, name);
 expressions.add(new IndexExpression(name.name.key, 
IndexOper

[1/2] cassandra git commit: Fix multicolumn relations with indexes on some clustering cols

2015-02-10 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 ad91d4162 -> 07ffe1b12


Fix multicolumn relations with indexes on some clustering cols

Patch by Benjamin Lerer; reviewed by Tyler Hobbs for CASSANDRA-8275


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9649594c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9649594c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9649594c

Branch: refs/heads/cassandra-2.1
Commit: 9649594c761dbb72e58ddd71a10f0794378337ca
Parents: 28c380c
Author: blerer 
Authored: Tue Feb 10 15:07:02 2015 -0600
Committer: Tyler Hobbs 
Committed: Tue Feb 10 15:07:02 2015 -0600

--
 CHANGES.txt |   2 +
 .../cql3/statements/SelectStatement.java|  46 --
 .../cassandra/cql3/MultiColumnRelationTest.java | 122 
 .../cql3/SingleColumnRelationTest.java  | 145 +++
 4 files changed, 303 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9649594c/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fa9c77d..861730f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.0.13:
+ * Fix some multi-column relations with indexes on some clustering
+   columns (CASSANDRA-8275)
  * Fix IllegalArgumentException in dynamic snitch (CASSANDRA-8448)
  * Add support for UPDATE ... IF EXISTS (CASSANDRA-8610)
  * Fix reversal of list prepends (CASSANDRA-8733)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9649594c/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 19615b6..2fa57b9 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -35,7 +35,6 @@ import org.apache.cassandra.cql3.CFDefinition.Name.Kind;
 import org.apache.cassandra.transport.messages.ResultMessage;
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.db.*;
-import org.apache.cassandra.db.context.CounterContext;
 import org.apache.cassandra.db.filter.*;
 import org.apache.cassandra.db.marshal.*;
 import org.apache.cassandra.dht.*;
@@ -83,8 +82,10 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 /** Restrictions on non-primary key columns (i.e. secondary index 
restrictions) */
 private final Map metadataRestrictions = 
new HashMap();
 
-// The name of all restricted names not covered by the key or index filter
-private final Set restrictedNames = new 
HashSet();
+// The map keys are the name of the columns that must be converted into 
IndexExpressions if a secondary index need
+// to be used. The value specify if the column has an index that can be 
used to for the relation in which the column
+// is specified.
+private final Map restrictedNames = new 
HashMap();
 private Restriction.Slice sliceRestriction;
 
 private boolean isReversed;
@@ -1027,7 +1028,7 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 return Collections.emptyList();
 
 List expressions = new ArrayList();
-for (CFDefinition.Name name : restrictedNames)
+for (CFDefinition.Name name : restrictedNames.keySet())
 {
 Restriction restriction;
 switch (name.kind)
@@ -1068,12 +1069,21 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 }
 else
 {
-List values = restriction.values(variables);
+ByteBuffer value;
+if (restriction.isMultiColumn())
+{
+List values = restriction.values(variables);
+value = values.get(name.position);
+}
+else
+{
+List values = restriction.values(variables);
+if (values.size() != 1)
+throw new InvalidRequestException("IN restrictions are 
not supported on indexed columns");
 
-if (values.size() != 1)
-throw new InvalidRequestException("IN restrictions are not 
supported on indexed columns");
+value = values.get(0);
+}
 
-ByteBuffer value = values.get(0);
 validateIndexExpressionValue(value, name);
 expressions.add(new IndexExpression(name.name.key, 
IndexOper

[2/2] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-02-10 Thread tylerhobbs
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/07ffe1b1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/07ffe1b1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/07ffe1b1

Branch: refs/heads/cassandra-2.1
Commit: 07ffe1b12eb68cd51fdfc8715ffa7df14381df3a
Parents: ad91d41 9649594
Author: Tyler Hobbs 
Authored: Tue Feb 10 15:09:39 2015 -0600
Committer: Tyler Hobbs 
Committed: Tue Feb 10 15:09:39 2015 -0600

--
 CHANGES.txt |  3 +
 .../cql3/statements/SelectStatement.java| 58 ++--
 .../cassandra/cql3/MultiColumnRelationTest.java | 94 
 .../cql3/SingleColumnRelationTest.java  | 40 +
 4 files changed, 170 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/07ffe1b1/CHANGES.txt
--
diff --cc CHANGES.txt
index 92ee5d1,861730f..2113349
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,96 -1,6 +1,99 @@@
 -2.0.13:
 +2.1.4
 + * Write partition size estimates into a system table (CASSANDRA-7688)
 + * cqlsh: Fix keys() and full() collection indexes in DESCRIBE output
 +   (CASSANDRA-8154)
++Merged from 2.0:
+  * Fix some multi-column relations with indexes on some clustering
+columns (CASSANDRA-8275)
 +
 +
 +2.1.3
 + * Upgrade libthrift to 0.9.2 (CASSANDRA-8685)
 + * Don't use the shared ref in sstableloader (CASSANDRA-8704)
 + * Purge internal prepared statements if related tables or
 +   keyspaces are dropped (CASSANDRA-8693)
 + * (cqlsh) Handle unicode BOM at start of files (CASSANDRA-8638)
 + * Stop compactions before exiting offline tools (CASSANDRA-8623)
 + * Update tools/stress/README.txt to match current behaviour (CASSANDRA-7933)
 + * Fix schema from Thrift conversion with empty metadata (CASSANDRA-8695)
 + * Safer Resource Management (CASSANDRA-7705)
 + * Make sure we compact highly overlapping cold sstables with
 +   STCS (CASSANDRA-8635)
 + * rpc_interface and listen_interface generate NPE on startup when specified
 +   interface doesn't exist (CASSANDRA-8677)
 + * Fix ArrayIndexOutOfBoundsException in nodetool cfhistograms 
(CASSANDRA-8514)
 + * Switch from yammer metrics for nodetool cf/proxy histograms 
(CASSANDRA-8662)
 + * Make sure we don't add tmplink files to the compaction
 +   strategy (CASSANDRA-8580)
 + * (cqlsh) Handle maps with blob keys (CASSANDRA-8372)
 + * (cqlsh) Handle DynamicCompositeType schemas correctly (CASSANDRA-8563)
 + * Duplicate rows returned when in clause has repeated values (CASSANDRA-6706)
 + * Add tooling to detect hot partitions (CASSANDRA-7974)
 + * Fix cassandra-stress user-mode truncation of partition generation 
(CASSANDRA-8608)
 + * Only stream from unrepaired sstables during inc repair (CASSANDRA-8267)
 + * Don't allow starting multiple inc repairs on the same sstables 
(CASSANDRA-8316)
 + * Invalidate prepared BATCH statements when related tables
 +   or keyspaces are dropped (CASSANDRA-8652)
 + * Fix missing results in secondary index queries on collections
 +   with ALLOW FILTERING (CASSANDRA-8421)
 + * Expose EstimatedHistogram metrics for range slices (CASSANDRA-8627)
 + * (cqlsh) Escape clqshrc passwords properly (CASSANDRA-8618)
 + * Fix NPE when passing wrong argument in ALTER TABLE statement 
(CASSANDRA-8355)
 + * Pig: Refactor and deprecate CqlStorage (CASSANDRA-8599)
 + * Don't reuse the same cleanup strategy for all sstables (CASSANDRA-8537)
 + * Fix case-sensitivity of index name on CREATE and DROP INDEX
 +   statements (CASSANDRA-8365)
 + * Better detection/logging for corruption in compressed sstables 
(CASSANDRA-8192)
 + * Use the correct repairedAt value when closing writer (CASSANDRA-8570)
 + * (cqlsh) Handle a schema mismatch being detected on startup (CASSANDRA-8512)
 + * Properly calculate expected write size during compaction (CASSANDRA-8532)
 + * Invalidate affected prepared statements when a table's columns
 +   are altered (CASSANDRA-7910)
 + * Stress - user defined writes should populate sequentally (CASSANDRA-8524)
 + * Fix regression in SSTableRewriter causing some rows to become unreadable 
 +   during compaction (CASSANDRA-8429)
 + * Run major compactions for repaired/unrepaired in parallel (CASSANDRA-8510)
 + * (cqlsh) Fix compression options in DESCRIBE TABLE output when compression
 +   is disabled (CASSANDRA-8288)
 + * (cqlsh) Fix DESCRIBE output after keyspaces are altered (CASSANDRA-7623)
 + * Make sure we set lastCompactedKey correctly (CASSANDRA-8463)
 + * (cqlsh) Fix output of CONSISTENCY command (CASSANDRA-8507)
 + * (cqlsh) Fixed the handling of LIST statements (CASSANDRA-8370)
 + * Make sstablescrub check leveled manifest again (CASSANDRA-8432)
 + * Check f

[4/4] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-02-10 Thread tylerhobbs
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fbc38cd3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fbc38cd3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fbc38cd3

Branch: refs/heads/trunk
Commit: fbc38cd3a2dbda77aeca4a84765550fc571031ad
Parents: 187624b 07ffe1b
Author: Tyler Hobbs 
Authored: Tue Feb 10 15:10:49 2015 -0600
Committer: Tyler Hobbs 
Committed: Tue Feb 10 15:10:49 2015 -0600

--
 CHANGES.txt |   3 +
 .../ForwardingPrimaryKeyRestrictions.java   |   3 +-
 .../restrictions/MultiColumnRestriction.java|  63 ++--
 .../cql3/restrictions/Restriction.java  |   2 +
 .../cql3/restrictions/Restrictions.java |   2 +
 .../SingleColumnPrimaryKeyRestrictions.java |  26 -
 .../restrictions/SingleColumnRestriction.java   |   6 ++
 .../restrictions/SingleColumnRestrictions.java  |   3 +-
 .../restrictions/StatementRestrictions.java |  33 +-
 .../cql3/restrictions/TokenRestriction.java |   4 +-
 .../cql3/statements/SelectStatement.java|   5 +-
 .../cassandra/cql3/MultiColumnRelationTest.java | 100 ++-
 .../cql3/SingleColumnRelationTest.java  |  67 +
 13 files changed, 226 insertions(+), 91 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fbc38cd3/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fbc38cd3/src/java/org/apache/cassandra/cql3/restrictions/ForwardingPrimaryKeyRestrictions.java
--
diff --cc 
src/java/org/apache/cassandra/cql3/restrictions/ForwardingPrimaryKeyRestrictions.java
index 8a57292,000..5492c2b
mode 100644,00..100644
--- 
a/src/java/org/apache/cassandra/cql3/restrictions/ForwardingPrimaryKeyRestrictions.java
+++ 
b/src/java/org/apache/cassandra/cql3/restrictions/ForwardingPrimaryKeyRestrictions.java
@@@ -1,159 -1,0 +1,160 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.cql3.restrictions;
 +
 +import java.nio.ByteBuffer;
 +import java.util.Collection;
 +import java.util.List;
 +
 +import org.apache.cassandra.config.ColumnDefinition;
 +import org.apache.cassandra.cql3.QueryOptions;
 +import org.apache.cassandra.cql3.statements.Bound;
 +import org.apache.cassandra.db.IndexExpression;
 +import org.apache.cassandra.db.composites.Composite;
 +import org.apache.cassandra.db.index.SecondaryIndexManager;
 +import org.apache.cassandra.exceptions.InvalidRequestException;
 +
 +/**
 + * A PrimaryKeyRestrictions which forwards all its method calls 
to another 
 + * PrimaryKeyRestrictions. Subclasses should override one or 
more methods to modify the behavior 
 + * of the backing PrimaryKeyRestrictions as desired per the 
decorator pattern. 
 + */
 +abstract class ForwardingPrimaryKeyRestrictions implements 
PrimaryKeyRestrictions
 +{
 +/**
 + * Returns the backing delegate instance that methods are forwarded to.
 + * @return the backing delegate instance that methods are forwarded to.
 + */
 +protected abstract PrimaryKeyRestrictions getDelegate();
 +
 +@Override
 +public boolean usesFunction(String ksName, String functionName)
 +{
 +return getDelegate().usesFunction(ksName, functionName);
 +}
 +
 +@Override
 +public Collection getColumnDefs()
 +{
 +return getDelegate().getColumnDefs();
 +}
 +
 +@Override
 +public PrimaryKeyRestrictions mergeWith(Restriction restriction) throws 
InvalidRequestException
 +{
 +return getDelegate().mergeWith(restriction);
 +}
 +
 +@Override
 +public boolean hasSupportingIndex(SecondaryIndexManager 
secondaryIndexManager)
 +{
 +return getDelegate().hasSupportingIndex(secondaryIndexManager);
 +}
 +
 +@Override
 +public List values(QueryOptions options
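
The javadoc in the diff above describes ForwardingPrimaryKeyRestrictions as a decorator that forwards calls to a backing instance. A minimal, generic sketch of that forwarding pattern (made-up interface and class names, not the Cassandra types):

{code}
interface Restrictions
{
    boolean isEmpty();
}

abstract class ForwardingRestrictions implements Restrictions
{
    /** The backing delegate that all calls are forwarded to. */
    protected abstract Restrictions delegate();

    @Override
    public boolean isEmpty()
    {
        return delegate().isEmpty();
    }
}

// A subclass overrides only the behaviour it wants to change.
final class NonEmptyView extends ForwardingRestrictions
{
    private final Restrictions wrapped;

    NonEmptyView(Restrictions wrapped) { this.wrapped = wrapped; }

    @Override protected Restrictions delegate() { return wrapped; }

    @Override public boolean isEmpty() { return false; } // decorated behaviour
}
{code}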

[3/4] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-02-10 Thread tylerhobbs
http://git-wip-us.apache.org/repos/asf/cassandra/blob/fbc38cd3/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
--
diff --cc 
src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
index 598478c,000..403bf6d
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
+++ b/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
@@@ -1,599 -1,0 +1,576 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.cql3.restrictions;
 +
 +import java.nio.ByteBuffer;
 +import java.util.*;
 +
 +import com.google.common.base.Joiner;
 +
 +import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.config.ColumnDefinition;
 +import org.apache.cassandra.cql3.ColumnIdentifier;
 +import org.apache.cassandra.cql3.QueryOptions;
 +import org.apache.cassandra.cql3.Relation;
 +import org.apache.cassandra.cql3.VariableSpecifications;
 +import org.apache.cassandra.cql3.statements.Bound;
 +import org.apache.cassandra.db.ColumnFamilyStore;
 +import org.apache.cassandra.db.IndexExpression;
 +import org.apache.cassandra.db.Keyspace;
 +import org.apache.cassandra.db.RowPosition;
 +import org.apache.cassandra.db.composites.Composite;
 +import org.apache.cassandra.db.index.SecondaryIndexManager;
 +import org.apache.cassandra.dht.*;
 +import org.apache.cassandra.exceptions.InvalidRequestException;
 +import org.apache.cassandra.service.StorageService;
 +import org.apache.cassandra.utils.ByteBufferUtil;
 +
 +import static 
org.apache.cassandra.cql3.statements.RequestValidations.checkFalse;
 +import static 
org.apache.cassandra.cql3.statements.RequestValidations.checkNotNull;
 +import static 
org.apache.cassandra.cql3.statements.RequestValidations.checkTrue;
 +import static 
org.apache.cassandra.cql3.statements.RequestValidations.invalidRequest;
 +
 +/**
 + * The restrictions corresponding to the relations specified on the 
where-clause of CQL query.
 + */
 +public final class StatementRestrictions
 +{
 +/**
 + * The Column Family meta data
 + */
 +public final CFMetaData cfm;
 +
 +/**
 + * Restrictions on partitioning columns
 + */
 +private PrimaryKeyRestrictions partitionKeyRestrictions;
 +
 +/**
 + * Restrictions on clustering columns
 + */
 +private PrimaryKeyRestrictions clusteringColumnsRestrictions;
 +
 +/**
 + * Restriction on non-primary key columns (i.e. secondary index 
restrictions)
 + */
 +private SingleColumnRestrictions nonPrimaryKeyRestrictions;
 +
 +/**
 + * The restrictions used to build the index expressions
 + */
 +private final List indexRestrictions = new ArrayList<>();
 +
 +/**
 + * true if the secondary index need to be queried, 
false otherwise
 + */
 +private boolean usesSecondaryIndexing;
 +
 +/**
 + * Specify if the query will return a range of partition keys.
 + */
 +private boolean isKeyRange;
 +
 +/**
 + * Creates a new empty StatementRestrictions.
 + *
 + * @param cfm the column family meta data
 + * @return a new empty StatementRestrictions.
 + */
 +public static StatementRestrictions empty(CFMetaData cfm)
 +{
 +return new StatementRestrictions(cfm);
 +}
 +
 +private StatementRestrictions(CFMetaData cfm)
 +{
 +this.cfm = cfm;
 +this.partitionKeyRestrictions = new 
SingleColumnPrimaryKeyRestrictions(cfm.getKeyValidatorAsCType());
 +this.clusteringColumnsRestrictions = new 
SingleColumnPrimaryKeyRestrictions(cfm.comparator);
 +this.nonPrimaryKeyRestrictions = new SingleColumnRestrictions();
 +}
 +
 +public StatementRestrictions(CFMetaData cfm,
 +List whereClause,
 +VariableSpecifications boundNames,
 +boolean selectsOnlyStaticColumns,
 +boolean selectACollection) throws InvalidRequestException
 +{
 +this.cfm = cfm;
 +this.partitionKeyRestrictions = new 
SingleColumnPrimaryKeyRestrictions(cfm.getKeyValidatorAsCType());
 +this.clusterin

[2/4] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-02-10 Thread tylerhobbs
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/07ffe1b1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/07ffe1b1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/07ffe1b1

Branch: refs/heads/trunk
Commit: 07ffe1b12eb68cd51fdfc8715ffa7df14381df3a
Parents: ad91d41 9649594
Author: Tyler Hobbs 
Authored: Tue Feb 10 15:09:39 2015 -0600
Committer: Tyler Hobbs 
Committed: Tue Feb 10 15:09:39 2015 -0600

--
 CHANGES.txt |  3 +
 .../cql3/statements/SelectStatement.java| 58 ++--
 .../cassandra/cql3/MultiColumnRelationTest.java | 94 
 .../cql3/SingleColumnRelationTest.java  | 40 +
 4 files changed, 170 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/07ffe1b1/CHANGES.txt
--
diff --cc CHANGES.txt
index 92ee5d1,861730f..2113349
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,96 -1,6 +1,99 @@@
 -2.0.13:
 +2.1.4
 + * Write partition size estimates into a system table (CASSANDRA-7688)
 + * cqlsh: Fix keys() and full() collection indexes in DESCRIBE output
 +   (CASSANDRA-8154)
++Merged from 2.0:
+  * Fix some multi-column relations with indexes on some clustering
+columns (CASSANDRA-8275)
 +
 +
 +2.1.3
 + * Upgrade libthrift to 0.9.2 (CASSANDRA-8685)
 + * Don't use the shared ref in sstableloader (CASSANDRA-8704)
 + * Purge internal prepared statements if related tables or
 +   keyspaces are dropped (CASSANDRA-8693)
 + * (cqlsh) Handle unicode BOM at start of files (CASSANDRA-8638)
 + * Stop compactions before exiting offline tools (CASSANDRA-8623)
 + * Update tools/stress/README.txt to match current behaviour (CASSANDRA-7933)
 + * Fix schema from Thrift conversion with empty metadata (CASSANDRA-8695)
 + * Safer Resource Management (CASSANDRA-7705)
 + * Make sure we compact highly overlapping cold sstables with
 +   STCS (CASSANDRA-8635)
 + * rpc_interface and listen_interface generate NPE on startup when specified
 +   interface doesn't exist (CASSANDRA-8677)
 + * Fix ArrayIndexOutOfBoundsException in nodetool cfhistograms 
(CASSANDRA-8514)
 + * Switch from yammer metrics for nodetool cf/proxy histograms 
(CASSANDRA-8662)
 + * Make sure we don't add tmplink files to the compaction
 +   strategy (CASSANDRA-8580)
 + * (cqlsh) Handle maps with blob keys (CASSANDRA-8372)
 + * (cqlsh) Handle DynamicCompositeType schemas correctly (CASSANDRA-8563)
 + * Duplicate rows returned when in clause has repeated values (CASSANDRA-6706)
 + * Add tooling to detect hot partitions (CASSANDRA-7974)
 + * Fix cassandra-stress user-mode truncation of partition generation 
(CASSANDRA-8608)
 + * Only stream from unrepaired sstables during inc repair (CASSANDRA-8267)
 + * Don't allow starting multiple inc repairs on the same sstables 
(CASSANDRA-8316)
 + * Invalidate prepared BATCH statements when related tables
 +   or keyspaces are dropped (CASSANDRA-8652)
 + * Fix missing results in secondary index queries on collections
 +   with ALLOW FILTERING (CASSANDRA-8421)
 + * Expose EstimatedHistogram metrics for range slices (CASSANDRA-8627)
 + * (cqlsh) Escape clqshrc passwords properly (CASSANDRA-8618)
 + * Fix NPE when passing wrong argument in ALTER TABLE statement 
(CASSANDRA-8355)
 + * Pig: Refactor and deprecate CqlStorage (CASSANDRA-8599)
 + * Don't reuse the same cleanup strategy for all sstables (CASSANDRA-8537)
 + * Fix case-sensitivity of index name on CREATE and DROP INDEX
 +   statements (CASSANDRA-8365)
 + * Better detection/logging for corruption in compressed sstables 
(CASSANDRA-8192)
 + * Use the correct repairedAt value when closing writer (CASSANDRA-8570)
 + * (cqlsh) Handle a schema mismatch being detected on startup (CASSANDRA-8512)
 + * Properly calculate expected write size during compaction (CASSANDRA-8532)
 + * Invalidate affected prepared statements when a table's columns
 +   are altered (CASSANDRA-7910)
 + * Stress - user defined writes should populate sequentally (CASSANDRA-8524)
 + * Fix regression in SSTableRewriter causing some rows to become unreadable 
 +   during compaction (CASSANDRA-8429)
 + * Run major compactions for repaired/unrepaired in parallel (CASSANDRA-8510)
 + * (cqlsh) Fix compression options in DESCRIBE TABLE output when compression
 +   is disabled (CASSANDRA-8288)
 + * (cqlsh) Fix DESCRIBE output after keyspaces are altered (CASSANDRA-7623)
 + * Make sure we set lastCompactedKey correctly (CASSANDRA-8463)
 + * (cqlsh) Fix output of CONSISTENCY command (CASSANDRA-8507)
 + * (cqlsh) Fixed the handling of LIST statements (CASSANDRA-8370)
 + * Make sstablescrub check leveled manifest again (CASSANDRA-8432)
 + * Check first/las

[1/4] cassandra git commit: Fix multicolumn relations with indexes on some clustering cols

2015-02-10 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 187624b11 -> fbc38cd3a


Fix multicolumn relations with indexes on some clustering cols

Patch by Benjamin Lerer; reviewed by Tyler Hobbs for CASSANDRA-8275


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9649594c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9649594c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9649594c

Branch: refs/heads/trunk
Commit: 9649594c761dbb72e58ddd71a10f0794378337ca
Parents: 28c380c
Author: blerer 
Authored: Tue Feb 10 15:07:02 2015 -0600
Committer: Tyler Hobbs 
Committed: Tue Feb 10 15:07:02 2015 -0600

--
 CHANGES.txt |   2 +
 .../cql3/statements/SelectStatement.java|  46 --
 .../cassandra/cql3/MultiColumnRelationTest.java | 122 
 .../cql3/SingleColumnRelationTest.java  | 145 +++
 4 files changed, 303 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9649594c/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fa9c77d..861730f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.0.13:
+ * Fix some multi-column relations with indexes on some clustering
+   columns (CASSANDRA-8275)
  * Fix IllegalArgumentException in dynamic snitch (CASSANDRA-8448)
  * Add support for UPDATE ... IF EXISTS (CASSANDRA-8610)
  * Fix reversal of list prepends (CASSANDRA-8733)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9649594c/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 19615b6..2fa57b9 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -35,7 +35,6 @@ import org.apache.cassandra.cql3.CFDefinition.Name.Kind;
 import org.apache.cassandra.transport.messages.ResultMessage;
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.db.*;
-import org.apache.cassandra.db.context.CounterContext;
 import org.apache.cassandra.db.filter.*;
 import org.apache.cassandra.db.marshal.*;
 import org.apache.cassandra.dht.*;
@@ -83,8 +82,10 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 /** Restrictions on non-primary key columns (i.e. secondary index 
restrictions) */
 private final Map metadataRestrictions = 
new HashMap();
 
-// The name of all restricted names not covered by the key or index filter
-private final Set restrictedNames = new 
HashSet();
+// The map keys are the name of the columns that must be converted into 
IndexExpressions if a secondary index need
+// to be used. The value specify if the column has an index that can be 
used to for the relation in which the column
+// is specified.
+private final Map restrictedNames = new 
HashMap();
 private Restriction.Slice sliceRestriction;
 
 private boolean isReversed;
@@ -1027,7 +1028,7 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 return Collections.emptyList();
 
 List expressions = new ArrayList();
-for (CFDefinition.Name name : restrictedNames)
+for (CFDefinition.Name name : restrictedNames.keySet())
 {
 Restriction restriction;
 switch (name.kind)
@@ -1068,12 +1069,21 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 }
 else
 {
-List values = restriction.values(variables);
+ByteBuffer value;
+if (restriction.isMultiColumn())
+{
+List values = restriction.values(variables);
+value = values.get(name.position);
+}
+else
+{
+List values = restriction.values(variables);
+if (values.size() != 1)
+throw new InvalidRequestException("IN restrictions are 
not supported on indexed columns");
 
-if (values.size() != 1)
-throw new InvalidRequestException("IN restrictions are not 
supported on indexed columns");
+value = values.get(0);
+}
 
-ByteBuffer value = values.get(0);
 validateIndexExpressionValue(value, name);
 expressions.add(new IndexExpression(name.name.key, 
IndexOperator.EQ, value))

[jira] [Commented] (CASSANDRA-5839) Save repair data to system table

2015-02-10 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314959#comment-14314959
 ] 

sankalp kohli commented on CASSANDRA-5839:
--

Any update on this, [~krummas]?

> Save repair data to system table
> 
>
> Key: CASSANDRA-5839
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5839
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core, Tools
>Reporter: Jonathan Ellis
>Assignee: Marcus Eriksson
>Priority: Minor
> Fix For: 3.0
>
> Attachments: 0001-5839.patch, 2.0.4-5839-draft.patch, 
> 2.0.6-5839-v2.patch
>
>
> As noted in CASSANDRA-2405, it would be useful to store repair results, 
> particularly with sub-range repair available (CASSANDRA-5280).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

