[jira] [Updated] (CASSANDRA-7468) Add time-based execution to cassandra-stress

2014-06-28 Thread Matt Kennedy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Kennedy updated CASSANDRA-7468:


Attachment: trunk-7468.patch

> Add time-based execution to cassandra-stress
> --
>
> Key: CASSANDRA-7468
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7468
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Matt Kennedy
>Priority: Minor
> Attachments: trunk-7468.patch
>
>






[jira] [Created] (CASSANDRA-7468) Add time-based execution to cassandra-stress

2014-06-28 Thread Matt Kennedy (JIRA)
Matt Kennedy created CASSANDRA-7468:
---

 Summary: Add time-based execution to cassandra-stress
 Key: CASSANDRA-7468
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7468
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Matt Kennedy
Priority: Minor








[jira] [Commented] (CASSANDRA-7467) flood of "setting live ratio to maximum of 64" from repair

2014-06-28 Thread Jackson Chung (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14047034#comment-14047034
 ] 

Jackson Chung commented on CASSANDRA-7467:
--

Disabled the keyspace flush cron job; it does seem to help. However, during the 
repair those lines are still logged frequently (due to streaming from the 
repair?). But at least it is no longer in an infinite-loop style. 

> flood of "setting live ratio to maximum of 64" from repair
> --
>
> Key: CASSANDRA-7467
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7467
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jackson Chung
>
> We are on 2.0.8, running repair -pr -local. All nodes are i2.2x (60G RAM) 
> with an 8G heap, using Java 8 (key cache size is 1G).
> On occasion, when repair is run, the C* node that runs the repair, another 
> node in the cluster, or both, get into a bad state with system.log printing 
> "setting live ratio to maximum of 64" forever, every fraction of a second. 
> It usually happens when repairing one of the larger/wider CFs. 
>  WARN [MemoryMeter:1] 2014-06-28 09:13:24,540 Memtable.java (line 470) 
> setting live ratio to maximum of 64.0 instead of Infinity
>  INFO [MemoryMeter:1] 2014-06-28 09:13:24,540 Memtable.java (line 481) 
> CFS(Keyspace='RIQ', ColumnFamily='MemberTimeline') liveRatio is 64.0 
> (just-counted was 64.0).  calculation took 0ms for 0 cells
>   Table: MemberTimeline
>   SSTable count: 13
>   Space used (live), bytes: 17644018786
> ...
>   Compacted partition minimum bytes: 30
>   Compacted partition maximum bytes: 464228842
>   Compacted partition mean bytes: 54578
> To give an idea of how bad this is: the log file is set to rotate 50 times 
> at 21M each, and in less than 15 minutes all of the log files fill up with 
> just that line. C* is not responding and can't be killed normally; the only 
> way is kill -9.





[jira] [Commented] (CASSANDRA-7438) Serializing Row cache alternative (Fully off heap)

2014-06-28 Thread Vijay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14046976#comment-14046976
 ] 

Vijay commented on CASSANDRA-7438:
--

Pushed a new project to GitHub, https://github.com/Vijay2win/lruc, including 
benchmark utils. I can move the code into the Cassandra repo or use it as a 
library in Cassandra (working on it).

> Serializing Row cache alternative (Fully off heap)
> --
>
> Key: CASSANDRA-7438
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7438
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: Linux
>Reporter: Vijay
>Assignee: Vijay
>  Labels: performance
> Fix For: 3.0
>
>
> Currently SerializingCache is only partially off heap; keys are still stored 
> in the JVM heap as ByteBuffers.
> * There are higher GC costs for a reasonably big cache.
> * Some users have used the row cache efficiently in production for better 
> results, but this requires careful tuning.
> * Memory overhead for the cache entries is relatively high.
> So the proposal for this ticket is to move the LRU cache logic completely off 
> heap and use JNI to interact with the cache. We might want to ensure that the 
> new implementation matches the existing API (ICache), and that the 
> implementation has safe memory access, low memory overhead, and as few 
> memcpys as possible.
> We might also want to make this cache configurable.
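
To make the shape of that proposal concrete, here is a minimal sketch (my 
illustration, not code from lruc or from any attached patch) of an LRU cache 
whose key and value bytes live outside the JVM heap. Direct ByteBuffers stand 
in for the JNI/native allocations the ticket actually proposes, and the 
get/put shape only mirrors ICache rather than implementing it:

import java.nio.ByteBuffer;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public final class OffHeapLruSketch
{
    private final long capacityBytes;
    private long sizeBytes = 0;

    // accessOrder=true makes the map iterate least-recently-used entries first
    private final LinkedHashMap<ByteBuffer, ByteBuffer> map =
            new LinkedHashMap<>(16, 0.75f, true);

    public OffHeapLruSketch(long capacityBytes)
    {
        this.capacityBytes = capacityBytes;
    }

    // copy the payload into a direct buffer so the bytes live off the heap
    private static ByteBuffer copyOffHeap(byte[] bytes)
    {
        ByteBuffer direct = ByteBuffer.allocateDirect(bytes.length);
        direct.put(bytes);
        direct.flip();
        return direct;
    }

    public synchronized void put(byte[] key, byte[] value)
    {
        ByteBuffer k = copyOffHeap(key);
        ByteBuffer v = copyOffHeap(value);
        ByteBuffer old = map.put(k, v);
        if (old != null)
            sizeBytes -= old.capacity();   // key already present; value replaced
        else
            sizeBytes += k.capacity();
        sizeBytes += v.capacity();
        evictLru();
    }

    public synchronized byte[] get(byte[] key)
    {
        // ByteBuffer equality and hashCode are content-based, so a cheap heap
        // wrapper locates the direct-buffer key without an off-heap copy
        ByteBuffer v = map.get(ByteBuffer.wrap(key));
        if (v == null)
            return null;
        byte[] out = new byte[v.remaining()];
        v.duplicate().get(out);
        return out;
    }

    private void evictLru()
    {
        Iterator<Map.Entry<ByteBuffer, ByteBuffer>> it = map.entrySet().iterator();
        while (sizeBytes > capacityBytes && it.hasNext())
        {
            Map.Entry<ByteBuffer, ByteBuffer> lru = it.next();
            sizeBytes -= lru.getKey().capacity() + lru.getValue().capacity();
            it.remove();
        }
    }
}

Note the LinkedHashMap nodes and the buffer objects themselves are still 
on-heap here; that residual overhead is exactly what moving the whole LRU 
structure native, as the ticket proposes, would eliminate.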





[jira] [Comment Edited] (CASSANDRA-7438) Serializing Row cache alternative (Fully off heap)

2014-06-28 Thread Vijay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14046976#comment-14046976
 ] 

Vijay edited comment on CASSANDRA-7438 at 6/28/14 9:50 PM:
---

Pushed a new project to GitHub, https://github.com/Vijay2win/lruc, including 
benchmark utils. I can move the code into the Cassandra repo or use it as a 
library in Cassandra (working on it).


was (Author: vijay2...@yahoo.com):
Pushed a new project in github https://github.com/Vijay2win/lruc, including 
benchmark utils. I can move the code to Cassandra repo or use it as a library 
in Cassandra (Working on it).

> Serializing Row cache alternative (Fully off heap)
> --
>
> Key: CASSANDRA-7438
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7438
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: Linux
>Reporter: Vijay
>Assignee: Vijay
>  Labels: performance
> Fix For: 3.0
>
>
> Currently SerializingCache is only partially off heap; keys are still stored 
> in the JVM heap as ByteBuffers.
> * There are higher GC costs for a reasonably big cache.
> * Some users have used the row cache efficiently in production for better 
> results, but this requires careful tuning.
> * Memory overhead for the cache entries is relatively high.
> So the proposal for this ticket is to move the LRU cache logic completely off 
> heap and use JNI to interact with the cache. We might want to ensure that the 
> new implementation matches the existing API (ICache), and that the 
> implementation has safe memory access, low memory overhead, and as few 
> memcpys as possible.
> We might also want to make this cache configurable.





[jira] [Commented] (CASSANDRA-7467) flood of "setting live ratio to maximum of 64" from repair

2014-06-28 Thread Jackson Chung (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14046975#comment-14046975
 ] 

Jackson Chung commented on CASSANDRA-7467:
--

We have a cron job that periodically runs flush on the entire keyspace, so 
that could be it.



> flood of "setting live ratio to maximum of 64" from repair
> --
>
> Key: CASSANDRA-7467
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7467
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jackson Chung
>
> We are on 2.0.8, running repair -pr -local. All nodes are i2.2x (60G RAM) 
> with an 8G heap, using Java 8 (key cache size is 1G).
> On occasion, when repair is run, the C* node that runs the repair, another 
> node in the cluster, or both, get into a bad state with system.log printing 
> "setting live ratio to maximum of 64" forever, every fraction of a second. 
> It usually happens when repairing one of the larger/wider CFs. 
>  WARN [MemoryMeter:1] 2014-06-28 09:13:24,540 Memtable.java (line 470) 
> setting live ratio to maximum of 64.0 instead of Infinity
>  INFO [MemoryMeter:1] 2014-06-28 09:13:24,540 Memtable.java (line 481) 
> CFS(Keyspace='RIQ', ColumnFamily='MemberTimeline') liveRatio is 64.0 
> (just-counted was 64.0).  calculation took 0ms for 0 cells
>   Table: MemberTimeline
>   SSTable count: 13
>   Space used (live), bytes: 17644018786
> ...
>   Compacted partition minimum bytes: 30
>   Compacted partition maximum bytes: 464228842
>   Compacted partition mean bytes: 54578
> To give an idea of how bad this is: the log file is set to rotate 50 times 
> at 21M each, and in less than 15 minutes all of the log files fill up with 
> just that line. C* is not responding and can't be killed normally; the only 
> way is kill -9.





[jira] [Commented] (CASSANDRA-7467) flood of "setting live ratio to maximum of 64" from repair

2014-06-28 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14046958#comment-14046958
 ] 

Aleksey Yeschenko commented on CASSANDRA-7467:
--

Pretty sure this is a duplicate of CASSANDRA-7401. Now I'm really curious where 
those empty CFs are coming from, though.

> flood of "setting live ratio to maximum of 64" from repair
> --
>
> Key: CASSANDRA-7467
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7467
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jackson Chung
>
> We are on 2.0.8, running repair -pr -local. All nodes are i2.2x (60G RAM) 
> with an 8G heap, using Java 8 (key cache size is 1G).
> On occasion, when repair is run, the C* node that runs the repair, another 
> node in the cluster, or both, get into a bad state with system.log printing 
> "setting live ratio to maximum of 64" forever, every fraction of a second. 
> It usually happens when repairing one of the larger/wider CFs. 
>  WARN [MemoryMeter:1] 2014-06-28 09:13:24,540 Memtable.java (line 470) 
> setting live ratio to maximum of 64.0 instead of Infinity
>  INFO [MemoryMeter:1] 2014-06-28 09:13:24,540 Memtable.java (line 481) 
> CFS(Keyspace='RIQ', ColumnFamily='MemberTimeline') liveRatio is 64.0 
> (just-counted was 64.0).  calculation took 0ms for 0 cells
>   Table: MemberTimeline
>   SSTable count: 13
>   Space used (live), bytes: 17644018786
> ...
>   Compacted partition minimum bytes: 30
>   Compacted partition maximum bytes: 464228842
>   Compacted partition mean bytes: 54578
> To give an idea of how bad this is: the log file is set to rotate 50 times 
> at 21M each, and in less than 15 minutes all of the log files fill up with 
> just that line. C* is not responding and can't be killed normally; the only 
> way is kill -9.





[jira] [Commented] (CASSANDRA-7056) Add RAMP transactions

2014-06-28 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14046954#comment-14046954
 ] 

Aleksey Yeschenko commented on CASSANDRA-7056:
--

bq. I also want to point out that Aleksey Yeschenko's response to global 
indexes (CASSANDRA-6477) was: "I think we should leave it to people's client 
code. We don't need more complexity on our read/write paths when this can be 
done client-side."

That combined with "alternatively, we just don't invent new unnecessary 
concepts (batch reads) to justify hypothetical things we could do that nobody 
asked us for" would leave us with absolutely no approach to achieve consistent 
cross-partition consistent indexes through either client or server-side code.

I'm okay with global indexes now (and it doesn't matter, really, because they 
are happening either way), so this is a non-argument.

> Add RAMP transactions
> -
>
> Key: CASSANDRA-7056
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7056
> Project: Cassandra
>  Issue Type: Wish
>  Components: Core
>Reporter: Tupshin Harper
>Priority: Minor
>
> We should take a look at 
> [RAMP|http://www.bailis.org/blog/scalable-atomic-visibility-with-ramp-transactions/]
>  transactions, and figure out if they can be used to provide more efficient 
> LWT (or LWT-like) operations.





[jira] [Updated] (CASSANDRA-7311) Enable incremental backup on a per-keyspace level

2014-06-28 Thread pankaj mishra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pankaj mishra updated CASSANDRA-7311:
-

Attachment: (was: cassandra_incremental.patch)

> Enable incremental backup on a per-keyspace level
> -
>
> Key: CASSANDRA-7311
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7311
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Johnny Miller
>Priority: Minor
>  Labels: lhf
> Attachments: 7311-cqlsh-update.txt, table_incremental_7311.patch
>
>
> Currently incremental backups are defined globally; however, this is not 
> always appropriate or required for all keyspaces in a cluster.
> As this is quite expensive, it would be preferable to either specify the 
> keyspaces that need it or exclude the ones that don't.
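
For illustration only (the attached patches are not reproduced here, and the 
names below are my assumptions rather than anything from the patches), a 
per-keyspace gate on the flush path might look roughly like:

import java.util.Set;

// hypothetical sketch: replace the single global incremental_backups flag
// from cassandra.yaml with a per-keyspace decision
final class IncrementalBackupPolicy
{
    private final boolean globalDefault;        // cassandra.yaml: incremental_backups
    private final Set<String> enabledKeyspaces; // hypothetical per-keyspace overrides

    IncrementalBackupPolicy(boolean globalDefault, Set<String> enabledKeyspaces)
    {
        this.globalDefault = globalDefault;
        this.enabledKeyspaces = enabledKeyspaces;
    }

    // consulted where the flush path currently checks the global flag before
    // hard-linking a freshly flushed SSTable into the backups/ directory
    boolean shouldBackup(String keyspace)
    {
        return enabledKeyspaces.isEmpty() ? globalDefault
                                          : enabledKeyspaces.contains(keyspace);
    }
}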





[jira] [Updated] (CASSANDRA-7311) Enable incremental backup on a per-keyspace level

2014-06-28 Thread pankaj mishra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pankaj mishra updated CASSANDRA-7311:
-

Attachment: (was: cassandra_incremental_latest.patch)

> Enable incremental backup on a per-keyspace level
> -
>
> Key: CASSANDRA-7311
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7311
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Johnny Miller
>Priority: Minor
>  Labels: lhf
> Attachments: 7311-cqlsh-update.txt, table_incremental_7311.patch
>
>
> Currently incremental backups are defined globally; however, this is not 
> always appropriate or required for all keyspaces in a cluster.
> As this is quite expensive, it would be preferable to either specify the 
> keyspaces that need it or exclude the ones that don't.





[jira] [Commented] (CASSANDRA-7311) Enable incremental backup on a per-keyspace level

2014-06-28 Thread pankaj mishra (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14046935#comment-14046935
 ] 

pankaj mishra commented on CASSANDRA-7311:
--

Dave Brosius, can you please review the code?

> Enable incremental backup on a per-keyspace level
> -
>
> Key: CASSANDRA-7311
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7311
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Johnny Miller
>Priority: Minor
>  Labels: lhf
> Attachments: 7311-cqlsh-update.txt, table_incremental_7311.patch
>
>
> Currently incremental backups are defined globally; however, this is not 
> always appropriate or required for all keyspaces in a cluster.
> As this is quite expensive, it would be preferable to either specify the 
> keyspaces that need it or exclude the ones that don't.





[jira] [Updated] (CASSANDRA-7311) Enable incremental backup on a per-keyspace level

2014-06-28 Thread pankaj mishra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pankaj mishra updated CASSANDRA-7311:
-

Attachment: (was: cassandra.patch)

> Enable incremental backup on a per-keyspace level
> -
>
> Key: CASSANDRA-7311
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7311
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Johnny Miller
>Priority: Minor
>  Labels: lhf
> Attachments: 7311-cqlsh-update.txt, cassandra_incremental.patch, 
> cassandra_incremental_latest.patch, table_incremental_7311.patch
>
>
> Currently incremental backups are defined globally; however, this is not 
> always appropriate or required for all keyspaces in a cluster.
> As this is quite expensive, it would be preferable to either specify the 
> keyspaces that need it or exclude the ones that don't.





[jira] [Updated] (CASSANDRA-7311) Enable incremental backup on a per-keyspace level

2014-06-28 Thread pankaj mishra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pankaj mishra updated CASSANDRA-7311:
-

Attachment: (was: CASSANDRA-7311-formated.patch)

> Enable incremental backup on a per-keyspace level
> -
>
> Key: CASSANDRA-7311
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7311
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Johnny Miller
>Priority: Minor
>  Labels: lhf
> Attachments: 7311-cqlsh-update.txt, cassandra_incremental.patch, 
> cassandra_incremental_latest.patch, table_incremental_7311.patch
>
>
> Currently incremental backups are defined globally; however, this is not 
> always appropriate or required for all keyspaces in a cluster.
> As this is quite expensive, it would be preferable to either specify the 
> keyspaces that need it or exclude the ones that don't.





[jira] [Created] (CASSANDRA-7467) flood of "setting live ratio to maximum of 64" from repair

2014-06-28 Thread Jackson Chung (JIRA)
Jackson Chung created CASSANDRA-7467:


 Summary: flood of "setting live ratio to maximum of 64" from repair
 Key: CASSANDRA-7467
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7467
 Project: Cassandra
  Issue Type: Bug
Reporter: Jackson Chung


We are on 2.0.8, running repair -pr -local. All nodes are i2.2x (60G RAM) with 
an 8G heap, using Java 8 (key cache size is 1G).

On occasion, when repair is run, the C* node that runs the repair, another node 
in the cluster, or both, get into a bad state with system.log printing "setting 
live ratio to maximum of 64" forever, every fraction of a second. It usually 
happens when repairing one of the larger/wider CFs. 

 WARN [MemoryMeter:1] 2014-06-28 09:13:24,540 Memtable.java (line 470) setting 
live ratio to maximum of 64.0 instead of Infinity
 INFO [MemoryMeter:1] 2014-06-28 09:13:24,540 Memtable.java (line 481) 
CFS(Keyspace='RIQ', ColumnFamily='MemberTimeline') liveRatio is 64.0 
(just-counted was 64.0).  calculation took 0ms for 0 cells

Table: MemberTimeline
SSTable count: 13
Space used (live), bytes: 17644018786
...
Compacted partition minimum bytes: 30
Compacted partition maximum bytes: 464228842
Compacted partition mean bytes: 54578

To give an idea of how bad this is: the log file is set to rotate 50 times at 
21M each, and in less than 15 minutes all of the log files fill up with just 
that line. C* is not responding and can't be killed normally; the only way is 
kill -9.
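
For reference, the rotation the reporter describes matches the rolling-appender 
settings in 2.0's conf/log4j-server.properties; the values below are the stock 
defaults, not necessarily this cluster's exact configuration:

log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.maxFileSize=20MB
log4j.appender.R.maxBackupIndex=50
log4j.appender.R.File=/var/log/cassandra/system.log

Fifty 20MB files is roughly 1GB of history, so filling all of it in under 15 
minutes implies on the order of thousands of these log lines per second.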





[jira] [Commented] (CASSANDRA-7056) Add RAMP transactions

2014-06-28 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14046919#comment-14046919
 ] 

Benedict commented on CASSANDRA-7056:
-

A separate point to consider, as a follow-up: RAMP transactions may also permit 
us to provide consistent reads with fewer than QUORUM nodes involved. If we are 
performing a consistent read with a known transaction id, we only need to 
ensure the node has seen the totality of that transaction (i.e. any bulk insert 
has completed its first round, but not necessarily its second (commit) round) 
to be certain we have all of the data we need to answer the query correctly. So 
we can potentially answer QUORUM queries at the coordinator only. Note this 
only works if the coordinator has seen _exactly_ this transaction id, though 
some similar optimisations are likely possible to expand on that. 

I can envisage answering multiple queries with the following scheme:

1) start a transaction by asking a given coordinator for the latest 
transaction_id for the data we are interested in;
2) query all coordinators directly for the regions they own, providing them 
with that transaction_id.

All of those that were updated with the given transaction_id can potentially be 
answered with only the coordinator's involvement.

Further, to sketch a client-side API, I would suggest something like:

Txn txn = client.begin();                     // obtain/assign a transaction id
Future<ResultSet> rsf1 = txn.execute(stmt1);  // statements are queued on the txn
Future<ResultSet> rsf2 = txn.execute(stmt2);
...
txn.execute();                                // submit the transaction as a whole
ResultSet rs1 = rsf1.get();                   // per-statement results resolve here
...



> Add RAMP transactions
> -
>
> Key: CASSANDRA-7056
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7056
> Project: Cassandra
>  Issue Type: Wish
>  Components: Core
>Reporter: Tupshin Harper
>Priority: Minor
>
> We should take a look at 
> [RAMP|http://www.bailis.org/blog/scalable-atomic-visibility-with-ramp-transactions/]
>  transactions, and figure out if they can be used to provide more efficient 
> LWT (or LWT-like) operations.





[jira] [Commented] (CASSANDRA-7056) Add RAMP transactions

2014-06-28 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14046914#comment-14046914
 ] 

Benedict commented on CASSANDRA-7056:
-

I can say that, from the point of view of a prior target consumer, the addition 
of cross-cluster consistent reads would have been exciting for me.

On implementation details, thinking more from the point of view of my prior 
self, I would love to see this support streamed batches of arbitrary size. By 
which I mean I would have liked to start a write transaction, stream arbitrary 
amounts of data, and have it commit with complete isolation, or not at all. To 
this end, I'm leaning towards writing the data straight into the memtables, but 
maintaining a separate set of "uncommitted" transaction ids, which can be 
filtered out at read time. If a record is overwritten either before or after it 
is committed, it is moved to the read-buffer. I doubt this will be dramatically 
more complex, but the approach to implementation is fundamentally different. It 
seems to me that supporting transactions of arbitrary size is as powerful a win 
as consistent transactions.
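
To make the read-time filtering concrete, a minimal sketch (my illustration, 
with hypothetical names; not a proposed patch) of the bookkeeping might be:

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// hypothetical sketch of the approach described above: writes land in the
// memtables immediately, but the read path hides any cell written by a
// transaction whose second (commit) round has not completed
final class UncommittedTxnFilter
{
    // ids of transactions whose data is in the memtables but not yet committed
    private final Set<Long> uncommitted = ConcurrentHashMap.newKeySet();

    void begin(long txnId)  { uncommitted.add(txnId); }

    // once committed, the already-present cells simply become visible
    void commit(long txnId) { uncommitted.remove(txnId); }

    // applied per cell on the read path; an abort would additionally need to
    // purge the transaction's cells before forgetting its id
    boolean isVisible(long writerTxnId)
    {
        return !uncommitted.contains(writerTxnId);
    }
}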



> Add RAMP transactions
> -
>
> Key: CASSANDRA-7056
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7056
> Project: Cassandra
>  Issue Type: Wish
>  Components: Core
>Reporter: Tupshin Harper
>Priority: Minor
>
> We should take a look at 
> [RAMP|http://www.bailis.org/blog/scalable-atomic-visibility-with-ramp-transactions/]
>  transactions, and figure out if they can be used to provide more efficient 
> LWT (or LWT-like) operations.





[jira] [Commented] (CASSANDRA-7056) Add RAMP transactions

2014-06-28 Thread Patrick McFadin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14046904#comment-14046904
 ] 

Patrick McFadin commented on CASSANDRA-7056:


I don't get how cross-partition consistent reads are seen as an edge case; I 
feel this is the primary use case. I've run this by several users and seen 
real excitement.

> Add RAMP transactions
> -
>
> Key: CASSANDRA-7056
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7056
> Project: Cassandra
>  Issue Type: Wish
>  Components: Core
>Reporter: Tupshin Harper
>Priority: Minor
>
> We should take a look at 
> [RAMP|http://www.bailis.org/blog/scalable-atomic-visibility-with-ramp-transactions/]
>  transactions, and figure out if they can be used to provide more efficient 
> LWT (or LWT-like) operations.





[jira] [Commented] (CASSANDRA-7056) Add RAMP transactions

2014-06-28 Thread Tupshin Harper (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14046902#comment-14046902
 ] 

Tupshin Harper commented on CASSANDRA-7056:
---

I also want to point out that [~iamaleksey]'s response to global indexes 
(CASSANDRA-6477) was: "I think we should leave it to people's client code. We 
don't need more complexity on our read/write paths when this can be done 
client-side."

That combined with "alternatively, we just don't invent new unnecessary 
concepts (batch reads) to justify hypothetical things we could do that nobody 
asked us for" would leave us with absolutely no approach to achieve consistent 
cross-partition consistent indexes through either client or server-side code.

> Add RAMP transactions
> -
>
> Key: CASSANDRA-7056
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7056
> Project: Cassandra
>  Issue Type: Wish
>  Components: Core
>Reporter: Tupshin Harper
>Priority: Minor
>
> We should take a look at 
> [RAMP|http://www.bailis.org/blog/scalable-atomic-visibility-with-ramp-transactions/]
>  transactions, and figure out if they can be used to provide more efficient 
> LWT (or LWT-like) operations.


