[jira] [Commented] (CASSANDRA-18434) yaml should explain behavior from CASSANDRA-13325

2023-05-12 Thread Sudeep Rao (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17722234#comment-17722234
 ] 

Sudeep Rao commented on CASSANDRA-18434:


Hi Brandon, thanks for letting me know. Will make the change.

> yaml should explain behavior from CASSANDRA-13325
> -
>
> Key: CASSANDRA-18434
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18434
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Encryption
>Reporter: Brandon Williams
>Assignee: Sudeep Rao
>Priority: Normal
>  Labels: lhf
> Fix For: 4.0.x, 4.1.x, 5.x
>
>
> After CASSANDRA-13325, it is possible in the yaml to set the 'protocol' 
> option in a given encryption_options to the csv list of acceptable protocols, 
> as we do in [this 
> test|https://github.com/apache/cassandra-dtest/blob/trunk/cqlsh_tests/test_cqlsh.py#L185]
>  to limit it to TLSv1.2.  However, there is no way to know this from the yaml 
> today, so some comments/example would be helpful. 
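
For illustration, a sketch of the kind of comment/example the ticket asks for
(the surrounding client_encryption_options entries below are generic
placeholders, not taken from this ticket):

client_encryption_options:
  enabled: true
  keystore: conf/.keystore
  keystore_password: cassandra
  # Since CASSANDRA-13325, 'protocol' accepts either a single protocol name or
  # a comma-separated list of acceptable protocols. For example, to limit
  # clients to TLSv1.2 only:
  protocol: TLSv1.2
  # or to accept more than one protocol:
  # protocol: TLSv1.2,TLSv1.3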



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18434) yaml should explain behavior from CASSANDRA-13325

2023-05-12 Thread Sudeep Rao (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1777#comment-1777
 ] 

Sudeep Rao commented on CASSANDRA-18434:


Hi, I am a new contributor and picked this bug from the Low Hanging Fruit Jira 
list. I have pushed up a branch and opened a PR for review:

[Branch|https://github.com/sudeepraovm/cassandra/tree/CASSANDRA-18434]
[PR|https://github.com/apache/cassandra/pull/2330]

> yaml should explain behavior from CASSANDRA-13325
> -
>
> Key: CASSANDRA-18434
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18434
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Encryption
>Reporter: Brandon Williams
>Assignee: Sudeep Rao
>Priority: Normal
>  Labels: lhf
> Fix For: 4.0.x, 4.1.x, 5.x
>
>
> After CASSANDRA-13325, it is possible in the yaml to set the 'protocol' 
> option in a given encryption_options to the csv list of acceptable protocols, 
> as we do in [this 
> test|https://github.com/apache/cassandra-dtest/blob/trunk/cqlsh_tests/test_cqlsh.py#L185]
>  to limit it to TLSv1.2.  However, there is no way to know this from the yaml 
> today, so some comments/example would be helpful. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-18434) yaml should explain behavior from CASSANDRA-13325

2023-05-12 Thread Sudeep Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudeep Rao updated CASSANDRA-18434:
---
Test and Documentation Plan: NA
 Status: Patch Available  (was: Open)

> yaml should explain behavior from CASSANDRA-13325
> -
>
> Key: CASSANDRA-18434
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18434
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Encryption
>Reporter: Brandon Williams
>Assignee: Sudeep Rao
>Priority: Normal
>  Labels: lhf
> Fix For: 4.0.x, 4.1.x, 5.x
>
>
> After CASSANDRA-13325, it is possible in the yaml to set the 'protocol' 
> option in a given encryption_options to the csv list of acceptable protocols, 
> as we do in [this 
> test|https://github.com/apache/cassandra-dtest/blob/trunk/cqlsh_tests/test_cqlsh.py#L185]
>  to limit it to TLSv1.2.  However, there is no way to know this from the yaml 
> today, so some comments/example would be helpful. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-18434) yaml should explain behavior from CASSANDRA-13325

2023-05-12 Thread Sudeep Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudeep Rao reassigned CASSANDRA-18434:
--

Assignee: Sudeep Rao

> yaml should explain behavior from CASSANDRA-13325
> -
>
> Key: CASSANDRA-18434
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18434
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Encryption
>Reporter: Brandon Williams
>Assignee: Sudeep Rao
>Priority: Normal
>  Labels: lhf
> Fix For: 4.0.x, 4.1.x, 5.x
>
>
> After CASSANDRA-13325, it is possible in the yaml to set the 'protocol' 
> option in a given encryption_options to the csv list of acceptable protocols, 
> as we do in [this 
> test|https://github.com/apache/cassandra-dtest/blob/trunk/cqlsh_tests/test_cqlsh.py#L185]
>  to limit it to TLSv1.2.  However, there is no way to know this from the yaml 
> today, so some comments/example would be helpful. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-5682) When the Cassandra delete keys in secondary Index?

2013-12-30 Thread Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859121#comment-13859121
 ] 

Rao commented on CASSANDRA-5682:


Question: we are seeing performance issues with some queries (only with 
specific inputs). The first query below takes around 10 seconds, while the 
second comes back instantly. The column routeoffer has a secondary index. How 
can we check whether we have the issue described in this ticket, or could it 
be something else? We tried repair and rebuild_index, but that did not fix it.

cqlsh:topology> SELECT count(*) FROM ManagedResource WHERE routeoffer='JMETER' 
ALLOW FILTERING;

 count
-------
   137

cqlsh:topology> SELECT count(*) FROM ManagedResource WHERE routeoffer='DEFAULT' 
ALLOW FILTERING;

 count
-------
   161
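
One way to see where the time goes on the slow query (a general cqlsh
technique, not something taken from this ticket) is to enable tracing and run
the query again; the trace printed after the result shows which replicas were
contacted and how much work each read did:

cqlsh:topology> TRACING ON;
cqlsh:topology> SELECT count(*) FROM ManagedResource WHERE routeoffer='JMETER' 
ALLOW FILTERING;
cqlsh:topology> TRACING OFF;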

 When the Cassandra delete keys in secondary Index?
 --

 Key: CASSANDRA-5682
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5682
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 2.0.1
 Environment: normal x86 PC (i3 CPU + 4GB ram) + Ubuntu 12.04
Reporter: YounwooKim
Priority: Minor

 How can I reduce the size of a secondary index?
 I deleted many keys and tried flush, compact, cleanup, and rebuild_index 
 using nodetool, but I still can't reduce the size of the secondary index. 
 (The size of the table itself, keyed by the primary key, is of course 
 reduced.)
 So I looked for hints in the Cassandra source code, and my guess at how 
 secondary index deletion works is:
 1) When I request deletion of a key and the key is in an sstable (not in the 
 memtable), Cassandra does not insert a tombstone into the secondary index 
 sstable, unlike the base table.
 (From the AbstractSimpleColumnSecondaryIndex.delete() function.)
 2) Only after the secondary index is scanned is the tombstone created in the 
 secondary index.
 (From the KeysSearcher.getIndexedIterator() function, which is called by the 
 index scan verb.)
 3) The cleanup command in nodetool is used to delete out-of-range keys; it 
 does not care about deleted keys.
 (From the CompactionManager.doCleanupCompaction() function.)
 After this, I scan the deleted keys using a WHERE clause, and then I can 
 reduce the size of the secondary index. I think this is the only way to 
 reduce it.
 Is this a correct conclusion? I could not find related articles or other 
 methods.
 I think Cassandra needs a compaction function for the secondary index.
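
As a minimal sketch of that last step (the keyspace, table, and column names
below are hypothetical, not taken from the report), the read that finally lets
the index shrink is simply a query through the indexed column:

cqlsh> SELECT * FROM ks.tbl WHERE indexed_col = 'value_of_deleted_rows';
-- per the description above, it is this index scan that creates the tombstones
-- in the secondary index, after which its size can be reduced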



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-5599) Intermittently, CQL SELECT with WHERE on secondary indexed field value returns null when there are rows

2013-07-05 Thread Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701032#comment-13701032
 ] 

Rao commented on CASSANDRA-5599:


We have 2 DCs and 2 nodes per DC. Here's the keyspace configuration:

CREATE KEYSPACE grd WITH replication = {
  'class': 'NetworkTopologyStrategy',
  'HYWRCA02': '2',
  'CHRLNCUN': '2'
};

All these queries are executed on node1 in DC HYWRCA02. As you can see, we only 
see the results when the consistency level is set to ALL.

After repair on node1 (dc: HYWRCA02), it's the same result.
After repair on node2 (dc: HYWRCA02), it's the same result.
After repair on node1 (dc: CHRLNCUN), it's the same result.
After repair on node2 (dc: CHRLNCUN), it's the same result.

Let me know if you need any more info.

cqlsh> consistency one;
Consistency level set to ONE.
cqlsh> select count(*) from grd.route where 
serviceidentifier='com.att.scld.GRMServerTestService'
   ... ;

 count
---
 0

cqlsh> consistency local_quorum;
Consistency level set to LOCAL_QUORUM.
cqlsh> select count(*) from grd.route where 
serviceidentifier='com.att.scld.GRMServerTestService';

 count
---
 0
 
cqlsh> consistency all;
Consistency level set to ALL.
cqlsh> select count(*) from grd.route where 
serviceidentifier='com.att.scld.GRMServerTestService' ;

 count
---
   158

 Intermittently, CQL SELECT  with WHERE on secondary indexed field value 
 returns null when there are rows
 

 Key: CASSANDRA-5599
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5599
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4, 1.2.5
 Environment: x86_64 / RedHat Enterprise Linux 4.x
Reporter: Ravi Basawa
Priority: Minor

 Intermittently, a CQL SELECT with WHERE on a secondary indexed field value 
 returns null when there are rows.
 As it happens intermittently, it is difficult to replicate. To resolve it we 
 have had to recreate the index; using nodetool to reindex did not help 
 either.
 We would create a table, create a secondary index on a field of that table, 
 and import data; then, when we select rows from that table with WHERE on 
 said field, which should return results, we intermittently get null back. 
 Sometimes it works, sometimes not.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5599) Intermittently, CQL SELECT with WHERE on secondary indexed field value returns null when there are rows

2013-07-05 Thread Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701055#comment-13701055
 ] 

Rao commented on CASSANDRA-5599:


I could not reproduce it either. My guess is that it occurs when we keep 
deleting and recreating the same content over a period of time.

 Intermittently, CQL SELECT  with WHERE on secondary indexed field value 
 returns null when there are rows
 

 Key: CASSANDRA-5599
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5599
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4, 1.2.5
 Environment: x86_64 / RedHat Enterprise Linux 4.x
Reporter: Ravi Basawa
Priority: Minor

 Intermittently, a CQL SELECT with WHERE on a secondary indexed field value 
 returns null when there are rows.
 As it happens intermittently, it is difficult to replicate. To resolve it we 
 have had to recreate the index; using nodetool to reindex did not help 
 either.
 We would create a table, create a secondary index on a field of that table, 
 and import data; then, when we select rows from that table with WHERE on 
 said field, which should return results, we intermittently get null back. 
 Sometimes it works, sometimes not.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5686) During node repair operation, Cassandra throws exception on one of our nodes: failed to uncompress the chunk: FAILED_TO_UNCOMPRESS

2013-06-21 Thread Rao (JIRA)
Rao created CASSANDRA-5686:
--

 Summary: During node repair operation, Cassandra throws exception 
on one of our nodes: failed to uncompress the chunk: FAILED_TO_UNCOMPRESS 
 Key: CASSANDRA-5686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5686
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.5
Reporter: Rao


 INFO [AntiEntropyStage:1] 2013-06-20 03:08:03,876 AntiEntropyService.java 
(line 213) [repair #4ae2a970-d991-11e2-bd05-094c1105e54d] Received merkle tree 
for tags from /135.163.214.175
ERROR [Thread-40] 2013-06-20 03:08:03,903 CassandraDaemon.java (line 175) 
Exception in thread Thread[Thread-40,5,main]
java.lang.RuntimeException: java.io.IOException: failed to uncompress the 
chunk: FAILED_TO_UNCOMPRESS(5)
at 
org.apache.cassandra.service.AntiEntropyService$Validator$ValidatorSerializer.deserialize(AntiEntropyService.java:438)
at 
org.apache.cassandra.service.AntiEntropyService$Validator$ValidatorSerializer.deserialize(AntiEntropyService.java:421)
at org.apache.cassandra.net.MessageIn.read(MessageIn.java:94)
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:203)
at 
org.apache.cassandra.net.IncomingTcpConnection.handleModernVersion(IncomingTcpConnection.java:135)
at 
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:82)
Caused by: java.io.IOException: failed to uncompress the chunk: 
FAILED_TO_UNCOMPRESS(5)
at 
org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:361)
at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:383)
at java.io.DataInputStream.readInt(DataInputStream.java:387)
at 
org.apache.cassandra.utils.MerkleTree$Leaf$LeafSerializer.deserialize(MerkleTree.java:793)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:920)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:713)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:714)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:714)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:714)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:714)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:714)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:713)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:714)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:714)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 

[jira] [Updated] (CASSANDRA-5686) During node repair operation, Cassandra throws exception on one of our nodes: failed to uncompress the chunk: FAILED_TO_UNCOMPRESS

2013-06-21 Thread Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rao updated CASSANDRA-5686:
---

Description: 


 INFO [AntiEntropyStage:1] 2013-06-20 03:08:03,876 AntiEntropyService.java 
(line 213) [repair #4ae2a970-d991-11e2-bd05-094c1105e54d] Received merkle tree 
for tags from /135.163.214.175
ERROR [Thread-40] 2013-06-20 03:08:03,903 CassandraDaemon.java (line 175) 
Exception in thread Thread[Thread-40,5,main]
java.lang.RuntimeException: java.io.IOException: failed to uncompress the 
chunk: FAILED_TO_UNCOMPRESS(5)
at 
org.apache.cassandra.service.AntiEntropyService$Validator$ValidatorSerializer.deserialize(AntiEntropyService.java:438)
at 
org.apache.cassandra.service.AntiEntropyService$Validator$ValidatorSerializer.deserialize(AntiEntropyService.java:421)
at org.apache.cassandra.net.MessageIn.read(MessageIn.java:94)
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:203)
at 
org.apache.cassandra.net.IncomingTcpConnection.handleModernVersion(IncomingTcpConnection.java:135)
at 
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:82)
Caused by: java.io.IOException: failed to uncompress the chunk: 
FAILED_TO_UNCOMPRESS(5)
at 
org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:361)
at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:383)
at java.io.DataInputStream.readInt(DataInputStream.java:387)
at 
org.apache.cassandra.utils.MerkleTree$Leaf$LeafSerializer.deserialize(MerkleTree.java:793)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:920)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:713)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:714)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:714)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:714)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:714)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:714)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:713)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:714)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:714)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:713)
at 

[jira] [Updated] (CASSANDRA-5686) During node repair operation, Cassandra throws exception on one of our nodes: failed to uncompress the chunk: FAILED_TO_UNCOMPRESS

2013-06-21 Thread Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rao updated CASSANDRA-5686:
---

Description: 
During node repair operation, Cassandra throws exception on one of our nodes: 
failed to uncompress the chunk: FAILED_TO_UNCOMPRESS 


 INFO [AntiEntropyStage:1] 2013-06-20 03:08:03,876 AntiEntropyService.java 
(line 213) [repair #4ae2a970-d991-11e2-bd05-094c1105e54d] Received merkle tree 
for tags from /135.163.214.175
ERROR [Thread-40] 2013-06-20 03:08:03,903 CassandraDaemon.java (line 175) 
Exception in thread Thread[Thread-40,5,main]
java.lang.RuntimeException: java.io.IOException: failed to uncompress the 
chunk: FAILED_TO_UNCOMPRESS(5)
at 
org.apache.cassandra.service.AntiEntropyService$Validator$ValidatorSerializer.deserialize(AntiEntropyService.java:438)
at 
org.apache.cassandra.service.AntiEntropyService$Validator$ValidatorSerializer.deserialize(AntiEntropyService.java:421)
at org.apache.cassandra.net.MessageIn.read(MessageIn.java:94)
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:203)
at 
org.apache.cassandra.net.IncomingTcpConnection.handleModernVersion(IncomingTcpConnection.java:135)
at 
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:82)
Caused by: java.io.IOException: failed to uncompress the chunk: 
FAILED_TO_UNCOMPRESS(5)
at 
org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:361)
at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:383)
at java.io.DataInputStream.readInt(DataInputStream.java:387)
at 
org.apache.cassandra.utils.MerkleTree$Leaf$LeafSerializer.deserialize(MerkleTree.java:793)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:920)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:713)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:714)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:714)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:714)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:714)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:714)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:713)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:714)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
at 
org.apache.cassandra.utils.MerkleTree$Inner$InnerSerializer.deserialize(MerkleTree.java:714)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:918)
at 
org.apache.cassandra.utils.MerkleTree$Hashable$HashableSerializer.deserialize(MerkleTree.java:896)
  

[jira] [Created] (CASSANDRA-5676) Occasional timeouts from cassandra on secondary index queries: AssertionError: Illegal offset error observed in cassandra logs.

2013-06-20 Thread Rao (JIRA)
Rao created CASSANDRA-5676:
--

 Summary: Occasional timeouts from cassandra on secondary index 
queries: AssertionError: Illegal offset error observed in cassandra logs.
 Key: CASSANDRA-5676
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5676
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.5
Reporter: Rao


When executing queries based on a secondary index, we occasionally get an 
OperationTimeoutException from the Astyanax client and at the same time 
observe the following error in the Cassandra logs:

Query executed: select * from grd.route where 
serviceidentifier='com.att.aft.NagiosTestService'  LIMIT 3 ALLOW FILTERING;

serviceidentifier has a secondary index.

ERROR [ReadStage:6185] 2013-06-20 09:20:31,574 CassandraDaemon.java (line 175) 
Exception in thread Thread[ReadStage:6185,5,RMI Runtime]
java.lang.AssertionError: Illegal offset: 13956, size: 13955
at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:147)
at org.apache.cassandra.io.util.Memory.setBytes(Memory.java:103)
at 
org.apache.cassandra.io.util.MemoryOutputStream.write(MemoryOutputStream.java:45)
at 
org.apache.cassandra.utils.vint.EncodedDataOutputStream.write(EncodedDataOutputStream.java:50)
at 
org.apache.cassandra.utils.ByteBufferUtil.write(ByteBufferUtil.java:328)
at 
org.apache.cassandra.utils.ByteBufferUtil.writeWithLength(ByteBufferUtil.java:315)
at 
org.apache.cassandra.db.ColumnSerializer.serialize(ColumnSerializer.java:55)
at 
org.apache.cassandra.db.ColumnSerializer.serialize(ColumnSerializer.java:30)
at 
org.apache.cassandra.db.ColumnFamilySerializer.serialize(ColumnFamilySerializer.java:73)
at 
org.apache.cassandra.cache.SerializingCacheProvider$RowCacheSerializer.serialize(SerializingCacheProvider.java:47)
at 
org.apache.cassandra.cache.SerializingCacheProvider$RowCacheSerializer.serialize(SerializingCacheProvider.java:37)
at 
org.apache.cassandra.cache.SerializingCache.serialize(SerializingCache.java:118)
at 
org.apache.cassandra.cache.SerializingCache.replace(SerializingCache.java:206)
at 
org.apache.cassandra.cache.InstrumentingCache.replace(InstrumentingCache.java:54)
at 
org.apache.cassandra.db.ColumnFamilyStore.getThroughCache(ColumnFamilyStore.java:1174)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1203)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1126)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher$1.computeNext(CompositesSearcher.java:305)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher$1.computeNext(CompositesSearcher.java:161)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1466)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher.search(CompositesSearcher.java:85)
at 
org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:548)
at 
org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1454)
at 
org.apache.cassandra.service.RangeSliceVerbHandler.executeLocally(RangeSliceVerbHandler.java:44)
at 
org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1076)
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1578)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5676) Sporadic timeouts from cassandra on secondary index queries: AssertionError: Illegal offset error observed in cassandra logs.

2013-06-20 Thread Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rao updated CASSANDRA-5676:
---

Summary: Sporadic timeouts from cassandra on secondary index queries: 
AssertionError: Illegal offset error observed in cassandra logs.  (was: 
Occasional timeouts from cassandra on secondary index queries: AssertionError: 
Illegal offset error observed in cassandra logs.)

 Sporadic timeouts from cassandra on secondary index queries: AssertionError: 
 Illegal offset error observed in cassandra logs.
 -

 Key: CASSANDRA-5676
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5676
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.5
Reporter: Rao

 When executing queries based on a secondary index, we occasionally get an 
 OperationTimeoutException from the Astyanax client and at the same time 
 observe the following error in the Cassandra logs:
 Query executed: select * from grd.route where 
 serviceidentifier='com.att.aft.NagiosTestService'  LIMIT 3 ALLOW 
 FILTERING;
 serviceidentifier has a secondary index.
 ERROR [ReadStage:6185] 2013-06-20 09:20:31,574 CassandraDaemon.java (line 
 175) Exception in thread Thread[ReadStage:6185,5,RMI Runtime]
 java.lang.AssertionError: Illegal offset: 13956, size: 13955
 at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:147)
 at org.apache.cassandra.io.util.Memory.setBytes(Memory.java:103)
 at 
 org.apache.cassandra.io.util.MemoryOutputStream.write(MemoryOutputStream.java:45)
 at 
 org.apache.cassandra.utils.vint.EncodedDataOutputStream.write(EncodedDataOutputStream.java:50)
 at 
 org.apache.cassandra.utils.ByteBufferUtil.write(ByteBufferUtil.java:328)
 at 
 org.apache.cassandra.utils.ByteBufferUtil.writeWithLength(ByteBufferUtil.java:315)
 at 
 org.apache.cassandra.db.ColumnSerializer.serialize(ColumnSerializer.java:55)
 at 
 org.apache.cassandra.db.ColumnSerializer.serialize(ColumnSerializer.java:30)
 at 
 org.apache.cassandra.db.ColumnFamilySerializer.serialize(ColumnFamilySerializer.java:73)
 at 
 org.apache.cassandra.cache.SerializingCacheProvider$RowCacheSerializer.serialize(SerializingCacheProvider.java:47)
 at 
 org.apache.cassandra.cache.SerializingCacheProvider$RowCacheSerializer.serialize(SerializingCacheProvider.java:37)
 at 
 org.apache.cassandra.cache.SerializingCache.serialize(SerializingCache.java:118)
 at 
 org.apache.cassandra.cache.SerializingCache.replace(SerializingCache.java:206)
 at 
 org.apache.cassandra.cache.InstrumentingCache.replace(InstrumentingCache.java:54)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getThroughCache(ColumnFamilyStore.java:1174)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1203)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1126)
 at 
 org.apache.cassandra.db.index.composites.CompositesSearcher$1.computeNext(CompositesSearcher.java:305)
 at 
 org.apache.cassandra.db.index.composites.CompositesSearcher$1.computeNext(CompositesSearcher.java:161)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1466)
 at 
 org.apache.cassandra.db.index.composites.CompositesSearcher.search(CompositesSearcher.java:85)
 at 
 org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:548)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1454)
 at 
 org.apache.cassandra.service.RangeSliceVerbHandler.executeLocally(RangeSliceVerbHandler.java:44)
 at 
 org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1076)
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1578)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:722)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5671) cassandra automatic token generation issue: each datacenter doesnot span the complete set of tokens in NetworkTopologyStrategy

2013-06-19 Thread Rao (JIRA)
Rao created CASSANDRA-5671:
--

 Summary: cassandra automatic token generation issue: each 
datacenter doesnot span the complete set of tokens in NetworkTopologyStrategy
 Key: CASSANDRA-5671
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5671
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.5
Reporter: Rao


When a route is saved, some routes take a lot longer to save (200ms+) than 
others (30ms). When analysed, it looks like the routeId (the primary key, 
which is a UUID) has a token that maps to a different datacenter than the 
current one, so the request goes across DCs and takes more time.

We have the following configuration for the keyspace: 2 nodes in each 
datacenter, with a replication factor of 2.

CREATE KEYSPACE grd WITH replication = {
  'class': 'NetworkTopologyStrategy',
  'HYWRCA02': '2',
  'CHRLNCUN': '2'
};

Cassandra Version:  Cassandra 1.2.5 
Using Virtual tokens generated (num_tokens: 256)
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
On save we are using the consistency level of ONE. 
On read we are using the consistency level of local_quorum.

In this case I am expecting the tokens to be generated in such a way that 
each datacenter spans the complete set of tokens, so that a save always goes 
to the local datacenter, and reads likewise go to the local DC.

Some examples of nodetool getendpoints:
[cassdra@hltd217 conf]$ nodetool -h hltd217.hydc.sbc.com -p 2 getendpoints 
grd route 22005151-a250-37b5-bb00-163df3bf0ad6
135.201.73.144 (dc2)
135.201.73.145 (dc2)
150.233.236.97 (dc1)
150.233.236.98 (dc1)

[cassdra@hltd217 conf]$ nodetool -h hltd217.hydc.sbc.com -p 2 getendpoints 
grd route d1e86f4e-6d74-3bf6-8d76-27f41ae18149
150.233.236.97 (dc1)
135.201.73.144 (dc2)
150.233.236.98 (dc1)
135.201.73.145 (dc2)
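
One way to look at how the vnode tokens and ownership are spread across the
two datacenters (a general nodetool check, not something taken from this
report) is:

nodetool status grd   # per-node ownership for the keyspace, grouped by DC
nodetool ring grd     # token ranges assigned to each node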

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5671) cassandra automatic token generation issue: each datacenter doesnot span the complete set of tokens in NetworkTopologyStrategy

2013-06-19 Thread Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rao updated CASSANDRA-5671:
---

Description: 
When a route is saved, some routes take a lot longer to save (200ms+) than 
others (30ms). When analysed, it looks like the routeId (the primary key, 
which is a UUID) has a token that maps to a different datacenter than the 
current one, so the request goes across DCs and takes more time.

We have the following configuration for the keyspace: 2 nodes in each 
datacenter, with a replication factor of 2.

CREATE KEYSPACE grd WITH replication = {
  'class': 'NetworkTopologyStrategy',
  'HYWRCA02': '2',
  'CHRLNCUN': '2'
};

Cassandra Version:  Cassandra 1.2.5 
Using Virtual tokens generated (num_tokens: 256)
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
On save we are using the consistency level of ONE. 
On read we are using the consistency level of local_quorum.

In this case I am expecting the tokens to be generated in such a way that 
each datacenter spans the complete set of tokens, so that a save always goes 
to the local datacenter, and reads likewise go to the local DC.

Some examples of nodetool getendpoints:
[cassdra@hltd217 conf]$ nodetool -h hltd217.hydc.sbc.com -p 2 getendpoints 
grd route 22005151-a250-37b5-bb00-163df3bf0ad6
135.201.73.144 (dc2)
135.201.73.145 (dc2)
150.233.236.97 (dc1)
150.233.236.98 (dc1)

[cassdra@hltd217 conf]$ nodetool -h hltd217.hydc.sbc.com -p 2 getendpoints 
grd route d1e86f4e-6d74-3bf6-8d76-27f41ae18149
150.233.236.97 (dc1)
135.201.73.144 (dc2)
150.233.236.98 (dc1)
135.201.73.145 (dc2)

Not sure if we are missing any configuration. Would really appreciate some help.

thx - srrepaka

  was:
When a route is saved, some of the routes save time is taking log longer 
(200ms+) than the other routes (30ms). When analysed, it looks like the 
routeId (primary key which is a UUID) has a token that maps to a different 
datacenter than the current one, so the request is going accross dc and is 
taking more time. 

We have the following configuration for the keyspace: 2 nodes in each 
datacenter and with replication factor of 2. 

CREATE KEYSPACE grd WITH replication = {
  'class': 'NetworkTopologyStrategy',
  'HYWRCA02': '2',
  'CHRLNCUN': '2'
};

Cassandra Version:  Cassandra 1.2.5 
Using Virtual tokens generated (num_tokens: 256)
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
On save we are using the consistency level of ONE. 
On read we are using the consistency level of local_quorum.

So in this case am expecting the the tokens to be generated in such a way that 
the each datacenter spans the complete set of tokens. So when a save happens it 
always goes to the local data center. Also on reads too, it should go to the 
local dc.

some examples of the nodetool getendpoints:
[cassdra@hltd217 conf]$ nodetool -h hltd217.hydc.sbc.com -p 2 getendpoints 
grd route 22005151-a250-37b5-bb00-163df3bf0ad6
135.201.73.144 (dc2)
135.201.73.145 (dc2)
150.233.236.97 (dc1)
150.233.236.98 (dc1)

[cassdra@hltd217 conf]$ nodetool -h hltd217.hydc.sbc.com -p 2 getendpoints 
grd route d1e86f4e-6d74-3bf6-8d76-27f41ae18149
150.233.236.97 (dc1)
135.201.73.144 (dc2)
150.233.236.98 (dc1)
135.201.73.145 (dc2)


 cassandra automatic token generation issue: each datacenter doesnot span the 
 complete set of tokens in NetworkTopologyStrategy
 --

 Key: CASSANDRA-5671
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5671
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.5
Reporter: Rao

 When a route is saved, some of the routes save time is taking log longer 
 (200ms+) than the other routes (30ms). When analysed, it looks like the 
 routeId (primary key which is a UUID) has a token that maps to a different 
 datacenter than the current one, so the request is going accross dc and is 
 taking more time. 
 We have the following configuration for the keyspace: 2 nodes in each 
 datacenter and with replication factor of 2. 
 CREATE KEYSPACE grd WITH replication = {
   'class': 'NetworkTopologyStrategy',
   'HYWRCA02': '2',
   'CHRLNCUN': '2'
 };
 Cassandra Version:  Cassandra 1.2.5 
 Using Virtual tokens generated (num_tokens: 256)
 partitioner: org.apache.cassandra.dht.Murmur3Partitioner
 On save we are using the consistency level of ONE. 
 On read we are using the consistency level of local_quorum.
 So in this case am expecting the the tokens to be generated in such a way 
 that the each datacenter spans the complete set of tokens. So when a save 
 happens it always goes to the local data center. Also on reads too, it should 
 go to the local dc.
 some examples of the 

[jira] [Commented] (CASSANDRA-5510) Following sequence of operations delete, add, search by secondary index of operations doesnot return correct results all the time.

2013-04-24 Thread Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13640481#comment-13640481
 ] 

Rao commented on CASSANDRA-5510:


The performance test we have has many layers and may not be feasible to run 
on your side. I will try to put together a test that recreates this scenario 
and provide it to you.

 Following sequence of operations delete, add, search by secondary index of 
 operations doesnot return correct results all the time.
 --

 Key: CASSANDRA-5510
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5510
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.2
 Environment: Test
Reporter: Rao
  Labels: patch

 The following sequence of operations (delete, add, then search by secondary 
 index) does not return correct results all the time.
 Performance tests were performed with each thread running the following 
 sequence of operations: delete a set of rows, add a set of rows, and then 
 search for a set of rows by secondary index. On search, some of the rows 
 were sometimes not returned.
 configuration:
 replication_factor:2 per dc 
 nodes: 2 per dc
 consistency_level: local_quorum

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5510) Following sequence of operations delete, add, search by secondary index of operations doesnot return correct results all the time.

2013-04-23 Thread Rao (JIRA)
Rao created CASSANDRA-5510:
--

 Summary: Following sequence of operations delete, add, search by 
secondary index of operations doesnot return correct results all the time.
 Key: CASSANDRA-5510
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5510
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.2
 Environment: Test
Reporter: Rao


The following sequence of operations (delete, add, then search by secondary 
index) does not return correct results all the time.

Performance tests were performed with each thread running the following 
sequence of operations: delete a set of rows, add a set of rows, and then 
search for a set of rows by secondary index. On search, some of the rows were 
sometimes not returned.

configuration:
replication_factor:2 per dc 
nodes: 2 per dc
consistency_level: local_quorum
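
For clarity, a rough sketch of the per-thread sequence (the table, column, and
index names below are hypothetical, not taken from the report):

cqlsh> DELETE FROM ks.tbl WHERE id = 'row-1';
cqlsh> INSERT INTO ks.tbl (id, indexed_col) VALUES ('row-1', 'group-a');
cqlsh> SELECT * FROM ks.tbl WHERE indexed_col = 'group-a';
-- at consistency level LOCAL_QUORUM, the report says rows like 'row-1' are
-- sometimes missing from the secondary-index search results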


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5062) Support CAS

2013-02-27 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13588668#comment-13588668
 ] 

Jun Rao commented on CASSANDRA-5062:


To support things like CAS, the easiest way is for all writes to go to a leader 
replica that orders all incoming writes. There are different approaches for the 
leader to commit data. One approach is the quorum-based one used in Paxos, ZK 
and Spinnaker. The advantage of this approach is that it can hide the latency 
of a slow replica. The disadvantage is that for 2f+1 replicas, it only 
tolerates f failures (instead of 2f failures). While this is ok for ZK since it 
only stores state info, it's probably not ideal for systems that store real 
data. For that reason, in Kafka, we designed a slightly different approach for 
maintaining strongly consistent replicas. The details can be found in the 
ApacheCon presentation that I gave yesterday 
(http://www.slideshare.net/junrao/kafka-replication-apachecon2013). The Kafka 
design doesn't do paxos, but depends on ZK for leader election. So, the 
implementation is a bit simpler than that used in Spinnaker.


 Support CAS
 ---

 Key: CASSANDRA-5062
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5062
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
 Fix For: 2.0


 Strong consistency is not enough to prevent race conditions.  The classic 
 example is user account creation: we want to ensure usernames are unique, so 
 we only want to signal account creation success if nobody else has created 
 the account yet.  But naive read-then-write allows clients to race and both 
 think they have a green light to create.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5152) CQLSH prompt doesn't properly accept input characters on OSX

2013-01-13 Thread Akshay Rao (JIRA)
Akshay Rao created CASSANDRA-5152:
-

 Summary: CQLSH prompt doesn't properly accept input characters on 
OSX
 Key: CASSANDRA-5152
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5152
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0
 Environment: OSX Mountain Lion
Reporter: Akshay Rao


In the terminal on OSX Mountain Lion, I execute 'cqlsh'. When I try to type 
the letter 't', nothing appears on the screen. All other keys work, and no 
other shell application is affected in this manner. This is not an issue with 
Cassandra 1.1.6; it just started happening when I downloaded Cassandra 1.2.0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira