[jira] [Updated] (CASSANDRA-2864) Alternative Row Cache Implementation

2012-04-18 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-2864:
-

Comment: was deleted

(was: Wrote comments thinking it was a different ticket, hence removed the 
comments...)

 Alternative Row Cache Implementation
 

 Key: CASSANDRA-2864
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2864
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Daniel Doubleday
Assignee: Daniel Doubleday
Priority: Minor

 We have been working on an alternative implementation to the existing row 
 cache(s).
 We have two main goals:
 - Decrease memory usage - get more rows into the cache without suffering a 
 huge performance penalty
 - Reduce GC pressure
 This sounds a lot like we should be using the new serializing cache in 0.8. 
 Unfortunately our workload consists of loads of updates, which would 
 invalidate the cache all the time.
 The second unfortunate thing is that the idea we came up with doesn't fit the 
 new cache provider API.
 It looks like this:
 Like the serializing cache, we basically only cache the serialized byte 
 buffer. We don't serialize the bloom filter, and we try some other minor 
 compression tricks (varints etc. - not done yet). The main difference is that 
 we don't deserialize but use the normal sstable iterators and filters, as in 
 the regular uncached case.
 So the read path looks like this:
 return filter.collectCollatedColumns(memtable iter, cached row iter)
 The write path is not affected; it does not update the cache.
 During flush we merge all memtable updates with the cached rows.
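For illustration, the collated read described above - merging the memtable 
iterator with the cached-row iterator, with the newer memtable columns winning 
on a collision - can be sketched like this (class and method names are 
hypothetical, not Cassandra's actual filter API):

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Hypothetical sketch of collating two sorted column sources: the cached
// serialized row and the memtable. Both yield columns in sorted name order;
// the memtable (newer) shadows the cached value on a name collision.
public class CollatingRead {
    static SortedMap<String, String> collate(SortedMap<String, String> cachedRow,
                                             SortedMap<String, String> memtable) {
        SortedMap<String, String> merged = new TreeMap<>(cachedRow);
        merged.putAll(memtable); // newer memtable columns win
        return merged;
    }
}
```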
 The attached patch is based on the 0.8 branch, r1143352.
 It does not replace the existing row cache but sits alongside it. There is an 
 environment switch to choose the implementation, which makes it easy to 
 benchmark performance differences.
 -DuseSSTableCache=true enables the alternative cache. It shares its 
 configuration with the standard row cache, so the cache capacity is shared.
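A JVM flag like the one above is typically read via Boolean.getBoolean; the 
wrapper class here is illustrative, not the patch's actual wiring:

```java
// Reads the -DuseSSTableCache JVM flag described above. Boolean.getBoolean
// returns true only when the system property is set and equals "true".
// The wrapper class name is illustrative.
public class CacheSwitch {
    static boolean useSSTableCache() {
        return Boolean.getBoolean("useSSTableCache");
    }
}
```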
 We have duplicated a fair amount of code. We initially refactored the 
 existing sstable filter / reader but then decided to minimize dependencies. 
 This also makes it easy to customize serialization for in-memory sstable 
 rows.
 We have also experimented a little with compression, but since this task at 
 this stage is mainly to kick off discussion, we wanted to keep things simple. 
 There is certainly room for optimization.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-2864) Alternative Row Cache Implementation

2012-04-18 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-2864:
-

Comment: was deleted

(was: Hi Jonathan, when there is a write for X3 we invalidate/update the 
cache, and once the row is out of the cache the next fetch does the FS scan 
and repopulates it (similar to the page cache: if there is a write on a block, 
the whole block is marked dirty and the next fetch goes to the FS). There is a 
configurable block size which, when set high enough, will cache the whole row 
(like the existing cache). The logic around it is roughly what the patch has.

 I think you might need to write that book, because the commit history is 
 tough to follow
Yeah, I just wrote a prototype, hence... :) I can clean it up if we agree on 
that approach.)





[jira] [Updated] (CASSANDRA-4138) Add varint encoding to Serializing Cache

2012-04-18 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4138:
-

Attachment: 0001-CASSANDRA-4138-v4.patch

v4 fixes the test failures (BBU.readShort to readUnsignedShort) and also fixes 
the autoboxing in the verbs that was present in earlier versions.

Thanks!

 Add varint encoding to Serializing Cache
 

 Key: CASSANDRA-4138
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4138
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Affects Versions: 1.2
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.2

 Attachments: 0001-CASSANDRA-4138-Take1.patch, 
 0001-CASSANDRA-4138-V2.patch, 0001-CASSANDRA-4138-v4.patch, 
 0002-sizeof-changes-on-rest-of-the-code.patch, CASSANDRA-4138-v3.patch








[jira] [Updated] (CASSANDRA-4138) Add varint encoding to Serializing Cache

2012-04-18 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4138:
-

Attachment: 0001-CASSANDRA-4138-v4.patch





[jira] [Updated] (CASSANDRA-4138) Add varint encoding to Serializing Cache

2012-04-18 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4138:
-

Attachment: (was: 0001-CASSANDRA-4138-v4.patch)





[jira] [Updated] (CASSANDRA-4138) Add varint encoding to Serializing Cache

2012-04-17 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4138:
-

Attachment: 0002-sizeof-changes-on-rest-of-the-code.patch

Attached patch makes the DBConstants private and changes other, unrelated 
code to use sizeof instead of the constants.





[jira] [Updated] (CASSANDRA-4138) Add varint encoding to Serializing Cache

2012-04-16 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4138:
-

Attachment: 0001-CASSANDRA-4138-V2.patch

Hi Pavel, attached patch has the recommended changes except:

{comment}
I think the DBConstants class should now be changed to only share 
sizeof(type) methods and become something like DBConstants.{native, 
vint}.sizeof(type)
{comment}

I will mark it private once the parent ticket is complete (Messaging and 
SSTable formats); currently it is called in other places too.





[jira] [Updated] (CASSANDRA-4140) Build stress classes in a location that allows tools/stress/bin/stress to find them

2012-04-13 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4140:
-

Attachment: 0001-CASSANDRA-4140-v2.patch

Done and tested!

 Build stress classes in a location that allows tools/stress/bin/stress to 
 find them
 ---

 Key: CASSANDRA-4140
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4140
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Affects Versions: 1.2
Reporter: Nick Bailey
Assignee: Vijay
Priority: Trivial
 Fix For: 1.2

 Attachments: 0001-CASSANDRA-4140-v2.patch, 0001-CASSANDRA-4140.patch


 Right now it's hard to run stress from a checkout of trunk. You need to do 
 'ant artifacts' and then run the stress tool in the generated artifacts.
 A discussion on IRC came up with the proposal to just move stress to the main 
 jar, put the stress/stressd bash scripts in bin/, and drop the tools 
 directory altogether. It will be easier for users to find that way and will 
 make running stress from a checkout much easier.





[jira] [Updated] (CASSANDRA-2635) make cache skipping optional

2012-04-12 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-2635:
-

Attachment: 0001-CASSANDRA-2635-v3.patch

Attached patch incorporates Jonathan's feedback with some renames. I am still 
wondering how I missed that :/

 make cache skipping optional
 

 Key: CASSANDRA-2635
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2635
 Project: Cassandra
  Issue Type: Improvement
Reporter: Peter Schuller
Assignee: Harish Doddi
Priority: Minor
 Attachments: 0001-CASSANDRA-2635-v3.patch, CASSANDRA-2635-075.txt, 
 CASSANDRA-2635-trunk-1.txt, CASSANDRA-2635-trunk.txt


 We've applied this patch locally in order to turn off page skipping; not 
 completely, but only for compaction/repair situations where it can be directly 
 detrimental, in the sense of causing data to become cold even though your 
 entire data set fits in memory.
 It's better than completely disabling DONTNEED, because the cache skipping 
 does make sense and has no relevant (that I can see) detrimental effects in 
 some cases, like when dumping caches.
 The patch is against 0.7.5 right now, but if the change is desired I can make 
 a patch for trunk. Also, the name of the configuration option is dubious, 
 since saying 'false' does not actually turn it off completely. I wasn't able 
 to figure out a short name that conveyed the functionality, however.
 A related concern, as discussed in CASSANDRA-1902, is that the cache skipping 
 isn't fsyncing and so won't work reliably on writes. If the feature is to be 
 retained, that's something to fix in a different ticket.
 A question is also whether to keep the default at true or change it to 
 false. I'm leaning toward false, since it's detrimental in the easy cases 
 of little data. In the big cases with lots of data, people will have to think 
 and tweak anyway, so better to put the burden on that end.





[jira] [Updated] (CASSANDRA-4138) Add varint encoding to Serializing Cache

2012-04-12 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4138:
-

Attachment: 0001-CASSANDRA-4138-Take1.patch

Attached patch is the first attempt to add varint encoding to Cassandra.

It saves us around 10% of the memory compared to the normal DataInputStream.

Once this gets committed I will work on the rest of the pieces.
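As an illustration of the kind of variable-length encoding described here, a 
zig-zag varint (the scheme used by, e.g., Protocol Buffers) writes small 
values in one byte instead of a fixed 4 or 8. The class and method names 
below are hypothetical, not the ones in the attached patch:

```java
import java.io.ByteArrayOutputStream;

// Sketch of a zig-zag varint codec: signed values are mapped to unsigned
// so small negatives stay short, then emitted 7 bits per byte with the
// high bit set on every byte except the last.
public class VarIntCodec {
    static long zigzagEncode(long n) { return (n << 1) ^ (n >> 63); }
    static long zigzagDecode(long n) { return (n >>> 1) ^ -(n & 1); }

    static byte[] writeVLong(long value) {
        long v = zigzagEncode(value);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((v & ~0x7FL) != 0) {       // more than 7 bits remain
            out.write((int) ((v & 0x7F) | 0x80));
            v >>>= 7;
        }
        out.write((int) v);               // final byte, high bit clear
        return out.toByteArray();
    }

    static long readVLong(byte[] buf) {
        long v = 0;
        int shift = 0, i = 0;
        while (true) {
            byte b = buf[i++];
            v |= (long) (b & 0x7F) << shift;
            if ((b & 0x80) == 0) break;   // high bit clear: last byte
            shift += 7;
        }
        return zigzagDecode(v);
    }
}
```

A long in the range [-64, 63] encodes to a single byte, which is where the 
memory savings over fixed-width writes come from.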





[jira] [Updated] (CASSANDRA-4138) Add varint encoding to Serializing Cache

2012-04-12 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4138:
-

Attachment: 0001-CASSANDRA-4138-Take1.patch





[jira] [Updated] (CASSANDRA-2635) make cache skipping optional

2012-04-12 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-2635:
-

Affects Version/s: 1.2
   1.1.1





[jira] [Updated] (CASSANDRA-2635) make cache skipping optional

2012-04-12 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-2635:
-

Affects Version/s: (was: 1.2)
Fix Version/s: 1.2
   1.1.1





[jira] [Updated] (CASSANDRA-4141) Looks like Serializing cache broken in 1.1

2012-04-11 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4141:
-

Attachment: 0001-CASSANDRA-4141.patch

Attached patch fixes this issue. The change to ConcurrentLinkedHashCache is 
not needed, but I thought the default was better than setting it to 0.

 Looks like Serializing cache broken in 1.1
 --

 Key: CASSANDRA-4141
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4141
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
Reporter: Vijay
 Fix For: 1.1.0

 Attachments: 0001-CASSANDRA-4141.patch


 I get the following error while setting the row cache to 1500 MB:
 INFO 23:27:25,416 Initializing row cache with capacity of 1500 MBs and 
 provider org.apache.cassandra.cache.SerializingCacheProvider
 java.lang.OutOfMemoryError: Java heap space
 Dumping heap to java_pid26402.hprof ...
 I haven't spent a lot of time looking into the issue, but it looks like the 
 SC constructor has 
 .initialCapacity(capacity)
 .maximumWeightedCapacity(capacity)
 where capacity is 1500 MB.
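A plausible reading of the failure (an assumption, not stated in the ticket): 
initialCapacity expects an entry count, so passing a 1500 MB budget expressed 
in bytes asks the map to pre-size for roughly 1.57 billion entries, which by 
itself exhausts the heap. The arithmetic:

```java
// Converts the ticket's row cache budget from megabytes to bytes.
// Passing this byte figure to an initialCapacity(int)-style parameter,
// which expects an entry count, would pre-size the map for ~1.57 billion
// entries - a plausible cause of the OutOfMemoryError above.
public class RowCacheSizing {
    static long budgetBytes(long megabytes) {
        return megabytes * 1024L * 1024L;
    }
}
```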





[jira] [Updated] (CASSANDRA-4140) Build stress classes in a location that allows tools/stress/bin/stress to find them

2012-04-11 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4140:
-

Attachment: 0001-CASSANDRA-4140.patch

Alright, attached patch will make the following possible...

Build from source -
#ant stress-build
#tools/stress/bin/stress

Installed binary -
#tools/stress/bin/stress

NOTE: executing from a source checkout only works on Unix-like systems; I 
don't have a machine to test stress.bat, hence I left it untouched.





[jira] [Updated] (CASSANDRA-4100) Make scrub and cleanup operations throttled

2012-04-10 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4100:
-

Attachment: 0001-CASSANDRA-4100-v3.patch

Alright, v3 simply adds throttling to scrub and cleanup... a simple refactor 
to move the throttle to the compaction controller.

 Make scrub and cleanup operations throttled
 ---

 Key: CASSANDRA-4100
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4100
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Vijay
Assignee: Vijay
Priority: Minor
  Labels: compaction
 Fix For: 1.0.10

 Attachments: 0001-CASSANDRA-4100-v2.patch, 
 0001-CASSANDRA-4100-v3.patch, 0001-CASSANDRA-4100.patch


 Looks like scrub and cleanup operations are not throttled; it would be nice 
 to throttle them, since otherwise we are likely to run into IO issues while 
 running them on a live cluster.
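A minimal sketch of the kind of byte-rate throttle the ticket asks for (names 
and structure are illustrative, not Cassandra's actual compaction throttling 
code):

```java
// After each chunk of I/O, sleep long enough to keep the observed average
// rate at or below the target bytes/second. Illustrative sketch only.
public class Throttle {
    private final long bytesPerSecond;
    private final long startNanos = System.nanoTime();
    private long totalBytes;

    Throttle(long bytesPerSecond) { this.bytesPerSecond = bytesPerSecond; }

    void onBytesProcessed(long n) {
        totalBytes += n;
        // Earliest time (from start) at which this many bytes is allowed.
        double targetSec = (double) totalBytes / bytesPerSecond;
        double elapsedSec = (System.nanoTime() - startNanos) / 1e9;
        long sleepMs = (long) ((targetSec - elapsedSec) * 1000);
        if (sleepMs > 0) {
            try { Thread.sleep(sleepMs); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }
}
```

The caller invokes onBytesProcessed after each block of scrub/cleanup I/O, so 
bursts are smoothed out over the run.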





[jira] [Updated] (CASSANDRA-4100) Make scrub and cleanup operations throttled

2012-04-09 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4100:
-

Attachment: 0001-CASSANDRA-4100-v2.patch

Fixed. Thanks!





[jira] [Updated] (CASSANDRA-3690) Streaming CommitLog backup

2012-04-07 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3690:
-

Attachment: 0001-CASSANDRA-3690-v5.patch

Hi Jonathan, 
v5 removes the recycle-related changes and
adds 2 JMX methods (getActiveSegmentNames and getArchivingSegmentNames).

(list of all files) - (getActiveSegmentNames) will provide a view of the 
orphan files which failed archiving...

 Streaming CommitLog backup
 --

 Key: CASSANDRA-3690
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3690
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1.1

 Attachments: 0001-CASSANDRA-3690-v2.patch, 
 0001-CASSANDRA-3690-v4.patch, 0001-CASSANDRA-3690-v5.patch, 
 0001-Make-commitlog-recycle-configurable.patch, 
 0002-support-commit-log-listener.patch, 0003-helper-jmx-methods.patch, 
 0004-external-commitlog-with-sockets.patch, 
 0005-cmmiting-comments-to-yaml.patch


 Problems with the current SST backups:
 1) The current backup doesn't allow us to restore to a point in time (within 
 an SST).
 2) The current SST implementation needs the backup to read from the 
 filesystem, causing additional IO on the disks used for normal operations.
 3) In 1.0 we removed the per-CF flush interval and size at which a flush 
 would be triggered. For use cases with few writes, it becomes increasingly 
 difficult to time the backup right.
 4) Use cases that need external (non-Cassandra) BI need the data at regular 
 intervals rather than waiting for long or unpredictable intervals.
 Disadvantages of the new solution:
 1) Overhead in processing the mutations during the recovery phase.
 2) A more complicated solution than just copying the file to the archive.
 Additional advantages:
 Online and offline restore.
 Close-to-live incremental backup.
 Note: if the listener agent gets restarted, it is the agent's responsibility 
 to stream the files that were missed or incomplete.
 There are 3 options in the initial implementation:
 1) Backup - once a socket is connected, we switch the commit log and send 
 new updates via the socket.
 2) Stream - takes the absolute path of a file, reads it, and sends the 
 updates via the socket.
 3) Restore - receives the serialized bytes and applies the mutations.
 Side note (not related to this patch as such): the agent which will take the 
 incremental backup is planned to be open sourced soon (name: Priam).
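The Backup/Stream/Restore options all move serialized mutations over a 
socket. A simple length-prefixed framing (an assumption - the ticket does not 
specify a wire format) could look like this:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Length-prefixed framing for shipping serialized mutations over a stream,
// as a commit-log listener might use. The framing itself is an assumption;
// the ticket does not specify a wire format.
public class MutationFraming {
    static void writeFrame(DataOutputStream out, byte[] mutation) {
        try {
            out.writeInt(mutation.length); // 4-byte big-endian length prefix
            out.write(mutation);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static byte[] readFrame(DataInputStream in) {
        try {
            byte[] buf = new byte[in.readInt()];
            in.readFully(buf);             // block until the full frame arrives
            return buf;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

On restore, each frame would be deserialized and applied as a mutation; the 
same framing works over a socket stream or a byte buffer.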





[jira] [Updated] (CASSANDRA-3690) Streaming CommitLog backup

2012-04-06 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3690:
-

Attachment: 0001-CASSANDRA-3690.patch

Hi Jonathan, attached patch incorporates all the recommended changes except:

 Maybe we should also have a restore_list_segments command as well, so we 
 can query s3 (again for instance) directly and have restore_command pull 
 from there, rather than requiring a local directory?

IMHO it might be better if we have a streaming API to list and stream the data 
in... otherwise we have to download to the local FS anyway, so it will be 
better to incrementally download and use JMX to restore the files 
independently (for example, via an external agent); that may be a simple 
solution for now. If the user has an NFS mount it will work even better: all 
he needs to do is ln -s the location and he is done :)

Please note that I also removed the requirement to turn off recycling for 
backup (as recommended), but I left it configurable, because it is sometimes 
good to have unique names in the backup so we don't overwrite :)

 Streaming CommitLog backup
 --

 Key: CASSANDRA-3690
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3690
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1.1

 Attachments: 0001-CASSANDRA-3690-v2.patch, 0001-CASSANDRA-3690.patch, 
 0001-Make-commitlog-recycle-configurable.patch, 
 0002-support-commit-log-listener.patch, 0003-helper-jmx-methods.patch, 
 0004-external-commitlog-with-sockets.patch, 
 0005-cmmiting-comments-to-yaml.patch


 Problems with the current SSTable backups:
 1) The current backup doesn't allow restoring to a point in time (within an SSTable).
 2) The current SSTable implementation needs the backup to read from the 
 filesystem, adding extra IO on the operational disks.
 3) In 1.0 we removed the flush interval and size settings that triggered 
 flushes per CF. For use cases with few writes, it becomes 
 increasingly difficult to time backups right.
 4) External (non-Cassandra) BI use cases need the data at regular intervals 
 rather than waiting for long or unpredictable intervals.
 Disadvantages of the new solution:
 1) Overhead in processing the mutations during the recovery phase.
 2) A more complicated solution than just copying the file to the archive.
 Additional advantages:
 Online and offline restore.
 Close-to-live incremental backup.
 Note: If the listener agent gets restarted, it is the agent's responsibility 
 to stream the files that were missed or incomplete.
 There are 3 options in the initial implementation:
 1) Backup - once a socket is connected, we will switch the commit log and 
 send new updates via the socket.
 2) Stream - takes the absolute path of a file, reads the file, and sends 
 its updates via the socket.
 3) Restore - receives the serialized bytes and applies the mutations.
 Side note (not related to this patch as such): the agent that takes the 
 incremental backup is planned to be open sourced soon (name: Priam).





[jira] [Updated] (CASSANDRA-3690) Streaming CommitLog backup

2012-04-06 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3690:
-

Attachment: (was: 0001-CASSANDRA-3690.patch)

 Streaming CommitLog backup
 --

 Key: CASSANDRA-3690
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3690
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1.1

 Attachments: 0001-CASSANDRA-3690-v2.patch, 
 0001-CASSANDRA-3690-v4.patch, 0001-Make-commitlog-recycle-configurable.patch, 
 0002-support-commit-log-listener.patch, 0003-helper-jmx-methods.patch, 
 0004-external-commitlog-with-sockets.patch, 
 0005-cmmiting-comments-to-yaml.patch


 Problems with the current SSTable backups:
 1) The current backup doesn't allow restoring to a point in time (within an SSTable).
 2) The current SSTable implementation needs the backup to read from the 
 filesystem, adding extra IO on the operational disks.
 3) In 1.0 we removed the flush interval and size settings that triggered 
 flushes per CF. For use cases with few writes, it becomes 
 increasingly difficult to time backups right.
 4) External (non-Cassandra) BI use cases need the data at regular intervals 
 rather than waiting for long or unpredictable intervals.
 Disadvantages of the new solution:
 1) Overhead in processing the mutations during the recovery phase.
 2) A more complicated solution than just copying the file to the archive.
 Additional advantages:
 Online and offline restore.
 Close-to-live incremental backup.
 Note: If the listener agent gets restarted, it is the agent's responsibility 
 to stream the files that were missed or incomplete.
 There are 3 options in the initial implementation:
 1) Backup - once a socket is connected, we will switch the commit log and 
 send new updates via the socket.
 2) Stream - takes the absolute path of a file, reads the file, and sends 
 its updates via the socket.
 3) Restore - receives the serialized bytes and applies the mutations.
 Side note (not related to this patch as such): the agent that takes the 
 incremental backup is planned to be open sourced soon (name: Priam).





[jira] [Updated] (CASSANDRA-3690) Streaming CommitLog backup

2012-04-06 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3690:
-

Attachment: 0001-CASSANDRA-3690-v4.patch

 Streaming CommitLog backup
 --

 Key: CASSANDRA-3690
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3690
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1.1

 Attachments: 0001-CASSANDRA-3690-v2.patch, 
 0001-CASSANDRA-3690-v4.patch, 0001-Make-commitlog-recycle-configurable.patch, 
 0002-support-commit-log-listener.patch, 0003-helper-jmx-methods.patch, 
 0004-external-commitlog-with-sockets.patch, 
 0005-cmmiting-comments-to-yaml.patch


 Problems with the current SSTable backups:
 1) The current backup doesn't allow restoring to a point in time (within an SSTable).
 2) The current SSTable implementation needs the backup to read from the 
 filesystem, adding extra IO on the operational disks.
 3) In 1.0 we removed the flush interval and size settings that triggered 
 flushes per CF. For use cases with few writes, it becomes 
 increasingly difficult to time backups right.
 4) External (non-Cassandra) BI use cases need the data at regular intervals 
 rather than waiting for long or unpredictable intervals.
 Disadvantages of the new solution:
 1) Overhead in processing the mutations during the recovery phase.
 2) A more complicated solution than just copying the file to the archive.
 Additional advantages:
 Online and offline restore.
 Close-to-live incremental backup.
 Note: If the listener agent gets restarted, it is the agent's responsibility 
 to stream the files that were missed or incomplete.
 There are 3 options in the initial implementation:
 1) Backup - once a socket is connected, we will switch the commit log and 
 send new updates via the socket.
 2) Stream - takes the absolute path of a file, reads the file, and sends 
 its updates via the socket.
 3) Restore - receives the serialized bytes and applies the mutations.
 Side note (not related to this patch as such): the agent that takes the 
 incremental backup is planned to be open sourced soon (name: Priam).





[jira] [Updated] (CASSANDRA-3966) KeyCacheKey and RowCacheKey to use raw byte[]

2012-04-04 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3966:
-

Attachment: 0001-CASSANDRA-3966-v2.patch

Hi Jonathan, v2 does what was recommended. One thing: there was redundancy between 
the int returned by write and the serialization size, hence I made write return void. Thanks!

 KeyCacheKey and RowCacheKey to use raw byte[]
 -

 Key: CASSANDRA-3966
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3966
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.0
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1.1

 Attachments: 0001-CASSANDRA-3966-v2.patch, 0001-CASSANDRA-3966.patch


 We can store the raw byte[] instead of a ByteBuffer.
 After reading this mail:
 http://www.mail-archive.com/dev@cassandra.apache.org/msg03725.html
 Each ByteBuffer costs 48 bytes of housekeeping overhead, which can be removed 
 by implementing hashCode and equals directly in KeyCacheKey and RowCacheKey:
 http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/nio/ByteBuffer.java#ByteBuffer.hashCode%28%29
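The overhead argument above can be sketched in a few lines. This is a minimal illustration (the class name is hypothetical, not the actual Cassandra KeyCacheKey/RowCacheKey): a key type holding a raw byte[] that implements hashCode/equals itself via java.util.Arrays, so no per-entry ByteBuffer wrapper is needed.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Illustrative cache key backed by a raw byte[]; equality and hashing are
// delegated to java.util.Arrays, avoiding the bookkeeping fields a
// ByteBuffer wrapper would carry for every cached entry.
class RawKeySketch {
    private final byte[] key;

    RawKeySketch(byte[] key) {
        this.key = key;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof RawKeySketch && Arrays.equals(key, ((RawKeySketch) o).key);
    }

    @Override
    public int hashCode() {
        return Arrays.hashCode(key);
    }

    public static void main(String[] args) {
        Map<RawKeySketch, String> cache = new HashMap<>();
        cache.put(new RawKeySketch(new byte[] {1, 2, 3}), "row");
        // A distinct array with the same contents hits the same entry.
        System.out.println(cache.get(new RawKeySketch(new byte[] {1, 2, 3}))); // prints "row"
    }
}
```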





[jira] [Updated] (CASSANDRA-4111) Serializing cache can cause Segfault in 1.1

2012-04-03 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4111:
-

Attachment: 0001-CASSANDRA-4111-v2.patch

ahaaa missed that V2 fixes it. Thanks!

 Serializing cache can cause Segfault in 1.1
 ---

 Key: CASSANDRA-4111
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4111
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
Reporter: Vijay
Assignee: Vijay
 Fix For: 1.1.0

 Attachments: 0001-CASSANDRA-4111-v2.patch, 0001-CASSANDRA-4111.patch


 Rare, but this can happen for sure. It looks like this issue was introduced by 
 CASSANDRA-3862 and hence affects only 1.1.
 {code}
 FreeableMemory old = map.get(key);
 if (old == null)
     return false;
 // see if the old value matches the one we want to replace
 FreeableMemory mem = serialize(value);
 if (mem == null)
     return false; // out of memory.  never mind.
 V oldValue = deserialize(old);
 boolean success = oldValue.equals(oldToReplace) && map.replace(key, old, mem);
 if (success)
     old.unreference();
 else
     mem.unreference();
 return success;
 {code}
 In the above code block we call deserialize(old) without taking a reference to 
 the old memory; this can cause segfaults when the old memory is reclaimed 
 (free is called).
 The fix is to hold the reference just for deserialization:
 {code}
 V oldValue;
 // reference the old value before de-serializing
 old.reference();
 try
 {
     oldValue = deserialize(old);
 }
 finally
 {
     old.unreference();
 }
 {code}
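The reference-counting discipline behind this fix can be shown self-contained. This is an illustrative sketch (names and behavior are assumptions, not the real FreeableMemory API): memory may only be read while the caller holds a reference, and the last unreference() frees it, which is why referencing before deserializing prevents the use-after-free.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of reference-counted memory: read() is only safe while the caller
// holds a reference; dropping the last reference "frees" the backing store.
class RefCountedMemory {
    private final AtomicInteger refs = new AtomicInteger(1); // creator's reference
    private byte[] data;

    RefCountedMemory(byte[] data) { this.data = data; }

    void reference() {
        if (refs.getAndIncrement() <= 0)
            throw new IllegalStateException("already freed");
    }

    void unreference() {
        if (refs.decrementAndGet() == 0)
            data = null; // stand-in for the native free() call
    }

    byte[] read() {
        if (refs.get() <= 0)
            throw new IllegalStateException("use after free");
        return data;
    }

    public static void main(String[] args) {
        RefCountedMemory old = new RefCountedMemory(new byte[] {42});
        // Reference before reading, mirroring the fix above:
        old.reference();
        try {
            System.out.println(old.read()[0]); // prints 42; safe while referenced
        } finally {
            old.unreference();
        }
        old.unreference(); // drop the creator's reference; memory is freed
    }
}
```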





[jira] [Updated] (CASSANDRA-4100) Make scrub and cleanup operations throttled

2012-04-03 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4100:
-

Attachment: 0001-CASSANDRA-4100.patch

The attached patch adds throttling for cleanup and scrub.
Note: compaction_throughput_mb_per_sec is used for the throttling; I am not sure 
whether a separate property for cleanup/scrub would be better.
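The throttling arithmetic can be sketched independently of the patch. This is a minimal illustration under assumed names (not the actual Cassandra throttle code): given bytes processed since the last check and a target rate in MB/s, compute how long the scrub/cleanup thread should pause to stay at or below the target.

```java
// Illustrative throughput throttle: convert the configured MB/s target into
// bytes per millisecond, work out how long the processed bytes *should* have
// taken, and sleep for the difference if we ran ahead of the target.
class ThrottleSketch {
    static long sleepMillis(long bytesSinceLastCheck, long elapsedMillis, int targetMBPerSec) {
        if (targetMBPerSec <= 0)
            return 0; // a non-positive target means throttling is disabled
        long targetBytesPerMilli = targetMBPerSec * 1024L * 1024L / 1000L;
        long expectedMillis = bytesSinceLastCheck / targetBytesPerMilli;
        return Math.max(0, expectedMillis - elapsedMillis);
    }

    public static void main(String[] args) {
        // 16 MB processed in 100 ms against a 16 MB/s target should have
        // taken ~1000 ms, so the thread pauses roughly 900 ms.
        System.out.println(sleepMillis(16L * 1024 * 1024, 100, 16)); // prints 900
    }
}
```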

 Make scrub and cleanup operations throttled
 ---

 Key: CASSANDRA-4100
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4100
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.8
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.0.10

 Attachments: 0001-CASSANDRA-4100.patch


 Scrub and cleanup operations are not throttled; it would be nice to throttle 
 them, or else we are likely to run into IO issues while running them on a live 
 cluster.





[jira] [Updated] (CASSANDRA-3966) KeyCacheKey and RowCacheKey to use raw byte[]

2012-04-03 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3966:
-

Attachment: 0001-CASSANDRA-3966.patch

Simple patch with test. Thanks!

 KeyCacheKey and RowCacheKey to use raw byte[]
 -

 Key: CASSANDRA-3966
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3966
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.0
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1.1

 Attachments: 0001-CASSANDRA-3966.patch


 We can store the raw byte[] instead of a ByteBuffer.
 After reading this mail:
 http://www.mail-archive.com/dev@cassandra.apache.org/msg03725.html
 Each ByteBuffer costs 48 bytes of housekeeping overhead, which can be removed 
 by implementing hashCode and equals directly in KeyCacheKey and RowCacheKey:
 http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/nio/ByteBuffer.java#ByteBuffer.hashCode%28%29





[jira] [Updated] (CASSANDRA-3997) Make SerializingCache Memory Pluggable

2012-04-02 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3997:
-

Attachment: 0001-CASSANDRA-3997-v2.patch

Segfaults happen in multiple places (opening a file, accessing malloc, while 
calling free, and in a lot of unrelated cases)... 
Unless we open the JDK source code and figure out how it is structured, it is hard 
to say when exactly it can fail (let me know if you want to take a look at the 
hs_err*.log). 

On the bright side, at least we can isolate this by calling via JNI. In v2 I 
removed the synchronization; I have also attached it here (please note the yaml 
setting is not included, just to hide it for now). Thanks!
Note: the jemalloc 2.2.5 release works fine, as does the git/dev branch.

 Make SerializingCache Memory Pluggable
 --

 Key: CASSANDRA-3997
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3997
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Vijay
Assignee: Vijay
Priority: Minor
  Labels: cache
 Fix For: 1.2

 Attachments: 0001-CASSANDRA-3997-v2.patch, 0001-CASSANDRA-3997.patch, 
 jna.zip


 The serializing cache uses native malloc and free; by making FM pluggable, users 
 will have a choice of gcc malloc, TCMalloc, or JEMalloc as needed. 
 Initial tests show less fragmentation with JEMalloc, but the only issue is that 
 both TCMalloc and JEMalloc appear to be effectively single threaded (at least 
 they crash in my tests otherwise).
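The pluggability described above amounts to introducing an allocator seam. This is a hypothetical sketch (interface and class names are assumptions, and the real patch goes through JNA to native allocators): the cache codes against a small interface, and any malloc-like backend can be swapped in; an on-heap stand-in is used here so the seam is demonstrable without native code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical allocator seam: the cache would call allocate()/free() and
// never care whether the backing is glibc malloc, jemalloc, or the heap.
interface AllocatorSketch {
    long allocate(int size);
    void free(long address);
}

// On-heap stand-in implementation; fake "addresses" index a map of regions.
class HeapAllocatorSketch implements AllocatorSketch {
    private final Map<Long, byte[]> regions = new HashMap<>();
    private long next = 1;

    public long allocate(int size) {
        long addr = next++;
        regions.put(addr, new byte[size]);
        return addr;
    }

    public void free(long address) {
        regions.remove(address);
    }

    int sizeOf(long address) {
        byte[] r = regions.get(address);
        return r == null ? -1 : r.length;
    }

    public static void main(String[] args) {
        HeapAllocatorSketch alloc = new HeapAllocatorSketch();
        long addr = alloc.allocate(64);
        System.out.println(alloc.sizeOf(addr)); // prints 64
        alloc.free(addr);
    }
}
```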





[jira] [Updated] (CASSANDRA-4111) Serializing cache can cause Segfault in 1.1

2012-04-02 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4111:
-

Attachment: 0001-CASSANDRA-4111.patch

 Serializing cache can cause Segfault in 1.1
 ---

 Key: CASSANDRA-4111
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4111
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
Reporter: Vijay
Assignee: Vijay
 Fix For: 1.1.0

 Attachments: 0001-CASSANDRA-4111.patch


 Rare, but this can happen for sure. It looks like this issue was introduced by 
 CASSANDRA-3862 and hence affects only 1.1.
 {code}
 FreeableMemory old = map.get(key);
 if (old == null)
     return false;
 // see if the old value matches the one we want to replace
 FreeableMemory mem = serialize(value);
 if (mem == null)
     return false; // out of memory.  never mind.
 V oldValue = deserialize(old);
 boolean success = oldValue.equals(oldToReplace) && map.replace(key, old, mem);
 if (success)
     old.unreference();
 else
     mem.unreference();
 return success;
 {code}
 In the above code block we call deserialize(old) without taking a reference to 
 the old memory; this can cause segfaults when the old memory is reclaimed 
 (free is called).
 The fix is to hold the reference just for deserialization:
 {code}
 V oldValue;
 // reference the old value before de-serializing
 old.reference();
 try
 {
     oldValue = deserialize(old);
 }
 finally
 {
     old.unreference();
 }
 {code}





[jira] [Updated] (CASSANDRA-3772) Evaluate Murmur3-based partitioner

2012-04-01 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3772:
-

Attachment: 0001-CASSANDRA-3772-Test.patch

A micro benchmark shows much better performance:

testing size of: 20
Test MD5
MD5 test completed @ 1506
Test Murmur3
Murmur3 test completed @ 781

Hi Dave, while reviewing the patch it looks like 
Murmur3Partitioner.hash 

{code}
hashBytes[1] = (byte) (bufferLong >>> 48);
...
{code}

is kind of redundant with 

{code}
case 15: k2 ^= ((long) key.get(offset+14)) << 48;
... 
{code}

Though I don't think it is going to cause any additional latency :)
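The quoted micro benchmark's shape can be sketched as follows. This is an illustrative harness (not the attached test), and since Murmur3 is not in the JDK, the fmix64 finalizer below — one well-known Murmur3 building block — stands in for the full hash; the point is only the MD5-vs-cheap-mix comparison structure.

```java
import java.security.MessageDigest;
import java.util.concurrent.ThreadLocalRandom;

// Illustrative hash micro-benchmark: time MD5 digests of a 20-byte key
// against a cheap non-cryptographic mixing step (Murmur3's fmix64).
class HashBenchSketch {
    static long fmix64(long h) {
        h ^= h >>> 33;
        h *= 0xff51afd7ed558ccdL;
        h ^= h >>> 33;
        h *= 0xc4ceb9fe1a85ec53L;
        h ^= h >>> 33;
        return h;
    }

    public static void main(String[] args) throws Exception {
        byte[] key = new byte[20]; // matches "testing size of: 20" above
        ThreadLocalRandom.current().nextBytes(key);

        MessageDigest md5 = MessageDigest.getInstance("MD5");
        long t0 = System.nanoTime();
        for (int i = 0; i < 100_000; i++)
            md5.digest(key);
        long md5Nanos = System.nanoTime() - t0;

        long acc = 0; // accumulate so the JIT cannot drop the loop
        long t1 = System.nanoTime();
        for (int i = 0; i < 100_000; i++)
            acc ^= fmix64(key[0] + i);
        long mixNanos = System.nanoTime() - t1;

        System.out.println("MD5: " + md5Nanos / 1_000_000 + " ms, mix: "
                + mixNanos / 1_000_000 + " ms (acc=" + acc + ")");
    }
}
```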



 Evaluate Murmur3-based partitioner
 --

 Key: CASSANDRA-3772
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3772
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jonathan Ellis
Assignee: Dave Brosius
 Fix For: 1.2

 Attachments: 0001-CASSANDRA-3772-Test.patch, MumPartitionerTest.docx, 
 hashed_partitioner.diff, hashed_partitioner_3.diff, try_murmur3.diff, 
 try_murmur3_2.diff


 MD5 is a relatively heavyweight hash to use when we don't need cryptographic 
 qualities, just a good output distribution.  Let's see how much overhead we 
 can save by using Murmur3 instead.





[jira] [Updated] (CASSANDRA-4103) Add stress tool to binaries

2012-03-31 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4103:
-

Attachment: 0001-CASSANDRA-4103.patch

With the patch, a user can run ./tools/stress/bin/stress to execute the stress 
tool from cassandra_home.

 Add stress tool to binaries
 ---

 Key: CASSANDRA-4103
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4103
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Rick Branson
Assignee: Vijay
Priority: Minor
 Attachments: 0001-CASSANDRA-4103.patch


 It would be great to also get the stress tool packaged along with the 
 binaries. Many people don't even know it exists because it's not distributed 
 with them.





[jira] [Updated] (CASSANDRA-4099) IncomingTCPConnection recognizes from by doing socket.getInetAddress() instead of BroadCastAddress

2012-03-29 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4099:
-

Attachment: 0001-CASSANDRA-4099-v3.patch

The attached version incorporates the comments... Note that streaming a file 
will not remove or add the version, which I think is the better option IMHO. 
Please let me know if you think otherwise. Thanks!

 IncomingTCPConnection recognizes from by doing socket.getInetAddress() 
 instead of BroadCastAddress
 --

 Key: CASSANDRA-4099
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4099
 Project: Cassandra
  Issue Type: Bug
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Attachments: 0001-CASSANDRA-4099-v2.patch, 
 0001-CASSANDRA-4099-v3.patch, 0001-CASSANDRA-4099.patch


 Change this.from = socket.getInetAddress() to use the broadcast IP. The 
 problem is that we don't know the broadcast address until the first packet is 
 received; this ticket is to work around that until the first packet is read.





[jira] [Updated] (CASSANDRA-4099) IncomingTCPConnection recognizes from by doing socket.getInetAddress() instead of BroadCastAddress

2012-03-29 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4099:
-

Attachment: 0001-CASSANDRA-4099-v4.patch

v4 goes on top of v3 and fixes the NPE.

 IncomingTCPConnection recognizes from by doing socket.getInetAddress() 
 instead of BroadCastAddress
 --

 Key: CASSANDRA-4099
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4099
 Project: Cassandra
  Issue Type: Bug
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.0.9, 1.1.0

 Attachments: 0001-CASSANDRA-4099-v2.patch, 
 0001-CASSANDRA-4099-v3.patch, 0001-CASSANDRA-4099-v4.patch, 
 0001-CASSANDRA-4099.patch


 Change this.from = socket.getInetAddress() to use the broadcast IP. The 
 problem is that we don't know the broadcast address until the first packet is 
 received; this ticket is to work around that until the first packet is read.





[jira] [Updated] (CASSANDRA-4099) IncomingTCPConnection recognizes from by doing socket.getInetAddress() instead of BroadCastAddress

2012-03-29 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4099:
-

Attachment: 0001-CASSANDRA-4099-v4.patch

 IncomingTCPConnection recognizes from by doing socket.getInetAddress() 
 instead of BroadCastAddress
 --

 Key: CASSANDRA-4099
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4099
 Project: Cassandra
  Issue Type: Bug
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.0.9, 1.1.0

 Attachments: 0001-CASSANDRA-4099-v2.patch, 
 0001-CASSANDRA-4099-v3.patch, 0001-CASSANDRA-4099-v4.patch, 
 0001-CASSANDRA-4099.patch


 Change this.from = socket.getInetAddress() to use the broadcast IP. The 
 problem is that we don't know the broadcast address until the first packet is 
 received; this ticket is to work around that until the first packet is read.





[jira] [Updated] (CASSANDRA-4099) IncomingTCPConnection recognizes from by doing socket.getInetAddress() instead of BroadCastAddress

2012-03-29 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4099:
-

Attachment: (was: 0001-CASSANDRA-4099-v4.patch)

 IncomingTCPConnection recognizes from by doing socket.getInetAddress() 
 instead of BroadCastAddress
 --

 Key: CASSANDRA-4099
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4099
 Project: Cassandra
  Issue Type: Bug
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.0.9, 1.1.0

 Attachments: 0001-CASSANDRA-4099-v2.patch, 
 0001-CASSANDRA-4099-v3.patch, 0001-CASSANDRA-4099-v4.patch, 
 0001-CASSANDRA-4099.patch


 Change this.from = socket.getInetAddress() to use the broadcast IP. The 
 problem is that we don't know the broadcast address until the first packet is 
 received; this ticket is to work around that until the first packet is read.





[jira] [Updated] (CASSANDRA-3722) Send Hints to Dynamic Snitch when Compaction or repair is going on for a node.

2012-03-28 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3722:
-

Attachment: 0001-CASSANDRA-3722-v3.patch

Hi Brandon, Fixed in v3. Thanks!

 Send Hints to Dynamic Snitch when Compaction or repair is going on for a node.
 --

 Key: CASSANDRA-3722
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3722
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1.0
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Attachments: 0001-CASSANDRA-3722-A1-V2.patch, 
 0001-CASSANDRA-3722-A1.patch, 0001-CASSANDRA-3722-v3.patch, 
 0001-CASSANDRA-3723-A2-Patch.patch, 
 0001-Expose-SP-latencies-in-nodetool-proxyhistograms.txt


 Currently the dynamic snitch looks at latency to figure out which node will 
 serve requests best. This works great, but part of the traffic is sent just to 
 collect this data... There is also a window when the snitch 
 doesn't know about a major event that is about to happen on a node 
 (the node that will receive the data request).
 It would be great if we could send some sort of hints to the snitch so it can 
 score based on known events that cause higher latencies.
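The hint idea above can be sketched as a score adjustment. This is a hypothetical illustration (not the dynamic snitch's real scoring formula): start from measured per-replica latency scores and multiply in a penalty for replicas that announced a compaction or repair, so requests steer away before latency actually degrades.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative replica selection: lower score wins; a replica that hinted
// "compaction/repair in progress" carries a penalty multiplier > 1.0.
class SnitchHintSketch {
    static String bestReplica(Map<String, Double> latencyScore, Map<String, Double> penalty) {
        String best = null;
        double bestScore = Double.MAX_VALUE;
        for (Map.Entry<String, Double> e : latencyScore.entrySet()) {
            double score = e.getValue() * penalty.getOrDefault(e.getKey(), 1.0);
            if (score < bestScore) {
                bestScore = score;
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, Double> latency = new HashMap<>();
        latency.put("10.0.0.1", 1.0); // best raw latency
        latency.put("10.0.0.2", 1.2);
        Map<String, Double> penalty = new HashMap<>();
        penalty.put("10.0.0.1", 2.0); // compacting: hint doubles its score
        System.out.println(bestReplica(latency, penalty)); // prints 10.0.0.2
    }
}
```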





[jira] [Updated] (CASSANDRA-4099) IncomingTCPConnection recognizes from by doing socket.getInetAddress() instead of BroadCastAddress

2012-03-28 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4099:
-

Attachment: 0001-CASSANDRA-4099.patch

 IncomingTCPConnection recognizes from by doing socket.getInetAddress() 
 instead of BroadCastAddress
 --

 Key: CASSANDRA-4099
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4099
 Project: Cassandra
  Issue Type: Bug
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Attachments: 0001-CASSANDRA-4099.patch


 Change this.from = socket.getInetAddress() to use the broadcast IP. The 
 problem is that we don't know the broadcast address until the first packet is 
 received; this ticket is to work around that until the first packet is read.





[jira] [Updated] (CASSANDRA-4099) IncomingTCPConnection recognizes from by doing socket.getInetAddress() instead of BroadCastAddress

2012-03-28 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4099:
-

Attachment: 0001-CASSANDRA-4099-v2.patch

On IRC, Brandon explained the changes in flow control that will not allow us to 
stream data from other versions; v2 removes those changes from v1.

 IncomingTCPConnection recognizes from by doing socket.getInetAddress() 
 instead of BroadCastAddress
 --

 Key: CASSANDRA-4099
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4099
 Project: Cassandra
  Issue Type: Bug
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Attachments: 0001-CASSANDRA-4099-v2.patch, 0001-CASSANDRA-4099.patch


 Change this.from = socket.getInetAddress() to use the broadcast IP. The 
 problem is that we don't know the broadcast address until the first packet is 
 received; this ticket is to work around that until the first packet is read.





[jira] [Updated] (CASSANDRA-3772) Evaluate Murmur3-based partitioner

2012-03-27 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3772:
-

Attachment: MumPartitionerTest.docx

 Didn't someone post profiler results fingering MD5 as a bottleneck?
I do have a profile where MD5 was a bottleneck earlier, but after changing 
the bloom filter I am not sure.

Hi Sylvain, please find the attachment. It is a 3-node setup and I am doing very 
basic operations (including an index scan). I can let the secondary index test run 
longer if you want.

 Evaluate Murmur3-based partitioner
 --

 Key: CASSANDRA-3772
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3772
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jonathan Ellis
Assignee: Dave Brosius
 Fix For: 1.2

 Attachments: MumPartitionerTest.docx, hashed_partitioner.diff, 
 try_murmur3.diff, try_murmur3_2.diff


 MD5 is a relatively heavyweight hash to use when we don't need cryptographic 
 qualities, just a good output distribution.  Let's see how much overhead we 
 can save by using Murmur3 instead.





[jira] [Updated] (CASSANDRA-3722) Send Hints to Dynamic Snitch when Compaction or repair is going on for a node.

2012-03-26 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3722:
-

Attachment: 0001-CASSANDRA-3723-A2-Patch.patch

A2 goes on top of A1. A2 also adds pending-queue stats; scoring looks more 
choppy with that metric in the mix.

 Send Hints to Dynamic Snitch when Compaction or repair is going on for a node.
 --

 Key: CASSANDRA-3722
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3722
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1.0
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Attachments: 0001-CASSANDRA-3722-A1-V2.patch, 
 0001-CASSANDRA-3722-A1.patch, 0001-CASSANDRA-3723-A2-Patch.patch


 Currently the dynamic snitch looks at latency to figure out which node 
 will best serve a request. This works well, but part of the traffic is 
 sent just to collect this data... There is also a window when the snitch 
 doesn't know about major events that are about to happen on a node 
 (the node which is going to receive the data request).
 It would be great if we could send some sort of hints to the snitch so it 
 can score based on known events that cause higher latencies.
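The idea above can be sketched as an external severity hint folded into the latency-based score. This is a minimal illustration with hypothetical names, not the actual DynamicEndpointSnitch code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SeveritySketch {
    final Map<String, Double> severity = new ConcurrentHashMap<>();

    // JMX-style operation: report positive/negative severity for an endpoint
    // (e.g. set it while compaction or repair is running there).
    void setSeverity(String endpoint, double value) {
        severity.put(endpoint, value);
    }

    // Latency-based score adjusted by the hint; higher means less attractive.
    double adjustedScore(String endpoint, double latencyScore) {
        return latencyScore + severity.getOrDefault(endpoint, 0.0);
    }
}
```

A node reporting positive severity during compaction would thus score worse and receive less read traffic until the hint is cleared.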

--




[jira] [Updated] (CASSANDRA-3722) Send Hints to Dynamic Snitch when Compaction or repair is going on for a node.

2012-03-26 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3722:
-

Reviewer: brandon.williams

 Send Hints to Dynamic Snitch when Compaction or repair is going on for a node.
 --

 Key: CASSANDRA-3722
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3722
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1.0
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Attachments: 0001-CASSANDRA-3722-A1-V2.patch, 
 0001-CASSANDRA-3722-A1.patch, 0001-CASSANDRA-3723-A2-Patch.patch


 Currently the dynamic snitch looks at latency to figure out which node 
 will best serve a request. This works well, but part of the traffic is 
 sent just to collect this data... There is also a window when the snitch 
 doesn't know about major events that are about to happen on a node 
 (the node which is going to receive the data request).
 It would be great if we could send some sort of hints to the snitch so it 
 can score based on known events that cause higher latencies.

--




[jira] [Updated] (CASSANDRA-3722) Send Hints to Dynamic Snitch when Compaction or repair is going on for a node.

2012-03-23 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3722:
-

Attachment: 0001-CASSANDRA-3722-A1.patch

Attached is the version which just has the hints and doesn't take the 
pending queue into account. (Working on A2, just in case, with artificial 
load for the other DCs.)

In addition there is a JMX operation with which users can force load into or 
out of a node by specifying positive/negative severity.



 Send Hints to Dynamic Snitch when Compaction or repair is going on for a node.
 --

 Key: CASSANDRA-3722
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3722
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1.0
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Attachments: 0001-CASSANDRA-3722-A1.patch


 Currently the dynamic snitch looks at latency to figure out which node 
 will best serve a request. This works well, but part of the traffic is 
 sent just to collect this data... There is also a window when the snitch 
 doesn't know about major events that are about to happen on a node 
 (the node which is going to receive the data request).
 It would be great if we could send some sort of hints to the snitch so it 
 can score based on known events that cause higher latencies.

--




[jira] [Updated] (CASSANDRA-3722) Send Hints to Dynamic Snitch when Compaction or repair is going on for a node.

2012-03-23 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3722:
-

Attachment: 0001-CASSANDRA-3722-A1-V2.patch

Fixed; not sure why I decided to do that :)

 Send Hints to Dynamic Snitch when Compaction or repair is going on for a node.
 --

 Key: CASSANDRA-3722
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3722
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1.0
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Attachments: 0001-CASSANDRA-3722-A1-V2.patch, 
 0001-CASSANDRA-3722-A1.patch


 Currently the dynamic snitch looks at latency to figure out which node 
 will best serve a request. This works well, but part of the traffic is 
 sent just to collect this data... There is also a window when the snitch 
 doesn't know about major events that are about to happen on a node 
 (the node which is going to receive the data request).
 It would be great if we could send some sort of hints to the snitch so it 
 can score based on known events that cause higher latencies.

--




[jira] [Updated] (CASSANDRA-3997) Make SerializingCache Memory Pluggable

2012-03-18 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3997:
-

Attachment: 0001-CASSANDRA-3997.patch

Attached patch makes the off-heap allocation pluggable and adds a 
JEMallocAllocator.

To test JEMalloc, set:

export LD_LIBRARY_PATH=/xxx/jemalloc-2.2.5/lib/
JVM property: -Djava.library.path=/xxx/jemalloc-2.2.5/lib/libjemalloc.so

 Make SerializingCache Memory Pluggable
 --

 Key: CASSANDRA-3997
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3997
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Vijay
Assignee: Vijay
Priority: Minor
  Labels: cache
 Fix For: 1.2

 Attachments: 0001-CASSANDRA-3997.patch, jna.zip


 The serializing cache uses native malloc and free. By making FM pluggable, 
 users will have a choice of glibc malloc, TCMalloc, or JEMalloc as needed. 
 Initial tests show less fragmentation with JEMalloc, but the only issue is 
 that both TCMalloc and JEMalloc appear to be single-threaded (at least 
 they crash in my tests otherwise).
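The pluggable seam described above can be sketched as a tiny allocator interface. All names here are hypothetical illustrations (this is not Cassandra's actual API), and the stand-in implementation is on-heap purely so the sketch runs without JNA:

```java
import java.util.HashMap;
import java.util.Map;

public class AllocatorSketch {
    // The pluggable seam: glibc malloc, TCMalloc, or JEMalloc would each
    // be an implementation of this interface, selected by configuration.
    interface IAllocator {
        long allocate(long size);
        void free(long peer);
    }

    // Trivial on-heap stand-in so the sketch is runnable without native code.
    static class HeapStandIn implements IAllocator {
        final Map<Long, byte[]> blocks = new HashMap<>();
        long next = 1;

        public long allocate(long size) {
            long peer = next++;              // fake "address" handle
            blocks.put(peer, new byte[(int) size]);
            return peer;
        }

        public void free(long peer) {
            blocks.remove(peer);
        }
    }
}
```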

--




[jira] [Updated] (CASSANDRA-3997) Make SerializingCache Memory Pluggable

2012-03-18 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3997:
-

Attachment: 0001-CASSANDRA-3997.patch

 Make SerializingCache Memory Pluggable
 --

 Key: CASSANDRA-3997
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3997
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Vijay
Assignee: Vijay
Priority: Minor
  Labels: cache
 Fix For: 1.2

 Attachments: 0001-CASSANDRA-3997.patch, jna.zip


 The serializing cache uses native malloc and free. By making FM pluggable, 
 users will have a choice of glibc malloc, TCMalloc, or JEMalloc as needed. 
 Initial tests show less fragmentation with JEMalloc, but the only issue is 
 that both TCMalloc and JEMalloc appear to be single-threaded (at least 
 they crash in my tests otherwise).

--




[jira] [Updated] (CASSANDRA-3997) Make SerializingCache Memory Pluggable

2012-03-18 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3997:
-

Attachment: (was: 0001-CASSANDRA-3997.patch)

 Make SerializingCache Memory Pluggable
 --

 Key: CASSANDRA-3997
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3997
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Vijay
Assignee: Vijay
Priority: Minor
  Labels: cache
 Fix For: 1.2

 Attachments: 0001-CASSANDRA-3997.patch, jna.zip


 The serializing cache uses native malloc and free. By making FM pluggable, 
 users will have a choice of glibc malloc, TCMalloc, or JEMalloc as needed. 
 Initial tests show less fragmentation with JEMalloc, but the only issue is 
 that both TCMalloc and JEMalloc appear to be single-threaded (at least 
 they crash in my tests otherwise).

--




[jira] [Updated] (CASSANDRA-3997) Make SerializingCache Memory Pluggable

2012-03-18 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3997:
-

Attachment: (was: 0001-CASSANDRA-3997.patch)

 Make SerializingCache Memory Pluggable
 --

 Key: CASSANDRA-3997
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3997
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Vijay
Assignee: Vijay
Priority: Minor
  Labels: cache
 Fix For: 1.2

 Attachments: 0001-CASSANDRA-3997.patch, jna.zip


 The serializing cache uses native malloc and free. By making FM pluggable, 
 users will have a choice of glibc malloc, TCMalloc, or JEMalloc as needed. 
 Initial tests show less fragmentation with JEMalloc, but the only issue is 
 that both TCMalloc and JEMalloc appear to be single-threaded (at least 
 they crash in my tests otherwise).

--




[jira] [Updated] (CASSANDRA-3997) Make SerializingCache Memory Pluggable

2012-03-18 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3997:
-

Attachment: 0001-CASSANDRA-3997.patch

 Make SerializingCache Memory Pluggable
 --

 Key: CASSANDRA-3997
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3997
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Vijay
Assignee: Vijay
Priority: Minor
  Labels: cache
 Fix For: 1.2

 Attachments: 0001-CASSANDRA-3997.patch, jna.zip


 The serializing cache uses native malloc and free. By making FM pluggable, 
 users will have a choice of glibc malloc, TCMalloc, or JEMalloc as needed. 
 Initial tests show less fragmentation with JEMalloc, but the only issue is 
 that both TCMalloc and JEMalloc appear to be single-threaded (at least 
 they crash in my tests otherwise).

--




[jira] [Updated] (CASSANDRA-2975) Upgrade MurmurHash to version 3

2012-03-15 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-2975:
-

Attachment: 0001-CASSANDRA-2975-v2.patch

Done! Unit tests and functional tests work fine.

 Upgrade MurmurHash to version 3
 ---

 Key: CASSANDRA-2975
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2975
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Brian Lindauer
Assignee: Vijay
Priority: Trivial
  Labels: lhf
 Fix For: 1.2

 Attachments: 0001-CASSANDRA-2975-v2.patch, 0001-CASSANDRA-2975.patch, 
 0001-Convert-BloomFilter-to-use-MurmurHash-v3-instead-of-.patch, 
 0002-Backwards-compatibility-with-files-using-Murmur2-blo.patch, 
 Murmur3Benchmark.java


 MurmurHash version 3 was finalized on June 3. It provides an enormous speedup 
 and increased robustness over version 2, which is what Cassandra implements. 
 Information here:
 http://code.google.com/p/smhasher/
 The reference implementation is here:
 http://code.google.com/p/smhasher/source/browse/trunk/MurmurHash3.cpp?spec=svn136&r=136
 I have already done the work to port the (public domain) reference 
 implementation to Java in the MurmurHash class and updated the BloomFilter 
 class to use the new implementation:
 https://github.com/lindauer/cassandra/commit/cea6068a4a3e5d7d9509335394f9ef3350d37e93
 Apart from the faster hash time, the new version requires only one call to 
 hash() rather than two, since it returns 128 bits of hash instead of 64.
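The "one call instead of two" point can be illustrated with double hashing: the two 64-bit halves of a single 128-bit result are enough to derive all k bloom-filter indexes. The hash function below is a trivial stand-in (NOT Murmur3) so the sketch is self-contained:

```java
public class DoubleHashingSketch {
    // Stand-in 128-bit hash returning two 64-bit halves; real code would
    // use Murmur3 here, this exists only to make the sketch runnable.
    static long[] hash128(byte[] key) {
        long h1 = 1125899906842597L, h2 = 1099511628211L;
        for (byte b : key) {
            h1 = 31 * h1 + b;
            h2 = 37 * h2 + b;
        }
        return new long[] { h1, h2 };
    }

    // One hash call yields all k bloom-filter bucket indexes via
    // index_i = (h1 + i * h2) mod m (double hashing).
    static int[] bucketIndexes(byte[] key, int k, int m) {
        long[] h = hash128(key);
        int[] idx = new int[k];
        for (int i = 0; i < k; i++)
            idx[i] = (int) Math.floorMod(h[0] + i * h[1], (long) m);
        return idx;
    }
}
```

With a 64-bit hash, computing k indexes this way would need two separate hash invocations to obtain h1 and h2; the 128-bit result supplies both at once.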

--




[jira] [Updated] (CASSANDRA-3690) Streaming CommitLog backup

2012-03-08 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3690:
-

Attachment: 0001-CASSANDRA-3690-v2.patch

Hi Jonathan,

The attached patch does exactly what we discussed here. It's almost the same 
as Postgres :)

In addition, we can start the node with -Dcassandra.join_ring=false and then 
restore files one by one via JMX.

Please let me know.

 Streaming CommitLog backup
 --

 Key: CASSANDRA-3690
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3690
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1.1

 Attachments: 0001-CASSANDRA-3690-v2.patch, 
 0001-Make-commitlog-recycle-configurable.patch, 
 0002-support-commit-log-listener.patch, 0003-helper-jmx-methods.patch, 
 0004-external-commitlog-with-sockets.patch, 
 0005-cmmiting-comments-to-yaml.patch


 Problems with the current SSTable backups:
 1) The current backup doesn't allow us to restore to a point in time (within 
 an SSTable).
 2) The current SSTable implementation needs the backup to read from the 
 filesystem, and hence causes additional IO on the normal operational disks.
 3) In 1.0 we removed the flush interval and the size at which a flush is 
 triggered per CF. For some use cases with few writes it becomes 
 increasingly difficult to time the backup right.
 4) Use cases which need external (non-Cassandra) BI need the data at regular 
 intervals rather than waiting for longer or unpredictable intervals.
 Disadvantages of the new solution:
 1) Overhead in processing the mutations during the recovery phase.
 2) A more complicated solution than just copying the file to the archive.
 Additional advantages:
 Online and offline restore.
 Close-to-live incremental backup.
 Note: if the listener agent gets restarted, it is the agent's responsibility 
 to stream the files that were missed or incomplete.
 There are 3 options in the initial implementation:
 1) Backup - once a socket is connected we switch the commit log and 
 send new updates via the socket.
 2) Stream - takes the absolute path of a file, reads the file, 
 and sends the updates via the socket.
 3) Restore - receives the serialized bytes and applies the mutations.
 Side note (not related to this patch as such): the agent which takes the 
 incremental backups is planned to be open sourced soon (name: Priam).

--




[jira] [Updated] (CASSANDRA-4026) EC2 snitch incorrectly reports regions

2012-03-08 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4026:
-

Comment: was deleted

(was: I ran through it a while ago. But the problem is that we cannot change 
the settings on current live clusters which use Ec2MultiRegionSnitch or 
Ec2Snitch without taking downtime. (If we change Ec2Snitch/Ec2MultiRegionSnitch, 
we also need to change the schema for the existing cluster.)

Option 1: Leave the existing snitch as it is and add a new snitch.
Option 2: Parse us-west-1 as us-west and parse us-west-2 as us-west2; as 
us-west-2 is fairly new, it won't affect a lot of users?

Brandon, thoughts?
)

 EC2 snitch incorrectly reports regions
 --

 Key: CASSANDRA-4026
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4026
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.8
 Environment: Ubuntu 10.10 64 bit Oracle Java 6
Reporter: Todd Nine
Assignee: Vijay

 Currently org.apache.cassandra.locator.Ec2Snitch reports us-west in 
 both the Oregon and the California data centers. This is incorrect, since 
 they are different regions:
 California = us-west-1
 Oregon = us-west-2
 wget http://169.254.169.254/latest/meta-data/placement/availability-zone 
 returns the value us-west-2a.
 After parsing, the snitch returns
 DC = us-west, Rack = 2a
 What it should return is
 DC = us-west-2, Rack = a
 This would make it possible to use multi-region when both regions are on 
 the west coast.
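The parsing the description asks for can be sketched as follows. This is a hypothetical helper, not the actual Ec2Snitch change (which, per the later discussion, only appends the number for backward compatibility): trailing digits stay with the region as the DC, and only the AZ letter becomes the rack.

```java
public class ZoneParseSketch {
    // Parse an EC2 availability zone like "us-west-2a" into { DC, rack }.
    static String[] parse(String az) {
        int cut = az.lastIndexOf('-') + 1;        // start of e.g. "2a"
        String region = az.substring(0, cut - 1); // "us-west"
        String suffix = az.substring(cut);        // "2a"
        int i = 0;
        while (i < suffix.length() && Character.isDigit(suffix.charAt(i)))
            i++;                                  // digits belong to the DC
        String dc = i > 0 ? region + "-" + suffix.substring(0, i) : region;
        return new String[] { dc, suffix.substring(i) };
    }
}
```

For "us-west-2a" this yields DC "us-west-2" and rack "a", distinguishing Oregon from California's "us-west-1".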

--




[jira] [Updated] (CASSANDRA-4026) EC2 snitch incorrectly reports regions

2012-03-08 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4026:
-

Reviewer: brandon.williams

 EC2 snitch incorrectly reports regions
 --

 Key: CASSANDRA-4026
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4026
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.8
 Environment: Ubuntu 10.10 64 bit Oracle Java 6
Reporter: Todd Nine
Assignee: Vijay

 Currently org.apache.cassandra.locator.Ec2Snitch reports us-west in 
 both the Oregon and the California data centers. This is incorrect, since 
 they are different regions:
 California = us-west-1
 Oregon = us-west-2
 wget http://169.254.169.254/latest/meta-data/placement/availability-zone 
 returns the value us-west-2a.
 After parsing, the snitch returns
 DC = us-west, Rack = 2a
 What it should return is
 DC = us-west-2, Rack = a
 This would make it possible to use multi-region when both regions are on 
 the west coast.

--




[jira] [Updated] (CASSANDRA-4026) EC2 snitch incorrectly reports regions

2012-03-08 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4026:
-

Attachment: 0001-CASSANDRA-4026.patch

 This is doable though if you're willing to repair afterwards.
The problem is that StorageProxy will start to write data to nodes 
which are not supposed to have it (during an upgrade, a rolling restart takes 
a while)... hence after recovery it will not be recoverable via repair. 
(Say we have nodes A, B, C, D: if B and C are upgraded, A will start to write 
the data to D, thinking it is this datacenter's replica.)

 it's definitely a hack, but only appending the number to the DC if > 1 
 might be the least painful for existing users.
Agreed, and the attached patch does this.

BTW: the attached patch can break once AWS has 24 AZs, which is highly 
unlikely, but I will create a ticket requesting an API for regions instead 
of AZs.

 EC2 snitch incorrectly reports regions
 --

 Key: CASSANDRA-4026
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4026
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.8
 Environment: Ubuntu 10.10 64 bit Oracle Java 6
Reporter: Todd Nine
Assignee: Vijay
 Attachments: 0001-CASSANDRA-4026.patch


 Currently org.apache.cassandra.locator.Ec2Snitch reports us-west in 
 both the Oregon and the California data centers. This is incorrect, since 
 they are different regions:
 California = us-west-1
 Oregon = us-west-2
 wget http://169.254.169.254/latest/meta-data/placement/availability-zone 
 returns the value us-west-2a.
 After parsing, the snitch returns
 DC = us-west, Rack = 2a
 What it should return is
 DC = us-west-2, Rack = a
 This would make it possible to use multi-region when both regions are on 
 the west coast.

--




[jira] [Updated] (CASSANDRA-3997) Make SerializingCache Memory Pluggable

2012-03-04 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3997:
-

Attachment: jna.zip

Attached is the test classes used for the test.

Results on CentOS:

[vijay_tcasstest@vijay_tcass-i-a91ee8cd ~]$ 
/etc/alternatives/jre_1.7.0/bin/java -Djava.library.path=/usr/local/lib/ -cp 
jna.jar:/apps/nfcassandra_server/lib/*:. com.sun.jna.MallocAllocator 5 
200
 total   used   free   shared   buffers   cached
Mem:  71688220   26049380   45638840  0 169116 996172
-/+ buffers/cache:   24884092   46804128
Swap:0  0  0
 Starting Test! 
Total bytes read: 101422934016
Time taken: 25407
 total   used   free   shared   buffers   cached
Mem:  71688220   31981924   39706296  0 169116 996312
-/+ buffers/cache:   30816496   40871724
Swap:0  0  0
 ending Test! 
[vijay_tcasstest@vijay_tcass-i-a91ee8cd ~]$ export 
LD_LIBRARY_PATH=/usr/local/lib/
[vijay_tcasstest@vijay_tcass-i-a91ee8cd ~]$ 
/etc/alternatives/jre_1.7.0/bin/java -Djava.library.path=/usr/local/lib/ -cp 
jna.jar:/apps/nfcassandra_server/lib/*:. com.sun.jna.TCMallocAllocator 5 
200
 total   used   free   shared   buffers   cached
Mem:  71688220   26054620   45633600  0 169128 996228
-/+ buffers/cache:   24889264   46798956
Swap:0  0  0
 Starting Test! 
Total bytes read: 101304894464
Time taken: 46387
 total   used   free   shared   buffers   cached
Mem:  71688220   28535136   43153084  0 169128 996436
-/+ buffers/cache:   27369572   44318648
Swap:0  0  0
 ending Test! 
[vijay_tcasstest@vijay_tcass-i-a91ee8cd ~]$ export 
LD_LIBRARY_PATH=~/jemalloc-2.2.5/lib/ 
[vijay_tcasstest@vijay_tcass-i-a91ee8cd ~]$ 
/etc/alternatives/jre_1.7.0/bin/java -Djava.library.path=~/jemalloc-2.2.5/lib/ 
-cp jna.jar:/apps/nfcassandra_server/lib/*:. com.sun.jna.JEMallocAllocator 
5 200
 total   used   free   shared   buffers   cached
Mem:  71688220   26060604   45627616  0 169128 996300
-/+ buffers/cache:   24895176   46793044
Swap:0  0  0
 Starting Test! 
Total bytes read: 101321734144
Time taken: 29937
 total   used   free   shared   buffers   cached
Mem:  71688220   28472436   43215784  0 169128 996440
-/+ buffers/cache:   27306868   44381352
Swap:0  0  0
 ending Test! 
[vijay_tcasstest@vijay_tcass-i-a91ee8cd ~]$ 

 Make SerializingCache Memory Pluggable
 --

 Key: CASSANDRA-3997
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3997
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.2
Reporter: Vijay
Priority: Minor
  Labels: cache
 Fix For: 1.2

 Attachments: jna.zip


 The serializing cache uses native malloc and free. By making FM pluggable, 
 users will have a choice of glibc malloc, TCMalloc, or JEMalloc as needed. 
 Initial tests show less fragmentation with JEMalloc, but the only issue is 
 that both TCMalloc and JEMalloc appear to be single-threaded (at least 
 they crash in my tests otherwise).

--




[jira] [Updated] (CASSANDRA-3997) Make SerializingCache Memory Pluggable

2012-03-04 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3997:
-

Assignee: Vijay

 Make SerializingCache Memory Pluggable
 --

 Key: CASSANDRA-3997
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3997
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.2
Reporter: Vijay
Assignee: Vijay
Priority: Minor
  Labels: cache
 Fix For: 1.2

 Attachments: jna.zip


 The serializing cache uses native malloc and free. By making FM pluggable, 
 users will have a choice of glibc malloc, TCMalloc, or JEMalloc as needed. 
 Initial tests show less fragmentation with JEMalloc, but the only issue is 
 that both TCMalloc and JEMalloc appear to be single-threaded (at least 
 they crash in my tests otherwise).

--




[jira] [Updated] (CASSANDRA-3555) Bootstrapping to handle more failure

2012-03-04 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3555:
-

Attachment: 0001-CASSANDRA-3555-v1.patch

Hi Brandon, the attached patch cancels the bootstrap when the other node 
gets restarted or killed.

 Bootstrapping to handle more failure
 

 Key: CASSANDRA-3555
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3555
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.5
Reporter: Vijay
Assignee: Vijay
 Fix For: 1.2

 Attachments: 0001-CASSANDRA-3555-v1.patch, 
 3555-bootstrap-with-down-node-test.txt, 3555-bootstrap-with-down-node.txt


 We might want to handle failures during bootstrapping:
 1) When none of the seeds are available to communicate, throw an exception.
 2) When any node it is bootstrapping from fails, try the next one in the 
 list (and if the list is exhausted, throw an exception).
 3) Clean all existing files in the data directory before starting, just in 
 case we retry.
 4) Currently when one node is down in the cluster the bootstrap will fail, 
 because the bootstrapping node doesn't know which one is actually down.
 Also print the nodetool ring output in the logs so we can troubleshoot later 
 if it fails.
 Currently if any of the above happens, the node either skips the bootstrap 
 or hangs.
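Point 2 above is essentially a try-next-source loop. A minimal sketch, with hypothetical names (this is not Cassandra's actual bootstrap code):

```java
import java.util.List;

public class BootstrapRetrySketch {
    interface Source {
        void stream() throws Exception;
    }

    // Try each candidate source in turn; throw only once the whole list
    // is exhausted, keeping the last failure as the cause.
    static void bootstrapFrom(List<Source> sources) throws Exception {
        Exception last = null;
        for (Source s : sources) {
            try {
                s.stream();
                return;               // success: done
            } catch (Exception e) {
                last = e;             // remember and try the next source
            }
        }
        throw new Exception("all bootstrap sources exhausted", last);
    }
}
```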

--




[jira] [Updated] (CASSANDRA-3555) Bootstrapping to handle more failure

2012-03-04 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3555:
-

Attachment: 0001-CASSANDRA-3555-v1.patch

 Bootstrapping to handle more failure
 

 Key: CASSANDRA-3555
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3555
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.5
Reporter: Vijay
Assignee: Vijay
 Fix For: 1.2

 Attachments: 0001-CASSANDRA-3555-v1.patch, 
 3555-bootstrap-with-down-node-test.txt, 3555-bootstrap-with-down-node.txt


 We might want to handle failures during bootstrapping:
 1) When none of the seeds are available to communicate, throw an exception.
 2) When any node it is bootstrapping from fails, try the next one in the 
 list (and if the list is exhausted, throw an exception).
 3) Clean all existing files in the data directory before starting, just in 
 case we retry.
 4) Currently when one node is down in the cluster the bootstrap will fail, 
 because the bootstrapping node doesn't know which one is actually down.
 Also print the nodetool ring output in the logs so we can troubleshoot later 
 if it fails.
 Currently if any of the above happens, the node either skips the bootstrap 
 or hangs.

--




[jira] [Updated] (CASSANDRA-3555) Bootstrapping to handle more failure

2012-03-04 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3555:
-

Attachment: (was: 0001-CASSANDRA-3555-v1.patch)

 Bootstrapping to handle more failure
 

 Key: CASSANDRA-3555
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3555
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.5
Reporter: Vijay
Assignee: Vijay
 Fix For: 1.2

 Attachments: 0001-CASSANDRA-3555-v1.patch, 
 3555-bootstrap-with-down-node-test.txt, 3555-bootstrap-with-down-node.txt


 We might want to handle failures in bootstrapping:
 1) When none of the seeds are reachable, throw an exception.
 2) When any one of the nodes it is bootstrapping from fails, try the next in 
 the list (and if the list is exhausted, throw an exception).
 3) Clean all existing files in the data directory before starting, in case 
 we retry.
 4) Currently, when one node is down in the cluster the bootstrap will 
 fail, because the bootstrapping node doesn't know which one is actually 
 down.
 Also print the nodetool ring output in the logs so we can troubleshoot later if it fails.
 Currently, if any of the above happens the node either skips the bootstrap 
 or hangs.
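A rough sketch of the retry behaviour described in items 1) and 2): walk the candidate list, try each node in turn, and throw only once the list is exhausted. This is a hypothetical illustration (Source and bootstrapFrom are made-up names, not Cassandra's actual bootstrap code).

```java
import java.util.List;

public class BootstrapRetry {
    // A node we could bootstrap (stream data) from; hypothetical interface.
    public interface Source { void stream() throws Exception; }

    public static void bootstrapFrom(List<Source> sources) {
        if (sources.isEmpty())
            throw new IllegalStateException("no sources available to bootstrap from");
        Exception last = null;
        for (Source s : sources) {
            try {
                s.stream();   // attempt to bootstrap from this node
                return;       // success: stop trying
            } catch (Exception e) {
                last = e;     // this node failed: fall through to the next one
            }
        }
        // List exhausted: surface the failure instead of hanging silently.
        throw new IllegalStateException("all bootstrap sources failed", last);
    }

    public static void main(String[] args) {
        // First source fails mid-stream, second succeeds: bootstrap completes.
        List<Source> sources = List.of(
            () -> { throw new Exception("node down"); },
            () -> {});
        bootstrapFrom(sources);
        System.out.println("bootstrapped");
    }
}
```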





[jira] [Updated] (CASSANDRA-3583) Add rebuild index JMX command

2012-03-01 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3583:
-

Attachment: 0001-CASSANDRA-3583-fix.patch

Hi Pavel, it is not broken per se... We allow queries to flow through during 
the rebuild phase... Attached is a patch which fixes it. Thanks! 

 Add rebuild index JMX command
 ---

 Key: CASSANDRA-3583
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3583
 Project: Cassandra
  Issue Type: New Feature
  Components: Core, Tools
Affects Versions: 1.1.0
Reporter: Jonathan Ellis
Assignee: Vijay
Priority: Minor
 Fix For: 1.1.0

 Attachments: 0001-3583-v2.patch, 0001-3583.patch, 
 0001-CASSANDRA-3583-fix.patch


 CASSANDRA-1740 allows aborting an index build, but there is no way to 
 re-attempt the build without restarting the server.
 We've also had requests to allow rebuilding an index that *has* been built, 
 so it would be nice to kill two birds with one stone here.





[jira] [Updated] (CASSANDRA-3984) missing documents for caching in 1.1

2012-02-29 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3984:
-

Attachment: 0001-docs-for-caching.patch

 missing documents for caching in 1.1
 

 Key: CASSANDRA-3984
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3984
 Project: Cassandra
  Issue Type: Bug
  Components: Documentation & website
Affects Versions: 1.1.0
Reporter: Vijay
Assignee: Vijay
 Fix For: 1.1.0

 Attachments: 0001-docs-for-caching.patch


 Add row cache and key cache settings documentation to CliHelp.yaml





[jira] [Updated] (CASSANDRA-3966) KeyCacheKey and RowCacheKey to use raw byte[]

2012-02-27 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3966:
-

Summary: KeyCacheKey and RowCacheKey to use raw byte[]  (was: KeyCacheKey 
and RowCacheKey to use raw cache)

 KeyCacheKey and RowCacheKey to use raw byte[]
 -

 Key: CASSANDRA-3966
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3966
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.8
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.0.9, 1.1.1


 We can just store the raw byte[] which backs the decorated key instead of 
 storing the ByteBuffer.
 After reading the mail
 http://www.mail-archive.com/dev@cassandra.apache.org/msg03725.html
 Each ByteBuffer carries about 48 bytes of housekeeping overhead, which can be 
 removed by implementing hashCode and equals in KeyCacheKey and RowCacheKey.
 http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/nio/ByteBuffer.java#ByteBuffer.hashCode%28%29
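The saving described above comes from hashing and comparing the raw bytes directly. A minimal sketch of such a content-based key (illustrative only; not the actual KeyCacheKey/RowCacheKey code):

```java
import java.util.Arrays;

// Holds the key as a raw byte[] instead of wrapping it in a ByteBuffer,
// with hashCode/equals computed over the array contents.
public class RawCacheKey {
    private final byte[] key;

    public RawCacheKey(byte[] key) { this.key = key; }

    @Override
    public int hashCode() {
        return Arrays.hashCode(key); // content-based hash over the raw bytes
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof RawCacheKey
            && Arrays.equals(key, ((RawCacheKey) o).key);
    }
}
```

Two keys built from equal byte contents compare equal and hash identically, so a map keyed this way behaves the same as with ByteBuffer keys, minus the wrapper object's overhead.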





[jira] [Updated] (CASSANDRA-3966) KeyCacheKey and RowCacheKey to use raw byte[]

2012-02-27 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3966:
-

Description: 
We can just store the raw byte[] instead of the ByteBuffer.

After reading the mail
http://www.mail-archive.com/dev@cassandra.apache.org/msg03725.html
Each ByteBuffer carries about 48 bytes of housekeeping overhead, which can be 
removed by implementing hashCode and equals in KeyCacheKey and RowCacheKey.
http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/nio/ByteBuffer.java#ByteBuffer.hashCode%28%29


  was:
We can just store the raw byte[] which is backes decorated key insted of 
storing the byteBuffer,

After reading the mail
http://www.mail-archive.com/dev@cassandra.apache.org/msg03725.html
Each ByteBuffer takes 48 bytes = for house keeping can be removed by just 
implementing hashcode and equals in the KeyCacheKey and RowCacheKey
http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/nio/ByteBuffer.java#ByteBuffer.hashCode%28%29



 KeyCacheKey and RowCacheKey to use raw byte[]
 -

 Key: CASSANDRA-3966
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3966
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.8
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.0.9, 1.1.1


 We can just store the raw byte[] instead of the ByteBuffer.
 After reading the mail
 http://www.mail-archive.com/dev@cassandra.apache.org/msg03725.html
 Each ByteBuffer carries about 48 bytes of housekeeping overhead, which can be 
 removed by implementing hashCode and equals in KeyCacheKey and RowCacheKey.
 http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/nio/ByteBuffer.java#ByteBuffer.hashCode%28%29





[jira] [Updated] (CASSANDRA-3610) Checksum improvement for CommitLog

2012-02-23 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3610:
-

Attachment: 0001-CASSANDRA-3610-v4.patch

Attached patch uses CRC32 for the commitlog alone. Thanks!

 Checksum improvement for CommitLog
 --

 Key: CASSANDRA-3610
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3610
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1.0
 Environment: JVM
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1.1

 Attachments: 0001-CASSANDRA-3610-v4.patch, 
 0001-use-pure-java-CRC32-v2.patch, 0001-use-pure-java-CRC32-v3.patch, 
 0001-use-pure-java-CRC32.patch, TestCrc32Performance.java, 
 TestCrc32Performance.java, crc32Test.xlsx


 When compression is on, we currently see checksumming take about 40% more 
 CPU than the Snappy library itself.
 Looks like Hadoop solved this by implementing their own checksum; we can 
 either use it or implement something similar.
 http://images.slidesharecdn.com/1toddlipconyanpeichen-cloudera-hadoopandperformance-final-10132228-phpapp01-slide-15-768.jpg?1321043717
 In our test environment it provided a 50% improvement over the native 
 implementation, which uses JNI to call into the OS.
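For reference, this is roughly what checksumming a commitlog-style buffer looks like with the JDK's java.util.zip.CRC32, the implementation whose per-call cost is at issue here; the attached patches swap in a pure-Java CRC to reduce that overhead. The snippet is a minimal illustration, not the patch itself.

```java
import java.util.zip.CRC32;

public class ChecksumExample {
    // Checksum an entire buffer with the JDK's CRC32.
    public static long checksum(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length); // feed the whole buffer
        return crc.getValue();            // 32-bit CRC, zero-extended to long
    }

    public static void main(String[] args) {
        System.out.println(Long.toHexString(checksum("hello".getBytes())));
    }
}
```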





[jira] [Updated] (CASSANDRA-3555) Bootstrapping to handle more failure

2012-02-22 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3555:
-

Reviewer: brandon.williams

 Bootstrapping to handle more failure
 

 Key: CASSANDRA-3555
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3555
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.5
Reporter: Vijay
Assignee: Vijay
 Fix For: 1.2

 Attachments: 3555-bootstrap-with-down-node-test.txt, 
 3555-bootstrap-with-down-node.txt


 We might want to handle failures in bootstrapping:
 1) When none of the seeds are reachable, throw an exception.
 2) When any one of the nodes it is bootstrapping from fails, try the next in 
 the list (and if the list is exhausted, throw an exception).
 3) Clean all existing files in the data directory before starting, in case 
 we retry.
 4) Currently, when one node is down in the cluster the bootstrap will 
 fail, because the bootstrapping node doesn't know which one is actually 
 down.
 Also print the nodetool ring output in the logs so we can troubleshoot later if it fails.
 Currently, if any of the above happens the node either skips the bootstrap 
 or hangs.





[jira] [Updated] (CASSANDRA-3412) make nodetool ring ownership smarter

2012-02-17 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3412:
-

Attachment: 0001-CASSANDRA-3412-v2.patch

Aha, I was not handling the case where DC1 and DC2 weren't valid DCs... 
The attached patch has the fix.


[vijay_tcasstest@vijay_tcass-i-a6643ac3 ~]$ nt ring Keyspace1
Address DC  RackStatus State   Load
Effective-Owership  Token   

   141784319550391026443072753096942836216 
79.125.30.58eu-west 1c  Up Normal  57.55 KB21.36%   
   7980510910520838461985007190626882496   
107.21.183.168  us-east 1c  Up Normal  79.97 KB6.42%
   18904575940052136859076367081351254013  
46.137.134.6eu-west 1c  Up Normal  13.49 KB22.22%   
   56713727820156410577229101239000783353  
50.16.117.152   us-east 1c  Up Normal  79.78 KB11.11%   
   75618303760208547436305468319979289255  
50.19.163.142   us-east 1c  Up Normal  119.48 KB   33.33%   
   132332031580364958013534569558607324497 
46.51.157.33eu-west 1c  Down   Normal  27.37 KB5.56%
   141784319550391026443072753096942836216 
[vijay_tcasstest@vijay_tcass-i-a6643ac3 ~]$ nt ring
Warning: Output contains ownership information which does not include 
replication factor.
Why? Non-system keyspaces don't all have the same topology.
Warning: Use nodetool ring <keyspace> to specify a keyspace. 
Address DC  RackStatus State   LoadOwns 
   Token   

   141784319550391026443072753096942836216 
79.125.30.58eu-west 1c  Up Normal  57.55 KB21.36%   
   7980510910520838461985007190626882496   
107.21.183.168  us-east 1c  Up Normal  79.97 KB6.42%
   18904575940052136859076367081351254013  
46.137.134.6eu-west 1c  Up Normal  13.49 KB22.22%   
   56713727820156410577229101239000783353  
50.16.117.152   us-east 1c  Up Normal  79.78 KB11.11%   
   75618303760208547436305468319979289255  
50.19.163.142   us-east 1c  Up Normal  119.48 KB   33.33%   
   132332031580364958013534569558607324497 
46.51.157.33eu-west 1c  Down   Normal  27.37 KB5.56%
   141784319550391026443072753096942836216 
[vijay_tcasstest@vijay_tcass-i-a6643ac3 ~]$ nt ring nts
Address DC  RackStatus State   Load
Effective-Owership  Token   

   141784319550391026443072753096942836216 
79.125.30.58eu-west 1c  Up Normal  57.55 KB0.00%
   7980510910520838461985007190626882496   
107.21.183.168  us-east 1c  Up Normal  79.97 KB0.00%
   18904575940052136859076367081351254013  
46.137.134.6eu-west 1c  Up Normal  13.49 KB0.00%
   56713727820156410577229101239000783353  
50.16.117.152   us-east 1c  Up Normal  79.78 KB0.00%
   75618303760208547436305468319979289255  
50.19.163.142   us-east 1c  Up Normal  119.48 KB   0.00%
   132332031580364958013534569558607324497 
46.51.157.33eu-west 1c  Down   Normal  27.37 KB0.00%
   141784319550391026443072753096942836216 
[vijay_tcasstest@vijay_tcass-i-a6643ac3 ~]$ 


 make nodetool ring ownership smarter
 

 Key: CASSANDRA-3412
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3412
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jackson Chung
Assignee: Vijay
Priority: Minor
 Attachments: 0001-CASSANDRA-3412-v2.patch, 0001-CASSANDRA-3412.patch


 Just a thought... the ownership info currently just looks at the tokens and 
 calculates the % between nodes. It would be nice if it could do more, such as 
 discriminating between nodes of each DC, replica sets, etc. 
 The ticket is open for suggestions...


[jira] [Updated] (CASSANDRA-3610) Checksum improvement for CompressedRandomAccessReader

2012-02-17 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3610:
-

Attachment: crc32Test.xlsx
TestCrc32Performance.java

Looks like OpenJDK 6 and Sun JDK 7 have almost the same performance on the 
JNI path. Attached is the multi-threaded test (10 threads) for both versions. 
Hence, closing this issue as not-an-issue, since we expect users to upgrade to JDK 7.

 Checksum improvement for CompressedRandomAccessReader
 -

 Key: CASSANDRA-3610
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3610
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1.0
 Environment: JVM
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1.1

 Attachments: 0001-use-pure-java-CRC32-v2.patch, 
 0001-use-pure-java-CRC32-v3.patch, 0001-use-pure-java-CRC32.patch, 
 TestCrc32Performance.java, TestCrc32Performance.java, crc32Test.xlsx


 When compression is on, we currently see checksumming take about 40% more 
 CPU than the Snappy library itself.
 Looks like Hadoop solved this by implementing their own checksum; we can 
 either use it or implement something similar.
 http://images.slidesharecdn.com/1toddlipconyanpeichen-cloudera-hadoopandperformance-final-10132228-phpapp01-slide-15-768.jpg?1321043717
 In our test environment it provided a 50% improvement over the native 
 implementation, which uses JNI to call into the OS.





[jira] [Updated] (CASSANDRA-3412) make nodetool ring ownership smarter

2012-02-17 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3412:
-

Attachment: 0001-CASSANDRA-3412-v3.patch

Attached patch with the fix for the warning message. I have also added a note 
to CHANGES.txt to inform users. Will commit in a few minutes. Thanks!

 make nodetool ring ownership smarter
 

 Key: CASSANDRA-3412
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3412
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jackson Chung
Assignee: Vijay
Priority: Minor
 Attachments: 0001-CASSANDRA-3412-v2.patch, 
 0001-CASSANDRA-3412-v3.patch, 0001-CASSANDRA-3412.patch


 Just a thought... the ownership info currently just looks at the tokens and 
 calculates the % between nodes. It would be nice if it could do more, such as 
 discriminating between nodes of each DC, replica sets, etc. 
 The ticket is open for suggestions...





[jira] [Updated] (CASSANDRA-3412) make nodetool ring ownership smarter

2012-02-16 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3412:
-

Attachment: 0001-CASSANDRA-3412.patch

I tried to make this generic enough; attached is a simple patch to display 
what we discussed. If we cannot figure out a keyspace to display, the default 
output is the same as today.

Default:
Warning: Output contains ownership information which does not include 
replication factor.
Warning: Use nodetool ring <keyspace> to specify a keyspace. 
Address DC  RackStatus State   LoadOwns 
   Token   

   141784319550391026443072753096942836216 
107.21.183.168  us-east 1c  Up Normal  38.57 KB27.78%   
   18904575940052136859076367081351254013  
79.125.30.58eu-west 1c  Up Normal  36.44 KB22.22%   
   56713727820156410577229101239000783353  
50.16.117.152   us-east 1c  Up Normal  52.03 KB11.11%   
   75618303760208547436305468319979289255  
50.19.163.142   us-east 1c  Up Normal  51.59 KB33.33%   
   132332031580364958013534569558607324497 
46.51.157.33eu-west 1c  Up Normal  31.64 KB5.56%
   141784319550391026443072753096942836216 

Effective nt ring:
[vijay_tcasstest@vijay_tcass-i-a6643ac3 ~]$ nt ring
Address DC  RackStatus State   Load
Effective-Owership  Token   

   141784319550391026443072753096942836216 
107.21.183.168  us-east 1c  Up Normal  27.23 KB66.67%   
   18904575940052136859076367081351254013  
79.125.30.58eu-west 1c  Up Normal  31.51 KB50.00%   
   56713727820156410577229101239000783353  
50.16.117.152   us-east 1c  Up Normal  47.1 KB 66.67%   
   75618303760208547436305468319979289255  
50.19.163.142   us-east 1c  Up Normal  42.52 KB66.67%   
   132332031580364958013534569558607324497 
46.51.157.33eu-west 1c  Up Normal  36.32 KB50.00%   
   141784319550391026443072753096942836216 




 make nodetool ring ownership smarter
 

 Key: CASSANDRA-3412
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3412
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jackson Chung
Assignee: Vijay
Priority: Minor
 Attachments: 0001-CASSANDRA-3412.patch


 Just a thought... the ownership info currently just looks at the tokens and 
 calculates the % between nodes. It would be nice if it could do more, such as 
 discriminating between nodes of each DC, replica sets, etc. 
 The ticket is open for suggestions...





[jira] [Updated] (CASSANDRA-2975) Upgrade MurmurHash to version 3

2012-02-13 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-2975:
-

Attachment: 0001-CASSANDRA-2975.patch

Attached is the refactor, which includes fixes per the suggestions. I added a 
factory to make adding newer hashes easier, but left the legacy hash alone; it 
would be fairly trivial and cleaner if we want to refactor a little more. Let 
me know. Tests pass and the long test shows a significant improvement. Thanks, 
Brian!

 Upgrade MurmurHash to version 3
 ---

 Key: CASSANDRA-2975
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2975
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Brian Lindauer
Assignee: Vijay
Priority: Trivial
  Labels: lhf
 Fix For: 1.2

 Attachments: 0001-CASSANDRA-2975.patch, 
 0001-Convert-BloomFilter-to-use-MurmurHash-v3-instead-of-.patch, 
 0002-Backwards-compatibility-with-files-using-Murmur2-blo.patch, 
 Murmur3Benchmark.java


 MurmurHash version 3 was finalized on June 3. It provides an enormous speedup 
 and increased robustness over version 2, which is implemented in Cassandra. 
 Information here:
 http://code.google.com/p/smhasher/
 The reference implementation is here:
 http://code.google.com/p/smhasher/source/browse/trunk/MurmurHash3.cpp?spec=svn136r=136
 I have already done the work to port the (public domain) reference 
 implementation to Java in the MurmurHash class and updated the BloomFilter 
 class to use the new implementation:
 https://github.com/lindauer/cassandra/commit/cea6068a4a3e5d7d9509335394f9ef3350d37e93
 Apart from the faster hash time, the new version only requires one call to 
 hash() rather than 2, since it returns 128 bits of hash instead of 64.
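One reason a single 128-bit call suffices: the two 64-bit halves of the result can be combined to derive all k bloom filter probe indexes, using the standard double-hashing trick (g_i = h1 + i*h2). The sketch below illustrates that trick on the assumption that the filter uses it; it is not the patch's actual code.

```java
public class BloomIndexes {
    // Derive k probe positions from the two 64-bit halves of one 128-bit hash.
    public static long[] indexes(long h1, long h2, int k, long numBits) {
        long[] idx = new long[k];
        for (int i = 0; i < k; i++) {
            long bit = (h1 + i * h2) % numBits; // g_i(x) = h1 + i*h2
            if (bit < 0)
                bit += numBits; // Java's % can yield negatives
            idx[i] = bit;
        }
        return idx;
    }
}
```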





[jira] [Updated] (CASSANDRA-2975) Upgrade MurmurHash to version 3

2012-02-13 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-2975:
-

Attachment: 0001-CASSANDRA-2975.patch

Updating the patch because the old one missed the newly created files.

 Upgrade MurmurHash to version 3
 ---

 Key: CASSANDRA-2975
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2975
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Brian Lindauer
Assignee: Vijay
Priority: Trivial
  Labels: lhf
 Fix For: 1.2

 Attachments: 0001-CASSANDRA-2975.patch, 
 0001-Convert-BloomFilter-to-use-MurmurHash-v3-instead-of-.patch, 
 0002-Backwards-compatibility-with-files-using-Murmur2-blo.patch, 
 Murmur3Benchmark.java


 MurmurHash version 3 was finalized on June 3. It provides an enormous speedup 
 and increased robustness over version 2, which is implemented in Cassandra. 
 Information here:
 http://code.google.com/p/smhasher/
 The reference implementation is here:
 http://code.google.com/p/smhasher/source/browse/trunk/MurmurHash3.cpp?spec=svn136r=136
 I have already done the work to port the (public domain) reference 
 implementation to Java in the MurmurHash class and updated the BloomFilter 
 class to use the new implementation:
 https://github.com/lindauer/cassandra/commit/cea6068a4a3e5d7d9509335394f9ef3350d37e93
 Apart from the faster hash time, the new version only requires one call to 
 hash() rather than 2, since it returns 128 bits of hash instead of 64.





[jira] [Updated] (CASSANDRA-2975) Upgrade MurmurHash to version 3

2012-02-13 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-2975:
-

Attachment: (was: 0001-CASSANDRA-2975.patch)

 Upgrade MurmurHash to version 3
 ---

 Key: CASSANDRA-2975
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2975
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Brian Lindauer
Assignee: Vijay
Priority: Trivial
  Labels: lhf
 Fix For: 1.2

 Attachments: 0001-CASSANDRA-2975.patch, 
 0001-Convert-BloomFilter-to-use-MurmurHash-v3-instead-of-.patch, 
 0002-Backwards-compatibility-with-files-using-Murmur2-blo.patch, 
 Murmur3Benchmark.java


 MurmurHash version 3 was finalized on June 3. It provides an enormous speedup 
 and increased robustness over version 2, which is implemented in Cassandra. 
 Information here:
 http://code.google.com/p/smhasher/
 The reference implementation is here:
 http://code.google.com/p/smhasher/source/browse/trunk/MurmurHash3.cpp?spec=svn136r=136
 I have already done the work to port the (public domain) reference 
 implementation to Java in the MurmurHash class and updated the BloomFilter 
 class to use the new implementation:
 https://github.com/lindauer/cassandra/commit/cea6068a4a3e5d7d9509335394f9ef3350d37e93
 Apart from the faster hash time, the new version only requires one call to 
 hash() rather than 2, since it returns 128 bits of hash instead of 64.





[jira] [Updated] (CASSANDRA-2506) Push read repair setting down to the DC-level

2012-02-10 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-2506:
-

Attachment: 0002-dc-local-read-repair-2506.patch
0001-avro-and-Thrift-2506.patch

Hi Sylvain, please find the attachment.

 Push read repair setting down to the DC-level
 -

 Key: CASSANDRA-2506
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2506
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Brandon Williams
Assignee: Vijay
Priority: Minor
 Fix For: 1.1.1

 Attachments: 0001-avro-and-Thrift-2506.patch, 
 0001-dc-localized-read-repair-v2.patch, 0001-dc-localized-read-repair.patch, 
 0001-documentation-for-read_repair-v4.patch, 
 0001-thrift-and-avro-changes-v3.patch, 0002-dc-local-read-repair-2506.patch, 
 0002-dc-localized-read-repair-v3.patch, 0002-thrift-and-avro-v2.patch, 
 0002-thrift-and-avro-v4.patch, 0002-thrift-and-avro.patch, 
 0003-dc-localized-read-repair-v4.patch, 
 0003-documentation-for-read_repair-v3.patch, 
 0003-documentation-for-read_repair_options-v2.patch, 
 0003-documentation-for-read_repair_options.patch


 Currently, read repair is a global setting.  However, when you have two DCs 
 and use one for analytics, it would be nice to turn it off only for that DC 
 so the live DC serving the application can still benefit from it.





[jira] [Updated] (CASSANDRA-3867) Disablethrift and Enablethrift can leave behind zombie connections on THSHA server

2012-02-10 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3867:
-

Attachment: 0001-CASSANDRA-3867.patch

Simple patch to close the selector on disablethrift.

 Disablethrift and Enablethrift can leave behind zombie connections on THSHA 
 server
 ---

 Key: CASSANDRA-3867
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3867
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.7
Reporter: Vijay
Assignee: Vijay
 Fix For: 1.0.8

 Attachments: 0001-CASSANDRA-3867.patch


 While doing nodetool disablethrift we disable the selector threads and close 
 them... but the connections are still active.
 Enablethrift creates new selector threads because we create a new 
 ThriftServer(), which leaves the old connections as zombies.
 I think the right fix is to call server.interrupt() and then close the 
 connections when they are done selecting.
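A hypothetical sketch of that shutdown path: wake the selector, close every registered connection, then close the selector itself, so no channel is left behind as a zombie. Illustrative only; not the THSHA server's actual code.

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class SelectorShutdown {
    public static void shutdown(Selector selector) throws IOException {
        selector.wakeup(); // unblock any thread parked in select()
        for (SelectionKey key : selector.keys()) {
            key.cancel();          // deregister the connection
            key.channel().close(); // and actually close it
        }
        selector.close(); // finally release the selector itself
    }
}
```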





[jira] [Updated] (CASSANDRA-1956) Convert row cache to row+filter cache

2012-02-08 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-1956:
-

Attachment: 0001-commiting-block-cache.patch

This patch is not complete yet; I just wanted to show it and see what you 
think. It works something like a block cache: it caches blocks of columns, 
where the user can choose the block size. If the query falls within a block, 
we are good; we just pull that block into memory. Otherwise we scan through 
the blocks and fetch the required ones. Updates can also scan through the 
blocks and update them. The good part is that this should have a lower memory 
footprint than the query cache, while still solving the problems we are 
discussing in this ticket. Let me know, thanks! Again, there is more logic 
and more cases to handle; this is just a prototype for now.

 Convert row cache to row+filter cache
 -

 Key: CASSANDRA-1956
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1956
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Stu Hood
Assignee: Vijay
Priority: Minor
 Fix For: 1.2

 Attachments: 0001-1956-cache-updates-v0.patch, 
 0001-commiting-block-cache.patch, 0001-re-factor-row-cache.patch, 
 0001-row-cache-filter.patch, 0002-1956-updates-to-thrift-and-avro-v0.patch, 
 0002-add-query-cache.patch


 Changing the row cache to a row+filter cache would make it much more useful. 
 We currently have to warn against using the row cache with wide rows, where 
 the read pattern is typically a peek at the head, but this use case would be 
 perfectly supported by a cache that stored only columns matching the filter.
 Possible implementations:
 * (copout) Cache a single filter per row, and leave the cache key as is
 * Cache a list of filters per row, leaving the cache key as is: this is 
 likely to have some gotchas for weird usage patterns, and it requires the 
 list overhead
 * Change the cache key to rowkey+filterid: basically ideal, but you need a 
 secondary index to lookup cache entries by rowkey so that you can keep them 
 in sync with the memtable
 * others?
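For the rowkey+filterid option, the composite cache key itself is straightforward; the hard part, as noted, is the secondary index needed for memtable synchronization. A minimal sketch of such a key, assuming filters are interned with stable integer ids (hypothetical, not Cassandra code):

```java
import java.util.Arrays;

public final class RowFilterKey {
    private final byte[] rowKey;  // the row's key bytes
    private final int filterId;   // stable id of an interned filter (assumed)

    public RowFilterKey(byte[] rowKey, int filterId) {
        this.rowKey = rowKey;
        this.filterId = filterId;
    }

    @Override
    public int hashCode() {
        // combine the content hash of the row key with the filter id
        return 31 * Arrays.hashCode(rowKey) + filterId;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof RowFilterKey)) return false;
        RowFilterKey k = (RowFilterKey) o;
        return filterId == k.filterId && Arrays.equals(rowKey, k.rowKey);
    }
}
```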





[jira] [Updated] (CASSANDRA-3838) Repair Streaming hangs between multiple regions

2012-02-05 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3838:
-

Attachment: 0001-CASSANDRA-3838-v2.patch

Hi Sylvain, the default is set to no timeout in the new patch. Thanks!

 Repair Streaming hangs between multiple regions
 ---

 Key: CASSANDRA-3838
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3838
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.7
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.0.8

 Attachments: 0001-Add-streaming-socket-timeouts.patch, 
 0001-CASSANDRA-3838-v2.patch, 0001-CASSANDRA-3838.patch


 Streaming hangs between datacenters. Though there might be multiple reasons 
 for this, a simple fix is to add a socket timeout so the session can 
 retry.
 The following is the netstat of the affected node (the below output remains 
 this way for a very long period).
 [test_abrepairtest@test_abrepair--euwest1c-i-1adfb753 ~]$ nt netstats
 Mode: NORMAL
 Streaming to: /50.17.92.159
/mnt/data/cassandra070/data/abtests/cust_allocs-hc-2221-Data.db 
 sections=7002 progress=1523325354/2475291786 - 61%
/mnt/data/cassandra070/data/abtests/cust_allocs-hc-2233-Data.db 
 sections=4581 progress=0/595026085 - 0%
/mnt/data/cassandra070/data/abtests/cust_allocs-g-2235-Data.db 
 sections=6631 progress=0/2270344837 - 0%
/mnt/data/cassandra070/data/abtests/cust_allocs-hc-2239-Data.db 
 sections=6266 progress=0/2190197091 - 0%
/mnt/data/cassandra070/data/abtests/cust_allocs-hc-2230-Data.db 
 sections=7662 progress=0/3082087770 - 0%
/mnt/data/cassandra070/data/abtests/cust_allocs-hc-2240-Data.db 
 sections=7874 progress=0/587439833 - 0%
/mnt/data/cassandra070/data/abtests/cust_allocs-g-2226-Data.db 
 sections=7682 progress=0/2933920085 - 0%
 Streaming:1 daemon prio=10 tid=0x2aaac2060800 nid=0x1676 runnable 
 [0x6be85000]
java.lang.Thread.State: RUNNABLE
 at java.net.SocketOutputStream.socketWrite0(Native Method)
 at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
 at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
 at 
 com.sun.net.ssl.internal.ssl.OutputRecord.writeBuffer(OutputRecord.java:297)
 at 
 com.sun.net.ssl.internal.ssl.OutputRecord.write(OutputRecord.java:286)
 at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.writeRecordInternal(SSLSocketImpl.java:743)
 at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:731)
 at 
 com.sun.net.ssl.internal.ssl.AppOutputStream.write(AppOutputStream.java:59)
 - locked 0x0006afea1bd8 (a 
 com.sun.net.ssl.internal.ssl.AppOutputStream)
 at 
 com.ning.compress.lzf.ChunkEncoder.encodeAndWriteChunk(ChunkEncoder.java:133)
 at 
 com.ning.compress.lzf.LZFOutputStream.writeCompressedBlock(LZFOutputStream.java:203)
 at 
 com.ning.compress.lzf.LZFOutputStream.flush(LZFOutputStream.java:117)
 at 
 org.apache.cassandra.streaming.FileStreamTask.stream(FileStreamTask.java:152)
 at 
 org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Streaming from: /46.51.141.51
abtests: /mnt/data/cassandra070/data/abtests/cust_allocs-hc-2241-Data.db 
 sections=7231 progress=0/1548922508 - 0%
abtests: /mnt/data/cassandra070/data/abtests/cust_allocs-hc-2231-Data.db 
 sections=4730 progress=0/296474156 - 0%
abtests: /mnt/data/cassandra070/data/abtests/cust_allocs-hc-2244-Data.db 
 sections=7650 progress=0/1580417610 - 0%
abtests: /mnt/data/cassandra070/data/abtests/cust_allocs-hc-2217-Data.db 
 sections=7682 progress=0/196689250 - 0%
abtests: /mnt/data/cassandra070/data/abtests/cust_allocs-hc-2220-Data.db 
 sections=7149 progress=0/478695185 - 0%
abtests: /mnt/data/cassandra070/data/abtests/cust_allocs-hc-2171-Data.db 
 sections=443 progress=0/78417320 - 0%
abtests: /mnt/data/cassandra070/data/abtests/cust_allocs-g-2235-Data.db 
 sections=6631 progress=0/2270344837 - 0%
abtests: /mnt/data/cassandra070/data/abtests/cust_allocs-hc--Data.db 
 sections=4590 progress=0/1310718798 - 0%
abtests: /mnt/data/cassandra070/data/abtests/cust_allocs-hc-2233-Data.db 
 sections=4581 progress=0/595026085 - 0%
abtests: /mnt/data/cassandra070/data/abtests/cust_allocs-g-2226-Data.db 
 sections=7682 progress=0/2933920085 - 0%
abtests: 

[jira] [Updated] (CASSANDRA-3838) Repair Streaming hangs between multiple regions

2012-02-04 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3838:
-

Attachment: 0001-CASSANDRA-3838.patch

 Repair Streaming hangs between multiple regions
 ---

 Key: CASSANDRA-3838
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3838
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.7
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.0.8

 Attachments: 0001-Add-streaming-socket-timeouts.patch, 
 0001-CASSANDRA-3838.patch


 Streaming hangs between datacenters. Though there might be multiple reasons 
 for this, a simple fix is to add a socket timeout so the session can 
 retry.
 The following is the netstat of the affected node (the below output remains 
 this way for a very long period).
 [test_abrepairtest@test_abrepair--euwest1c-i-1adfb753 ~]$ nt netstats
 Mode: NORMAL
 Streaming to: /50.17.92.159
/mnt/data/cassandra070/data/abtests/cust_allocs-hc-2221-Data.db 
 sections=7002 progress=1523325354/2475291786 - 61%
/mnt/data/cassandra070/data/abtests/cust_allocs-hc-2233-Data.db 
 sections=4581 progress=0/595026085 - 0%
/mnt/data/cassandra070/data/abtests/cust_allocs-g-2235-Data.db 
 sections=6631 progress=0/2270344837 - 0%
/mnt/data/cassandra070/data/abtests/cust_allocs-hc-2239-Data.db 
 sections=6266 progress=0/2190197091 - 0%
/mnt/data/cassandra070/data/abtests/cust_allocs-hc-2230-Data.db 
 sections=7662 progress=0/3082087770 - 0%
/mnt/data/cassandra070/data/abtests/cust_allocs-hc-2240-Data.db 
 sections=7874 progress=0/587439833 - 0%
/mnt/data/cassandra070/data/abtests/cust_allocs-g-2226-Data.db 
 sections=7682 progress=0/2933920085 - 0%
 Streaming:1 daemon prio=10 tid=0x2aaac2060800 nid=0x1676 runnable 
 [0x6be85000]
java.lang.Thread.State: RUNNABLE
 at java.net.SocketOutputStream.socketWrite0(Native Method)
 at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
 at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
 at 
 com.sun.net.ssl.internal.ssl.OutputRecord.writeBuffer(OutputRecord.java:297)
 at 
 com.sun.net.ssl.internal.ssl.OutputRecord.write(OutputRecord.java:286)
 at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.writeRecordInternal(SSLSocketImpl.java:743)
 at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:731)
 at 
 com.sun.net.ssl.internal.ssl.AppOutputStream.write(AppOutputStream.java:59)
 - locked 0x0006afea1bd8 (a 
 com.sun.net.ssl.internal.ssl.AppOutputStream)
 at 
 com.ning.compress.lzf.ChunkEncoder.encodeAndWriteChunk(ChunkEncoder.java:133)
 at 
 com.ning.compress.lzf.LZFOutputStream.writeCompressedBlock(LZFOutputStream.java:203)
 at 
 com.ning.compress.lzf.LZFOutputStream.flush(LZFOutputStream.java:117)
 at 
 org.apache.cassandra.streaming.FileStreamTask.stream(FileStreamTask.java:152)
 at 
 org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Streaming from: /46.51.141.51
abtests: /mnt/data/cassandra070/data/abtests/cust_allocs-hc-2241-Data.db 
 sections=7231 progress=0/1548922508 - 0%
abtests: /mnt/data/cassandra070/data/abtests/cust_allocs-hc-2231-Data.db 
 sections=4730 progress=0/296474156 - 0%
abtests: /mnt/data/cassandra070/data/abtests/cust_allocs-hc-2244-Data.db 
 sections=7650 progress=0/1580417610 - 0%
abtests: /mnt/data/cassandra070/data/abtests/cust_allocs-hc-2217-Data.db 
 sections=7682 progress=0/196689250 - 0%
abtests: /mnt/data/cassandra070/data/abtests/cust_allocs-hc-2220-Data.db 
 sections=7149 progress=0/478695185 - 0%
abtests: /mnt/data/cassandra070/data/abtests/cust_allocs-hc-2171-Data.db 
 sections=443 progress=0/78417320 - 0%
abtests: /mnt/data/cassandra070/data/abtests/cust_allocs-g-2235-Data.db 
 sections=6631 progress=0/2270344837 - 0%
abtests: /mnt/data/cassandra070/data/abtests/cust_allocs-hc--Data.db 
 sections=4590 progress=0/1310718798 - 0%
abtests: /mnt/data/cassandra070/data/abtests/cust_allocs-hc-2233-Data.db 
 sections=4581 progress=0/595026085 - 0%
abtests: /mnt/data/cassandra070/data/abtests/cust_allocs-g-2226-Data.db 
 sections=7682 progress=0/2933920085 - 0%
abtests: /mnt/data/cassandra070/data/abtests/cust_allocs-hc-2213-Data.db 
 sections=7876 progress=0/3308781588 - 0%
abtests: 

[jira] [Updated] (CASSANDRA-3835) FB.broadcastAddress fixes and Soft reset on Ec2MultiRegionSnitch.reconnect

2012-02-01 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3835:
-

Attachment: 0001-fix-fb-broadcastAddress.patch

 FB.broadcastAddress fixes and Soft reset on Ec2MultiRegionSnitch.reconnect
 --

 Key: CASSANDRA-3835
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3835
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.7
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.0.8

 Attachments: 0001-fix-fb-broadcastAddress.patch


 It looks like OutboundTcpConnectionPool.reset will clear the queue, which 
 might not be ideal for the Ec2MultiRegion snitch.
 There is additional cleanup needed for FB.broadcastAddress.





[jira] [Updated] (CASSANDRA-1956) Convert row cache to row+filter cache

2012-01-27 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-1956:
-

Comment: was deleted

(was: So for this ticket:

1) Expose rowCache APIs so we can extend them more easily.
2) Reduce the query cache memory footprint.
3) Reject rows > x (configurable).
4) Writes should not invalidate the cache (configurable, but if we don't 
invalidate, take some hit on write performance).

Reasonable? Anything missing?)

 Convert row cache to row+filter cache
 -

 Key: CASSANDRA-1956
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1956
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Stu Hood
Assignee: Vijay
Priority: Minor
 Fix For: 1.2

 Attachments: 0001-1956-cache-updates-v0.patch, 
 0001-re-factor-row-cache.patch, 0001-row-cache-filter.patch, 
 0002-1956-updates-to-thrift-and-avro-v0.patch, 0002-add-query-cache.patch


 Changing the row cache to a row+filter cache would make it much more useful. 
 We currently have to warn against using the row cache with wide rows, where 
 the read pattern is typically a peek at the head, but this use case would be 
 perfectly supported by a cache that stored only columns matching the filter.
 Possible implementations:
 * (copout) Cache a single filter per row, and leave the cache key as is
 * Cache a list of filters per row, leaving the cache key as is: this is 
 likely to have some gotchas for weird usage patterns, and it requires the 
 list overhead
 * Change the cache key to rowkey+filterid: basically ideal, but you need a 
 secondary index to look up cache entries by rowkey so that you can keep them 
 in sync with the memtable
 * others?
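The third option above, changing the cache key to rowkey+filterid, can be sketched as a composite key type. Class and field names are hypothetical; the hard part the description calls out, a secondary index from row key to all cache entries for that row so writes can invalidate them, is not shown here.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class RowFilterCacheKey {
    private final String rowKey;  // stand-in for the decorated row key
    private final int filterId;   // stand-in for an identifier of the query filter

    public RowFilterCacheKey(String rowKey, int filterId) {
        this.rowKey = rowKey;
        this.filterId = filterId;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof RowFilterCacheKey)) return false;
        RowFilterCacheKey k = (RowFilterCacheKey) o;
        return filterId == k.filterId && rowKey.equals(k.rowKey);
    }

    @Override
    public int hashCode() {
        return Objects.hash(rowKey, filterId);
    }

    public static void main(String[] args) {
        // Two reads of the same row with different filters occupy
        // distinct cache entries:
        Map<RowFilterCacheKey, String> cache = new HashMap<>();
        cache.put(new RowFilterCacheKey("row1", 1), "head columns");
        cache.put(new RowFilterCacheKey("row1", 2), "name-filter columns");
        System.out.println(cache.get(new RowFilterCacheKey("row1", 1))); // prints "head columns"
    }
}
```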





[jira] [Updated] (CASSANDRA-3721) Staggering repair

2012-01-24 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3721:
-

Attachment: 0001-add-snapshot-command.patch

Hi Sylvain, please see the attached patch; it adds the command. Ideally this 
patch should not break backward compatibility because it just adds a 
command...

 Staggering repair
 -

 Key: CASSANDRA-3721
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3721
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1

 Attachments: 0001-add-snapshot-command.patch, 
 0001-staggering-repair-with-snapshot.patch


 Currently repair runs on all the nodes at once, causing the range of data 
 to become hot (higher latency on reads).
 Sequence:
 1) Send a repair request to all of the nodes so we can hold the references 
 of the SSTables (the point at which repair was initiated).
 2) Send validation to one node at a time (once completed, it will release 
 the references).
 3) Hold the reference of the tree in the requesting node and, once 
 everything is complete, start the diff.
 We can also serialize the streaming part so that no more than one node is 
 involved in the streaming.
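The three-step sequence above can be sketched as the following orchestration skeleton. The interfaces and names are hypothetical stand-ins: real repair builds Merkle trees and streams differing ranges, while this only illustrates the ordering, snapshot everywhere first, then validate one node at a time.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class StaggeredRepair {
    interface Replica {
        void snapshot();    // step 1: hold SSTable references
        String validate();  // step 2: build a tree; releases references when done
    }

    public static List<String> run(List<Replica> replicas) {
        for (Replica r : replicas) r.snapshot();            // all nodes at once
        List<String> trees = new ArrayList<>();
        for (Replica r : replicas) trees.add(r.validate()); // one node at a time
        return trees;                                        // step 3: caller diffs the trees
    }

    // Fake replica used only to demonstrate the call ordering
    static class FakeReplica implements Replica {
        final String name;
        final List<String> log;
        FakeReplica(String name, List<String> log) { this.name = name; this.log = log; }
        public void snapshot() { log.add("snap:" + name); }
        public String validate() { log.add("val:" + name); return "tree-" + name; }
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        List<Replica> nodes = Arrays.asList(new FakeReplica("a", log), new FakeReplica("b", log));
        System.out.println(run(nodes)); // trees from both nodes
        System.out.println(log);        // every snapshot precedes every validation
    }
}
```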





[jira] [Updated] (CASSANDRA-3721) Staggering repair

2012-01-24 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3721:
-

Attachment: (was: 0001-add-snapshot-command.patch)

 Staggering repair
 -

 Key: CASSANDRA-3721
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3721
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1

 Attachments: 0001-add-snapshot-command.patch, 
 0001-staggering-repair-with-snapshot.patch


 Currently repair runs on all the nodes at once, causing the range of data 
 to become hot (higher latency on reads).
 Sequence:
 1) Send a repair request to all of the nodes so we can hold the references 
 of the SSTables (the point at which repair was initiated).
 2) Send validation to one node at a time (once completed, it will release 
 the references).
 3) Hold the reference of the tree in the requesting node and, once 
 everything is complete, start the diff.
 We can also serialize the streaming part so that no more than one node is 
 involved in the streaming.





[jira] [Updated] (CASSANDRA-3721) Staggering repair

2012-01-24 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3721:
-

Attachment: 0001-add-snapshot-command.patch

 Staggering repair
 -

 Key: CASSANDRA-3721
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3721
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1

 Attachments: 0001-add-snapshot-command.patch, 
 0001-staggering-repair-with-snapshot.patch


 Currently repair runs on all the nodes at once, causing the range of data 
 to become hot (higher latency on reads).
 Sequence:
 1) Send a repair request to all of the nodes so we can hold the references 
 of the SSTables (the point at which repair was initiated).
 2) Send validation to one node at a time (once completed, it will release 
 the references).
 3) Hold the reference of the tree in the requesting node and, once 
 everything is complete, start the diff.
 We can also serialize the streaming part so that no more than one node is 
 involved in the streaming.





[jira] [Updated] (CASSANDRA-2392) Saving IndexSummaries to disk

2012-01-23 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-2392:
-

Attachment: 0001-save-summaries-to-disk-v4.patch

 Saving IndexSummaries to disk
 -

 Key: CASSANDRA-2392
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2392
 Project: Cassandra
  Issue Type: Improvement
Reporter: Chris Goffinet
Assignee: Vijay
Priority: Minor
 Fix For: 1.2

 Attachments: 0001-re-factor-first-and-last.patch, 
 0001-save-summaries-to-disk-v4.patch, 0001-save-summaries-to-disk.patch, 
 0002-save-summaries-to-disk-v2.patch, 0002-save-summaries-to-disk-v3.patch, 
 0002-save-summaries-to-disk.patch


 For nodes with millions of keys, doing rolling restarts that take over 10 
 minutes per node can be painful if you have a 100-node cluster. All of our 
 time is spent doing index summary computations on startup. It would be 
 great if we could save those to disk as well. Our indexes are quite large.





[jira] [Updated] (CASSANDRA-2392) Saving IndexSummaries to disk

2012-01-23 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-2392:
-

Attachment: (was: 0001-save-summaries-to-disk-v4.patch)

 Saving IndexSummaries to disk
 -

 Key: CASSANDRA-2392
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2392
 Project: Cassandra
  Issue Type: Improvement
Reporter: Chris Goffinet
Assignee: Vijay
Priority: Minor
 Fix For: 1.2

 Attachments: 0001-re-factor-first-and-last.patch, 
 0001-save-summaries-to-disk-v4.patch, 0001-save-summaries-to-disk.patch, 
 0002-save-summaries-to-disk-v2.patch, 0002-save-summaries-to-disk-v3.patch, 
 0002-save-summaries-to-disk.patch


 For nodes with millions of keys, doing rolling restarts that take over 10 
 minutes per node can be painful if you have a 100-node cluster. All of our 
 time is spent doing index summary computations on startup. It would be 
 great if we could save those to disk as well. Our indexes are quite large.





[jira] [Updated] (CASSANDRA-3762) AutoSaving KeyCache and System load time improvements.

2012-01-23 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3762:
-

Attachment: 0001-SavedKeyCache-load-time-improvements.patch

This patch will read the keys from the cache and sort them, so the index scan 
is less impactful on the memory-mapped index.

Test (not an extensive test, but from my laptop):
100 keys to be loaded into the cache.

7G data file and 15M index (long type keys)
before the patch:
/var/log/cassandra/system.log:DEBUG [SSTableBatchOpen:4] 2012-01-23 
15:10:40,825 SSTableReader.java (line 196) INDEX LOAD TIME for 
/var/lib/cassandra/data/Keyspace2/Standard3/Keyspace2-Standard3-hc-2777: 850 ms.
after this patch:
/var/log/cassandra/system.log:DEBUG [SSTableBatchOpen:4] 2012-01-23 
15:10:59,128 SSTableReader.java (line 196) INDEX LOAD TIME for 
/var/lib/cassandra/data/Keyspace2/Standard3/Keyspace2-Standard3-hc-2777: 177 ms.
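The optimization behind the numbers above can be sketched as follows: sort the saved-cache keys before probing the index, so the memory-mapped index is walked in one roughly sequential pass instead of in random order. The names are illustrative; the real patch works on SSTable index positions rather than raw keys.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

public class SortedCacheLoad {
    // Return the order in which saved-cache keys should be probed against
    // the index: sorted, so index pages are touched sequentially rather
    // than in the arbitrary order the cache was saved in.
    public static <K extends Comparable<K>> List<K> probeOrder(Collection<K> savedKeys) {
        List<K> order = new ArrayList<>(savedKeys);
        Collections.sort(order);
        return order;
    }
}
```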

 AutoSaving KeyCache and System load time improvements.
 --

 Key: CASSANDRA-3762
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3762
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.2
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.2

 Attachments: 0001-SavedKeyCache-load-time-improvements.patch


 CASSANDRA-2392 saves the index summary to disk... but when we have a saved 
 cache we will still scan through the index to get the data out.
 We might be able to separate this from SSTR.load and let it load the index 
 summary; once all the SSTables are loaded we might be able to check the 
 bloom filter and do random IO on fewer indexes to populate the KeyCache.
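The bloom-filter idea above can be sketched as below. A `Predicate` stands in for each SSTable's real bloom filter (which may return false positives but never false negatives), so only the indexes of tables that might contain the key are touched with random IO; all names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class KeyCachePopulator {
    // Return the indexes of the SSTables whose bloom filter says the key
    // may be present; only these need a random index read to populate
    // the key cache.
    public static <K> List<Integer> tablesToProbe(K key, List<Predicate<K>> bloomFilters) {
        List<Integer> probe = new ArrayList<>();
        for (int i = 0; i < bloomFilters.size(); i++) {
            if (bloomFilters.get(i).test(key)) {
                probe.add(i); // maybe-present: index IO needed
            }
        }
        return probe;
    }
}
```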





[jira] [Updated] (CASSANDRA-3690) Streaming CommitLog backup

2012-01-23 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3690:
-

Attachment: 0003-external-commitlog-with-sockets.patch
0002-helper-jmx-methods.patch
0001-support-commit-log-listener.patch

0001 = Adds CommitLogListener; an implementation can receive the updates to the 
commit logs. This also adds a configuration option so we can avoid recycling in 
case someone wants to copy the files across to another location, such as a log 
archive.
0002 = Helper JMX methods in case the user wants to query the active commit logs.
0003 = This can go in the tools folder; we don't need to commit it to the core.
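The listener idea in patch 0001 can be sketched as below: the commit log notifies a registered listener of each appended mutation so an external agent can stream it elsewhere. The interface and method names here are hypothetical, not the ones in the patch.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class CommitLogNotifier {
    public interface CommitLogListener {
        void onAppend(ByteBuffer serializedMutation);
    }

    private final List<CommitLogListener> listeners = new ArrayList<>();

    public void register(CommitLogListener l) { listeners.add(l); }

    // Called by the commit log after a mutation is durably appended;
    // each listener gets its own read position via duplicate().
    public void append(ByteBuffer mutation) {
        for (CommitLogListener l : listeners) l.onAppend(mutation.duplicate());
    }

    public static void main(String[] args) {
        CommitLogNotifier notifier = new CommitLogNotifier();
        List<Integer> sizes = new ArrayList<>();
        notifier.register(buf -> sizes.add(buf.remaining()));
        notifier.append(ByteBuffer.wrap(new byte[]{1, 2, 3}));
        System.out.println(sizes); // prints [3]
    }
}
```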

 Streaming CommitLog backup
 --

 Key: CASSANDRA-3690
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3690
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.1
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1

 Attachments: 0001-support-commit-log-listener.patch, 
 0002-helper-jmx-methods.patch, 0003-external-commitlog-with-sockets.patch


 Problems with the current SST backups:
 1) The current backup doesn't allow us to restore to a point in time (within 
 an SST).
 2) The current SST implementation needs the backup to read from the 
 filesystem, and hence adds IO on the normal operational disks.
 3) In 1.0 we have removed the per-CF flush interval and size at which a 
 flush is triggered; for some use cases with few writes it becomes 
 increasingly difficult to time it right.
 4) Use cases that need external (non-Cassandra) BI need the data at regular 
 intervals rather than after long or unpredictable intervals.
 Disadvantages of the new solution:
 1) Overhead in processing the mutations during the recovery phase.
 2) A more complicated solution than just copying the file to the archive.
 Additional advantages:
 Online and offline restore.
 Close-to-live incremental backup.
 Note: If the listener agent gets restarted, it is the agent's responsibility 
 to stream the files that were missed or incomplete.
 There are 3 options in the initial implementation:
 1) Backup - once a socket is connected we will switch the commit log and 
 send new updates via the socket.
 2) Stream - takes the absolute path of a file, reads it, and sends the 
 updates via the socket.
 3) Restore - receives the serialized bytes and applies the mutation.
 Side note (not related to this patch as such): the agent which will take 
 incremental backups is planned to be open sourced soon (name: Priam).





[jira] [Updated] (CASSANDRA-3690) Streaming CommitLog backup

2012-01-23 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3690:
-

Attachment: 0001-Make-commitlog-recycle-configurable.patch

 Streaming CommitLog backup
 --

 Key: CASSANDRA-3690
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3690
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.1
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1

 Attachments: 0001-Make-commitlog-recycle-configurable.patch, 
 0001-support-commit-log-listener.patch, 0002-helper-jmx-methods.patch, 
 0003-external-commitlog-with-sockets.patch


 Problems with the current SST backups:
 1) The current backup doesn't allow us to restore to a point in time (within 
 an SST).
 2) The current SST implementation needs the backup to read from the 
 filesystem, and hence adds IO on the normal operational disks.
 3) In 1.0 we have removed the per-CF flush interval and size at which a 
 flush is triggered; for some use cases with few writes it becomes 
 increasingly difficult to time it right.
 4) Use cases that need external (non-Cassandra) BI need the data at regular 
 intervals rather than after long or unpredictable intervals.
 Disadvantages of the new solution:
 1) Overhead in processing the mutations during the recovery phase.
 2) A more complicated solution than just copying the file to the archive.
 Additional advantages:
 Online and offline restore.
 Close-to-live incremental backup.
 Note: If the listener agent gets restarted, it is the agent's responsibility 
 to stream the files that were missed or incomplete.
 There are 3 options in the initial implementation:
 1) Backup - once a socket is connected we will switch the commit log and 
 send new updates via the socket.
 2) Stream - takes the absolute path of a file, reads it, and sends the 
 updates via the socket.
 3) Restore - receives the serialized bytes and applies the mutation.
 Side note (not related to this patch as such): the agent which will take 
 incremental backups is planned to be open sourced soon (name: Priam).





[jira] [Updated] (CASSANDRA-3690) Streaming CommitLog backup

2012-01-23 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3690:
-

Attachment: (was: 0002-helper-jmx-methods.patch)

 Streaming CommitLog backup
 --

 Key: CASSANDRA-3690
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3690
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.1
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1

 Attachments: 0001-Make-commitlog-recycle-configurable.patch, 
 0002-support-commit-log-listener.patch, 0003-helper-jmx-methods.patch, 
 0004-external-commitlog-with-sockets.patch, 
 0005-cmmiting-comments-to-yaml.patch


 Problems with the current SST backups:
 1) The current backup doesn't allow us to restore to a point in time (within 
 an SST).
 2) The current SST implementation needs the backup to read from the 
 filesystem, and hence adds IO on the normal operational disks.
 3) In 1.0 we have removed the per-CF flush interval and size at which a 
 flush is triggered; for some use cases with few writes it becomes 
 increasingly difficult to time it right.
 4) Use cases that need external (non-Cassandra) BI need the data at regular 
 intervals rather than after long or unpredictable intervals.
 Disadvantages of the new solution:
 1) Overhead in processing the mutations during the recovery phase.
 2) A more complicated solution than just copying the file to the archive.
 Additional advantages:
 Online and offline restore.
 Close-to-live incremental backup.
 Note: If the listener agent gets restarted, it is the agent's responsibility 
 to stream the files that were missed or incomplete.
 There are 3 options in the initial implementation:
 1) Backup - once a socket is connected we will switch the commit log and 
 send new updates via the socket.
 2) Stream - takes the absolute path of a file, reads it, and sends the 
 updates via the socket.
 3) Restore - receives the serialized bytes and applies the mutation.
 Side note (not related to this patch as such): the agent which will take 
 incremental backups is planned to be open sourced soon (name: Priam).





[jira] [Updated] (CASSANDRA-3690) Streaming CommitLog backup

2012-01-23 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3690:
-

Attachment: (was: 0001-Make-commitlog-recycle-configurable.patch)

 Streaming CommitLog backup
 --

 Key: CASSANDRA-3690
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3690
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.1
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1

 Attachments: 0001-Make-commitlog-recycle-configurable.patch, 
 0002-support-commit-log-listener.patch, 0003-helper-jmx-methods.patch, 
 0004-external-commitlog-with-sockets.patch, 
 0005-cmmiting-comments-to-yaml.patch


 Problems with the current SST backups:
 1) The current backup doesn't allow us to restore to a point in time (within 
 an SST).
 2) The current SST implementation needs the backup to read from the 
 filesystem, and hence adds IO on the normal operational disks.
 3) In 1.0 we have removed the per-CF flush interval and size at which a 
 flush is triggered; for some use cases with few writes it becomes 
 increasingly difficult to time it right.
 4) Use cases that need external (non-Cassandra) BI need the data at regular 
 intervals rather than after long or unpredictable intervals.
 Disadvantages of the new solution:
 1) Overhead in processing the mutations during the recovery phase.
 2) A more complicated solution than just copying the file to the archive.
 Additional advantages:
 Online and offline restore.
 Close-to-live incremental backup.
 Note: If the listener agent gets restarted, it is the agent's responsibility 
 to stream the files that were missed or incomplete.
 There are 3 options in the initial implementation:
 1) Backup - once a socket is connected we will switch the commit log and 
 send new updates via the socket.
 2) Stream - takes the absolute path of a file, reads it, and sends the 
 updates via the socket.
 3) Restore - receives the serialized bytes and applies the mutation.
 Side note (not related to this patch as such): the agent which will take 
 incremental backups is planned to be open sourced soon (name: Priam).





[jira] [Updated] (CASSANDRA-3690) Streaming CommitLog backup

2012-01-23 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3690:
-

Attachment: (was: 0001-support-commit-log-listener.patch)

 Streaming CommitLog backup
 --

 Key: CASSANDRA-3690
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3690
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.1
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1

 Attachments: 0001-Make-commitlog-recycle-configurable.patch, 
 0002-support-commit-log-listener.patch, 0003-helper-jmx-methods.patch, 
 0004-external-commitlog-with-sockets.patch, 
 0005-cmmiting-comments-to-yaml.patch


 Problems with the current SST backups:
 1) The current backup doesn't allow point-in-time restore (within an SST).
 2) The current SST implementation needs the backup to read from the 
 filesystem, and hence causes additional I/O on disks during normal operation.
 3) In 1.0 we removed the per-CF flush interval and size settings that 
 controlled when a flush is triggered; for use cases with few writes it 
 becomes increasingly difficult to time backups correctly.
 4) External BI use cases (non-Cassandra consumers) need the data at regular 
 intervals rather than after long or unpredictable ones.
 Disadvantages of the new solution:
 1) Overhead of processing the mutations during the recovery phase.
 2) More complicated than simply copying the file to the archive.
 Additional advantages:
 Online and offline restore.
 Close-to-live incremental backup.
 Note: if the listener agent is restarted, it is the agent's responsibility 
 to stream any missed or incomplete files.
 There are 3 options in the initial implementation:
 1) Backup - once a socket is connected we switch the commit log and send 
 new updates via the socket.
 2) Stream - takes the absolute path of a file, reads it, and sends the 
 updates via the socket.
 3) Restore - receives the serialized bytes and applies the mutations.
 Side note (not related to this patch as such): the agent which will take 
 the incremental backup is planned to be open-sourced soon (name: Priam).





[jira] [Updated] (CASSANDRA-3690) Streaming CommitLog backup

2012-01-23 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3690:
-

Attachment: (was: 0003-external-commitlog-with-sockets.patch)

 Streaming CommitLog backup
 --

 Key: CASSANDRA-3690
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3690
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.1
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1

 Attachments: 0001-Make-commitlog-recycle-configurable.patch, 
 0002-support-commit-log-listener.patch, 0003-helper-jmx-methods.patch, 
 0004-external-commitlog-with-sockets.patch, 
 0005-cmmiting-comments-to-yaml.patch


 Problems with the current SST backups:
 1) The current backup doesn't allow point-in-time restore (within an SST).
 2) The current SST implementation needs the backup to read from the 
 filesystem, and hence causes additional I/O on disks during normal operation.
 3) In 1.0 we removed the per-CF flush interval and size settings that 
 controlled when a flush is triggered; for use cases with few writes it 
 becomes increasingly difficult to time backups correctly.
 4) External BI use cases (non-Cassandra consumers) need the data at regular 
 intervals rather than after long or unpredictable ones.
 Disadvantages of the new solution:
 1) Overhead of processing the mutations during the recovery phase.
 2) More complicated than simply copying the file to the archive.
 Additional advantages:
 Online and offline restore.
 Close-to-live incremental backup.
 Note: if the listener agent is restarted, it is the agent's responsibility 
 to stream any missed or incomplete files.
 There are 3 options in the initial implementation:
 1) Backup - once a socket is connected we switch the commit log and send 
 new updates via the socket.
 2) Stream - takes the absolute path of a file, reads it, and sends the 
 updates via the socket.
 3) Restore - receives the serialized bytes and applies the mutations.
 Side note (not related to this patch as such): the agent which will take 
 the incremental backup is planned to be open-sourced soon (name: Priam).





[jira] [Updated] (CASSANDRA-3690) Streaming CommitLog backup

2012-01-23 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3690:
-

Attachment: 0005-cmmiting-comments-to-yaml.patch
0004-external-commitlog-with-sockets.patch
0003-helper-jmx-methods.patch
0002-support-commit-log-listener.patch
0001-Make-commitlog-recycle-configurable.patch

 Streaming CommitLog backup
 --

 Key: CASSANDRA-3690
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3690
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.1
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1

 Attachments: 0001-Make-commitlog-recycle-configurable.patch, 
 0002-support-commit-log-listener.patch, 0003-helper-jmx-methods.patch, 
 0004-external-commitlog-with-sockets.patch, 
 0005-cmmiting-comments-to-yaml.patch


 Problems with the current SST backups:
 1) The current backup doesn't allow point-in-time restore (within an SST).
 2) The current SST implementation needs the backup to read from the 
 filesystem, and hence causes additional I/O on disks during normal operation.
 3) In 1.0 we removed the per-CF flush interval and size settings that 
 controlled when a flush is triggered; for use cases with few writes it 
 becomes increasingly difficult to time backups correctly.
 4) External BI use cases (non-Cassandra consumers) need the data at regular 
 intervals rather than after long or unpredictable ones.
 Disadvantages of the new solution:
 1) Overhead of processing the mutations during the recovery phase.
 2) More complicated than simply copying the file to the archive.
 Additional advantages:
 Online and offline restore.
 Close-to-live incremental backup.
 Note: if the listener agent is restarted, it is the agent's responsibility 
 to stream any missed or incomplete files.
 There are 3 options in the initial implementation:
 1) Backup - once a socket is connected we switch the commit log and send 
 new updates via the socket.
 2) Stream - takes the absolute path of a file, reads it, and sends the 
 updates via the socket.
 3) Restore - receives the serialized bytes and applies the mutations.
 Side note (not related to this patch as such): the agent which will take 
 the incremental backup is planned to be open-sourced soon (name: Priam).





[jira] [Updated] (CASSANDRA-2392) Saving IndexSummaries to disk

2012-01-22 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-2392:
-

Attachment: 0001-save-summaries-to-disk-v4.patch

Hope the attached patch has everything you needed... I added a check on the 
disk access mode: if it changed, we recreate the summaries. For now, ignore 
the saveSummaries call made when the key cache is read; I will fix it in the 
other ticket.

 Saving IndexSummaries to disk
 -

 Key: CASSANDRA-2392
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2392
 Project: Cassandra
  Issue Type: Improvement
Reporter: Chris Goffinet
Assignee: Vijay
Priority: Minor
 Fix For: 1.1

 Attachments: 0001-re-factor-first-and-last.patch, 
 0001-save-summaries-to-disk-v4.patch, 0001-save-summaries-to-disk.patch, 
 0002-save-summaries-to-disk-v2.patch, 0002-save-summaries-to-disk-v3.patch, 
 0002-save-summaries-to-disk.patch


 For nodes with millions of keys, rolling restarts that take over 10 minutes 
 per node can be painful on a 100-node cluster. All of our time is spent on 
 index summary computation at startup. It would be great if we could save the 
 summaries to disk as well. Our indexes are quite large.
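The fix described above amounts to persisting the sampled summary (every Nth index key with its position in the index file) so a restart can load it instead of rescanning the full primary index. A hedged sketch under assumed names and a made-up file layout, not the patch's actual `Summaries.db` format:

```java
import java.io.*;
import java.util.*;

// Illustrative sketch: serialize and reload an index summary, i.e. a list
// of sampled keys with their byte offsets in the on-disk index.
public class IndexSummaryStore {
    // Write count, then (key length, key bytes, offset) per sampled entry.
    public static void save(File f, List<byte[]> keys, List<Long> positions) throws IOException {
        try (DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream(f)))) {
            out.writeInt(keys.size());
            for (int i = 0; i < keys.size(); i++) {
                out.writeInt(keys.get(i).length);
                out.write(keys.get(i));
                out.writeLong(positions.get(i));
            }
        }
    }

    // Load the summary back; startup can skip recomputation entirely.
    public static Map<String, Long> load(File f) throws IOException {
        Map<String, Long> summary = new LinkedHashMap<>();
        try (DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream(f)))) {
            int n = in.readInt();
            for (int i = 0; i < n; i++) {
                byte[] key = new byte[in.readInt()];
                in.readFully(key);
                summary.put(new String(key, "UTF-8"), in.readLong());
            }
        }
        return summary;
    }
}
```

As the comment above notes, the saved summary must be invalidated when settings that affect it (such as the disk access mode) change.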





[jira] [Updated] (CASSANDRA-2392) Saving IndexSummaries to disk

2012-01-22 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-2392:
-

Fix Version/s: (was: 1.1)
   1.2

 Saving IndexSummaries to disk
 -

 Key: CASSANDRA-2392
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2392
 Project: Cassandra
  Issue Type: Improvement
Reporter: Chris Goffinet
Assignee: Vijay
Priority: Minor
 Fix For: 1.2

 Attachments: 0001-re-factor-first-and-last.patch, 
 0001-save-summaries-to-disk-v4.patch, 0001-save-summaries-to-disk.patch, 
 0002-save-summaries-to-disk-v2.patch, 0002-save-summaries-to-disk-v3.patch, 
 0002-save-summaries-to-disk.patch


 For nodes with millions of keys, rolling restarts that take over 10 minutes 
 per node can be painful on a 100-node cluster. All of our time is spent on 
 index summary computation at startup. It would be great if we could save the 
 summaries to disk as well. Our indexes are quite large.





[jira] [Updated] (CASSANDRA-3768) Implement VNode to improve bootstrapping

2012-01-21 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3768:
-

Description: 
Implementing vnodes will provide a lot of advantages in bootstrapping and 
operating a big cluster.

Some problems which vnodes can solve:
* With a balanced large cluster, doubling a live cluster is not a very good 
option, and this often forces us to over-provision.
* The same is true when we want to shrink a cluster.
* In my organization we often need to refresh production clusters into dev 
clusters. There is a strong need for differently sized clusters where data 
can be forklifted in and out without over-engineering or complicating the 
recovery process.
* In some cases bootstrapping a node which has a large amount of data might 
take longer than needed, and it might cause the neighbors to be overloaded.

I am not sure I know all the changes that need to be performed to get to 
that state, but this ticket will get us started.


  was:
Implementing vnodes will provide a lot of advantages by automatically 
balancing the cluster when a node is added to the cluster.

Some problems which vnodes can solve:
When we have a balanced large cluster, doubling a live cluster is not a very 
good option, and this often forces us to over-provision.
The same is true when we want to shrink a cluster.
In my organization we often need to refresh production clusters into dev 
clusters. There is a strong need for differently sized clusters where data 
can be forklifted in and out without over-engineering or complicating the 
recovery process.
In some cases bootstrapping a node which has a large amount of data might 
take longer than needed, and it might cause the neighbors to be overloaded.

I am not sure I know all the changes that need to be performed to get to 
that state, but this ticket will get us started.



 Implement VNode to improve bootstrapping
 

 Key: CASSANDRA-3768
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3768
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Affects Versions: 1.2
 Environment: JVM
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.2


 Implementing vnodes will provide a lot of advantages in bootstrapping and 
 operating a big cluster.
 Some problems which vnodes can solve:
 * With a balanced large cluster, doubling a live cluster is not a very good 
 option, and this often forces us to over-provision.
 * The same is true when we want to shrink a cluster.
 * In my organization we often need to refresh production clusters into dev 
 clusters. There is a strong need for differently sized clusters where data 
 can be forklifted in and out without over-engineering or complicating the 
 recovery process.
 * In some cases bootstrapping a node which has a large amount of data might 
 take longer than needed, and it might cause the neighbors to be overloaded.
 I am not sure I know all the changes that need to be performed to get to 
 that state, but this ticket will get us started.





[jira] [Updated] (CASSANDRA-3768) Implement VNode to improve bootstrapping

2012-01-21 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3768:
-

Description: 
Implementing vnodes will provide a lot of advantages in bootstrapping and 
operating a big cluster.

Some problems which vnodes can solve:
* With a balanced large cluster, doubling a live cluster is not a very good 
option, and this often forces us to over-provision.
* The same is true when we want to shrink a cluster.
* In my organization we often need to refresh production clusters into dev 
clusters. There is a strong need for differently sized clusters where data 
can be forklifted in and out without over-engineering or complicating the 
recovery process.
* In some cases, bootstrapping a node which has a large amount of data could 
be faster without overloading the neighbors.

I am not sure I know all the changes that need to be performed to get to 
that state, but this ticket will get us started.


  was:
Implementing vnodes will provide a lot of advantages in bootstrapping and 
operating a big cluster.

Some problems which vnodes can solve:
* With a balanced large cluster, doubling a live cluster is not a very good 
option, and this often forces us to over-provision.
* The same is true when we want to shrink a cluster.
* In my organization we often need to refresh production clusters into dev 
clusters. There is a strong need for differently sized clusters where data 
can be forklifted in and out without over-engineering or complicating the 
recovery process.
* In some cases bootstrapping a node which has a large amount of data might 
take longer than needed, and it might cause the neighbors to be overloaded.

I am not sure I know all the changes that need to be performed to get to 
that state, but this ticket will get us started.



 Implement VNode to improve bootstrapping
 

 Key: CASSANDRA-3768
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3768
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Affects Versions: 1.2
 Environment: JVM
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.2


 Implementing vnodes will provide a lot of advantages in bootstrapping and 
 operating a big cluster.
 Some problems which vnodes can solve:
 * With a balanced large cluster, doubling a live cluster is not a very good 
 option, and this often forces us to over-provision.
 * The same is true when we want to shrink a cluster.
 * In my organization we often need to refresh production clusters into dev 
 clusters. There is a strong need for differently sized clusters where data 
 can be forklifted in and out without over-engineering or complicating the 
 recovery process.
 * In some cases, bootstrapping a node which has a large amount of data could 
 be faster without overloading the neighbors.
 I am not sure I know all the changes that need to be performed to get to 
 that state, but this ticket will get us started.





[jira] [Updated] (CASSANDRA-2392) Saving IndexSummaries to disk

2012-01-20 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-2392:
-

Attachment: 0002-save-summaries-to-disk-v3.patch

Hi Pavel, v3 has the method name changes. It is now called Summaries.db.

 Saving IndexSummaries to disk
 -

 Key: CASSANDRA-2392
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2392
 Project: Cassandra
  Issue Type: Improvement
Reporter: Chris Goffinet
Assignee: Vijay
Priority: Minor
 Fix For: 1.1

 Attachments: 0001-re-factor-first-and-last.patch, 
 0001-save-summaries-to-disk.patch, 0002-save-summaries-to-disk-v2.patch, 
 0002-save-summaries-to-disk-v3.patch, 0002-save-summaries-to-disk.patch


 For nodes with millions of keys, rolling restarts that take over 10 minutes 
 per node can be painful on a 100-node cluster. All of our time is spent on 
 index summary computation at startup. It would be great if we could save the 
 summaries to disk as well. Our indexes are quite large.





[jira] [Updated] (CASSANDRA-3723) Include await for the queues in tpstats

2012-01-17 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3723:
-

Attachment: 0001-tp-queue-wait-jmx.patch

Added a small refactor on MDT; this patch adds the metric at 
beforeExecute(Thread t, Runnable task).

Sample tpstats output:

Pool Name   Active   Pending   Completed   Blocked   All time blocked   Queue Wait(ms)
ReadStage        0         0      851359         0                  0            0.000

Let me know, thanks!
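The mechanism the comment describes, measuring queue wait via the executor's beforeExecute hook, can be sketched as follows. This is an illustrative stand-in, not the actual patch; class and method names are assumptions:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

// Sketch: record an enqueue timestamp per task and read it back in
// beforeExecute() to accumulate queue-wait time, analogous to iostat await.
public class QueueWaitExecutor extends ThreadPoolExecutor {
    private final AtomicLong totalWaitNanos = new AtomicLong();
    private final AtomicLong taskCount = new AtomicLong();
    // Enqueue timestamps keyed by task identity (assumes distinct Runnables).
    private final ConcurrentHashMap<Runnable, Long> enqueuedAt = new ConcurrentHashMap<>();

    public QueueWaitExecutor(int threads) {
        super(threads, threads, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
    }

    @Override
    public void execute(Runnable task) {
        enqueuedAt.put(task, System.nanoTime());   // task enters the queue
        super.execute(task);
    }

    @Override
    protected void beforeExecute(Thread t, Runnable task) {
        Long start = enqueuedAt.remove(task);      // task leaves the queue
        if (start != null) {
            totalWaitNanos.addAndGet(System.nanoTime() - start);
            taskCount.incrementAndGet();
        }
        super.beforeExecute(t, task);
    }

    /** Average time (ms) tasks spent queued before being served. */
    public double averageQueueWaitMillis() {
        long n = taskCount.get();
        return n == 0 ? 0.0 : totalWaitNanos.get() / (n * 1_000_000.0);
    }
}
```

The average would then be exposed over JMX alongside the existing tpstats columns.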

 Include await for the queues in tpstats
 ---

 Key: CASSANDRA-3723
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3723
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.2
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Attachments: 0001-tp-queue-wait-jmx.patch


 Something similar to iostat's await. There is some additional overhead, and 
 I'm not sure if we should make an exception for it, but I think this is a 
 huge plus while troubleshooting.
 await: the average time (in milliseconds) for I/O requests issued to the 
 device to be served. This includes the time spent by the requests in the 
 queue and the time spent servicing them.
 Alternatively, we can have a simple average of the time spent in the queue 
 before being served.





[jira] [Updated] (CASSANDRA-3723) Include await for the queues in tpstats

2012-01-17 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3723:
-

Attachment: 0001-tp-queue-wait-jmx.patch

 Include await for the queues in tpstats
 ---

 Key: CASSANDRA-3723
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3723
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.2
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Attachments: 0001-tp-queue-wait-jmx.patch


 Something similar to iostat's await. There is some additional overhead, and 
 I'm not sure if we should make an exception for it, but I think this is a 
 huge plus while troubleshooting.
 await: the average time (in milliseconds) for I/O requests issued to the 
 device to be served. This includes the time spent by the requests in the 
 queue and the time spent servicing them.
 Alternatively, we can have a simple average of the time spent in the queue 
 before being served.





[jira] [Updated] (CASSANDRA-3723) Include await for the queues in tpstats

2012-01-17 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3723:
-

Attachment: (was: 0001-tp-queue-wait-jmx.patch)

 Include await for the queues in tpstats
 ---

 Key: CASSANDRA-3723
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3723
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.2
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Attachments: 0001-tp-queue-wait-jmx.patch


 Something similar to iostat's await. There is some additional overhead, and 
 I'm not sure if we should make an exception for it, but I think this is a 
 huge plus while troubleshooting.
 await: the average time (in milliseconds) for I/O requests issued to the 
 device to be served. This includes the time spent by the requests in the 
 queue and the time spent servicing them.
 Alternatively, we can have a simple average of the time spent in the queue 
 before being served.





[jira] [Updated] (CASSANDRA-3721) Staggering repair

2012-01-16 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3721:
-

Reviewer: slebresne

 Staggering repair
 -

 Key: CASSANDRA-3721
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3721
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1

 Attachments: 0001-staggering-repair-with-snapshot.patch


 Currently repair runs on all the nodes at once, causing the range of data 
 to be hot (higher latency on reads).
 Sequence:
 1) Send a repair request to all of the nodes so we can hold references to 
 the SSTables (at the point at which repair was initiated).
 2) Send validation to one node at a time (once completed, it will release 
 its references).
 3) Hold the reference to each tree in the requesting node, and once 
 everything is complete, start the diff.
 We can also serialize the streaming part so that no more than one node is 
 involved in streaming at a time.
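The three-step sequence above can be sketched as a tiny coordinator loop. This is a minimal illustration of the ordering only (pin references everywhere, validate strictly one node at a time, diff once all trees are held); all names are hypothetical:

```java
import java.util.*;
import java.util.function.Function;

// Sketch of staggered repair ordering: validation is serialized so only
// one replica is "hot" at any moment. Not Cassandra's actual repair code.
public class StaggeredRepair {
    public static List<String> run(List<String> replicas, Function<String, String> validate) {
        // 1) every replica pins its SSTable references up front (simulated)
        List<String> pinned = new ArrayList<>(replicas);
        List<String> trees = new ArrayList<>();
        // 2) validate serially, one node at a time; each node releases its
        //    references when its validation completes
        for (String node : pinned)
            trees.add(validate.apply(node));
        // 3) with all Merkle trees held by the coordinator, start the diff
        return trees;
    }
}
```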





[jira] [Updated] (CASSANDRA-3590) Use multiple connection to share the OutboutTCPConnection

2012-01-16 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3590:
-

Attachment: TCPTest.zip

Finally had a chance to do this benchmark. 

Configuration: M2.4xl (AWS)
Traffic between: US and EU
OpenJDK, CentOS 5.6

3 tests were done where the active queue is the limiting factor for the 
traffic going across the nodes. Latency is the metric we are trying to 
measure in this test (with 1 connection the latency is high, because of the 
delay over the public internet in an AWS multi-region setup). 

Code for the benchmark is attached to this ticket. 

Server A (US): java -jar Listener.jar 7103
Server B (EU): java -jar RunTest.jar 1 107.22.50.61 7103 500

Server C (US): java -jar Listener.jar 7103
Server D (EU): java -jar RunTest.jar 2 107.22.50.61 7103 500

Data is collected at a 1-second interval (please see the code for details).


 Use multiple connection to share the OutboutTCPConnection
 -

 Key: CASSANDRA-3590
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3590
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.2

 Attachments: TCPTest.zip


 Currently there is one connection from any given host to another host in 
 the cluster. The problems with this are:
 1) It can become a bottleneck in cases where latencies are higher.
 2) When a connection is dropped we also drop the queue and create a new one, 
 so messages can be lost (currently hints will take care of it, and clients 
 can also retry).
 By making the number of connections configurable and making the queue common 
 to those connections, the above 2 issues can be resolved.
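The proposal, several connections draining one shared queue, can be sketched like this. It is a hedged illustration of the design, not the OutboundTcpConnection patch itself; the `send` callback stands in for writing to a real socket:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// Sketch: N connection threads pull from one common queue, so messages
// are no longer tied to a single connection's private backlog. When one
// connection drops, the remaining ones keep draining the same queue.
public class SharedOutboundQueue {
    private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();
    private final List<Thread> senders = new ArrayList<>();

    public SharedOutboundQueue(int connections, Consumer<byte[]> send) {
        for (int i = 0; i < connections; i++) {
            Thread t = new Thread(() -> {
                try {
                    while (true)
                        send.accept(queue.take());   // each "connection" drains the shared queue
                } catch (InterruptedException e) {
                    // shutdown
                }
            });
            t.setDaemon(true);
            t.start();
            senders.add(t);
        }
    }

    public void enqueue(byte[] message) {
        queue.add(message);
    }
}
```

Note the trade-off this design accepts: messages drained by different connections may arrive out of order, which a real implementation would have to account for.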





[jira] [Updated] (CASSANDRA-3590) Use multiple connection to share the OutboutTCPConnection

2012-01-16 Thread Vijay (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3590:
-

Attachment: TCPTest.xlsx

 Use multiple connection to share the OutboutTCPConnection
 -

 Key: CASSANDRA-3590
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3590
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.2

 Attachments: TCPTest.xlsx, TCPTest.zip


 Currently there is one connection from any given host to another host in 
 the cluster. The problems with this are:
 1) It can become a bottleneck in cases where latencies are higher.
 2) When a connection is dropped we also drop the queue and create a new one, 
 so messages can be lost (currently hints will take care of it, and clients 
 can also retry).
 By making the number of connections configurable and making the queue common 
 to those connections, the above 2 issues can be resolved.




