[jira] [Updated] (CASSANDRA-7183) BackgroundActivityMonitor.readAndCompute only returns half of the values

2014-05-07 Thread Dave Brosius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Brosius updated CASSANDRA-7183:


Attachment: 7183.txt

against 2.0

 BackgroundActivityMonitor.readAndCompute only returns half of the values
 

 Key: CASSANDRA-7183
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7183
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Minor
 Fix For: 2.0.9

 Attachments: 7183.txt


 BackgroundActivityMonitor.readAndCompute does
 long[] returned = new long[tokenizer.countTokens()];
 for (int i = 0; i < tokenizer.countTokens(); i++)
 returned[i] = Long.parseLong(tokenizer.nextToken());
 which is not only inefficient, as it counts the tokens each time through the 
 loop; it's also wrong in that only the first half of the values are populated 
 in the array, since the number of remaining tokens goes down by one on each 
 iteration as you consume them.
 switch the loop to
  for (int i = 0; i < returned.length; i++)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7183) BackgroundActivityMonitor.readAndCompute only returns half of the values

2014-05-07 Thread Dave Brosius (JIRA)
Dave Brosius created CASSANDRA-7183:
---

 Summary: BackgroundActivityMonitor.readAndCompute only returns 
half of the values
 Key: CASSANDRA-7183
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7183
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Minor
 Fix For: 2.0.9
 Attachments: 7183.txt

BackgroundActivityMonitor.readAndCompute does

long[] returned = new long[tokenizer.countTokens()];
for (int i = 0; i < tokenizer.countTokens(); i++)
returned[i] = Long.parseLong(tokenizer.nextToken());

which is not only inefficient, as it counts the tokens each time through the 
loop; it's also wrong in that only the first half of the values are populated 
in the array, since the number of remaining tokens goes down by one on each 
iteration as you consume them.

switch the loop to

 for (int i = 0; i < returned.length; i++)
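The off-by-consumption behavior is easy to reproduce with a standalone sketch (the demo class below is hypothetical, not the actual BackgroundActivityMonitor code): the buggy form re-evaluates countTokens() after every nextToken(), so it stops halfway; sizing the array once and looping over returned.length reads every token.

```java
import java.util.StringTokenizer;

public class TokenizerLoopDemo
{
    public static void main(String[] args)
    {
        String line = "1 2 3 4 5 6 7 8";

        // Buggy form: countTokens() shrinks as tokens are consumed,
        // so the condition i < countTokens() fails after half the tokens.
        StringTokenizer buggy = new StringTokenizer(line);
        int read = 0;
        for (int i = 0; i < buggy.countTokens(); i++)
        {
            buggy.nextToken();
            read++;
        }
        System.out.println("buggy loop read " + read + " of 8 tokens");   // prints 4

        // Fixed form: capture the count once, via the array length.
        StringTokenizer fixed = new StringTokenizer(line);
        long[] returned = new long[fixed.countTokens()];
        for (int i = 0; i < returned.length; i++)
            returned[i] = Long.parseLong(fixed.nextToken());
        System.out.println("fixed loop read " + returned.length + " of 8 tokens"); // prints 8
    }
}
```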





[jira] [Commented] (CASSANDRA-7181) Remove unused method isLocalTask() in o.a.c.repair.StreamingRepairTask

2014-05-07 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13991616#comment-13991616
 ] 

Sam Tunnicliffe commented on CASSANDRA-7181:


It isn't used anywhere I'm aware of.

 Remove unused method isLocalTask() in o.a.c.repair.StreamingRepairTask
 --

 Key: CASSANDRA-7181
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7181
 Project: Cassandra
  Issue Type: Wish
Reporter: Lyuben Todorov
Assignee: Lyuben Todorov
Priority: Trivial
 Attachments: 
 cassandra-2.1-remove-o.a.c.repair.StreamingRepairTask.isLocalTask.diff


 Not sure if the method is used by any other tools (/cc [~beobal]), but based 
 on info from #cassandra-dev it is not. 





[jira] [Updated] (CASSANDRA-6973) timestamp data type does not accept ISO 8601 formats with 'Z' as time zone.

2014-05-07 Thread Chander S Pechetty (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chander S Pechetty updated CASSANDRA-6973:
--

Attachment: trunk-6973_unittest.txt
trunk-6973_v2.txt

Good catch on the missing non-ISO formats after removing 'Z'.
- 'Z' only supported the RFC 822 time zone format (-0800).
- The two-letter pattern (XX) covers RFC 822, which takes care of removing 
'Z'; patch v2 takes care of this.
- The single-letter pattern (X) covers this ticket ('Z' as well as -08).
- The pattern XXX handles -08:00.
- With the one, two and three letter patterns we cover all 4 time zone 
designators for ISO 8601 that I stated earlier in the ticket.
- I did have a test program; I moved it into a unit test and uploaded it as a 
separate patch.

On another note, maybe it's a good idea to rename iso8601Patterns in 
TimestampSerializer to something like casTimestampPatterns, which would cover 
both ISO and non-ISO dates. What do you think?
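The pattern behavior discussed above can be sanity-checked with a short standalone program (a sketch with hypothetical names, not TimestampSerializer's actual code). In Java 7's SimpleDateFormat, one X parses 'Z' and hour-only offsets (-08), XX parses RFC 822 offsets (-0800), and XXX parses colon-separated offsets (-08:00):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;

public class Iso8601ZoneDemo
{
    // Returns true if the value parses under the given pattern.
    static boolean parses(String pattern, String value)
    {
        try
        {
            new SimpleDateFormat(pattern).parse(value);
            return true;
        }
        catch (ParseException e)
        {
            return false;
        }
    }

    public static void main(String[] args)
    {
        // Single X: 'Z' and hour-only offsets.
        System.out.println(parses("yyyy-MM-dd'T'HH:mm:ssX", "2014-04-01T20:17:35Z"));      // true
        System.out.println(parses("yyyy-MM-dd'T'HH:mm:ssX", "2014-04-01T20:17:35-08"));    // true
        // Two X: RFC 822 style offsets.
        System.out.println(parses("yyyy-MM-dd'T'HH:mm:ssXX", "2014-04-01T20:17:35-0800")); // true
        // Three X: colon-separated offsets.
        System.out.println(parses("yyyy-MM-dd'T'HH:mm:ssXXX", "2014-04-01T20:17:35-08:00")); // true
    }
}
```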

 timestamp data type does not accept ISO 8601 formats with 'Z' as time zone.
 

 Key: CASSANDRA-6973
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6973
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Juho Mäkinen
Assignee: Chander S Pechetty
Priority: Trivial
 Attachments: trunk-6973.txt, trunk-6973_unittest.txt, 
 trunk-6973_v2.txt


 The timestamp data type does not support the format where the time zone is 
 specified with 'Z' (as in Zulu, aka UTC+0, aka the +0000 time zone). Example:
 create table foo(ts timestamp primary key);
 insert into foo(ts) values('2014-04-01T20:17:35+0000'); -- this works
 cqlsh:test> insert into foo(ts) values('2014-04-01T20:17:35Z');
 Bad Request: unable to coerce '2014-04-01T20:17:35Z' to a formatted date 
 (long)
 The example date was copied directly from the ISO 8601 Wikipedia page. The 
 standard says: If the time is in UTC, add a Z directly after the time 
 without a space. Z is the zone designator for the zero UTC offset.
 Tested with cqlsh on version 2.0.6.





[1/3] git commit: Set keepalive on MessagingService connections patch by Jianwei Zhang; reviewed by jbellis for CASSANDRA-7170

2014-05-07 Thread jbellis
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-1.2 f4460a55b -> c7e472e8c
  refs/heads/cassandra-2.0 0a09edc81 -> 8d4dc6d5f


 Set keepalive on MessagingService connections
patch by Jianwei Zhang; reviewed by jbellis for CASSANDRA-7170


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c7e472e8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c7e472e8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c7e472e8

Branch: refs/heads/cassandra-1.2
Commit: c7e472e8c1eb5739866e8c93957738676cc744bc
Parents: f4460a5
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue May 6 22:41:20 2014 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue May 6 22:41:20 2014 -0500

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/net/MessagingService.java | 5 +
 2 files changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7e472e8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1c6171e..8c1d234 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 1.2.17
+ * Set keepalive on MessagingService connections (CASSANDRA-7170)
  * Add Cloudstack snitch (CASSANDRA-7147)
  * Update system.peers correctly when relocating tokens (CASSANDRA-7126)
  * Add Google Compute Engine snitch (CASSANDRA-7132)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7e472e8/src/java/org/apache/cassandra/net/MessagingService.java
--
diff --git a/src/java/org/apache/cassandra/net/MessagingService.java 
b/src/java/org/apache/cassandra/net/MessagingService.java
index 5e4a117..41553b1 100644
--- a/src/java/org/apache/cassandra/net/MessagingService.java
+++ b/src/java/org/apache/cassandra/net/MessagingService.java
@@ -904,9 +904,14 @@ public final class MessagingService implements 
MessagingServiceMBean
 {
 Socket socket = server.accept();
 if (authenticate(socket))
+{
+socket.setKeepAlive(true);
 new IncomingTcpConnection(socket).start();
+}
 else
+{
 socket.close();
+}
 }
 catch (AsynchronousCloseException e)
 {



[jira] [Updated] (CASSANDRA-7184) improvement of SizeTieredCompaction

2014-05-07 Thread Jianwei Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianwei Zhang updated CASSANDRA-7184:
-

Labels: compaction  (was: )

 improvement  of  SizeTieredCompaction
 -

 Key: CASSANDRA-7184
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7184
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jianwei Zhang
Assignee: Jianwei Zhang
Priority: Minor
  Labels: compaction
   Original Estimate: 48h
  Remaining Estimate: 48h

 1. In our usage scenario there are no duplicate inserts and no deletes. The 
 data grows all the time, and some huge sstables are generated (100GB, for 
 example). We don't want these sstables to participate in 
 SizeTieredCompaction any more, so we added a max threshold, which we set to 
 100GB. Sstables larger than the threshold will not be compacted. Can this 
 strategy be added to the trunk?
 2. In our usage scenario, hundreds of sstables may need to be compacted in 
 a major compaction, with a total size of up to 5TB. So during the 
 compaction, when the amount written reaches a configured threshold (200GB, 
 for example), we switch to writing a new sstable. In this way we avoid 
 generating overly large sstables, which have some bad influences: 
  (1) they can be larger than the capacity of a disk;
  (2) if an sstable is corrupt, many objects are affected.
 Can this strategy be added to the trunk?
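The first proposal amounts to filtering compaction candidates by a maximum size before bucketing. A minimal sketch of that idea (hypothetical names and a hypothetical threshold constant, not the actual SizeTieredCompactionStrategy code):

```java
import java.util.ArrayList;
import java.util.List;

public class MaxSizeFilterDemo
{
    // Hypothetical max-sstable-size threshold, as proposed: 100GB in bytes.
    static final long MAX_COMPACTION_SIZE = 100L * 1024 * 1024 * 1024;

    // Drop sstables above the threshold from the compaction candidates.
    // Sizes stand in for sstables here; the real strategy would filter
    // SSTableReader instances by on-disk length.
    static List<Long> filterHugeSstables(List<Long> sstableSizes, long maxSize)
    {
        List<Long> candidates = new ArrayList<>();
        for (long size : sstableSizes)
            if (size <= maxSize)
                candidates.add(size);
        return candidates;
    }

    public static void main(String[] args)
    {
        List<Long> sizes = new ArrayList<>();
        sizes.add(50L * 1024 * 1024 * 1024);   // 50GB: kept
        sizes.add(120L * 1024 * 1024 * 1024);  // 120GB: excluded from compaction
        System.out.println(filterHugeSstables(sizes, MAX_COMPACTION_SIZE).size()); // prints 1
    }
}
```

The second proposal (rolling over to a new output sstable once a configured amount has been written) would similarly be a check inside the compaction write loop rather than a candidate filter.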





[jira] [Commented] (CASSANDRA-7177) Starting threads in the OutboundTcpConnectionPool constructor causes race conditions

2014-05-07 Thread Sergio Bossa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13991585#comment-13991585
 ] 

Sergio Bossa commented on CASSANDRA-7177:
-

Ok, so that's what you mean. I'm not sure those add any significant overhead 
in modern JVMs (it would be interesting to investigate), but anyway, feel free 
to add the getCount check :)

 Starting threads in the OutboundTcpConnectionPool constructor causes race 
 conditions
 

 Key: CASSANDRA-7177
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7177
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Sergio Bossa
Assignee: Sergio Bossa
 Attachments: CASSANDRA-7177.patch


 The OutboundTcpConnectionPool starts connection threads in its constructor, 
 causing race conditions when MessagingService#getConnectionPool is 
 concurrently called for the first time for a given address.
 I.e., here's one of the races:
 {noformat}
  WARN 12:49:03,182 Error processing 
 org.apache.cassandra.metrics:type=Connection,scope=127.0.0.1,name=CommandPendingTasks
 javax.management.InstanceAlreadyExistsException: 
 org.apache.cassandra.metrics:type=Connection,scope=127.0.0.1,name=CommandPendingTasks
   at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
   at 
 com.yammer.metrics.reporting.JmxReporter.registerBean(JmxReporter.java:464)
   at 
 com.yammer.metrics.reporting.JmxReporter.processGauge(JmxReporter.java:438)
   at 
 com.yammer.metrics.reporting.JmxReporter.processGauge(JmxReporter.java:16)
   at com.yammer.metrics.core.Gauge.processWith(Gauge.java:28)
   at 
 com.yammer.metrics.reporting.JmxReporter.onMetricAdded(JmxReporter.java:395)
   at 
 com.yammer.metrics.core.MetricsRegistry.notifyMetricAdded(MetricsRegistry.java:516)
   at 
 com.yammer.metrics.core.MetricsRegistry.getOrAdd(MetricsRegistry.java:491)
   at 
 com.yammer.metrics.core.MetricsRegistry.newGauge(MetricsRegistry.java:79)
   at com.yammer.metrics.Metrics.newGauge(Metrics.java:70)
   at 
 org.apache.cassandra.metrics.ConnectionMetrics.init(ConnectionMetrics.java:71)
   at 
 org.apache.cassandra.net.OutboundTcpConnectionPool.init(OutboundTcpConnectionPool.java:55)
   at 
 org.apache.cassandra.net.MessagingService.getConnectionPool(MessagingService.java:498)
 {noformat}
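The underlying problem is the classic rule against starting threads or registering external resources in a constructor: two callers can both construct a pool for the same address before either publishes it, and both register the same MBean. One common fix, sketched below with hypothetical names (not the actual CASSANDRA-7177 patch), is to keep construction side-effect free and only start the instance that wins a putIfAbsent:

```java
import java.util.concurrent.ConcurrentHashMap;

public class PoolRegistryDemo
{
    // Hypothetical pool: the constructor has no side effects; start() is
    // where threads would be spawned and MBeans registered.
    static class Pool
    {
        final String address;
        volatile boolean started;

        Pool(String address) { this.address = address; }

        void start() { started = true; }
    }

    static final ConcurrentHashMap<String, Pool> pools = new ConcurrentHashMap<>();

    static Pool getPool(String address)
    {
        Pool existing = pools.get(address);
        if (existing != null)
            return existing;

        Pool fresh = new Pool(address);            // cheap, no side effects yet
        existing = pools.putIfAbsent(address, fresh);
        if (existing != null)
            return existing;                       // lost the race; discard fresh

        fresh.start();                             // only the winner starts threads
        return fresh;
    }

    public static void main(String[] args)
    {
        Pool a = getPool("127.0.0.1");
        Pool b = getPool("127.0.0.1");
        System.out.println(a == b);        // prints true
        System.out.println(a.started);     // prints true
    }
}
```

Note a residual caveat with this shape: a reader can briefly observe the winning pool before start() completes, so callers must tolerate (or wait out) a not-yet-started pool; the actual patch may handle this differently.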





[jira] [Updated] (CASSANDRA-7183) BackgroundActivityMonitor.readAndCompute only returns half of the values

2014-05-07 Thread Dave Brosius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Brosius updated CASSANDRA-7183:


Description: 
BackgroundActivityMonitor.readAndCompute does
{code}
long[] returned = new long[tokenizer.countTokens()];
for (int i = 0; i < tokenizer.countTokens(); i++)
returned[i] = Long.parseLong(tokenizer.nextToken());
{code}
which is not only inefficient, as it counts the tokens each time through the 
loop; it's also wrong in that only the first half of the values are populated 
in the array, since the number of remaining tokens goes down by one on each 
iteration as you consume them.

switch the loop to
{code}
 for (int i = 0; i < returned.length; i++)
{code}

  was:
BackgroundActivityMonitor.readAndCompute does

long[] returned = new long[tokenizer.countTokens()];
for (int i = 0; i < tokenizer.countTokens(); i++)
returned[i] = Long.parseLong(tokenizer.nextToken());

which is not only inefficient, as it counts the tokens each time through the 
loop; it's also wrong in that only the first half of the values are populated 
in the array, since the number of remaining tokens goes down by one on each 
iteration as you consume them.

switch the loop to

 for (int i = 0; i < returned.length; i++)


 BackgroundActivityMonitor.readAndCompute only returns half of the values
 

 Key: CASSANDRA-7183
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7183
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Minor
 Fix For: 2.0.9

 Attachments: 7183.txt


 BackgroundActivityMonitor.readAndCompute does
 {code}
 long[] returned = new long[tokenizer.countTokens()];
 for (int i = 0; i < tokenizer.countTokens(); i++)
 returned[i] = Long.parseLong(tokenizer.nextToken());
 {code}
 which is not only inefficient, as it counts the tokens each time through the 
 loop; it's also wrong in that only the first half of the values are populated 
 in the array, since the number of remaining tokens goes down by one on each 
 iteration as you consume them.
 switch the loop to
 {code}
  for (int i = 0; i < returned.length; i++)
 {code}





git commit: Optimize netty server

2014-05-07 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 903069318 -> bc4b008bf


Optimize netty server

Patch by tjake; reviewed by Benedict Elliott Smith for CASSANDRA-6861


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bc4b008b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bc4b008b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bc4b008b

Branch: refs/heads/cassandra-2.1
Commit: bc4b008bf138f3542f228624b9e9a4a4301ea8b2
Parents: 9030693
Author: Jake Luciani j...@apache.org
Authored: Tue May 6 21:22:03 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Tue May 6 21:22:03 2014 -0400

--
 CHANGES.txt |  3 +-
 .../org/apache/cassandra/transport/CBUtil.java  |  9 +-
 .../org/apache/cassandra/transport/Frame.java   | 10 ++-
 .../cassandra/transport/FrameCompressor.java| 94 +++-
 .../org/apache/cassandra/transport/Message.java | 52 ---
 .../org/apache/cassandra/transport/Server.java  | 17 ++--
 6 files changed, 142 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc4b008b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 88ff5d2..6564aa6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -3,7 +3,8 @@
  * Fix bugs in supercolumns handling (CASSANDRA-7138)
 * Fix ClassCastException on composite dense tables (CASSANDRA-7112)
  * Cleanup and optimize collation and slice iterators (CASSANDRA-7107)
- * Upgrade NBHM lib (CASSANDRA-7128) 
+ * Upgrade NBHM lib (CASSANDRA-7128)
+ * Optimize netty server (CASSANDRA-6861)
 Merged from 2.0:
  * Correctly delete scheduled range xfers (CASSANDRA-7143)
  * Make batchlog replica selection rack-aware (CASSANDRA-6551)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc4b008b/src/java/org/apache/cassandra/transport/CBUtil.java
--
diff --git a/src/java/org/apache/cassandra/transport/CBUtil.java 
b/src/java/org/apache/cassandra/transport/CBUtil.java
index 36a7e71..e6ba029 100644
--- a/src/java/org/apache/cassandra/transport/CBUtil.java
+++ b/src/java/org/apache/cassandra/transport/CBUtil.java
@@ -30,12 +30,15 @@ import java.util.Map;
 import java.util.UUID;
 
 import io.netty.buffer.ByteBuf;
+import io.netty.buffer.ByteBufAllocator;
+import io.netty.buffer.PooledByteBufAllocator;
 import io.netty.buffer.Unpooled;
 import io.netty.util.AttributeKey;
 import io.netty.util.CharsetUtil;
 
 import org.apache.cassandra.db.ConsistencyLevel;
 import org.apache.cassandra.db.TypeSizes;
+import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.Pair;
 import org.apache.cassandra.utils.UUIDGen;
 
@@ -48,6 +51,9 @@ import org.apache.cassandra.utils.UUIDGen;
  */
 public abstract class CBUtil
 {
+public static final ByteBufAllocator allocator = new PooledByteBufAllocator(true);
+public static final ByteBufAllocator onHeapAllocator = new PooledByteBufAllocator(false);
+
 private CBUtil() {}
 
 private static String readString(ByteBuf cb, int length)
@@ -300,7 +306,8 @@ public abstract class CBUtil
 if (slice.nioBufferCount() > 0)
 return slice.nioBuffer();
 else
-return Unpooled.copiedBuffer(slice).nioBuffer();
+return ByteBuffer.wrap(readRawBytes(cb));
+
 }
 
 public static void writeValue(byte[] bytes, ByteBuf cb)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc4b008b/src/java/org/apache/cassandra/transport/Frame.java
--
diff --git a/src/java/org/apache/cassandra/transport/Frame.java 
b/src/java/org/apache/cassandra/transport/Frame.java
index 70fe150..bec3c96 100644
--- a/src/java/org/apache/cassandra/transport/Frame.java
+++ b/src/java/org/apache/cassandra/transport/Frame.java
@@ -55,6 +55,11 @@ public class Frame
 this.body = body;
 }
 
+public void release()
+{
+body.release();
+}
+
 public static Frame create(Message.Type type, int streamId, int version, EnumSet<Header.Flag> flags, ByteBuf body)
 {
 Header header = new Header(version, flags, streamId, type);
@@ -194,8 +199,7 @@ public class Frame
 return;
 
 // extract body
-// TODO: do we need unpooled?
-ByteBuf body = Unpooled.copiedBuffer(buffer.duplicate().slice(idx + Header.LENGTH, (int) bodyLength));
+ByteBuf body = CBUtil.allocator.buffer((int) bodyLength).writeBytes(buffer.duplicate().slice(idx + Header.LENGTH, (int) bodyLength));
 buffer.readerIndex(idx + frameLengthInt);
 
 Connection connection = 

[1/2] git commit: Optimize netty server

2014-05-07 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/trunk b768bb2d6 -> cc4127c6c


Optimize netty server

Patch by tjake; reviewed by Benedict Elliott Smith for CASSANDRA-6861


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bc4b008b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bc4b008b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bc4b008b

Branch: refs/heads/trunk
Commit: bc4b008bf138f3542f228624b9e9a4a4301ea8b2
Parents: 9030693
Author: Jake Luciani j...@apache.org
Authored: Tue May 6 21:22:03 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Tue May 6 21:22:03 2014 -0400

--
 CHANGES.txt |  3 +-
 .../org/apache/cassandra/transport/CBUtil.java  |  9 +-
 .../org/apache/cassandra/transport/Frame.java   | 10 ++-
 .../cassandra/transport/FrameCompressor.java| 94 +++-
 .../org/apache/cassandra/transport/Message.java | 52 ---
 .../org/apache/cassandra/transport/Server.java  | 17 ++--
 6 files changed, 142 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc4b008b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 88ff5d2..6564aa6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -3,7 +3,8 @@
  * Fix bugs in supercolumns handling (CASSANDRA-7138)
 * Fix ClassCastException on composite dense tables (CASSANDRA-7112)
  * Cleanup and optimize collation and slice iterators (CASSANDRA-7107)
- * Upgrade NBHM lib (CASSANDRA-7128) 
+ * Upgrade NBHM lib (CASSANDRA-7128)
+ * Optimize netty server (CASSANDRA-6861)
 Merged from 2.0:
  * Correctly delete scheduled range xfers (CASSANDRA-7143)
  * Make batchlog replica selection rack-aware (CASSANDRA-6551)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc4b008b/src/java/org/apache/cassandra/transport/CBUtil.java
--
diff --git a/src/java/org/apache/cassandra/transport/CBUtil.java 
b/src/java/org/apache/cassandra/transport/CBUtil.java
index 36a7e71..e6ba029 100644
--- a/src/java/org/apache/cassandra/transport/CBUtil.java
+++ b/src/java/org/apache/cassandra/transport/CBUtil.java
@@ -30,12 +30,15 @@ import java.util.Map;
 import java.util.UUID;
 
 import io.netty.buffer.ByteBuf;
+import io.netty.buffer.ByteBufAllocator;
+import io.netty.buffer.PooledByteBufAllocator;
 import io.netty.buffer.Unpooled;
 import io.netty.util.AttributeKey;
 import io.netty.util.CharsetUtil;
 
 import org.apache.cassandra.db.ConsistencyLevel;
 import org.apache.cassandra.db.TypeSizes;
+import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.Pair;
 import org.apache.cassandra.utils.UUIDGen;
 
@@ -48,6 +51,9 @@ import org.apache.cassandra.utils.UUIDGen;
  */
 public abstract class CBUtil
 {
+public static final ByteBufAllocator allocator = new PooledByteBufAllocator(true);
+public static final ByteBufAllocator onHeapAllocator = new PooledByteBufAllocator(false);
+
 private CBUtil() {}
 
 private static String readString(ByteBuf cb, int length)
@@ -300,7 +306,8 @@ public abstract class CBUtil
 if (slice.nioBufferCount() > 0)
 return slice.nioBuffer();
 else
-return Unpooled.copiedBuffer(slice).nioBuffer();
+return ByteBuffer.wrap(readRawBytes(cb));
+
 }
 
 public static void writeValue(byte[] bytes, ByteBuf cb)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc4b008b/src/java/org/apache/cassandra/transport/Frame.java
--
diff --git a/src/java/org/apache/cassandra/transport/Frame.java 
b/src/java/org/apache/cassandra/transport/Frame.java
index 70fe150..bec3c96 100644
--- a/src/java/org/apache/cassandra/transport/Frame.java
+++ b/src/java/org/apache/cassandra/transport/Frame.java
@@ -55,6 +55,11 @@ public class Frame
 this.body = body;
 }
 
+public void release()
+{
+body.release();
+}
+
 public static Frame create(Message.Type type, int streamId, int version, EnumSet<Header.Flag> flags, ByteBuf body)
 {
 Header header = new Header(version, flags, streamId, type);
@@ -194,8 +199,7 @@ public class Frame
 return;
 
 // extract body
-// TODO: do we need unpooled?
-ByteBuf body = Unpooled.copiedBuffer(buffer.duplicate().slice(idx + Header.LENGTH, (int) bodyLength));
+ByteBuf body = CBUtil.allocator.buffer((int) bodyLength).writeBytes(buffer.duplicate().slice(idx + Header.LENGTH, (int) bodyLength));
 buffer.readerIndex(idx + frameLengthInt);
 
 Connection connection = 

[2/2] git commit: Merge branch 'cassandra-2.1' into trunk

2014-05-07 Thread jake
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cc4127c6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cc4127c6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cc4127c6

Branch: refs/heads/trunk
Commit: cc4127c6c779c372f24a65624e86bfa2d8e68d69
Parents: b768bb2 bc4b008
Author: Jake Luciani j...@apache.org
Authored: Tue May 6 22:07:01 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Tue May 6 22:07:01 2014 -0400

--
 CHANGES.txt |  3 +-
 .../org/apache/cassandra/transport/CBUtil.java  |  9 +-
 .../org/apache/cassandra/transport/Frame.java   | 10 ++-
 .../cassandra/transport/FrameCompressor.java| 94 +++-
 .../org/apache/cassandra/transport/Message.java | 52 ---
 .../org/apache/cassandra/transport/Server.java  | 17 ++--
 6 files changed, 142 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cc4127c6/CHANGES.txt
--



[jira] [Updated] (CASSANDRA-7184) improvement of SizeTieredCompaction

2014-05-07 Thread Jianwei Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianwei Zhang updated CASSANDRA-7184:
-

Description: 
1,  In our usage scenario, there is no duplicated insert and no delete . The 
data increased all the time, and some big sstables are generated (100GB for 
example).  we don't want these sstables to participate in the 
SizeTieredCompaction any more. so we add a max threshold which is set to 100GB 
. Sstables larger than the threshold will not be compacted. Can this strategy 
be added to the trunk ?

2,  In our usage scenario, maybe hundreds of sstable need to be compacted in a 
Major Compaction. The total size would be larger to 5TB. So during the 
compaction, when the size writed reach to a configed threshhold(200GB for 
example), it switch to write a new sstable. In this way, we avoid to generate 
too huge sstables. Too huge sstable have some bad infuence: 
 (1) It will be larger than the capacity of a disk;
 (2) If the sstable is corrupt, lots of objects will be influenced .
Can this strategy be added to the trunk ?

  was:
1. In our usage scenario there are no duplicate inserts and no deletes. The 
data grows all the time, and some huge sstables are generated (100GB, for 
example). We don't want these sstables to participate in SizeTieredCompaction 
any more, so we add a max threshold, which we set to 100GB. Sstables larger 
than the threshold will not be compacted. Can this strategy be added to the 
trunk?

2. In our usage scenario, hundreds of sstables may need to be compacted in a 
major compaction, with a total size of up to 5TB. So during the compaction, 
when the amount written reaches a configured threshold (200GB, for example), 
we switch to writing a new sstable. In this way we avoid generating overly 
large sstables, which have some bad influences: 
 (1) they can be larger than the capacity of a disk;
 (2) if an sstable is corrupt, many objects are affected.
Can this strategy be added to the trunk?


 improvement  of  SizeTieredCompaction
 -

 Key: CASSANDRA-7184
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7184
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jianwei Zhang
Assignee: Jianwei Zhang
Priority: Minor
  Labels: compaction
   Original Estimate: 48h
  Remaining Estimate: 48h

 1. In our usage scenario there are no duplicate inserts and no deletes. The 
 data grows all the time, and some big sstables are generated (100GB, for 
 example). We don't want these sstables to participate in 
 SizeTieredCompaction any more, so we add a max threshold, which is set to 
 100GB. Sstables larger than the threshold will not be compacted. Can this 
 strategy be added to the trunk?
 2. In our usage scenario, hundreds of sstables may need to be compacted in 
 a major compaction, with a total size of up to 5TB. So during the 
 compaction, when the amount written reaches a configured threshold (200GB, 
 for example), we switch to writing a new sstable. In this way we avoid 
 generating overly large sstables, which have some bad influences: 
  (1) they can be larger than the capacity of a disk;
  (2) if an sstable is corrupt, many objects are affected.
 Can this strategy be added to the trunk?





[2/2] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-05-07 Thread dbrosius
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b4a3b520
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b4a3b520
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b4a3b520

Branch: refs/heads/cassandra-2.0
Commit: b4a3b52076e221f3fa7c65a70c7c4ddec439689c
Parents: 8d4dc6d 0132e54
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Wed May 7 01:37:48 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Wed May 7 01:37:48 2014 -0400

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/service/StorageService.java | 5 +++--
 2 files changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b4a3b520/CHANGES.txt
--
diff --cc CHANGES.txt
index d65a694,d7b7f00..517f0ab
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -26,69 -15,15 +26,70 @@@ Merged from 1.2
   * Fix CQLSH parsing of functions and BLOB literals (CASSANDRA-7018)
   * Require nodetool rebuild_index to specify index names (CASSANDRA-7038)
   * Ensure that batchlog and hint timeouts do not produce hints 
(CASSANDRA-7058)
 - * Don't shut MessagingService down when replacing a node (CASSANDRA-6476)
   * Always clean up references in SerializingCache (CASSANDRA-6994)
 + * Don't shut MessagingService down when replacing a node (CASSANDRA-6476)
   * fix npe when doing -Dcassandra.fd_initial_value_ms (CASSANDRA-6751)
   * Preserves CQL metadata when updating table from thrift (CASSANDRA-6831)
 - * fix time conversion to milliseconds in SimpleCondition.await 
(CASSANDRA-7149)
+  * remove duplicate query for local tokens (CASSANDRA-7182)
  
  
 -1.2.16
 +2.0.7
 + * Put nodes in hibernate when join_ring is false (CASSANDRA-6961)
 + * Continue assassinating even if the endpoint vanishes (CASSANDRA-6787)
 + * Non-droppable verbs shouldn't be dropped from OTC (CASSANDRA-6980)
 + * Shutdown batchlog executor in SS#drain() (CASSANDRA-7025)
 + * Schedule schema pulls on change (CASSANDRA-6971)
 + * Avoid early loading of non-system keyspaces before compaction-leftovers 
 +   cleanup at startup (CASSANDRA-6913)
 + * Restrict Windows to parallel repairs (CASSANDRA-6907)
 + * (Hadoop) Allow manually specifying start/end tokens in CFIF 
(CASSANDRA-6436)
 + * Fix NPE in MeteredFlusher (CASSANDRA-6820)
 + * Fix race processing range scan responses (CASSANDRA-6820)
 + * Allow deleting snapshots from dropped keyspaces (CASSANDRA-6821)
 + * Add uuid() function (CASSANDRA-6473)
 + * Omit tombstones from schema digests (CASSANDRA-6862)
 + * Include correct consistencyLevel in LWT timeout (CASSANDRA-6884)
 + * Lower chances for losing new SSTables during nodetool refresh and
 +   ColumnFamilyStore.loadNewSSTables (CASSANDRA-6514)
 + * Add support for DELETE ... IF EXISTS to CQL3 (CASSANDRA-5708)
 + * Update hadoop_cql3_word_count example (CASSANDRA-6793)
 + * Fix handling of RejectedExecution in sync Thrift server (CASSANDRA-6788)
 + * Log more information when exceeding tombstone_warn_threshold 
(CASSANDRA-6865)
 + * Fix truncate to not abort due to unreachable fat clients (CASSANDRA-6864)
 + * Fix schema concurrency exceptions (CASSANDRA-6841)
 + * Fix leaking validator FH in StreamWriter (CASSANDRA-6832)
 + * Fix saving triggers to schema (CASSANDRA-6789)
 + * Fix trigger mutations when base mutation list is immutable (CASSANDRA-6790)
 + * Fix accounting in FileCacheService to allow re-using RAR (CASSANDRA-6838)
 + * Fix static counter columns (CASSANDRA-6827)
 + * Restore expiring-deleted (cell) compaction optimization (CASSANDRA-6844)
 + * Fix CompactionManager.needsCleanup (CASSANDRA-6845)
 + * Correctly compare BooleanType values other than 0 and 1 (CASSANDRA-6779)
 + * Read message id as string from earlier versions (CASSANDRA-6840)
 + * Properly use the Paxos consistency for (non-protocol) batch (CASSANDRA-6837)
 + * Add paranoid disk failure option (CASSANDRA-6646)
 + * Improve PerRowSecondaryIndex performance (CASSANDRA-6876)
 + * Extend triggers to support CAS updates (CASSANDRA-6882)
 + * Static columns with IF NOT EXISTS don't always work as expected (CASSANDRA-6873)
 + * Fix paging with SELECT DISTINCT (CASSANDRA-6857)
 + * Fix UnsupportedOperationException on CAS timeout (CASSANDRA-6923)
 + * Improve MeteredFlusher handling of MF-unaffected column families
 +   (CASSANDRA-6867)
 + * Add CqlRecordReader using native pagination (CASSANDRA-6311)
 + * Add QueryHandler interface (CASSANDRA-6659)
 + * Track liveRatio per-memtable, not per-CF (CASSANDRA-6945)
 + * Make sure upgradesstables keeps sstable level (CASSANDRA-6958)
 + * Fix LIMIT with static columns (CASSANDRA-6956)
 + * Fix clash with CQL column name in 

[3/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-05-07 Thread jbellis
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8d4dc6d5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8d4dc6d5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8d4dc6d5

Branch: refs/heads/cassandra-2.0
Commit: 8d4dc6d5f2db5d19476d79c8b56e7e3d2f61e2d5
Parents: 0a09edc c7e472e
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue May 6 22:41:30 2014 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue May 6 22:41:30 2014 -0500

--

--




[jira] [Commented] (CASSANDRA-6861) Optimise our Netty 4 integration

2014-05-07 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13991485#comment-13991485
 ] 

Aleksey Yeschenko commented on CASSANDRA-6861:
--

Because, IMO, it being too slow is a security issue in itself, if it causes 
people to switch back to unencrypted transport.

 Optimise our Netty 4 integration
 

 Key: CASSANDRA-6861
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6861
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: T Jake Luciani
Priority: Minor
  Labels: performance
 Fix For: 2.1 rc1


 Now we've upgraded to Netty 4, we're generating a lot of garbage that could 
 be avoided, so we should probably stop that. Should be reasonably easy to 
 hook into Netty's pooled buffers, returning them to the pool once a given 
 message is completed.
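The pool-and-release idea the ticket describes can be sketched as a simple free-list. This is an illustrative plain-Java sketch, not Netty's actual `ByteBufAllocator`/`ByteBuf` API: buffers are returned to the pool once the message using them is completed, instead of being garbage-collected.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;

// Minimal buffer-pool sketch (hypothetical; Netty's real pooled buffers are
// reference-counted and arena-based). Reusing buffers avoids per-message garbage.
final class BufferPool
{
    private final ArrayDeque<ByteBuffer> free = new ArrayDeque<>();
    private final int bufferSize;

    BufferPool(int bufferSize)
    {
        this.bufferSize = bufferSize;
    }

    // Hand out a pooled buffer if one is free, otherwise allocate a new one.
    ByteBuffer acquire()
    {
        ByteBuffer buf = free.poll();
        return buf != null ? buf : ByteBuffer.allocateDirect(bufferSize);
    }

    // Called once the message using the buffer is completed.
    void release(ByteBuffer buf)
    {
        buf.clear();
        free.push(buf);
    }
}
```

A released buffer is handed back out on the next acquire, so steady-state message processing allocates nothing.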



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[1/2] git commit: remove duplicate queries for local tokens

2014-05-07 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 8d4dc6d5f -> b4a3b5207


remove duplicate queries for local tokens

patch by dbrosius reviewed by ayeschenko for cassandra-7182


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0132e546
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0132e546
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0132e546

Branch: refs/heads/cassandra-2.0
Commit: 0132e546b55b67f68fca230c9e0ca1ccef6aa273
Parents: c7e472e
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Wed May 7 01:34:02 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Wed May 7 01:34:02 2014 -0400

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/service/StorageService.java | 5 +++--
 2 files changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0132e546/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8c1d234..d7b7f00 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -20,6 +20,7 @@
  * fix npe when doing -Dcassandra.fd_initial_value_ms (CASSANDRA-6751)
  * Preserves CQL metadata when updating table from thrift (CASSANDRA-6831)
 * fix time conversion to milliseconds in SimpleCondition.await (CASSANDRA-7149)
+ * remove duplicate query for local tokens (CASSANDRA-7182)
 
 
 1.2.16

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0132e546/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java b/src/java/org/apache/cassandra/service/StorageService.java
index ed6d031..7cecec9 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -209,8 +209,9 @@ public class StorageService extends NotificationBroadcasterSupport implements IE
         SystemTable.updateTokens(tokens);
         tokenMetadata.updateNormalTokens(tokens, FBUtilities.getBroadcastAddress());
         // order is important here, the gossiper can fire in between adding these two states.  It's ok to send TOKENS without STATUS, but *not* vice versa.
-        Gossiper.instance.addLocalApplicationState(ApplicationState.TOKENS, valueFactory.tokens(getLocalTokens()));
-        Gossiper.instance.addLocalApplicationState(ApplicationState.STATUS, valueFactory.normal(getLocalTokens()));
+        Collection<Token> localTokens = getLocalTokens();
+        Gossiper.instance.addLocalApplicationState(ApplicationState.TOKENS, valueFactory.tokens(localTokens));
+        Gossiper.instance.addLocalApplicationState(ApplicationState.STATUS, valueFactory.normal(localTokens));
         setMode(Mode.NORMAL, false);
     }
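The effect of the refactor above can be shown with a small stand-alone sketch (hypothetical class and names; the real `getLocalTokens()` issues `SELECT tokens FROM system.local WHERE key='local'`): hoisting the repeated lookup into a local variable halves the number of queries.

```java
import java.util.Collection;
import java.util.List;

// Illustration of the CASSANDRA-7182 refactor: query once, reuse the result.
final class LocalTokenPublisher
{
    int queries = 0;

    // Stand-in for the expensive lookup; each call costs one query.
    Collection<String> getLocalTokens()
    {
        queries++;
        return List.of("token-1", "token-2");
    }

    // Before the patch: the same query is issued twice, back to back.
    int publishBefore()
    {
        Collection<String> forTokensState = getLocalTokens();
        Collection<String> forStatusState = getLocalTokens();
        return queries;
    }

    // After the patch: hoist the call into a local and reuse it for both states.
    int publishAfter()
    {
        Collection<String> localTokens = getLocalTokens();
        Collection<String> forTokensState = localTokens;
        Collection<String> forStatusState = localTokens;
        return queries;
    }
}
```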
 



[jira] [Updated] (CASSANDRA-7184) improvement of SizeTieredCompaction

2014-05-07 Thread Jianwei Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianwei Zhang updated CASSANDRA-7184:
-

Description: 
1. In our usage scenario there are no duplicate inserts and no deletes. The 
data grows all the time, and some big sstables are generated (100GB, for 
example). We don't want these sstables to participate in SizeTieredCompaction 
any more, so we added a max threshold, set to 100GB; sstables larger than the 
threshold will not be compacted. Should this strategy be added to the trunk?

2. In our usage scenario, hundreds of sstables may need to be compacted in a 
major compaction, with a total size as large as 5TB. So during the compaction, 
when the size written reaches a configured threshold (200GB, for example), we 
switch to writing a new sstable. This way we avoid generating overly large 
sstables, which have some bad influences:
 (1) one can be larger than the capacity of a disk;
 (2) if an sstable is corrupt, lots of objects are affected.
Should this strategy be added to the trunk?
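The first proposal reduces to a size filter over compaction candidates. A hedged sketch (the threshold name and value are illustrative, not actual Cassandra configuration): sstables larger than the cutoff are simply excluded from the candidate set.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the proposed max-size cutoff for SizeTieredCompaction candidates.
// Sizes are in bytes; the 100GB value mirrors the example in the ticket.
final class MaxSizeFilter
{
    static final long MAX_COMPACTION_BYTES = 100L * 1024 * 1024 * 1024; // 100GB

    // Keep only sstables at or below the threshold; larger ones are left alone.
    static List<Long> compactionCandidates(List<Long> sstableSizes)
    {
        return sstableSizes.stream()
                           .filter(size -> size <= MAX_COMPACTION_BYTES)
                           .collect(Collectors.toList());
    }
}
```

The second proposal (switching to a new output sstable once a configured amount has been written) would live in the compaction writer rather than candidate selection, but follows the same threshold pattern.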

  was:
1. In our usage scenario there are no duplicate inserts and no deletes. The 
data grows all the time, and some big sstables are generated (100GB, for 
example). We don't want these sstables to participate in SizeTieredCompaction 
any more, so we added a max threshold, set to 100GB; sstables larger than the 
threshold will not be compacted. Can this strategy be added to the trunk?

2. In our usage scenario, hundreds of sstables may need to be compacted in a 
major compaction, with a total size as large as 5TB. So during the compaction, 
when the size written reaches a configured threshold (200GB, for example), we 
switch to writing a new sstable. This way we avoid generating overly large 
sstables, which have some bad influences:
 (1) one can be larger than the capacity of a disk;
 (2) if an sstable is corrupt, lots of objects are affected.
Can this strategy be added to the trunk?


 improvement  of  SizeTieredCompaction
 -

 Key: CASSANDRA-7184
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7184
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jianwei Zhang
Assignee: Jianwei Zhang
Priority: Minor
  Labels: compaction
   Original Estimate: 48h
  Remaining Estimate: 48h

 1. In our usage scenario there are no duplicate inserts and no deletes. The 
 data grows all the time, and some big sstables are generated (100GB, for 
 example). We don't want these sstables to participate in SizeTieredCompaction 
 any more, so we added a max threshold, set to 100GB; sstables larger than the 
 threshold will not be compacted. Should this strategy be added to the trunk?
 2. In our usage scenario, hundreds of sstables may need to be compacted in a 
 major compaction, with a total size as large as 5TB. So during the compaction, 
 when the size written reaches a configured threshold (200GB, for example), we 
 switch to writing a new sstable. This way we avoid generating overly large 
 sstables, which have some bad influences:
  (1) one can be larger than the capacity of a disk;
  (2) if an sstable is corrupt, lots of objects are affected.
 Should this strategy be added to the trunk?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6861) Optimise our Netty 4 integration

2014-05-07 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13991484#comment-13991484
 ] 

Aleksey Yeschenko commented on CASSANDRA-6861:
--

I guess it depends on how much the JDK's SSLEngine really sucks (and I suspect 
it does, a lot). Either way, we should at least create a ticket so that we 
don't forget about it, and switch to Netty's built-in SslEngine once it's 
available.

 Optimise our Netty 4 integration
 

 Key: CASSANDRA-6861
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6861
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: T Jake Luciani
Priority: Minor
  Labels: performance
 Fix For: 2.1 rc1


 Now we've upgraded to Netty 4, we're generating a lot of garbage that could 
 be avoided, so we should probably stop that. Should be reasonably easy to 
 hook into Netty's pooled buffers, returning them to the pool once a given 
 message is completed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: remove duplicate queries for local tokens

2014-05-07 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-1.2 c7e472e8c -> 0132e546b


remove duplicate queries for local tokens

patch by dbrosius reviewed by ayeschenko for cassandra-7182


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0132e546
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0132e546
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0132e546

Branch: refs/heads/cassandra-1.2
Commit: 0132e546b55b67f68fca230c9e0ca1ccef6aa273
Parents: c7e472e
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Wed May 7 01:34:02 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Wed May 7 01:34:02 2014 -0400

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/service/StorageService.java | 5 +++--
 2 files changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0132e546/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8c1d234..d7b7f00 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -20,6 +20,7 @@
  * fix npe when doing -Dcassandra.fd_initial_value_ms (CASSANDRA-6751)
  * Preserves CQL metadata when updating table from thrift (CASSANDRA-6831)
 * fix time conversion to milliseconds in SimpleCondition.await (CASSANDRA-7149)
+ * remove duplicate query for local tokens (CASSANDRA-7182)
 
 
 1.2.16

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0132e546/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java b/src/java/org/apache/cassandra/service/StorageService.java
index ed6d031..7cecec9 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -209,8 +209,9 @@ public class StorageService extends NotificationBroadcasterSupport implements IE
         SystemTable.updateTokens(tokens);
         tokenMetadata.updateNormalTokens(tokens, FBUtilities.getBroadcastAddress());
         // order is important here, the gossiper can fire in between adding these two states.  It's ok to send TOKENS without STATUS, but *not* vice versa.
-        Gossiper.instance.addLocalApplicationState(ApplicationState.TOKENS, valueFactory.tokens(getLocalTokens()));
-        Gossiper.instance.addLocalApplicationState(ApplicationState.STATUS, valueFactory.normal(getLocalTokens()));
+        Collection<Token> localTokens = getLocalTokens();
+        Gossiper.instance.addLocalApplicationState(ApplicationState.TOKENS, valueFactory.tokens(localTokens));
+        Gossiper.instance.addLocalApplicationState(ApplicationState.STATUS, valueFactory.normal(localTokens));
         setMode(Mode.NORMAL, false);
     }
 



[2/3] git commit: Set keepalive on MessagingService connections patch by Jianwei Zhang; reviewed by jbellis for CASSANDRA-7170

2014-05-07 Thread jbellis
 Set keepalive on MessagingService connections
patch by Jianwei Zhang; reviewed by jbellis for CASSANDRA-7170


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c7e472e8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c7e472e8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c7e472e8

Branch: refs/heads/cassandra-2.0
Commit: c7e472e8c1eb5739866e8c93957738676cc744bc
Parents: f4460a5
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue May 6 22:41:20 2014 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue May 6 22:41:20 2014 -0500

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/net/MessagingService.java | 5 +
 2 files changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7e472e8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1c6171e..8c1d234 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 1.2.17
+ * Set keepalive on MessagingService connections (CASSANDRA-7170)
  * Add Cloudstack snitch (CASSANDRA-7147)
  * Update system.peers correctly when relocating tokens (CASSANDRA-7126)
  * Add Google Compute Engine snitch (CASSANDRA-7132)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7e472e8/src/java/org/apache/cassandra/net/MessagingService.java
--
diff --git a/src/java/org/apache/cassandra/net/MessagingService.java b/src/java/org/apache/cassandra/net/MessagingService.java
index 5e4a117..41553b1 100644
--- a/src/java/org/apache/cassandra/net/MessagingService.java
+++ b/src/java/org/apache/cassandra/net/MessagingService.java
@@ -904,9 +904,14 @@ public final class MessagingService implements MessagingServiceMBean
 {
 Socket socket = server.accept();
 if (authenticate(socket))
+{
+socket.setKeepAlive(true);
 new IncomingTcpConnection(socket).start();
+}
 else
+{
 socket.close();
+}
 }
 catch (AsynchronousCloseException e)
 {
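The one-line change above can be exercised in isolation. This is a self-contained sketch using an ephemeral loopback socket, not the actual MessagingService accept loop:

```java
import java.net.ServerSocket;
import java.net.Socket;

// Demonstrates enabling SO_KEEPALIVE on an accepted socket, as the
// CASSANDRA-7170 patch does after authenticating an incoming connection.
final class KeepAliveDemo
{
    static boolean acceptWithKeepAlive()
    {
        try (ServerSocket server = new ServerSocket(0); // ephemeral port
             Socket client = new Socket("127.0.0.1", server.getLocalPort());
             Socket accepted = server.accept())
        {
            accepted.setKeepAlive(true); // the added line
            return accepted.getKeepAlive();
        }
        catch (Exception e)
        {
            return false;
        }
    }
}
```

With keepalive enabled, the OS periodically probes idle connections, so a peer that died without closing the socket is eventually detected instead of holding the connection open indefinitely.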



[jira] [Commented] (CASSANDRA-7182) no need to query for local tokens twice in a row

2014-05-07 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13991545#comment-13991545
 ] 

Aleksey Yeschenko commented on CASSANDRA-7182:
--

+1

 no need to query for local tokens twice in a row
 

 Key: CASSANDRA-7182
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7182
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
 Fix For: 1.2.17

 Attachments: 7182.txt


 StorageService.setTokens issues
  SELECT tokens FROM system.local WHERE key='local' 
 back to back, just do it once.



--
This message was sent by Atlassian JIRA
(v6.2#6252)




[jira] [Commented] (CASSANDRA-7168) Add repair aware consistency levels

2014-05-07 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13991658#comment-13991658
 ] 

Sylvain Lebresne commented on CASSANDRA-7168:
-

I hope we'll get aggregations for 3.0, and it may well be that this will 
provide a good boost in that case. But I wouldn't mind getting aggregation 
first, and then trying this to see if it actually helps, rather than doing it 
first on the assumption it might help later.

 Add repair aware consistency levels
 ---

 Key: CASSANDRA-7168
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7168
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: T Jake Luciani
  Labels: performance
 Fix For: 3.0


 With CASSANDRA-5351 and CASSANDRA-2424 I think there is an opportunity to 
 avoid a lot of extra disk I/O when running queries with higher consistency 
 levels.  
 Since repaired data is by definition consistent and we know which sstables 
 are repaired, we can optimize the read path by having a REPAIRED_QUORUM which 
 breaks reads into two phases:
  
   1) Read from one replica the result from the repaired sstables. 
   2) Read from a quorum only the un-repaired data.
 For the node performing 1) we can pipeline the call so it's a single hop.
 In the long run (assuming data is repaired regularly) we will end up with 
 much closer to CL.ONE performance while maintaining consistency.
 Some things to figure out:
   - If repairs fail on some nodes we can have a situation where we don't have 
 a consistent repaired state across the replicas.  
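The two-phase read described in the ticket can be sketched as a merge of the two result sets. This is purely illustrative (none of these names exist in Cassandra, and conflict resolution is elided): phase 1 reads repaired data from a single replica, phase 2 reconciles only the un-repaired data from a quorum.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hedged sketch of a REPAIRED_QUORUM-style read. Repaired data is consistent
// by definition, so one replica's answer suffices for that portion; only the
// un-repaired data needs quorum reconciliation.
final class RepairedQuorumRead
{
    static Map<String, String> read(Map<String, String> repairedFromOneReplica,
                                    List<Map<String, String>> unrepairedByReplica)
    {
        // Phase 1: take the repaired result from a single replica.
        Map<String, String> result = new HashMap<>(repairedFromOneReplica);

        // Phase 2: merge un-repaired data from the quorum of replicas
        // (real reconciliation would compare timestamps; elided here).
        for (Map<String, String> replica : unrepairedByReplica)
            result.putAll(replica);
        return result;
    }
}
```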
   



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7184) improvement of SizeTieredCompaction

2014-05-07 Thread Jianwei Zhang (JIRA)
Jianwei Zhang created CASSANDRA-7184:


 Summary: improvement  of  SizeTieredCompaction
 Key: CASSANDRA-7184
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7184
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jianwei Zhang
Assignee: Jianwei Zhang
Priority: Minor


1. In our usage scenario there are no duplicate inserts and no deletes. The 
data grows all the time, and some huge sstables are generated (100GB, for 
example). We don't want these sstables to participate in SizeTieredCompaction 
any more, so we added a max threshold, set to 100GB; sstables larger than the 
threshold will not be compacted. Can this strategy be added to the trunk?

2. In our usage scenario, hundreds of sstables may need to be compacted in a 
major compaction, with a total size as large as 5TB. So during the compaction, 
when the size written reaches a configured threshold (200GB, for example), we 
switch to writing a new sstable. This way we avoid generating overly large 
sstables, which have some bad influences:
 (1) one can be larger than the capacity of a disk;
 (2) if an sstable is corrupt, lots of objects are affected.
Can this strategy be added to the trunk?



--
This message was sent by Atlassian JIRA
(v6.2#6252)