[jira] [Updated] (CASSANDRA-11393) dtest failure in upgrade_tests.upgrade_through_versions_test.ProtoV3Upgrade_2_1_UpTo_3_0_HEAD.rolling_upgrade_test

2016-03-30 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11393:
-
Assignee: Benjamin Lerer

> dtest failure in 
> upgrade_tests.upgrade_through_versions_test.ProtoV3Upgrade_2_1_UpTo_3_0_HEAD.rolling_upgrade_test
> --
>
> Key: CASSANDRA-11393
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11393
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: Benjamin Lerer
>  Labels: dtest
> Fix For: 3.0.x
>
>
> We are seeing a failure in the upgrade tests that go from 2.1 to 3.0
> {code}
> node2: ERROR [SharedPool-Worker-2] 2016-03-10 20:05:17,865 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xeb79b477, 
> /127.0.0.1:39613 => /127.0.0.2:9042]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyReadCommandSerializer.serializedSize(ReadCommand.java:1208)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyReadCommandSerializer.serializedSize(ReadCommand.java:1155)
>  ~[main/:na]
>   at org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) 
> ~[main/:na]
>   at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:609)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:758)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:701) 
> ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:684)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:110)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:85)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor$AlwaysSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:330)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1699)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1654) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1601) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1520) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.SinglePartitionReadCommand.execute(SinglePartitionReadCommand.java:302)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:67)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.pager.SinglePartitionPager.fetchPage(SinglePartitionPager.java:34)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement$Pager$NormalPager.fetchPage(SelectStatement.java:297)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:333)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:209)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:76)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:472)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:449)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:130)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [main/:na]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  

[jira] [Updated] (CASSANDRA-11391) "class declared as inner class" error when using UDF

2016-03-30 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11391:
-
Reviewer: Tyler Hobbs

> "class declared as inner class" error when using UDF
> 
>
> Key: CASSANDRA-11391
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11391
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: C* 3.4
>Reporter: DOAN DuyHai
>Assignee: Robert Stupp
>Priority: Critical
> Fix For: 3.x
>
>
> {noformat}
> cqlsh:music> CREATE FUNCTION testMapEntry(my_map map<text, text>)
>  ... CALLED ON NULL INPUT
>  ... RETURNS text
>  ... LANGUAGE java
>  ... AS $$
>  ... String buffer = "";
>  ... for(java.util.Map.Entry<String, String> entry: 
> my_map.entrySet()) {
>  ... buffer = buffer + entry.getKey() + ": " + 
> entry.getValue() + ", ";
>  ... }
>  ... return buffer;
>  ... $$;
> InvalidRequest: code=2200 [Invalid query] 
> message="Could not compile function 'music.testmapentry' from Java source: 
> org.apache.cassandra.exceptions.InvalidRequestException: 
> Java UDF validation failed: [class declared as inner class]"
> {noformat}
> When I disassemble the compiled function into byte code, below is the result:
> {noformat}
>   public java.lang.String test(java.util.Map<java.lang.String, java.lang.String>);
> Code:
>0: ldc   #2  // String
>2: astore_2
>3: aload_1
>4: invokeinterface #3,  1// InterfaceMethod 
> java/util/Map.entrySet:()Ljava/util/Set;
>9: astore_3
>   10: aload_3
>   11: invokeinterface #4,  1// InterfaceMethod 
> java/util/Set.iterator:()Ljava/util/Iterator;
>   16: astore4
>   18: aload 4
>   20: invokeinterface #5,  1// InterfaceMethod 
> java/util/Iterator.hasNext:()Z
>   25: ifeq  94
>   28: aload 4
>   30: invokeinterface #6,  1// InterfaceMethod 
> java/util/Iterator.next:()Ljava/lang/Object;
>   35: checkcast #7  // class java/util/Map$Entry
>   38: astore5
>   40: new   #8  // class java/lang/StringBuilder
>   43: dup
>   44: invokespecial #9  // Method 
> java/lang/StringBuilder."<init>":()V
>   47: aload_2
>   48: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   51: aload 5
>   53: invokeinterface #11,  1   // InterfaceMethod 
> java/util/Map$Entry.getKey:()Ljava/lang/Object;
>   58: checkcast #12 // class java/lang/String
>   61: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   64: ldc   #13 // String :
>   66: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   69: aload 5
>   71: invokeinterface #14,  1   // InterfaceMethod 
> java/util/Map$Entry.getValue:()Ljava/lang/Object;
>   76: checkcast #12 // class java/lang/String
>   79: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   82: ldc   #15 // String ,
>   84: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   87: invokevirtual #16 // Method 
> java/lang/StringBuilder.toString:()Ljava/lang/String;
>   90: astore_2
>   91: goto  18
>   94: aload_2
>   95: areturn
> {noformat}
>  There is nothing that could trigger inner class creation ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11448) Running OOS should trigger the disk failure policy

2016-03-29 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15215636#comment-15215636
 ] 

Sylvain Lebresne commented on CASSANDRA-11448:
--

FYI, it would be nice to avoid acronyms, at least on first use in ticket 
descriptions. I personally have no clue what OOS stands for, so I assume our 
average user won't either, and this makes searches useless.

> Running OOS should trigger the disk failure policy
> --
>
> Key: CASSANDRA-11448
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11448
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Brandon Williams
>Assignee: Branimir Lambov
> Fix For: 2.1.x, 2.2.x, 3.0.x
>
>
> Currently when you run OOS, this happens:
> {noformat}
> ERROR [MemtableFlushWriter:8561] 2016-03-28 01:17:37,047  
> CassandraDaemon.java:229 - Exception in thread 
> Thread[MemtableFlushWriter:8561,5,main]   java.lang.RuntimeException: 
> Insufficient disk space to write 48 bytes 
> at 
> org.apache.cassandra.io.util.DiskAwareRunnable.getWriteDirectory(DiskAwareRunnable.java:29)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:332) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
>  ~[guava-16.0.1.jar:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1120)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_66]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_66]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66]
> {noformat}
> Now your flush writer is dead and postflush tasks build up forever.  Instead 
> we should throw FSWE and trigger the failure policy.
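A minimal sketch of the suggested behavior (hypothetical names; Cassandra's real disk failure policy machinery is more involved): the flush wraps the out-of-space condition in a filesystem-error type that a central handler routes to the configured policy, so the flush worker thread survives instead of dying on a bare RuntimeException.

```java
// Standalone sketch, NOT Cassandra's actual classes: FlushSketch,
// handleFSError, and the policy enum are illustrative names only.
public class FlushSketch {
    static class FSWriteError extends RuntimeException {
        FSWriteError(String msg) { super(msg); }
    }

    enum DiskFailurePolicy { IGNORE, STOP, DIE }

    static DiskFailurePolicy policy = DiskFailurePolicy.STOP;
    static boolean nativeTransportStopped = false;

    // Central handler: the policy decides how the node reacts
    // (e.g. stop accepting client requests), not the flush thread.
    static void handleFSError(FSWriteError e) {
        if (policy == DiskFailurePolicy.STOP)
            nativeTransportStopped = true;
    }

    static void getWriteDirectory(long required, long available) {
        if (available < required)
            throw new FSWriteError("Insufficient disk space to write " + required + " bytes");
    }

    public static boolean flush(long required, long available) {
        try {
            getWriteDirectory(required, available);
            return true; // flushed successfully
        } catch (FSWriteError e) {
            handleFSError(e);
            return false; // this flush failed, but the worker thread is still alive
        }
    }
}
```

The key difference from the stack trace above: the exception is caught and routed, so the MemtableFlushWriter thread is not killed and postflush tasks do not pile up behind a dead worker.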





[jira] [Commented] (CASSANDRA-11443) Prevent (or warn) changing clustering order with ALTER TABLE when data already exists

2016-03-29 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15215630#comment-15215630
 ] 

Sylvain Lebresne commented on CASSANDRA-11443:
--

Could you share reproduction steps and/or verify this reproduces in 2.1+? 
I'm pretty sure we forbid that kind of change, so either that validation is 
just not in 2.0, or there is a specific case we've missed and repro steps 
would be useful.

> Prevent (or warn) changing clustering order with ALTER TABLE when data 
> already exists
> -
>
> Key: CASSANDRA-11443
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11443
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction, CQL
>Reporter: Erick Ramirez
> Fix For: 2.1.x, 2.2.x, 3.0.x
>
>
> Inexperienced DBAs get caught out on certain schema changes thinking that 
> Cassandra will automatically retrofit/convert the existing data on disk.
> We should prevent users from changing the clustering order on existing tables, 
> or they will run into compaction/read issues such as the following (example 
> from Cassandra 2.0.14):
> {noformat}
> ERROR [CompactionExecutor:6488] 2015-07-14 19:33:14,247 CassandraDaemon.java 
> (line 258) Exception in thread Thread[CompactionExecutor:6488,1,main] 
> java.lang.AssertionError: Added column does not sort as the last column 
> at 
> org.apache.cassandra.db.ArrayBackedSortedColumns.addColumn(ArrayBackedSortedColumns.java:116)
>  
> at org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:121) 
> at org.apache.cassandra.db.ColumnFamily.addAtom(ColumnFamily.java:155) 
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:186)
>  
> at 
> org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:98)
>  
> at 
> org.apache.cassandra.db.compaction.PrecompactedRow.<init>(PrecompactedRow.java:85)
>  
> at 
> org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:196)
>  
> at 
> org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:74)
>  
> at 
> org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:55)
>  
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
>  
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
>  
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:164)
>  
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
>  
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
>  
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> At the very least, we should report a warning advising users about possible 
> problems when changing the clustering order if the table is not empty.
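A minimal sketch of why the assertion fires (hypothetical and heavily simplified): cells on disk were written sorted under the old clustering order, so appending them under a reversed comparator violates the "added column sorts as the last column" invariant that the column container enforces.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative only: ClusteringOrderSketch is not Cassandra code. It models
// ArrayBackedSortedColumns' invariant that each appended cell must sort
// strictly after the previous one under the table's current comparator.
public class ClusteringOrderSketch {
    static boolean addAllSorted(List<Integer> cf, List<Integer> onDisk, Comparator<Integer> cmp) {
        for (Integer cell : onDisk) {
            if (!cf.isEmpty() && cmp.compare(cf.get(cf.size() - 1), cell) >= 0)
                return false; // models: AssertionError "Added column does not sort as the last column"
            cf.add(cell);
        }
        return true;
    }

    public static boolean readWith(Comparator<Integer> cmp) {
        List<Integer> onDisk = List.of(1, 2, 3); // written under ASC clustering order
        return addAllSorted(new ArrayList<>(), onDisk, cmp);
    }
}
```

Reading the ascending-ordered cells back with the natural comparator succeeds; reading them with a reversed comparator (the effect of altering the clustering order) trips the invariant on the very second cell.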





[jira] [Commented] (CASSANDRA-8969) Add indication in cassandra.yaml that rpc timeouts going too high will cause memory build up

2016-03-23 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15208605#comment-15208605
 ] 

Sylvain Lebresne commented on CASSANDRA-8969:
-

bq. you could potentially have 2x in-flight requests being kept in memory

Not totally sure I follow. Are we talking about the request objects? Because 
those are really tiny and I don't see that being very relevant. Besides, the 
maximum number of in-flight queries is currently limited by the number of 
native transport threads, and I'm not sure how a bigger rpc timeout changes 
much here.

Anyway, feel free to check the attached patch to see if we're talking about 
the same thing with different words. But if we aren't, I'm not sure I 
understand the problem you're mentioning.

> Add indication in cassandra.yaml that rpc timeouts going too high will cause 
> memory build up
> 
>
> Key: CASSANDRA-8969
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8969
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: 8969.txt
>
>
> It would be helpful to communicate that setting the rpc timeouts too high may 
> cause memory problems on the server, as it can become overloaded and has to 
> retain the in-flight requests in memory.  I'll get this done but just adding 
> the ticket as a placeholder for memory.





[jira] [Commented] (CASSANDRA-9666) Provide an alternative to DTCS

2016-03-23 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15208149#comment-15208149
 ] 

Sylvain Lebresne commented on CASSANDRA-9666:
-

I hate to be the buzzkill, but I'm kind of -1 on just adding TWCS and calling 
it a day. TWCS and DTCS target the exact same use case, and having both in 
tree just doesn't make sense: it's a maintenance burden on the project and it's 
very confusing to users.

If TWCS is noticeably better than DTCS, then we should just admit it, add 
TWCS and _deprecate_ DTCS. Note that I'm not saying it is; I don't really have 
much experience with either strategy. But *if* TWCS has the same general 
performance as DTCS and the consensus is that it's more intuitive and 
operationally simpler, then it definitively qualifies as better (the code 
complexity of the two approaches could also be a factor to take into 
account).

That said, are we judging DTCS on its current state or on its original one?

> Provide an alternative to DTCS
> --
>
> Key: CASSANDRA-9666
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9666
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
> Fix For: 2.1.x, 2.2.x
>
> Attachments: dtcs-twcs-io.png, dtcs-twcs-load.png
>
>
> DTCS is great for time series data, but it comes with caveats that make it 
> difficult to use in production (typical operator behaviors such as bootstrap, 
> removenode, and repair have MAJOR caveats as they relate to 
> max_sstable_age_days, and hints/read repair break the selection algorithm).
> I'm proposing an alternative, TimeWindowCompactionStrategy, that sacrifices 
> the tiered nature of DTCS in order to address some of DTCS' operational 
> shortcomings. I believe it is necessary to propose an alternative rather than 
> simply adjusting DTCS, because it fundamentally removes the tiered nature in 
> order to remove the parameter max_sstable_age_days - the result is very very 
> different, even if it is heavily inspired by DTCS. 
> Specifically, rather than creating a number of windows of ever-increasing 
> sizes, this strategy allows an operator to choose the window size, compacts 
> with STCS within the first window of that size, and aggressively compacts 
> down to a single sstable once that window is no longer current. The window 
> size is a combination of unit (minutes, hours, days) and size (1, etc.), such 
> that an operator can expect all data in a block of that size to be compacted 
> together (that is, if your unit is hours and size is 6, you will create 
> roughly 4 sstables per day, each one containing roughly 6 hours of data). 
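The window arithmetic described above can be sketched as follows (a hypothetical helper for illustration; `windowStartMillis` is not TWCS's actual code):

```java
// Sketch only: bucket an sstable's maxTimestamp into a fixed-size window.
// With unit = hours (3,600,000 ms) and size = 6, timestamps floor to
// 6-hour boundaries, giving roughly 4 buckets (sstables) per day.
public class WindowSketch {
    public static long windowStartMillis(long timestampMillis, long unitMillis, int size) {
        long window = unitMillis * size;
        return (timestampMillis / window) * window; // floor to the window boundary
    }
}
```

All sstables whose maxTimestamp floors to the same window start are candidates to be compacted together once that window is no longer current.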
> The result addresses a number of the problems with 
> DateTieredCompactionStrategy:
> - At the present time, DTCS’s first window is compacted using an unusual 
> selection criteria, which prefers files with earlier timestamps, but ignores 
> sizes. In TimeWindowCompactionStrategy, the first window data will be 
> compacted with the well tested, fast, reliable STCS. All STCS options can be 
> passed to TimeWindowCompactionStrategy to configure the first window’s 
> compaction behavior.
> - HintedHandoff may put old data in new sstables, but it will have little 
> impact other than slightly reduced efficiency (sstables will cover a wider 
> range, but the old timestamps will not impact sstable selection criteria 
> during compaction)
> - ReadRepair may put old data in new sstables, but it will have little impact 
> other than slightly reduced efficiency (sstables will cover a wider range, 
> but the old timestamps will not impact sstable selection criteria during 
> compaction)
> - Small, old sstables resulting from streams of any kind will be swiftly and 
> aggressively compacted with the other sstables matching their similar 
> maxTimestamp, without causing sstables in neighboring windows to grow in size.
> - The configuration options are explicit and straightforward - the tuning 
> parameters leave little room for error. The window is set in common, easily 
> understandable terms such as “12 hours”, “1 Day”, “30 days”. The 
> minute/hour/day options are granular enough for users keeping data for hours, 
> and users keeping data for years. 
> - There is no explicitly configurable max sstable age, though sstables will 
> naturally stop compacting once new data is no longer written to that window. 
> - Streaming operations can create sstables with old timestamps, and they'll 
> naturally be joined together with sstables in the same time bucket. This is 
> true for bootstrap/repair/sstableloader/removenode. 
> - It remains true that if old data and new data is written into the memtable 
> at the same time, the resulting sstables will be treated as if they were new 
> sstables, however, that no longer negatively impacts 

[jira] [Updated] (CASSANDRA-11408) simple compaction defaults for common scenarios

2016-03-23 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11408:
-
Priority: Minor  (was: Major)

> simple compaction defaults for common scenarios
> ---
>
> Key: CASSANDRA-11408
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11408
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jonathan Shook
>Priority: Minor
>
> As compaction strategies get more flexible over time, some users might prefer 
> to have a simple named profile for their settings.
> {code:title=example, syntax variant|borderStyle=solid}
> alter table foo.bar with compaction = 'timeseries-hourly-for-a-week';
> {code}
> {code:title=example, syntax variant |borderStyle=solid}
> alter table foo.bar with compaction = { 'profile' : 'key-value-balanced-ops' 
> };
> {code}
> These would simply map to sets of well-tested and documented defaults across 
> any of the core compaction strategies.
> This would simplify setting up compaction for well-understood workloads, but 
> still allow for customization where desired.





[jira] [Commented] (CASSANDRA-11403) Serializer/Version mismatch during upgrades to C* 3.0

2016-03-23 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15208076#comment-15208076
 ] 

Sylvain Lebresne commented on CASSANDRA-11403:
--

Which exact version of C* is that? We've fixed bugs related to this already, so 
this is likely a duplicate unless you can reproduce it on the current branches.
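The ticket below describes a check-then-act race: the endpoint's messaging version is read once when the serializer is chosen and again when the outbound connection sizes the payload, and an upgrade can bump it in between. A minimal sketch of the hazard and of the single-read shape that avoids it (hypothetical names, not Cassandra's actual classes):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch only: two reads of a mutable per-endpoint version
// can disagree if the version changes between them during an upgrade.
public class VersionRaceSketch {
    static final int VERSION_22 = 8, VERSION_30 = 10;
    static final AtomicInteger endpointVersion = new AtomicInteger(VERSION_22);

    // Buggy shape: serializer chosen from one read, payload sized from another.
    public static boolean mismatchedReads() {
        int serializerVersion = endpointVersion.get();   // read #1 (serializer choice)
        endpointVersion.set(VERSION_30);                 // upgrade lands in between
        int connectionVersion = endpointVersion.get();   // read #2 (payload sizing)
        return serializerVersion != connectionVersion;   // 3.0 sizing of a 2.x message
    }

    // Fixed shape: read the version once and thread it through both uses.
    public static boolean singleRead() {
        int version = endpointVersion.get();
        int serializerVersion = version;
        int connectionVersion = version;
        return serializerVersion != connectionVersion;   // always consistent
    }
}
```

The fix direction this suggests is passing one version value through message creation and connection selection rather than looking it up independently in each place.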

> Serializer/Version mismatch during upgrades to C* 3.0
> -
>
> Key: CASSANDRA-11403
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11403
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Anthony Cozzie
>
> The problem line seems to be:
> {code}
> MessageOut<ReadCommand> message = 
> readCommand.createMessage(MessagingService.instance().getVersion(endpoint));
> {code}
> SinglePartitionReadCommand then picks the serializer based on the version:
> {code}
> return new MessageOut<>(MessagingService.Verb.READ, this, version < 
> MessagingService.VERSION_30 ? legacyReadCommandSerializer : serializer);
> {code}
> However, OutboundTcpConnectionPool will test the payload size vs the version 
> from its smallMessages connection:
> {code}
> return msg.payloadSize(smallMessages.getTargetVersion()) > 
> LARGE_MESSAGE_THRESHOLD
> {code}
> Which is set when the connection/pool is created:
> {code}
> targetVersion = MessagingService.instance().getVersion(pool.endPoint());
> {code}
> During an upgrade, this state can change between these two calls, leading to 
> the 3.0 serializer being used on 2.x packets and the following stacktrace:
> ERROR [OptionalTasks:1] 2016-03-07 19:53:06,445  CassandraDaemon.java:195 - 
> Exception in thread Thread[OptionalTasks:1,5,main]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:632)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:536)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:609)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:758)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:701) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:684)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:110)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:85)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor$NeverSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:214)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1699)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1654) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1601) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1520) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:918)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:251)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:212)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:77)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:237) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> 

[jira] [Commented] (CASSANDRA-9348) Nodetool move output should be more user friendly if bad token is supplied

2016-03-22 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206087#comment-15206087
 ] 

Sylvain Lebresne commented on CASSANDRA-9348:
-

bq. If everyone agrees that this is acceptable.

It would be acceptable, but the best fix here is probably to just have the 
{{validate}} method call {{fromString}}, since the error message in the latter 
is appropriate and there is no point in duplicating that code.
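A hedged sketch of that suggestion (hypothetical, simplified factory; not Cassandra's actual TokenFactory): validate just delegates to fromString, so both paths share one range check and one user-friendly message.

```java
// Illustrative Murmur3-style token factory sketch. Names and messages are
// assumptions for this example, not the project's real code.
public class TokenFactorySketch {
    static final long MIN_TOKEN = Long.MIN_VALUE + 1;

    public static long fromString(String s) {
        long v;
        try {
            v = Long.parseLong(s); // overflows (e.g. -9223372036854775809) throw here
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException(
                "Token " + s + " is outside of the acceptable range for this partitioner");
        }
        if (v < MIN_TOKEN)
            throw new IllegalArgumentException(
                "Token " + s + " is outside of the acceptable range for this partitioner");
        return v;
    }

    // validate reuses fromString instead of duplicating the parsing logic
    public static boolean validate(String s) {
        try { fromString(s); return true; }
        catch (IllegalArgumentException e) { return false; }
    }
}
```

With this shape, the out-of-range token from the ticket's stack trace produces the friendly range message through either entry point, rather than a raw IOException.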

> Nodetool move output should be more user friendly if bad token is supplied
> --
>
> Key: CASSANDRA-9348
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9348
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sequoyha pelletier
>Priority: Trivial
>  Labels: lhf
>
> If you put a token into nodetool move that is out of range for the 
> partitioner you get the following error:
> {noformat}
> [architect@md03-gcsarch-lapp33 11:01:06 ]$ nodetool -h 10.11.48.229 -u 
> cassandra -pw cassandra move \\-9223372036854775809 
> Exception in thread "main" java.io.IOException: For input string: 
> "-9223372036854775809" 
> at org.apache.cassandra.service.StorageService.move(StorageService.java:3104) 
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:606) 
> at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75) 
> at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:606) 
> at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279) 
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>  
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>  
> at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237) 
> at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138) 
> at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252) 
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
>  
> at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801) 
> at 
> com.sun.jmx.remote.security.MBeanServerAccessController.invoke(MBeanServerAccessController.java:468)
>  
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
>  
> at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
>  
> at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
>  
> at java.security.AccessController.doPrivileged(Native Method) 
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1427)
>  
> at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
>  
> at sun.reflect.GeneratedMethodAccessor52.invoke(Unknown Source) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:606) 
> at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322) 
> at sun.rmi.transport.Transport$1.run(Transport.java:177) 
> at sun.rmi.transport.Transport$1.run(Transport.java:174) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at sun.rmi.transport.Transport.serviceCall(Transport.java:173) 
> at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556) 
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
>  
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  
> at java.lang.Thread.run(Thread.java:745) 
> {noformat}
> This ticket is just requesting that we catch the exception and output 
> something along the lines of "Token supplied is outside of the acceptable 
> range" for those who are still on the Cassandra learning curve.





[jira] [Updated] (CASSANDRA-11392) Add auto import java.util for UDF code block

2016-03-21 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11392:
-
Priority: Minor  (was: Major)

> Add auto import java.util for UDF code block
> 
>
> Key: CASSANDRA-11392
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11392
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
> Environment: C* 3.4
>Reporter: DOAN DuyHai
>Priority: Minor
>
> Right now, when creating Java source code for a UDF, since we cannot define 
> imports, we need to use fully qualified class names, e.g.:
> {noformat}
> CREATE FUNCTION toSet(li list<text>)
> CALLED ON NULL INPUT
> RETURNS set<text>
> LANGUAGE java
> AS $$
> java.util.Set<String> set = new java.util.HashSet<String>();
> for(String txt: li) {
> set.add(txt);
> }
> return set;
> $$;
> {noformat}
> Classes from the {{java.util}} package are so commonly used that it would make 
> developers' lives easier to automatically import {{java.util.*}} in the 
> {{JavaUDF}} base class, so that developers don't need to use FQCNs for common 
> classes.
>  The only drawback I can see is the risk of class name clashes, but since:
> 1. it is not allowed to create new classes
> 2. the classes that can be used in UDFs are restricted
>  I don't see serious class name clash issues either.
> [~snazy] WDYT ?
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10649) Improve field-checking and error reporting in cassandra.yaml

2016-03-20 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199652#comment-15199652
 ] 

Sylvain Lebresne commented on CASSANDRA-10649:
--

Dealing with an empty file better is certainly a nice addition, though checking 
the line of the stack in the description on 2.1.11 shows that this is related 
to the client encryption configuration, and indeed, if you have this in the 
yaml:
{noformat}
client_encryption_options:
#enabled: false
## If enabled and optional is set to true, encrypted and unencrypted connections are handled.
#optional: false
#keystore: conf/.keystore
#keystore_password: cassandra
{noformat}
that is, you declare {{client_encryption_options}} but everything else is 
commented out, you get a server side NPE with nothing returned to the client 
(it's in {{NativeTransportService}} on trunk but this still reproduces). We 
should try to fix that (either by complaining or by assuming {{enabled: false}} 
if it's not there).
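The "assume {{enabled: false}}" approach can be sketched like this (a minimal Python illustration, not Cassandra's actual code; names are made up): when a yaml section is declared but everything under it is commented out, parsers typically yield null/None for the section, which is exactly the value that later blows up.

```python
# Hypothetical sketch: default the client encryption section instead of NPE-ing.
DEFAULTS = {"enabled": False, "optional": False}

def apply_client_encryption_defaults(options):
    """Return client_encryption_options with missing keys defaulted."""
    if options is None:           # section declared but everything commented out
        options = {}
    merged = dict(DEFAULTS)
    merged.update(options)        # explicit settings win over the defaults
    return merged

print(apply_client_encryption_defaults(None))
# {'enabled': False, 'optional': False}
```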

Another nit that the comments above make me think of regarding yaml error 
reporting is that when you add an unknown property, we exit with a stack trace. We 
devs read stack traces as second nature so it's fine, but some of our users may 
not even be that familiar with Java, and getting a full stack trace when the only 
problem is that you've typoed a property in the yaml is scary and not super user 
friendly. In general, I think that if we get a {{ConfigurationException}} at 
startup, we should probably just print the message of the error to stderr and 
exit, rather than letting the exception escape. Would be cool if we could use 
this ticket for that too.
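The proposed behaviour amounts to this (a Python-flavoured sketch, not the actual Java startup path; the class name merely stands in for {{ConfigurationException}}):

```python
import sys

class ConfigurationError(Exception):
    """Stands in for Cassandra's ConfigurationException in this sketch."""

def start(load_config):
    """On a configuration error, print only the message, not a stack trace."""
    try:
        return load_config()
    except ConfigurationError as e:
        print(f"Invalid configuration: {e}", file=sys.stderr)
        sys.exit(1)
```

A typoed property would then produce a single "Invalid configuration: ..." line instead of a full stack trace.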


> Improve field-checking and error reporting in cassandra.yaml
> 
>
> Key: CASSANDRA-10649
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10649
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
> Environment: Linux: Fedora-16 64 bit
>Reporter: sandeep thakur
>Assignee: Benjamin Lerer
>Priority: Minor
> Fix For: 2.2.6, 3.0.5, 3.5
>
> Attachments: 10649-2.2.txt, 10649-3.0.txt, cassandra.yaml
>
>
> I am trying to set up a Cassandra single node cluster. I've downloaded the 
> build below:
> apache-cassandra-2.1.11-bin.tar.gz
> I've upgraded Java to 1.8 as well, as earlier it was throwing errors related 
> to Java version.
> {code}
> [root@localhost cassandra]# java -version
> java version "1.8.0_60"
> Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
> Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
> {code}
> I've also verified the cassandra.yaml file with "http://www.yamllint.com/" as 
> well. But while starting cassandra, I am getting a vague exception as below:
> {code}
> INFO  15:52:11 Compacting 
> [SSTableReader(path='/home/sandeep/bck_up/data/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/system-local-ka-18-Data.db'),
>  
> SSTableReader(path='/home/sandeep/bck_up/data/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/system-local-ka-17-Data.db'),
>  
> SSTableReader(path='/home/sandeep/bck_up/data/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/system-local-ka-20-Data.db'),
>  
> SSTableReader(path='/home/sandeep/bck_up/data/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/system-local-ka-19-Data.db')]
> INFO  15:52:11 Node localhost/127.0.0.1 state jump to normal
> INFO  15:52:11 Netty using native Epoll event loop
> ERROR 15:52:11 Exception encountered during startup
> java.lang.NullPointerException: null
> at org.apache.cassandra.transport.Server.run(Server.java:171) 
> ~[apache-cassandra-2.1.11.jar:2.1.11]
> at org.apache.cassandra.transport.Server.start(Server.java:117) 
> ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.service.CassandraDaemon.start(CassandraDaemon.java:492) 
> [apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:575)
>  [apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:651) 
> [apache-cassandra-2.1.11.jar:2.1.11]
> java.lang.NullPointerException
> at org.apache.cassandra.transport.Server.run(Server.java:171)
> at org.apache.cassandra.transport.Server.start(Server.java:117)
> at 
> org.apache.cassandra.service.CassandraDaemon.start(CassandraDaemon.java:492)
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:575)
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:651)
> Exception encountered during startup: null
> INFO  15:52:11 Announcing shutdown
> INFO  15:52:11 Node localhost/127.0.0.1 state jump 

[jira] [Updated] (CASSANDRA-10809) Create a -D option to prevent gossip startup

2016-03-20 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10809:
-
   Resolution: Fixed
Fix Version/s: (was: 2.1.x)
   3.5
   3.0.5
   2.2.6
   Status: Resolved  (was: Patch Available)

Alright then, committed, thanks.

> Create a -D option to prevent gossip startup
> 
>
> Key: CASSANDRA-10809
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10809
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Distributed Metadata
>Reporter: Brandon Williams
>Assignee: Sylvain Lebresne
> Fix For: 2.2.6, 3.0.5, 3.5
>
> Attachments: 10809.txt
>
>
> In CASSANDRA-6961 we changed how join_ring=false works, to great benefit.  
> However, sometimes you need a node to come up but not interact with other 
> nodes whatsoever - for example, if you have a schema problem, it will still 
> pull the schema from another node because they still gossip even though we're 
> in a dead state.
> We can add a way to restore the previous behavior by simply adding something 
> like -Dcassandra.start_gossip=false.
> In the meantime we can work around it by setting listen_address to localhost, 
> but that's kind of a pain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11368) List of UDT can't be updated properly when using USING TIMESTAMP

2016-03-20 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199467#comment-15199467
 ] 

Sylvain Lebresne commented on CASSANDRA-11368:
--

Yes, I can agree that behavior is surprising, and it means list inserts are not 
truly idempotent. This is, however, a downside of the design of lists and, to be 
perfectly honest, I'm not sure there is an easy fix for it (though if someone 
has an idea, please share). So I'm not saying it wouldn't be great if this was 
made idempotent, but I'd rather be upfront that unless someone has a very clever 
idea, this is likely to stay as a known limitation of lists for the foreseeable 
future. FYI, this is not the only gotcha of lists, and we generally advise 
preferring sets over lists unless you absolutely need the ordering. And 
even then, make sure a frozen list (which doesn't have this problem) isn't good 
enough for you.

I'll note that the issue is due to both statements having the exact 
same timestamp. If you used a bigger timestamp for the 2nd insert, for 
instance, this would work as expected. The use of a UDT also has no impact on 
this. So I'm updating the title to reflect both of those.

> List of UDT can't be updated properly when using USING TIMESTAMP
> 
>
> Key: CASSANDRA-11368
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11368
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Thanh
>
> List of UDT can't be updated properly when using USING TIMESTAMP
> Observe:
> {code}
> cqlsh:t360> CREATE TYPE fullname ( 
> ... fname text, 
> ... lname text 
> ... );
> cqlsh:t360> CREATE TABLE users ( 
> ... id text PRIMARY KEY, 
> ... names list<frozen<fullname>>, 
> ... phone text 
> ... ); 
> cqlsh:t360> UPDATE users USING TIMESTAMP 1458019725701 SET names = [{ fname: 
> 'fname1', lname: 'lname1'},{ fname: 'fname2', lname: 'lname2'},{ fname: 
> 'fname3', lname: 'lname3'}] WHERE id='a'; 
> cqlsh:t360> select * from users;
> id | names | phone 
> +--+---
>  
> a | [{lname: 'lname1', fname: 'fname1'}, {lname: 'lname2', fname: 'fname2'}, 
> {lname: 'lname3', fname: 'fname3'}] | null
> (1 rows) 
> cqlsh:t360> UPDATE users USING TIMESTAMP 1458019725701 SET names = [{ fname: 
> 'fname1', lname: 'lname1'},{ fname: 'fname2', lname: 'lname2'},{ fname: 
> 'fname3', lname: 'lname3'}] WHERE id='a'; 
> cqlsh:t360> select * from users;
> id | names | phone 
> +--+---
>  
> a | [{lname: 'lname1', fname: 'fname1'}, {lname: 'lname2', fname: 'fname2'}, 
> {lname: 'lname3', fname: 'fname3'}, {lname: 'lname1', fname: 'fname1'}, 
> {lname: 'lname2', fname: 'fname2'}, {lname: 'lname3', fname: 'fname3'}] | null
> (1 rows)
> {code}
> => the list doesn't get replaced, it gets appended, which is not the 
> expected/desired result



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8969) Add indication in cassandra.yaml that rpc timeouts going too high will cause memory build up

2016-03-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8969:

Attachment: 8969.txt

Attaching a patch that describes in a bit more detail what the rpc timeouts 
are useful for (and hence why you don't want to set them too high).
When you mention memory building up, I assume it's a reference to reads going 
on for too long and reading tons of stuff? In which case I'll note that before 
CASSANDRA-7392 the rpc timeouts weren't really helping with that, and so the 
attached patch is really only meant for trunk (it's partly false for earlier 
releases).
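The memory build-up is essentially Little's law: the number of in-flight requests a node may have to retain is roughly the request rate times the timeout (illustrative figures only, not measurements):

```python
def max_in_flight(requests_per_sec, timeout_sec):
    """Little's law: worst-case number of requests retained in memory at once."""
    return requests_per_sec * timeout_sec

# Illustrative only: at 5000 req/s, raising a timeout from 2s to 60s grows
# the worst-case backlog from 10,000 to 300,000 retained requests.
print(max_in_flight(5000, 2), max_in_flight(5000, 60))  # 10000 300000
```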

> Add indication in cassandra.yaml that rpc timeouts going too high will cause 
> memory build up
> 
>
> Key: CASSANDRA-8969
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8969
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: 8969.txt
>
>
> It would be helpful to communicate that setting the rpc timeouts too high may 
> cause memory problems on the server as it can become overloaded and has to 
> retain the in flight requests in memory.  I'll get this done but just adding 
> the ticket as a placeholder for memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11255) COPY TO should have higher double precision

2016-03-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11255:
-
Reviewer: Sylvain Lebresne

> COPY TO should have higher double precision
> ---
>
> Key: CASSANDRA-11255
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11255
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
>  Labels: doc-impacting, lhf
> Fix For: 3.x
>
>
> At the moment COPY TO uses the same float precision as cqlsh, which by 
> default is 5 but it can be changed in cqlshrc. However, typically people want 
> to preserve precision when exporting data and so this default is too low for 
> COPY TO.
> I suggest adding a new COPY TO option to specify floating point precision 
> with a much higher default value, for example 12.
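The precision loss is easy to demonstrate (a Python illustration using %g-style significant-digit formatting; cqlsh's exact formatter may differ):

```python
value = 3.141592653589793

low = f"{value:.5g}"     # 5 significant digits, the cqlsh default
high = f"{value:.12g}"   # 12 significant digits, the proposed default

print(low, high)             # 3.1416 3.14159265359
assert float(low) != value   # round-tripping at precision 5 loses data
```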



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11255) COPY TO should have higher double precision

2016-03-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11255:
-
   Resolution: Fixed
Fix Version/s: (was: 3.x)
   3.6
   Status: Resolved  (was: Patch Available)

Thanks Stefania, committed.

> COPY TO should have higher double precision
> ---
>
> Key: CASSANDRA-11255
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11255
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
>  Labels: doc-impacting, lhf
> Fix For: 3.6
>
>
> At the moment COPY TO uses the same float precision as cqlsh, which by 
> default is 5 but it can be changed in cqlshrc. However, typically people want 
> to preserve precision when exporting data and so this default is too low for 
> COPY TO.
> I suggest adding a new COPY TO option to specify floating point precision 
> with a much higher default value, for example 12.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11226) nodetool tablestats' keyspace-level metrics are wrong/misleading

2016-03-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11226:
-
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 2.2.x)
   (was: 3.x)
   3.6
   Status: Resolved  (was: Patch Available)

Committed, thanks.

> nodetool tablestats' keyspace-level metrics are wrong/misleading
> 
>
> Key: CASSANDRA-11226
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11226
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Sylvain Lebresne
>Priority: Minor
>  Labels: lhf
> Fix For: 3.6
>
>
> In the nodetool tablestats output (formerly cfstats), we display "keyspace" 
> level metrics before the table-level metrics:
> {noformat}
> Keyspace: testks
> Read Count: 14772528
> Read Latency: 0.14456651623879135 ms.
> Write Count: 4761283
> Write Latency: 0.062120404521218336 ms.
> Pending Flushes: 0
> Table: processes
> SSTable count: 7
> Space used (live): 496.76 MB
> Space used (total): 496.76 MB
> Space used by snapshots (total): 0 bytes
> Off heap memory used (total): 285.76 KB
> SSTable Compression Ratio: 0.2318241570710227
> Number of keys (estimate): 3027
> Memtable cell count: 2140
> Memtable data size: 1.66 MB
> Memtable off heap memory used: 0 bytes
> Memtable switch count: 967
> Local read count: 14772528
> Local read latency: 0.159 ms
> Local write count: 4761283
> Local write latency: 0.068 ms
> {noformat}
> However, the keyspace-level metrics are misleading, at best.  They are 
> aggregate metrics for every table in the keyspace _that is included in the 
> command line filters_.  So, if you run {{tablestats}} for a single table, the 
> keyspace-level stats will only reflect that table's stats.
> I see two possible fixes:
> # If the command line options don't include the entire keyspace, skip the 
> keyspace-level stats
> # Ignore the command line options, and always make the keyspace-level stats 
> an aggregate of all tables in the keyspace
> My only concern with option 2 is that performance may suffer a bit on 
> keyspaces with many tables.  However, this is a command line tool, so as long 
> as the response time is reasonable, I don't think it's a big deal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11142) Confusing error message on schema updates when nodes are down

2016-03-19 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199798#comment-15199798
 ] 

Sylvain Lebresne commented on CASSANDRA-11142:
--

I do think in this case we should have the warning, but not the 
{{OperationTimedOut}} line, because it's not the schema query that timed out; 
it's (I assume) the schema agreement check done by the underlying python driver 
(plus, the warning is really enough).

Now, I had a quick look at the code, and currently cqlsh has no good way to 
know that the timeout for the operation it just executed was not due to the 
query itself (also, cqlsh ends up doing its own schema agreement check on 
every timeout, which is kind of inefficient/ugly when the operation is not 
schema related). Ideally, since cqlsh knows when it executes a schema statement, 
it could ask the driver to not do its internal agreement check and 
do the check itself, thus being able to know what to print or not print.

I'm not familiar enough with cqlsh and the python driver to do that easily, 
though, so if someone more familiar wants to have a shot at it, be my guest.

> Confusing error message on schema updates when nodes are down
> -
>
> Key: CASSANDRA-11142
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11142
> Project: Cassandra
>  Issue Type: Bug
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
>
> Repro steps are as follows (this was tested on Windows and is a consistent 
> repro):
> # Start a two node cluster.
> # Ensure that "nodetool status" shows both nodes as UN on both nodes.
> # Stop Node2.
> # Ensure that "nodetool status" shows Node2 as DN.
> # Start cqlsh on Node1.
> # Create a table.
> # cqlsh times out with the below message (coming from .py):
> Warning: schema version mismatch detected, which might be caused by DOWN 
> nodes; if this is not the case, check the schema versions of your nodes in 
> system.local and system.peers.
> OperationTimedOut: errors={}, last_host=10.1.0.10
> # Do a select * on the table that just timed out. It works fine.
> It just seems odd that there are no errors, but the table gets created fine. 
> We should either fix the timeout exception with a real error or not throw 
> timeout. Not sure what the best approach is.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10900) CQL3 spec should clarify that now() function is calculated on the coordinator node

2016-03-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10900:
-
Status: Ready to Commit  (was: Patch Available)

> CQL3 spec should clarify that now() function is calculated on the coordinator 
> node
> --
>
> Key: CASSANDRA-10900
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10900
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Wei Deng
>Assignee: Benjamin Lerer
>Priority: Trivial
>  Labels: documentation
> Fix For: 2.2.6, 3.0.5, 3.5
>
> Attachments: 10900-2.2.txt
>
>
> DataStax CQL v3.3 document on now() function here 
> (http://docs.datastax.com/en/cql/3.3/cql/cql_reference/timeuuid_functions_r.html)
>  states that it is generated on the coordinator, but this is not reflected on 
> the apache cassandra CQL3 spec. Since it is a minor addition, there's no 
> urgency to add it and the commit can stay in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11368) Lists inserts are not truly idempotent

2016-03-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11368:
-
Summary: Lists inserts are not truly idempotent  (was: List of UDT can't be 
updated properly when using USING TIMESTAMP)

> Lists inserts are not truly idempotent
> --
>
> Key: CASSANDRA-11368
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11368
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Thanh
>
> List of UDT can't be updated properly when using USING TIMESTAMP
> Observe:
> {code}
> cqlsh:t360> CREATE TYPE fullname ( 
> ... fname text, 
> ... lname text 
> ... );
> cqlsh:t360> CREATE TABLE users ( 
> ... id text PRIMARY KEY, 
> ... names list<frozen<fullname>>, 
> ... phone text 
> ... ); 
> cqlsh:t360> UPDATE users USING TIMESTAMP 1458019725701 SET names = [{ fname: 
> 'fname1', lname: 'lname1'},{ fname: 'fname2', lname: 'lname2'},{ fname: 
> 'fname3', lname: 'lname3'}] WHERE id='a'; 
> cqlsh:t360> select * from users;
> id | names | phone 
> +--+---
>  
> a | [{lname: 'lname1', fname: 'fname1'}, {lname: 'lname2', fname: 'fname2'}, 
> {lname: 'lname3', fname: 'fname3'}] | null
> (1 rows) 
> cqlsh:t360> UPDATE users USING TIMESTAMP 1458019725701 SET names = [{ fname: 
> 'fname1', lname: 'lname1'},{ fname: 'fname2', lname: 'lname2'},{ fname: 
> 'fname3', lname: 'lname3'}] WHERE id='a'; 
> cqlsh:t360> select * from users;
> id | names | phone 
> +--+---
>  
> a | [{lname: 'lname1', fname: 'fname1'}, {lname: 'lname2', fname: 'fname2'}, 
> {lname: 'lname3', fname: 'fname3'}, {lname: 'lname1', fname: 'fname1'}, 
> {lname: 'lname2', fname: 'fname2'}, {lname: 'lname3', fname: 'fname3'}] | null
> (1 rows)
> {code}
> => the list doesn't get replaced, it gets appended, which is not the 
> expected/desired result



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11371) Error on startup: keyspace not found in the schema definitions keyspace

2016-03-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11371:
-
Assignee: Aleksey Yeschenko

> Error on startup: keyspace not found in the schema definitions keyspace
> ---
>
> Key: CASSANDRA-11371
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11371
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Ubuntu
>Reporter: Sergey Kirillov
>Assignee: Aleksey Yeschenko
>Priority: Critical
>
> My entire cluster is down now and all nodes are failing to start with the 
> following error:
> {quote}
> ERROR [main] 2016-03-17 15:26:37,755 CassandraDaemon.java:692 - Exception 
> encountered during startup
> java.lang.RuntimeException: sempi_kitkat: not found in the schema definitions 
> keyspace.
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:947)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:938)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:901)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:878)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:866)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
> ~[apache-cassandra-3.0.4.jar:3.0.4]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
> ~[apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
> [apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551)
>  [apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679) 
> [apache-cassandra-3.0.4.jar:3.0.4]
> {quote}
> It looks like it is somehow related to CASSANDRA-10964 but I'm using default 
> memtable_allocation_type now.
> Any advice how to fix this and restart my cluster will be appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-10752) CQL.textile wasn't updated for CASSANDRA-6839

2016-03-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reassigned CASSANDRA-10752:


Assignee: Sylvain Lebresne  (was: Tyler Hobbs)

> CQL.textile wasn't updated for CASSANDRA-6839
> -
>
> Key: CASSANDRA-10752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10752
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation and Website
>Reporter: Jeremiah Jordan
>Assignee: Sylvain Lebresne
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> CQL.textile wasn't updated after CASSANDRA-6839 added inequalities for LWT's.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10900) CQL3 spec should clarify that now() function is calculated on the coordinator node

2016-03-19 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199176#comment-15199176
 ] 

Sylvain Lebresne commented on CASSANDRA-10900:
--

+1

> CQL3 spec should clarify that now() function is calculated on the coordinator 
> node
> --
>
> Key: CASSANDRA-10900
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10900
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Wei Deng
>Assignee: Benjamin Lerer
>Priority: Trivial
>  Labels: documentation
> Fix For: 2.2.6, 3.0.5, 3.5
>
> Attachments: 10900-2.2.txt
>
>
> DataStax CQL v3.3 document on now() function here 
> (http://docs.datastax.com/en/cql/3.3/cql/cql_reference/timeuuid_functions_r.html)
>  states that it is generated on the coordinator, but this is not reflected on 
> the apache cassandra CQL3 spec. Since it is a minor addition, there's no 
> urgency to add it and the commit can stay in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11255) COPY TO should have higher double precision

2016-03-19 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199487#comment-15199487
 ] 

Sylvain Lebresne commented on CASSANDRA-11255:
--

Looks mostly good to me. Maybe my last nit, on the altar of consistency, is that 
the COPY {{float_precision}} defaults to the cqlsh one but {{double_precision}} 
doesn't, and it's not entirely clear from the {{float_precision}} COPY option 
documentation that this is the case. I'd rather have none of these options 
linked between COPY and cqlsh, for consistency's sake. Outside of that, maybe 
it's worth a re-run of the dtests in case you haven't run them again, but this 
looks good otherwise.

> COPY TO should have higher double precision
> ---
>
> Key: CASSANDRA-11255
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11255
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
>  Labels: doc-impacting, lhf
> Fix For: 3.x
>
>
> At the moment COPY TO uses the same float precision as cqlsh, which by 
> default is 5 but it can be changed in cqlshrc. However, typically people want 
> to preserve precision when exporting data and so this default is too low for 
> COPY TO.
> I suggest adding a new COPY TO option to specify floating point precision 
> with a much higher default value, for example 12.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11226) nodetool tablestats' keyspace-level metrics are wrong/misleading

2016-03-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11226:
-
Status: Patch Available  (was: Open)

Patch is [here|https://github.com/pcmanus/cassandra/commits/11226]. I went with 
option 2: we don't really support humongous numbers of tables in the first 
place and, as you said, it's not a performance-sensitive tool in the first place. 
Plus, while this could maybe be improved, the current implementation 
iterates over all tables of all keyspaces no matter what ends up being 
displayed, so it's not like this patch will have any performance impact on the 
status quo.

I took the liberty of doing a few cleanups while at it, btw. And the patch is 
against trunk: I'm not convinced it's worth backporting given that it's been the 
way it is forever, and changing the numbers in a minor/bug fix 
release doesn't feel nice.
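The shape of option 2 is simple (hypothetical names, Python for brevity; the real patch is Java in nodetool): the keyspace-level aggregate always covers all tables in the keyspace, while the command line filter only limits which table sections are displayed.

```python
def keyspace_stats(tables):
    """Aggregate keyspace-level metrics over *all* tables in the keyspace."""
    return {
        "read_count": sum(t["read_count"] for t in tables.values()),
        "write_count": sum(t["write_count"] for t in tables.values()),
    }

def tablestats(tables, table_filter=None):
    """Keyspace aggregate ignores the filter; the filter only limits display."""
    shown = {n: t for n, t in tables.items()
             if table_filter is None or n in table_filter}
    return keyspace_stats(tables), shown

tables = {"t1": {"read_count": 10, "write_count": 1},
          "t2": {"read_count": 5, "write_count": 2}}
agg, shown = tablestats(tables, table_filter={"t1"})
print(agg)            # {'read_count': 15, 'write_count': 3}
print(sorted(shown))  # ['t1']
```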

> nodetool tablestats' keyspace-level metrics are wrong/misleading
> 
>
> Key: CASSANDRA-11226
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11226
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Sylvain Lebresne
>Priority: Minor
>  Labels: lhf
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> In the nodetool tablestats output (formerly cfstats), we display "keyspace" 
> level metrics before the table-level metrics:
> {noformat}
> Keyspace: testks
> Read Count: 14772528
> Read Latency: 0.14456651623879135 ms.
> Write Count: 4761283
> Write Latency: 0.062120404521218336 ms.
> Pending Flushes: 0
> Table: processes
> SSTable count: 7
> Space used (live): 496.76 MB
> Space used (total): 496.76 MB
> Space used by snapshots (total): 0 bytes
> Off heap memory used (total): 285.76 KB
> SSTable Compression Ratio: 0.2318241570710227
> Number of keys (estimate): 3027
> Memtable cell count: 2140
> Memtable data size: 1.66 MB
> Memtable off heap memory used: 0 bytes
> Memtable switch count: 967
> Local read count: 14772528
> Local read latency: 0.159 ms
> Local write count: 4761283
> Local write latency: 0.068 ms
> {noformat}
> However, the keyspace-level metrics are misleading, at best.  They are 
> aggregate metrics for every table in the keyspace _that is included in the 
> command line filters_.  So, if you run {{tablestats}} for a single table, the 
> keyspace-level stats will only reflect that table's stats.
> I see two possible fixes:
> # If the command line options don't include the entire keyspace, skip the 
> keyspace-level stats
> # Ignore the command line options, and always make the keyspace-level stats 
> an aggregate of all tables in the keyspace
> My only concern with option 2 is that performance may suffer a bit on 
> keyspaces with many tables.  However, this is a command line tool, so as long 
> as the response time is reasonable, I don't think it's a big deal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11374) LEAK DETECTED during repair

2016-03-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11374:
-
Assignee: Marcus Eriksson

> LEAK DETECTED during repair
> ---
>
> Key: CASSANDRA-11374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11374
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jean-Francois Gosselin
>Assignee: Marcus Eriksson
>
> When running a range repair we are seeing the following LEAK DETECTED errors:
> {noformat}
> ERROR [Reference-Reaper:1] 2016-03-17 06:58:52,261 Ref.java:179 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@5ee90b43) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@367168611:[[OffHeapBitSet]]
>  was not released before the reference was garbage collected
> ERROR [Reference-Reaper:1] 2016-03-17 06:58:52,262 Ref.java:179 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@4ea9d4a7) to class 
> org.apache.cassandra.io.util.SafeMemory$MemoryTidy@1875396681:Memory@[7f34b905fd10..7f34b9060b7a)
>  was not released before the reference was garbage collected
> ERROR [Reference-Reaper:1] 2016-03-17 06:58:52,262 Ref.java:179 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@27a6b614) to class 
> org.apache.cassandra.io.util.SafeMemory$MemoryTidy@838594402:Memory@[7f34bae11ce0..7f34bae11d84)
>  was not released before the reference was garbage collected
> ERROR [Reference-Reaper:1] 2016-03-17 06:58:52,263 Ref.java:179 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@64e7b566) to class 
> org.apache.cassandra.io.util.SafeMemory$MemoryTidy@674656075:Memory@[7f342deab4e0..7f342deb7ce0)
>  was not released before the reference was garbage collected
> {noformat}





[jira] [Updated] (CASSANDRA-5645) Display PK values along the header when using EXPAND in cqlsh

2016-03-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-5645:

Labels: lhf  (was: )

> Display PK values along the header when using EXPAND in cqlsh
> -
>
> Key: CASSANDRA-5645
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5645
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Michał Michalski
>Assignee: Michał Michalski
>Priority: Minor
>  Labels: lhf
>
> Follow-up to CASSANDRA-5597 proposed by [~jjordan].
> Currently cqlsh run in vertical mode prints a header like this:
> {noformat}cqlsh> EXPAND on;
> Now printing expanded output
> cqlsh> SELECT * FROM system.schema_columnfamilies limit 1;
> @ Row 1
> -+-
>  keyspace_name   | system_auth
>  columnfamily_name   | users
>  bloom_filter_fp_chance  | 0.01
>  caching | KEYS_ONLY
>  column_aliases  | []
> (...){noformat}
> The idea is to make it print header this way:
> {noformat}cqlsh> EXPAND on;
> Now printing expanded output
> cqlsh> SELECT * FROM system.schema_columnfamilies limit 1;
> @ Row 1: system_auth, users
> -+-
>  keyspace_name   | system_auth
>  columnfamily_name   | users
>  bloom_filter_fp_chance  | 0.01
>  caching | KEYS_ONLY
>  column_aliases  | []
> (...){noformat}
> [~jjordan], please verify that this is what you requested.
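The proposed header change above is simple to sketch: append the primary key values after the row number. A minimal illustration (a hypothetical `expanded_row_header` helper, not actual cqlsh code):

```python
def expanded_row_header(row_number, pk_values):
    """Build the expanded-output row header, appending primary key
    values after the row number, as proposed above."""
    header = "@ Row %d" % row_number
    if pk_values:
        header += ": " + ", ".join(str(v) for v in pk_values)
    return header

# prints: @ Row 1: system_auth, users
print(expanded_row_header(1, ["system_auth", "users"]))
```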





[jira] [Assigned] (CASSANDRA-11226) nodetool tablestats' keyspace-level metrics are wrong/misleading

2016-03-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reassigned CASSANDRA-11226:


Assignee: Sylvain Lebresne

> nodetool tablestats' keyspace-level metrics are wrong/misleading
> 
>
> Key: CASSANDRA-11226
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11226
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Sylvain Lebresne
>Priority: Minor
>  Labels: lhf
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> In the nodetool tablestats output (formerly cfstats), we display "keyspace" 
> level metrics before the table-level metrics:
> {noformat}
> Keyspace: testks
> Read Count: 14772528
> Read Latency: 0.14456651623879135 ms.
> Write Count: 4761283
> Write Latency: 0.062120404521218336 ms.
> Pending Flushes: 0
> Table: processes
> SSTable count: 7
> Space used (live): 496.76 MB
> Space used (total): 496.76 MB
> Space used by snapshots (total): 0 bytes
> Off heap memory used (total): 285.76 KB
> SSTable Compression Ratio: 0.2318241570710227
> Number of keys (estimate): 3027
> Memtable cell count: 2140
> Memtable data size: 1.66 MB
> Memtable off heap memory used: 0 bytes
> Memtable switch count: 967
> Local read count: 14772528
> Local read latency: 0.159 ms
> Local write count: 4761283
> Local write latency: 0.068 ms
> {noformat}
> However, the keyspace-level metrics are misleading, at best.  They are 
> aggregate metrics for every table in the keyspace _that is included in the 
> command line filters_.  So, if you run {{tablestats}} for a single table, the 
> keyspace-level stats will only reflect that table's stats.
> I see two possible fixes:
> # If the command line options don't include the entire keyspace, skip the 
> keyspace-level stats
> # Ignore the command line options, and always make the keyspace-level stats 
> an aggregate of all tables in the keyspace
> My only concern with option 2 is that performance may suffer a bit on 
> keyspaces with many tables.  However, this is a command line tool, so as long 
> as the response time is reasonable, I don't think it's a big deal.
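Option 2 amounts to always aggregating over every table in the keyspace rather than only the filtered ones. A minimal sketch of that behavior (hypothetical data model, not the actual nodetool code):

```python
def keyspace_read_count(all_tables):
    """Aggregate a keyspace-level metric the way option 2 proposes:
    ignore any command-line table filter and always sum over every
    table in the keyspace."""
    return sum(stats["read_count"] for stats in all_tables.values())

tables = {"processes": {"read_count": 14772528},
          "other":     {"read_count": 1000}}

# Even if the user filtered tablestats down to 'processes', option 2
# would report the full keyspace aggregate: 14773528.
```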





[jira] [Commented] (CASSANDRA-10748) UTF8Validator.validate() wrong ??

2016-03-19 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199339#comment-15199339
 ] 

Sylvain Lebresne commented on CASSANDRA-10748:
--

I'm not a huge UTF8 expert, but the change does seem to match the comment pretty 
well, and this makes a lot more sense. +1 unless there is a CI problem, of course.

> UTF8Validator.validate() wrong ??
> -
>
> Key: CASSANDRA-10748
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10748
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Benjamin Lerer
>Priority: Minor
>
> The switch statement in {{UTF8Validator.validate()}} can never go into {{case 
> TWO_80}}, as the assignment {{state = State.TWO_80;}} in line 75 is dead.
> I assume that the {{TWO_80}} case is completely superfluous - but I would 
> like a second opinion on this.
> /cc [~carlyeks] (CASSANDRA-4495)
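Independent of the dead-state question, any UTF-8 validator must reject malformed sequences such as overlong encodings (this is from the UTF-8 specification, not from Cassandra's validator). A quick reference check using Python's strict built-in decoder:

```python
def is_valid_utf8(data: bytes) -> bool:
    """Return True if `data` is well-formed UTF-8, using Python's
    strict built-in decoder as the reference implementation."""
    try:
        data.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

# 0xE0 0xA0 0x80 is the smallest legal three-byte sequence (U+0800),
# while 0xE0 0x80 0x80 is an overlong encoding and must be rejected.
```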





[jira] [Commented] (CASSANDRA-11255) COPY TO should have higher double precision

2016-03-19 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199190#comment-15199190
 ] 

Sylvain Lebresne commented on CASSANDRA-11255:
--

The only thing I'm wondering is whether, for consistency, we shouldn't have a 
{{floatprecision}} option for COPY too, even if it defaults to the cqlsh option. 
Otherwise, when you want to change both precisions, it's weird to have to use 
the cqlsh option for float and the COPY one for double. Also, wouldn't it make 
sense to add a {{double_precision}} option to cqlsh for consistency?

> COPY TO should have higher double precision
> ---
>
> Key: CASSANDRA-11255
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11255
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
>  Labels: doc-impacting, lhf
> Fix For: 3.x
>
>
> At the moment COPY TO uses the same float precision as cqlsh, which by 
> default is 5 but can be changed in cqlshrc. However, people typically want 
> to preserve precision when exporting data, so this default is too low for 
> COPY TO.
> I suggest adding a new COPY TO option to specify floating point precision 
> with a much higher default value, for example 12.
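The precision loss is easy to demonstrate with significant-digit formatting, which is essentially what is at stake here (this is plain Python formatting, not the actual cqlsh code path):

```python
value = 0.062120404521218336  # a double, e.g. a latency value

five = format(value, ".5g")     # the low default precision of 5
twelve = format(value, ".12g")  # the higher default proposed above

# 5 significant digits discards most of the mantissa, while 12 digits
# reproduces the original value to within roughly 1e-14.
print(five, twelve)
```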





[jira] [Updated] (CASSANDRA-8958) Add client to cqlsh SHOW_SESSION

2016-03-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8958:

   Resolution: Fixed
Fix Version/s: (was: 3.x)
   3.6
   Status: Resolved  (was: Patch Available)

Lgtm, committed, thanks.

> Add client to cqlsh SHOW_SESSION
> 
>
> Key: CASSANDRA-8958
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8958
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
>Priority: Minor
>  Labels: lhf
> Fix For: 3.6
>
>
> Once the python driver supports it, 
> https://datastax-oss.atlassian.net/browse/PYTHON-235, add the client to cqlsh 
> {{SHOW_SESSION}} as done in this commit:
> https://github.com/apache/cassandra/commit/249f79d3718fa05347d60e09f9d3fa15059bd3d3
> Also, update the bundled python driver.





[jira] [Updated] (CASSANDRA-10809) Create a -D option to prevent gossip startup

2016-03-18 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10809:
-
Attachment: 10809.txt

I believe restoring the previous behavior is as simple as aborting 
{{initServer}} before {{prepareToJoin}}, which the attached patch does. The 
patch doesn't add a way to start gossip after the fact, but I assume that if 
you're debugging a schema problem or something like that, it's ok to restart 
(without the flag) once you've fixed your problem.

> Create a -D option to prevent gossip startup
> 
>
> Key: CASSANDRA-10809
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10809
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Distributed Metadata
>Reporter: Brandon Williams
> Fix For: 2.1.x
>
> Attachments: 10809.txt
>
>
> In CASSANDRA-6961 we changed how join_ring=false works, to great benefit.  
> However, sometimes you need a node to come up but not interact with other 
> nodes whatsoever - for example, if you have a schema problem, the node will 
> still pull the schema from another node, because nodes still gossip even 
> when in a dead state.
> We can add a way to restore the previous behavior by simply adding something 
> like -Dcassandra.start_gossip=false.
> In the meantime we can work around this by setting listen_address to 
> localhost, but that's kind of a pain.
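The attached patch's approach (abort {{initServer}} before {{prepareToJoin}}) boils down to gating the later startup steps on the flag. A toy sketch of that control flow (illustrative names only; the real flag would be the Java system property {{-Dcassandra.start_gossip=false}} mentioned above):

```python
def start_node(start_gossip=True):
    """Illustrative startup sequence: with start_gossip=False the node
    stops before joining the ring or gossiping, mirroring the proposed
    behavior of aborting initServer before prepareToJoin."""
    steps = ["init_server"]
    if not start_gossip:
        return steps  # abort before prepareToJoin / gossip startup
    steps += ["prepare_to_join", "start_gossip"]
    return steps
```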





[jira] [Updated] (CASSANDRA-10752) CQL.textile wasn't updated for CASSANDRA-6839

2016-03-18 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10752:
-
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 2.2.x)
   (was: 2.1.x)
   (was: 3.x)
   3.5
   3.0.5
   2.2.6
   2.1.14
   Status: Resolved  (was: Patch Available)

Committed and change pushed online. Thanks.

> CQL.textile wasn't updated for CASSANDRA-6839
> -
>
> Key: CASSANDRA-10752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10752
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation and Website
>Reporter: Jeremiah Jordan
>Assignee: Sylvain Lebresne
> Fix For: 2.1.14, 2.2.6, 3.0.5, 3.5
>
>
> CQL.textile wasn't updated after CASSANDRA-6839 added inequalities for LWT's.





[jira] [Updated] (CASSANDRA-10809) Create a -D option to prevent gossip startup

2016-03-18 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10809:
-
Assignee: Sylvain Lebresne
Reviewer: Brandon Williams
  Status: Patch Available  (was: Open)

[~brandon.williams] do you have a few minutes to check if that's what you had 
in mind?

> Create a -D option to prevent gossip startup
> 
>
> Key: CASSANDRA-10809
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10809
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Distributed Metadata
>Reporter: Brandon Williams
>Assignee: Sylvain Lebresne
> Fix For: 2.1.x
>
> Attachments: 10809.txt
>
>
> In CASSANDRA-6961 we changed how join_ring=false works, to great benefit.  
> However, sometimes you need a node to come up but not interact with other 
> nodes whatsoever - for example, if you have a schema problem, the node will 
> still pull the schema from another node, because nodes still gossip even 
> when in a dead state.
> We can add a way to restore the previous behavior by simply adding something 
> like -Dcassandra.start_gossip=false.
> In the meantime we can work around this by setting listen_address to 
> localhost, but that's kind of a pain.





[jira] [Updated] (CASSANDRA-8969) Add indication in cassandra.yaml that rpc timeouts going too high will cause memory build up

2016-03-18 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8969:

Fix Version/s: 3.x
   Status: Patch Available  (was: Open)

> Add indication in cassandra.yaml that rpc timeouts going too high will cause 
> memory build up
> 
>
> Key: CASSANDRA-8969
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8969
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jeremy Hanna
>Assignee: Jeremy Hanna
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: 8969.txt
>
>
> It would be helpful to communicate that setting the rpc timeouts too high may 
> cause memory problems on the server, as it can become overloaded and has to 
> retain the in-flight requests in memory.  I'll get this done; I'm just adding 
> the ticket as a placeholder so it isn't forgotten.





[jira] [Commented] (CASSANDRA-9588) Make sstableofflinerelevel print stats before relevel

2016-03-18 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199250#comment-15199250
 ] 

Sylvain Lebresne commented on CASSANDRA-9588:
-

+1

> Make sstableofflinerelevel print stats before relevel
> -
>
> Key: CASSANDRA-9588
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9588
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jens Rantil
>Assignee: Marcus Eriksson
>Priority: Trivial
>  Labels: lhf
> Fix For: 3.x
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The current version of sstableofflinerelevel prints the new level hierarchy. 
> While "nodetool cfstats ..." will show the current hierarchy, it would be nice 
> to have "sstableofflinerelevel" also output the current level histograms, for 
> easy comparison of what changes will be made - especially since 
> sstableofflinerelevel needs to run when the node isn't running, and "nodetool 
> cfstats ..." doesn't work in that case.
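What's being asked for is a before/after dump of the per-level SSTable counts. A toy sketch of such output (hypothetical representation with levels as a list of counts, not the actual tool's code):

```python
def format_leveling(counts):
    """Render per-level sstable counts like 'L0: 50, L1: 10, ...'."""
    return ", ".join("L%d: %d" % (level, n)
                     for level, n in enumerate(counts))

current = [50, 10, 100, 0]   # many sstables stuck in L0
proposed = [0, 10, 100, 50]  # after the offline relevel

print("Current leveling: ", format_leveling(current))
print("Proposed leveling:", format_leveling(proposed))
```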





[jira] [Updated] (CASSANDRA-10752) CQL.textile wasn't updated for CASSANDRA-6839

2016-03-18 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10752:
-
Status: Patch Available  (was: Open)

Patch is [here|https://github.com/pcmanus/cassandra/commits/10752]. It's 
against 2.1 since why not, but I'll just merge up if we're good with the 
changes.

> CQL.textile wasn't updated for CASSANDRA-6839
> -
>
> Key: CASSANDRA-10752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10752
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation and Website
>Reporter: Jeremiah Jordan
>Assignee: Sylvain Lebresne
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> CQL.textile wasn't updated after CASSANDRA-6839 added inequalities for LWT's.





[jira] [Commented] (CASSANDRA-10342) Read defragmentation can cause unnecessary repairs

2016-03-16 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197087#comment-15197087
 ] 

Sylvain Lebresne commented on CASSANDRA-10342:
--

+1

> Read defragmentation can cause unnecessary repairs
> --
>
> Key: CASSANDRA-10342
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10342
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Olsson
>Assignee: Marcus Eriksson
>Priority: Minor
>
> After applying the fix from CASSANDRA-10299 to the cluster, we started seeing 
> ~20k small sstables appear for the table with static data when running 
> incremental repair.
> In the logs there were several messages about flushes for that table, one for 
> each repaired range. The flushed sstables were 0.000kb in size with < 100 ops 
> each. When checking cfstats there were several writes to that table, even 
> though we were only reading from it, and read repair did not repair anything.
> After digging around in the codebase I noticed that defragmentation of data 
> can occur while reading, depending on the query and some other conditions. 
> This causes the read data to be inserted again to have it in a more recent 
> sstable, which can be a problem if that data was repaired using incremental 
> repair. The defragmentation is done in 
> [CollationController.java|https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/db/CollationController.java#L151].
> I guess this wasn't a problem with full repairs since I assume that the 
> digest should be the same even if you have two copies of the same data. But 
> with incremental repair this will most probably cause a mismatch between 
> nodes if that data already was repaired, since the other nodes probably won't 
> have that data in their unrepaired set.
> --
> I can add that the problems on our cluster were probably due to the fact that 
> CASSANDRA-10299 caused the same data to be streamed multiple times, ending 
> up in several sstables. One of the conditions for defragmentation is that 
> the number of sstables read during a read request has to be more than the 
> minimum number of sstables needed for a compaction (> 4 in our case). So 
> normally I don't think this would cause ~20k sstables to appear; we probably 
> hit an extreme case.
> One workaround for this is to use a compaction strategy other than STCS (it 
> seems to be the only affected strategy, at least in 2.1), but the solution 
> might be to either make defragmentation configurable per table or avoid 
> reinserting the data if any of the sstables involved in the read are repaired.
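The second suggested fix can be expressed as a small predicate: keep the existing "read more sstables than the compaction minimum" trigger, but never reinsert when a repaired sstable was involved. A sketch under those assumptions (illustrative names, not the CollationController code):

```python
def should_defragment(sstables_read, min_compaction_threshold=4,
                      any_repaired=False):
    """Decide whether a read should rewrite its result into a fresh
    sstable. Reading from many sstables triggers defragmentation,
    unless any of them is already incrementally repaired."""
    return sstables_read > min_compaction_threshold and not any_repaired
```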





[jira] [Updated] (CASSANDRA-10968) When taking snapshot, manifest.json contains incorrect or no files when column family has secondary indexes

2016-03-15 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10968:
-
Description: 
xNoticed indeterminate behaviour when taking snapshot on column families that 
has secondary indexes setup. The created manifest.json created when doing 
snapshot, sometimes contains no file names at all and sometimes some file 
names. 
I don't know if this post is related but that was the only thing I could find:
http://www.mail-archive.com/user%40cassandra.apache.org/msg42019.html

  was:
Noticed indeterminate behaviour when taking snapshot on column families that 
has secondary indexes setup. The created manifest.json created when doing 
snapshot, sometimes contains no file names at all and sometimes some file 
names. 
I don't know if this post is related but that was the only thing I could find:
http://www.mail-archive.com/user%40cassandra.apache.org/msg42019.html


> When taking snapshot, manifest.json contains incorrect or no files when 
> column family has secondary indexes
> ---
>
> Key: CASSANDRA-10968
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10968
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Fred A
>  Labels: lhf
>
> Noticed indeterminate behaviour when taking snapshots on column families 
> that have secondary indexes set up. The manifest.json created when taking a 
> snapshot sometimes contains no file names at all, and sometimes only some 
> file names. 
> I don't know if this post is related, but it was the only thing I could find:
> http://www.mail-archive.com/user%40cassandra.apache.org/msg42019.html





[jira] [Updated] (CASSANDRA-10876) Alter behavior of batch WARN and fail on single partition batches

2016-03-15 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10876:
-
   Resolution: Fixed
Fix Version/s: (was: 3.x)
   3.6
   Status: Resolved  (was: Ready to Commit)

Committed, thanks.

> Alter behavior of batch WARN and fail on single partition batches
> -
>
> Key: CASSANDRA-10876
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10876
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Patrick McFadin
>Assignee: Sylvain Lebresne
>Priority: Minor
>  Labels: lhf
> Fix For: 3.6
>
> Attachments: 10876.txt
>
>
> In an attempt to give operators insight into potentially harmful batch usage, 
> JIRAs were created to log WARN or fail on certain batch sizes. This ignores 
> single-partition batches, which don't create the same issues as 
> multi-partition batches. 
> The proposal is to ignore size on single partition batch statements. 
> Reference:
> [CASSANDRA-6487|https://issues.apache.org/jira/browse/CASSANDRA-6487]
> [CASSANDRA-8011|https://issues.apache.org/jira/browse/CASSANDRA-8011]
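The proposal reduces to checking the number of distinct partitions before applying any size threshold. A sketch of that check (hypothetical names, not the actual BatchStatement code):

```python
def batch_size_warning(partition_keys, batch_size_bytes,
                       warn_threshold_bytes):
    """Warn on large batches only when they span multiple partitions,
    since a single-partition batch doesn't carry the same
    multi-partition coordination cost."""
    if len(set(partition_keys)) <= 1:
        return False  # single-partition batch: size threshold ignored
    return batch_size_bytes > warn_threshold_bytes
```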





[jira] [Resolved] (CASSANDRA-11342) Native transport TP stats aren't getting logged

2016-03-14 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-11342.
--
Resolution: Not A Problem

Oh, then since 2.1 is only for critical fixes at this point and this is not 
critical, closing.

> Native transport TP stats aren't getting logged
> ---
>
> Key: CASSANDRA-11342
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11342
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sebastian Estevez
>  Labels: lhf
>
> Native-Transports was added back to tpstats in CASSANDRA-10044 but I think it 
> was missed in the StatusLogger because I'm not seeing it in my system.log.
> {code}
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,582  StatusLogger.java:51 - Pool 
> NameActive   Pending  Completed   Blocked  All Time 
> Blocked
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,583  StatusLogger.java:66 - 
> CounterMutationStage  2 02534760 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,583  StatusLogger.java:66 - 
> ReadStage 1 0 447464 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,583  StatusLogger.java:66 - 
> RequestResponseStage  2 16035382 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,584  StatusLogger.java:66 - 
> ReadRepairStage   0 1282 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,584  StatusLogger.java:66 - 
> MutationStage 0 07187156 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,584  StatusLogger.java:66 - 
> GossipStage   0 0   5535 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,584  StatusLogger.java:66 - 
> AntiEntropyStage  0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,585  StatusLogger.java:66 - 
> CacheCleanupExecutor  0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,585  StatusLogger.java:66 - 
> MigrationStage0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,585  StatusLogger.java:66 - 
> ValidationExecutor0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,585  StatusLogger.java:66 - 
> Sampler   0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,585  StatusLogger.java:66 - 
> MiscStage 0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,586  StatusLogger.java:66 - 
> CommitLogArchiver 0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,586  StatusLogger.java:66 - 
> MemtableFlushWriter   115106 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,586  StatusLogger.java:66 - 
> PendingRangeCalculator0 0381 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,586  StatusLogger.java:66 - 
> MemtableReclaimMemory 0 0106 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,587  StatusLogger.java:66 - 
> MemtablePostFlush 116170 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,587  StatusLogger.java:66 - 
> CompactionExecutor2   191636 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,587  StatusLogger.java:66 - 
> InternalResponseStage 0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,587  StatusLogger.java:66 - 
> HintedHandoff 2 5 60 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,588  StatusLogger.java:75 - 
> CompactionManager 2 4
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,588  StatusLogger.java:87 - 
> MessagingServicen/a   1/4
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,588  StatusLogger.java:97 - 
> Cache Type Size Capacity   
> KeysToSave
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,588  StatusLogger.java:99 - 
> KeyCache   93954904104857600  

[jira] [Updated] (CASSANDRA-11342) Native transport TP stats aren't getting logged

2016-03-11 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11342:
-
Issue Type: Improvement  (was: Bug)

> Native transport TP stats aren't getting logged
> ---
>
> Key: CASSANDRA-11342
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11342
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sebastian Estevez
>  Labels: lhf
>
> Native-Transports was added back to tpstats in CASSANDRA-10044 but I think it 
> was missed in the StatusLogger because I'm not seeing it in my system.log.
> {code}
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,582  StatusLogger.java:51 - Pool 
> NameActive   Pending  Completed   Blocked  All Time 
> Blocked
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,583  StatusLogger.java:66 - 
> CounterMutationStage  2 02534760 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,583  StatusLogger.java:66 - 
> ReadStage 1 0 447464 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,583  StatusLogger.java:66 - 
> RequestResponseStage  2 16035382 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,584  StatusLogger.java:66 - 
> ReadRepairStage   0 1282 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,584  StatusLogger.java:66 - 
> MutationStage 0 07187156 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,584  StatusLogger.java:66 - 
> GossipStage   0 0   5535 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,584  StatusLogger.java:66 - 
> AntiEntropyStage  0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,585  StatusLogger.java:66 - 
> CacheCleanupExecutor  0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,585  StatusLogger.java:66 - 
> MigrationStage0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,585  StatusLogger.java:66 - 
> ValidationExecutor0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,585  StatusLogger.java:66 - 
> Sampler   0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,585  StatusLogger.java:66 - 
> MiscStage 0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,586  StatusLogger.java:66 - 
> CommitLogArchiver 0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,586  StatusLogger.java:66 - 
> MemtableFlushWriter   115106 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,586  StatusLogger.java:66 - 
> PendingRangeCalculator0 0381 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,586  StatusLogger.java:66 - 
> MemtableReclaimMemory 0 0106 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,587  StatusLogger.java:66 - 
> MemtablePostFlush 116170 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,587  StatusLogger.java:66 - 
> CompactionExecutor2   191636 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,587  StatusLogger.java:66 - 
> InternalResponseStage 0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,587  StatusLogger.java:66 - 
> HintedHandoff 2 5 60 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,588  StatusLogger.java:75 - 
> CompactionManager 2 4
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,588  StatusLogger.java:87 - 
> MessagingServicen/a   1/4
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,588  StatusLogger.java:97 - 
> Cache Type Size Capacity   
> KeysToSave
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,588  StatusLogger.java:99 - 
> KeyCache   93954904104857600  
> all
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,588  

[jira] [Updated] (CASSANDRA-11342) Native transport TP stats aren't getting logged

2016-03-11 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11342:
-
Labels: lhf  (was: )

> Native transport TP stats aren't getting logged
> ---
>
> Key: CASSANDRA-11342
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11342
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sebastian Estevez
>  Labels: lhf
>
> Native-Transports was added back to tpstats in CASSANDRA-10044 but I think it 
> was missed in the StatusLogger because I'm not seeing it in my system.log.
> {code}
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,582  StatusLogger.java:51 - Pool 
> NameActive   Pending  Completed   Blocked  All Time 
> Blocked
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,583  StatusLogger.java:66 - 
> CounterMutationStage  2 02534760 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,583  StatusLogger.java:66 - 
> ReadStage 1 0 447464 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,583  StatusLogger.java:66 - 
> RequestResponseStage  2 16035382 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,584  StatusLogger.java:66 - 
> ReadRepairStage   0 1282 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,584  StatusLogger.java:66 - 
> MutationStage 0 07187156 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,584  StatusLogger.java:66 - 
> GossipStage   0 0   5535 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,584  StatusLogger.java:66 - 
> AntiEntropyStage  0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,585  StatusLogger.java:66 - 
> CacheCleanupExecutor  0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,585  StatusLogger.java:66 - 
> MigrationStage0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,585  StatusLogger.java:66 - 
> ValidationExecutor0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,585  StatusLogger.java:66 - 
> Sampler   0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,585  StatusLogger.java:66 - 
> MiscStage 0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,586  StatusLogger.java:66 - 
> CommitLogArchiver 0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,586  StatusLogger.java:66 - 
> MemtableFlushWriter   115106 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,586  StatusLogger.java:66 - 
> PendingRangeCalculator0 0381 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,586  StatusLogger.java:66 - 
> MemtableReclaimMemory 0 0106 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,587  StatusLogger.java:66 - 
> MemtablePostFlush 116170 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,587  StatusLogger.java:66 - 
> CompactionExecutor2   191636 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,587  StatusLogger.java:66 - 
> InternalResponseStage 0 0  0 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,587  StatusLogger.java:66 - 
> HintedHandoff 2 5 60 0
>  0
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,588  StatusLogger.java:75 - 
> CompactionManager 2 4
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,588  StatusLogger.java:87 - 
> MessagingServicen/a   1/4
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,588  StatusLogger.java:97 - 
> Cache Type Size Capacity   
> KeysToSave
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,588  StatusLogger.java:99 - 
> KeyCache   93954904104857600  
> all
> INFO  [ScheduledTasks:1] 2016-03-10 20:01:26,588  StatusLogger.java:105 - 
> RowCache  

[jira] [Updated] (CASSANDRA-11339) WHERE clause in SELECT DISTINCT can be ignored

2016-03-11 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11339:
-
Assignee: Benjamin Lerer

> WHERE clause in SELECT DISTINCT can be ignored
> --
>
> Key: CASSANDRA-11339
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11339
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Philip Thompson
>Assignee: Benjamin Lerer
> Fix For: 2.1.x
>
>
> I've tested this out on 2.1-head. I'm not sure if it's the same behavior on 
> newer versions.
> For a given table t, with {{PRIMARY KEY (id, v)}} the following two queries 
> return the same result:
> {{SELECT DISTINCT id FROM t WHERE v > X ALLOW FILTERING}}
> {{SELECT DISTINCT id FROM t}}
> The WHERE clause in the former is silently ignored, and all ids are returned, 
> regardless of the value of v in any row. 
> It seems like this has been a known issue for a while:
> http://stackoverflow.com/questions/26548788/select-distinct-cql-ignores-where-clause
> However, if we don't support filtering on anything but the partition key, we 
> should reject the query rather than silently dropping the WHERE clause.
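To make the surprise concrete, here is a toy model of the two behaviors (plain Python with made-up data, not Cassandra code): for a table with {{PRIMARY KEY (id, v)}}, SELECT DISTINCT only reads partition-level data, so a restriction on v has nothing to apply to.

```python
# Toy model of the reported behavior (names and data are illustrative, not
# Cassandra code): for PRIMARY KEY (id, v), SELECT DISTINCT only touches
# partition-level data, so a restriction on v is silently dropped in 2.1.
rows = [(1, 5), (1, 20), (2, 3), (3, 30)]  # (id, v) pairs in table t

def distinct_ids_expected(rows, v_threshold):
    """What the user expects: ids with at least one row where v > threshold."""
    return sorted({pk for pk, v in rows if v > v_threshold})

def distinct_ids_actual(rows, v_threshold):
    """What 2.1 actually returns: the WHERE clause is ignored entirely."""
    return sorted({pk for pk, _ in rows})

print(distinct_ids_expected(rows, 10))  # [1, 3]
print(distinct_ids_actual(rows, 10))    # [1, 2, 3] -- filter ignored
```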



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11196) tuple_notation_test upgrade tests flaps

2016-03-11 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190982#comment-15190982
 ] 

Sylvain Lebresne commented on CASSANDRA-11196:
--

+1 (but maybe add a note that this is not a problem in 3.0, so people reading 
don't freak out too much, as it's kind of a hack)

> tuple_notation_test upgrade tests flaps
> ---
>
> Key: CASSANDRA-11196
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11196
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Benjamin Lerer
>  Labels: dtest
> Attachments: 11196-2.2.txt, node1.log, node1_debug.log, node2.log, 
> output.txt
>
>
> {{tuple_notation_test}} in the {{upgrade_tests.cql_tests}} module flaps on a 
> number of different upgrade paths. Here are some of the tests that flap:
> {code}
> upgrade_tests/cql_tests.py:TestCQLNodes2RF1_2_1_UpTo_2_2_HEAD.tuple_notation_test
> upgrade_tests/cql_tests.py:TestCQLNodes2RF1_2_1_UpTo_2_2_HEAD.tuple_notation_test
> upgrade_tests/cql_tests.py:TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD.tuple_notation_test
> upgrade_tests/cql_tests.py:TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD.tuple_notation_test
> upgrade_tests/cql_tests.py:TestCQLNodes3RF3_2_2_HEAD_UpTo_Trunk.tuple_notation_test
> upgrade_tests/cql_tests.py:TestCQLNodes3RF3_2_2_HEAD_UpTo_Trunk.tuple_notation_test
> {code}
> Here's an example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/9/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_2_1_UpTo_2_2_HEAD/tuple_notation_test/
> All the failures I've seen fail with this error:
> {code}
>  message="java.lang.IndexOutOfBoundsException">
> {code}





[jira] [Updated] (CASSANDRA-11196) tuple_notation_test upgrade tests flaps

2016-03-11 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11196:
-
Reviewer: Sylvain Lebresne

> tuple_notation_test upgrade tests flaps
> ---
>
> Key: CASSANDRA-11196
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11196
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Benjamin Lerer
>  Labels: dtest
> Attachments: 11196-2.2.txt, node1.log, node1_debug.log, node2.log, 
> output.txt
>
>
> {{tuple_notation_test}} in the {{upgrade_tests.cql_tests}} module flaps on a 
> number of different upgrade paths. Here are some of the tests that flap:
> {code}
> upgrade_tests/cql_tests.py:TestCQLNodes2RF1_2_1_UpTo_2_2_HEAD.tuple_notation_test
> upgrade_tests/cql_tests.py:TestCQLNodes2RF1_2_1_UpTo_2_2_HEAD.tuple_notation_test
> upgrade_tests/cql_tests.py:TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD.tuple_notation_test
> upgrade_tests/cql_tests.py:TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD.tuple_notation_test
> upgrade_tests/cql_tests.py:TestCQLNodes3RF3_2_2_HEAD_UpTo_Trunk.tuple_notation_test
> upgrade_tests/cql_tests.py:TestCQLNodes3RF3_2_2_HEAD_UpTo_Trunk.tuple_notation_test
> {code}
> Here's an example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/9/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_2_1_UpTo_2_2_HEAD/tuple_notation_test/
> All the failures I've seen fail with this error:
> {code}
>  message="java.lang.IndexOutOfBoundsException">
> {code}





[jira] [Commented] (CASSANDRA-11332) nodes connect to themselves when NTS is used

2016-03-11 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190969#comment-15190969
 ] 

Sylvain Lebresne commented on CASSANDRA-11332:
--

bq. or it will complain about not being able to find it when it connects to 
itself

I believe you, but it would still be a bit more useful to have a proper stack 
trace of the error, or to know in what way it "complains".

Anyway, my preference for fixing it when it goes to 2.1/2.2/3.0 would be to fix 
it in PFS, having it recognize your {{listen_address}} when asked for it and 
use the BCA instead to figure out the DC and rack. That ought to be simple and 
safe. Changing MessagingService to special-case the local path is doable but, 
as said above, a tad more involved.

> nodes connect to themselves when NTS is used
> 
>
> Key: CASSANDRA-11332
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11332
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
> Fix For: 2.1.x
>
>
> I tested this with both the simple snitch and PFS.  It's quite easy to repro: 
> set up a cluster and start it.  Mine looks like this:
> {noformat}
> tcp0  0 10.208.8.123:48003  10.208.8.63:7000
> ESTABLISHED 26254/java
> tcp0  0 10.208.8.123:7000   10.208.8.63:40215   
> ESTABLISHED 26254/java
> tcp0  0 10.208.8.123:9  10.208.35.225:7000  
> ESTABLISHED 26254/java
> tcp0  0 10.208.8.123:33498  10.208.8.63:7000
> ESTABLISHED 26254/java
> tcp0  0 10.208.8.123:7000   10.208.35.225:52530 
> ESTABLISHED 26254/java
> tcp0  0 10.208.8.123:7000   10.208.35.225:53674 
> ESTABLISHED 26254/java
> tcp0  0 10.208.8.123:40846  10.208.35.225:7000  
> ESTABLISHED 26254/java
> tcp0  0 10.208.8.123:7000   10.208.8.63:48880   
> ESTABLISHED 26254/java
> {noformat}
> No problems so far.  Now create a keyspace using NTS with an rf of 3, and 
> perform some writes.  Now it looks like this:
> {noformat}
> tcp0  0 10.208.8.123:48003  10.208.8.63:7000
> ESTABLISHED 26254/java  
> tcp0  0 10.208.8.123:7000   10.208.8.123:35024  
> ESTABLISHED 26254/java  
> tcp0  0 10.208.8.123:35024  10.208.8.123:7000   
> ESTABLISHED 26254/java  
> tcp0  0 10.208.8.123:47212  10.208.8.123:7000   
> ESTABLISHED 26254/java  
> tcp0  0 10.208.8.123:7000   10.208.8.63:40215   
> ESTABLISHED 26254/java  
> tcp0  0 10.208.8.123:9  10.208.35.225:7000  
> ESTABLISHED 26254/java  
> tcp0  0 10.208.8.123:33498  10.208.8.63:7000
> ESTABLISHED 26254/java  
> tcp0  0 10.208.8.123:7000   10.208.35.225:52530 
> ESTABLISHED 26254/java  
> tcp0  0 10.208.8.123:7000   10.208.35.225:53674 
> ESTABLISHED 26254/java  
> tcp0  0 10.208.8.123:7000   10.208.8.123:47212  
> ESTABLISHED 26254/java  
> tcp0  0 10.208.8.123:40846  10.208.35.225:7000  
> ESTABLISHED 26254/java  
> tcp0  0 10.208.8.123:7000   10.208.8.63:48880   
> ESTABLISHED 26254/java  
> {noformat}
> I can't think of any reason for a node to connect to itself, and this can 
> cause problems with PFS where you might only define the broadcast addresses, 
> but now you need the internal addresses too because the node will need to 
> look itself up when connecting to itself.
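The special-casing discussed in this ticket can be sketched as follows; this is a hypothetical model in Python (the class and method names are made up, not Cassandra's real API): before opening an outbound connection, compare the target against the node's own broadcast address and deliver locally instead.

```python
# Hypothetical sketch of local-path special-casing (illustrative names, not
# Cassandra's actual MessagingService): messages addressed to ourselves are
# delivered in-process instead of over a TCP connection back to ourselves.
class MessagingService:
    def __init__(self, broadcast_address):
        self.broadcast_address = broadcast_address
        self.delivered_locally = []
        self.opened_connections = []

    def send(self, message, endpoint):
        if endpoint == self.broadcast_address:
            # Local path: no self-connection, no snitch lookup of ourselves.
            self.delivered_locally.append(message)
        else:
            self.opened_connections.append(endpoint)

ms = MessagingService("10.208.8.123")
ms.send("mutation", "10.208.8.123")  # stays in-process
ms.send("mutation", "10.208.8.63")   # goes over the wire
```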





[jira] [Commented] (CASSANDRA-10944) ERROR [CompactionExecutor] CassandraDaemon.java Exception in thread

2016-03-10 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189058#comment-15189058
 ] 

Sylvain Lebresne commented on CASSANDRA-10944:
--

Can you guys open a separate issue please? I don't doubt you still get this, 
but code has been committed here, so it's a different (even if related) 
problem, and we want to keep track of which code has been committed to which 
version. Thanks.

> ERROR [CompactionExecutor] CassandraDaemon.java  Exception in thread 
> -
>
> Key: CASSANDRA-10944
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10944
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, Local Write-Read Paths
>Reporter: Alexey Ivanchin
>Assignee: Sylvain Lebresne
>  Labels: error
> Fix For: 3.0.3, 3.3
>
>
> Hey. Please help me with a problem. Recently I updated to 3.0.1 and this 
> problem appeared in the logs.
> ERROR [CompactionExecutor:2596] 2015-12-28 08:30:27,733 
> CassandraDaemon.java:195 - Exception in thread 
> Thread[CompactionExecutor:2596,1,main]
> java.lang.AssertionError: null
>   at org.apache.cassandra.db.rows.BufferCell.<init>(BufferCell.java:49) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:88) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:83) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at org.apache.cassandra.db.rows.BufferCell.purge(BufferCell.java:175) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.ComplexColumnData.lambda$purge$100(ComplexColumnData.java:165)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.ComplexColumnData$$Lambda$53/1339741213.apply(Unknown
>  Source) ~[na:na]
>   at 
> org.apache.cassandra.utils.btree.BTree$FiltrationTracker.apply(BTree.java:614)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:657) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:632) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.ComplexColumnData.transformAndFilter(ComplexColumnData.java:170)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.ComplexColumnData.purge(ComplexColumnData.java:165)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.ComplexColumnData.purge(ComplexColumnData.java:43)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.BTreeRow.lambda$purge$95(BTreeRow.java:333) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.BTreeRow$$Lambda$52/1236900032.apply(Unknown 
> Source) ~[na:na]
>   at 
> org.apache.cassandra.utils.btree.BTree$FiltrationTracker.apply(BTree.java:614)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:657) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:632) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.BTreeRow.transformAndFilter(BTreeRow.java:338) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at org.apache.cassandra.db.rows.BTreeRow.purge(BTreeRow.java:333) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToRow(PurgeFunction.java:88)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:116) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:38)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:64)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> 

[jira] [Resolved] (CASSANDRA-10944) ERROR [CompactionExecutor] CassandraDaemon.java Exception in thread

2016-03-10 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-10944.
--
Resolution: Fixed

> ERROR [CompactionExecutor] CassandraDaemon.java  Exception in thread 
> -
>
> Key: CASSANDRA-10944
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10944
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, Local Write-Read Paths
>Reporter: Alexey Ivanchin
>Assignee: Sylvain Lebresne
>  Labels: error
> Fix For: 3.3, 3.0.3
>
>
> Hey. Please help me with a problem. Recently I updated to 3.0.1 and this 
> problem appeared in the logs.
> ERROR [CompactionExecutor:2596] 2015-12-28 08:30:27,733 
> CassandraDaemon.java:195 - Exception in thread 
> Thread[CompactionExecutor:2596,1,main]
> java.lang.AssertionError: null
>   at org.apache.cassandra.db.rows.BufferCell.<init>(BufferCell.java:49) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:88) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:83) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at org.apache.cassandra.db.rows.BufferCell.purge(BufferCell.java:175) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.ComplexColumnData.lambda$purge$100(ComplexColumnData.java:165)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.ComplexColumnData$$Lambda$53/1339741213.apply(Unknown
>  Source) ~[na:na]
>   at 
> org.apache.cassandra.utils.btree.BTree$FiltrationTracker.apply(BTree.java:614)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:657) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:632) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.ComplexColumnData.transformAndFilter(ComplexColumnData.java:170)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.ComplexColumnData.purge(ComplexColumnData.java:165)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.ComplexColumnData.purge(ComplexColumnData.java:43)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.BTreeRow.lambda$purge$95(BTreeRow.java:333) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.BTreeRow$$Lambda$52/1236900032.apply(Unknown 
> Source) ~[na:na]
>   at 
> org.apache.cassandra.utils.btree.BTree$FiltrationTracker.apply(BTree.java:614)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:657) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:632) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.BTreeRow.transformAndFilter(BTreeRow.java:338) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at org.apache.cassandra.db.rows.BTreeRow.purge(BTreeRow.java:333) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToRow(PurgeFunction.java:88)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:116) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:38)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:64)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> 

[jira] [Commented] (CASSANDRA-11208) Paging is broken for IN queries

2016-03-10 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189049#comment-15189049
 ] 

Sylvain Lebresne commented on CASSANDRA-11208:
--

+1, but can you fix the two following typos while committing:
* "If the _the_ clustering..."
* ".. in both cases there are _not_ data remaining" (that one is not new to the 
patch)

> Paging is broken for IN queries
> ---
>
> Key: CASSANDRA-11208
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11208
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Attachments: 11208-3.0.txt
>
>
> If the number of selected rows is greater than the page size, C* will return 
> some duplicates.
> The problem can be reproduced with the java driver using the following code:
> {code}
>session = cluster.connect();
>session.execute("CREATE KEYSPACE IF NOT EXISTS test WITH REPLICATION = 
> {'class' : 'SimpleStrategy', 'replication_factor' : '1'}");
>session.execute("USE test");
>session.execute("DROP TABLE IF EXISTS test");
>session.execute("CREATE TABLE test (rc int, pk int, PRIMARY KEY 
> (pk))");
>for (int i = 0; i < 5; i++)
>session.execute("INSERT INTO test (pk, rc) VALUES (?, ?);", i, i);
>ResultSet rs = session.execute(session.newSimpleStatement("SELECT * 
> FROM test WHERE  pk IN (1, 2, 3)").setFetchSize(2));
> {code}
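A toy illustration of the symptom (plain Python, not driver or server code): if the paging state points at the last returned row instead of past it, every page after the first replays a row, which is exactly the kind of duplication reported here. The model below assumes a fetch size of at least 2.

```python
# Toy model of duplicated rows under paging (illustrative, not Cassandra
# code): off_by_one=True models a paging state that resumes AT the last
# returned row instead of after it. Assumes fetch_size >= 2.
def page_rows(rows, fetch_size, off_by_one=False):
    out, pos = [], 0
    while pos < len(rows):
        page = rows[pos:pos + fetch_size]
        out.extend(page)
        if off_by_one and pos + fetch_size < len(rows):
            pos += len(page) - 1  # buggy resume: re-reads the last row
        else:
            pos += len(page)      # correct resume: continues past the page
    return out

print(page_rows([1, 2, 3, 4, 5], 2))                    # no duplicates
print(page_rows([1, 2, 3, 4, 5], 2, off_by_one=True))   # duplicates appear
```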





[jira] [Commented] (CASSANDRA-11332) nodes connect to themselves when NTS is used

2016-03-10 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15189031#comment-15189031
 ] 

Sylvain Lebresne commented on CASSANDRA-11332:
--

bq. I can't think of any reason for a node to connect to itself

There are quite a few places where we don't special-case the local node and 
just go through {{MessagingService}}, even for localhost, out of simplicity. 
Off the top of my head I believe that's at least the case for truncation, 
repair and most Paxos messages, but I'm sure I'm missing some. And 
{{MessagingService}} doesn't special-case the local host either.

I'll note that afaict normal writes and schema both do special-case the local 
host, so I'm not sure why you can reproduce by just adding a keyspace and doing 
a few writes (unless those writes include LWT, in which case that's definitely 
due to Paxos), but I'm sure I'm forgetting something that is not special-cased, 
and I'm not surprised at all.

Now, we could modify {{MessagingService}}, I suppose, to special-case local 
messages, though we still want sending to be non-blocking, so we'd still need a 
dedicated thread with a queue, and we probably still want to preserve the 
protections we have, like dropping messages that are too old (which, while I'm 
fine implementing that solution, makes me uncomfortable on an old branch).

But truth is, I'm not sure I understand the concrete consequences this has for 
{{PropertyFileSnitch}} (that is, have you noticed a genuine bug due to this?), 
but we could also fix it so it recognizes that both the broadcast and listen 
addresses mean the same thing.

> nodes connect to themselves when NTS is used
> 
>
> Key: CASSANDRA-11332
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11332
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
> Fix For: 2.1.x
>
>
> I tested this with both the simple snitch and PFS.  It's quite easy to repro: 
> set up a cluster and start it.  Mine looks like this:
> {noformat}
> tcp0  0 10.208.8.123:48003  10.208.8.63:7000
> ESTABLISHED 26254/java
> tcp0  0 10.208.8.123:7000   10.208.8.63:40215   
> ESTABLISHED 26254/java
> tcp0  0 10.208.8.123:9  10.208.35.225:7000  
> ESTABLISHED 26254/java
> tcp0  0 10.208.8.123:33498  10.208.8.63:7000
> ESTABLISHED 26254/java
> tcp0  0 10.208.8.123:7000   10.208.35.225:52530 
> ESTABLISHED 26254/java
> tcp0  0 10.208.8.123:7000   10.208.35.225:53674 
> ESTABLISHED 26254/java
> tcp0  0 10.208.8.123:40846  10.208.35.225:7000  
> ESTABLISHED 26254/java
> tcp0  0 10.208.8.123:7000   10.208.8.63:48880   
> ESTABLISHED 26254/java
> {noformat}
> No problems so far.  Now create a keyspace using NTS with an rf of 3, and 
> perform some writes.  Now it looks like this:
> {noformat}
> tcp0  0 10.208.8.123:48003  10.208.8.63:7000
> ESTABLISHED 26254/java  
> tcp0  0 10.208.8.123:7000   10.208.8.123:35024  
> ESTABLISHED 26254/java  
> tcp0  0 10.208.8.123:35024  10.208.8.123:7000   
> ESTABLISHED 26254/java  
> tcp0  0 10.208.8.123:47212  10.208.8.123:7000   
> ESTABLISHED 26254/java  
> tcp0  0 10.208.8.123:7000   10.208.8.63:40215   
> ESTABLISHED 26254/java  
> tcp0  0 10.208.8.123:9  10.208.35.225:7000  
> ESTABLISHED 26254/java  
> tcp0  0 10.208.8.123:33498  10.208.8.63:7000
> ESTABLISHED 26254/java  
> tcp0  0 10.208.8.123:7000   10.208.35.225:52530 
> ESTABLISHED 26254/java  
> tcp0  0 10.208.8.123:7000   10.208.35.225:53674 
> ESTABLISHED 26254/java  
> tcp0  0 10.208.8.123:7000   10.208.8.123:47212  
> ESTABLISHED 26254/java  
> tcp0  0 10.208.8.123:40846  10.208.35.225:7000  
> ESTABLISHED 26254/java  
> tcp0  0 10.208.8.123:7000   10.208.8.63:48880   
> ESTABLISHED 26254/java  
> {noformat}
> I can't think of any reason for a node to connect to itself, and this can 
> cause problems with PFS where you might only define the broadcast addresses, 
> but now you need the internal addresses too because the node will need to 
> look itself up when connecting to itself.





[jira] [Commented] (CASSANDRA-11331) Create Index IF NOT EXISTS throws error when index already exists

2016-03-10 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188963#comment-15188963
 ] 

Sylvain Lebresne commented on CASSANDRA-11331:
--

bq. we can broaden the scope of the IF NOT EXISTS check to include duplicates 
in all but name

I think we should. And honestly, from a user point of view, this sounds a lot 
like a regression and it's hard to argue it's not.

> Create Index IF NOT EXISTS throws error when index already exists
> -
>
> Key: CASSANDRA-11331
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11331
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Philip Thompson
>Assignee: Sam Tunnicliffe
> Fix For: 3.0.x, 3.x
>
>
> While testing with trunk, I see that issuing the following queries throws an 
> InvalidRequest, despite being valid.
> {code}
> CREATE KEYSPACE k WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': '1'}  AND durable_writes = true;
> USE k;
> CREATE TABLE k.t (
> id int PRIMARY KEY,
> v int,
> v2 int,
> v3 text
> );
> CREATE INDEX IF NOT EXISTS ON t (v2);
> CREATE INDEX IF NOT EXISTS ON t (v2);
> InvalidRequest: code=2200 [Invalid query] message="Index t_v2_idx_1 is a 
> duplicate of existing index t_v2_idx"
> {code}
> The second {{CREATE INDEX IF NOT EXISTS}} should work fine.
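The broadened check suggested in the comments can be sketched like this; it is a hypothetical Python model (not Cassandra's actual implementation): IF NOT EXISTS treats an index as existing whenever any index on the table covers the same target, regardless of its generated name.

```python
# Hypothetical sketch of the broadened IF NOT EXISTS check (illustrative, not
# Cassandra code): an index that duplicates an existing one in all but name
# makes IF NOT EXISTS a silent no-op instead of an InvalidRequest error.
existing_indexes = {"t_v2_idx": ("t", "v2")}  # name -> (table, column)

def create_index(name, table, column, if_not_exists=False):
    target = (table, column)
    if target in existing_indexes.values():
        if if_not_exists:
            return "no-op"  # duplicate in all but name: succeed silently
        raise ValueError("Index on %s.%s already exists" % (table, column))
    existing_indexes[name] = target
    return "created"

print(create_index("t_v2_idx_1", "t", "v2", if_not_exists=True))  # no-op
print(create_index("t_v3_idx", "t", "v3"))                        # created
```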





[jira] [Commented] (CASSANDRA-11196) tuple_notation_test upgrade tests flaps

2016-03-10 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188952#comment-15188952
 ] 

Sylvain Lebresne commented on CASSANDRA-11196:
--

bq. It seems to me that we should probably run the upgrade tests with all the 
nodes on the same version

+1. Ultimately, we want to be able to test all type of request we have during 
upgrade to make sure we don't break the protocol for any of them, but we 
obviously want to also make sure it's not broken within a given version and 
there is not reason to duplicate the effort. 

> tuple_notation_test upgrade tests flaps
> ---
>
> Key: CASSANDRA-11196
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11196
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Benjamin Lerer
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node2.log, output.txt
>
>
> {{tuple_notation_test}} in the {{upgrade_tests.cql_tests}} module flaps on a 
> number of different upgrade paths. Here are some of the tests that flap:
> {code}
> upgrade_tests/cql_tests.py:TestCQLNodes2RF1_2_1_UpTo_2_2_HEAD.tuple_notation_test
> upgrade_tests/cql_tests.py:TestCQLNodes2RF1_2_1_UpTo_2_2_HEAD.tuple_notation_test
> upgrade_tests/cql_tests.py:TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD.tuple_notation_test
> upgrade_tests/cql_tests.py:TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD.tuple_notation_test
> upgrade_tests/cql_tests.py:TestCQLNodes3RF3_2_2_HEAD_UpTo_Trunk.tuple_notation_test
> upgrade_tests/cql_tests.py:TestCQLNodes3RF3_2_2_HEAD_UpTo_Trunk.tuple_notation_test
> {code}
> Here's an example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/9/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_2_1_UpTo_2_2_HEAD/tuple_notation_test/
> All the failures I've seen fail with this error:
> {code}
>  message="java.lang.IndexOutOfBoundsException">
> {code}





[jira] [Updated] (CASSANDRA-11331) Create Index IF NOT EXISTS throws error when index already exists

2016-03-10 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11331:
-
Assignee: Sam Tunnicliffe

> Create Index IF NOT EXISTS throws error when index already exists
> -
>
> Key: CASSANDRA-11331
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11331
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Philip Thompson
>Assignee: Sam Tunnicliffe
> Fix For: 3.0.x, 3.x
>
>
> While testing with trunk, I see that issuing the following queries throws an 
> InvalidRequest, despite being valid.
> {code}
> CREATE KEYSPACE k WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': '1'}  AND durable_writes = true;
> USE k;
> CREATE TABLE k.t (
> id int PRIMARY KEY,
> v int,
> v2 int,
> v3 text
> );
> CREATE INDEX IF NOT EXISTS ON t (v2);
> CREATE INDEX IF NOT EXISTS ON t (v2);
> InvalidRequest: code=2200 [Invalid query] message="Index t_v2_idx_1 is a 
> duplicate of existing index t_v2_idx"
> {code}
> The second {{CREATE INDEX IF NOT EXISTS}} should work fine.





[jira] [Resolved] (CASSANDRA-11302) Invalid time unit conversion causing write timeouts

2016-03-08 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-11302.
--
   Resolution: Fixed
Fix Version/s: 3.5
   3.0.5
   2.2.6
   2.1.14
Reproduced In: 2.2.5, 2.1.5  (was: 2.1.5, 2.2.5)

Re-run on 3.0 looked much better so committed, thanks.

I'll note that this bug will likely make us drop all droppable messages once 
{{expireMessages}} runs, though that latter method only kicks in once we have 
1024 outstanding messages in the queue, which is why this shouldn't affect a 
"healthy" cluster. It could still be pretty bad during a short burst of 
activity, or when a node gets very slightly behind. 
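The threshold behavior described above can be modeled roughly as follows; the constant and function names are illustrative, not the real OutboundTcpConnection code: expiration only kicks in once the backlog crosses a threshold, so a burst can suddenly drop every droppable message whose timeout has passed.

```python
# Rough model of backlog-triggered expiration (illustrative names, not the
# real code): below the threshold nothing expires; once the backlog is large,
# every message older than the timeout is purged at once.
BACKLOG_PURGE_SIZE = 1024

def expire_messages(queue, now, timeout):
    """queue is a list of (enqueue_time, message) pairs."""
    if len(queue) < BACKLOG_PURGE_SIZE:
        return queue  # small backlog: expiration never runs
    return [(ts, msg) for ts, msg in queue if now - ts <= timeout]
```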

> Invalid time unit conversion causing write timeouts
> ---
>
> Key: CASSANDRA-11302
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11302
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Mike Heffner
>Assignee: Sylvain Lebresne
> Fix For: 2.1.14, 2.2.6, 3.0.5, 3.5
>
> Attachments: nanosec.patch
>
>
> We've been debugging a write timeout that we saw after upgrading from the 
> 2.0.x release line, with our particular workload. Details of that process can 
> be found in this thread:
> https://www.mail-archive.com/user@cassandra.apache.org/msg46064.html
> After bisecting various patch release versions, and then commits, on the 
> 2.1.x release line we've identified version 2.1.5 and this commit as the 
> point where the timeouts first start appearing:
> https://github.com/apache/cassandra/commit/828496492c51d7437b690999205ecc941f41a0a9
> After examining the commit we believe this line was a typo:
> https://github.com/apache/cassandra/commit/828496492c51d7437b690999205ecc941f41a0a9#diff-c7ef124561c4cde1c906f28ad3883a88L467
> as it doesn't properly convert the timeout value from milliseconds to 
> nanoseconds.
> After testing with the attached patch applied, we do not see timeouts on 
> version 2.1.5 nor against 2.2.5 when we bring the patch forward. While we've 
> tested our workload against this and we are fairly confident in the patch, we 
> are not experts with the code base so we would prefer additional review.
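The class of bug described in the report can be illustrated with a short sketch (plain Python, not the actual patched code): comparing a nanosecond clock delta against a millisecond timeout makes the effective timeout a million times too small, so writes appear to time out almost immediately.

```python
# Sketch of a missing ms -> ns conversion (illustrative, not the actual
# patch): the buggy check compares nanoseconds elapsed against a value
# expressed in milliseconds.
TIMEOUT_MS = 2000  # a 2-second write timeout

def timed_out_buggy(elapsed_nanos):
    return elapsed_nanos > TIMEOUT_MS  # missing ms -> ns conversion

def timed_out_fixed(elapsed_nanos):
    return elapsed_nanos > TIMEOUT_MS * 1_000_000  # convert ms to ns

elapsed = 5_000_000  # only 5 ms have elapsed, well under the 2 s timeout
print(timed_out_buggy(elapsed))  # True: spurious timeout
print(timed_out_fixed(elapsed))  # False
```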





[jira] [Updated] (CASSANDRA-11304) Stack overflow when querying 2ndary index

2016-03-08 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11304:
-
Reviewer: Stefania

> Stack overflow when querying 2ndary index
> -
>
> Key: CASSANDRA-11304
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11304
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, CQL
> Environment: 3 Node cluster / Ubuntu 14.04 / Cassandra 3.0.3
>Reporter: Job Tiel Groenestege
>Assignee: Sam Tunnicliffe
> Fix For: 3.0.5
>
> Attachments: 11304-3.0.txt
>
>
> When reading data through a secondary index _select * from tableName where 
> secIndexField = 'foo'_  (from a Java application), I get the following 
> stacktrace on all nodes, after which the query read fails. It happens 
> repeatedly when I rerun the same query:
> {quote}
> WARN  [SharedPool-Worker-8] 2016-03-04 13:26:28,041 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-8,5,main]: {}
> java.lang.StackOverflowError: null
> at 
> org.apache.cassandra.db.rows.BTreeRow$Builder.build(BTreeRow.java:653) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeRowBody(UnfilteredSerializer.java:436)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.UnfilteredDeserializer$CurrentDeserializer.readNext(UnfilteredDeserializer.java:211)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator$ForwardIndexedReader.computeNext(SSTableIterator.java:266)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator$ForwardReader.hasNextInternal(SSTableIterator.java:153)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator$Reader.hasNext(AbstractSSTableIterator.java:340)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator.hasNext(AbstractSSTableIterator.java:219)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator.hasNext(SSTableIterator.java:32)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:369)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:189)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:158)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:428)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:288)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:108) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.prepareNext(CompositesSearcher.java:128)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.prepareNext(CompositesSearcher.java:133)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.prepareNext(CompositesSearcher.java:133)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> {quote}





[jira] [Commented] (CASSANDRA-11302) Invalid time unit conversion causing write timeouts

2016-03-08 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15184979#comment-15184979
 ] 

Sylvain Lebresne commented on CASSANDRA-11302:
--

No reason not to get that fix in quickly, so I pushed a fix that uses the 
{{isTimedOut}} method instead:
|| patch || utests || dtests ||
| [2.1|https://github.com/pcmanus/cassandra/commits/11302-2.1] | 
[utests|http://cassci.datastax.com/job/pcmanus-11302-2.1-testall] | 
[dtests|http://cassci.datastax.com/job/pcmanus-11302-2.1-dtest] |
| [2.2|https://github.com/pcmanus/cassandra/commits/11302-2.2] | 
[utests|http://cassci.datastax.com/job/pcmanus-11302-2.2-testall] | 
[dtests|http://cassci.datastax.com/job/pcmanus-11302-2.2-dtest] |
| [3.0|https://github.com/pcmanus/cassandra/commits/11302-3.0] | 
[utests|http://cassci.datastax.com/job/pcmanus-11302-3.0-testall] | 
[dtests|http://cassci.datastax.com/job/pcmanus-11302-3.0-dtest] |

The tests on 2.1 and 2.2 are on par with their main branches; the 3.0 runs had 
a few additional failures which are probably unrelated, but I've re-started them 
to check. [~aweisberg], mind having a look (all the versions are the same; they 
merge up from 2.1 without conflict)?
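For illustration, the class of bug being fixed can be sketched as follows. This is a hypothetical, self-contained example (not the actual Cassandra code or method names): comparing a nanosecond clock against an unconverted millisecond timeout makes the timeout appear to expire roughly a million times too early.

```java
import java.util.concurrent.TimeUnit;

public class TimeoutSketch {
    // Buggy: timeoutMillis is never converted to nanoseconds before the
    // comparison, so any elapsed time over ~2 microseconds "times out".
    static boolean isTimedOutBuggy(long startNanos, long timeoutMillis) {
        return System.nanoTime() - startNanos > timeoutMillis;
    }

    // Fixed: convert the millisecond timeout to nanoseconds first.
    static boolean isTimedOutFixed(long startNanos, long timeoutMillis) {
        return System.nanoTime() - startNanos > TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
    }

    public static void main(String[] args) {
        // Pretend 1 ms has elapsed against a 2000 ms write timeout.
        long start = System.nanoTime() - TimeUnit.MILLISECONDS.toNanos(1);
        System.out.println(isTimedOutBuggy(start, 2000)); // true: spurious timeout
        System.out.println(isTimedOutFixed(start, 2000)); // false: well within budget
    }
}
```

Comparing raw {{System.nanoTime()}} deltas against a millisecond value is exactly the kind of mismatch that only shows up under load, which matches the symptoms reported here.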


> Invalid time unit conversion causing write timeouts
> ---
>
> Key: CASSANDRA-11302
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11302
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Mike Heffner
> Attachments: nanosec.patch
>
>
> We've been debugging a write timeout that we saw after upgrading from the 
> 2.0.x release line, with our particular workload. Details of that process can 
> be found in this thread:
> https://www.mail-archive.com/user@cassandra.apache.org/msg46064.html
> After bisecting various patch release versions, and then commits, on the 
> 2.1.x release line we've identified version 2.1.5 and this commit as the 
> point where the timeouts first start appearing:
> https://github.com/apache/cassandra/commit/828496492c51d7437b690999205ecc941f41a0a9
> After examining the commit, we believe this line contains a typo:
> https://github.com/apache/cassandra/commit/828496492c51d7437b690999205ecc941f41a0a9#diff-c7ef124561c4cde1c906f28ad3883a88L467
> as it doesn't properly convert the timeout value from milliseconds to 
> nanoseconds.
> After testing with the attached patch applied, we do not see timeouts on 
> version 2.1.5 nor against 2.2.5 when we bring the patch forward. While we've 
> tested our workload against this and we are fairly confident in the patch, we 
> are not experts with the code base so we would prefer additional review.





[jira] [Resolved] (CASSANDRA-11293) NPE when using CQLSSTableWriter

2016-03-08 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-11293.
--
Resolution: Not A Problem

bq. No assertions are made anywhere I can find in the docs about 
CQLSSTableWriter's thread safety or lack thereof

I agree and I just added documentation regarding it to the javadoc.

bq. Is it really possible to use multiple CQLSSTableWriters writing to the same 
directory at the same time? I'd be astonished if that were so.

Then be astonished. You can indeed use multiple instances on the same 
table/same directory, as long as you don't use a given instance from multiple 
threads.

Anyway, sorry for the lack of documentation, which is now fixed, but I'm 
closing this since there isn't an actual bug here (outside of the lack of clear 
documentation, I guess).

> NPE when using CQLSSTableWriter
> ---
>
> Key: CASSANDRA-11293
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11293
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: C* 3.3, C* trunk
>Reporter: Will Hayworth
>
> Hi all!
> I'm trying to use CQLSSTableWriter to load a bunch of historical data into 
> my cluster and I'm getting NullPointerExceptions consistently after having 
> written a few million rows (anywhere from 0.5 to 1.5 GB of data).
> {code}
> java.lang.NullPointerException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:422) at 
> java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:598)
>  at java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:677) 
> at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:735) at 
> java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160) 
> at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:174)
>  at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233) at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418) at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:583)
>  at 
> com.atlassian.engagementengine.segmentation.helenus.Daemon.main(Daemon.java:24)
> Caused by: java.lang.NullPointerException at 
> org.apache.cassandra.db.ClusteringComparator.compare(ClusteringComparator.java:126)
>  at 
> org.apache.cassandra.db.ClusteringComparator.compare(ClusteringComparator.java:44)
>  at
> java.util.TimSort.binarySort(TimSort.java:296) at
> java.util.TimSort.sort(TimSort.java:239) at
> java.util.Arrays.sort(Arrays.java:1512) at
> org.apache.cassandra.utils.btree.BTree$Builder.sort(BTree.java:1027) at 
> org.apache.cassandra.utils.btree.BTree$Builder.autoEnforce(BTree.java:1036) 
> at org.apache.cassandra.utils.btree.BTree$Builder.build(BTree.java:1075) at 
> org.apache.cassandra.db.partitions.PartitionUpdate.build(PartitionUpdate.java:572)
>  at 
> org.apache.cassandra.db.partitions.PartitionUpdate.maybeBuild(PartitionUpdate.java:562)
>  at 
> org.apache.cassandra.db.partitions.PartitionUpdate.holder(PartitionUpdate.java:370)
>  at 
> org.apache.cassandra.db.partitions.AbstractBTreePartition.unfilteredIterator(AbstractBTreePartition.java:177)
>  at 
> org.apache.cassandra.db.partitions.AbstractBTreePartition.unfilteredIterator(AbstractBTreePartition.java:172)
>  at 
> org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter$DiskWriter.run(SSTableSimpleUnsortedWriter.java:209)
> {code}
> This may be a red herring, but I started encountering this when I 
> parallelized writes. (I wasn't aware that doing so was safe until I saw 
> CASSANDRA-7463; I Googled in vain for a while before that.) I'm also 
> definitely not passing any nulls in my {{addRow}} calls.





[jira] [Commented] (CASSANDRA-11313) cassandra.in.sh sigar path isn't used.

2016-03-08 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15184720#comment-15184720
 ] 

Sylvain Lebresne commented on CASSANDRA-11313:
--

I don't know the first thing about sigar, but I'll just note that we also add 
this in {{cassandra.in.bat}} (albeit without the {{:}} problem), so if it turns 
out not to be needed for sigar to work, we might want to remove it there too.

> cassandra.in.sh sigar path isn't used.
> --
>
> Key: CASSANDRA-11313
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11313
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Jeremiah Jordan
>Assignee: T Jake Luciani
>Priority: Trivial
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> cassandra.in.sh has:
> {code}
> # Added sigar-bin to the java.library.path CASSANDRA-7838
> JAVA_OPTS="$JAVA_OPTS:-Djava.library.path=$CASSANDRA_HOME/lib/sigar-bin"
> {code}
> At the end.  It is never used from there. If it were, there would be an error 
> because it does "$JAVA_OPTS:" rather than "$JAVA_OPTS ".
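The effect of the misplaced {{:}} can be sketched in isolation (hypothetical values; not the real cassandra.in.sh): without braces, {{"$JAVA_OPTS:-D..."}} is not a default-value expansion, it just glues the {{-D}} flag onto the previous option with a literal colon, producing one malformed token.

```shell
CASSANDRA_HOME=/opt/cassandra
JAVA_OPTS="-Xms1G"

# Buggy form (as shipped): ':' fuses the two options into one bad token.
BUGGY="$JAVA_OPTS:-Djava.library.path=$CASSANDRA_HOME/lib/sigar-bin"
echo "$BUGGY"

# Fixed form: options separated by a space.
FIXED="$JAVA_OPTS -Djava.library.path=$CASSANDRA_HOME/lib/sigar-bin"
echo "$FIXED"
```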





[jira] [Updated] (CASSANDRA-11313) cassandra.in.sh sigar path isn't used.

2016-03-08 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11313:
-
Assignee: T Jake Luciani

> cassandra.in.sh sigar path isn't used.
> --
>
> Key: CASSANDRA-11313
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11313
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Jeremiah Jordan
>Assignee: T Jake Luciani
>Priority: Trivial
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> cassandra.in.sh has:
> {code}
> # Added sigar-bin to the java.library.path CASSANDRA-7838
> JAVA_OPTS="$JAVA_OPTS:-Djava.library.path=$CASSANDRA_HOME/lib/sigar-bin"
> {code}
> At the end.  It is never used from there. If it were, there would be an error 
> because it does "$JAVA_OPTS:" rather than "$JAVA_OPTS ".





[jira] [Updated] (CASSANDRA-8890) Enhance cassandra-env.sh to handle Java version output in case of OpenJDK icedtea"

2016-03-08 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8890:

Fix Version/s: (was: 3.5)
   3.6

> Enhance cassandra-env.sh to handle Java version output in case of OpenJDK 
> icedtea"
> --
>
> Key: CASSANDRA-8890
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8890
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
> Environment: Red Hat Enterprise Linux Server release 6.4 (Santiago)
>Reporter: Sumod Pawgi
>Assignee: Brandon Williams
>Priority: Minor
>  Labels: conf, icedtea
> Fix For: 3.6
>
> Attachments: 8890-v2.txt, trunk-8890.patch, trunk-8890.txt
>
>
> Where observed - 
> Cassandra node has OpenJDK - 
> java version "1.7.0_09-icedtea"
> In some situations, external agents trying to monitor a C* cluster need to 
> run the cassandra -v command to determine the Cassandra version and expect a 
> numerical output, e.g. java version "1.7.0_75", as in the case of Oracle JDK. 
> But if the cluster has OpenJDK IcedTea installed, this condition is not 
> satisfied and the agents will not work correctly, as the output from 
> "cassandra -v" is: 
> /opt/apache/cassandra/bin/../conf/cassandra-env.sh: line 102: [: 09-icedtea: 
> integer expression expected
> Cause - 
> The line which is causing this behavior is -
> jvmver=`echo "$java_ver_output" | grep '[openjdk|java] version' | awk -F'"' 
> 'NR==1 {print $2}'`
> Suggested enhancement -
> If we change the line to -
>  jvmver=`echo "$java_ver_output" | grep '[openjdk|java] version' | awk -F'"' 
> 'NR==1 {print $2}' | awk 'BEGIN {FS="-"};{print $1}'`,
> it will give $jvmver as 1.7.0_09 for the above case. 
> Can we add this enhancement to cassandra-env.sh? I would like to add it 
> myself and submit it for review, but I am not familiar with the C* check-in 
> process. There might be better ways to do this, but I thought this was the 
> simplest, and as the addition is at the end of the line, it will be easy to 
> revert if needed.
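The suggested change can be exercised in isolation against the IcedTea-style version string from the report (a hypothetical harness, not the real cassandra-env.sh):

```shell
java_ver_output='java version "1.7.0_09-icedtea"'

# Original extraction: keeps the "-icedtea" suffix, which later breaks the
# integer comparison in cassandra-env.sh.
jvmver=`echo "$java_ver_output" | grep '[openjdk|java] version' | awk -F'"' 'NR==1 {print $2}'`
echo "$jvmver"   # 1.7.0_09-icedtea

# Suggested enhancement: additionally split on '-' and keep the first field.
jvmver=`echo "$java_ver_output" | grep '[openjdk|java] version' | awk -F'"' 'NR==1 {print $2}' | awk 'BEGIN {FS="-"};{print $1}'`
echo "$jvmver"   # 1.7.0_09
```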





[jira] [Updated] (CASSANDRA-8890) Enhance cassandra-env.sh to handle Java version output in case of OpenJDK icedtea"

2016-03-08 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8890:

   Resolution: Fixed
Fix Version/s: (was: 3.x)
   3.5
Reproduced In: 2.2.0 beta 1, 2.1.2  (was: 2.1.2, 2.2.0 beta 1)
   Status: Resolved  (was: Ready to Commit)

> Enhance cassandra-env.sh to handle Java version output in case of OpenJDK 
> icedtea"
> --
>
> Key: CASSANDRA-8890
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8890
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
> Environment: Red Hat Enterprise Linux Server release 6.4 (Santiago)
>Reporter: Sumod Pawgi
>Assignee: Brandon Williams
>Priority: Minor
>  Labels: conf, icedtea
> Fix For: 3.5
>
> Attachments: 8890-v2.txt, trunk-8890.patch, trunk-8890.txt
>
>
> Where observed - 
> Cassandra node has OpenJDK - 
> java version "1.7.0_09-icedtea"
> In some situations, external agents trying to monitor a C* cluster need to 
> run the cassandra -v command to determine the Cassandra version and expect a 
> numerical output, e.g. java version "1.7.0_75", as in the case of Oracle JDK. 
> But if the cluster has OpenJDK IcedTea installed, this condition is not 
> satisfied and the agents will not work correctly, as the output from 
> "cassandra -v" is: 
> /opt/apache/cassandra/bin/../conf/cassandra-env.sh: line 102: [: 09-icedtea: 
> integer expression expected
> Cause - 
> The line which is causing this behavior is -
> jvmver=`echo "$java_ver_output" | grep '[openjdk|java] version' | awk -F'"' 
> 'NR==1 {print $2}'`
> Suggested enhancement -
> If we change the line to -
>  jvmver=`echo "$java_ver_output" | grep '[openjdk|java] version' | awk -F'"' 
> 'NR==1 {print $2}' | awk 'BEGIN {FS="-"};{print $1}'`,
> it will give $jvmver as 1.7.0_09 for the above case. 
> Can we add this enhancement to cassandra-env.sh? I would like to add it 
> myself and submit it for review, but I am not familiar with the C* check-in 
> process. There might be better ways to do this, but I thought this was the 
> simplest, and as the addition is at the end of the line, it will be easy to 
> revert if needed.





[jira] [Updated] (CASSANDRA-11053) COPY FROM on large datasets: fix progress report and debug performance

2016-03-08 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11053:
-
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 2.2.x)
   (was: 2.1.x)
   (was: 3.x)
   3.5
   3.0.5
   2.2.6
   2.1.14
   Status: Resolved  (was: Ready to Commit)

Committed, thanks.

> COPY FROM on large datasets: fix progress report and debug performance
> --
>
> Key: CASSANDRA-11053
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11053
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
>  Labels: doc-impacting
> Fix For: 2.1.14, 2.2.6, 3.0.5, 3.5
>
> Attachments: copy_from_large_benchmark.txt, 
> copy_from_large_benchmark_2.txt, parent_profile.txt, parent_profile_2.txt, 
> worker_profiles.txt, worker_profiles_2.txt
>
>
> h5. Description
> Running COPY FROM on a large dataset (20G divided into 20M records) revealed 
> two issues:
> * The progress report is incorrect, it is very slow until almost the end of 
> the test at which point it catches up extremely quickly.
> * The performance in rows per second is similar to running smaller tests with 
> a smaller cluster locally (approx. 35,000 rows per second). As a comparison, 
> cassandra-stress manages 50,000 rows per second under the same set-up, making 
> it roughly 1.5 times faster. 
> See attached file _copy_from_large_benchmark.txt_ for the benchmark details.
> h5. Doc-impacting changes to COPY FROM options
> * A new option was added: PREPAREDSTATEMENTS - it indicates if prepared 
> statements should be used; it defaults to true.
> * The default value of CHUNKSIZE changed from 1000 to 5000.
> * The default value of MINBATCHSIZE changed from 2 to 10.
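As a hypothetical usage sketch of the options listed above (keyspace, table, and file names invented; option names and defaults as stated in the description):

```
COPY myks.mytable FROM 'data.csv' WITH PREPAREDSTATEMENTS = true AND CHUNKSIZE = 5000 AND MINBATCHSIZE = 10;
```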





[jira] [Resolved] (CASSANDRA-7987) Better defaults for CQL tables with clustering keys

2016-03-07 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-7987.
-
Resolution: Later

I'm skeptical that we can easily find a good criterion that guarantees LCS is 
a better default. I'm also not sure we currently have the resources to properly 
evaluate such a criterion even if we had good candidates, and since this issue 
hasn't been updated in a while, I'm closing it as 'Later'.

> Better defaults for CQL tables with clustering keys
> ---
>
> Key: CASSANDRA-7987
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7987
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
>
> We currently default to STCS regardless.  If a user creates a table with 
> clustering keys (maybe specifically types with likely high cardinality?)  We 
> should set compaction to LCS.





[jira] [Updated] (CASSANDRA-10331) Establish and implement canonical bulk reading workload(s)

2016-03-07 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10331:
-
Reviewer: T Jake Luciani

> Establish and implement canonical bulk reading workload(s)
> --
>
> Key: CASSANDRA-10331
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10331
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Ariel Weisberg
>Assignee: Stefania
> Fix For: 3.x
>
>
> Implement a client, use stress, or extend stress to a bulk reading workload 
> that is indicative of the performance we are trying to improve.





[jira] [Commented] (CASSANDRA-11293) NPE when using CQLSSTableWriter

2016-03-07 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15182743#comment-15182743
 ] 

Sylvain Lebresne commented on CASSANDRA-11293:
--

Are you using a single instance of {{CQLSSTableWriter}} from multiple threads? 
If so, that's wrong: {{CQLSSTableWriter}} is not thread-safe (I'll add 
something to the javadoc in that regard, though, since that's arguably not 
clearly stated). In particular, CASSANDRA-7463 wasn't saying that 
{{CQLSSTableWriter}} should be thread-safe; it was saying that you should be 
able to use multiple instances of {{CQLSSTableWriter}} (each one still used 
from a single thread) within a single JVM.

> NPE when using CQLSSTableWriter
> ---
>
> Key: CASSANDRA-11293
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11293
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: C* 3.3, C* trunk
>Reporter: Will Hayworth
>
> Hi all!
> I'm trying to use CQLSSTableWriter to load a bunch of historical data into 
> my cluster and I'm getting NullPointerExceptions consistently after having 
> written a few million rows (anywhere from 0.5 to 1.5 GB of data).
> {code}
> java.lang.NullPointerException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:422) at 
> java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:598)
>  at java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:677) 
> at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:735) at 
> java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160) 
> at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:174)
>  at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233) at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418) at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:583)
>  at 
> com.atlassian.engagementengine.segmentation.helenus.Daemon.main(Daemon.java:24)
> Caused by: java.lang.NullPointerException at 
> org.apache.cassandra.db.ClusteringComparator.compare(ClusteringComparator.java:126)
>  at 
> org.apache.cassandra.db.ClusteringComparator.compare(ClusteringComparator.java:44)
>  at
> java.util.TimSort.binarySort(TimSort.java:296) at
> java.util.TimSort.sort(TimSort.java:239) at
> java.util.Arrays.sort(Arrays.java:1512) at
> org.apache.cassandra.utils.btree.BTree$Builder.sort(BTree.java:1027) at 
> org.apache.cassandra.utils.btree.BTree$Builder.autoEnforce(BTree.java:1036) 
> at org.apache.cassandra.utils.btree.BTree$Builder.build(BTree.java:1075) at 
> org.apache.cassandra.db.partitions.PartitionUpdate.build(PartitionUpdate.java:572)
>  at 
> org.apache.cassandra.db.partitions.PartitionUpdate.maybeBuild(PartitionUpdate.java:562)
>  at 
> org.apache.cassandra.db.partitions.PartitionUpdate.holder(PartitionUpdate.java:370)
>  at 
> org.apache.cassandra.db.partitions.AbstractBTreePartition.unfilteredIterator(AbstractBTreePartition.java:177)
>  at 
> org.apache.cassandra.db.partitions.AbstractBTreePartition.unfilteredIterator(AbstractBTreePartition.java:172)
>  at 
> org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter$DiskWriter.run(SSTableSimpleUnsortedWriter.java:209)
> {code}
> This may be a red herring, but I started encountering this when I 
> parallelized writes. (I wasn't aware that doing so was safe until I saw 
> CASSANDRA-7463; I Googled in vain for a while before that.) I'm also 
> definitely not passing any nulls in my {{addRow}} calls.





[jira] [Updated] (CASSANDRA-11294) Provide better documentation for caching

2016-03-04 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11294:
-
Issue Type: Improvement  (was: Bug)

> Provide better documentation for caching
> 
>
> Key: CASSANDRA-11294
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11294
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benjamin Lerer
>Priority: Minor
>
> I realized that the way caching works in C* is not properly documented. For 
> example the row cache will only be populated for single partition or 
> multi-partitions queries and not range queries. It will also never be 
> populated by secondary index queries even if they are single partition or 
> multi-partitions queries.
> The documentation should be improved to allow people to fully understand how 
> caching works.
>  





[jira] [Updated] (CASSANDRA-11295) Make custom filtering more extensible via custom classes

2016-03-04 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11295:
-
Reviewer: Sylvain Lebresne

> Make custom filtering more extensible via custom classes 
> -
>
> Key: CASSANDRA-11295
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11295
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 3.x
>
>
> At the moment, the implementation of {{RowFilter.CustomExpression}} is 
> tightly bound to the syntax designed to support non-CQL search syntax for 
> custom 2i implementations. It might be interesting to decouple the two things 
> by making the custom expression implementation and serialization a bit more 
> pluggable. This would allow users to add their own custom expression 
> implementations to experiment with custom filtering strategies without having 
> to patch the C* source. As a minimally invasive first step, custom 
> expressions could be added programmatically via {{QueryHandler}}. Further 
> down the line, if this proves useful and we can figure out some reasonable 
> syntax we could think about adding the capability in CQL in a separate 
> ticket. 





[jira] [Commented] (CASSANDRA-11304) Stack overflow when querying 2ndary index

2016-03-04 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179983#comment-15179983
 ] 

Sylvain Lebresne commented on CASSANDRA-11304:
--

For info, it seems the recursive call to {{prepareNext}} that is the cause of 
this has already been fixed on trunk by CASSANDRA-10750 (which somehow doesn't 
have a proper fix version), but not on 3.0. We might just want to backport that 
change here.
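The shape of that fix (replacing self-recursion with a loop) can be sketched generically. This is a hypothetical illustration with invented names, not the actual {{CompositesSearcher}} code: a method that recurses once per skipped element adds a stack frame per miss and can eventually overflow, while the loop form runs in constant stack depth.

```java
import java.util.Arrays;
import java.util.Iterator;

public class RecursionSketch {
    // Recursive style, analogous to the pre-fix prepareNext(): each skipped
    // element adds a stack frame, so deep inputs risk StackOverflowError.
    static Integer firstPositiveRecursive(Iterator<Integer> it) {
        if (!it.hasNext()) return null;
        int v = it.next();
        if (v > 0) return v;
        return firstPositiveRecursive(it); // one more frame per miss
    }

    // Iterative style, the shape of the CASSANDRA-10750 rewrite:
    // same result, constant stack depth.
    static Integer firstPositiveLoop(Iterator<Integer> it) {
        while (it.hasNext()) {
            int v = it.next();
            if (v > 0) return v;
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(firstPositiveLoop(Arrays.asList(0, 0, 7).iterator()));      // 7
        System.out.println(firstPositiveRecursive(Arrays.asList(0, 0, 7).iterator())); // 7
    }
}
```

The JVM does not perform tail-call optimization, which is why this rewrite matters in hot or data-dependent paths like the one in this stack trace.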

> Stack overflow when querying 2ndary index
> -
>
> Key: CASSANDRA-11304
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11304
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, CQL
> Environment: 3 Node cluster / Ubuntu 14.04 / Cassandra 3.0.3
>Reporter: Job Tiel Groenestege
>Assignee: Sam Tunnicliffe
>
> When reading data through a secondary index _select * from tableName where 
> secIndexField = 'foo'_  (from a Java application), I get the following 
> stacktrace on all nodes, after which the query read fails. It happens 
> repeatedly when I rerun the same query:
> {quote}
> WARN  [SharedPool-Worker-8] 2016-03-04 13:26:28,041 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-8,5,main]: {}
> java.lang.StackOverflowError: null
> at 
> org.apache.cassandra.db.rows.BTreeRow$Builder.build(BTreeRow.java:653) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeRowBody(UnfilteredSerializer.java:436)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.UnfilteredDeserializer$CurrentDeserializer.readNext(UnfilteredDeserializer.java:211)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator$ForwardIndexedReader.computeNext(SSTableIterator.java:266)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator$ForwardReader.hasNextInternal(SSTableIterator.java:153)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator$Reader.hasNext(AbstractSSTableIterator.java:340)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator.hasNext(AbstractSSTableIterator.java:219)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator.hasNext(SSTableIterator.java:32)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:369)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:189)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:158)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:428)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:288)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:108) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.prepareNext(CompositesSearcher.java:128)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.prepareNext(CompositesSearcher.java:133)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.prepareNext(CompositesSearcher.java:133)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10750) Minor code improvements

2016-03-04 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179981#comment-15179981
 ] 

Sylvain Lebresne commented on CASSANDRA-10750:
--

Can you please dig up which release this made it into, for history's sake?

> Minor code improvements
> ---
>
> Key: CASSANDRA-10750
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10750
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
>
> Went through several IDE inspections and found some places in the code that 
> could be improved. These are just minor improvements, not bug fixes (except 
> one minor "theoretical" thing).
> The [branch on github against 
> trunk|https://github.com/snazy/cassandra/tree/10750-code-opts-trunk] contains 
> a series of commits:
> * simplify Mutation.apply to remove the casts
> * "minor code improvements" just replaces some expressions that are 
> effectively constant
> * remove unused assignments (probably just cosmetic)
> * collapse identical if-branches (probably just cosmetic)
> * empty array constants
> * fix printf usage (could potentially raise an exception in printf)
> * replace tail-recursion in some critical sections (as the JVM cannot 
> optimize that AFAIK)
> * remove methods identical to their super methods (probably just cosmetic)
> [cassci results 
> here|http://cassci.datastax.com/view/Dev/view/snazy/search/?q=snazy-10750-]
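As an illustration of the tail-recursion item in the list above (a sketch of the general technique, not the actual patch): HotSpot performs no tail-call elimination, so a tail-recursive helper on a hot path keeps adding stack frames, while the equivalent loop runs in constant stack depth with the same result.

```java
public class TailRecursion {
    // Tail-recursive form: each call adds a stack frame, since the JVM
    // does not perform tail-call elimination.
    static long sumRecursive(long n, long acc) {
        if (n == 0)
            return acc;
        return sumRecursive(n - 1, acc + n);
    }

    // Equivalent iterative form: constant stack depth, same result.
    static long sumIterative(long n) {
        long acc = 0;
        while (n > 0)
            acc += n--;
        return acc;
    }
}
```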





[jira] [Updated] (CASSANDRA-11304) Stack overflow when querying 2ndary index

2016-03-04 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11304:
-
Summary: Stack overflow when querying 2ndary index  (was: Query data 
through a secondary index)

> Stack overflow when querying 2ndary index
> -
>
> Key: CASSANDRA-11304
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11304
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, CQL
> Environment: 3 Node cluster / Ubuntu 14.04 / Cassandra 3.0.3
>Reporter: Job Tiel Groenestege
>
> When reading data through a secondary index (_select * from tableName where 
> secIndexField = 'foo'_, from a Java application) I get the following 
> stack trace on all nodes, after which the read fails. It happens repeatably 
> when I rerun the same query:
> {quote}
> WARN  [SharedPool-Worker-8] 2016-03-04 13:26:28,041 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-8,5,main]: {}
> java.lang.StackOverflowError: null
> at 
> org.apache.cassandra.db.rows.BTreeRow$Builder.build(BTreeRow.java:653) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeRowBody(UnfilteredSerializer.java:436)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.UnfilteredDeserializer$CurrentDeserializer.readNext(UnfilteredDeserializer.java:211)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator$ForwardIndexedReader.computeNext(SSTableIterator.java:266)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator$ForwardReader.hasNextInternal(SSTableIterator.java:153)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator$Reader.hasNext(AbstractSSTableIterator.java:340)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator.hasNext(AbstractSSTableIterator.java:219)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator.hasNext(SSTableIterator.java:32)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:369)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:189)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:158)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:428)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:288)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:108) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.prepareNext(CompositesSearcher.java:128)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.prepareNext(CompositesSearcher.java:133)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.prepareNext(CompositesSearcher.java:133)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> {quote}





[jira] [Updated] (CASSANDRA-11304) Stack overflow when querying 2ndary index

2016-03-04 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11304:
-
Assignee: Sam Tunnicliffe

> Stack overflow when querying 2ndary index
> -
>
> Key: CASSANDRA-11304
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11304
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, CQL
> Environment: 3 Node cluster / Ubuntu 14.04 / Cassandra 3.0.3
>Reporter: Job Tiel Groenestege
>Assignee: Sam Tunnicliffe
>
> When reading data through a secondary index (_select * from tableName where 
> secIndexField = 'foo'_, from a Java application) I get the following 
> stack trace on all nodes, after which the read fails. It happens repeatably 
> when I rerun the same query:
> {quote}
> WARN  [SharedPool-Worker-8] 2016-03-04 13:26:28,041 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-8,5,main]: {}
> java.lang.StackOverflowError: null
> at 
> org.apache.cassandra.db.rows.BTreeRow$Builder.build(BTreeRow.java:653) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeRowBody(UnfilteredSerializer.java:436)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.UnfilteredDeserializer$CurrentDeserializer.readNext(UnfilteredDeserializer.java:211)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator$ForwardIndexedReader.computeNext(SSTableIterator.java:266)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator$ForwardReader.hasNextInternal(SSTableIterator.java:153)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator$Reader.hasNext(AbstractSSTableIterator.java:340)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator.hasNext(AbstractSSTableIterator.java:219)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator.hasNext(SSTableIterator.java:32)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:369)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:189)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:158)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:428)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:288)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:108) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.prepareNext(CompositesSearcher.java:128)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.prepareNext(CompositesSearcher.java:133)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.prepareNext(CompositesSearcher.java:133)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> {quote}





[jira] [Commented] (CASSANDRA-11302) Invalid time unit conversion causing write timeouts

2016-03-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179524#comment-15179524
 ] 

Sylvain Lebresne commented on CASSANDRA-11302:
--

Definitely looks fishy, but since you're the author, can you have a quick look 
[~aweisberg]?

> Invalid time unit conversion causing write timeouts
> ---
>
> Key: CASSANDRA-11302
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11302
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Mike Heffner
> Attachments: nanosec.patch
>
>
> We've been debugging a write timeout that we saw after upgrading from the 
> 2.0.x release line, with our particular workload. Details of that process can 
> be found in this thread:
> https://www.mail-archive.com/user@cassandra.apache.org/msg46064.html
> After bisecting various patch release versions, and then commits, on the 
> 2.1.x release line we've identified version 2.1.5 and this commit as the 
> point where the timeouts first start appearing:
> https://github.com/apache/cassandra/commit/828496492c51d7437b690999205ecc941f41a0a9
> After examining the commit we believe this line was a typo:
> https://github.com/apache/cassandra/commit/828496492c51d7437b690999205ecc941f41a0a9#diff-c7ef124561c4cde1c906f28ad3883a88L467
> as it doesn't properly convert the timeout value from milliseconds to 
> nanoseconds.
> After testing with the attached patch applied, we do not see timeouts on 
> version 2.1.5 nor against 2.2.5 when we bring the patch forward. While we've 
> tested our workload against this and we are fairly confident in the patch, we 
> are not experts with the code base so we would prefer additional review.
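The class of bug described above can be sketched as follows (illustrative code with hypothetical names, not the actual Cassandra patch): a timeout configured in milliseconds must be converted before being compared against a nanosecond clock, otherwise the effective timeout is a million times shorter than intended.

```java
import java.util.concurrent.TimeUnit;

public class TimeoutCheck {
    // Hypothetical helper illustrating the suspected typo: the timeout is
    // configured in milliseconds, but elapsed time is measured in nanoseconds
    // (as with System.nanoTime()), so the bound must be converted.
    static boolean isExpired(long startNanos, long nowNanos, long timeoutMillis) {
        // The buggy variant would compare against timeoutMillis directly,
        // making a 2000 ms timeout behave like 2000 ns.
        long timeoutNanos = TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
        return nowNanos - startNanos > timeoutNanos;
    }
}
```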





[jira] [Updated] (CASSANDRA-11302) Invalid time unit conversion causing write timeouts

2016-03-03 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11302:
-
Reproduced In: 2.2.5, 2.1.5  (was: 2.1.5, 2.2.5)
 Reviewer: Ariel Weisberg

> Invalid time unit conversion causing write timeouts
> ---
>
> Key: CASSANDRA-11302
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11302
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Mike Heffner
> Attachments: nanosec.patch
>
>
> We've been debugging a write timeout that we saw after upgrading from the 
> 2.0.x release line, with our particular workload. Details of that process can 
> be found in this thread:
> https://www.mail-archive.com/user@cassandra.apache.org/msg46064.html
> After bisecting various patch release versions, and then commits, on the 
> 2.1.x release line we've identified version 2.1.5 and this commit as the 
> point where the timeouts first start appearing:
> https://github.com/apache/cassandra/commit/828496492c51d7437b690999205ecc941f41a0a9
> After examining the commit we believe this line was a typo:
> https://github.com/apache/cassandra/commit/828496492c51d7437b690999205ecc941f41a0a9#diff-c7ef124561c4cde1c906f28ad3883a88L467
> as it doesn't properly convert the timeout value from milliseconds to 
> nanoseconds.
> After testing with the attached patch applied, we do not see timeouts on 
> version 2.1.5 nor against 2.2.5 when we bring the patch forward. While we've 
> tested our workload against this and we are fairly confident in the patch, we 
> are not experts with the code base so we would prefer additional review.





[jira] [Commented] (CASSANDRA-9161) Add random interleaving for flush/compaction when running CQL unit tests

2016-03-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15177651#comment-15177651
 ] 

Sylvain Lebresne commented on CASSANDRA-9161:
-

bq. the problems we faced with upgrade tests due to the randomness of how the 
data is distributed

Yeah, but that's a bad example: that randomness was not properly controlled; it 
made things hard to reproduce, and that was the problem. But this is not 
what I want to do here: this will _not_ make tests non-reproducible at all.

In testing, sometimes (always really, but that's a different subject) the state 
space is just too big to be systematically explored on every run. Because of 
that, you do your best at exploring the most meaningful part of the space (and 
I'm not saying we can't improve on that part btw, we can and we should), but 
there is still space you can't explore. Hoping you'll be so good at finding 
the meaningful subset of states to test that no bug will lurk in the 
remaining space is just wishful thinking. So this is just about getting 
incrementally better coverage of the full space by using some new random state 
on every run _for the parts we can't reasonably explore systematically_ (and 
I'm happy to discuss which parts can reasonably be explored systematically and 
which aren't btw).

bq. For flushing, the main problem that I have seen is that only the read or 
write path for memtables was tested not the one for SSTables

It's really more complex than that. Unless your test has a single insert, there isn't 
_just_ one path for memtables and one for sstables. There are the cases where 
some data is in a memtable and some in sstables, where there is more than one 
sstable involved (and we can flush for every insert, or only in some places), 
whether we compact before reading, etc. I have seen bugs in pretty much all of 
those cases (I'm genuinely not kidding: there have been cases with range 
tombstones in particular where things only got messed up when data was flushed 
at a specific point and compaction was run before reading).

And here again, don't get me wrong: for some tests, there may be a clear place 
where we want to systematically test both with and without flush because we 
want that to be tested every time and that's fine, we can do it. But we just 
can't systematically test all combinations.

> Add random interleaving for flush/compaction when running CQL unit tests
> 
>
> Key: CASSANDRA-9161
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9161
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sylvain Lebresne
>  Labels: retrospective_generated
>
> Most CQL tests don't bother flushing, which means that they overwhelmingly 
> test the memtable path and not the sstables one. A simple way to improve on 
> that would be to make {{CQLTester}} issue flushes and compactions randomly 
> between statements.





[jira] [Commented] (CASSANDRA-7957) improve active/pending compaction monitoring

2016-03-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175908#comment-15175908
 ] 

Sylvain Lebresne commented on CASSANDRA-7957:
-

bq. I see hundreds of pending compactions. What I was asking for is the 
additional information that shows what is in that list of pending compactions

"pending compactions" is probably one of our most confusing metrics: a "pending 
task" is just a task to check whether there is more work to do; it has 
not yet chosen what to compact. So there is just no additional information for 
those tasks to expose, I'm afraid. In particular, pending tasks are only 
"blocked" by the sheer fact that we only do one compaction at a time.
To put it another way, the only time the code decides which sstables it 
compacts together is at the very beginning of a concrete compaction, and so the 
only information we could return in your case is the sstables being compacted 
by the one compaction that is running, which, as I said, is already logged at 
the beginning of said compaction.

Please note that I'm not pretending that compaction performance problems are 
easy to understand/debug, just that there isn't anything regarding the "pending 
tasks" that we don't currently expose, and exposing just the sstables involved 
in the current compaction through JMX doesn't seem all that crucial to me since 
it's already logged.

> improve active/pending compaction monitoring
> 
>
> Key: CASSANDRA-7957
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7957
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Nikolai Grigoriev
>Priority: Minor
>
> I think it might be useful to create a way to see what sstables are being 
> compacted into what new sstable. Something like an extension of "nodetool 
> compactionstats". I think it would be easier with this feature to 
> troubleshoot and understand how compactions are happening on your data. Not 
> sure how it is useful in everyday life but I could use such a feature when 
> dealing with CASSANDRA-7949.





[jira] [Commented] (CASSANDRA-9161) Add random interleaving for flush/compaction when running CQL unit tests

2016-03-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175869#comment-15175869
 ] 

Sylvain Lebresne commented on CASSANDRA-9161:
-

bq. DTests are more random in nature

Just to clarify, we're talking about unit tests here.

bq. finding the problem behind a flapping test requires a lot more time than 
for a normal test failure

It doesn't have to be. It's very clear to me that if we do this we'll use a 
specific seed to initialize our random generator and that we'll log that seed 
on any error. With that, and some added convenience so we can pass said seed 
when running a particular test, debugging a problem won't be harder than for 
any other failure. Further, CQLTester logs every statement it runs I believe, so if we 
also log the randomly added flushes/compactions, a simple inspection of the log 
would be enough to tell you how to modify the test to reproduce the issue.
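The mechanism described above can be sketched as follows (a hypothetical harness, not CQLTester's actual API): every random decision derives from a single seed, which is logged up front and can be passed back in to replay a failing run exactly.

```java
import java.util.Random;

public class SeededInterleaving {
    final long seed;
    private final Random random;

    // Use the provided seed if one was passed (e.g. to replay a failure),
    // otherwise pick one; either way, log it so any failure is reproducible.
    SeededInterleaving(Long fixedSeed) {
        this.seed = fixedSeed != null ? fixedSeed : System.nanoTime();
        this.random = new Random(seed);
        System.out.println("CQL test seed: " + seed);
    }

    // Reproducibly decide whether to flush after the current statement
    // (the 25% rate is an arbitrary illustrative choice).
    boolean flushAfterStatement() {
        return random.nextInt(4) == 0;
    }
}
```

Because both instances below share a seed, they make identical flush decisions, which is exactly what makes a flapping failure replayable.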

bq. The most common types of error in writing tests are

To be clear, that ticket wasn't trying to suggest we randomize absolutely 
everything. Most of what you suggest can't be easily randomized and is worth 
having specific tests for, I agree. But I've seen way more than one case where 
a problem hadn't been found because a test that could have found it wasn't 
flushing or compacting at the right time, and for a single test there can be 
many places where you could flush and compact: having a test for every 
permutation is just not doable. Adding some randomization is just a practical 
way to get testing of more interleaving over time and maybe find a few bugs 
along the way.

> Add random interleaving for flush/compaction when running CQL unit tests
> 
>
> Key: CASSANDRA-9161
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9161
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sylvain Lebresne
>  Labels: retrospective_generated
>
> Most CQL tests don't bother flushing, which means that they overwhelmingly 
> test the memtable path and not the sstables one. A simple way to improve on 
> that would be to make {{CQLTester}} issue flushes and compactions randomly 
> between statements.





[jira] [Commented] (CASSANDRA-7957) improve active/pending compaction monitoring

2016-03-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175801#comment-15175801
 ] 

Sylvain Lebresne commented on CASSANDRA-7957:
-

I believe we log this information; isn't that enough?

> improve active/pending compaction monitoring
> 
>
> Key: CASSANDRA-7957
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7957
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Nikolai Grigoriev
>Priority: Minor
>
> I think it might be useful to create a way to see what sstables are being 
> compacted into what new sstable. Something like an extension of "nodetool 
> compactionstats". I think it would be easier with this feature to 
> troubleshoot and understand how compactions are happening on your data. Not 
> sure how it is useful in everyday life but I could use such a feature when 
> dealing with CASSANDRA-7949.





[jira] [Updated] (CASSANDRA-8094) Heavy writes in RangeSlice read requests

2016-03-02 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8094:

Labels: lhf  (was: )

> Heavy writes in RangeSlice read  requests 
> --
>
> Key: CASSANDRA-8094
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8094
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Minh Do
>Assignee: Minh Do
>  Labels: lhf
> Fix For: 2.1.x
>
>
> RangeSlice requests always do a scheduled read repair when coordinators try 
> to resolve replicas' responses, no matter whether read_repair_chance is set or not.
> Because of this, in low-write, high-read clusters, there is a very high 
> volume of write requests going on between nodes.
> We should have an option to turn this off, and it could be separate from 
> read_repair_chance.





[jira] [Commented] (CASSANDRA-11288) Schema agreement appears to be false positive following a DROP TABLE command

2016-03-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175405#comment-15175405
 ] 

Sylvain Lebresne commented on CASSANDRA-11288:
--

It would be easier to track/test if you were to share your reproduction script. 
I'll also note that 2.0 is not supported anymore, so we'll want to check if that 
still reproduces on 2.1.

> Schema agreement appears to be false positive following a DROP TABLE command
> 
>
> Key: CASSANDRA-11288
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11288
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.0.14.439 (DSE 4.6.7)
> 2 nodes OR 4 nodes
> Connecting with Datastax Java driver 2.1.8 OR 2.0.12 OR 2.1.4 OR 2.1.9 OR 
> 3.0.0
>Reporter: Oliver Lockwood
>
> As part of a schema migration operation, our application is calling the 
> following operations on the Java driver consecutively:
> {noformat}
> session.execute("DROP TABLE table_name");
> session.execute("CREATE TABLE table_name (...)");
> {noformat}
> The second of these sometimes fails with a {{DriverException}} whose message 
> is "Table keyspace.table_name already exists".
> In the schema migration operation, there's 4 of these drop/create pairings 
> and, although it's random which exact one fails, we've never managed to get 
> further than the third operation in approximately 10 attempts - so there's a 
> reasonably high proportion of failure.
> I don't believe this is a driver issue because the driver is checking for 
> schema agreement (as per 
> https://github.com/datastax/java-driver/blob/2.1/driver-core/src/main/java/com/datastax/driver/core/ControlConnection.java#L701)
>  and we are seeing a log message to that effect.
> {noformat}
> c.d.d.c.ControlConnection - [] [] [] [] [] [] [] [] Checking for schema 
> agreement: versions are [02bce936-fddd-3bef-bb54-124d31bede57]
> {noformat}
> This log message appears in between our own logs which say "Executing 
> statement DROP TABLE..." and "Executing statement CREATE TABLE...", so we can 
> be reasonably sure this log message refers to the DROP operation being viewed 
> as "in agreement".
> Could this be a bug in the Cassandra server erroneously reporting that the 
> schemas are in agreement across the 2 nodes when, in fact, they are not?





[jira] [Commented] (CASSANDRA-7423) Allow updating individual subfields of UDT

2016-03-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175261#comment-15175261
 ] 

Sylvain Lebresne commented on CASSANDRA-7423:
-

bq. we do not currently require collections inside UDT definitions to be 
declared with {{frozen<>}}. They are always implicitly frozen.

It's not really that they are implicitly frozen, it's that we only allow frozen 
UDTs, and frozenness reaches deep. As soon as you freeze something, everything 
nested is also frozen (rather intuitively I would add), and so I don't think 
there is anything wrong with the current behavior. But if this patch only allows 
non-frozen UDTs at the top level, then I think we should force people to have 
nested fields frozen. In other words, we currently allow
{noformat}
CREATE TYPE foo (c set<int>);
CREATE TABLE bar (k int PRIMARY KEY, t frozen<foo>);
{noformat}
and that's fine, but we don't and still shouldn't allow with this patch:
{noformat}
CREATE TABLE bar (k int PRIMARY KEY, t foo);
{noformat}
given the same definition of {{foo}}. What we should allow is:
{noformat}
CREATE TYPE foo (c frozen<set<int>>);
CREATE TABLE bar (k int PRIMARY KEY, t foo);
{noformat}
Assuming we do that, which I strongly think we should, I don't see a backward 
compatibility problem supporting nesting of non-frozen stuff.

> Allow updating individual subfields of UDT
> --
>
> Key: CASSANDRA-7423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7423
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tupshin Harper
>Assignee: Tyler Hobbs
>  Labels: client-impacting, cql, docs-impacting
> Fix For: 3.x
>
>
> Since user defined types were implemented in CASSANDRA-5590 as blobs (you 
> have to rewrite the entire type in order to make any modifications), they 
> can't be safely used without LWT for any operation that wants to modify a 
> subset of the UDT's fields by any client process that is not authoritative 
> for the entire blob. 
> When trying to use UDTs to model complex records (particularly with nesting), 
> this is not an exceptional circumstance, this is the totally expected normal 
> situation. 
> The use of UDTs for anything non-trivial is harmful to either performance or 
> consistency or both.
> edit: to clarify, i believe that most potential uses of UDTs should be 
> considered anti-patterns until/unless we have field-level r/w access to 
> individual elements of the UDT, with individual timestamps and standard LWW 
> semantics





[jira] [Assigned] (CASSANDRA-5546) Gc_grace should start at the creation of the column, not when it expires

2016-03-01 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reassigned CASSANDRA-5546:
---

Assignee: Sylvain Lebresne

> Gc_grace should start at the creation of the column, not when it expires
> 
>
> Key: CASSANDRA-5546
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5546
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 3.x
>
>
> Currently, gc_grace determines "the minimum time we keep a column that has 
> been marked for deletion", where "marked for deletion" is creation time for a 
> DeletedColumn or the expiration time for an ExpiringColumn.
> However, in the case of expiring columns, if you want to optimize deletions 
> while making sure you don't resurrect overwritten data, you only care about 
> keeping expired columns gc_grace seconds *since their creation time*, not 
> *since their expiration time*. It would thus be better to have gc_grace be 
> "the minimum time we keep a column since it's creation" (which would change 
> nothing for tombstones, but for TTL would basically ensure we remove the 
> expiration time from the time we keep the column once expired).
> To sum it up, this would have the following advantages:
> # This will make fine tuning of gc_grace a little less of a black art.
> # This will be more efficient for CF mixing deletes and expiring columns 
> (we'll remove tombstones for the expiring one sooner).
> # This means gc_grace will be more reliable for things like CASSANDRA-5314.
> Doing this is pretty simple. The one concern is backward compatibility: it 
> means people that have fine-tuned gc_grace to a very low value because they 
> knew it was ok due to their systematic use of TTLs might have to update it 
> back to a bigger, more reasonable value before upgrading.
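The proposal above amounts to simple arithmetic on when an expiring cell becomes purgeable (an illustrative sketch with hypothetical helper names, not Cassandra code): today the grace period is counted from expiration, i.e. purge at creation + TTL + gc_grace; counting it from creation gives purge at creation + max(TTL, gc_grace), since a cell cannot be purged before it expires.

```java
public class GcGraceTiming {
    // Current behavior: a cell marked for deletion at (createdAt + ttl)
    // is kept for gc_grace seconds after that point.
    static long purgeAtCurrent(long createdAtSec, int ttlSec, int gcGraceSec) {
        return createdAtSec + ttlSec + gcGraceSec;
    }

    // Proposed behavior: keep the cell gc_grace seconds since its creation,
    // but never purge it before it has actually expired.
    static long purgeAtProposed(long createdAtSec, int ttlSec, int gcGraceSec) {
        return createdAtSec + Math.max(ttlSec, gcGraceSec);
    }
}
```

For example, with the default gc_grace of 864000 s (10 days) and a 1-hour TTL, the current rule keeps the expired cell 10 days past expiration, while the proposed rule allows purging it 10 days after the write.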





[jira] [Commented] (CASSANDRA-10707) Add support for Group By to Select statement

2016-03-01 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15173551#comment-15173551
 ] 

Sylvain Lebresne commented on CASSANDRA-10707:
--

bq. The first point that needs discussing is that this patch changes the 
inter-node protocol and this without bumping the protocol version

We had some offline discussion about this, and the consensus seems to be that 
we'll leave it as is for this patch with just a fat warning in the NEWS file 
that you shouldn't use {{GROUP BY}} until you've fully upgraded. As said above, 
this is not perfect if someone doesn't follow that arguably intuitive instruction 
but this'll do for this time. In the meantime, we'll fix the protocol 
deserialization so it doesn't drop the connection if a message has more than it 
expects, but just skips the message remainder. Longer term, we should introduce 
at least major and minor versioning for the messaging protocol so we can deal 
with this in a better way.

bq. the operation (filtering and ordering) commute. That's not really the case 
for {{ORDER BY}} and {{GROUP BY}}.

Actually, I guess the grouping itself commutes; it's more the aggregation that 
depends on the order. So never mind, I'm good with sticking to the SQL syntax.
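The order sensitivity is easiest to see with the streaming, {{GroupMaker}}-style approach: a group ends whenever the grouping prefix changes, which only works if rows arrive sorted by the grouping columns. A minimal sketch (names invented for illustration, not Cassandra's actual classes):

```java
import java.util.ArrayList;
import java.util.List;

public class StreamingGroupBy {
    // Streaming GROUP BY: a new group starts whenever the group key changes.
    // Each row is {groupKey, value}; we compute max(value) per group.
    public static List<int[]> maxPerGroup(int[][] sortedRows) {
        List<int[]> result = new ArrayList<>();
        Integer currentKey = null;
        int currentMax = Integer.MIN_VALUE;
        for (int[] row : sortedRows) {
            if (currentKey != null && row[0] != currentKey) {
                // Group boundary detected: emit the finished group.
                result.add(new int[]{currentKey, currentMax});
                currentMax = Integer.MIN_VALUE;
            }
            currentKey = row[0];
            currentMax = Math.max(currentMax, row[1]);
        }
        if (currentKey != null) result.add(new int[]{currentKey, currentMax});
        return result;
    }

    public static void main(String[] args) {
        int[][] rows = {{1, 5}, {1, 9}, {2, 3}, {2, 7}};
        for (int[] g : maxPerGroup(rows))
            System.out.println(g[0] + " -> " + g[1]);
    }
}
```

Feed the same code rows that are not sorted by the group key and it emits a group more than once, which is why a streaming implementation ties grouping to the clustering order.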

bq. Having a GroupMaker implementation for normal queries simplifies the code, as 
the same algorithm can be used for the 3 scenarios.

I read the code too quickly, sorry, but still, I meant going the same route 
as for {{GroupSpecification.NO_GROUPING}}. The naming is equally confusing 
imo and the code simplification is pretty debatable: we reference that 
{{GroupMaker}} in {{Selection}} (the only place where {{NO_GROUPING}} can be used, I 
believe) twice, so using some {{groupMaker != null}} won't make a big 
difference.

bq. Even if we allow functions, the {{lastClustering}} will always be the last 
clustering.

Fair enough, though I still feel like grouping it with the partition key in a 
{{GroupMaker.State}} would be a tad cleaner. And at the very least, why use a 
{{ByteBuffer[]}} for the clustering instead of {{Clustering}}, which is more 
explicit?

bq. I tried the approach that you suggested but without success

I'll try to have a look in the coming days because I do feel it would be 
cleaner and it ought to be possible but ...

bq. Performing the modification outside of the {{CQLGroupByLimits}} is probably 
possible but will force us to modify the {{DataLimits}} and {{QueryPager}} 
interfaces to expose the {{rowCount}}.

... I might be misunderstanding what this implies, but that doesn't sound 
particularly bad to me.


A few other remarks:
* In the NEWS file, you have {{IN restrictions with only one element are now 
considered as equality restrictions}}. What does that mean for the user?
* We could remove {{CFMetaData.primaryKeyColumns()}} now that it's unused.
* The comment in {{DataLimits.CQLLimits.hasEnoughLiveData}} is still missing part 
of its text; it used to (and should) read {{Getting that precise _number forces_ us 
...}}.
* Forgot that initially, but in {{DataLimits.CQLGroupByLimits.forPaging(int, 
ByteBuffer, int)}}, it's fishy to me that we use the partition key in 
parameters but reuse the pre-existing {{lastClustering}}. If we're guaranteed 
that {{lastReturnedKey == lastPartitionKey}} then we should assert it, as it's 
not immediately obvious; otherwise this is wrong.


> Add support for Group By to Select statement
> 
>
> Key: CASSANDRA-10707
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10707
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>
> Now that Cassandra supports aggregate functions, it makes sense to support 
> {{GROUP BY}} on {{SELECT}} statements.
> It should be possible to group either at the partition level or at the 
> clustering column level.
> {code}
> SELECT partitionKey, max(value) FROM myTable GROUP BY partitionKey;
> SELECT partitionKey, clustering0, clustering1, max(value) FROM myTable GROUP 
> BY partitionKey, clustering0, clustering1; 
> {code}





[jira] [Commented] (CASSANDRA-8527) Account for range tombstones wherever we account for tombstones

2016-02-29 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172122#comment-15172122
 ] 

Sylvain Lebresne commented on CASSANDRA-8527:
-

There are a bunch of places where we don't properly account for range tombstones, 
not just the thresholds; those include tracing, statistics, etc. We should 
fix all of those while we're at it, so I'm expanding the title slightly.

> Account for range tombstones wherever we account for tombstones
> ---
>
> Key: CASSANDRA-8527
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8527
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
> Fix For: 2.2.x
>
>
> As discussed in CASSANDRA-8477, we should make sure the tombstone thresholds 
> also apply to range tombstones, since they pose the same problems as cell 
> tombstones.





[jira] [Updated] (CASSANDRA-8527) Account for range tombstones wherever we account for tombstones

2016-02-29 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8527:

Summary: Account for range tombstones wherever we account for tombstones  
(was: Extend tombstone_warning_threshold to range tombstones)

> Account for range tombstones wherever we account for tombstones
> ---
>
> Key: CASSANDRA-8527
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8527
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
> Fix For: 2.2.x
>
>
> As discussed in CASSANDRA-8477, we should make sure the tombstone thresholds 
> also apply to range tombstones, since they pose the same problems as cell 
> tombstones.





[jira] [Resolved] (CASSANDRA-8345) Client notifications should carry the entire delta of the information that changed

2016-02-29 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-8345.
-
Resolution: Won't Fix

While we could do this in theory, having the driver do an extra query to get the 
details is really not a big deal, so I don't think it's worth the additional 
complexity in the binary protocol.

> Client notifications should carry the entire delta of the information that 
> changed
> --
>
> Key: CASSANDRA-8345
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8345
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Michaël Figuière
>  Labels: protocolv4
>
> Currently when the schema changes, a {{SCHEMA_CHANGE}} notification is sent 
> to the client to let it know that a modification happened in a specific table 
> or keyspace. If the client registers for these notifications, it is likely 
> that it actually cares about having an up-to-date version of this information, so 
> the next step is logically for the client to query the {{system}} keyspace to 
> retrieve the latest version of the schema for the particular element that was 
> mentioned in the notification.
> The same thing happens with the {{TOPOLOGY_CHANGE}} notification, as the client 
> will follow up with a query to retrieve the details that changed in the 
> {{system.peers}} table.
> It would be interesting to send the entire delta of the information that 
> changed within the notification. I see several advantages with this:
> * This would ensure that the data sent to the client is as small as 
> possible, as such a delta will always be smaller than the result set that would 
> eventually be received for a formal query on the {{system}} keyspace.
> * This avoids the Cassandra node receiving plenty of queries after it issues a 
> notification; instead it prepares the delta once and sends it to everybody.
> * This should improve the overall behaviour when dealing with very large 
> schemas with frequent changes (typically due to an attempt to implement 
> multitenancy through separate keyspaces), as it has been observed that the 
> notification and subsequent query traffic can become non-negligible in 
> this case.
> * This would eventually simplify the driver design by removing the need for 
> an extra asynchronous operation to follow up with, although the benefit of 
> this point will only be real once the previous versions of the protocol are 
> far behind.
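To make the proposal concrete, here is a sketch of the client-side difference between the two flows (names and shapes are invented for illustration; the real native protocol and driver internals differ):

```java
import java.util.HashMap;
import java.util.Map;

public class SchemaCacheSketch {
    // A client-side cache mapping table name -> schema definition.
    private final Map<String, String> tables = new HashMap<>();

    // Today's flow: the SCHEMA_CHANGE event only names the changed element,
    // so the driver re-queries the system keyspace and replaces its view.
    void onEventThenRequery(Map<String, String> fullSchemaFromSystemTables) {
        tables.clear();
        tables.putAll(fullSchemaFromSystemTables);
    }

    // Proposed flow: the event carries the delta itself, so no follow-up
    // query is needed; a null definition marks a dropped element.
    void onEventWithDelta(Map<String, String> delta) {
        delta.forEach((name, def) -> {
            if (def == null) tables.remove(name);
            else tables.put(name, def);
        });
    }

    Map<String, String> view() { return new HashMap<>(tables); }
}
```

The delta map is always at most the size of the changed elements, whereas the requery path transfers (and forces the server to serve) the full schema of the affected scope on every notification.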





[jira] [Resolved] (CASSANDRA-11214) Adding Support for system-Z(s390x) architecture

2016-02-29 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-11214.
--
   Resolution: Fixed
 Assignee: Nirav
 Reviewer: Sylvain Lebresne
Fix Version/s: (was: 2.2.x)
   (was: 3.x)
   3.4
   3.0.4

I want to emphasize that we don't officially support this System-Z architecture: 
no active committer has access to this architecture as far as I can tell, and no 
testing for it is done whatsoever. I committed this patch because there is 
really no reason not to, but don't take this as official support. If you have 
other problems specific to that architecture, you're likely on your own.

> Adding Support for system-Z(s390x) architecture
> ---
>
> Key: CASSANDRA-11214
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11214
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability, Testing
> Environment: rhel/sles on s390x architecture
>Reporter: Nirav
>Assignee: Nirav
>Priority: Minor
> Fix For: 3.0.4, 3.4
>
> Attachments: 11214-cassandra-3.0.txt
>
>
> System-Z (s390x) supports unaligned memory access, so this adds the architecture 
> name to the list of architectures supporting it.
> Required for a few test-case executions.





[jira] [Resolved] (CASSANDRA-8381) CFStats should record keys of largest N requests for time interval

2016-02-26 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-8381.
-
Resolution: Duplicate

> CFStats should record keys of largest N requests for time interval
> --
>
> Key: CASSANDRA-8381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8381
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Matt Stump
>
> Isolating the problem partition for a CF is currently incredibly difficult. 
> If we could keep the primary keys of the largest N read or write requests for 
> the previous interval, or since the counter was cleared, it would be extremely 
> useful.





[jira] [Resolved] (CASSANDRA-8424) Collection filtering not working when using PK

2016-02-26 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-8424.
-
Resolution: Duplicate

> Collection filtering not working when using PK
> --
>
> Key: CASSANDRA-8424
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8424
> Project: Cassandra
>  Issue Type: Improvement
> Environment: [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native 
> protocol v3]
> Ubuntu 14.04.5 64-bit
>Reporter: Lex Lythius
>Priority: Minor
>  Labels: collections
>
> I can do queries for collection keys/values as detailed in 
> http://www.datastax.com/dev/blog/cql-in-2-1 without problems. Even without 
> having a secondary index on the collection it will work (with {{ALLOW 
> FILTERING}}), but only as long as the query is performed through a *secondary* 
> index. If you go through the PK it won't. Of course, a full-scan filtering 
> query is not allowed.
> As an example, I created this table:
> {code:SQL}
> CREATE TABLE test.uloc9 (
> usr int,
> type int,
> gb ascii,
> gb_q ascii,
> info map,
> lat float,
> lng float,
> q int,
> traits set,
> ts timestamp,
> PRIMARY KEY (usr, type)
> );
> CREATE INDEX uloc9_gb ON test.uloc9 (gb);
> CREATE INDEX uloc9_gb_q ON test.uloc9 (gb_q);
> CREATE INDEX uloc9_traits ON test.uloc9 (traits);
> {code}
> then added some data and queried:
> {code}
> cqlsh:test> select * from uloc9 where gb='/nw' and info contains 'argentina' 
> allow filtering;
>  usr | type | gb  | gb_q  | info | lat
>   | lng  | q | traits | ts
> -+--+-+---+--+--+--+---++--
>1 |0 | /nw | /nw:1 | {'ci': 'san antonio', 'co': 'argentina'} | 
> -40.74000168 | -65.8305 | 1 | {'r:photographer'} | 2014-11-04 
> 18:20:29-0300
>1 |1 | /nw | /nw:1 | {'ci': 'san antonio', 'co': 'argentina'} | 
> -40.75799942 | -66.00800323 | 1 | {'r:photographer'} | 2014-11-04 
> 18:20:29-0300
> (2 rows)
> cqlsh:test> select * from uloc9 where usr=1 and info contains 'argentina' 
> allow filtering;
> code=2200 [Invalid query] message="No indexed columns present in by-columns 
> clause with Equal operator"
> cqlsh:test> select * from uloc9 where usr=1 and type=0 and info contains 
> 'argentina' allow filtering;
> code=2200 [Invalid query] message="No indexed columns present in by-columns 
> clause with Equal operator"
> {code}
> Maybe I got things wrong, but I don't see any reason why collection 
> filtering should fail when using the PK while it succeeds using any secondary 
> index (related or otherwise).





[jira] [Updated] (CASSANDRA-8511) repairedAt value is no longer logged

2016-02-26 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8511:

Labels: lhf  (was: )

> repairedAt value is no longer logged
> 
>
> Key: CASSANDRA-8511
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8511
> Project: Cassandra
>  Issue Type: Improvement
> Environment: OSX and Ubuntu
>Reporter: Philip Thompson
>Priority: Minor
>  Labels: lhf
> Fix For: 2.1.x
>
>
> The dtest repair_compaction.py is failing, which led me to discover that the 
> repairedAt value for sstables is no longer being logged during repair 
> sessions, even in DEBUG logging.





[jira] [Updated] (CASSANDRA-8612) Read metrics should be updated on all types of reads

2016-02-26 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8612:

Labels: lhf metrics  (was: metrics)

> Read metrics should be updated on all types of reads
> 
>
> Key: CASSANDRA-8612
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8612
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Chris Lohfink
>Priority: Minor
>  Labels: lhf, metrics
>
> Metrics like "sstables per read" are not updated on a range slice.  Although 
> separating things out for each type of read could make sense, like we do for 
> latencies, only exposing the metrics for one type can be a little confusing 
> when people do a query and see nothing increase.  I think it's sufficient to 
> use the same metrics for all reads.





[jira] [Updated] (CASSANDRA-11239) Deprecated repair methods cause NPE

2016-02-26 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11239:
-
Reviewer: Paulo Motta

> Deprecated repair methods cause NPE
> ---
>
> Key: CASSANDRA-11239
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11239
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nick Bailey
>Assignee: Nick Bailey
> Fix For: 3.0.4, 3.4
>
> Attachments: 0001-Don-t-NPE-when-using-forceRepairRangeAsync.patch
>
>
> The deprecated repair methods cause an NPE if you aren't doing local repairs. 
> Attaching a patch to fix it.





[jira] [Updated] (CASSANDRA-11226) nodetool tablestats' keyspace-level metrics are wrong/misleading

2016-02-26 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11226:
-
Labels: lhf  (was: )

> nodetool tablestats' keyspace-level metrics are wrong/misleading
> 
>
> Key: CASSANDRA-11226
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11226
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Priority: Minor
>  Labels: lhf
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> In the nodetool tablestats output (formerly cfstats), we display "keyspace" 
> level metrics before the table-level metrics:
> {noformat}
> Keyspace: testks
> Read Count: 14772528
> Read Latency: 0.14456651623879135 ms.
> Write Count: 4761283
> Write Latency: 0.062120404521218336 ms.
> Pending Flushes: 0
> Table: processes
> SSTable count: 7
> Space used (live): 496.76 MB
> Space used (total): 496.76 MB
> Space used by snapshots (total): 0 bytes
> Off heap memory used (total): 285.76 KB
> SSTable Compression Ratio: 0.2318241570710227
> Number of keys (estimate): 3027
> Memtable cell count: 2140
> Memtable data size: 1.66 MB
> Memtable off heap memory used: 0 bytes
> Memtable switch count: 967
> Local read count: 14772528
> Local read latency: 0.159 ms
> Local write count: 4761283
> Local write latency: 0.068 ms
> {noformat}
> However, the keyspace-level metrics are misleading, at best.  They are 
> aggregate metrics for every table in the keyspace _that is included in the 
> command line filters_.  So, if you run {{tablestats}} for a single table, the 
> keyspace-level stats will only reflect that table's stats.
> I see two possible fixes:
> # If the command line options don't include the entire keyspace, skip the 
> keyspace-level stats
> # Ignore the command line options, and always make the keyspace-level stats 
> an aggregate of all tables in the keyspace
> My only concern with option 2 is that performance may suffer a bit on 
> keyspaces with many tables.  However, this is a command line tool, so as long 
> as the response time is reasonable, I don't think it's a big deal.
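The misleading behaviour described above can be reduced to how the aggregate is computed; a sketch with invented names (not the actual nodetool code):

```java
import java.util.Map;
import java.util.Set;

public class KeyspaceStatsSketch {
    // Current behaviour: the "keyspace" read count is summed only over the
    // tables that survived the command-line filter, so running tablestats
    // for a single table makes the keyspace line merely echo that table.
    static long keyspaceReadCount(Map<String, Long> readCountPerTable,
                                  Set<String> filteredTables) {
        return readCountPerTable.entrySet().stream()
                .filter(e -> filteredTables.contains(e.getKey()))
                .mapToLong(Map.Entry::getValue)
                .sum();
    }
}
```

In these terms, fix option 2 amounts to always passing {{readCountPerTable.keySet()}} as the filter, regardless of the command-line arguments.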





[jira] [Commented] (CASSANDRA-11215) Reference leak with parallel repairs on the same table

2016-02-26 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169222#comment-15169222
 ] 

Sylvain Lebresne commented on CASSANDRA-11215:
--

This is "Patch Available", right?

> Reference leak with parallel repairs on the same table
> --
>
> Key: CASSANDRA-11215
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11215
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Olsson
>Assignee: Marcus Olsson
>
> When starting multiple repairs on the same table, Cassandra starts to log 
> reference leaks such as:
> {noformat}
> ERROR [Reference-Reaper:1] 2016-02-23 15:02:05,516 Ref.java:187 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@5213f926) to class 
> org.apache.cassandra.io.sstable.format.SSTableReader
> $InstanceTidier@605893242:.../testrepair/standard1-dcf311a0da3411e5a5c0c1a39c091431/la-30-big
>  was not released before the reference was garbage collected
> {noformat}
> Reproducible with:
> {noformat}
> ccm create repairtest -v 2.2.5 -n 3
> ccm start
> ccm stress write n=100 -schema 
> replication(strategy=SimpleStrategy,factor=3) keyspace=testrepair
> # And then perform two repairs concurrently with:
> ccm node1 nodetool repair testrepair
> {noformat}
> I know that starting multiple repairs in parallel on the same table isn't 
> very wise, but this shouldn't result in reference leaks.





[jira] [Updated] (CASSANDRA-11210) Unresolved hostname in replace address

2016-02-26 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11210:
-
Labels: lhf  (was: )

> Unresolved hostname in replace address
> --
>
> Key: CASSANDRA-11210
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11210
> Project: Cassandra
>  Issue Type: Bug
>Reporter: sankalp kohli
>Priority: Minor
>  Labels: lhf
>
> If you provide a hostname which cannot be resolved by DNS, it leads to the 
> replace args being ignored. If you provide an IP which is not in the cluster, 
> it does the right thing and complains.





[jira] [Updated] (CASSANDRA-11207) Can not remove TTL on table with default_time_to_live

2016-02-26 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11207:
-
Assignee: Benjamin Lerer

> Can not remove TTL on table with default_time_to_live
> -
>
> Key: CASSANDRA-11207
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11207
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Matthieu Nantern
>Assignee: Benjamin Lerer
>
> I've created a table with a default TTL:
> {code:sql}
> CREATE TABLE testmna.ndr (
> device_id text,
> event_year text,
> event_time timestamp,
> active boolean,
> PRIMARY KEY ((device_id, event_year), event_time)
> ) WITH CLUSTERING ORDER BY (event_time DESC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 600
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> {code}
> When I insert data with a "runtime TTL" (INSERT ... USING TTL 86400), 
> everything works as expected (the TTL is set to 86400).
> But I can't insert data without a TTL at runtime: INSERT ... USING TTL 0 does 
> not work.
> Tested on C* 2.2.4, CentOS 7
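The semantics the reporter expects can be sketched as follows (illustrative only; this is not how Cassandra resolves the TTL internally):

```java
public class EffectiveTtlSketch {
    // null  -> no TTL clause: fall back to the table's default_time_to_live
    // n > 0 -> USING TTL n: expire after n seconds
    // 0     -> USING TTL 0: the point of this ticket - explicitly cancel
    //          the table default so the row never expires
    static int effectiveTtl(Integer queryTtl, int tableDefaultTtl) {
        return queryTtl == null ? tableDefaultTtl : queryTtl;
    }
}
```

The bug is that with {{default_time_to_live}} set, the explicit {{TTL 0}} branch does not take effect as an override.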





[jira] [Resolved] (CASSANDRA-11202) Cassandra - Commit log rename error

2016-02-26 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-11202.
--
Resolution: Not A Problem

We only officially support Windows from 2.2 onward, and we know there are renaming 
problems before that, so if you want to use Windows, use 2.2. If you can reproduce 
on 2.2 though, feel free to reopen and we can look at it.

> Cassandra - Commit log rename error
> ---
>
> Key: CASSANDRA-11202
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11202
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Datastax Cassandra 2.1.10
> Windows 7
>Reporter: SUJATA
> Fix For: 2.1.x
>
>
> I am a newbie please bear with me if not very clear with the query.
> I have been working with Cassandra for some time, but suddenly it has stopped 
> working, with the issue being reported as "Error processing commit log during 
> intialization" and "rename of .log file failed".
> I have searched for similar reports but could not solve the issue.
> I tried the following: 1. Clearing the DataStax Community\data\commitlog 
> folder; all but one log file is always locked by Cassandra. 2. Changing the 
> Windows Defender settings to allow the Cassandra folder and not scan the .log 
> files.
> I am using Datastax Communitity Cassandra version 2.1.10 with Java 1.8 64-bit 
> working on a single node.
> Thanking in anticipation Sujata
> The console log:
> :1.8.0_45]
> ... 11 common frames omitted
> INFO  11:32:11 Initializing key cache with capacity of 100 MBs.
> INFO  11:32:11 Initializing row cache with capacity of 0 MBs
> INFO  11:32:11 Initializing counter cache with capacity of 50 MBs
> INFO  11:32:11 Scheduling counter cache save to every 7200 seconds (going to  
>save all keys).
> INFO  11:32:12 Initializing system.sstable_activity
> INFO  11:32:16 Opening C:\Program Files (x86)\DataStax 
> Community\data\data\system\sstable_activity- 
> 5a1ff267ace03f128563cfae6103c65e\system-sstable_activity-ka-1299 (1471 bytes)
> INFO  11:32:16 Opening C:\Program Files (x86)\DataStax 
> Community\data\data\system\sstable_activity-5a1ff267ace03f128563cfae6103c65e\system-sstable_activity-ka-1301
>  (1698 bytes)
> INFO  11:32:17 Opening C:\Program Files (x86)\DataStax 
> Community\data\data\system\sstable_activity-5a1ff267ace03f128563cfae6103c65e\system-sstable_activity-ka-1300
>  (1560 bytes)
> INFO  11:32:17 Initializing system.hints
> INFO  11:32:17 Initializing system.compaction_history
> INFO  11:32:17 Opening C:\Program Files (x86)\DataStax 
> Community\data\data\system\compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca\system-compaction_history-ka-938
>  (343 bytes)
> INFO  11:32:17 Opening C:\Program Files (x86)\DataStax 
> Community\data\data\system\compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca\system-compaction_history-ka-937
>  (14710 bytes)
> INFO  11:32:17 Initializing system.peers
> INFO  11:32:17 Opening C:\Program Files (x86)\DataStax 
> Community\data\data\system\peers-37f71aca7dc2383ba70672528af04d4f\system-peers-ka-1
>  (30 bytes)
> INFO  11:32:17 Opening C:\Program Files (x86)\DataStax 
> Community\data\data\system\peers-37f71aca7dc2383ba70672528af04d4f\system-peers-ka-2
>  (30 bytes)
> INFO  11:32:17 Opening C:\Program Files (x86)\DataStax 
> Community\data\data\system\peers-37f71aca7dc2383ba70672528af04d4f\system-peers-ka-3
>  (30 bytes)
> INFO  11:32:17 Initializing system.schema_columnfamilies
> INFO  11:32:17 Opening C:\Program Files (x86)\DataStax 
> Community\data\data\system\schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697\system-schema_columnfamilies-ka-532
>  (6166 bytes)
> .
> INFO  11:32:23 Initializing OpsCenter.pdps
> INFO  11:32:23 Opening C:\Program Files (x86)\DataStax 
> Community\data\data\OpsCenter\pdps-2d611410725d11e58514c92bd1990991\OpsCenter-pdps-ka-234
>  (13992 bytes)
> INFO  11:32:23 Opening C:\Program Files (x86)\DataStax 
> Community\data\data\OpsCenter\pdps-2d611410725d11e58514c92bd1990991\OpsCenter-pdps-ka-233
>  (13727 bytes)
> ERROR 11:32:23 Unable to delete C:\Program Files (x86)\DataStax 
> Community\data\data\system\local-7ad54392bcdd35a684174e047860b377\system-local-ka-476-Data.db
>  (it will be removed on server restart; we'll also retry after GC)
> .
> INFO  11:32:24 Initializing people.employees
> INFO  11:32:24 Opening C:\Program Files (x86)\DataStax  
> Community\data\data\people\employees-ae275410730211e5b8b0c92bd1990991\people- 
> employees-ka-2 (364 bytes)
> INFO  11:32:24 Opening C:\Program Files (x86)\DataStax  
> Community\data\data\people\employees-ae275410730211e5b8b0c92bd1990991\people- 
> employees-ka-3 (188 bytes)
> INFO  11:32:24 Opening C:\Program Files (x86)\DataStax  
> Community\data\data\people\employees-ae275410730211e5b8b0c92bd1990991\people- 
> 

[jira] [Commented] (CASSANDRA-11200) CompactionExecutor thread error brings down JVM in 3.0.3

2016-02-26 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169209#comment-15169209
 ] 

Sylvain Lebresne commented on CASSANDRA-11200:
--

If you know how to reproduce it, that would be ideal; otherwise the core dump 
would indeed be useful.

> CompactionExecutor thread error brings down JVM in 3.0.3
> 
>
> Key: CASSANDRA-11200
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11200
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: debian jesse latest release, updated Feb. 20th
>Reporter: Jason Kania
>Priority: Critical
>
> When launching Cassandra 3.0.3, with java version "1.8.0_74", Cassandra 
> writes the following to the debug file before a segmentation fault occurs 
> bringing down the JVM - the problem is repeatable.
> DEBUG [CompactionExecutor:1] 2016-02-20 18:26:16,892 CompactionTask.java:146 
> - Compacting (56f677c0-d829-11e5-b23a-25dbd4d727f6) 
> [/var/lib/cassandra/data/sensordb/periodicReading/ma-367-big-Data.db:level=0, 
> /var/lib/cassandra/data/sensordb/periodicReading/ma-368-big-Data.db:level=0, 
> /var/lib/cassandra/data/sensordb/periodicReading/ma-371-big-Data.db:level=0, 
> /var/lib/cassandra/data/sensordb/periodicReading/ma-370-big-Data.db:level=0, 
> /var/lib/cassandra/data/sensordb/periodicReading/ma-369-big-Data.db:level=0, ]
> The JVM error that occurs is the following:
> \#
> \# A fatal error has been detected by the Java Runtime Environment:
> \#
> \#  SIGBUS (0x7) at pc=0x7fa8a1052150, pid=12179, tid=140361951868672
> \#
> \# JRE version: Java(TM) SE Runtime Environment (8.0_74-b02) (build 
> 1.8.0_74-b02)
> \# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.74-b02 mixed mode 
> linux-amd64 compressed oops)
> \# Problematic frame:
> \# v  ~StubRoutines::jbyte_disjoint_arraycopy
> \#
> \# Core dump written. Default location: /tmp/core or core.12179
> \#
> \# If you would like to submit a bug report, please visit:
> \#   http://bugreport.java.com/bugreport/crash.jsp
> \#
> ---  T H R E A D  ---
> Current thread (0x7fa89c56ac20):  JavaThread "CompactionExecutor:1" 
> daemon [_thread_in_Java, id=12323, 
> stack(0x7fa89043f000,0x7fa89048)]
> siginfo: si_signo: 7 (SIGBUS), si_code: 2 (BUS_ADRERR), si_addr: 
> 0x7fa838988002
> Even if all of the files associated with "ma-[NNN]*" are removed, the JVM 
> dies with the same error after the next group of "ma-[NNN]*" files is eventually 
> written out and compacted.
> Though this may strictly be a JVM problem, I have seen the issue in Oracle 
> JVM 8.0_65 and 8.0_74, and I raise it in case this problem is due to JNI usage 
> of an external compression library or some direct memory usage.
> I have a core dump if that is helpful to anyone.
> Bug CASSANDRA-11201 may also be related although when the exception 
> referenced in the bug occurs, the JVM remains alive.





[jira] [Commented] (CASSANDRA-11193) Missing binary dependencies for running Cassandra in embedded mode

2016-02-26 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169196#comment-15169196
 ] 

Sylvain Lebresne commented on CASSANDRA-11193:
--

[~doanduyhai] Mind attaching a patch?
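Pending a proper fix, a likely workaround for embedded users is to add the asm dependency to their own build. The coordinates below are the standard ow2 asm artifact; the version is an assumption and should be matched against the asm jar bundled in {{$CASSANDRA_HOME/lib}} of the release being embedded:

```xml
<!-- Hypothetical workaround: supply the asm classes that cassandra-all
     expects but does not pull in transitively; verify the version against
     the asm jar shipped in the matching binary distribution's lib/ folder. -->
<dependency>
  <groupId>org.ow2.asm</groupId>
  <artifactId>asm</artifactId>
  <version>5.0.4</version>
</dependency>
```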

> Missing binary dependencies for running Cassandra in embedded mode
> --
>
> Key: CASSANDRA-11193
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11193
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Cassandra 3.3
>Reporter: DOAN DuyHai
>Priority: Minor
>
> When running Cassandra in embedded mode (pulling *cassandra-all-3.3.jar* 
> from Maven) and activating *UDFs*, I face the following exception when trying 
> to create a UDF:
> {noformat}
> 18:13:57.922 [main] DEBUG ACHILLES_DDL_SCRIPT -   SCRIPT : CREATE 
> FUNCTION convertToLong(input text) RETURNS NULL ON NULL INPUT RETURNS bigint 
> LANGUAGE java AS $$return Long.parseLong(input);$$;
> 18:13:57.970 [SharedPool-Worker-1] ERROR o.apache.cassandra.transport.Message 
> - Unexpected exception during request; channel = [id: 0x03f52731, 
> /192.168.1.16:55224 => /192.168.1.16:9240]
> java.lang.NoClassDefFoundError: org/objectweb/asm/ClassVisitor
>   at 
> org.apache.cassandra.cql3.functions.JavaBasedUDFunction.(JavaBasedUDFunction.java:79)
>  ~[cassandra-all-3.3.jar:3.3]
>   at 
> org.apache.cassandra.cql3.functions.UDFunction.create(UDFunction.java:223) 
> ~[cassandra-all-3.3.jar:3.3]
>   at 
> org.apache.cassandra.cql3.statements.CreateFunctionStatement.announceMigration(CreateFunctionStatement.java:162)
>  ~[cassandra-all-3.3.jar:3.3]
>   at 
> org.apache.cassandra.cql3.statements.SchemaAlteringStatement.execute(SchemaAlteringStatement.java:93)
>  ~[cassandra-all-3.3.jar:3.3]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
>  ~[cassandra-all-3.3.jar:3.3]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:237) 
> ~[cassandra-all-3.3.jar:3.3]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:222) 
> ~[cassandra-all-3.3.jar:3.3]
>   at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:115)
>  ~[cassandra-all-3.3.jar:3.3]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [cassandra-all-3.3.jar:3.3]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [cassandra-all-3.3.jar:3.3]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_60-ea]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [cassandra-all-3.3.jar:3.3]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [cassandra-all-3.3.jar:3.3]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60-ea]
> Caused by: java.lang.ClassNotFoundException: org.objectweb.asm.ClassVisitor
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381) 
> ~[na:1.8.0_60-ea]
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424) 
> ~[na:1.8.0_60-ea]
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) 
> ~[na:1.8.0_60-ea]
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357) 
> ~[na:1.8.0_60-ea]
>   ... 18 common frames omitted
> {noformat}
>  The stack trace is quite explicit: some classes from the org.objectweb.asm 
> package are missing. Looking into the {{$CASSANDRA_HOME/lib}} folder:
> {noformat}
>  19:44:07 :/opt/apps/apache-cassandra-3.2/lib]
> % ll
> total 48768
> -rw-r--r--@  1 archinnovinfo  wheel   234K Jan  7 22:42 ST4-4.0.8.jar
> -rw-r--r--@  1 archinnovinfo  wheel85K Jan  7 22:42 airline-0.6.jar
> -rw-r--r--@  1 archinnovinfo  wheel   164K Jan  7 22:42 
> antlr-runtime-3.5.2.jar
> -rw-r--r--@  1 archinnovinfo  wheel   5.1M Jan  7 22:42 
> apache-cassandra-3.2.jar
> -rw-r--r--@  1 archinnovinfo  wheel   189K Jan  7 22:42 
> apache-cassandra-clientutil-3.2.jar
> -rw-r--r--@  1 archinnovinfo  wheel   1.8M Jan  7 22:42 
> 
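The missing classes belong to the ASM bytecode library, which Java UDF compilation needs at runtime but which the cassandra-all POM does not pull in transitively. A minimal sketch of a workaround for an embedding project, assuming Maven; the artifact coordinates and version below are an assumption and should be matched against the asm jar actually shipped in the target Cassandra release's lib/ directory:

```xml
<!-- Hypothetical workaround: declare ASM explicitly alongside cassandra-all.
     The version is an assumption; match it to the asm jar bundled with the
     Cassandra distribution you embed. -->
<dependency>
  <groupId>org.ow2.asm</groupId>
  <artifactId>asm</artifactId>
  <version>5.0.4</version>
</dependency>
```

The same applies to any other lib/ jars that cassandra-all uses reflectively but does not declare as dependencies.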

[jira] [Updated] (CASSANDRA-11193) Missing binary dependencies for running Cassandra in embedded mode

2016-02-26 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11193:
-
Priority: Minor  (was: Major)

