[jira] [Commented] (CASSANDRA-8642) Cassandra crashed after stress test of write

2015-01-19 Thread ZhongYu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283562#comment-14283562
 ] 

ZhongYu commented on CASSANDRA-8642:


JDK 1.7.0_71
Ubuntu 12.04 LTS 64-bit

> Cassandra crashed after stress test of write
> 
>
> Key: CASSANDRA-8642
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8642
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra 2.1.2, single node cluster, Ubuntu, 8 core 
> CPU, 16GB memory (heapsize 8G), Vmware virtual machine.
>Reporter: ZhongYu
> Fix For: 2.1.3
>
> Attachments: QQ拼音截图未命名.png
>
>
> While performing a write stress test using YCSB, Cassandra crashed. I looked 
> at the logs, and this is the last and only entry:
> WARN  [SharedPool-Worker-25] 2015-01-18 17:35:16,611 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-25,5,main]: {}
> java.lang.InternalError: a fault occurred in a recent unsafe memory access 
> operation in compiled Java code
> at 
> org.apache.cassandra.utils.concurrent.OpOrder$Group.isBlockingSignal(OpOrder.java:302)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.utils.memory.MemtableAllocator$SubAllocator.allocate(MemtableAllocator.java:177)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.utils.memory.SlabAllocator.allocate(SlabAllocator.java:82)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.utils.memory.ContextAllocator.allocate(ContextAllocator.java:57)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.utils.memory.ContextAllocator.clone(ContextAllocator.java:47)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.utils.memory.MemtableBufferAllocator.clone(MemtableBufferAllocator.java:61)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at org.apache.cassandra.db.Memtable.put(Memtable.java:174) 
> ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1126) 
> ~[apache-cassandra-2.1.2.jar:2.1.2]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:388) 
> ~[apache-cassandra-2.1.2.jar:2.1.2]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:351) 
> ~[apache-cassandra-2.1.2.jar:2.1.2]
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:214) 
> ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.service.StorageProxy$7.runMayThrow(StorageProxy.java:999)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2117)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_71]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-2.1.2.jar:2.1.2]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8646) Row Cache Miss with clustering key

2015-01-19 Thread Nitin Padalia (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin Padalia updated CASSANDRA-8646:
-
Summary: Row Cache Miss with clustering key  (was: For Column family )

> Row Cache Miss with clustering key
> --
>
> Key: CASSANDRA-8646
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8646
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: OS: CentOS 6.5, [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL 
> spec 3.2.0 | Native protocol v3]. Using CQL CLI for query analysis.
>Reporter: Nitin Padalia
>Priority: Minor
> Fix For: 2.1.2
>
>
> Cassandra doesn't hit the row cache for the first and last rows of a partition 
> if we specify clustering keys in the WHERE condition. However, if we use LIMIT, 
> then it hits the row cache for the same rows.
> E.g., I have a column family:
> CREATE TABLE ucxndirdb2.usertable_cache (
> user_id uuid,
> dept_id uuid,
> location_id text,
> locationmap_id uuid,
> PRIMARY KEY ((user_id, dept_id), location_id)
> ) WITH CLUSTERING ORDER BY (location_id ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"3"}'
> AND comment = ''
> AND compaction = {'min_threshold': '4', 'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> Row caching is enabled with 3 rows per partition.
> Now, for a cached request, if I run:
> select * from usertable_cache WHERE user_id = 
> 7bf16edf-b552-40f4-94ac-87b2e878d8c2  and dept_id = 
> de3ac44f-2078-4321-a47c-de96c615d40d limit 3;
> Then it's a cache hit, with the following results:
>  user_id                              | dept_id                              | location_id | locationmap_id
> --------------------------------------+--------------------------------------+-------------+--------------------------------------
>  7bf16edf-b552-40f4-94ac-87b2e878d8c2 | de3ac44f-2078-4321-a47c-de96c615d40d |      ABC4:1 | 32b97639-ea5b-427f-8c27-8a5016e2ad6e
>  7bf16edf-b552-40f4-94ac-87b2e878d8c2 | de3ac44f-2078-4321-a47c-de96c615d40d |     ABC4:10 | dfacc9fc-7a6a-4fb4-8a4f-c13c606d552b
>  7bf16edf-b552-40f4-94ac-87b2e878d8c2 | de3ac44f-2078-4321-a47c-de96c615d40d |    ABC4:100 | 9ba7236a-6124-41c8-839b-edd299f510f7
> However, if I run:
>  select * from usertable_cache WHERE user_id = 
> 7bf16edf-b552-40f4-94ac-87b2e878d8c2  and dept_id = 
> de3ac44f-2078-4321-a47c-de96c615d40d and location_id = 'ABC4:1';
> or 
>  select * from usertable_cache WHERE user_id = 
> 7bf16edf-b552-40f4-94ac-87b2e878d8c2  and dept_id = 
> de3ac44f-2078-4321-a47c-de96c615d40d and location_id = 'ABC4:100';
> Then it's a cache miss. However, it's a hit for the following:
>  select * from usertable_cache WHERE user_id = 
> 7bf16edf-b552-40f4-94ac-87b2e878d8c2  and dept_id = 
> de3ac44f-2078-4321-a47c-de96c615d40d and location_id = 'ABC4:10';
> This behavior is consistent when increasing/decreasing the rows_per_partition 
> setting: the cache misses only for the first and last record of the partition.





[jira] [Created] (CASSANDRA-8646) For Column family

2015-01-19 Thread Nitin Padalia (JIRA)
Nitin Padalia created CASSANDRA-8646:


 Summary: For Column family 
 Key: CASSANDRA-8646
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8646
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: OS: CentOS 6.5, [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 
3.2.0 | Native protocol v3]. Using CQL CLI for query analysis.
Reporter: Nitin Padalia
Priority: Minor
 Fix For: 2.1.2


Cassandra doesn't hit the row cache for the first and last rows of a partition 
if we specify clustering keys in the WHERE condition. However, if we use LIMIT, 
then it hits the row cache for the same rows.

E.g., I have a column family:
CREATE TABLE ucxndirdb2.usertable_cache (
user_id uuid,
dept_id uuid,
location_id text,
locationmap_id uuid,
PRIMARY KEY ((user_id, dept_id), location_id)
) WITH CLUSTERING ORDER BY (location_id ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"3"}'
AND comment = ''
AND compaction = {'min_threshold': '4', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';

Row caching is enabled with 3 rows per partition.

Now, for a cached request, if I run:
select * from usertable_cache WHERE user_id = 
7bf16edf-b552-40f4-94ac-87b2e878d8c2  and dept_id = 
de3ac44f-2078-4321-a47c-de96c615d40d limit 3;

Then it's a cache hit, with the following results:
 user_id                              | dept_id                              | location_id | locationmap_id
--------------------------------------+--------------------------------------+-------------+--------------------------------------
 7bf16edf-b552-40f4-94ac-87b2e878d8c2 | de3ac44f-2078-4321-a47c-de96c615d40d |      ABC4:1 | 32b97639-ea5b-427f-8c27-8a5016e2ad6e
 7bf16edf-b552-40f4-94ac-87b2e878d8c2 | de3ac44f-2078-4321-a47c-de96c615d40d |     ABC4:10 | dfacc9fc-7a6a-4fb4-8a4f-c13c606d552b
 7bf16edf-b552-40f4-94ac-87b2e878d8c2 | de3ac44f-2078-4321-a47c-de96c615d40d |    ABC4:100 | 9ba7236a-6124-41c8-839b-edd299f510f7


However, if I run:
 select * from usertable_cache WHERE user_id = 
7bf16edf-b552-40f4-94ac-87b2e878d8c2  and dept_id = 
de3ac44f-2078-4321-a47c-de96c615d40d and location_id = 'ABC4:1';
or 
 select * from usertable_cache WHERE user_id = 
7bf16edf-b552-40f4-94ac-87b2e878d8c2  and dept_id = 
de3ac44f-2078-4321-a47c-de96c615d40d and location_id = 'ABC4:100';

Then it's a cache miss. However, it's a hit for the following:
 select * from usertable_cache WHERE user_id = 
7bf16edf-b552-40f4-94ac-87b2e878d8c2  and dept_id = 
de3ac44f-2078-4321-a47c-de96c615d40d and location_id = 'ABC4:10';

This behavior is consistent when increasing/decreasing the rows_per_partition 
setting: the cache misses only for the first and last record of the partition.





[jira] [Comment Edited] (CASSANDRA-8638) CQLSH -f option should ignore BOM in files

2015-01-19 Thread Abhishek Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282830#comment-14282830
 ] 

Abhishek Gupta edited comment on CASSANDRA-8638 at 1/20/15 5:36 AM:


[~s_delima] I am currently trying to analyze a fix for this defect.
1. How do we reproduce this in a real-world scenario? How do these BOM 
characters get introduced? Is it because of different architectures, like 
Intel/SPARC?
2. How many BOM characters do we need to handle? Is there a list of characters?
3. Do we need to look for these characters at the beginning of the file, or 
anywhere in the file?



was (Author: abhish_gl):
[~s_delima] I am currently trying to analyze a fix for this defect. Here are 
the questions that will help me understand the defect and identify a correct fix:

1. How do we reproduce this in a real-world scenario? How do these BOM 
characters get introduced? Is it because of different architectures, like 
Intel/SPARC?
2. How many BOM characters do we need to handle? Is there a list of characters?
3. Do we need to look for these characters at the beginning of the file, or 
anywhere in the file?

Thanks for the help.

> CQLSH -f option should ignore BOM in files
> --
>
> Key: CASSANDRA-8638
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8638
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
> Environment: Red Hat linux
>Reporter: Sotirios Delimanolis
>Priority: Trivial
>  Labels: cqlsh, lhf
> Fix For: 2.1.3
>
>
> I fell into the byte-order-mark trap while trying to execute a CQL script through CQLSH. 
> The file contained just this (plus a BOM):
> {noformat}
> CREATE KEYSPACE IF NOT EXISTS xobni WITH replication = {'class': 
> 'SimpleStrategy', 'replication_factor': '3'}  AND durable_writes = true; 
> -- and another "CREATE TABLE bucket_flags" query
> {noformat}
> I executed the script
> {noformat}
> [~]$ cqlsh --file /home/selimanolis/Schema/patches/setup.cql 
> /home/selimanolis/Schema/patches/setup.cql:2:Invalid syntax at char 1
> /home/selimanolis/Schema/patches/setup.cql:2:  CREATE KEYSPACE IF NOT EXISTS 
> test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 
> '3'}  AND durable_writes = true; 
> /home/selimanolis/Schema/patches/setup.cql:2:  ^
> /home/selimanolis/Schema/patches/setup.cql:22:ConfigurationException: 
>  message="Cannot add column family 'bucket_flags' to non existing keyspace 
> 'test'.">
> {noformat}
> I realized much later that the file had a BOM, which was apparently interfering 
> with how CQLSH parsed the file.
> It would be nice to have CQLSH ignore the BOM when processing files.





[jira] [Commented] (CASSANDRA-8638) CQLSH -f option should ignore BOM in files

2015-01-19 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283416#comment-14283416
 ] 

Sotirios Delimanolis commented on CASSANDRA-8638:
-

See the Wikipedia article: http://en.wikipedia.org/wiki/Byte_order_mark

1. It's just a few bytes added at the beginning of a (text) file's content. 
These bytes are typically added when the file's content is meant to be 
exchanged between environments, to compensate for different endianness. 

In my case, I was developing in MonoDevelop, and the IDE seemed to introduce a 
UTF-8 BOM into regular files. I've seen other IDEs like Eclipse do the same 
thing (e.g. for XML files). 

2-3. The Wikipedia article shows the BOMs for various encodings. 
Special care should be taken when these byte sequences appear in the middle 
of the content, as opposed to the start.
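As a rough illustration of the "strip only at the start" behavior discussed in points 2-3 (a hypothetical helper, not the eventual cqlsh patch, which would live in Python), skipping a leading UTF-8 BOM could be sketched as:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class BomSkip {
    // The UTF-8 BOM is the byte sequence EF BB BF at the very start of a file.
    private static final byte[] UTF8_BOM = { (byte) 0xEF, (byte) 0xBB, (byte) 0xBF };

    // Read a file as UTF-8 text, dropping a leading BOM if present.
    // BOM-like bytes appearing later in the content are left untouched.
    public static String readWithoutBom(Path p) throws IOException {
        byte[] bytes = Files.readAllBytes(p);
        int offset = (bytes.length >= 3
                      && bytes[0] == UTF8_BOM[0]
                      && bytes[1] == UTF8_BOM[1]
                      && bytes[2] == UTF8_BOM[2]) ? 3 : 0;
        return new String(bytes, offset, bytes.length - offset, StandardCharsets.UTF_8);
    }
}
```

In Python, decoding with the `utf-8-sig` codec performs the equivalent strip.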


> CQLSH -f option should ignore BOM in files
> --
>
> Key: CASSANDRA-8638
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8638
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
> Environment: Red Hat linux
>Reporter: Sotirios Delimanolis
>Priority: Trivial
>  Labels: cqlsh, lhf
> Fix For: 2.1.3
>
>
> I fell into the byte-order-mark trap while trying to execute a CQL script through CQLSH. 
> The file contained just this (plus a BOM):
> {noformat}
> CREATE KEYSPACE IF NOT EXISTS xobni WITH replication = {'class': 
> 'SimpleStrategy', 'replication_factor': '3'}  AND durable_writes = true; 
> -- and another "CREATE TABLE bucket_flags" query
> {noformat}
> I executed the script
> {noformat}
> [~]$ cqlsh --file /home/selimanolis/Schema/patches/setup.cql 
> /home/selimanolis/Schema/patches/setup.cql:2:Invalid syntax at char 1
> /home/selimanolis/Schema/patches/setup.cql:2:  CREATE KEYSPACE IF NOT EXISTS 
> test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 
> '3'}  AND durable_writes = true; 
> /home/selimanolis/Schema/patches/setup.cql:2:  ^
> /home/selimanolis/Schema/patches/setup.cql:22:ConfigurationException: 
>  message="Cannot add column family 'bucket_flags' to non existing keyspace 
> 'test'.">
> {noformat}
> I realized much later that the file had a BOM, which was apparently interfering 
> with how CQLSH parsed the file.
> It would be nice to have CQLSH ignore the BOM when processing files.





[jira] [Commented] (CASSANDRA-8583) Check for Thread.start()

2015-01-19 Thread Krzysztof Styrc (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283169#comment-14283169
 ] 

Krzysztof Styrc commented on CASSANDRA-8583:


Hi,

I've looked through the code at the given references. Points 1), 2), and 5) are 
indeed easily replaceable with a thread pool. For 4) and 6) we only have to 
implement a ThreadFactory to provide backwards-compatible thread names. Point 3) 
is more cumbersome, as we have at least two classes extending 
MessageHandler; it seems difficult to use one thread pool and still provide 
backwards-compatible thread names (imagine an IncomingMessageHandler being 
scheduled onto a thread named STREAM-OUT-XXX).

I guess backwards-compatible thread names are desired, aren't they?
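For illustration, the kind of name-preserving ThreadFactory discussed above could be sketched as follows (a hedged sketch with hypothetical names, not the actual patch):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a factory that keeps a fixed, backwards-compatible name prefix
// (e.g. "STREAM-OUT-<peer>") while letting a pool own the threads.
public class NamedThreadFactory implements ThreadFactory {
    private final String prefix;
    private final AtomicInteger seq = new AtomicInteger();

    public NamedThreadFactory(String prefix) {
        this.prefix = prefix;
    }

    @Override
    public Thread newThread(Runnable r) {
        // Each thread gets "<prefix>-<n>", so pooled threads stay identifiable.
        Thread t = new Thread(r, prefix + '-' + seq.incrementAndGet());
        t.setDaemon(true);
        return t;
    }

    public static void main(String[] args) {
        ExecutorService pool =
            Executors.newCachedThreadPool(new NamedThreadFactory("STREAM-OUT-10.0.0.1"));
        pool.submit(() -> System.out.println(Thread.currentThread().getName()));
        pool.shutdown();
    }
}
```

Note that this only works per prefix: a single pool shared across handlers cannot give direction-specific names to whichever task it happens to run, which is exactly the point 3) difficulty raised above.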




> Check for Thread.start()
> 
>
> Key: CASSANDRA-8583
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8583
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Robert Stupp
>Priority: Minor
>
> Old classes sometimes still use 
> {noformat}
>   new Thread(...).start()
> {noformat}
> which might be costly.
> This ticket is about finding and possibly fixing such code.
> Locations in the code worth investigating (IMO). This list is not prioritized; 
> it's just the order in which I found "Thread.start()":
> # 
> {{org.apache.cassandra.streaming.compress.CompressedInputStream#CompressedInputStream}}
>  creates one thread per input stream to decompress in a separate thread. If 
> necessary, should be easily replaceable with a thread-pool
> # 
> {{org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter#SSTableSimpleUnsortedWriter(java.io.File,
>  org.apache.cassandra.config.CFMetaData, 
> org.apache.cassandra.dht.IPartitioner, long)}} creates one thread per write. 
> If necessary, should be easily replaceable with a thread-pool
> # {{org.apache.cassandra.streaming.ConnectionHandler.MessageHandler#start}} 
> creates one thread. If necessary, should be easily replaceable with a 
> thread-pool.
> # {{org.apache.cassandra.net.OutboundTcpConnection#handshakeVersion}} creates 
> one thread just to implement a timeout. Not sure why not just using 
> {{Socket.setSoTimeout}}
> # 
> {{org.apache.cassandra.service.StorageService#forceRepairAsync(java.lang.String,
>  org.apache.cassandra.repair.messages.RepairOption)}} creates one thread per 
> repair. Not sure whether it's worth investigating this one, since repairs 
> are "long-running" operations.
> # {{org.apache.cassandra.db.index.SecondaryIndex#buildIndexAsync}} creates a 
> thread. Not sure whether it's worth investigating this one.
> Besides these, there are threads used in {{MessagingService}} and for 
> streaming (blocking I/O model). These could be changed to use non-blocking 
> I/O, but that's a much bigger task with much higher risk.





[jira] [Resolved] (CASSANDRA-8644) Cassandra node going down while running "COPY TO" command on around 7lakh records.....

2015-01-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-8644.
--
Resolution: Duplicate

> Cassandra node going down while running "COPY TO" command on around 7lakh 
> records.
> --
>
> Key: CASSANDRA-8644
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8644
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Prajakta Bhosale
> Attachments: cassandra.yaml
>
>
> A Cassandra node goes down while running the "COPY TO" command on one of my 
> column families, which contains around 7 lakh (700,000) records.
> We have a 4-node cluster. Please find the Cassandra config file attached.
> $ cassandra -v
> 2.0.6





[jira] [Commented] (CASSANDRA-8150) Revaluate Default JVM tuning parameters

2015-01-19 Thread Jeremy Hanna (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283112#comment-14283112
 ] 

Jeremy Hanna commented on CASSANDRA-8150:
-

Have we made any progress towards determining whether these are reasonable new 
defaults? It sounds like there is good evidence suggesting they are, but 
we are waiting on tests and perhaps some GC logs?

> Revaluate Default JVM tuning parameters
> ---
>
> Key: CASSANDRA-8150
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8150
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
>Reporter: Matt Stump
>Assignee: Brandon Williams
> Attachments: upload.png
>
>
> It's been found that the old Twitter recommendation of 100 MB per core, up to 
> 800 MB, is harmful and should no longer be used.
> Instead, the formula should be 1/3 or 1/4 of max heap, with a cap of 2 GB. 1/3 
> vs. 1/4 is debatable, and I'm open to suggestions. If I were to hazard a guess, 
> 1/3 is probably better for releases greater than 2.1.
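The proposed rule reduces to newGen = min(maxHeap / 3 or 4, 2 GB). A minimal sketch that just restates the arithmetic (hypothetical helper, not anything from the ticket's patch):

```java
public class NewGenSizing {
    static final long GB = 1L << 30;

    // Proposed rule from the ticket: 1/3 or 1/4 of max heap, capped at 2 GB.
    static long newGenBytes(long maxHeapBytes, int divisor) {
        return Math.min(maxHeapBytes / divisor, 2 * GB);
    }

    public static void main(String[] args) {
        // A 16 GB heap with divisor 4 would give 4 GB, so it hits the 2 GB cap.
        System.out.println(newGenBytes(16 * GB, 4) / GB); // prints 2
    }
}
```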





[jira] [Commented] (CASSANDRA-8645) sstableloader reports nonsensical bandwidth

2015-01-19 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283059#comment-14283059
 ] 

Aleksey Yeschenko commented on CASSANDRA-8645:
--

Committed, thanks (:

> sstableloader reports nonsensical bandwidth 
> 
>
> Key: CASSANDRA-8645
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8645
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Max Barnash
>Assignee: Max Barnash
>Priority: Minor
> Fix For: 2.0.13
>
> Attachments: 0001-Round-up-timedeltas-lower-than-1ms.patch
>
>
> When restoring a snapshot I see sstableloader reporting this:
> {noformat}
> [total: 0% - 5MB/s (avg: 0MB/s)] 
> [total: 0% - 31MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 31MB/s (avg: 0MB/s)] 
> [total: 1% - 31MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> ...
> {noformat}
> {{2147483647MB/s}} doesn’t look right.





[1/2] cassandra git commit: Round up time deltas lower than 1ms in BulkLoader

2015-01-19 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 d469b4b81 -> cc0831c60


Round up time deltas lower than 1ms in BulkLoader

patch by Max Barnash; reviewed by Aleksey Yeschenko for CASSANDRA-8645


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ae380da9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ae380da9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ae380da9

Branch: refs/heads/cassandra-2.1
Commit: ae380da9e9d73a4c0243ae35aa60cfee57a9cdf3
Parents: ce207cb
Author: Max Barnash 
Authored: Tue Jan 20 04:13:11 2015 +0700
Committer: Aleksey Yeschenko 
Committed: Tue Jan 20 00:36:36 2015 +0300

--
 CHANGES.txt | 4 
 src/java/org/apache/cassandra/tools/BulkLoader.java | 4 ++--
 2 files changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ae380da9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 54a6096..6604783 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,7 @@
+2.0.13:
+ * Round up time deltas lower than 1ms in BulkLoader (CASSANDRA-8645)
+
+
 2.0.12:
  * Use more efficient slice size for querying internal secondary
index tables (CASSANDRA-8550)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ae380da9/src/java/org/apache/cassandra/tools/BulkLoader.java
--
diff --git a/src/java/org/apache/cassandra/tools/BulkLoader.java 
b/src/java/org/apache/cassandra/tools/BulkLoader.java
index 4077722..8e9cfb3 100644
--- a/src/java/org/apache/cassandra/tools/BulkLoader.java
+++ b/src/java/org/apache/cassandra/tools/BulkLoader.java
@@ -188,7 +188,7 @@ public class BulkLoader
 sb.append(" (").append(size == 0 ? 100L : current * 100L / 
size).append("%)] ");
 }
 long time = System.nanoTime();
-long deltaTime = TimeUnit.NANOSECONDS.toMillis(time - 
lastTime);
+long deltaTime = Math.max(1L, 
TimeUnit.NANOSECONDS.toMillis(time - lastTime));
 lastTime = time;
 long deltaProgress = totalProgress - lastProgress;
 lastProgress = totalProgress;
@@ -204,7 +204,7 @@ public class BulkLoader
 private int mbPerSec(long bytes, long timeInMs)
 {
 double bytesPerMs = ((double)bytes) / timeInMs;
-return (int)((bytesPerMs * 1000) / (1024 * 2024));
+return (int)((bytesPerMs * 1000) / (1024 * 1024));
 }
 }
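The {{2147483647MB/s}} readings reported in CASSANDRA-8645 are Integer.MAX_VALUE: when the sampled delta is 0 ms, the division produces Infinity, and Java's narrowing cast to int saturates. (The same hunk also fixes a {{1024 * 2024}} typo in the divisor.) A standalone sketch of the failure mode and the committed guard, with hypothetical method names:

```java
import java.util.concurrent.TimeUnit;

public class MbPerSecDemo {
    // Buggy variant: a zero-millisecond delta makes bytesPerMs infinite,
    // and casting Infinity to int saturates at Integer.MAX_VALUE (2147483647).
    static int mbPerSecBuggy(long bytes, long timeInMs) {
        double bytesPerMs = ((double) bytes) / timeInMs;
        return (int) ((bytesPerMs * 1000) / (1024 * 1024));
    }

    // Guarded variant, mirroring the committed patch:
    // round deltas below 1 ms up to 1 ms before dividing.
    static int mbPerSecFixed(long bytes, long deltaNanos) {
        long timeInMs = Math.max(1L, TimeUnit.NANOSECONDS.toMillis(deltaNanos));
        double bytesPerMs = ((double) bytes) / timeInMs;
        return (int) ((bytesPerMs * 1000) / (1024 * 1024));
    }

    public static void main(String[] args) {
        System.out.println(mbPerSecBuggy(5_000_000L, 0));       // prints 2147483647
        System.out.println(mbPerSecFixed(5_000_000L, 500_000L)); // a sane figure
    }
}
```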
 



[2/2] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-01-19 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/tools/BulkLoader.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cc0831c6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cc0831c6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cc0831c6

Branch: refs/heads/cassandra-2.1
Commit: cc0831c608d14747b809027144710003cf38a98a
Parents: d469b4b ae380da
Author: Aleksey Yeschenko 
Authored: Tue Jan 20 00:46:52 2015 +0300
Committer: Aleksey Yeschenko 
Committed: Tue Jan 20 00:46:52 2015 +0300

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/tools/BulkLoader.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cc0831c6/CHANGES.txt
--
diff --cc CHANGES.txt
index 0ff62d2,6604783..494376d
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,59 -1,8 +1,60 @@@
 -2.0.13:
 +2.1.3
 + * (cqlsh) Escape clqshrc passwords properly (CASSANDRA-8618)
 + * Fix NPE when passing wrong argument in ALTER TABLE statement 
(CASSANDRA-8355)
 + * Pig: Refactor and deprecate CqlStorage (CASSANDRA-8599)
 + * Don't reuse the same cleanup strategy for all sstables (CASSANDRA-8537)
 + * Fix case-sensitivity of index name on CREATE and DROP INDEX
 +   statements (CASSANDRA-8365)
 + * Better detection/logging for corruption in compressed sstables 
(CASSANDRA-8192)
 + * Use the correct repairedAt value when closing writer (CASSANDRA-8570)
 + * (cqlsh) Handle a schema mismatch being detected on startup (CASSANDRA-8512)
 + * Properly calculate expected write size during compaction (CASSANDRA-8532)
 + * Invalidate affected prepared statements when a table's columns
 +   are altered (CASSANDRA-7910)
 + * Stress - user defined writes should populate sequentally (CASSANDRA-8524)
 + * Fix regression in SSTableRewriter causing some rows to become unreadable 
 +   during compaction (CASSANDRA-8429)
 + * Run major compactions for repaired/unrepaired in parallel (CASSANDRA-8510)
 + * (cqlsh) Fix compression options in DESCRIBE TABLE output when compression
 +   is disabled (CASSANDRA-8288)
 + * (cqlsh) Fix DESCRIBE output after keyspaces are altered (CASSANDRA-7623)
 + * Make sure we set lastCompactedKey correctly (CASSANDRA-8463)
 + * (cqlsh) Fix output of CONSISTENCY command (CASSANDRA-8507)
 + * (cqlsh) Fixed the handling of LIST statements (CASSANDRA-8370)
 + * Make sstablescrub check leveled manifest again (CASSANDRA-8432)
 + * Check first/last keys in sstable when giving out positions (CASSANDRA-8458)
 + * Disable mmap on Windows (CASSANDRA-6993)
 + * Add missing ConsistencyLevels to cassandra-stress (CASSANDRA-8253)
 + * Add auth support to cassandra-stress (CASSANDRA-7985)
 + * Fix ArrayIndexOutOfBoundsException when generating error message
 +   for some CQL syntax errors (CASSANDRA-8455)
 + * Scale memtable slab allocation logarithmically (CASSANDRA-7882)
 + * cassandra-stress simultaneous inserts over same seed (CASSANDRA-7964)
 + * Reduce cassandra-stress sampling memory requirements (CASSANDRA-7926)
 + * Ensure memtable flush cannot expire commit log entries from its future 
(CASSANDRA-8383)
 + * Make read "defrag" async to reclaim memtables (CASSANDRA-8459)
 + * Remove tmplink files for offline compactions (CASSANDRA-8321)
 + * Reduce maxHintsInProgress (CASSANDRA-8415)
 + * BTree updates may call provided update function twice (CASSANDRA-8018)
 + * Release sstable references after anticompaction (CASSANDRA-8386)
 + * Handle abort() in SSTableRewriter properly (CASSANDRA-8320)
 + * Fix high size calculations for prepared statements (CASSANDRA-8231)
 + * Centralize shared executors (CASSANDRA-8055)
 + * Fix filtering for CONTAINS (KEY) relations on frozen collection
 +   clustering columns when the query is restricted to a single
 +   partition (CASSANDRA-8203)
 + * Do more aggressive entire-sstable TTL expiry checks (CASSANDRA-8243)
 + * Add more log info if readMeter is null (CASSANDRA-8238)
 + * add check of the system wall clock time at startup (CASSANDRA-8305)
 + * Support for frozen collections (CASSANDRA-7859)
 + * Fix overflow on histogram computation (CASSANDRA-8028)
 + * Have paxos reuse the timestamp generation of normal queries 
(CASSANDRA-7801)
 + * Fix incremental repair not remove parent session on remote (CASSANDRA-8291)
 + * Improve JBOD disk utilization (CASSANDRA-7386)
 + * Log failed host when preparing incremental repair (CASSANDRA-8228)
 + * Force config client mode in CQLSSTableWriter (CASSANDRA-8281)
 +Merged from 2.0:
+  * Round up time deltas lower than 1ms in BulkLoader (CASSANDRA-8645)
 -
 -
 -2.0.12:
   * Use more efficient slice size for queryi

[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-01-19 Thread aleksey
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a7a19215
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a7a19215
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a7a19215

Branch: refs/heads/trunk
Commit: a7a19215ca305c532ecbb9700cac47d5af6fe128
Parents: bea3a97 cc0831c
Author: Aleksey Yeschenko 
Authored: Tue Jan 20 00:47:10 2015 +0300
Committer: Aleksey Yeschenko 
Committed: Tue Jan 20 00:47:10 2015 +0300

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/tools/BulkLoader.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a7a19215/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a7a19215/src/java/org/apache/cassandra/tools/BulkLoader.java
--



[2/3] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-01-19 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/tools/BulkLoader.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cc0831c6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cc0831c6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cc0831c6

Branch: refs/heads/trunk
Commit: cc0831c608d14747b809027144710003cf38a98a
Parents: d469b4b ae380da
Author: Aleksey Yeschenko 
Authored: Tue Jan 20 00:46:52 2015 +0300
Committer: Aleksey Yeschenko 
Committed: Tue Jan 20 00:46:52 2015 +0300

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/tools/BulkLoader.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cc0831c6/CHANGES.txt
--
diff --cc CHANGES.txt
index 0ff62d2,6604783..494376d
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,59 -1,8 +1,60 @@@
 -2.0.13:
 +2.1.3
 + * (cqlsh) Escape clqshrc passwords properly (CASSANDRA-8618)
 + * Fix NPE when passing wrong argument in ALTER TABLE statement 
(CASSANDRA-8355)
 + * Pig: Refactor and deprecate CqlStorage (CASSANDRA-8599)
 + * Don't reuse the same cleanup strategy for all sstables (CASSANDRA-8537)
 + * Fix case-sensitivity of index name on CREATE and DROP INDEX
 +   statements (CASSANDRA-8365)
 + * Better detection/logging for corruption in compressed sstables 
(CASSANDRA-8192)
 + * Use the correct repairedAt value when closing writer (CASSANDRA-8570)
 + * (cqlsh) Handle a schema mismatch being detected on startup (CASSANDRA-8512)
 + * Properly calculate expected write size during compaction (CASSANDRA-8532)
 + * Invalidate affected prepared statements when a table's columns
 +   are altered (CASSANDRA-7910)
 + * Stress - user defined writes should populate sequentally (CASSANDRA-8524)
 + * Fix regression in SSTableRewriter causing some rows to become unreadable 
 +   during compaction (CASSANDRA-8429)
 + * Run major compactions for repaired/unrepaired in parallel (CASSANDRA-8510)
 + * (cqlsh) Fix compression options in DESCRIBE TABLE output when compression
 +   is disabled (CASSANDRA-8288)
 + * (cqlsh) Fix DESCRIBE output after keyspaces are altered (CASSANDRA-7623)
 + * Make sure we set lastCompactedKey correctly (CASSANDRA-8463)
 + * (cqlsh) Fix output of CONSISTENCY command (CASSANDRA-8507)
 + * (cqlsh) Fixed the handling of LIST statements (CASSANDRA-8370)
 + * Make sstablescrub check leveled manifest again (CASSANDRA-8432)
 + * Check first/last keys in sstable when giving out positions (CASSANDRA-8458)
 + * Disable mmap on Windows (CASSANDRA-6993)
 + * Add missing ConsistencyLevels to cassandra-stress (CASSANDRA-8253)
 + * Add auth support to cassandra-stress (CASSANDRA-7985)
 + * Fix ArrayIndexOutOfBoundsException when generating error message
 +   for some CQL syntax errors (CASSANDRA-8455)
 + * Scale memtable slab allocation logarithmically (CASSANDRA-7882)
 + * cassandra-stress simultaneous inserts over same seed (CASSANDRA-7964)
 + * Reduce cassandra-stress sampling memory requirements (CASSANDRA-7926)
 + * Ensure memtable flush cannot expire commit log entries from its future 
(CASSANDRA-8383)
 + * Make read "defrag" async to reclaim memtables (CASSANDRA-8459)
 + * Remove tmplink files for offline compactions (CASSANDRA-8321)
 + * Reduce maxHintsInProgress (CASSANDRA-8415)
 + * BTree updates may call provided update function twice (CASSANDRA-8018)
 + * Release sstable references after anticompaction (CASSANDRA-8386)
 + * Handle abort() in SSTableRewriter properly (CASSANDRA-8320)
 + * Fix high size calculations for prepared statements (CASSANDRA-8231)
 + * Centralize shared executors (CASSANDRA-8055)
 + * Fix filtering for CONTAINS (KEY) relations on frozen collection
 +   clustering columns when the query is restricted to a single
 +   partition (CASSANDRA-8203)
 + * Do more aggressive entire-sstable TTL expiry checks (CASSANDRA-8243)
 + * Add more log info if readMeter is null (CASSANDRA-8238)
 + * add check of the system wall clock time at startup (CASSANDRA-8305)
 + * Support for frozen collections (CASSANDRA-7859)
 + * Fix overflow on histogram computation (CASSANDRA-8028)
 + * Have paxos reuse the timestamp generation of normal queries 
(CASSANDRA-7801)
 + * Fix incremental repair not removing parent session on remote (CASSANDRA-8291)
 + * Improve JBOD disk utilization (CASSANDRA-7386)
 + * Log failed host when preparing incremental repair (CASSANDRA-8228)
 + * Force config client mode in CQLSSTableWriter (CASSANDRA-8281)
 +Merged from 2.0:
+  * Round up time deltas lower than 1ms in BulkLoader (CASSANDRA-8645)
 -
 -
 -2.0.12:
   * Use more efficient slice size for querying internal secondary index tables (CASSANDRA-8550)

[1/3] cassandra git commit: Round up time deltas lower than 1ms in BulkLoader

2015-01-19 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk bea3a9703 -> a7a19215c


Round up time deltas lower than 1ms in BulkLoader

patch by Max Barnash; reviewed by Aleksey Yeschenko for CASSANDRA-8645


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ae380da9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ae380da9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ae380da9

Branch: refs/heads/trunk
Commit: ae380da9e9d73a4c0243ae35aa60cfee57a9cdf3
Parents: ce207cb
Author: Max Barnash 
Authored: Tue Jan 20 04:13:11 2015 +0700
Committer: Aleksey Yeschenko 
Committed: Tue Jan 20 00:36:36 2015 +0300

--
 CHANGES.txt | 4 
 src/java/org/apache/cassandra/tools/BulkLoader.java | 4 ++--
 2 files changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ae380da9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 54a6096..6604783 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,7 @@
+2.0.13:
+ * Round up time deltas lower than 1ms in BulkLoader (CASSANDRA-8645)
+
+
 2.0.12:
  * Use more efficient slice size for querying internal secondary
index tables (CASSANDRA-8550)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ae380da9/src/java/org/apache/cassandra/tools/BulkLoader.java
--
diff --git a/src/java/org/apache/cassandra/tools/BulkLoader.java 
b/src/java/org/apache/cassandra/tools/BulkLoader.java
index 4077722..8e9cfb3 100644
--- a/src/java/org/apache/cassandra/tools/BulkLoader.java
+++ b/src/java/org/apache/cassandra/tools/BulkLoader.java
@@ -188,7 +188,7 @@ public class BulkLoader
 sb.append(" (").append(size == 0 ? 100L : current * 100L / 
size).append("%)] ");
 }
 long time = System.nanoTime();
-long deltaTime = TimeUnit.NANOSECONDS.toMillis(time - 
lastTime);
+long deltaTime = Math.max(1L, 
TimeUnit.NANOSECONDS.toMillis(time - lastTime));
 lastTime = time;
 long deltaProgress = totalProgress - lastProgress;
 lastProgress = totalProgress;
@@ -204,7 +204,7 @@ public class BulkLoader
 private int mbPerSec(long bytes, long timeInMs)
 {
 double bytesPerMs = ((double)bytes) / timeInMs;
-return (int)((bytesPerMs * 1000) / (1024 * 2024));
+return (int)((bytesPerMs * 1000) / (1024 * 1024));
 }
 }
 



cassandra git commit: Round up time deltas lower than 1ms in BulkLoader

2015-01-19 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 ce207cba4 -> ae380da9e


Round up time deltas lower than 1ms in BulkLoader

patch by Max Barnash; reviewed by Aleksey Yeschenko for CASSANDRA-8645


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ae380da9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ae380da9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ae380da9

Branch: refs/heads/cassandra-2.0
Commit: ae380da9e9d73a4c0243ae35aa60cfee57a9cdf3
Parents: ce207cb
Author: Max Barnash 
Authored: Tue Jan 20 04:13:11 2015 +0700
Committer: Aleksey Yeschenko 
Committed: Tue Jan 20 00:36:36 2015 +0300

--
 CHANGES.txt | 4 
 src/java/org/apache/cassandra/tools/BulkLoader.java | 4 ++--
 2 files changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ae380da9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 54a6096..6604783 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,7 @@
+2.0.13:
+ * Round up time deltas lower than 1ms in BulkLoader (CASSANDRA-8645)
+
+
 2.0.12:
  * Use more efficient slice size for querying internal secondary
index tables (CASSANDRA-8550)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ae380da9/src/java/org/apache/cassandra/tools/BulkLoader.java
--
diff --git a/src/java/org/apache/cassandra/tools/BulkLoader.java 
b/src/java/org/apache/cassandra/tools/BulkLoader.java
index 4077722..8e9cfb3 100644
--- a/src/java/org/apache/cassandra/tools/BulkLoader.java
+++ b/src/java/org/apache/cassandra/tools/BulkLoader.java
@@ -188,7 +188,7 @@ public class BulkLoader
 sb.append(" (").append(size == 0 ? 100L : current * 100L / 
size).append("%)] ");
 }
 long time = System.nanoTime();
-long deltaTime = TimeUnit.NANOSECONDS.toMillis(time - 
lastTime);
+long deltaTime = Math.max(1L, 
TimeUnit.NANOSECONDS.toMillis(time - lastTime));
 lastTime = time;
 long deltaProgress = totalProgress - lastProgress;
 lastProgress = totalProgress;
@@ -204,7 +204,7 @@ public class BulkLoader
 private int mbPerSec(long bytes, long timeInMs)
 {
 double bytesPerMs = ((double)bytes) / timeInMs;
-return (int)((bytesPerMs * 1000) / (1024 * 2024));
+return (int)((bytesPerMs * 1000) / (1024 * 1024));
 }
 }
 



[jira] [Assigned] (CASSANDRA-8645) sstableloader reports nonsensical bandwidth

2015-01-19 Thread Max Barnash (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Barnash reassigned CASSANDRA-8645:
--

Assignee: Max Barnash

> sstableloader reports nonsensical bandwidth 
> 
>
> Key: CASSANDRA-8645
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8645
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Max Barnash
>Assignee: Max Barnash
>Priority: Minor
> Fix For: 2.0.13
>
> Attachments: 0001-Round-up-timedeltas-lower-than-1ms.patch
>
>
> When restoring a snapshot I see sstableloader reporting this:
> {noformat}
> [total: 0% - 5MB/s (avg: 0MB/s)] 
> [total: 0% - 31MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 31MB/s (avg: 0MB/s)] 
> [total: 1% - 31MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> ...
> {noformat}
> {{2147483647MB/s}} doesn’t look right.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8645) sstableloader reports nonsensical bandwidth

2015-01-19 Thread Max Barnash (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Barnash updated CASSANDRA-8645:
---
 Reviewer: Aleksey Yeschenko
Reproduced In: 2.0.11
Since Version: 2.0.0

> sstableloader reports nonsensical bandwidth 
> 
>
> Key: CASSANDRA-8645
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8645
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Max Barnash
>Priority: Minor
> Fix For: 2.0.13
>
> Attachments: 0001-Round-up-timedeltas-lower-than-1ms.patch
>
>
> When restoring a snapshot I see sstableloader reporting this:
> {noformat}
> [total: 0% - 5MB/s (avg: 0MB/s)] 
> [total: 0% - 31MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 31MB/s (avg: 0MB/s)] 
> [total: 1% - 31MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> ...
> {noformat}
> {{2147483647MB/s}} doesn’t look right.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8645) sstableloader reports nonsensical bandwidth

2015-01-19 Thread Max Barnash (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Barnash updated CASSANDRA-8645:
---
Fix Version/s: 2.0.13

> sstableloader reports nonsensical bandwidth 
> 
>
> Key: CASSANDRA-8645
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8645
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Max Barnash
>Priority: Minor
> Fix For: 2.0.13
>
> Attachments: 0001-Round-up-timedeltas-lower-than-1ms.patch
>
>
> When restoring a snapshot I see sstableloader reporting this:
> {noformat}
> [total: 0% - 5MB/s (avg: 0MB/s)] 
> [total: 0% - 31MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 31MB/s (avg: 0MB/s)] 
> [total: 1% - 31MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> ...
> {noformat}
> {{2147483647MB/s}} doesn’t look right.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8645) sstableloader reports nonsensical bandwidth

2015-01-19 Thread Max Barnash (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283012#comment-14283012
 ] 

Max Barnash commented on CASSANDRA-8645:


The attached patch only applies to 2.0.x, since that code was heavily modified in 
2.1. The bytes-to-MB conversion is still not fixed in 2.1, but I guess it doesn’t 
deserve a separate patch/issue :)

> sstableloader reports nonsensical bandwidth 
> 
>
> Key: CASSANDRA-8645
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8645
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Max Barnash
>Priority: Minor
> Attachments: 0001-Round-up-timedeltas-lower-than-1ms.patch
>
>
> When restoring a snapshot I see sstableloader reporting this:
> {noformat}
> [total: 0% - 5MB/s (avg: 0MB/s)] 
> [total: 0% - 31MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 31MB/s (avg: 0MB/s)] 
> [total: 1% - 31MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> [total: 1% - 2147483647MB/s (avg: 0MB/s)] 
> ...
> {noformat}
> {{2147483647MB/s}} doesn’t look right.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8645) sstableloader reports nonsensical bandwidth

2015-01-19 Thread Max Barnash (JIRA)
Max Barnash created CASSANDRA-8645:
--

 Summary: sstableloader reports nonsensical bandwidth 
 Key: CASSANDRA-8645
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8645
 Project: Cassandra
  Issue Type: Bug
Reporter: Max Barnash
Priority: Minor
 Attachments: 0001-Round-up-timedeltas-lower-than-1ms.patch

When restoring a snapshot I see sstableloader reporting this:

{noformat}
[total: 0% - 5MB/s (avg: 0MB/s)] 
[total: 0% - 31MB/s (avg: 0MB/s)] 
[total: 1% - 2147483647MB/s (avg: 0MB/s)] 
[total: 1% - 2147483647MB/s (avg: 0MB/s)] 
[total: 1% - 31MB/s (avg: 0MB/s)] 
[total: 1% - 31MB/s (avg: 0MB/s)] 
[total: 1% - 2147483647MB/s (avg: 0MB/s)] 
[total: 1% - 2147483647MB/s (avg: 0MB/s)] 
[total: 1% - 2147483647MB/s (avg: 0MB/s)] 
[total: 1% - 2147483647MB/s (avg: 0MB/s)] 
[total: 1% - 2147483647MB/s (avg: 0MB/s)] 
...
{noformat}

{{2147483647MB/s}} doesn’t look right.
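The bogus figure falls directly out of the arithmetic: when two progress updates land within the same millisecond, the nanosecond delta truncates to 0 ms, the double division yields Infinity, and casting Infinity to int produces Integer.MAX_VALUE (2147483647). A minimal standalone sketch of the relevant logic (illustration only, not the actual BulkLoader class):

```java
import java.util.concurrent.TimeUnit;

public class RateDemo {
    // Standalone copy of the pre-fix conversion logic, for illustration only
    static int mbPerSec(long bytes, long timeInMs) {
        double bytesPerMs = ((double) bytes) / timeInMs;
        return (int) ((bytesPerMs * 1000) / (1024 * 1024));
    }

    public static void main(String[] args) {
        // Two progress callbacks inside the same millisecond: the nanosecond
        // delta truncates to 0 ms
        long deltaMs = TimeUnit.NANOSECONDS.toMillis(500_000); // 0.5 ms -> 0
        // 32 MiB / 0 ms -> Infinity, and (int) Infinity == Integer.MAX_VALUE
        System.out.println(mbPerSec(32L * 1024 * 1024, deltaMs)); // prints 2147483647
        // The committed fix clamps the delta to at least 1 ms:
        System.out.println(mbPerSec(32L * 1024 * 1024, Math.max(1L, deltaMs))); // prints 32000
    }
}
```

Clamping with Math.max(1L, ...) (as in the committed patch) turns the zero delta into 1 ms, trading a slight overestimate of elapsed time for a finite, sane rate.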



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8552) Large compactions run out of off-heap RAM

2015-01-19 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282980#comment-14282980
 ] 

Alan Boudreault commented on CASSANDRA-8552:


For the record, I ran various other tests trying to reproduce this one, without 
luck. I was able to simulate an OOM in a small environment, but all the caches 
had been released beforehand... so it is not really related to Brent's issue. 
[~thebrenthaines], let me know if it happens again, and whether you notice any 
more details that could help reproduce it.

> Large compactions run out of off-heap RAM
> -
>
> Key: CASSANDRA-8552
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8552
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Ubuntu 14.4 
> AWS EC2
> 12 m1.xlarge nodes [4 cores, 16GB RAM, 1TB storage (251GB Used)]
> Java build 1.7.0_55-b13 and build 1.8.0_25-b17
>Reporter: Brent Haines
>Assignee: Benedict
>Priority: Blocker
> Fix For: 2.1.3
>
> Attachments: Screen Shot 2015-01-02 at 9.36.11 PM.png, data.cql, 
> fhandles.log, freelog.log, lsof.txt, meminfo.txt, sysctl.txt, system.log
>
>
> We have a large table of storing, effectively event logs and a pair of 
> denormalized tables for indexing.
> When updating from 2.0 to 2.1 we saw performance improvements, but some 
> random and silent crashes during nightly repairs. We lost a node (totally 
> corrupted) and replaced it. That node has never stabilized -- it simply can't 
> finish the compactions. 
> Smaller compactions finish. Larger compactions, like these two never finish - 
> {code}
> pending tasks: 48
>compaction type   keyspace table completed total   
>  unit   progress
> Compaction   data   stories   16532973358   75977993784   
> bytes 21.76%
> Compaction   data   stories_by_text   10593780658   38555048812   
> bytes 27.48%
> Active compaction remaining time :   0h10m51s
> {code}
> We are not getting exceptions and are not running out of heap space. The 
> Ubuntu OOM killer is reaping the process after all of the memory is consumed. 
> We watch memory in the opscenter console and it will grow. If we turn off the 
> OOM killer for the process, it will run until everything else is killed 
> instead and then the kernel panics.
> We have the following settings configured: 
> 2G Heap
> 512M New
> {code}
> memtable_heap_space_in_mb: 1024
> memtable_offheap_space_in_mb: 1024
> memtable_allocation_type: heap_buffers
> commitlog_total_space_in_mb: 2048
> concurrent_compactors: 1
> compaction_throughput_mb_per_sec: 128
> {code}
> The compaction strategy is leveled (these are read-intensive tables that are 
> rarely updated)
> I have tried every setting, every option and I have the system where the MTBF 
> is about an hour now, but we never finish compacting because there are some 
> large compactions pending. None of the GC tools or settings help because it 
> is not a GC problem. It is an off-heap memory problem.
> We are getting these messages in our syslog 
> {code}
> Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219527] BUG: Bad page map in 
> process java  pte:0320 pmd:2d6fa5067
> Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219545] addr:7fb820be3000 
> vm_flags:0870 anon_vma:  (null) mapping:  (null) 
> index:7fb820be3
> Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219556] CPU: 3 PID: 27344 
> Comm: java Tainted: GB3.13.0-24-generic #47-Ubuntu
> Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219559]  880028510e40 
> 88020d43da98 81715ac4 7fb820be3000
> Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219565]  88020d43dae0 
> 81174183 0320 0007fb820be3
> Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219568]  8802d6fa5f18 
> 0320 7fb820be3000 7fb820be4000
> Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219572] Call Trace:
> Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219584]  [] 
> dump_stack+0x45/0x56
> Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219591]  [] 
> print_bad_pte+0x1a3/0x250
> Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219594]  [] 
> vm_normal_page+0x69/0x80
> Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219598]  [] 
> unmap_page_range+0x3bb/0x7f0
> Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219602]  [] 
> unmap_single_vma+0x81/0xf0
> Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219605]  [] 
> unmap_vmas+0x49/0x90
> Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219610]  [] 
> exit_mmap+0x9c/0x170
> Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219617]  [] 
> ? __delayacct_add_tsk+0x153/0x170
> Jan  2 07:06:00 ip-10-0-2-226 kernel: [49801151.219621]  [] 
> mmput+0x5c/0x120
> Jan  

[jira] [Updated] (CASSANDRA-8642) Cassandra crashed after stress test of write

2015-01-19 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8642:
---
Reproduced In: 2.1.2
Fix Version/s: 2.1.3

What version of the JDK are you running? What Ubuntu release are you using? 

> Cassandra crashed after stress test of write
> 
>
> Key: CASSANDRA-8642
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8642
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra 2.1.2, single node cluster, Ubuntu, 8 core 
> CPU, 16GB memory (heapsize 8G), Vmware virtual machine.
>Reporter: ZhongYu
> Fix For: 2.1.3
>
> Attachments: QQ拼音截图未命名.png
>
>
> While performing a write stress test using YCSB, Cassandra crashed. I looked 
> at the logs, and this is the last and only entry:
> WARN  [SharedPool-Worker-25] 2015-01-18 17:35:16,611 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-25,5,main]: {}
> java.lang.InternalError: a fault occurred in a recent unsafe memory access 
> operation in compiled Java code
> at 
> org.apache.cassandra.utils.concurrent.OpOrder$Group.isBlockingSignal(OpOrder.java:302)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.utils.memory.MemtableAllocator$SubAllocator.allocate(MemtableAllocator.java:177)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.utils.memory.SlabAllocator.allocate(SlabAllocator.java:82)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.utils.memory.ContextAllocator.allocate(ContextAllocator.java:57)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.utils.memory.ContextAllocator.clone(ContextAllocator.java:47)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.utils.memory.MemtableBufferAllocator.clone(MemtableBufferAllocator.java:61)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at org.apache.cassandra.db.Memtable.put(Memtable.java:174) 
> ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1126) 
> ~[apache-cassandra-2.1.2.jar:2.1.2]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:388) 
> ~[apache-cassandra-2.1.2.jar:2.1.2]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:351) 
> ~[apache-cassandra-2.1.2.jar:2.1.2]
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:214) 
> ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.service.StorageProxy$7.runMayThrow(StorageProxy.java:999)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2117)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_71]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-2.1.2.jar:2.1.2]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8643) merkle tree creation fails with NoSuchElementException

2015-01-19 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8643:
---
Reproduced In: 2.1.2
Fix Version/s: 2.1.3

> merkle tree creation fails with NoSuchElementException
> --
>
> Key: CASSANDRA-8643
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8643
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: We are running a three-node cluster with replication factor 
> three (C* 2.1.2). It uses a default C* installation and STCS.
>Reporter: Jan Karlsson
> Fix For: 2.1.3
>
>
> We encountered a problem during testing over the weekend. During the tests we 
> noticed that repairs started to fail. This error has occurred on multiple 
> non-coordinator nodes during repair. Repair also ran at least once without 
> producing this error.
> We run repair -pr on all nodes on different days. CPU values were around 40% 
> and disk was 50% full.
> From what I understand, the coordinator asked for merkle trees from the other 
> two nodes. However, one of the nodes fails to create its merkle tree.
> Unfortunately we do not have a way to reproduce this problem.
> The coordinator receives:
> {noformat}
> 2015-01-09T17:55:57.091+0100  INFO [RepairJobTask:4] RepairJob.java:145 
> [repair #59455950-9820-11e4-b5c1-7797064e1316] requesting merkle trees for 
> censored (to [/xx.90, /xx.98, /xx.82])
> 2015-01-09T17:55:58.516+0100  INFO [AntiEntropyStage:1] 
> RepairSession.java:171 [repair #59455950-9820-11e4-b5c1-7797064e1316] 
> Received merkle tree for censored from /xx.90
> 2015-01-09T17:55:59.581+0100 ERROR [AntiEntropySessions:76] 
> RepairSession.java:303 [repair #59455950-9820-11e4-b5c1-7797064e1316] session 
> completed with the following error
> org.apache.cassandra.exceptions.RepairException: [repair 
> #59455950-9820-11e4-b5c1-7797064e1316 on censored/censored, 
> (-6476420463551243930,-6471459119674373580]] Validation failed in /xx.98
> at 
> org.apache.cassandra.repair.RepairSession.validationComplete(RepairSession.java:166)
>  ~[apache-cassandra-2.1.1.jar:2.1.1]
> at 
> org.apache.cassandra.service.ActiveRepairService.handleMessage(ActiveRepairService.java:384)
>  ~[apache-cassandra-2.1.1.jar:2.1.1]
> at 
> org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:126)
>  ~[apache-cassandra-2.1.1.jar:2.1.1]
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_51]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
> 2015-01-09T17:55:59.582+0100 ERROR [AntiEntropySessions:76] 
> CassandraDaemon.java:153 Exception in thread 
> Thread[AntiEntropySessions:76,5,RMI Runtime]
> java.lang.RuntimeException: org.apache.cassandra.exceptions.RepairException: 
> [repair #59455950-9820-11e4-b5c1-7797064e1316 on censored/censored, 
> (-6476420463551243930,-6471459119674373580]] Validation failed in /xx.98
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_51]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_51]
>at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51] Caused by: 
> org.apache.cassandra.exceptions.RepairException: [repair 
> #59455950-9820-11e4-b5c1-7797064e1316 on censored/censored, 
> (-6476420463551243930,-6471459119674373580]] Validation failed in /xx.98
> at 
> org.apache.cassandra.repair.RepairSession.validationComplete(RepairSession.java:166)
>  ~[apache-cassandra-2.1.1.jar:2.1.1]
> at 
> org.apache.cassandra.service.ActiveRepairService.handleMessage(ActiveRepairService.java:384)
>  ~[apache-cassandra-2.1.1.jar:2.1.1]
> at 
> org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:126)
>  ~[apache-cassandra-2.1.1.jar:2.1.1]
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
> ... 3 common frames omitted
> {noformat}
> While one of the other nod

[jira] [Comment Edited] (CASSANDRA-7432) Add new CMS GC flags to cassandra_env.sh for JVM later than 1.7.0_60

2015-01-19 Thread Jeremy Hanna (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282609#comment-14282609
 ] 

Jeremy Hanna edited comment on CASSANDRA-7432 at 1/19/15 7:53 PM:
--

It looks like this is in 2.0, but not 2.1 forward.


was (Author: jeromatron):
It looks like is in 2.0, but not 2.1 forward.

> Add new CMS GC flags to cassandra_env.sh for JVM later than 1.7.0_60
> 
>
> Key: CASSANDRA-7432
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7432
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Packaging
>Reporter: graham sanderson
>Assignee: Brandon Williams
> Fix For: 2.0.10, 2.1.1
>
> Attachments: 7432.txt
>
>
> The new flags in question are as follows:
> {code}
> -XX:+CMSParallelInitialMarkEnabled
> -XX:+CMSEdenChunksRecordAlways
> {code}
> Given we already have
> {code}
> JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC" 
> JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC" 
> JVM_OPTS="$JVM_OPTS -XX:+CMSParallelRemarkEnabled" 
> JVM_OPTS="$JVM_OPTS -XX:+UseTLAB"
> if [ "$JVM_ARCH" = "64-Bit" ] ; then
> JVM_OPTS="$JVM_OPTS -XX:+UseCondCardMark"
> fi
> {code}
> The assumption would be that people are at least running on a large number of 
> CPU cores/threads
> I would therefore recommend defaulting these flags if available - the only 
> two possible downsides for {{+CMSEdenChunksRecordAlways}}:
> 1) There is a new very short (probably un-contended) lock in the "slow" (non 
> TLAB) eden allocation path with {{+CMSEdenChunksRecordAlways}}. I haven't 
> detected this timing wise - this is the "slow" path after all
> 2) If you are running with {{-XX:-UseCMSCompactAtFullCollection}} (not the 
> default) *and* you call {{System.gc()}} then  {{+CMSEdenChunksRecordAlways}} 
> will expose you to a possible seg fault: (see
> [http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8021809])
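Since the two flags are only understood by JDK 7u60 and later, they would need to be gated on the running JVM's patch level. A hypothetical sketch in the style of conf/cassandra-env.sh (the version string and variable names are assumed for illustration; this is not the committed patch):

```shell
#!/bin/sh
# Hypothetical sketch: enable the new CMS flags only on JDK 1.7.0_60+.
jvmver="1.7.0_71"                # would normally be parsed from `java -version`
JVM_PATCH_VERSION=${jvmver##*_}  # strip everything up to the last '_' -> 71

# Flags cassandra-env.sh already sets
JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC"
JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
JVM_OPTS="$JVM_OPTS -XX:+CMSParallelRemarkEnabled"

# Only JVMs at or above 1.7.0_60 understand the two new flags
if [ "$JVM_PATCH_VERSION" -ge 60 ] 2>/dev/null; then
    JVM_OPTS="$JVM_OPTS -XX:+CMSParallelInitialMarkEnabled"
    JVM_OPTS="$JVM_OPTS -XX:+CMSEdenChunksRecordAlways"
fi

echo "$JVM_OPTS"
```

Guarding the numeric comparison with `2>/dev/null` keeps the script from failing on JVMs whose version string does not end in a numeric patch level (e.g. early-access builds).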



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7432) Add new CMS GC flags to cassandra_env.sh for JVM later than 1.7.0_60

2015-01-19 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282931#comment-14282931
 ] 

Brandon Williams edited comment on CASSANDRA-7432 at 1/19/15 7:51 PM:
--

Not sure what happened, I see my original commit did go into 2.1, but couldn't 
find where it got removed.  Anyway, I put them back.


was (Author: brandon.williams):
Not what happened, I see my original commit did go into 2.1, but couldn't find 
where it got removed.  Anyway, I put them back.

> Add new CMS GC flags to cassandra_env.sh for JVM later than 1.7.0_60
> 
>
> Key: CASSANDRA-7432
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7432
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Packaging
>Reporter: graham sanderson
>Assignee: Brandon Williams
> Fix For: 2.0.10, 2.1.1
>
> Attachments: 7432.txt
>
>
> The new flags in question are as follows:
> {code}
> -XX:+CMSParallelInitialMarkEnabled
> -XX:+CMSEdenChunksRecordAlways
> {code}
> Given we already have
> {code}
> JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC" 
> JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC" 
> JVM_OPTS="$JVM_OPTS -XX:+CMSParallelRemarkEnabled" 
> JVM_OPTS="$JVM_OPTS -XX:+UseTLAB"
> if [ "$JVM_ARCH" = "64-Bit" ] ; then
> JVM_OPTS="$JVM_OPTS -XX:+UseCondCardMark"
> fi
> {code}
> The assumption would be that people are at least running on a large number of 
> CPU cores/threads
> I would therefore recommend defaulting these flags if available - the only 
> two possible downsides for {{+CMSEdenChunksRecordAlways}}:
> 1) There is a new very short (probably un-contended) lock in the "slow" (non 
> TLAB) eden allocation path with {{+CMSEdenChunksRecordAlways}}. I haven't 
> detected this timing wise - this is the "slow" path after all
> 2) If you are running with {{-XX:-UseCMSCompactAtFullCollection}} (not the 
> default) *and* you call {{System.gc()}} then  {{+CMSEdenChunksRecordAlways}} 
> will expose you to a possible seg fault: (see
> [http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8021809])



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7432) Add new CMS GC flags to cassandra_env.sh for JVM later than 1.7.0_60

2015-01-19 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282931#comment-14282931
 ] 

Brandon Williams commented on CASSANDRA-7432:
-

Not sure what happened; I see my original commit did go into 2.1, but I couldn't 
find where it got removed.  Anyway, I put them back.

> Add new CMS GC flags to cassandra_env.sh for JVM later than 1.7.0_60
> 
>
> Key: CASSANDRA-7432
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7432
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Packaging
>Reporter: graham sanderson
>Assignee: Brandon Williams
> Fix For: 2.0.10, 2.1.1
>
> Attachments: 7432.txt
>
>
> The new flags in question are as follows:
> {code}
> -XX:+CMSParallelInitialMarkEnabled
> -XX:+CMSEdenChunksRecordAlways
> {code}
> Given we already have
> {code}
> JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC" 
> JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC" 
> JVM_OPTS="$JVM_OPTS -XX:+CMSParallelRemarkEnabled" 
> JVM_OPTS="$JVM_OPTS -XX:+UseTLAB"
> if [ "$JVM_ARCH" = "64-Bit" ] ; then
> JVM_OPTS="$JVM_OPTS -XX:+UseCondCardMark"
> fi
> {code}
> The assumption would be that people are at least running on a large number of
> CPU cores/threads.
> I would therefore recommend defaulting these flags if available - the only
> two possible downsides for {{+CMSEdenChunksRecordAlways}} are:
> 1) There is a new, very short (probably uncontended) lock in the "slow"
> (non-TLAB) eden allocation path with {{+CMSEdenChunksRecordAlways}}. I haven't
> detected this timing-wise - this is the "slow" path, after all.
> 2) If you are running with {{-XX:-UseCMSCompactAtFullCollection}} (not the
> default) *and* you call {{System.gc()}}, then {{+CMSEdenChunksRecordAlways}}
> will expose you to a possible seg fault (see
> [http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8021809]).





[2/3] cassandra git commit: add missing jvm opts back

2015-01-19 Thread brandonwilliams
add missing jvm opts back


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d469b4b8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d469b4b8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d469b4b8

Branch: refs/heads/trunk
Commit: d469b4b817f8d108e33f69bba3c94ea792f2d9e9
Parents: f88864c
Author: Brandon Williams 
Authored: Mon Jan 19 13:47:49 2015 -0600
Committer: Brandon Williams 
Committed: Mon Jan 19 13:47:49 2015 -0600

--
 conf/cassandra-env.sh | 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d469b4b8/conf/cassandra-env.sh
--
diff --git a/conf/cassandra-env.sh b/conf/cassandra-env.sh
index 755f962..3f4c21b 100644
--- a/conf/cassandra-env.sh
+++ b/conf/cassandra-env.sh
@@ -221,6 +221,10 @@ JVM_OPTS="$JVM_OPTS -XX:CompileCommandFile=$CASSANDRA_CONF/hotspot_compiler"
 JVM_OPTS="$JVM_OPTS -XX:CMSWaitDuration=1"
 
 # note: bash evals '1.7.x' as > '1.7' so this is really a >= 1.7 jvm check
+if { [ "$JVM_VERSION" \> "1.7" ] && [ "$JVM_VERSION" \< "1.8.0" ] && [ "$JVM_PATCH_VERSION" -ge "60" ]; } || [ "$JVM_VERSION" \> "1.8" ] ; then
+JVM_OPTS="$JVM_OPTS -XX:+CMSParallelInitialMarkEnabled -XX:+CMSEdenChunksRecordAlways -XX:CMSWaitDuration=1"
+fi
+
 if [ "$JVM_ARCH" = "64-Bit" ] ; then
 JVM_OPTS="$JVM_OPTS -XX:+UseCondCardMark"
 fi
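The script's comment relies on bash's lexicographic string `>` inside `[ ]`. The same semantics can be illustrated in Java with `String.compareTo` (a hedged illustration only; the actual check lives in cassandra-env.sh):

```java
public class VersionCompareSketch {
    public static void main(String[] args) {
        // Lexicographic comparison, as bash's \> does inside [ ]:
        // any "1.7.x" sorts after the bare prefix "1.7".
        System.out.println("1.7.0_71".compareTo("1.7") > 0); // true
        System.out.println("1.8.0_25".compareTo("1.8") > 0); // true
        // The pitfall: it is not numeric - "1.10" sorts before "1.9",
        // which is why the patch level is compared with -ge separately.
        System.out.println("1.10".compareTo("1.9") < 0);     // true
    }
}
```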



[1/3] cassandra git commit: add missing jvm opts back

2015-01-19 Thread brandonwilliams
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 f88864cc6 -> d469b4b81
  refs/heads/trunk d314c07a1 -> bea3a9703


add missing jvm opts back


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d469b4b8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d469b4b8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d469b4b8

Branch: refs/heads/cassandra-2.1
Commit: d469b4b817f8d108e33f69bba3c94ea792f2d9e9
Parents: f88864c
Author: Brandon Williams 
Authored: Mon Jan 19 13:47:49 2015 -0600
Committer: Brandon Williams 
Committed: Mon Jan 19 13:47:49 2015 -0600

--
 conf/cassandra-env.sh | 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d469b4b8/conf/cassandra-env.sh
--
diff --git a/conf/cassandra-env.sh b/conf/cassandra-env.sh
index 755f962..3f4c21b 100644
--- a/conf/cassandra-env.sh
+++ b/conf/cassandra-env.sh
@@ -221,6 +221,10 @@ JVM_OPTS="$JVM_OPTS -XX:CompileCommandFile=$CASSANDRA_CONF/hotspot_compiler"
 JVM_OPTS="$JVM_OPTS -XX:CMSWaitDuration=1"
 
 # note: bash evals '1.7.x' as > '1.7' so this is really a >= 1.7 jvm check
+if { [ "$JVM_VERSION" \> "1.7" ] && [ "$JVM_VERSION" \< "1.8.0" ] && [ "$JVM_PATCH_VERSION" -ge "60" ]; } || [ "$JVM_VERSION" \> "1.8" ] ; then
+JVM_OPTS="$JVM_OPTS -XX:+CMSParallelInitialMarkEnabled -XX:+CMSEdenChunksRecordAlways -XX:CMSWaitDuration=1"
+fi
+
 if [ "$JVM_ARCH" = "64-Bit" ] ; then
 JVM_OPTS="$JVM_OPTS -XX:+UseCondCardMark"
 fi



[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-01-19 Thread brandonwilliams
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bea3a970
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bea3a970
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bea3a970

Branch: refs/heads/trunk
Commit: bea3a9703c8494c59c89f8d2bd98c563d43faa0f
Parents: d314c07 d469b4b
Author: Brandon Williams 
Authored: Mon Jan 19 13:47:57 2015 -0600
Committer: Brandon Williams 
Committed: Mon Jan 19 13:47:57 2015 -0600

--
 conf/cassandra-env.sh | 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bea3a970/conf/cassandra-env.sh
--



[jira] [Updated] (CASSANDRA-8619) using CQLSSTableWriter gives ConcurrentModificationException

2015-01-19 Thread Igor Berman (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Berman updated CASSANDRA-8619:
---
Environment: 
sun jdk 7
linux - ubuntu

> using CQLSSTableWriter gives ConcurrentModificationException
> 
>
> Key: CASSANDRA-8619
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8619
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: sun jdk 7
> linux - ubuntu
>Reporter: Igor Berman
> Attachments: TimeSeriesCassandraLoaderTest.java
>
>
> Using CQLSSTableWriter gives a ConcurrentModificationException.
> I'm trying to load many time series into Cassandra 2.0.11-1
> using the java driver 'org.apache.cassandra:cassandra-all:2.0.11'
> {noformat}
> java.util.ConcurrentModificationException
>   at java.util.TreeMap$PrivateEntryIterator.nextEntry(TreeMap.java:1115)
>   at java.util.TreeMap$ValueIterator.next(TreeMap.java:1160)
>   at 
> org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:126)
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:202)
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:187)
>   at 
> org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter$DiskWriter.run(SSTableSimpleUnsortedWriter.java:215)
> schema
> CREATE TABLE test.sample (ts_id bigint, yr int, t timestamp, v double, tgs 
> set, PRIMARY KEY((ts_id,yr), t)) WITH CLUSTERING ORDER BY (t DESC) 
> AND COMPRESSION = {'sstable_compression': 'LZ4Compressor'};
> statement:
> INSERT INTO  test.sample(ts_id, yr, t, v) VALUES (?,?,?,?)
> {noformat}
> It happens more often with .withBufferSizeInMB(128) than with
> .withBufferSizeInMB(256).
> code based on 
> http://planetcassandra.org/blog/using-the-cassandra-bulk-loader-updated/
> writer.addRow(tsId, year, new Date(time), value);
> Any suggestions will be highly appreciated





[jira] [Commented] (CASSANDRA-8619) using CQLSSTableWriter gives ConcurrentModificationException

2015-01-19 Thread Igor Berman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282853#comment-14282853
 ] 

Igor Berman commented on CASSANDRA-8619:


I believe this is connected to concurrent access to TreeMapBackedSortedColumns
from both the preparing (main) thread and the DiskWriter thread.
TreeMapBackedSortedColumns is not thread-safe.
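As a sketch of that failure mode: TreeMap iterators are fail-fast, so a structural modification made while another code path is iterating the map surfaces as a ConcurrentModificationException - the shape of the stack trace above. The snippet below (hypothetical class and method names) reproduces the mechanism single-threaded; across two threads the same check fires, just nondeterministically.

```java
import java.util.ConcurrentModificationException;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical demo: TreeMapBackedSortedColumns wraps a TreeMap, whose
// iterators are fail-fast.
public class TreeMapFailFast {
    static boolean triggersCme() {
        Map<Long, String> columns = new TreeMap<>();
        for (long i = 0; i < 8; i++) columns.put(i, "v" + i);
        try {
            for (String v : columns.values()) {
                // Simulates the producing thread adding a column while the
                // DiskWriter thread iterates the same map.
                columns.put(100L, "concurrent");
            }
        } catch (ConcurrentModificationException e) {
            return true; // the fail-fast modCount check fired
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println("fail-fast triggered: " + triggersCme());
    }
}
```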


> using CQLSSTableWriter gives ConcurrentModificationException
> 
>
> Key: CASSANDRA-8619
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8619
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Igor Berman
> Attachments: TimeSeriesCassandraLoaderTest.java
>
>
> Using CQLSSTableWriter gives a ConcurrentModificationException.
> I'm trying to load many time series into Cassandra 2.0.11-1
> using the java driver 'org.apache.cassandra:cassandra-all:2.0.11'
> {noformat}
> java.util.ConcurrentModificationException
>   at java.util.TreeMap$PrivateEntryIterator.nextEntry(TreeMap.java:1115)
>   at java.util.TreeMap$ValueIterator.next(TreeMap.java:1160)
>   at 
> org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:126)
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:202)
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:187)
>   at 
> org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter$DiskWriter.run(SSTableSimpleUnsortedWriter.java:215)
> schema
> CREATE TABLE test.sample (ts_id bigint, yr int, t timestamp, v double, tgs 
> set, PRIMARY KEY((ts_id,yr), t)) WITH CLUSTERING ORDER BY (t DESC) 
> AND COMPRESSION = {'sstable_compression': 'LZ4Compressor'};
> statement:
> INSERT INTO  test.sample(ts_id, yr, t, v) VALUES (?,?,?,?)
> {noformat}
> It happens more often with .withBufferSizeInMB(128) than with
> .withBufferSizeInMB(256).
> code based on 
> http://planetcassandra.org/blog/using-the-cassandra-bulk-loader-updated/
> writer.addRow(tsId, year, new Date(time), value);
> Any suggestions will be highly appreciated





[jira] [Commented] (CASSANDRA-8638) CQLSH -f option should ignore BOM in files

2015-01-19 Thread Abhishek Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282830#comment-14282830
 ] 

Abhishek Gupta commented on CASSANDRA-8638:
---

[~s_delima] I am currently trying to analyze a fix for this defect. Here are
the questions that will help me understand the defect and identify a correct
fix:

1. How do we reproduce this in a real-world scenario? How do these BOM
characters get introduced - is it because of different architectures like
Intel / SPARC?
2. How many BOM characters do we need to handle - is there a list of them?
3. Do we need to look for these characters only at the beginning of the file,
or anywhere in the file?

Thanks for the help.
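On question 2: the common cases are the Unicode BOM byte sequences (UTF-8 `EF BB BF`, UTF-16 `FE FF` / `FF FE`), all of which decode to the single code point U+FEFF, and a BOM normally only needs handling at the very start of the file. A minimal, hypothetical sketch of stripping it after decoding (cqlsh itself is Python; Java is used here purely for illustration):

```java
import java.nio.charset.StandardCharsets;

public class BomStrip {
    // A decoded BOM is the single char U+FEFF; drop it if present.
    static String stripBom(String s) {
        return (!s.isEmpty() && s.charAt(0) == '\uFEFF') ? s.substring(1) : s;
    }

    public static void main(String[] args) {
        // A UTF-8 file starting with the BOM bytes EF BB BF.
        byte[] raw = { (byte) 0xEF, (byte) 0xBB, (byte) 0xBF,
                       'C', 'R', 'E', 'A', 'T', 'E' };
        String decoded = new String(raw, StandardCharsets.UTF_8);
        System.out.println(stripBom(decoded)); // CREATE
    }
}
```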

> CQLSH -f option should ignore BOM in files
> --
>
> Key: CASSANDRA-8638
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8638
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
> Environment: Red Hat linux
>Reporter: Sotirios Delimanolis
>Priority: Trivial
>  Labels: cqlsh, lhf
> Fix For: 2.1.3
>
>
> I fell into the byte-order-mark trap trying to execute a CQL script through
> CQLSH. The file contained just the following (plus a BOM):
> {noformat}
> CREATE KEYSPACE IF NOT EXISTS xobni WITH replication = {'class': 
> 'SimpleStrategy', 'replication_factor': '3'}  AND durable_writes = true; 
> -- and another "CREATE TABLE bucket_flags" query
> {noformat}
> I executed the script
> {noformat}
> [~]$ cqlsh --file /home/selimanolis/Schema/patches/setup.cql 
> /home/selimanolis/Schema/patches/setup.cql:2:Invalid syntax at char 1
> /home/selimanolis/Schema/patches/setup.cql:2:  CREATE KEYSPACE IF NOT EXISTS 
> test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 
> '3'}  AND durable_writes = true; 
> /home/selimanolis/Schema/patches/setup.cql:2:  ^
> /home/selimanolis/Schema/patches/setup.cql:22:ConfigurationException: 
>  message="Cannot add column family 'bucket_flags' to non existing keyspace 
> 'test'.">
> {noformat}
> I realized much later that the file had a BOM which was seemingly screwing 
> with how CQLSH parsed the file.
> It would be nice to have CQLSH ignore the BOM when processing files.





[jira] [Commented] (CASSANDRA-8421) Cassandra 2.1.1 & Cassandra 2.1.2 UDT not returning value for LIST type as UDT

2015-01-19 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282674#comment-14282674
 ] 

Benjamin Lerer commented on CASSANDRA-8421:
---

[~thobbs] can you review?

> Cassandra 2.1.1 & Cassandra 2.1.2 UDT not returning value for LIST type as UDT
> --
>
> Key: CASSANDRA-8421
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8421
> Project: Cassandra
>  Issue Type: Bug
>  Components: API
> Environment: single node cassandra 
>Reporter: madheswaran
>Assignee: Benjamin Lerer
> Fix For: 3.0, 2.1.3
>
> Attachments: 8421-unittest.txt, CASSANDRA-8421.txt, entity_data.csv
>
>
> I am using a List whose element type is a UDT.
> UDT:
> {code}
> CREATE TYPE
> fieldmap (
>  key text,
>  value text
> );
> {code}
> TABLE:
> {code}
> CREATE TABLE entity (
>   entity_id uuid PRIMARY KEY,
>   begining int,
>   domain text,
>   domain_type text,
>   entity_template_name text,
>   field_values list,
>   global_entity_type text,
>   revision_time timeuuid,
>   status_key int,
>   status_name text,
>   uuid timeuuid
>   ) {code}
> INDEX:
> {code}
> CREATE INDEX entity_domain_idx_1 ON galaxy_dev.entity (domain);
> CREATE INDEX entity_field_values_idx_1 ON galaxy_dev.entity (field_values);
> CREATE INDEX entity_global_entity_type_idx_1 ON galaxy_dev.entity (gen_type );
> {code}
> QUERY
> {code}
> SELECT * FROM entity WHERE status_key < 3 and field_values contains {key: 
> 'userName', value: 'Sprint5_22'} and gen_type = 'USER' and domain = 
> 'S4_1017.abc.com' allow filtering;
> {code}
> The above query returns values for some rows but not for many others, even
> though those rows and their data exist.
> Observation:
> If I execute the query with columns other than field_maps, it returns
> values. I suspect the problem is with LIST of UDT.
> I have a single-node Cassandra DB. Please let me know why this strange
> behavior occurs.





[jira] [Updated] (CASSANDRA-8421) Cassandra 2.1.1 & Cassandra 2.1.2 UDT not returning value for LIST type as UDT

2015-01-19 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-8421:
--
Attachment: CASSANDRA-8421.txt

The patch fixes the problem in {{CompositesSearcher}}.

> Cassandra 2.1.1 & Cassandra 2.1.2 UDT not returning value for LIST type as UDT
> --
>
> Key: CASSANDRA-8421
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8421
> Project: Cassandra
>  Issue Type: Bug
>  Components: API
> Environment: single node cassandra 
>Reporter: madheswaran
>Assignee: Benjamin Lerer
> Fix For: 3.0, 2.1.3
>
> Attachments: 8421-unittest.txt, CASSANDRA-8421.txt, entity_data.csv
>
>
> I am using a List whose element type is a UDT.
> UDT:
> {code}
> CREATE TYPE
> fieldmap (
>  key text,
>  value text
> );
> {code}
> TABLE:
> {code}
> CREATE TABLE entity (
>   entity_id uuid PRIMARY KEY,
>   begining int,
>   domain text,
>   domain_type text,
>   entity_template_name text,
>   field_values list,
>   global_entity_type text,
>   revision_time timeuuid,
>   status_key int,
>   status_name text,
>   uuid timeuuid
>   ) {code}
> INDEX:
> {code}
> CREATE INDEX entity_domain_idx_1 ON galaxy_dev.entity (domain);
> CREATE INDEX entity_field_values_idx_1 ON galaxy_dev.entity (field_values);
> CREATE INDEX entity_global_entity_type_idx_1 ON galaxy_dev.entity (gen_type );
> {code}
> QUERY
> {code}
> SELECT * FROM entity WHERE status_key < 3 and field_values contains {key: 
> 'userName', value: 'Sprint5_22'} and gen_type = 'USER' and domain = 
> 'S4_1017.abc.com' allow filtering;
> {code}
> The above query returns values for some rows but not for many others, even
> though those rows and their data exist.
> Observation:
> If I execute the query with columns other than field_maps, it returns
> values. I suspect the problem is with LIST of UDT.
> I have a single-node Cassandra DB. Please let me know why this strange
> behavior occurs.





[jira] [Commented] (CASSANDRA-8421) Cassandra 2.1.1 & Cassandra 2.1.2 UDT not returning value for LIST type as UDT

2015-01-19 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282631#comment-14282631
 ] 

Benjamin Lerer commented on CASSANDRA-8421:
---

My mistake, the order difference was due to the fact that I was using different 
partitioners with the shell and the unit tests.


> Cassandra 2.1.1 & Cassandra 2.1.2 UDT not returning value for LIST type as UDT
> --
>
> Key: CASSANDRA-8421
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8421
> Project: Cassandra
>  Issue Type: Bug
>  Components: API
> Environment: single node cassandra 
>Reporter: madheswaran
>Assignee: Benjamin Lerer
> Fix For: 3.0, 2.1.3
>
> Attachments: 8421-unittest.txt, entity_data.csv
>
>
> I am using a List whose element type is a UDT.
> UDT:
> {code}
> CREATE TYPE
> fieldmap (
>  key text,
>  value text
> );
> {code}
> TABLE:
> {code}
> CREATE TABLE entity (
>   entity_id uuid PRIMARY KEY,
>   begining int,
>   domain text,
>   domain_type text,
>   entity_template_name text,
>   field_values list,
>   global_entity_type text,
>   revision_time timeuuid,
>   status_key int,
>   status_name text,
>   uuid timeuuid
>   ) {code}
> INDEX:
> {code}
> CREATE INDEX entity_domain_idx_1 ON galaxy_dev.entity (domain);
> CREATE INDEX entity_field_values_idx_1 ON galaxy_dev.entity (field_values);
> CREATE INDEX entity_global_entity_type_idx_1 ON galaxy_dev.entity (gen_type );
> {code}
> QUERY
> {code}
> SELECT * FROM entity WHERE status_key < 3 and field_values contains {key: 
> 'userName', value: 'Sprint5_22'} and gen_type = 'USER' and domain = 
> 'S4_1017.abc.com' allow filtering;
> {code}
> The above query returns values for some rows but not for many others, even
> though those rows and their data exist.
> Observation:
> If I execute the query with columns other than field_maps, it returns
> values. I suspect the problem is with LIST of UDT.
> I have a single-node Cassandra DB. Please let me know why this strange
> behavior occurs.





[jira] [Commented] (CASSANDRA-7432) Add new CMS GC flags to cassandra_env.sh for JVM later than 1.7.0_60

2015-01-19 Thread Jeremy Hanna (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282609#comment-14282609
 ] 

Jeremy Hanna commented on CASSANDRA-7432:
-

It looks like it is in 2.0, but not in 2.1 forward.

> Add new CMS GC flags to cassandra_env.sh for JVM later than 1.7.0_60
> 
>
> Key: CASSANDRA-7432
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7432
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Packaging
>Reporter: graham sanderson
>Assignee: Brandon Williams
> Fix For: 2.0.10, 2.1.1
>
> Attachments: 7432.txt
>
>
> The new flags in question are as follows:
> {code}
> -XX:+CMSParallelInitialMarkEnabled
> -XX:+CMSEdenChunksRecordAlways
> {code}
> Given we already have
> {code}
> JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC" 
> JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC" 
> JVM_OPTS="$JVM_OPTS -XX:+CMSParallelRemarkEnabled" 
> JVM_OPTS="$JVM_OPTS -XX:+UseTLAB"
> if [ "$JVM_ARCH" = "64-Bit" ] ; then
> JVM_OPTS="$JVM_OPTS -XX:+UseCondCardMark"
> fi
> {code}
> The assumption would be that people are at least running on a large number of
> CPU cores/threads.
> I would therefore recommend defaulting these flags if available - the only
> two possible downsides for {{+CMSEdenChunksRecordAlways}} are:
> 1) There is a new, very short (probably uncontended) lock in the "slow"
> (non-TLAB) eden allocation path with {{+CMSEdenChunksRecordAlways}}. I haven't
> detected this timing-wise - this is the "slow" path, after all.
> 2) If you are running with {{-XX:-UseCMSCompactAtFullCollection}} (not the
> default) *and* you call {{System.gc()}}, then {{+CMSEdenChunksRecordAlways}}
> will expose you to a possible seg fault (see
> [http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8021809]).





cassandra git commit: use constants

2015-01-19 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 17624248e -> d314c07a1


use constants


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d314c07a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d314c07a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d314c07a

Branch: refs/heads/trunk
Commit: d314c07a1fd621362d7b6bb80d7f551943018d44
Parents: 1762424
Author: Dave Brosius 
Authored: Mon Jan 19 09:35:21 2015 -0500
Committer: Dave Brosius 
Committed: Mon Jan 19 09:35:21 2015 -0500

--
 .../org/apache/cassandra/serializers/CollectionSerializer.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d314c07a/src/java/org/apache/cassandra/serializers/CollectionSerializer.java
--
diff --git 
a/src/java/org/apache/cassandra/serializers/CollectionSerializer.java 
b/src/java/org/apache/cassandra/serializers/CollectionSerializer.java
index 29ae2fd..c747bfd 100644
--- a/src/java/org/apache/cassandra/serializers/CollectionSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/CollectionSerializer.java
@@ -94,7 +94,7 @@ public abstract class CollectionSerializer implements TypeSerializer
 
 protected static void writeValue(ByteBuffer output, ByteBuffer value, int version)
 {
-if (version >= 3)
+if (version >= Server.VERSION_3)
 {
 if (value == null)
 {



[jira] [Commented] (CASSANDRA-6538) Provide a read-time CQL function to display the data size of columns and rows

2015-01-19 Thread Prajakta Bhosale (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282474#comment-14282474
 ] 

Prajakta Bhosale commented on CASSANDRA-6538:
-

Yes, I agree - it would be useful in my scenario:
I am trying to copy column family data to .csv files using the "COPY TO"
command; for more than 7 lakh (700,000) records the Cassandra node goes down
without copying the data into the .csv file.
So we decided to copy only the required columns instead of the complete table,
which allowed me to copy more than 7 lakh records without node failure.
If we knew the size of the columns, it would help us decide how many columns
to copy to .csv using the "COPY TO" command.

> Provide a read-time CQL function to display the data size of columns and rows
> -
>
> Key: CASSANDRA-6538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6538
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Johnny Miller
>Priority: Minor
>  Labels: cql
>
> It would be extremely useful to be able to work out the size of rows and 
> columns via CQL. 





[jira] [Comment Edited] (CASSANDRA-7438) Serializing Row cache alternative (Fully off heap)

2015-01-19 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282467#comment-14282467
 ] 

Robert Stupp edited comment on CASSANDRA-7438 at 1/19/15 12:41 PM:
---

I think possibly the best alternative for accessing malloc/free is {{Unsafe}}
with jemalloc in LD_PRELOAD. The native code of {{Unsafe.allocateMemory}} is
basically just a wrapper around {{malloc()}}/{{free()}}.
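A hedged sketch of that wrapper relationship, assuming a HotSpot JVM where the `sun.misc.Unsafe` singleton is reachable via reflection (recent JDKs may print warnings or need extra flags):

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class OffHeapSketch {
    static long roundTrip(long value) throws Exception {
        // Grab the Unsafe singleton via reflection (HotSpot-specific).
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        long addr = unsafe.allocateMemory(8);   // effectively a malloc(8)
        try {
            unsafe.putLong(addr, value);        // write off-heap
            return unsafe.getLong(addr);        // read it back
        } finally {
            unsafe.freeMemory(addr);            // the matching free()
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip(42L)); // 42
    }
}
```

With jemalloc in LD_PRELOAD, the `allocateMemory`/`freeMemory` calls above route through jemalloc's malloc/free instead of the libc allocator.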

Updated the git branch with the following changes:
* update to OHC 0.3
* benchmark: add new command line option to specify key length (-kl)
* free capacity handling moved to segments
* allow to specify preferred memory allocation via system property 
"org.caffinitas.ohc.allocator"
* allow to specify defaults of OHCacheBuilder via system properties prefixed 
with "org.caffinitas.org."
* benchmark: make metrics local to the driver threads
* benchmark: disable bucket histogram in stats by default

I did not change the default number of segments = 2 * CPUs - but I thought 
about that (since you experienced that 256 segments on c3.8xlarge gives some 
improvement). A naive approach to say e.g. 8 * CPUs feels too heavy for small 
systems (with one socket) and might be too much outside of benchmarking. If 
someone wants to get most out of it in production and really hits the number of 
segments, he can always configure it better. WDYT?

Using jemalloc on Linux via LD_PRELOAD is probably the way to go in C* (since 
off-heap is also used elsewhere).
I think we should leave the OS allocator on OSX.
Don't know much about allocator performance on Windows.

For now I do not plan any new features in OHC for C* - so maybe we shall start 
a final review round?


was (Author: snazy):
I think possibly the best alternative for accessing malloc/free is {{Unsafe}}
with jemalloc in LD_PRELOAD. The native code of {{Unsafe.allocateMemory}} is
basically just a wrapper around {{malloc()}}/{{free()}}.

Updated the git branch with the following changes:
* update to OHC 0.3
* benchmark: add new command line option to specify key length (-kl)
* free capacity handling moved to segments
* allow to specify preferred memory allocation via system property 
"org.caffinitas.ohc.allocator"
* allow to specify defaults of OHCacheBuilder via system properties prefixed 
with "org.caffinitas.org."
* benchmark: make metrics local to the driver threads
* benchmark: disable bucket histogram in stats by default

I did not change the default number of segments = 2 * CPUs - but I thought 
about that (since you experienced that 256 segments on c3.8xlarge gives some 
improvement). A naive approach to say e.g. 8 * CPUs feels too heavy for small 
systems (with one socket) and might be too much outside of benchmarking. If 
someone wants to get most out of it in production and really hits the number of 
segments, he can always configure it better. WDYT?

Using jemalloc on Linux via LD_PRELOAD is probably the way to go in C* (since 
off-heap is also used elsewhere).
I think we should leave the OS allocator on OSX.
Don't know much about allocator performance on Windows.

For now I do not plan any new features for C* - so maybe we shall start a final 
review round?

> Serializing Row cache alternative (Fully off heap)
> --
>
> Key: CASSANDRA-7438
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7438
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: Linux
>Reporter: Vijay
>Assignee: Robert Stupp
>  Labels: performance
> Fix For: 3.0
>
> Attachments: 0001-CASSANDRA-7438.patch, tests.zip
>
>
> Currently SerializingCache is partially off heap; keys are still stored in
> the JVM heap as BBs.
> * There is a higher GC cost for a reasonably big cache.
> * Some users have used the row cache efficiently in production for better
> results, but this requires careful tuning.
> * Memory overhead for the cache entries is relatively high.
> So the proposal for this ticket is to move the LRU cache logic completely off
> heap and use JNI to interact with the cache. We might want to ensure that the
> new implementation matches the existing APIs (ICache), and the implementation
> needs to have safe memory access, low memory overhead, and as few memcpys as
> possible.
> We might also want to make this cache configurable.





[jira] [Commented] (CASSANDRA-7438) Serializing Row cache alternative (Fully off heap)

2015-01-19 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282467#comment-14282467
 ] 

Robert Stupp commented on CASSANDRA-7438:
-

I think possibly the best alternative for accessing malloc/free is {{Unsafe}}
with jemalloc in LD_PRELOAD. The native code of {{Unsafe.allocateMemory}} is
basically just a wrapper around {{malloc()}}/{{free()}}.

Updated the git branch with the following changes:
* update to OHC 0.3
* benchmark: add new command line option to specify key length (-kl)
* free capacity handling moved to segments
* allow to specify preferred memory allocation via system property 
"org.caffinitas.ohc.allocator"
* allow to specify defaults of OHCacheBuilder via system properties prefixed 
with "org.caffinitas.org."
* benchmark: make metrics local to the driver threads
* benchmark: disable bucket histogram in stats by default

I did not change the default number of segments = 2 * CPUs - but I thought 
about that (since you experienced that 256 segments on c3.8xlarge gives some 
improvement). A naive approach to say e.g. 8 * CPUs feels too heavy for small 
systems (with one socket) and might be too much outside of benchmarking. If 
someone wants to get most out of it in production and really hits the number of 
segments, he can always configure it better. WDYT?

Using jemalloc on Linux via LD_PRELOAD is probably the way to go in C* (since 
off-heap is also used elsewhere).
I think we should leave the OS allocator on OSX.
Don't know much about allocator performance on Windows.

For now I do not plan any new features for C* - so maybe we shall start a final 
review round?

> Serializing Row cache alternative (Fully off heap)
> --
>
> Key: CASSANDRA-7438
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7438
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: Linux
>Reporter: Vijay
>Assignee: Robert Stupp
>  Labels: performance
> Fix For: 3.0
>
> Attachments: 0001-CASSANDRA-7438.patch, tests.zip
>
>
> Currently SerializingCache is partially off heap; keys are still stored in
> the JVM heap as BBs.
> * There is a higher GC cost for a reasonably big cache.
> * Some users have used the row cache efficiently in production for better
> results, but this requires careful tuning.
> * Memory overhead for the cache entries is relatively high.
> So the proposal for this ticket is to move the LRU cache logic completely off
> heap and use JNI to interact with the cache. We might want to ensure that the
> new implementation matches the existing APIs (ICache), and the implementation
> needs to have safe memory access, low memory overhead, and as few memcpys as
> possible.
> We might also want to make this cache configurable.





[jira] [Created] (CASSANDRA-8644) Cassandra node going down while running "COPY TO" command on around 7lakh records.....

2015-01-19 Thread Prajakta Bhosale (JIRA)
Prajakta Bhosale created CASSANDRA-8644:
---

 Summary: Cassandra node going down while running "COPY TO" command 
on around 7lakh records.
 Key: CASSANDRA-8644
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8644
 Project: Cassandra
  Issue Type: Bug
Reporter: Prajakta Bhosale
 Attachments: cassandra.yaml

The Cassandra node goes down while running the "COPY TO" command on one of my
column families, which contains around 7 lakh (700,000) records.
We have a 4-node cluster. Please find the Cassandra config file attached.
$ cassandra -v
2.0.6





[jira] [Comment Edited] (CASSANDRA-8630) Faster sequencial IO (on compaction, streaming, etc)

2015-01-19 Thread Oleg Anastasyev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282315#comment-14282315
 ] 

Oleg Anastasyev edited comment on CASSANDRA-8630 at 1/19/15 11:17 AM:
--

So, here are the numbers from tests on my laptop for uncompressed sstables:
With the RAR patch, the avg compaction speed is
15.84 MB/s
without the RAR patch:
14.32 MB/s
so the RAR patch alone gives +10% for the uncompressed case.

BTW, read is called slightly more often than write during compaction, so any
optimization to RAR has more impact on compaction speed than one to SW.


was (Author: m0nstermind):
So, here are numbers of tests from my laptop for uncompressed sstables:
With RAR patch the avg speed of compaction is
15.84 MB/s
without RAR patch: 
14.32 MB/s
so, the RAR patch alone gives + 10% for the uncompressed case.



> Faster sequencial IO (on compaction, streaming, etc)
> 
>
> Key: CASSANDRA-8630
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8630
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Oleg Anastasyev
>Assignee: Oleg Anastasyev
>  Labels: performance
> Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png
>
>
> When a node is doing a lot of sequential IO (streaming, compacting, etc.), a 
> lot of CPU is lost in calls to RAF's int read() and DataOutputStream's 
> write(int).
> This is because the default implementations of readShort, readLong, etc., as 
> well as their matching write* methods, are implemented as numerous 
> byte-by-byte read and write calls.
> This makes a lot of syscalls as well.
> A quick microbenchmark shows that just reimplementing these methods either 
> way gives an 8x speed increase.
> The attached patch implements the RandomAccessReader.read and 
> SequentialWriter.write methods in a more efficient way.
> I also eliminated some extra byte copies in CompositeType.split and 
> ColumnNameHelper.maxComponents, which were on my profiler's hotspot method 
> list during tests.
> Stress tests on my laptop show that this patch makes compaction 25-30% 
> faster on uncompressed sstables and 15% faster for compressed ones.
> A deployment to production shows much less CPU load for compaction. 
> (I attached a CPU load graph from one of our production clusters; orange is 
> niced CPU load, i.e. compaction; yellow is user, i.e. non-compaction tasks.)
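The contrast the description draws can be sketched like this (illustrative only, not Cassandra's actual RandomAccessReader/SequentialWriter code): the default readLong-style implementation issues eight single-byte read() calls, while a bulk version pulls all eight bytes in one call and assembles the value.

```java
import java.io.IOException;
import java.io.InputStream;

// Illustration of the pattern the patch targets (EOF checks omitted for brevity).
public class BulkReadSketch {
    // Byte-by-byte, as the default DataInput-style implementations do:
    // eight read() calls per long, each potentially a syscall on an
    // unbuffered stream.
    static long readLongByteByByte(InputStream in) throws IOException {
        long v = 0;
        for (int i = 0; i < 8; i++)
            v = (v << 8) | (in.read() & 0xFF);
        return v;
    }

    // Bulk: fetch all eight bytes with as few read calls as possible,
    // then assemble the value in registers.
    static long readLongBulk(InputStream in) throws IOException {
        byte[] b = new byte[8];
        int off = 0;
        while (off < 8) {
            int n = in.read(b, off, 8 - off);
            if (n < 0)
                throw new IOException("unexpected EOF");
            off += n;
        }
        long v = 0;
        for (byte x : b)
            v = (v << 8) | (x & 0xFF);
        return v;
    }
}
```

Both produce the same big-endian value; the difference is purely in the number of per-byte calls (and, without buffering, syscalls) on the hot path.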





[jira] [Commented] (CASSANDRA-8630) Faster sequencial IO (on compaction, streaming, etc)

2015-01-19 Thread Oleg Anastasyev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282315#comment-14282315
 ] 

Oleg Anastasyev commented on CASSANDRA-8630:


So, here are the test numbers from my laptop for uncompressed sstables:
With the RAR patch the avg compaction speed is
15.84 MB/s
without the RAR patch:
14.32 MB/s
So the RAR patch alone gives +10% for the uncompressed case.



> Faster sequencial IO (on compaction, streaming, etc)
> 
>
> Key: CASSANDRA-8630
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8630
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Oleg Anastasyev
>Assignee: Oleg Anastasyev
>  Labels: performance
> Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png
>
>
> When a node is doing a lot of sequential IO (streaming, compacting, etc.), a 
> lot of CPU is lost in calls to RAF's int read() and DataOutputStream's 
> write(int).
> This is because the default implementations of readShort, readLong, etc., as 
> well as their matching write* methods, are implemented as numerous 
> byte-by-byte read and write calls.
> This makes a lot of syscalls as well.
> A quick microbenchmark shows that just reimplementing these methods either 
> way gives an 8x speed increase.
> The attached patch implements the RandomAccessReader.read and 
> SequentialWriter.write methods in a more efficient way.
> I also eliminated some extra byte copies in CompositeType.split and 
> ColumnNameHelper.maxComponents, which were on my profiler's hotspot method 
> list during tests.
> Stress tests on my laptop show that this patch makes compaction 25-30% 
> faster on uncompressed sstables and 15% faster for compressed ones.
> A deployment to production shows much less CPU load for compaction. 
> (I attached a CPU load graph from one of our production clusters; orange is 
> niced CPU load, i.e. compaction; yellow is user, i.e. non-compaction tasks.)





[jira] [Created] (CASSANDRA-8643) merkle tree creation fails with NoSuchElementException

2015-01-19 Thread Jan Karlsson (JIRA)
Jan Karlsson created CASSANDRA-8643:
---

 Summary: merkle tree creation fails with NoSuchElementException
 Key: CASSANDRA-8643
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8643
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: We are running a three-node cluster with replication 
factor three (C* 2.1.2). It uses a default C* installation and STCS.
Reporter: Jan Karlsson


We encountered a problem during testing over the weekend: repairs started to 
fail. This error has occurred on multiple non-coordinator nodes during 
repair. Repair also ran at least once without producing this error.

We run repair -pr on all nodes on different days. CPU values were around 40% 
and disk was 50% full.

From what I understand, the coordinator asked for merkle trees from the other 
two nodes. However, one of the nodes fails to create its merkle tree.

Unfortunately we do not have a way to reproduce this problem.

The coordinator receives:
{noformat}
2015-01-09T17:55:57.091+0100  INFO [RepairJobTask:4] RepairJob.java:145 [repair 
#59455950-9820-11e4-b5c1-7797064e1316] requesting merkle trees for censored (to 
[/xx.90, /xx.98, /xx.82])
2015-01-09T17:55:58.516+0100  INFO [AntiEntropyStage:1] RepairSession.java:171 
[repair #59455950-9820-11e4-b5c1-7797064e1316] Received merkle tree for 
censored from /xx.90
2015-01-09T17:55:59.581+0100 ERROR [AntiEntropySessions:76] 
RepairSession.java:303 [repair #59455950-9820-11e4-b5c1-7797064e1316] session 
completed with the following error
org.apache.cassandra.exceptions.RepairException: [repair 
#59455950-9820-11e4-b5c1-7797064e1316 on censored/censored, 
(-6476420463551243930,-6471459119674373580]] Validation failed in /xx.98
at 
org.apache.cassandra.repair.RepairSession.validationComplete(RepairSession.java:166)
 ~[apache-cassandra-2.1.1.jar:2.1.1]
at 
org.apache.cassandra.service.ActiveRepairService.handleMessage(ActiveRepairService.java:384)
 ~[apache-cassandra-2.1.1.jar:2.1.1]
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:126)
 ~[apache-cassandra-2.1.1.jar:2.1.1]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62) 
~[apache-cassandra-2.1.1.jar:2.1.1]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
[na:1.7.0_51]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_51]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
2015-01-09T17:55:59.582+0100 ERROR [AntiEntropySessions:76] 
CassandraDaemon.java:153 Exception in thread 
Thread[AntiEntropySessions:76,5,RMI Runtime]
java.lang.RuntimeException: org.apache.cassandra.exceptions.RepairException: 
[repair #59455950-9820-11e4-b5c1-7797064e1316 on censored/censored, 
(-6476420463551243930,-6471459119674373580]] Validation failed in /xx.98
at com.google.common.base.Throwables.propagate(Throwables.java:160) 
~[guava-16.0.jar:na]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
~[apache-cassandra-2.1.1.jar:2.1.1]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
~[na:1.7.0_51]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
~[na:1.7.0_51]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_51]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_51]
    at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
Caused by: org.apache.cassandra.exceptions.RepairException: [repair 
#59455950-9820-11e4-b5c1-7797064e1316 on censored/censored, 
(-6476420463551243930,-6471459119674373580]] Validation failed in /xx.98
at 
org.apache.cassandra.repair.RepairSession.validationComplete(RepairSession.java:166)
 ~[apache-cassandra-2.1.1.jar:2.1.1]
at 
org.apache.cassandra.service.ActiveRepairService.handleMessage(ActiveRepairService.java:384)
 ~[apache-cassandra-2.1.1.jar:2.1.1]
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:126)
 ~[apache-cassandra-2.1.1.jar:2.1.1]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62) 
~[apache-cassandra-2.1.1.jar:2.1.1]
... 3 common frames omitted
{noformat}
While one of the other nodes produces this error:
{noformat}
2015-01-09T17:55:59.574+0100 ERROR [ValidationExecutor:16] Validator.java:232 
Failed creating a merkle tree for [repair #59455950-9820-11e4-b5c1-7797064e1316 
on censored/censored, (-6476420463551243930,-6471459119674373580]], /xx.82 (see 
log for details)
2015-01-09T17:55:59.578+0100 ERROR [ValidationExecutor:16] 
CassandraDaemon.java:153 Exception in thread 
Thread[ValidationExecutor

[jira] [Updated] (CASSANDRA-7974) Enable tooling to detect hot partitions

2015-01-19 Thread Chris Lohfink (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Lohfink updated CASSANDRA-7974:
-
Attachment: CASSANDRA-7974v3.txt

Attached with additional changes:
* Uses open-type datatypes (CompositeData with TabularData) for safe MBean 
deserialization
* No longer exposes the key validator; instead converts the key to a 
human-readable string, sent along with the raw bytes in the CompositeData
* Uses its own executor instead of TRACE

> Enable tooling to detect hot partitions
> ---
>
> Key: CASSANDRA-7974
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7974
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>Assignee: Chris Lohfink
> Fix For: 2.1.3
>
> Attachments: 7974.txt, CASSANDRA-7974v3.txt, cassandra-2.1-7974v2.txt
>
>
> Sometimes you know you have a hot partition by the load on a replica set, but 
> have no way of determining which partition it is.  Tracing is inadequate for 
> this without a lot of post-tracing analysis that might not yield results.  
> Since we already include stream-lib for HLL in compaction metadata, it 
> shouldn't be too hard to wire up topK for X seconds via jmx/nodetool and then 
> return the top partitions hit.
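A rough sketch of the sampling idea (hypothetical class; a real implementation would use stream-lib's bounded-memory top-K structure rather than this unbounded HashMap): record partition keys seen during the sampling window, then report the K most frequent.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sampler for the jmx/nodetool idea in the ticket: enable it for
// X seconds, feed it every partition key touched, then ask for the top K.
public class HotPartitionSampler {
    private final Map<String, Long> counts = new HashMap<>();

    // Called from the read/write path while sampling is enabled.
    public void record(String partitionKey) {
        counts.merge(partitionKey, 1L, Long::sum);
    }

    // Returns the K most frequently seen partition keys with their counts.
    public List<Map.Entry<String, Long>> topK(int k) {
        List<Map.Entry<String, Long>> all = new ArrayList<>(counts.entrySet());
        all.sort(Map.Entry.<String, Long>comparingByValue(Comparator.reverseOrder()));
        return all.subList(0, Math.min(k, all.size()));
    }
}
```

The HashMap here grows with the number of distinct keys; the point of a Space-Saving/top-K structure like the one in stream-lib is to bound that memory while still surfacing the hottest partitions.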





cassandra git commit: Make CassandraException unchecked, extend RuntimeException

2015-01-19 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/trunk 6ba099940 -> 17624248e


Make CassandraException unchecked, extend RuntimeException

Patch by Robert Stupp, reviewed by Sylvain Lebresne for CASSANDRA-8560


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/17624248
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/17624248
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/17624248

Branch: refs/heads/trunk
Commit: 17624248efc316de125d1bc3c6be4f0cb4e174a2
Parents: 6ba0999
Author: Robert Stupp 
Authored: Mon Jan 19 09:22:51 2015 +0100
Committer: Robert Stupp 
Committed: Mon Jan 19 09:22:51 2015 +0100

--
 CHANGES.txt |   1 +
 .../cassandra/auth/CassandraAuthorizer.java |  17 +-
 .../cassandra/auth/PasswordAuthenticator.java   |  25 +--
 .../org/apache/cassandra/client/RingCache.java  |   2 +-
 .../org/apache/cassandra/config/CFMetaData.java |  28 +--
 .../apache/cassandra/cql3/QueryProcessor.java   | 117 
 .../org/apache/cassandra/cql3/TypeCast.java |  21 +--
 .../apache/cassandra/cql3/UntypedResultSet.java |  16 +-
 .../cql3/statements/CreateTableStatement.java   |  19 +-
 .../statements/SchemaAlteringStatement.java |  17 +-
 .../cql3/statements/TruncateStatement.java  |  10 +-
 .../db/index/SecondaryIndexManager.java |  11 +-
 .../db/marshal/DynamicCompositeType.java|  22 +--
 .../exceptions/CassandraException.java  |   2 +-
 .../apache/cassandra/hadoop/ConfigHelper.java   |  24 +--
 .../hadoop/pig/AbstractCassandraStorage.java|   4 -
 .../io/compress/CompressionParameters.java  |   9 +-
 .../locator/AbstractReplicationStrategy.java|  24 +--
 .../cassandra/schema/LegacySchemaTables.java| 178 ---
 .../org/apache/cassandra/tracing/Tracing.java   |   5 -
 .../org/apache/cassandra/cql3/CQLTester.java|  73 ++--
 .../cassandra/cql3/ThriftCompatibilityTest.java |  13 +-
 22 files changed, 187 insertions(+), 451 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/17624248/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index f5a10ee..41bdba9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0
+ * Make CassandraException unchecked, extend RuntimeException (CASSANDRA-8560)
  * Support direct buffer decompression for reads (CASSANDRA-8464)
  * DirectByteBuffer compatible LZ4 methods (CASSANDRA-7039)
  * Add role based access control (CASSANDRA-7653)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/17624248/src/java/org/apache/cassandra/auth/CassandraAuthorizer.java
--
diff --git a/src/java/org/apache/cassandra/auth/CassandraAuthorizer.java 
b/src/java/org/apache/cassandra/auth/CassandraAuthorizer.java
index 6239bc4..1d672b3 100644
--- a/src/java/org/apache/cassandra/auth/CassandraAuthorizer.java
+++ b/src/java/org/apache/cassandra/auth/CassandraAuthorizer.java
@@ -369,18 +369,11 @@ public class CassandraAuthorizer implements IAuthorizer
 
     private SelectStatement prepare(String entityname, String permissionsTable)
     {
-        try
-        {
-            String query = String.format("SELECT permissions FROM %s.%s WHERE %s = ? AND resource = ?",
-                                         AuthKeyspace.NAME,
-                                         permissionsTable,
-                                         entityname);
-            return (SelectStatement) QueryProcessor.getStatement(query, ClientState.forInternalCalls()).statement;
-        }
-        catch (RequestValidationException e)
-        {
-            throw new AssertionError(e);
-        }
+        String query = String.format("SELECT permissions FROM %s.%s WHERE %s = ? AND resource = ?",
+                                     AuthKeyspace.NAME,
+                                     permissionsTable,
+                                     entityname);
+        return (SelectStatement) QueryProcessor.getStatement(query, ClientState.forInternalCalls()).statement;
     }
 
 /**

http://git-wip-us.apache.org/repos/asf/cassandra/blob/17624248/src/java/org/apache/cassandra/auth/PasswordAuthenticator.java
--
diff --git a/src/java/org/apache/cassandra/auth/PasswordAuthenticator.java 
b/src/java/org/apache/cassandra/auth/PasswordAuthenticator.java
index 14a6ecf..2ab2316 100644
--- a/src/java/org/apache/cassandra/auth/PasswordAuthenticator.java
+++ b/src/java/org/apache/cassandra/auth/PasswordAuthenticator.java
@@ -140,18 +140,10 @@ public class PasswordAuthenticator implements 
IAuthenticator
 private AuthenticatedUser doAuthenticate(String u

[jira] [Commented] (CASSANDRA-8560) Make CassandraException be an unchecked exception

2015-01-19 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282237#comment-14282237
 ] 

Sylvain Lebresne commented on CASSANDRA-8560:
-

+1

> Make CassandraException be an unchecked exception
> -
>
> Key: CASSANDRA-8560
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8560
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.0
>
> Attachments: 8560-001.txt
>
>
> {{CassandraException}} (which is the base class of our query validation and 
> execution exceptions, including {{InvalidRequestException}}, 
> {{UnavailableException}}, ...) is a checked exception. Those exceptions are 
> pervasive and are rarely meant to be caught within Cassandra, since they are 
> meant for reporting problems to the end user, so I'm not convinced the 
> benefit of checked exceptions outweighs the cost of having to put throws 
> everywhere.
> Concretely, the fact that these are checked exceptions is currently a pain 
> for 2 outstanding tickets:
> * CASSANDRA-8528: as Robert put it, it forces us to "touch half of the 
> source files just to add a throws/catch even in code that can never use UDFs"
> * CASSANDRA-8099: the ticket transforms some code (in StorageProxy for 
> instance) into iterators, but an iterator can't throw checked exceptions. In 
> fact, the current WIP patch for that ticket already switches 
> {{CassandraException}} to extend {{RuntimeException}} for that very reason.
> I understand that "checked" vs "unchecked" exceptions is an old debate with 
> proponents in both camps, but I'm pretty sure the costs of checked 
> exceptions outweigh the benefits in this case.
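The CASSANDRA-8099 point can be seen in a few lines: java.util.Iterator.next() declares no checked exceptions, so only an unchecked exception (a stand-in class here, not the real CassandraException) can escape an iterator without being wrapped first.

```java
import java.util.Iterator;

public class UncheckedExceptionSketch {
    // Stand-in for an unchecked CassandraException; extending RuntimeException
    // is exactly the change the ticket proposes.
    static class QueryException extends RuntimeException {
        QueryException(String msg) { super(msg); }
    }

    static Iterator<String> failingResultIterator() {
        return new Iterator<String>() {
            public boolean hasNext() { return true; }
            public String next() {
                // This compiles only because QueryException is unchecked;
                // throwing a checked exception here would be a compile error,
                // since Iterator.next() declares no throws clause.
                throw new QueryException("unavailable");
            }
        };
    }
}
```

With a checked base class, every iterator over query results would instead have to catch and re-wrap, which is the boilerplate the ticket wants to remove.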


