Re: leveled compaction - improve log message

2012-04-09 Thread aaron morton
If you would like to see a change, create a request for an improvement here:
https://issues.apache.org/jira/browse/CASSANDRA

Cheers


-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 6/04/2012, at 12:51 PM, Radim Kolar wrote:

 it would be really helpful if leveled compaction printed the level into syslog.
 
 Demo:
 
 INFO [CompactionExecutor:891] 2012-04-05 22:39:27,043 CompactionTask.java
 (line 113) Compacting ***LEVEL 1***
 [SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19690-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19688-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19691-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19700-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19686-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19696-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19687-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19695-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19689-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19694-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19693-Data.db')]
 
 INFO [CompactionExecutor:891] 2012-04-05 22:39:57,299 CompactionTask.java
 (line 221) *** LEVEL 1 *** Compacted to
 [/var/lib/cassandra/data/rapidshare/querycache-hc-19701-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19702-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19703-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19704-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19705-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19706-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19707-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19708-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19709-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19710-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19711-Data.db,].
 59,643,011 to 57,564,216 (~96% of original) bytes for 590,909 keys at
 1.814434MB/s.  Time: 30,256ms.
 
 



Re: a very simple indexing question (strange thing seen in CLI)

2012-04-09 Thread aaron morton
 First off, why do I see (01)? I have a similar CF where I just see 1.
The CF uses BytesType as the comparator, which displays values as hex; 01 is
the hex representation of 1.

 Before inserting the data, I did assume to ascii
 on the keys, comparator and validator.
This is a feature of cassandra-cli and does not change the server-side schema.
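To make the change stick you would need to set it in the schema itself. The comparator cannot be changed after the CF has been created, but the validators can; something like this should do it:

[default@dev] update column family files with key_validation_class = AsciiType and default_validation_class = AsciiType;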

 What is happening? Sorry for the admittedly trivial question, obviously I'm 
 stuck with something quite simple
 which I managed to do with zero effort in the past.
This works for me:

[default@dev] get files where '1'='1460103677';

0 Row Returned.

This fails.

[default@dev] assume files comparator as ascii;
Assumption for column family 'files' added successfully.
[default@dev] get files where '01'='1460103677';
No indexed columns present in index clause with operator EQ
[default@dev] 

Do you still have the assume present?

Cheers


-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 8/04/2012, at 12:48 PM, Maxim Potekhin wrote:

 Greetings,
 Cassandra 0.8.8 is used.
 
 I'm trying to create an additional CF which is trivial in all respects. Just 
 ascii columns and a few indexes.
 
 This is how I add an index:
 update column family files with column_metadata = [{column_name : '1',  
 validation_class : AsciiType, index_type : 0, index_name : 'pandaid'}];
 
 When I do show keyspaces, I see this:
 
ColumnFamily: files
  Key Validation Class: org.apache.cassandra.db.marshal.BytesType
  Default column value validator: org.apache.cassandra.db.marshal.BytesType
  Columns sorted by: org.apache.cassandra.db.marshal.BytesType
  Row cache size / save period in seconds: 0.0/0
  Row Cache Provider: 
 org.apache.cassandra.cache.ConcurrentLinkedHashCacheProvider
  Key cache size / save period in seconds: 20.0/14400
  Memtable thresholds: 2.2828125/1440/487 (millions of ops/minutes/MB)
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 1.0
  Replicate on write: true
  Built indexes: [files.pandaid]
  Column Metadata:
Column Name:  (01)
  Validation Class: org.apache.cassandra.db.marshal.AsciiType
  Index Name: pandaid
  Index Type: KEYS
 
 First off, why do I see (01)? I have a similar CF where I just see 1. 
 Before inserting the data, I did assume to ascii
 on the keys, comparator and validator. The index has been built. When I try 
 to access the data via the index, I get this:
 [default@PANDA] get files where '1'='1460103677';
 InvalidRequestException(why:No indexed columns present in index clause with 
 operator EQ)
 
 
 What is happening? Sorry for the admittedly trivial question, obviously I'm 
 stuck with something quite simple
 which I managed to do with zero effort in the past.
 
 Maxim
 
 
 



Re: Resident size growth

2012-04-09 Thread aaron morton
see http://wiki.apache.org/cassandra/FAQ#mmap

Cheers

-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 9/04/2012, at 5:09 AM, ruslan usifov wrote:

 mmap sstables? It's normal
 
 2012/4/5 Omid Aladini omidalad...@gmail.com
 Hi,
 
 I'm experiencing a steady growth in the resident size of the JVM running
 Cassandra 1.0.7. I disabled JNA and the off-heap row cache, tested with and
 without mlockall (which disables paging), and upgraded to JRE 1.6.0_31 to
 prevent this bug [1] from leaking memory. Still, the JVM's resident set size
 grows steadily. A process with Xmx=2048M has grown to 6GB resident size and
 one with Xmx=8192M to 16GB in a few hours, and it keeps increasing. Has
 anyone experienced this? Any idea how to deal with this issue?
 
 Thanks,
 Omid
 
 [1] http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7066129
 



Re: Listen and RPC address

2012-04-09 Thread aaron morton
Background: Configuration section 
http://www.datastax.com/dev/blog/bulk-loading

I *think* you can get by with changing the rpc_port and storage_port for the
bulk loader config. If that does not work, create another loopback interface
and bind the bulk loader to it…

sudo ifconfig lo0 alias 127.0.0.2 up
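 
Then point the cassandra.yaml that the bulk loader reads at the alias, with something like this in that copy of the config (untested, values assumed):

listen_address: 127.0.0.2
rpc_address: 127.0.0.2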

Cheers

-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 9/04/2012, at 8:31 PM, Rishabh Agrawal wrote:

 Hello,
 
 I have a three-node cluster with listen addresses xxx.xx.1.101,
 xxx.xx.1.102, xxx.xx.1.103 and RPC addresses xxx.xx.1.111, xxx.xx.1.112,
 xxx.xx.1.113. rpc_port and storage_port are 9160 and 7000 respectively.
 
 Now when I run sstableloader tool I get following error:
 
 org.apache.cassandra.config.ConfigurationException: /xxx.xx.1.101:7000 is in 
 use by another process.  Change listen_address:storage_port in cassandra.yaml 
 to values that do not conflict with other service.
 
 Can someone help me with what I am missing in the configuration?
 
  
 Thanks and Regards
 
 Rishabh Agarawal
 
 
 



RE: Listen and RPC address

2012-04-09 Thread Rishabh Agrawal
Thanks, it just worked. Though I am now able to load sstables, I get the
following error:

ERROR 15:44:23,557 Error in ThreadPoolExecutor
java.lang.IllegalArgumentException: Unknown CF 1000

What could be the reason?
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Monday, April 09, 2012 3:30 PM
To: user@cassandra.apache.org
Subject: Re: Listen and RPC address

Background: Configuration section 
http://www.datastax.com/dev/blog/bulk-loading

I *think* you can get by with changing the rpc_port and storage_port for the
bulk loader config. If that does not work, create another loopback interface
and bind the bulk loader to it...

sudo ifconfig lo0 alias 127.0.0.2 up

Cheers

-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 9/04/2012, at 8:31 PM, Rishabh Agrawal wrote:


Hello,
I have a three-node cluster with listen addresses xxx.xx.1.101,
xxx.xx.1.102, xxx.xx.1.103 and RPC addresses xxx.xx.1.111, xxx.xx.1.112,
xxx.xx.1.113. rpc_port and storage_port are 9160 and 7000 respectively.
Now when I run sstableloader tool I get following error:
org.apache.cassandra.config.ConfigurationException: /xxx.xx.1.101:7000 is in 
use by another process.  Change listen_address:storage_port in cassandra.yaml 
to values that do not conflict with other service.
Can someone help me with what I am missing in the configuration?

Thanks and Regards
Rishabh Agarawal









Re: Resident size growth

2012-04-09 Thread Jeremiah Jordan
He says he disabled JNA. You can't mmap without JNA, can you?

On Apr 9, 2012, at 4:52 AM, aaron morton wrote:

see http://wiki.apache.org/cassandra/FAQ#mmap

Cheers

-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 9/04/2012, at 5:09 AM, ruslan usifov wrote:

mmap sstables? It's normal

2012/4/5 Omid Aladini omidalad...@gmail.com
Hi,

I'm experiencing a steady growth in the resident size of the JVM running
Cassandra 1.0.7. I disabled JNA and the off-heap row cache, tested with and
without mlockall (which disables paging), and upgraded to JRE 1.6.0_31 to
prevent this bug [1] from leaking memory. Still, the JVM's resident set size
grows steadily. A process with Xmx=2048M has grown to 6GB resident size and
one with Xmx=8192M to 16GB in a few hours, and it keeps increasing. Has
anyone experienced this? Any idea how to deal with this issue?

Thanks,
Omid

[1] http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7066129





Re: Resident size growth

2012-04-09 Thread Omid Aladini
Thanks. Yes, it's due to mmapped SSTable pages that count towards the resident size.

Jeremiah: mmap isn't through JNA, it's via java.nio.MappedByteBuffer I think.
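
If you want to rule it out, disk_access_mode in cassandra.yaml should let you turn mmap off entirely (from memory, at some cost to read performance):

# 'auto' is the default; 'standard' disables mmapped I/O and
# 'mmap_index_only' maps only the index files
disk_access_mode: standard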

-- Omid

On Mon, Apr 9, 2012 at 4:15 PM, Jeremiah Jordan
jeremiah.jor...@morningstar.com wrote:
 He says he disabled JNA. You can't mmap without JNA, can you?

 On Apr 9, 2012, at 4:52 AM, aaron morton wrote:

 see http://wiki.apache.org/cassandra/FAQ#mmap

 Cheers

 -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

 On 9/04/2012, at 5:09 AM, ruslan usifov wrote:

 mmap sstables? It's normal

 2012/4/5 Omid Aladini omidalad...@gmail.com

 Hi,

 I'm experiencing a steady growth in the resident size of the JVM running
 Cassandra 1.0.7. I disabled JNA and the off-heap row cache, tested with and
 without mlockall (which disables paging), and upgraded to JRE 1.6.0_31 to
 prevent this bug [1] from leaking memory. Still, the JVM's resident set size
 grows steadily. A process with Xmx=2048M has grown to 6GB resident size and
 one with Xmx=8192M to 16GB in a few hours, and it keeps increasing. Has
 anyone experienced this? Any idea how to deal with this issue?

 Thanks,
 Omid

 [1] http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7066129






Re: issue with composite row key on CassandraStorage pig?

2012-04-09 Thread Janne Jalkanen

I don't think the Pig code supports Composite *keys* yet. The 1.0.9 code 
supports Composite Column Names tho'...

/Janne

On Apr 8, 2012, at 06:02 , Janwar Dinata wrote:

 Hi,
 
 I have a column family that uses DynamicCompositeType for its 
 keys_validation_class.
 When I try to dump the row keys using Pig, it fails with
 java.lang.ClassCastException: org.apache.pig.data.DataByteArray cannot be
 cast to org.apache.pig.data.Tuple
 
 This is how I create the column family
 create column family CompoKey
with
  key_validation_class =
'DynamicCompositeType(
  a=AsciiType,
  o=BooleanType,
  b=BytesType,
  e=DateType,
  d=DoubleType,
  f=FloatType,
  i=IntegerType,
  x=LexicalUUIDType,
  l=LongType,
  t=TimeUUIDType,
  s=UTF8Type,
  u=UUIDType)' and
  comparator =
'DynamicCompositeType(
  a=AsciiType,
  o=BooleanType,
  b=BytesType,
  e=DateType,
  d=DoubleType,
  f=FloatType,
  i=IntegerType,
  x=LexicalUUIDType,
  l=LongType,
  t=TimeUUIDType,
  s=UTF8Type,
  u=UUIDType)' and
  default_validation_class = CounterColumnType;   
 
 This is my pig script
 rows =  LOAD 'cassandra://PigTest/CompoKey' USING CassandraStorage();
 keys = FOREACH rows GENERATE flatten(key);
 dump keys;
 
 I'm on cassandra 1.0.9 and pig 0.9.2.
 
 Thanks.



Re: issue with composite row key on CassandraStorage pig?

2012-04-09 Thread Janwar Dinata
Hi Janne,

Do you happen to know if support for composite row keys is in the pipeline?

It seems that you did a patch for composite column support in
CassandraStorage.java.
Do you have any pointers for implementing the composite row key feature?

Thanks.

On Mon, Apr 9, 2012 at 11:32 AM, Janne Jalkanen janne.jalka...@ecyrd.com wrote:


 I don't think the Pig code supports Composite *keys* yet. The 1.0.9 code
 supports Composite Column Names tho'...

 /Janne

 On Apr 8, 2012, at 06:02 , Janwar Dinata wrote:

 Hi,

 I have a column family that uses DynamicCompositeType for its
 keys_validation_class.
 When I try to dump the row keys using Pig, it fails with
 java.lang.ClassCastException: org.apache.pig.data.DataByteArray cannot be
 cast to org.apache.pig.data.Tuple

 This is how I create the column family
 create column family CompoKey
with
  key_validation_class =
'DynamicCompositeType(
  a=AsciiType,
  o=BooleanType,
  b=BytesType,
  e=DateType,
  d=DoubleType,
  f=FloatType,
  i=IntegerType,
  x=LexicalUUIDType,
  l=LongType,
  t=TimeUUIDType,
  s=UTF8Type,
  u=UUIDType)' and
  comparator =
'DynamicCompositeType(
  a=AsciiType,
  o=BooleanType,
  b=BytesType,
  e=DateType,
  d=DoubleType,
  f=FloatType,
  i=IntegerType,
  x=LexicalUUIDType,
  l=LongType,
  t=TimeUUIDType,
  s=UTF8Type,
  u=UUIDType)' and
  default_validation_class = CounterColumnType;

 This is my pig script
 rows =  LOAD 'cassandra://PigTest/CompoKey' USING CassandraStorage();
 keys = FOREACH rows GENERATE flatten(key);
 dump keys;

 I'm on cassandra 1.0.9 and pig 0.9.2.

 Thanks.





Re: Request timeout and host marked down

2012-04-09 Thread Daning Wang
Thanks Aaron! Here is the exception; is that the timeout between nodes? Is
there any parameter I can change to reduce the timeout?

me.prettyprint.hector.api.exceptions.HectorTransportException: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
        at me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:33)
        at me.prettyprint.cassandra.model.CqlQuery$1.execute(CqlQuery.java:130)
        at me.prettyprint.cassandra.model.CqlQuery$1.execute(CqlQuery.java:100)
        at me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:103)
        at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:246)
        at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:97)
        at me.prettyprint.cassandra.model.CqlQuery.execute(CqlQuery.java:99)
        at com.netseer.cassandra.cache.dao.CacheReader.getRows(CacheReader.java:267)
        at com.netseer.cassandra.cache.dao.CacheReader.getCache0(CacheReader.java:55)
        at com.netseer.cassandra.cache.dao.CacheDao.getCaches(CacheDao.java:85)
        at com.netseer.cassandra.cache.dao.CacheDao.getCache(CacheDao.java:71)
        at com.netseer.cassandra.cache.dao.CacheDao.getCache(CacheDao.java:149)
        at com.netseer.cassandra.cache.service.CacheServiceImpl.getCache(CacheServiceImpl.java:55)
        at com.netseer.cassandra.cache.service.CacheServiceImpl.getCache(CacheServiceImpl.java:28)
        at com.netseer.dsat.cache.CassandraDSATCacheImpl.get(CassandraDSATCacheImpl.java:62)
        at com.netseer.dsat.cache.CassandraDSATCacheImpl.getTimedValue(CassandraDSATCacheImpl.java:144)
        at com.netseer.dsat.serving.GenericCacheManager$4.call(GenericCacheManager.java:427)
        at com.netseer.dsat.serving.GenericCacheManager$4.call(GenericCacheManager.java:423)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
        at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
        at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
        at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
        at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
        at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
        at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
        at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
        at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
        at org.apache.cassandra.thrift.Cassandra$Client.recv_execute_cql_query(Cassandra.java:1698)
        at org.apache.cassandra.thrift.Cassandra$Client.execute_cql_query(Cassandra.java:1682)
        at me.prettyprint.cassandra.model.CqlQuery$1.execute(CqlQuery.java:106)
        ... 21 more
Caused by: java.net.SocketTimeoutException: Read timed out
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.read(SocketInputStream.java:129)
        at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
        ... 31 more


*and here is the tpstats output*

[cassy@s2.dsat4 ~]$  ~/bin/nodetool -h localhost tpstats
Pool Name                Active   Pending   Completed   Blocked   All time blocked
ReadStage                     3         3   414129625         0                  0
RequestResponseStage          0         0   300591600         0                  0
MutationStage                 0         0    96585276         0                  0
ReadRepairStage               0         0    94185465         0                  0
ReplicateOnWriteStage         0         0           0         0                  0
GossipStage                   0         0     2684813         0                  0
AntiEntropyStage              0         0        5436         0                  0
MigrationStage                0         0          22         0                  0
MemtablePostFlusher           0         0        3553         0                  0
StreamStage                   0         0         167         0                  0
FlushWriter                   0         0        3582         0                 23
MiscStage                     0         0        1163         0                  0
AntiEntropySessions           0         0         399         0                  0
InternalResponseStage         0

Re: Bulk loading errors with 1.0.8

2012-04-09 Thread Jonathan Ellis
On Thu, Apr 5, 2012 at 10:58 AM, Benoit Perroud ben...@noisette.ch wrote:
 ERROR [Thread-23] 2012-04-05 09:58:12,252 AbstractCassandraDaemon.java
 (line 139) Fatal exception in thread Thread[Thread-23,5,main]
 java.lang.RuntimeException: Insufficient disk space to flush 7813594056494754913 bytes
        at org.apache.cassandra.db.ColumnFamilyStore.getFlushPath(ColumnFamilyStore.java:635)
        at org.apache.cassandra.streaming.StreamIn.getContextMapping(StreamIn.java:92)
        at org.apache.cassandra.streaming.IncomingStreamReader.init(IncomingStreamReader.java:68)
        at org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:185)
        at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:81)

 Here I'm not really sure I was able to generate 7 exabytes of data ;)

The bulk loader told the Cassandra node, "I have 7EB of data for you."
And the C* node threw this error. So you need to troubleshoot the
bulk loader side.

If you feel lucky, we've done some work on streaming in 1.1 to make it
more robust, but I don't recognize this specific problem so I can't
say for sure if 1.1 would help.

 ERROR [Thread-46] 2012-04-05 09:58:14,453 AbstractCassandraDaemon.java
 (line 139) Fatal exception in thread Thread[Thread-46,5,main]
 java.lang.NullPointerException
        at org.apache.cassandra.io.sstable.SSTable.getMinimalKey(SSTable.java:156)
        at org.apache.cassandra.io.sstable.SSTableWriter.closeAndOpenReader(SSTableWriter.java:334)
        at org.apache.cassandra.io.sstable.SSTableWriter.closeAndOpenReader(SSTableWriter.java:302)
        at org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:155)
        at org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:89)
        at org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:185)
        at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:81)

 This one sounds like a null key added to the SSTable at some point,
 but I'm rather confident I'm checking for key nullity.

The stacktrace indicates an error with the very first key in the
sstable, if that helps.

-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com


Re: leveled compaction - improve log message

2012-04-09 Thread Jonathan Ellis
CompactionExecutor doesn't have level information available to it; it
just compacts the sstables it's told to.  But if you enable debug
logging on LeveledManifest you'd see what you want.  (Compaction
candidates for L{} are {})

2012/4/5 Radim Kolar h...@filez.com:
 it would be really helpful if leveled compaction printed the level into syslog.

 Demo:

 INFO [CompactionExecutor:891] 2012-04-05 22:39:27,043 CompactionTask.java
 (line 113) Compacting ***LEVEL 1***
 [SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19690-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19688-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19691-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19700-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19686-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19696-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19687-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19695-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19689-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19694-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19693-Data.db')]

  INFO [CompactionExecutor:891] 2012-04-05 22:39:57,299 CompactionTask.java
 (line 221) *** LEVEL 1 *** Compacted to
 [/var/lib/cassandra/data/rapidshare/querycache-hc-19701-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19702-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19703-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19704-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19705-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19706-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19707-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19708-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19709-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19710-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19711-Data.db,].
  59,643,011 to 57,564,216 (~96% of original) bytes for 590,909 keys at
 1.814434MB/s.  Time: 30,256ms.





-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com


Re: leveled compaction - improve log message

2012-04-09 Thread Maki Watanabe
For details, open conf/log4j-server.properties and add the following configuration:

log4j.logger.org.apache.cassandra.db.compaction.LeveledManifest=DEBUG

fyi.

maki


2012/4/10 Jonathan Ellis jbel...@gmail.com:
 CompactionExecutor doesn't have level information available to it; it
 just compacts the sstables it's told to.  But if you enable debug
 logging on LeveledManifest you'd see what you want.  (Compaction
 candidates for L{} are {})

 2012/4/5 Radim Kolar h...@filez.com:
 it would be really helpful if leveled compaction printed the level into syslog.

 Demo:

 INFO [CompactionExecutor:891] 2012-04-05 22:39:27,043 CompactionTask.java
 (line 113) Compacting ***LEVEL 1***
 [SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19690-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19688-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19691-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19700-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19686-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19696-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19687-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19695-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19689-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19694-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19693-Data.db')]

  INFO [CompactionExecutor:891] 2012-04-05 22:39:57,299 CompactionTask.java
 (line 221) *** LEVEL 1 *** Compacted to
 [/var/lib/cassandra/data/rapidshare/querycache-hc-19701-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19702-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19703-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19704-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19705-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19706-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19707-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19708-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19709-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19710-Data.db,/var/lib/cassandra/data/rapidshare/querycache-hc-19711-Data.db,].
  59,643,011 to 57,564,216 (~96% of original) bytes for 590,909 keys at
 1.814434MB/s.  Time: 30,256ms.





 --
 Jonathan Ellis
 Project Chair, Apache Cassandra
 co-founder of DataStax, the source for professional Cassandra support
 http://www.datastax.com


cassandra and .net

2012-04-09 Thread puneet loya
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Thrift.Collections;
using Thrift.Protocol;
using Thrift.Transport;
using Apache.Cassandra;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            TTransport transport = null;
            try
            {
                transport = new TBufferedTransport(new TSocket("127.0.0.1", 7000));

                //if (buffered)
                //    trans = new TBufferedTransport(trans as TStreamTransport);
                //if (framed)
                //    trans = new TFramedTransport(trans);

                TProtocol protocol = new TBinaryProtocol(transport);
                Cassandra.Client client = new Cassandra.Client(protocol);

                Console.WriteLine("Opening connection");

                if (!transport.IsOpen)
                    transport.Open();

                client.describe_keyspace("abc");   // Crashing at this point
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
            }
            finally
            {
                if (transport != null)
                    transport.Close();
            }
            Console.ReadLine();
        }
    }
}

I'm trying to interact with the Cassandra server (database) from .NET. For
that I have referenced two libraries, i.e. apacheCassandra08.dll and
thrift.dll. In the following piece of code the connection is getting opened,
but when I use the client object it gives an error stating "Cannot read,
Remote side has closed."

Can anyone help me out with this? Has anyone faced the same problem?


Re: cassandra and .net

2012-04-09 Thread Pierre Chalamet
hello,

9160 is probably the port to use if you use the default config.
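
Also double check the transport: the server expects framed Thrift by default, so wrapping the socket in a TFramedTransport is worth a try. Untested, but something along these lines:

// sketch only: 9160 is the default client port, framed transport assumed
TTransport transport = new TFramedTransport(new TSocket("127.0.0.1", 9160));
TProtocol protocol = new TBinaryProtocol(transport);
Cassandra.Client client = new Cassandra.Client(protocol);
transport.Open();
client.describe_keyspace("abc");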

- Pierre

On Apr 10, 2012, at 7:26 AM, puneet loya puneetl...@gmail.com wrote:

 using System;
 using System.Collections.Generic;
 using System.Linq;
 using System.Text;
 using Thrift.Collections;
 using Thrift.Protocol;
 using Thrift.Transport;
 using Apache.Cassandra;

 namespace ConsoleApplication1
 {
     class Program
     {
         static void Main(string[] args)
         {
             TTransport transport = null;
             try
             {
                 transport = new TBufferedTransport(new TSocket("127.0.0.1", 7000));

                 //if (buffered)
                 //    trans = new TBufferedTransport(trans as TStreamTransport);
                 //if (framed)
                 //    trans = new TFramedTransport(trans);

                 TProtocol protocol = new TBinaryProtocol(transport);
                 Cassandra.Client client = new Cassandra.Client(protocol);

                 Console.WriteLine("Opening connection");

                 if (!transport.IsOpen)
                     transport.Open();

                 client.describe_keyspace("abc");   // Crashing at this point
             }
             catch (Exception ex)
             {
                 Console.WriteLine(ex.Message);
             }
             finally
             {
                 if (transport != null)
                     transport.Close();
             }
             Console.ReadLine();
         }
     }
 }

 I'm trying to interact with the Cassandra server (database) from .NET. For
 that I have referenced two libraries, i.e. apacheCassandra08.dll and
 thrift.dll. In the following piece of code the connection is getting opened,
 but when I use the client object it gives an error stating "Cannot read,
 Remote side has closed."

 Can anyone help me out with this? Has anyone faced the same problem?




Re: cassandra and .net

2012-04-09 Thread puneet loya
hi,

sorry, I posted the port as 7000. I am using 9160 but still get the same
error:

"Cannot read, Remote side has closed."
Can you guess what's happening?

On Tue, Apr 10, 2012 at 11:00 AM, Pierre Chalamet pie...@chalamet.net wrote:

 hello,

 9160 is probably the port to use if you use the default config.

 - Pierre

 On Apr 10, 2012, at 7:26 AM, puneet loya puneetl...@gmail.com wrote:

  using System;
  using System.Collections.Generic;
  using System.Linq;
  using System.Text;
  using Thrift.Collections;
  using Thrift.Protocol;
  using Thrift.Transport;
  using Apache.Cassandra;

  namespace ConsoleApplication1
  {
      class Program
      {
          static void Main(string[] args)
          {
              TTransport transport = null;
              try
              {
                  transport = new TBufferedTransport(new TSocket("127.0.0.1", 7000));

                  //if (buffered)
                  //    trans = new TBufferedTransport(trans as TStreamTransport);
                  //if (framed)
                  //    trans = new TFramedTransport(trans);

                  TProtocol protocol = new TBinaryProtocol(transport);
                  Cassandra.Client client = new Cassandra.Client(protocol);

                  Console.WriteLine("Opening connection");

                  if (!transport.IsOpen)
                      transport.Open();

                  client.describe_keyspace("abc");   // Crashing at this point
              }
              catch (Exception ex)
              {
                  Console.WriteLine(ex.Message);
              }
              finally
              {
                  if (transport != null)
                      transport.Close();
              }
              Console.ReadLine();
          }
      }
  }
 
  I'm trying to interact with the Cassandra server (database) from .NET. For
  that I have referenced two libraries, i.e. apacheCassandra08.dll and
  thrift.dll. In the following piece of code the connection is getting opened,
  but when I use the client object it gives an error stating "Cannot read,
  Remote side has closed."

  Can anyone help me out with this? Has anyone faced the same problem?
 
 



Re: cassandra and .net

2012-04-09 Thread Maki Watanabe
Check your Cassandra log.
If you can't find anything interesting, set the Cassandra log level
to DEBUG and run your program again.
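
For example, in conf/log4j-server.properties (assuming the stock config), change the root logger:

log4j.rootLogger=DEBUG,stdout,R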

maki

2012/4/10 puneet loya puneetl...@gmail.com:
 hi,

 sorry, I posted the port as 7000. I am using 9160 but still get the same
 error:

 "Cannot read, Remote side has closed."
 Can you guess what's happening?

 On Tue, Apr 10, 2012 at 11:00 AM, Pierre Chalamet pie...@chalamet.net
 wrote:

 hello,

 9160 is probably the port to use if you use the default config.

 - Pierre

 On Apr 10, 2012, at 7:26 AM, puneet loya puneetl...@gmail.com wrote:

  using System;
  using System.Collections.Generic;
  using System.Linq;
  using System.Text;
  using Thrift.Collections;
  using Thrift.Protocol;
  using Thrift.Transport;
  using Apache.Cassandra;

  namespace ConsoleApplication1
  {
      class Program
      {
          static void Main(string[] args)
          {
              TTransport transport = null;
              try
              {
                  transport = new TBufferedTransport(new TSocket("127.0.0.1", 7000));

                  //if (buffered)
                  //    trans = new TBufferedTransport(trans as TStreamTransport);
                  //if (framed)
                  //    trans = new TFramedTransport(trans);

                  TProtocol protocol = new TBinaryProtocol(transport);
                  Cassandra.Client client = new Cassandra.Client(protocol);

                  Console.WriteLine("Opening connection");

                  if (!transport.IsOpen)
                      transport.Open();

                  client.describe_keyspace("abc");   // Crashing at this point
              }
              catch (Exception ex)
              {
                  Console.WriteLine(ex.Message);
              }
              finally
              {
                  if (transport != null)
                      transport.Close();
              }
              Console.ReadLine();
          }
      }
  }
 
  I'm trying to interact with the Cassandra server (database) from .NET. For
  that I have referenced two libraries, i.e. apacheCassandra08.dll and
  thrift.dll. In the following piece of code the connection is getting opened,
  but when I use the client object it gives an error stating "Cannot read,
  Remote side has closed."

  Can anyone help me out with this? Has anyone faced the same problem?