Re: Composite columns: CLI or CQL?

2012-06-11 Thread Sylvain Lebresne
More details would help, like at least the query you're using to do
the insertion that fails, as well as the exact error message.

--
Sylvain

On Mon, Jun 11, 2012 at 2:42 AM, Georg Köster  wrote:
> Dear all,
>
> I'm really excited about Cassandra's peer-to-peer architecture and sorted
> values.
>
> Currently I'm blocked in trials: I cannot insert longs into 'val' in:
>
> create columnfamily entries (
>     id varchar,
>     va varchar,
>     ts bigint,
>     val bigint,
>     PRIMARY KEY (id, va, ts)
> );
>
> I get validation errors (String didn't validate). I can insert strings as
> much as I like. With int it seems to work, too.
>
> Did I make a mistake? The error is somewhat spurious: very seldom the insert goes
> through, but I cannot reliably reproduce that situation. Another question is
> of course whether I can switch validation off while it's still buggy?
>
> Since I'm blocked in my trials: how fast is inserting a new column into the
> middle of a 100,000-column row anyway?
>
> Cheers,
> Georg
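For reference, a CQL bigint column like val is validated as LongType, which expects exactly 8 big-endian bytes (a Java long); a client that accidentally serializes the value as a UTF-8 string will fail that validation, which matches the error above. A minimal Python sketch of the two encodings (an illustration only, not client code):

```python
import struct

def encode_long(value):
    # LongType validation: exactly 8 big-endian bytes, as a Java long.
    return struct.pack(">q", value)

def encode_string(value):
    # UTF8Type/AsciiType columns store raw UTF-8 bytes; length varies.
    return str(value).encode("utf-8")

as_long = encode_long(1339372800000)
as_str = encode_string(1339372800000)

# A LongType validator rejects anything that is not exactly 8 bytes,
# which is why sending the string form of a long raises a validation error.
assert len(as_long) == 8
assert len(as_str) == 13  # one byte per digit: larger, and variable-length
```

This also explains the later observation in this thread that the string form works but takes more space: 13 bytes here versus a fixed 8.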


Why cassandra nodes ownership is 0.00%

2012-06-11 Thread Prakrati Agrawal
Dear all

I have a Cassandra cluster with 2 nodes. I am using NetworkTopologyStrategy.
I was trying to increase the replication factor of the keyspace in Cassandra to 2. 
I did the following steps:
UPDATE KEYSPACE demo WITH strategy_options = {DC1:2,DC2:2}; on both the nodes
Then I ran the nodetool repair on both the nodes
Then I ran my Hector code to count the number of rows and columns in the 
database.
I get the following error: Unavailable Exception
Also when I run the command
./nodetool -h ip_address ring
I found that both nodes' ownership is 0%. Please tell me how I should fix that.

Thanks and Regards
Prakrati




This email message may contain proprietary, private and confidential 
information. The information transmitted is intended only for the person(s) or 
entities to which it is addressed. Any review, retransmission, dissemination or 
other use of, or taking of any action in reliance upon, this information by 
persons or entities other than the intended recipient is prohibited and may be 
illegal. If you received this in error, please contact the sender and delete 
the message from your system.

Mu Sigma takes all reasonable steps to ensure that its electronic 
communications are free from viruses. However, given Internet accessibility, 
the Company cannot accept liability for any virus introduced by this e-mail or 
any attachment and you are advised to use up-to-date virus checking software.


Re: Composite columns: CLI or CQL?

2012-06-11 Thread Georg Köster
I used Pelops and now I tried Astyanax, and I get no details on the value
that Cassandra is analyzing: only the composite key is in the error
message. If I insert a String instead of the Long (via Long.toString(...) and
Long.valueOf(...)), my tests work, but my column size is larger of course. I can
use it that way, but I would rather find out what's wrong here.

More details tonight - I can send a test case, too!
Georg

On Mon, Jun 11, 2012 at 9:14 AM, Sylvain Lebresne wrote:

> More details would help, like at least the query you're using to do
> the insertion that fails, as well as the exact error message.
>
> --
> Sylvain
>
> On Mon, Jun 11, 2012 at 2:42 AM, Georg Köster 
> wrote:
> > Dear all,
> >
> > I'm really excited about Cassandra's peer-to-peer architecture and sorted
> > values.
> >
> > Currently I'm blocked in trials: I cannot insert longs into 'val' in:
> >
> > create columnfamily entries (
> >
> > id varchar,
> >
> > va varchar,
> >
> > ts bigint,
> >
> > val bigint,
> >
> > PRIMARY KEY (id, va, ts)
> >
> > );
> >
> > I get validation errors (String didn't validate). I can insert strings as
> > much as I like. With int it seems to work, too.
> >
> > Did I make a mistake? The error is somewhat spurious, very seldomly it
> goes
> > through, but I cannot really reproduce that situation. Another question
> is
> > of course if I can switch validation off for the time it's still buggy?
> >
> > Since I don't get to trials: How fast is inserting a new column into the
> > middle of a 100,000 columns row anyways?
> >
> > Cheers,
> > Georg
>


Re: Commit log durability per column family

2012-06-11 Thread aaron morton
It's not possible, and it's not something that can be changed.

A write to the same key in multiple CFs is written as a single log record, so
it's not possible for different CFs to use different strategies.

The closest thing is the durable_writes KS property which turns off the commit 
log for the KS. 

Cheers
 
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 11/06/2012, at 7:28 AM, osishkin osishkin wrote:

> I require durability for inserts to my column families so I'm using
> batch mode to insert data.
> However, I have some column families which I use for less important
> data (indexes) which are much more write intensive.
> If I could change the commit log setting only for them to periodic
> instead of batch, it would improve performance for me, without losing
> too much guarantees.
> 
> Is that even possible?
> I'm using Cassandra 0.7 (and yes, I know it's bad, I will upgrade soon)
> 
> Thank you



Re: Effective ownership on both nodes 0 %

2012-06-11 Thread aaron morton
> Also when I run the command
> ./nodetool -h ip_address ring
> I found that both nodes ownership is 0 %. Please tell me how should I fix 
> that.
It would be a lot easier to answer your question if you showed the output from
nodetool ring.
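For context, the "Owns %" column nodetool prints is derived purely from the token ring: each node owns the span from the previous token up to its own, wrapping around. A rough sketch of that computation for the RandomPartitioner (an illustration, not Cassandra's code):

```python
RING_SIZE = 2 ** 127  # RandomPartitioner token space

def ownership(tokens):
    # Percentage of the ring each token's node owns: the wrapped distance
    # from the previous token to this one.
    tokens = sorted(tokens)
    pct = {}
    for i, token in enumerate(tokens):
        prev = tokens[i - 1]  # i == 0 wraps to the last token
        pct[token] = 100.0 * ((token - prev) % RING_SIZE) / RING_SIZE
    return pct

# Two evenly spaced tokens each own 50%; badly chosen or misreported tokens
# are a common cause of surprising "Owns" numbers.
even = ownership([0, 2 ** 126])
assert all(abs(p - 50.0) < 1e-9 for p in even.values())
```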

> UPDATE KEYSPACE demo WITH strategy_options = {DC1:2,DC2:2}; on both the nodes

This would only make sense if you had 2 data centres, with at least 2 nodes in 
each. 
You probably just want {DC1:2}

> I get the following error: Unavailable Exception

This means there are not enough available nodes to perform the request at the
Consistency Level your code has requested. This is probably due to the
strategy_options set above.

Cheers

-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 11/06/2012, at 6:20 PM, Prakrati Agrawal wrote:

> Dear all
>  
> I have a Cassandra cluster with 2 nodes.
> I was trying to increase the replication factor of keyspace in Cassandra to 
> 2. I did the following steps:
> UPDATE KEYSPACE demo WITH strategy_options = {DC1:2,DC2:2}; on both the nodes
> Then I ran the nodetool repair on both the nodes
> Then I ran my Hector code to count the number of rows and columns in the 
> database.
> I get the following error: Unavailable Exception
> Also when I run the command
> ./nodetool -h ip_address ring
> I found that both nodes ownership is 0 %. Please tell me how should I fix 
> that.
>  
> Thanks and Regards
> Prakrati
>  



RE: how to configure cassandra as multi tenant

2012-06-11 Thread MOHD ARSHAD SALEEM
Hi Aaron,

Can you send me some particular link related to multi tenant research

Regards
Arshad

From: aaron morton [aa...@thelastpickle.com]
Sent: Thursday, June 07, 2012 3:34 PM
To: user@cassandra.apache.org
Subject: Re: how to configure cassandra as multi tenant

Cassandra is not designed to run as a multi tenant database.

There have been some recent discussions on this, search the user group for more 
detailed answers.

Cheers

-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 7/06/2012, at 7:03 PM, MOHD ARSHAD SALEEM wrote:

Hi All,

I wanted to know how to use cassandra as a multi tenant .

Regards
Arshad



Re: how to configure cassandra as multi tenant

2012-06-11 Thread Sasha Dolgy
Google, man.

http://wiki.apache.org/cassandra/MultiTenant
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/about-multitenant-datamodel-td7575966.html
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/For-multi-tenant-is-it-good-to-have-a-key-space-for-each-tenant-td6723290.html

On Mon, Jun 11, 2012 at 11:37 AM, MOHD ARSHAD SALEEM <
marshadsal...@tataelxsi.co.in> wrote:

>  Hi Aaron,
>
> Can you send me some particular link related to multi tenant research
>
> Regards
> Arshad
>  --
> *From:* aaron morton [aa...@thelastpickle.com]
> *Sent:* Thursday, June 07, 2012 3:34 PM
> *To:* user@cassandra.apache.org
> *Subject:* Re: how to configure cassandra as multi tenant
>
>  Cassandra is not designed to run as a multi tenant database.
>
>  There have been some recent discussions on this, search the user group
> for more detailed answers.
>
>  Cheers
>
>-
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
>  On 7/06/2012, at 7:03 PM, MOHD ARSHAD SALEEM wrote:
>
>   Hi All,
>
> I wanted to know how to use cassandra as a multi tenant .
>
> Regards
> Arshad
>
>
>


-- 
Sasha Dolgy
sasha.do...@gmail.com


RE: how to configure cassandra as multi tenant

2012-06-11 Thread MOHD ARSHAD SALEEM
Hi Sasha,

Thanks for your reply, but what you sent only shows how to create a keyspace
manually from the command prompt. How do I create a keyspace (multi-tenant)
automatically using the Cassandra APIs?

Regards
Arshad


From: Sasha Dolgy [sdo...@gmail.com]
Sent: Monday, June 11, 2012 3:09 PM
To: user@cassandra.apache.org
Subject: Re: how to configure cassandra as multi tenant

Google, man.

http://wiki.apache.org/cassandra/MultiTenant
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/about-multitenant-datamodel-td7575966.html
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/For-multi-tenant-is-it-good-to-have-a-key-space-for-each-tenant-td6723290.html

On Mon, Jun 11, 2012 at 11:37 AM, MOHD ARSHAD SALEEM 
mailto:marshadsal...@tataelxsi.co.in>> wrote:
Hi Aaron,

Can you send me some particular link related to multi tenant research

Regards
Arshad

From: aaron morton [aa...@thelastpickle.com]
Sent: Thursday, June 07, 2012 3:34 PM
To: user@cassandra.apache.org
Subject: Re: how to configure cassandra as multi tenant

Cassandra is not designed to run as a multi tenant database.

There have been some recent discussions on this, search the user group for more 
detailed answers.

Cheers

-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 7/06/2012, at 7:03 PM, MOHD ARSHAD SALEEM wrote:

Hi All,

I wanted to know how to use cassandra as a multi tenant .

Regards
Arshad




--
Sasha Dolgy
sasha.do...@gmail.com


Re: Dead node still being pinged

2012-06-11 Thread Samuel CARRIERE
Well, I don't see anything special in the logs. "Remove token" seems to
have done its job: according to the logs, old stored hints have been
deleted.

If I were you, I would connect (through JMX, with jconsole) to one of the
nodes that is sending messages to an old node, and would have a look at
these MBeans:
   - org.apache.cassandra.net.FailureDetector : does SimpleStates look good?
(or do you see an IP of an old node)
   - org.apache.cassandra.net.MessagingService : do you see one of the old IPs
in one of the attributes?
   - org.apache.cassandra.net.StreamingService : do you see an old IP in
StreamSources or StreamDestinations?
   - org.apache.cassandra.internal.HintedHandoff : are there non-zero ActiveCount,
CurrentlyBlockedTasks, PendingTasks, TotalBlockedTasks?

Samuel




Nicolas Lalevée  
08/06/2012 21:03
Please reply to
user@cassandra.apache.org


To
user@cassandra.apache.org
cc

Subject
Re: Dead node still being pinged







On 8 June 2012 at 20:02, Samuel CARRIERE wrote:

> I'm in the train but just a guess : maybe it's hinted handoff. A look in 
the logs of the new nodes could confirm that : look for the IP of an old 
node and maybe you'll find hinted handoff related messages.

I grepped every node's logs for every old node; I found nothing since the
"crash".

If it can be of some help, here is some grepped log of the crash:

system.log.1: WARN [RMI TCP Connection(1037)-10.10.0.26] 2012-05-06 
00:39:30,241 StorageService.java (line 2417) Endpoint /10.10.0.24 is down 
and will not receive data for re-replication of /10.10.0.22
system.log.1: WARN [RMI TCP Connection(1037)-10.10.0.26] 2012-05-06 
00:39:30,242 StorageService.java (line 2417) Endpoint /10.10.0.24 is down 
and will not receive data for re-replication of /10.10.0.22
system.log.1: WARN [RMI TCP Connection(1037)-10.10.0.26] 2012-05-06 
00:39:30,242 StorageService.java (line 2417) Endpoint /10.10.0.24 is down 
and will not receive data for re-replication of /10.10.0.22
system.log.1: WARN [RMI TCP Connection(1037)-10.10.0.26] 2012-05-06 
00:39:30,243 StorageService.java (line 2417) Endpoint /10.10.0.24 is down 
and will not receive data for re-replication of /10.10.0.22
system.log.1: WARN [RMI TCP Connection(1037)-10.10.0.26] 2012-05-06 
00:39:30,243 StorageService.java (line 2417) Endpoint /10.10.0.24 is down 
and will not receive data for re-replication of /10.10.0.22
system.log.1: INFO [GossipStage:1] 2012-05-06 00:44:33,822 Gossiper.java 
(line 818) InetAddress /10.10.0.24 is now dead.
system.log.1: INFO [GossipStage:1] 2012-05-06 04:25:23,894 Gossiper.java 
(line 818) InetAddress /10.10.0.24 is now dead.
system.log.1: INFO [OptionalTasks:1] 2012-05-06 04:25:23,895 
HintedHandOffManager.java (line 179) Deleting any stored hints for 
/10.10.0.24
system.log.1: INFO [GossipStage:1] 2012-05-06 04:25:23,895 
StorageService.java (line 1157) Removing token 
127605887595351923798765477786913079296 for /10.10.0.24
system.log.1: INFO [GossipStage:1] 2012-05-09 04:26:25,015 Gossiper.java 
(line 818) InetAddress /10.10.0.24 is now dead.


Maybe it's the way I removed the nodes? AFAIR I didn't use the
decommission command. For each node, I took the node down and then issued a
remove token command.
Here is what I can find in the log from when I removed one of them:

system.log.1: INFO [GossipTasks:1] 2012-05-02 17:21:10,281 Gossiper.java 
(line 818) InetAddress /10.10.0.24 is now dead.
system.log.1: INFO [HintedHandoff:1] 2012-05-02 17:21:21,496 
HintedHandOffManager.java (line 292) Endpoint /10.10.0.24 died before hint 
delivery, aborting
system.log.1: INFO [GossipStage:1] 2012-05-02 17:21:59,307 Gossiper.java 
(line 818) InetAddress /10.10.0.24 is now dead.
system.log.1: INFO [HintedHandoff:1] 2012-05-02 17:31:20,336 
HintedHandOffManager.java (line 292) Endpoint /10.10.0.24 died before hint 
delivery, aborting
system.log.1: INFO [HintedHandoff:1] 2012-05-02 17:41:06,177 
HintedHandOffManager.java (line 292) Endpoint /10.10.0.24 died before hint 
delivery, aborting
system.log.1: INFO [HintedHandoff:1] 2012-05-02 17:51:18,148 
HintedHandOffManager.java (line 292) Endpoint /10.10.0.24 died before hint 
delivery, aborting
system.log.1: INFO [HintedHandoff:1] 2012-05-02 18:00:31,709 
HintedHandOffManager.java (line 292) Endpoint /10.10.0.24 died before hint 
delivery, aborting
system.log.1: INFO [HintedHandoff:1] 2012-05-02 18:11:02,521 
HintedHandOffManager.java (line 292) Endpoint /10.10.0.24 died before hint 
delivery, aborting
system.log.1: INFO [HintedHandoff:1] 2012-05-02 18:20:38,282 
HintedHandOffManager.java (line 292) Endpoint /10.10.0.24 died before hint 
delivery, aborting
system.log.1: INFO [HintedHandoff:1] 2012-05-02 18:31:09,513 
HintedHandOffManager.java (line 292) Endpoint /10.10.0.24 died before hint 
delivery, aborting
system.log.1: INFO [HintedHandoff:1] 2012-05-02 18:40:31,565 
HintedHandOffManager.java (line 292) Endpoint /10.10.0.24 died before hint 
delivery, aborting
system.log.1: INFO [HintedHandoff:1] 2012-05-02 18:51:10,566 
HintedHandOffManager

Hector code not running when replication factor set to 2

2012-06-11 Thread Prakrati Agrawal
Dear all

I had a 2 node cluster with replication factor set to 1. Then I changed the 
replication factor to 2 and brought down one node so that only 1 node was up 
and running. Then I ran my Hector code on the running node. But it gave me 
Unavailable Exception. I also had a Thrift code which ran successfully. I am 
confused as to why the Hector code did not run. Did I miss something? Please 
help me.

Thanks and Regards
Prakrati
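For what it's worth, UnavailableException means fewer replicas were alive than the consistency level required, and the two clients may simply default to different consistency levels (Hector commonly defaulted to QUORUM at the time, while hand-written Thrift examples often use ONE), which would explain one succeeding and the other failing. A hedged sketch of the availability check, with illustrative names rather than Cassandra's internals:

```python
def required_replicas(cl, rf):
    # Illustrative mapping: QUORUM needs a majority of the replication
    # factor, ALL needs every replica, ONE/TWO need that fixed count.
    if cl == "QUORUM":
        return rf // 2 + 1
    if cl == "ALL":
        return rf
    return {"ONE": 1, "TWO": 2}[cl]

def available(cl, rf, live):
    return live >= required_replicas(cl, rf)

# RF=2 with one of the two replicas down: ONE succeeds, QUORUM cannot.
assert available("ONE", rf=2, live=1)
assert not available("QUORUM", rf=2, live=1)
```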





Re: how to configure cassandra as multi tenant

2012-06-11 Thread Sasha Dolgy
Arshad,

I used google with the following query:  apache cassandra multitenant

Suggest you do the same?  As was mentioned earlier, there has been a lot of
discussion about this topic for the past year -- especially on this mailing
list.  If you want to use Thrift or, to make your life easier, using Hector
or a similar API,  you can create keyspaces however you want ... aligned to
your design / architecture to support Multitenancy.  If it's code specific
help you want ... check out the mailing lists / resources for the various
API's that make working with Thrift easier:

Hector
Pycassa
PHPCassa

etc.

-sd

On Mon, Jun 11, 2012 at 12:05 PM, MOHD ARSHAD SALEEM <
marshadsal...@tataelxsi.co.in> wrote:

>  Hi Sasha,
>
> Thanks for your reply. but what you send this is just to create keyspace
> manually
> using command prompt.how to create keyspace(Multi tenant) automatically
>  using cassandra API's.
>
> Regards
> Arshad
>
>  --
> *From:* Sasha Dolgy [sdo...@gmail.com]
> *Sent:* Monday, June 11, 2012 3:09 PM
>
> *To:* user@cassandra.apache.org
> *Subject:* Re: how to configure cassandra as multi tenant
>
>  Google, man.
>
> http://wiki.apache.org/cassandra/MultiTenant
>
> http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/about-multitenant-datamodel-td7575966.html
>
> http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/For-multi-tenant-is-it-good-to-have-a-key-space-for-each-tenant-td6723290.html
>
> On Mon, Jun 11, 2012 at 11:37 AM, MOHD ARSHAD SALEEM <
> marshadsal...@tataelxsi.co.in> wrote:
>
>>  Hi Aaron,
>>
>> Can you send me some particular link related to multi tenant research
>>
>> Regards
>> Arshad
>>  --
>> *From:* aaron morton [aa...@thelastpickle.com]
>> *Sent:* Thursday, June 07, 2012 3:34 PM
>> *To:* user@cassandra.apache.org
>> *Subject:* Re: how to configure cassandra as multi tenant
>>
>>  Cassandra is not designed to run as a multi tenant database.
>>
>>  There have been some recent discussions on this, search the user group
>> for more detailed answers.
>>
>>  Cheers
>>
>>-
>> Aaron Morton
>> Freelance Developer
>> @aaronmorton
>> http://www.thelastpickle.com
>>
>>  On 7/06/2012, at 7:03 PM, MOHD ARSHAD SALEEM wrote:
>>
>>  Hi All,
>>
>> I wanted to know how to use cassandra as a multi tenant .
>>
>> Regards
>> Arshad
>>
>>
>>
>
>
> --
> Sasha Dolgy
> sasha.do...@gmail.com
>



-- 
Sasha Dolgy
sasha.do...@gmail.com
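As a sketch of doing this from code rather than the CLI: most client libraries expose schema operations (e.g. Hector's cluster.addKeyspace(...), pycassa's SystemManager). The helper below just builds per-tenant CQL statements; the tenant naming scheme and the schema are made-up assumptions for illustration:

```python
def tenant_schema(tenant, column_families, replication_factor=1):
    # Build the statements a client library would execute to provision one
    # keyspace per tenant. The naming convention is a made-up example.
    ks = "tenant_%s" % tenant.lower().replace("-", "_")
    stmts = ["CREATE KEYSPACE %s WITH strategy_class = 'SimpleStrategy' "
             "AND strategy_options:replication_factor = %d;"
             % (ks, replication_factor)]
    for cf in column_families:
        stmts.append("CREATE COLUMNFAMILY %s.%s (id varchar PRIMARY KEY);"
                     % (ks, cf))
    return stmts

for stmt in tenant_schema("acme", ["entries", "indexes"]):
    print(stmt)
```

In practice you would hand these statements to whatever client you use, or call its schema API directly instead of building CQL strings.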


RE: how to configure cassandra as multi tenant

2012-06-11 Thread MOHD ARSHAD SALEEM
Hi Sasha,

One more thing: how do I create a keyspace and column family dynamically
using the APIs? Right now I have to create them first through the command
prompt, and then that keyspace and column family are used in the APIs.

Regards
Arshad

From: Sasha Dolgy [sdo...@gmail.com]
Sent: Monday, June 11, 2012 4:49 PM
To: user@cassandra.apache.org
Subject: Re: how to configure cassandra as multi tenant

Arshad,

I used google with the following query:  apache cassandra multitenant

Suggest you do the same?  As was mentioned earlier, there has been a lot of 
discussion about this topic for the past year -- especially on this mailing 
list.  If you want to use Thrift or, to make your life easier, using Hector or 
a similar API,  you can create keyspaces however you want ... aligned to your 
design / architecture to support Multitenancy.  If it's code specific help you 
want ... check out the mailing lists / resources for the various API's that 
make working with Thrift easier:

Hector
Pycassa
PHPCassa

etc.

-sd

On Mon, Jun 11, 2012 at 12:05 PM, MOHD ARSHAD SALEEM 
mailto:marshadsal...@tataelxsi.co.in>> wrote:
Hi Sasha,

Thanks for your reply. but what you send this is just to create keyspace 
manually
using command prompt.how to create keyspace(Multi tenant) automatically
 using cassandra API's.

Regards
Arshad


From: Sasha Dolgy [sdo...@gmail.com]
Sent: Monday, June 11, 2012 3:09 PM

To: user@cassandra.apache.org
Subject: Re: how to configure cassandra as multi tenant

Google, man.

http://wiki.apache.org/cassandra/MultiTenant
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/about-multitenant-datamodel-td7575966.html
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/For-multi-tenant-is-it-good-to-have-a-key-space-for-each-tenant-td6723290.html

On Mon, Jun 11, 2012 at 11:37 AM, MOHD ARSHAD SALEEM 
mailto:marshadsal...@tataelxsi.co.in>> wrote:
Hi Aaron,

Can you send me some particular link related to multi tenant research

Regards
Arshad

From: aaron morton [aa...@thelastpickle.com]
Sent: Thursday, June 07, 2012 3:34 PM
To: user@cassandra.apache.org
Subject: Re: how to configure cassandra as multi tenant

Cassandra is not designed to run as a multi tenant database.

There have been some recent discussions on this, search the user group for more 
detailed answers.

Cheers

-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 7/06/2012, at 7:03 PM, MOHD ARSHAD SALEEM wrote:

Hi All,

I wanted to know how to use cassandra as a multi tenant .

Regards
Arshad




--
Sasha Dolgy
sasha.do...@gmail.com



--
Sasha Dolgy
sasha.do...@gmail.com


RE: Setting column to null

2012-06-11 Thread Leonid Ilyevsky
Thanks, I understand what you are telling me. Obviously deleting the column is 
the proper way to do this in Cassandra.
What I was looking for is some convenient wrapper on top of that which will do
it for me. Here is my scenario.

I have a function that takes a record to be saved in Cassandra (array of 
objects, or Map). Let's say it can have up to 8 columns. I 
prepare a statement like this:

Insert into  values(?, ?, ?, ?, ?, ?, ?, ?)

If I could somehow pass null when I execute it, it would be enough to prepare
that statement once and execute it multiple times. I would then expect that
when some element is null, the corresponding column is not inserted (for a
new key) or is deleted (for an existing key).
The way it is now, in my code I have to examine which columns are present and
which are not; depending on that, I have to generate a customized statement,
and it will be different for an existing key versus a new key.
Isn't this too much hassle?

Related question: I assumed that a prepared statement in Cassandra is there for
the same reason as in an RDBMS, that is, for efficiency. In the above scenario,
how expensive is it to execute a specialized statement for every record compared
to a prepared statement executed multiple times?

If I need to execute those specialized statements, should I still use a prepared
statement, or should I just generate a string with everything in ASCII format?
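A small helper makes the per-record generation above less of a hassle: include only the non-null columns in the INSERT, and issue one DELETE for the nulled columns when the key may already exist. A sketch with placeholder table and column names, assuming the client binds parameters positionally:

```python
def statements_for(table, key_col, key, record):
    # Split the record into present values and explicit nulls.
    present = {c: v for c, v in record.items() if v is not None}
    nulls = [c for c, v in record.items() if v is None]

    cols = [key_col] + sorted(present)
    placeholders = ", ".join("?" for _ in cols)
    insert = "INSERT INTO %s (%s) VALUES (%s)" % (
        table, ", ".join(cols), placeholders)
    params = [key] + [present[c] for c in sorted(present)]

    # For an existing key, nulled columns must be deleted explicitly.
    delete = None
    if nulls:
        delete = "DELETE %s FROM %s WHERE %s = ?" % (
            ", ".join(sorted(nulls)), table, key_col)
    return insert, params, delete

insert, params, delete = statements_for("entries", "id", "k1",
                                        {"a": 1, "b": None})
assert insert == "INSERT INTO entries (id, a) VALUES (?, ?)"
assert params == ["k1", 1]
assert delete == "DELETE b FROM entries WHERE id = ?"
```

The DELETE is harmless for a brand-new key, so the same pair of statements covers both the new-key and existing-key cases.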

-Original Message-
From: Roshni Rajagopal [mailto:roshni.rajago...@wal-mart.com]
Sent: Monday, June 11, 2012 12:58 AM
To: user@cassandra.apache.org
Subject: Re: Setting column to null

Would you want to view data like this "there was a key, which had this column , 
but now it does not have any value as of this time."

Unless you specifically want this information, I believe you should just delete 
the column, rather than have an alternate value for NULL or create a composite 
column.

Because that is the way deletion is dealt with in Cassandra. Putting NULLs is
the way we deal with it in an RDBMS, because there we have a fixed number of
columns which always have to have some value, even if it's NULL, and every row
has to have the same set of columns.
In Cassandra, we can delete the column, and in most scenarios that's what we
should do, unless we specifically want to preserve some history that this
column was turned null at this time. Each row can have different columns.

Regards,
Roshni

From: Edward Capriolo mailto:edlinuxg...@gmail.com>>
Reply-To: "user@cassandra.apache.org" 
mailto:user@cassandra.apache.org>>
To: "user@cassandra.apache.org" 
mailto:user@cassandra.apache.org>>
Subject: Re: Setting column to null

Your best bet is to define the column as a composite column where one part
represents whether the value is null and the other part is the data.

On Friday, June 8, 2012, shashwat shriparv 
mailto:dwivedishash...@gmail.com>> wrote:
> What you can do is define some specific sentinel value like "NULLDATA", or
> something like that, to update in columns that don't have a value
>
>
> On Fri, Jun 8, 2012 at 11:58 PM, aaron morton 
> mailto:aa...@thelastpickle.com>> wrote:
>
> You don't need to set columns to null; delete the column instead.
> Cheers
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
> On 8/06/2012, at 9:34 AM, Leonid Ilyevsky wrote:
>
> Is it possible to explicitly set a column value to null?
>
> I see that if insert statement does not include a specific column, that 
> column comes up as null (assuming we are creating a record with new unique 
> key).
> But if we want to update a record, how we set it to null?
>
> Another situation is when I use prepared cql3 statement (in Java) and send 
> parameters when I execute it. If I want to leave some column unassigned, I 
> need a special statement without that column.
> What I would like is, prepare one statement including all columns, and then 
> be able to set some of them to null. I tried to set corresponding ByteBuffer 
> parameter to null, obviously got an exception.
> 
> This email, along with any attachments, is confidential and may be legally 
> privileged or otherwise protected from disclosure. Any unauthorized 
> dissemination, copying or use of the contents of this email is strictly 
> prohibited and may be in violation of law. If you are not the intended 
> recipient, any disclosure, copying, forwarding or distribution of this email 
> is strictly prohibited and this email and any attachments should be deleted 
> immediately. This email and any attachments do not constitute an offer to 
> sell or a solicitation of an offer to purchase any interest in any investment 
> vehicle sponsored by Moon Capital Management LP (“Moon Capital”). Moon 
> Capital does not provide legal, accounting or tax advice. Any statement 
> regarding legal, accounting or tax matters was not intended or writt

Re: Dead node still being pinged

2012-06-11 Thread Nicolas Lalevée

On 11 June 2012 at 12:12, Samuel CARRIERE wrote:

> 
> Well, I don't see anything special in the logs. "Remove token" seems to have 
> done its job: according to the logs, old stored hints have been deleted. 
> 
> If I were you, I would connect (through JMX, with jconsole) to one of the 
> nodes that is sending messages to an old node, and would have a look at these 
> MBean : 
>- org.apache.cassandra.net.FailureDetector : does SimpleStates look good? (or do 
> you see an IP of an old node) 
>- org.apache.cassandra.net.MessagingService : do you see one of the old IPs in one 
> of the attributes? 
>- org.apache.cassandra.net.StreamingService : do you see an old IP in StreamSources 
> or StreamDestinations? 
>- org.apache.cassandra.internal.HintedHandoff : are there non-zero ActiveCount, 
> CurrentlyBlockedTasks, PendingTasks, TotalBlockedTasks? 

I feared I'd have to do such lookups... JMX is a pain when there is some ssh
tunneling to do. I'll find time to look into those. Thanks.

By the way, maybe an interesting info (same on every node):
root@data-5 ~ # nodetool -h data-local gossipinfo
/10.10.0.27
  LOAD:2.34205351889E11
  SCHEMA:21099fc0-978c-11e1--bc70eee231ef
  RPC_ADDRESS:10.10.0.27
  STATUS:NORMAL,113427455640312814857969558651062452224
  RELEASE_VERSION:1.0.9
/10.10.0.26
  LOAD:2.64617657147E11
  SCHEMA:21099fc0-978c-11e1--bc70eee231ef
  RPC_ADDRESS:10.10.0.26
  STATUS:NORMAL,56713727820156407428984779325531226112
  RELEASE_VERSION:1.0.9
/10.10.0.25
  LOAD:2.34154095981E11
  SCHEMA:21099fc0-978c-11e1--bc70eee231ef
  RPC_ADDRESS:10.10.0.25
  STATUS:NORMAL,0
  RELEASE_VERSION:1.0.9
/10.10.0.24
  STATUS:removed,127605887595351923798765477786913079296,1336530323263
  REMOVAL_COORDINATOR:REMOVER,0
/10.10.0.22
  STATUS:removed,42535295865117307932921825928971026432,1336529659203
  REMOVAL_COORDINATOR:REMOVER,113427455640312814857969558651062452224


Nicolas
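The removed entries in the gossipinfo output above can be extracted mechanically; a small sketch that parses output of this shape (the format is assumed from the paste above) and lists endpoints still gossiped in a removed state:

```python
def parse_gossipinfo(text):
    # Group the "KEY:VALUE" attribute lines under the preceding /ip line.
    nodes, current = {}, None
    for line in text.strip().splitlines():
        line = line.strip()
        if line.startswith("/"):
            current = line[1:]
            nodes[current] = {}
        elif current and ":" in line:
            key, _, value = line.partition(":")
            nodes[current][key] = value
    return nodes

def removed_endpoints(nodes):
    return sorted(ip for ip, attrs in nodes.items()
                  if attrs.get("STATUS", "").startswith("removed"))

sample = """/10.10.0.25
  STATUS:NORMAL,0
/10.10.0.24
  STATUS:removed,127605887595351923798765477786913079296,1336530323263
"""
nodes = parse_gossipinfo(sample)
assert removed_endpoints(nodes) == ["10.10.0.24"]
```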


> 
> Samuel 
> 
> 
> 
> Nicolas Lalevée 
> 08/06/2012 21:03
> Please reply to
> user@cassandra.apache.org
> 
> To
> user@cassandra.apache.org
> cc
> Subject
> Re: Dead node still being pinged
> 
> 
> 
> 
> 
> 
> On 8 June 2012 at 20:02, Samuel CARRIERE wrote:
> 
> > I'm in the train but just a guess : maybe it's hinted handoff. A look in 
> > the logs of the new nodes could confirm that : look for the IP of an old 
> > node and maybe you'll find hinted handoff related messages.
> 
> I grepped on every node about every old node, I got nothing since the "crash".
> 
> If it can be of some help, here is some grepped log of the crash:
> 
> system.log.1: WARN [RMI TCP Connection(1037)-10.10.0.26] 2012-05-06 
> 00:39:30,241 StorageService.java (line 2417) Endpoint /10.10.0.24 is down and 
> will not receive data for re-replication of /10.10.0.22
> system.log.1: WARN [RMI TCP Connection(1037)-10.10.0.26] 2012-05-06 
> 00:39:30,242 StorageService.java (line 2417) Endpoint /10.10.0.24 is down and 
> will not receive data for re-replication of /10.10.0.22
> system.log.1: WARN [RMI TCP Connection(1037)-10.10.0.26] 2012-05-06 
> 00:39:30,242 StorageService.java (line 2417) Endpoint /10.10.0.24 is down and 
> will not receive data for re-replication of /10.10.0.22
> system.log.1: WARN [RMI TCP Connection(1037)-10.10.0.26] 2012-05-06 
> 00:39:30,243 StorageService.java (line 2417) Endpoint /10.10.0.24 is down and 
> will not receive data for re-replication of /10.10.0.22
> system.log.1: WARN [RMI TCP Connection(1037)-10.10.0.26] 2012-05-06 
> 00:39:30,243 StorageService.java (line 2417) Endpoint /10.10.0.24 is down and 
> will not receive data for re-replication of /10.10.0.22
> system.log.1: INFO [GossipStage:1] 2012-05-06 00:44:33,822 Gossiper.java 
> (line 818) InetAddress /10.10.0.24 is now dead.
> system.log.1: INFO [GossipStage:1] 2012-05-06 04:25:23,894 Gossiper.java 
> (line 818) InetAddress /10.10.0.24 is now dead.
> system.log.1: INFO [OptionalTasks:1] 2012-05-06 04:25:23,895 
> HintedHandOffManager.java (line 179) Deleting any stored hints for /10.10.0.24
> system.log.1: INFO [GossipStage:1] 2012-05-06 04:25:23,895 
> StorageService.java (line 1157) Removing token 
> 127605887595351923798765477786913079296 for /10.10.0.24
> system.log.1: INFO [GossipStage:1] 2012-05-09 04:26:25,015 Gossiper.java 
> (line 818) InetAddress /10.10.0.24 is now dead.
> 
> 
> Maybe its the way I have removed nodes ? AFAIR I didn't used the decommission 
> command. For each node I got the node down and then issue a remove token 
> command.
> Here is what I can find in the log about when I removed one of them:
> 
> system.log.1: INFO [GossipTasks:1] 2012-05-02 17:21:10,281 Gossiper.java 
> (line 818) InetAddress /10.10.0.24 is now dead.
> system.log.1: INFO [HintedHandoff:1] 2012-05-02 17:21:21,496 
> HintedHandOffManager.java (line 292) Endpoint /10.10.0.24 died before hint 
> delivery, aborting
> system.log.1: INFO [GossipStage:1] 2012-05-02 17:21:59,307 Gossiper.java 
> (line 818) InetAddress /10.10.0.24 is now dead.
> system.log.1: INFO [HintedHandoff:1] 2012-05-02 17:31:20,336 

Re: [RELEASE] Apache Cassandra 1.1.1 released

2012-06-11 Thread Omid Aladini
I think the change log for 1.1.1 is missing CASSANDRA-4150 [1].

-- Omid

[1] https://issues.apache.org/jira/browse/CASSANDRA-4150

On Mon, Jun 4, 2012 at 9:52 PM, Sylvain Lebresne  wrote:
> The Cassandra team is pleased to announce the release of Apache Cassandra
> version 1.1.1.
>
> Cassandra is a highly scalable second-generation distributed database,
> bringing together Dynamo's fully distributed design and Bigtable's
> ColumnFamily-based data model. You can read more here:
>
>  http://cassandra.apache.org/
>
> Downloads of source and binary distributions are listed in our download
> section:
>
>  http://cassandra.apache.org/download/
>
> This version is the first maintenance/bug fix release[1] on the 1.1 series. As
> always, please pay attention to the release notes[2] and Let us know[3] if you
> were to encounter any problem.
>
> Enjoy!
>
> [1]: http://goo.gl/4Dxae (CHANGES.txt)
> [2]: http://goo.gl/ZE8ZK (NEWS.txt)
> [3]: https://issues.apache.org/jira/browse/CASSANDRA


Re: Dead node still being pinged

2012-06-11 Thread Nicolas Lalevée
Finally, thanks to the Groovy JMX builder, it was not that hard.


Le 11 juin 2012 à 12:12, Samuel CARRIERE a écrit :

> If I were you, I would connect (through JMX, with jconsole) to one of the 
> nodes that is sending messages to an old node, and would have a look at these 
> MBean : 
>- org.apache.net.FailureDetector : does SimpleStates looks good ? (or do 
> you see an IP of an old node)

SimpleStates:[/10.10.0.22:DOWN, /10.10.0.24:DOWN, /10.10.0.26:UP, 
/10.10.0.25:UP, /10.10.0.27:UP]

>- org.apache.net.MessagingService : do you see one of the old IP in one of 
> the attributes ?

data-5:
CommandCompletedTasks:
[10.10.0.22:2, 10.10.0.26:6147307, 10.10.0.27:6084684, 10.10.0.24:2]
CommandPendingTasks:
[10.10.0.22:0, 10.10.0.26:0, 10.10.0.27:0, 10.10.0.24:0]
ResponseCompletedTasks:
[10.10.0.22:1487, 10.10.0.26:6187204, 10.10.0.27:6062890, 10.10.0.24:1495]
ResponsePendingTasks:
[10.10.0.22:0, 10.10.0.26:0, 10.10.0.27:0, 10.10.0.24:0]

data-6:
CommandCompletedTasks:
[10.10.0.22:2, 10.10.0.27:6064992, 10.10.0.24:2, 10.10.0.25:6308102]
CommandPendingTasks:
[10.10.0.22:0, 10.10.0.27:0, 10.10.0.24:0, 10.10.0.25:0]
ResponseCompletedTasks:
[10.10.0.22:1463, 10.10.0.27:6067943, 10.10.0.24:1474, 10.10.0.25:6367692]
ResponsePendingTasks:
[10.10.0.22:0, 10.10.0.27:0, 10.10.0.24:2, 10.10.0.25:0]

data-7:
CommandCompletedTasks:
[10.10.0.22:2, 10.10.0.26:6043653, 10.10.0.24:2, 10.10.0.25:5964168]
CommandPendingTasks:
[10.10.0.22:0, 10.10.0.26:0, 10.10.0.24:0, 10.10.0.25:0]
ResponseCompletedTasks:
[10.10.0.22:1424, 10.10.0.26:6090251, 10.10.0.24:1431, 10.10.0.25:6094954]
ResponsePendingTasks:
[10.10.0.22:4, 10.10.0.26:0, 10.10.0.24:1, 10.10.0.25:0]

>- org.apache.net.StreamingService : do you see an old IP in StreamSources 
> or StreamDestinations ?

nothing streaming on the 3 nodes.
nodetool netstats confirmed that.

>- org.apache.internal.HintedHandoff : are there non-zero ActiveCount, 
> CurrentlyBlockedTasks, PendingTasks, TotalBlockedTask ?

On the 3 nodes, all at 0.

I don't know much about what I'm looking at, but it seems that some 
ResponsePendingTasks need to complete.

Nicolas

> 
> Samuel 
> 
> 
> 
> Nicolas Lalevée 
> 08/06/2012 21:03
> Veuillez répondre à
> user@cassandra.apache.org
> 
> A
> user@cassandra.apache.org
> cc
> Objet
> Re: Dead node still being pinged
> 
> 
> 
> 
> 
> 
> Le 8 juin 2012 à 20:02, Samuel CARRIERE a écrit :
> 
> > I'm in the train but just a guess : maybe it's hinted handoff. A look in 
> > the logs of the new nodes could confirm that : look for the IP of an old 
> > node and maybe you'll find hinted handoff related messages.
> 
> I grepped on every node about every old node, I got nothing since the "crash".
> 
> If it can be of some help, here is some grepped log of the crash:
> 
> system.log.1: WARN [RMI TCP Connection(1037)-10.10.0.26] 2012-05-06 
> 00:39:30,241 StorageService.java (line 2417) Endpoint /10.10.0.24 is down and 
> will not receive data for re-replication of /10.10.0.22
> system.log.1: WARN [RMI TCP Connection(1037)-10.10.0.26] 2012-05-06 
> 00:39:30,242 StorageService.java (line 2417) Endpoint /10.10.0.24 is down and 
> will not receive data for re-replication of /10.10.0.22
> system.log.1: WARN [RMI TCP Connection(1037)-10.10.0.26] 2012-05-06 
> 00:39:30,242 StorageService.java (line 2417) Endpoint /10.10.0.24 is down and 
> will not receive data for re-replication of /10.10.0.22
> system.log.1: WARN [RMI TCP Connection(1037)-10.10.0.26] 2012-05-06 
> 00:39:30,243 StorageService.java (line 2417) Endpoint /10.10.0.24 is down and 
> will not receive data for re-replication of /10.10.0.22
> system.log.1: WARN [RMI TCP Connection(1037)-10.10.0.26] 2012-05-06 
> 00:39:30,243 StorageService.java (line 2417) Endpoint /10.10.0.24 is down and 
> will not receive data for re-replication of /10.10.0.22
> system.log.1: INFO [GossipStage:1] 2012-05-06 00:44:33,822 Gossiper.java 
> (line 818) InetAddress /10.10.0.24 is now dead.
> system.log.1: INFO [GossipStage:1] 2012-05-06 04:25:23,894 Gossiper.java 
> (line 818) InetAddress /10.10.0.24 is now dead.
> system.log.1: INFO [OptionalTasks:1] 2012-05-06 04:25:23,895 
> HintedHandOffManager.java (line 179) Deleting any stored hints for /10.10.0.24
> system.log.1: INFO [GossipStage:1] 2012-05-06 04:25:23,895 
> StorageService.java (line 1157) Removing token 
> 127605887595351923798765477786913079296 for /10.10.0.24
> system.log.1: INFO [GossipStage:1] 2012-05-09 04:26:25,015 Gossiper.java 
> (line 818) InetAddress /10.10.0.24 is now dead.
> 
> 
> Maybe its the way I have removed nodes ? AFAIR I didn't used the decommission 
> command. For each node I got the node down and then issue a remove token 
> command.
> Here is what I can find in the log about when I removed one of them:
> 
> system.log.1: INFO [GossipTasks:1] 2012-05-02 17:21:10,281 Gossiper.java 
> (line 818) InetAddress /10.10.0.24 is now dead.
> system.log.1: INFO [HintedHandoff:1] 2012-05-02 17:21:21,496 
> HintedHandOffManager.java (line 292) Endpoint /10.10.0

Possible bug in Cassandra 1.1.1 with NTS

2012-06-11 Thread Carlo Pires
Just installed cassandra 1.1.1 and run:

root@carlo-laptop:/tmp# cassandra-cli -h localhost
Connected to: "Test Cluster" on localhost/9160
Welcome to Cassandra CLI version 1.1.1

Type 'help;' or '?' for help.
Type 'quit;' or 'exit;' to quit.

[default@unknown] create keyspace accounts
...  with placement_strategy = 'NetworkTopologyStrategy'
...  and strategy_options = {dc-test : 2}
...  and durable_writes = true;
d05f1952-58a8-393b-b8b5-605ac8e434c4
Waiting for schema agreement...
... schemas agree across the cluster
[default@unknown]
[default@unknown] use accounts;
Authenticated to keyspace: accounts
[default@accounts]
[default@accounts] create column family Users
...  with column_type = 'Standard'
...  and comparator = 'BytesType'
...  and default_validation_class = 'BytesType'
...  and key_validation_class = 'TimeUUIDType';
a8943316-ee58-3160-a922-5826f15bd674
Waiting for schema agreement...
... schemas agree across the cluster
[default@accounts] list Users;
Using default limit of 100
Using default column limit of 100
null
UnavailableException()
at
org.apache.cassandra.thrift.Cassandra$get_range_slices_result.read(Cassandra.java:12346)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at
org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:692)
at
org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:676)
at org.apache.cassandra.cli.CliClient.executeList(CliClient.java:1425)
at
org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:273)
at
org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:219)
at org.apache.cassandra.cli.CliMain.main(CliMain.java:346)
[default@accounts] root@carlo-laptop:/tmp#
root@carlo-laptop:/tmp# cat /etc/cassandra/cassandra-topology.properties
127.0.0.1:dc-test:my-notebook
root@carlo-laptop:/tmp#


I did not find anything relevant in JIRA. The setup is a single instance
cassandra in my notebook running the default cassandra 1.1.1 of debian repo.
-- 
  Carlo Pires


Re: Possible bug in Cassandra 1.1.1 with NTS

2012-06-11 Thread Nick Bailey
The property file snitch isn't used by default. Did you change your
cassandra.yaml to use PropertyFileSnitch so it reads
cassandra-topology.properties?

Also the formatting in your topology property file isn't right. It should be
'ip=dc:rack'. So:

127.0.0.1=dc-test:my-notebook
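
For reference, a complete cassandra-topology.properties in this format might look like the sketch below (the datacenter and rack names are just the ones from this thread; the `default=` line is the fallback assignment for any node not listed explicitly):

```properties
# PropertyFileSnitch topology: one line per node, ip=datacenter:rack
127.0.0.1=dc-test:my-notebook

# Fallback datacenter:rack for nodes not listed above
default=dc-test:my-notebook
```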

On Mon, Jun 11, 2012 at 1:49 PM, Carlo Pires  wrote:
> Just installed cassandra 1.1.1 and run:
>
> root@carlo-laptop:/tmp# cassandra-cli -h localhost
> Connected to: "Test Cluster" on localhost/9160
> Welcome to Cassandra CLI version 1.1.1
>
> Type 'help;' or '?' for help.
> Type 'quit;' or 'exit;' to quit.
>
> [default@unknown] create keyspace accounts
> ...      with placement_strategy = 'NetworkTopologyStrategy'
> ...      and strategy_options = {dc-test : 2}
> ...      and durable_writes = true;
> d05f1952-58a8-393b-b8b5-605ac8e434c4
> Waiting for schema agreement...
> ... schemas agree across the cluster
> [default@unknown]
> [default@unknown] use accounts;
> Authenticated to keyspace: accounts
> [default@accounts]
> [default@accounts] create column family Users
> ...      with column_type = 'Standard'
> ...      and comparator = 'BytesType'
> ...      and default_validation_class = 'BytesType'
> ...      and key_validation_class = 'TimeUUIDType';
> a8943316-ee58-3160-a922-5826f15bd674
> Waiting for schema agreement...
> ... schemas agree across the cluster
> [default@accounts] list Users;
> Using default limit of 100
> Using default column limit of 100
> null
> UnavailableException()
>     at
> org.apache.cassandra.thrift.Cassandra$get_range_slices_result.read(Cassandra.java:12346)
>     at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
>     at
> org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:692)
>     at
> org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:676)
>     at org.apache.cassandra.cli.CliClient.executeList(CliClient.java:1425)
>     at
> org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:273)
>     at
> org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:219)
>     at org.apache.cassandra.cli.CliMain.main(CliMain.java:346)
> [default@accounts] root@carlo-laptop:/tmp#
> root@carlo-laptop:/tmp# cat /etc/cassandra/cassandra-topology.properties
> 127.0.0.1:dc-test:my-notebook
> root@carlo-laptop:/tmp#
>
>
> I did not find anything relevant in JIRA. The setup is a single instance
> cassandra in my notebook running the default cassandra 1.1.1 of debian repo.
> --
>   Carlo Pires
>


Re: Possible bug in Cassandra 1.1.1 with NTS

2012-06-11 Thread Carlo Pires
I forgot to change cassandra.yaml to use PropertyFileSnitch AND
cassandra-topology syntax was incorrect. Thanks, Nick.

I don't know why I got no error in 1.0.8 with PropertyFileSnitch in
cassandra.yaml and wrong syntax in cassandra-topology.properties.

PS: I had to change JVM_OPTS in /etc/cassandra/cassandra-env.sh to use 160k
instead of 128k. Has this not been fixed?

My java is:
root@carlo-laptop:/tmp# java -version
java version "1.7.0_04"
Java(TM) SE Runtime Environment (build 1.7.0_04-b20)
Java HotSpot(TM) 64-Bit Server VM (build 23.0-b21, mixed mode)


2012/6/11 Nick Bailey 

> The property file snitch isn't used by default. Did you change your
> cassandra.yaml to use PropertyFileSnitch so it reads
> cassandra-topology.properties?
>
> Also the formatting in your topology property file isn't right. It should be
> 'ip=dc:rack'. So:
>
> 127.0.0.1=dc-test:my-notebook
>
> On Mon, Jun 11, 2012 at 1:49 PM, Carlo Pires  wrote:
> > Just installed cassandra 1.1.1 and run:
> >
> > root@carlo-laptop:/tmp# cassandra-cli -h localhost
> > Connected to: "Test Cluster" on localhost/9160
> > Welcome to Cassandra CLI version 1.1.1
> >
> > Type 'help;' or '?' for help.
> > Type 'quit;' or 'exit;' to quit.
> >
> > [default@unknown] create keyspace accounts
> > ...  with placement_strategy = 'NetworkTopologyStrategy'
> > ...  and strategy_options = {dc-test : 2}
> > ...  and durable_writes = true;
> > d05f1952-58a8-393b-b8b5-605ac8e434c4
> > Waiting for schema agreement...
> > ... schemas agree across the cluster
> > [default@unknown]
> > [default@unknown] use accounts;
> > Authenticated to keyspace: accounts
> > [default@accounts]
> > [default@accounts] create column family Users
> > ...  with column_type = 'Standard'
> > ...  and comparator = 'BytesType'
> > ...  and default_validation_class = 'BytesType'
> > ...  and key_validation_class = 'TimeUUIDType';
> > a8943316-ee58-3160-a922-5826f15bd674
> > Waiting for schema agreement...
> > ... schemas agree across the cluster
> > [default@accounts] list Users;
> > Using default limit of 100
> > Using default column limit of 100
> > null
> > UnavailableException()
> > at
> >
> org.apache.cassandra.thrift.Cassandra$get_range_slices_result.read(Cassandra.java:12346)
> > at
> org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
> > at
> >
> org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:692)
> > at
> >
> org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:676)
> > at
> org.apache.cassandra.cli.CliClient.executeList(CliClient.java:1425)
> > at
> >
> org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:273)
> > at
> >
> org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:219)
> > at org.apache.cassandra.cli.CliMain.main(CliMain.java:346)
> > [default@accounts] root@carlo-laptop:/tmp#
> > root@carlo-laptop:/tmp# cat /etc/cassandra/cassandra-topology.properties
> > 127.0.0.1:dc-test:my-notebook
> > root@carlo-laptop:/tmp#
> >
> >
> > I did not find anything relevant in JIRA. The setup is a single instance
> > cassandra in my notebook running the default cassandra 1.1.1 of debian
> repo.
> > --
> >   Carlo Pires
> >
>



-- 
  Carlo Pires
  62 8209-1444 TIM
  62 3251-1383
  Skype: carlopires


Re: Possible bug in Cassandra 1.1.1 with NTS

2012-06-11 Thread Nick Bailey
> I don't know why I got no error in 1.0.8 with PropertyFileSnitch in
> cassandra.yaml and wrong syntax in cassandra-topology.properties.
>

Not sure either.

> PS: I had to change JVM_OPTS in /etc/cassandra/cassandra-env.sh to use 160k
> instead 128k. This has not been fixed?

Still marked as unresolved.
https://issues.apache.org/jira/browse/CASSANDRA-4275


Re: Offset in slicequeries for pagination

2012-06-11 Thread R. Verlangen
I solved this by creating a manual index whose column names are integers and
whose column values are the UUIDs of the results. Then run a slice query to
determine the batch to fetch.
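
A minimal Python sketch of that manual-index idea (names are hypothetical; a sorted dict stands in for the index row, and `slice_columns` mimics a column slice query):

```python
# Manual pagination index: column name = running integer position,
# column value = the UUID of the result at that position.
index_row = {i: f"uuid-{i:05d}" for i in range(1, 1001)}  # 1000 results

def slice_columns(row, start, finish):
    """Return (name, value) pairs with start <= name <= finish, in order."""
    return [(k, row[k]) for k in sorted(row) if start <= k <= finish]

def fetch_page(row, page, page_size=10):
    """Jump straight to an arbitrary page via the integer index."""
    start = (page - 1) * page_size + 1
    return [uuid for _, uuid in slice_columns(row, start, start + page_size - 1)]

page14 = fetch_page(index_row, 14)
print(page14[0], page14[-1])  # uuid-00131 uuid-00140
```

The trade-off is that the index must be rebuilt (or shifted) when results are inserted or deleted in the middle, which is why this works best for relatively static result sets.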

2012/6/11 Cyril Auburtin 

> using  10 results maximum per page,
>
> to go directly to 14th page, there is no offset=141 possibility I guess?
> or does a Java client proposes that?
>
> What is the best solution, perform a get with a limit = page*10, and then
> a get with a column_start equals the lastest column received, and a limit
> of 10,
> I guess also, client side should cache results but it's off topic
>



-- 
With kind regards,

Robin Verlangen
*Software engineer*
*
*
W http://www.robinverlangen.nl
E ro...@us2.nl

Disclaimer: The information contained in this message and attachments is
intended solely for the attention and use of the named addressee and may be
confidential. If you are not the intended recipient, you are reminded that
the information remains the property of the sender. You must not use,
disclose, distribute, copy, print or rely on this e-mail. If you have
received this message in error, please contact the sender immediately and
irrevocably delete this message and any copies.


Re: Offset in slicequeries for pagination

2012-06-11 Thread Cyril Auburtin
If my columns are ("k1:k2" => data1), ("k11:k32" => data211), ("k10:k211"
=> data91)

You mean transforming to ("1:k1:k2" => data1), ("2:k11:k32" => data211)? But
I need the previous column names to run slice queries on them.
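
For completeness, a small Python sketch of the cursor-style paging the original question describes (names are hypothetical; a sorted list stands in for the row, so no offset is needed once you track the last column seen):

```python
# Cursor-style paging: remember the last column name returned and
# restart the next slice just after it. A sorted list of
# (name, value) pairs stands in for a Cassandra row.
row = sorted((f"k{i:03d}", f"data{i}") for i in range(1, 101))

def get_slice(row, column_start="", limit=10):
    """Mimic get_slice: first `limit` columns with name >= column_start."""
    return [c for c in row if c[0] >= column_start][:limit]

def next_page(row, last_seen=None, page_size=10):
    if last_seen is None:
        return get_slice(row, limit=page_size)
    # Fetch one extra column and drop the cursor column itself.
    batch = get_slice(row, column_start=last_seen, limit=page_size + 1)
    return [c for c in batch if c[0] != last_seen][:page_size]

page1 = next_page(row)
page2 = next_page(row, last_seen=page1[-1][0])
print(page2[0][0])  # k011
```

This keeps the original composite column names intact, but only supports moving forward page by page; jumping straight to page 14 still requires either scanning or a separate integer index.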

2012/6/11 R. Verlangen 

> I solved this with creating a manual index with as column keys integers
> and column values the uuid's of the results. Then run a slicequery to
> determine the batch to fetch.
>
>
> 2012/6/11 Cyril Auburtin 
>
>> using  10 results maximum per page,
>>
>> to go directly to 14th page, there is no offset=141 possibility I guess?
>> or does a Java client proposes that?
>>
>> What is the best solution, perform a get with a limit = page*10, and then
>> a get with a column_start equals the lastest column received, and a limit
>> of 10,
>> I guess also, client side should cache results but it's off topic
>>
>
>
>
> --
> With kind regards,
>
> Robin Verlangen
> *Software engineer*
> *
> *
> W http://www.robinverlangen.nl
> E ro...@us2.nl
>
> Disclaimer: The information contained in this message and attachments is
> intended solely for the attention and use of the named addressee and may be
> confidential. If you are not the intended recipient, you are reminded that
> the information remains the property of the sender. You must not use,
> disclose, distribute, copy, print or rely on this e-mail. If you have
> received this message in error, please contact the sender immediately and
> irrevocably delete this message and any copies.
>
>


Much more native memory used by Cassandra then the configured JVM heap size

2012-06-11 Thread Jason Tang
Hi

We have a problem with Cassandra memory usage: we configured the JVM heap at
6G, but after running Cassandra for several hours (inserts, updates, deletes)
the total memory used by Cassandra grows to 15G, which leaves the OS low on
memory.
 So I wonder: is it normal for Cassandra to use this much memory?

And how can we limit the native memory that Cassandra uses?


===
Cassandra 1.0.3, 64 bit jdk.

Memory occupied by Cassandra: 15G
  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
 9567 casadm20   0 28.3g  15g 9.1g S  269 65.1 385:57.65 java

=
-Xms6G -Xmx6G -Xmn1600M

 # ps -ef | grep  9567
casadm9567 1 55 Jun11 ?05:59:44 /opt/jdk1.6.0_29/bin/java
-ea -javaagent:/opt/dve/cassandra/bin/../lib/jamm-0.2.5.jar
-XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms6G -Xmx6G
-Xmn1600M -XX:+HeapDumpOnOutOfMemoryError -Xss128k -XX:+UseParNewGC
-XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8
-XX:MaxTenuringThreshold=1 -XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly -Djava.net.preferIPv4Stack=true
-Dcom.sun.management.jmxremote.port=6080
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Daccess.properties=/opt/dve/cassandra/conf/access.properties
-Dpasswd.properties=/opt/dve/cassandra/conf/passwd.properties
-Dpasswd.mode=MD5 -Dlog4j.configuration=log4j-server.properties
-Dlog4j.defaultInitOverride=true -cp
/opt/dve/cassandra/bin/../conf:/opt/dve/cassandra/bin/../build/classes/main:/opt/dve/cassandra/bin/../build/classes/thrift:/opt/dve/cassandra/bin/../lib/Cassandra-Extensions-1.0.0.jar:/opt/dve/cassandra/bin/../lib/antlr-3.2.jar:/opt/dve/cassandra/bin/../lib/apache-cassandra-1.0.3.jar:/opt/dve/cassandra/bin/../lib/apache-cassandra-clientutil-1.0.3.jar:/opt/dve/cassandra/bin/../lib/apache-cassandra-thrift-1.0.3.jar:/opt/dve/cassandra/bin/../lib/avro-1.4.0-fixes.jar:/opt/dve/cassandra/bin/../lib/avro-1.4.0-sources-fixes.jar:/opt/dve/cassandra/bin/../lib/commons-cli-1.1.jar:/opt/dve/cassandra/bin/../lib/commons-codec-1.2.jar:/opt/dve/cassandra/bin/../lib/commons-lang-2.4.jar:/opt/dve/cassandra/bin/../lib/compress-lzf-0.8.4.jar:/opt/dve/cassandra/bin/../lib/concurrentlinkedhashmap-lru-1.2.jar:/opt/dve/cassandra/bin/../lib/guava-r08.jar:/opt/dve/cassandra/bin/../lib/high-scale-lib-1.1.2.jar:/opt/dve/cassandra/bin/../lib/jackson-core-asl-1.4.0.jar:/opt/dve/cassandra/bin/../lib/jackson-mapper-asl-1.4.0.jar:/opt/dve/cassandra/bin/../lib/jamm-0.2.5.jar:/opt/dve/cassandra/bin/../lib/jline-0.9.94.jar:/opt/dve/cassandra/bin/../lib/json-simple-1.1.jar:/opt/dve/cassandra/bin/../lib/libthrift-0.6.jar:/opt/dve/cassandra/bin/../lib/log4j-1.2.16.jar:/opt/dve/cassandra/bin/../lib/servlet-api-2.5-20081211.jar:/opt/dve/cassandra/bin/../lib/slf4j-api-1.6.1.jar:/opt/dve/cassandra/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/dve/cassandra/bin/../lib/snakeyaml-1.6.jar:/opt/dve/cassandra/bin/../lib/snappy-java-1.0.4.1.jar
org.apache.cassandra.thrift.CassandraDaemon

==
# nodetool -h 127.0.0.1 -p 6080 info
Token: 85070591730234615865843651857942052864
Gossip active: true
Load : 20.59 GB
Generation No: 1339423322
Uptime (seconds) : 39626
Heap Memory (MB) : 3418.42 / 5984.00
Data Center  : datacenter1
Rack : rack1
Exceptions   : 0

=
All row cache and key cache are disabled by default

Key cache: disabled
Row cache: disabled


==

# pmap 9567
9567: java
START   SIZE RSS PSS   DIRTYSWAP PERM MAPPING
4000 36K 36K 36K  0K  0K r-xp
/opt/jdk1.6.0_29/bin/java
40108000  8K  8K  8K  8K  0K rwxp
/opt/jdk1.6.0_29/bin/java
4010a000  18040K  17988K  17988K  17988K  0K rwxp [heap]
00067ae0 6326700K 6258664K 6258664K 6258664K  0K rwxp [anon]
0007fd06b000  48724K  0K  0K  0K  0K rwxp [anon]
7fbed153 1331104K  0K  0K  0K  0K r-xs
/var/cassandra/data/drc/queue-hb-219-Data.db
7fbf22918000 2097152K  0K  0K  0K  0K r-xs
/var/cassandra/data/drc/queue-hb-219-Data.db
7fbfa2918000 2097148K 1124464K 1124462K  0K  0K r-xs
/var/cassandra/data/drc/queue-hb-219-Data.db
7fc022917000 2097156K 2096496K 2096492K  0K  0K r-xs
/var/cassandra/data/drc/queue-hb-219-Data.db
7fc0a2918000 2097148K 2097148K 2097146K  0K  0K r-xs
/var/cassandra/data/drc/queue-hb-219-Data.db
7fc1a2917000 733584K   6444K   6444K  0K  0K r-xs
/var/cassandra/data/drc/queue-hb-109-Data.db
7fc1cf57b000 2097148K  20980K  20980K  0K  0K r-xs
/var/cassandra/data/drc/queue-hb-109-Data.db
7fc24f57a000 2097152K 456480K 456478K  0K  0K r-xs
/var/cassandra/data/drc/queue-hb-109-Data.db
7fc2cf57a000 2097156K 1168320K 1168318K  

Re: Much more native memory used by Cassandra then the configured JVM heap size

2012-06-11 Thread Jeffrey Kesselman
What is your native heap size? And how are you measuring memory usage?
It would also help to see the command line you are using to launch the JVM.
On Mon, Jun 11, 2012 at 9:14 PM, Jason Tang  wrote:

> Hi
>
> We have some problem with Cassandra memory usage, we configure the JVM
> HEAP 6G, but after runing Cassandra for several hours (insert, update,
> delete). The total memory used by Cassandra go up to 15G, which cause the
> OS low memory.
>  So I wonder if it is normal to have so many memory used by cassandra?
>
> And how to limit the native memory used by Cassandra?
>
>
> ===
> Cassandra 1.0.3, 64 bit jdk.
>
> Memory ocupied by Cassandra 15G
>   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
>  9567 casadm20   0 28.3g  15g 9.1g S  269 65.1 385:57.65 java
>
> =
> -Xms6G -Xmx6G -Xmn1600M
>
>  # ps -ef | grep  9567
> casadm9567 1 55 Jun11 ?05:59:44 /opt/jdk1.6.0_29/bin/java
> -ea -javaagent:/opt/dve/cassandra/bin/../lib/jamm-0.2.5.jar
> -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms6G -Xmx6G
> -Xmn1600M -XX:+HeapDumpOnOutOfMemoryError -Xss128k -XX:+UseParNewGC
> -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8
> -XX:MaxTenuringThreshold=1 -XX:CMSInitiatingOccupancyFraction=75
> -XX:+UseCMSInitiatingOccupancyOnly -Djava.net.preferIPv4Stack=true
> -Dcom.sun.management.jmxremote.port=6080
> -Dcom.sun.management.jmxremote.ssl=false
> -Dcom.sun.management.jmxremote.authenticate=false
> -Daccess.properties=/opt/dve/cassandra/conf/access.properties
> -Dpasswd.properties=/opt/dve/cassandra/conf/passwd.properties
> -Dpasswd.mode=MD5 -Dlog4j.configuration=log4j-server.properties
> -Dlog4j.defaultInitOverride=true -cp
> /opt/dve/cassandra/bin/../conf:/opt/dve/cassandra/bin/../build/classes/main:/opt/dve/cassandra/bin/../build/classes/thrift:/opt/dve/cassandra/bin/../lib/Cassandra-Extensions-1.0.0.jar:/opt/dve/cassandra/bin/../lib/antlr-3.2.jar:/opt/dve/cassandra/bin/../lib/apache-cassandra-1.0.3.jar:/opt/dve/cassandra/bin/../lib/apache-cassandra-clientutil-1.0.3.jar:/opt/dve/cassandra/bin/../lib/apache-cassandra-thrift-1.0.3.jar:/opt/dve/cassandra/bin/../lib/avro-1.4.0-fixes.jar:/opt/dve/cassandra/bin/../lib/avro-1.4.0-sources-fixes.jar:/opt/dve/cassandra/bin/../lib/commons-cli-1.1.jar:/opt/dve/cassandra/bin/../lib/commons-codec-1.2.jar:/opt/dve/cassandra/bin/../lib/commons-lang-2.4.jar:/opt/dve/cassandra/bin/../lib/compress-lzf-0.8.4.jar:/opt/dve/cassandra/bin/../lib/concurrentlinkedhashmap-lru-1.2.jar:/opt/dve/cassandra/bin/../lib/guava-r08.jar:/opt/dve/cassandra/bin/../lib/high-scale-lib-1.1.2.jar:/opt/dve/cassandra/bin/../lib/jackson-core-asl-1.4.0.jar:/opt/dve/cassandra/bin/../lib/jackson-mapper-asl-1.4.0.jar:/opt/dve/cassandra/bin/../lib/jamm-0.2.5.jar:/opt/dve/cassandra/bin/../lib/jline-0.9.94.jar:/opt/dve/cassandra/bin/../lib/json-simple-1.1.jar:/opt/dve/cassandra/bin/../lib/libthrift-0.6.jar:/opt/dve/cassandra/bin/../lib/log4j-1.2.16.jar:/opt/dve/cassandra/bin/../lib/servlet-api-2.5-20081211.jar:/opt/dve/cassandra/bin/../lib/slf4j-api-1.6.1.jar:/opt/dve/cassandra/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/dve/cassandra/bin/../lib/snakeyaml-1.6.jar:/opt/dve/cassandra/bin/../lib/snappy-java-1.0.4.1.jar
> org.apache.cassandra.thrift.CassandraDaemon
>
> ==
> # nodetool -h 127.0.0.1 -p 6080 info
> Token: 85070591730234615865843651857942052864
> Gossip active: true
> Load : 20.59 GB
> Generation No: 1339423322
> Uptime (seconds) : 39626
> Heap Memory (MB) : 3418.42 / 5984.00
> Data Center  : datacenter1
> Rack : rack1
> Exceptions   : 0
>
> =
> All row cache and key cache are disabled by default
>
> Key cache: disabled
> Row cache: disabled
>
>
> ==
>
> # pmap 9567
> 9567: java
> START   SIZE RSS PSS   DIRTYSWAP PERM MAPPING
> 4000 36K 36K 36K  0K  0K r-xp
> /opt/jdk1.6.0_29/bin/java
> 40108000  8K  8K  8K  8K  0K rwxp
> /opt/jdk1.6.0_29/bin/java
> 4010a000  18040K  17988K  17988K  17988K  0K rwxp [heap]
> 00067ae0 6326700K 6258664K 6258664K 6258664K  0K rwxp [anon]
> 0007fd06b000  48724K  0K  0K  0K  0K rwxp [anon]
> 7fbed153 1331104K  0K  0K  0K  0K r-xs
> /var/cassandra/data/drc/queue-hb-219-Data.db
> 7fbf22918000 2097152K  0K  0K  0K  0K r-xs
> /var/cassandra/data/drc/queue-hb-219-Data.db
> 7fbfa2918000 2097148K 1124464K 1124462K  0K  0K r-xs
> /var/cassandra/data/drc/queue-hb-219-Data.db
> 7fc022917000 2097156K 2096496K 2096492K  0K  0K r-xs
> /var/cassandra/data/drc/queue-hb-219-Data.db
> 7fc0a2918000 2097148K 2097148K 2097146K  0K  0K r-xs
> /var/cassandra/data/drc/queue-hb-219-Data.db
> 7fc1a2917000 7335

Re: Much more native memory used by Cassandra then the configured JVM heap size

2012-06-11 Thread Jeffrey Kesselman
Btw, I suggest you spin up JConsole, as it will give you much more detail
on what your VM is actually doing.



On Mon, Jun 11, 2012 at 9:14 PM, Jason Tang  wrote:

> Hi
>
> We have some problem with Cassandra memory usage, we configure the JVM
> HEAP 6G, but after runing Cassandra for several hours (insert, update,
> delete). The total memory used by Cassandra go up to 15G, which cause the
> OS low memory.
>  So I wonder if it is normal to have so many memory used by cassandra?
>
> And how to limit the native memory used by Cassandra?
>
>
> ===
> Cassandra 1.0.3, 64 bit jdk.
>
> Memory ocupied by Cassandra 15G
>   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
>  9567 casadm20   0 28.3g  15g 9.1g S  269 65.1 385:57.65 java
>
> =
> -Xms6G -Xmx6G -Xmn1600M
>
>  # ps -ef | grep  9567
> casadm9567 1 55 Jun11 ?05:59:44 /opt/jdk1.6.0_29/bin/java
> -ea -javaagent:/opt/dve/cassandra/bin/../lib/jamm-0.2.5.jar
> -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms6G -Xmx6G
> -Xmn1600M -XX:+HeapDumpOnOutOfMemoryError -Xss128k -XX:+UseParNewGC
> -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8
> -XX:MaxTenuringThreshold=1 -XX:CMSInitiatingOccupancyFraction=75
> -XX:+UseCMSInitiatingOccupancyOnly -Djava.net.preferIPv4Stack=true
> -Dcom.sun.management.jmxremote.port=6080
> -Dcom.sun.management.jmxremote.ssl=false
> -Dcom.sun.management.jmxremote.authenticate=false
> -Daccess.properties=/opt/dve/cassandra/conf/access.properties
> -Dpasswd.properties=/opt/dve/cassandra/conf/passwd.properties
> -Dpasswd.mode=MD5 -Dlog4j.configuration=log4j-server.properties
> -Dlog4j.defaultInitOverride=true -cp
> /opt/dve/cassandra/bin/../conf:/opt/dve/cassandra/bin/../build/classes/main:/opt/dve/cassandra/bin/../build/classes/thrift:/opt/dve/cassandra/bin/../lib/Cassandra-Extensions-1.0.0.jar:/opt/dve/cassandra/bin/../lib/antlr-3.2.jar:/opt/dve/cassandra/bin/../lib/apache-cassandra-1.0.3.jar:/opt/dve/cassandra/bin/../lib/apache-cassandra-clientutil-1.0.3.jar:/opt/dve/cassandra/bin/../lib/apache-cassandra-thrift-1.0.3.jar:/opt/dve/cassandra/bin/../lib/avro-1.4.0-fixes.jar:/opt/dve/cassandra/bin/../lib/avro-1.4.0-sources-fixes.jar:/opt/dve/cassandra/bin/../lib/commons-cli-1.1.jar:/opt/dve/cassandra/bin/../lib/commons-codec-1.2.jar:/opt/dve/cassandra/bin/../lib/commons-lang-2.4.jar:/opt/dve/cassandra/bin/../lib/compress-lzf-0.8.4.jar:/opt/dve/cassandra/bin/../lib/concurrentlinkedhashmap-lru-1.2.jar:/opt/dve/cassandra/bin/../lib/guava-r08.jar:/opt/dve/cassandra/bin/../lib/high-scale-lib-1.1.2.jar:/opt/dve/cassandra/bin/../lib/jackson-core-asl-1.4.0.jar:/opt/dve/cassandra/bin/../lib/jackson-mapper-asl-1.4.0.jar:/opt/dve/cassandra/bin/../lib/jamm-0.2.5.jar:/opt/dve/cassandra/bin/../lib/jline-0.9.94.jar:/opt/dve/cassandra/bin/../lib/json-simple-1.1.jar:/opt/dve/cassandra/bin/../lib/libthrift-0.6.jar:/opt/dve/cassandra/bin/../lib/log4j-1.2.16.jar:/opt/dve/cassandra/bin/../lib/servlet-api-2.5-20081211.jar:/opt/dve/cassandra/bin/../lib/slf4j-api-1.6.1.jar:/opt/dve/cassandra/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/dve/cassandra/bin/../lib/snakeyaml-1.6.jar:/opt/dve/cassandra/bin/../lib/snappy-java-1.0.4.1.jar
> org.apache.cassandra.thrift.CassandraDaemon
>
> ==
> # nodetool -h 127.0.0.1 -p 6080 info
> Token: 85070591730234615865843651857942052864
> Gossip active: true
> Load : 20.59 GB
> Generation No: 1339423322
> Uptime (seconds) : 39626
> Heap Memory (MB) : 3418.42 / 5984.00
> Data Center  : datacenter1
> Rack : rack1
> Exceptions   : 0
>
> =
> All row cache and key cache are disabled by default
>
> Key cache: disabled
> Row cache: disabled
>
>
> ==
>
> # pmap 9567
> 9567: java
> START   SIZE RSS PSS   DIRTYSWAP PERM MAPPING
> 4000 36K 36K 36K  0K  0K r-xp
> /opt/jdk1.6.0_29/bin/java
> 40108000  8K  8K  8K  8K  0K rwxp
> /opt/jdk1.6.0_29/bin/java
> 4010a000  18040K  17988K  17988K  17988K  0K rwxp [heap]
> 00067ae0 6326700K 6258664K 6258664K 6258664K  0K rwxp [anon]
> 0007fd06b000  48724K  0K  0K  0K  0K rwxp [anon]
> 7fbed153 1331104K  0K  0K  0K  0K r-xs
> /var/cassandra/data/drc/queue-hb-219-Data.db
> 7fbf22918000 2097152K  0K  0K  0K  0K r-xs
> /var/cassandra/data/drc/queue-hb-219-Data.db
> 7fbfa2918000 2097148K 1124464K 1124462K  0K  0K r-xs
> /var/cassandra/data/drc/queue-hb-219-Data.db
> 7fc022917000 2097156K 2096496K 2096492K  0K  0K r-xs
> /var/cassandra/data/drc/queue-hb-219-Data.db
> 7fc0a2918000 2097148K 2097148K 2097146K  0K  0K r-xs
> /var/cassandra/data/drc/queue-hb-219-Data.db
> 7fc1a2917000 733584K   6444K   6444K  0K  

Re: Much more native memory used by Cassandra then the configured JVM heap size

2012-06-11 Thread Jason Tang
See my post: I limit the JVM heap to 6G, but Cassandra actually uses more
memory, which is not counted in the JVM heap.

I use top to monitor total memory used by Cassandra.

=
-Xms6G -Xmx6G -Xmn1600M

2012/6/12 Jeffrey Kesselman 

> Btw, I suggest you spin up JConsole, as it will give you much more detail
> on what your VM is actually doing.
>
>
>
> On Mon, Jun 11, 2012 at 9:14 PM, Jason Tang  wrote:
>
>> Hi
>>
>> We have a problem with Cassandra memory usage: we configure the JVM
>> heap at 6G, but after running Cassandra for several hours (insert, update,
>> delete), the total memory used by Cassandra goes up to 15G, which causes
>> the OS to run low on memory.
>> So I wonder: is it normal for Cassandra to use this much memory?
>>
>> And how can we limit the native memory used by Cassandra?
>>
>>
>> ===
>> Cassandra 1.0.3, 64 bit jdk.
>>
>> Memory occupied by Cassandra: 15G
>>   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
>>  9567 casadm20   0 28.3g  15g 9.1g S  269 65.1 385:57.65 java
>>
>> =
>> -Xms6G -Xmx6G -Xmn1600M
>>
>>  # ps -ef | grep  9567
>> casadm9567 1 55 Jun11 ?05:59:44 /opt/jdk1.6.0_29/bin/java
>> -ea -javaagent:/opt/dve/cassandra/bin/../lib/jamm-0.2.5.jar
>> -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms6G -Xmx6G
>> -Xmn1600M -XX:+HeapDumpOnOutOfMemoryError -Xss128k -XX:+UseParNewGC
>> -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8
>> -XX:MaxTenuringThreshold=1 -XX:CMSInitiatingOccupancyFraction=75
>> -XX:+UseCMSInitiatingOccupancyOnly -Djava.net.preferIPv4Stack=true
>> -Dcom.sun.management.jmxremote.port=6080
>> -Dcom.sun.management.jmxremote.ssl=false
>> -Dcom.sun.management.jmxremote.authenticate=false
>> -Daccess.properties=/opt/dve/cassandra/conf/access.properties
>> -Dpasswd.properties=/opt/dve/cassandra/conf/passwd.properties
>> -Dpasswd.mode=MD5 -Dlog4j.configuration=log4j-server.properties
>> -Dlog4j.defaultInitOverride=true -cp
>> /opt/dve/cassandra/bin/../conf:/opt/dve/cassandra/bin/../build/classes/main:/opt/dve/cassandra/bin/../build/classes/thrift:/opt/dve/cassandra/bin/../lib/Cassandra-Extensions-1.0.0.jar:/opt/dve/cassandra/bin/../lib/antlr-3.2.jar:/opt/dve/cassandra/bin/../lib/apache-cassandra-1.0.3.jar:/opt/dve/cassandra/bin/../lib/apache-cassandra-clientutil-1.0.3.jar:/opt/dve/cassandra/bin/../lib/apache-cassandra-thrift-1.0.3.jar:/opt/dve/cassandra/bin/../lib/avro-1.4.0-fixes.jar:/opt/dve/cassandra/bin/../lib/avro-1.4.0-sources-fixes.jar:/opt/dve/cassandra/bin/../lib/commons-cli-1.1.jar:/opt/dve/cassandra/bin/../lib/commons-codec-1.2.jar:/opt/dve/cassandra/bin/../lib/commons-lang-2.4.jar:/opt/dve/cassandra/bin/../lib/compress-lzf-0.8.4.jar:/opt/dve/cassandra/bin/../lib/concurrentlinkedhashmap-lru-1.2.jar:/opt/dve/cassandra/bin/../lib/guava-r08.jar:/opt/dve/cassandra/bin/../lib/high-scale-lib-1.1.2.jar:/opt/dve/cassandra/bin/../lib/jackson-core-asl-1.4.0.jar:/opt/dve/cassandra/bin/../lib/jackson-mapper-asl-1.4.0.jar:/opt/dve/cassandra/bin/../lib/jamm-0.2.5.jar:/opt/dve/cassandra/bin/../lib/jline-0.9.94.jar:/opt/dve/cassandra/bin/../lib/json-simple-1.1.jar:/opt/dve/cassandra/bin/../lib/libthrift-0.6.jar:/opt/dve/cassandra/bin/../lib/log4j-1.2.16.jar:/opt/dve/cassandra/bin/../lib/servlet-api-2.5-20081211.jar:/opt/dve/cassandra/bin/../lib/slf4j-api-1.6.1.jar:/opt/dve/cassandra/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/dve/cassandra/bin/../lib/snakeyaml-1.6.jar:/opt/dve/cassandra/bin/../lib/snappy-java-1.0.4.1.jar
>> org.apache.cassandra.thrift.CassandraDaemon
>>
>> ==
>> # nodetool -h 127.0.0.1 -p 6080 info
>> Token: 85070591730234615865843651857942052864
>> Gossip active: true
>> Load : 20.59 GB
>> Generation No: 1339423322
>> Uptime (seconds) : 39626
>> Heap Memory (MB) : 3418.42 / 5984.00
>> Data Center  : datacenter1
>> Rack : rack1
>> Exceptions   : 0
>>
>> =
>> All row cache and key cache are disabled by default
>>
>> Key cache: disabled
>> Row cache: disabled
>>
>>
>> ==
>>
>> # pmap 9567
>> 9567: java
>> START   SIZE RSS PSS   DIRTYSWAP PERM MAPPING
>> 4000 36K 36K 36K  0K  0K r-xp
>> /opt/jdk1.6.0_29/bin/java
>> 40108000  8K  8K  8K  8K  0K rwxp
>> /opt/jdk1.6.0_29/bin/java
>> 4010a000  18040K  17988K  17988K  17988K  0K rwxp [heap]
>> 00067ae0 6326700K 6258664K 6258664K 6258664K  0K rwxp [anon]
>> 0007fd06b000  48724K  0K  0K  0K  0K rwxp [anon]
>> 7fbed153 1331104K  0K  0K  0K  0K r-xs
>> /var/cassandra/data/drc/queue-hb-219-Data.db
>> 7fbf22918000 2097152K  0K  0K  0K  0K r-xs
>> /var/cassandra/data/drc/queue-hb-219-Data.db
>> 7fbfa2918000 2097148K 1124464K 1124462K  0K 

Re: Offset in slicequeries for pagination

2012-06-11 Thread Rajat Mathur
Hi Cyril,

This may help.

http://architecturalatrocities.com/post/13918146722/implementing-column-pagination-in-cassandra
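For anyone skimming the thread, the pattern that post describes (fetch page_size+1 columns and use the extra column's name as the start of the next slice) can be sketched in a few lines. This is a simulation over a plain dict standing in for a row, not a client API; all names are illustrative:

```python
# Sketch of the "fetch N+1" column-pagination pattern, simulated over a
# plain sorted dict standing in for a Cassandra row.

def get_slice(columns, column_start="", limit=10):
    """Return up to `limit` (name, value) pairs with name >= column_start,
    in column-name order -- what a slice query would return."""
    names = sorted(n for n in columns if n >= column_start)
    return [(n, columns[n]) for n in names[:limit]]

def paginate(columns, page_size=10):
    """Yield pages by asking for page_size+1 columns and using the extra
    column's name as the start of the next slice."""
    start = ""
    while True:
        batch = get_slice(columns, start, page_size + 1)
        page, rest = batch[:page_size], batch[page_size:]
        if page:
            yield page
        if not rest:
            return
        start = rest[0][0]  # first column of the next page

row = {"col%03d" % i: i for i in range(25)}
pages = list(paginate(row, page_size=10))
# 25 columns with page size 10 -> pages of 10, 10 and 5 columns
```

Jumping straight to page 14 still requires walking the preceding slices (or a manual index, as Robin suggests below), since there is no numeric offset.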

On Tue, Jun 12, 2012 at 3:18 AM, Cyril Auburtin wrote:

> If my columns are ("k1:k2" => data1), ("k11:k32" => data211), ("k10:k211"
> => data91)
>
> You mean transforming to ("1:k1:k2" => data1), ("2:k11:k32" => data211)? But
> I need the previous column names to slice query on them.
>
> 2012/6/11 R. Verlangen 
>
> I solved this with creating a manual index with as column keys integers
>> and column values the uuid's of the results. Then run a slicequery to
>> determine the batch to fetch.
>>
>>
>> 2012/6/11 Cyril Auburtin 
>>
>>> using  10 results maximum per page,
>>>
>>> to go directly to the 14th page, there is no offset=141 possibility I
>>> guess? Or does a Java client offer that?
>>>
>>> What is the best solution: perform a get with a limit = page*10, and
>>> then a get with a column_start equal to the latest column received, and a
>>> limit of 10?
>>> I guess also, the client side should cache results, but that's off topic.
>>>
>>
>>
>>
>> --
>> With kind regards,
>>
>> Robin Verlangen
>> *Software engineer*
>> *
>> *
>> W http://www.robinverlangen.nl
>> E ro...@us2.nl
>>
>> Disclaimer: The information contained in this message and attachments is
>> intended solely for the attention and use of the named addressee and may be
>> confidential. If you are not the intended recipient, you are reminded that
>> the information remains the property of the sender. You must not use,
>> disclose, distribute, copy, print or rely on this e-mail. If you have
>> received this message in error, please contact the sender immediately and
>> irrevocably delete this message and any copies.
>>
>>
>


-- 
*Rajat Mathur
B.Tech (IT) Final Year
IIIT Allahabad

09945990291

Find me @ Facebook 
Follow me @ Twitter *


Re: batch isolation

2012-06-11 Thread aaron morton
Row level deletion information is included in the row level isolation. 

Cheers


-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 5/06/2012, at 6:05 AM, Todd Burruss wrote:

> I don't think I'm being clear.  I just was wondering if a "row delete" is
> isolated with all the other inserts or deletes to a specific column family
> and key in the same batch.
> 
> On 6/4/12 1:58 AM, "Sylvain Lebresne"  wrote:
> 
>> On Sun, Jun 3, 2012 at 6:05 PM, Todd Burruss  wrote:
>>> I just meant there is a "row delete" in the same batch as inserts - all
>>> to
>>> the same column family and key
>> 
>> Then it's the timestamp that will decide what happens. Whatever has a
>> timestamp lower or equal to the tombstone timestamp will be deleted
>> (that stands for insert in the batch itself).
>> 
>> --
>> Sylvain
>> 
>> 
>>> 
>>> 
>>> -Original Message-
>>> From: Sylvain Lebresne [sylv...@datastax.com]
>>> Received: Sunday, 03 Jun 2012, 3:44am
>>> To: user@cassandra.apache.org [user@cassandra.apache.org]
>>> Subject: Re: batch isolation
>>> 
>>> On Sun, Jun 3, 2012 at 2:53 AM, Todd Burruss 
>>> wrote:>
 1 ­ does this mean that a batch_mutate that first sends a "row delete"
 mutation on key X, then subsequent insert mutations for key X is
 isolated?
>>> 
>>> I'm not sure what you mean by having "a batch_mutate that first sends
>>> ... then ...", since a batch_mutate is a single API call.
>>> 
 2 ­ does isolation span column families for the same key  within the
 same
 batch_mutate?
>>> 
>>> No, it doesn't span column families (contrarily to atomicity). There
>>> is more details in
>>> http://www.datastax.com/dev/blog/row-level-isolation.
>>> 
>>> --
>>> Sylvain
> 
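The timestamp rule Sylvain states in the quoted reply (anything with a timestamp lower than or equal to the tombstone's is deleted, including inserts in the same batch) can be illustrated with a small simulation. This is not Cassandra code, just the reconciliation rule:

```python
# Illustration of the timestamp rule: a row-level tombstone shadows every
# column whose timestamp is lower than or equal to the tombstone's,
# including inserts carried in the same batch.

def reconcile(inserts, tombstone_ts):
    """inserts: {column_name: (value, timestamp)}. Columns at or below the
    tombstone timestamp are shadowed; strictly newer ones survive."""
    return {name: (value, ts)
            for name, (value, ts) in inserts.items()
            if ts > tombstone_ts}

batch = {
    "a": ("old", 100),   # shadowed: ts below the tombstone
    "b": ("same", 200),  # shadowed: equal timestamps lose to the delete
    "c": ("new", 300),   # survives: logically written after the row delete
}
surviving = reconcile(batch, tombstone_ts=200)
# only column "c" survives
```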



Re: how to configure cassandra as multi tenant

2012-06-11 Thread aaron morton
Check the documentation for you client or 
http://www.datastax.com/docs/1.0/dml/using_cql

Cheers


-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 11/06/2012, at 11:25 PM, MOHD ARSHAD SALEEM wrote:

> Hi Sasha,
> 
> One more thing: how do I create a keyspace and column family dynamically 
> using the APIs? Right now I have to create them first through the command 
> prompt, and then that keyspace and column family are used in the APIs.
> 
> Regards
> Arshad
> From: Sasha Dolgy [sdo...@gmail.com]
> Sent: Monday, June 11, 2012 4:49 PM
> To: user@cassandra.apache.org
> Subject: Re: how to configure cassandra as multi tenant
> 
> Arshad,
> 
> I used google with the following query:  apache cassandra multitenant
> 
> Suggest you do the same?  As was mentioned earlier, there has been a lot of 
> discussion about this topic for the past year -- especially on this mailing 
> list.  If you want to use Thrift or, to make your life easier, using Hector 
> or a similar API,  you can create keyspaces however you want ... aligned to 
> your design / architecture to support Multitenancy.  If it's code specific 
> help you want ... check out the maililng lists / resources for the various 
> API's that make working with Thrift easier:
> 
> Hector
> Pycassa
> PHPCassa
> 
> etc.
> 
> -sd
> 
> On Mon, Jun 11, 2012 at 12:05 PM, MOHD ARSHAD SALEEM 
> wrote:
> Hi Sasha,
> 
> Thanks for your reply, but what you sent just shows how to create a keyspace 
> manually using the command prompt. How do I create a keyspace (multi-tenant) 
> automatically using the Cassandra APIs?
> 
> Regards
> Arshad
> 
> From: Sasha Dolgy [sdo...@gmail.com]
> Sent: Monday, June 11, 2012 3:09 PM
> 
> To: user@cassandra.apache.org
> Subject: Re: how to configure cassandra as multi tenant
> 
> Google, man.
> 
> http://wiki.apache.org/cassandra/MultiTenant
> http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/about-multitenant-datamodel-td7575966.html
> http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/For-multi-tenant-is-it-good-to-have-a-key-space-for-each-tenant-td6723290.html
> 
> On Mon, Jun 11, 2012 at 11:37 AM, MOHD ARSHAD SALEEM 
> wrote:
> Hi Aaron,
> 
> Can you send me some particular link related to multi tenant research
> 
> Regards
> Arshad
> From: aaron morton [aa...@thelastpickle.com]
> Sent: Thursday, June 07, 2012 3:34 PM
> To: user@cassandra.apache.org
> Subject: Re: how to configure cassandra as multi tenant
> 
> Cassandra is not designed to run as a multi tenant database. 
> 
> There have been some recent discussions on this, search the user group for 
> more detailed answers. 
> 
> Cheers
> 
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
> 
> On 7/06/2012, at 7:03 PM, MOHD ARSHAD SALEEM wrote:
> 
>> Hi All,
>> 
>> I wanted to know how to use Cassandra as a multi-tenant database.
>> 
>> Regards
>> Arshad
> 
> 
> 
> 
> -- 
> Sasha Dolgy
> sasha.do...@gmail.com
> 
> 
> 
> -- 
> Sasha Dolgy
> sasha.do...@gmail.com



Re: Setting column to null

2012-06-11 Thread Roshni Rajagopal
Leonid,


Are you using a client for these operations?

Hector is a Java client which provides APIs for adding and deleting columns
in a column family in Cassandra.
I don't think you really need to write your wrapper in this format: you are
restricting the number of columns it can use, etc. I suggest your code
accept user input for the column family name, keys, and operation, and
accordingly call the appropriate Hector API for adding/deleting data.
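For comparison, the dynamic statement generation Leonid describes below can be sketched in a few lines: build an INSERT that simply omits null columns, so absent values never need a placeholder. This is an illustrative helper (hypothetical table and column names, shown in Python for brevity), not a Hector or CQL driver API:

```python
# Build an INSERT statement covering only the non-null columns of a record,
# so one generic helper replaces per-shape prepared statements.

def build_insert(table, record):
    """Return (cql, params) for the non-null columns of `record`."""
    cols = [c for c, v in record.items() if v is not None]
    if not cols:
        raise ValueError("no non-null columns to insert")
    placeholders = ", ".join("?" for _ in cols)
    cql = "INSERT INTO %s (%s) VALUES (%s)" % (
        table, ", ".join(cols), placeholders)
    return cql, [record[c] for c in cols]

record = {"id": "k1", "price": 42, "note": None}
cql, params = build_insert("entries", record)
# "note" is None, so it is left out of the statement entirely
```

For existing keys, the null columns would additionally need a delete, as discussed below.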


Regards,
Roshni


On 11/06/12 7:20 PM, "Leonid Ilyevsky"  wrote:

>Thanks, I understand what you are telling me. Obviously deleting the
>column is the proper way to do this in Cassandra.
>What I was looking for, is some convenient wrapper on top of that which
>will do it for me. Here is my scenario.
>
>I have a function that takes a record to be saved in Cassandra (array of
>objects, or Map). Let's say it can have up to 8 columns. I
>prepare a statement like this:
>
>Insert into  values(?, ?, ?, ?, ?, ?, ?, ?)
>
>If I somehow could put null when I execute it, it would be enough to
>prepare that statement once and execute it multiple times. I would then
>expect that when some element is null, the corresponding column is not
>inserted (for the new key) or deleted (for the existing key).
>The way it is now, in my code I have to examine which columns are present
>and which are not; depending on that, I have to generate a customized
>statement, and it is different for the case of an existing key versus a
>new key.
>Isn't this too much hassle?
>
>Related question: I assumed that prepared statements in Cassandra exist for
>the same reason as in an RDBMS, that is, for efficiency. In the above
>scenario, how expensive is it to execute a specialized statement for every
>record compared to a prepared statement executed multiple times?
>
>If I need to execute those specialized statements, should I still use a
>prepared statement, or should I just generate a string with everything in
>ASCII format?
>
>-Original Message-
>From: Roshni Rajagopal [mailto:roshni.rajago...@wal-mart.com]
>Sent: Monday, June 11, 2012 12:58 AM
>To: user@cassandra.apache.org
>Subject: Re: Setting column to null
>
>Would you want to view data like this: "there was a key, which had this
>column, but now it does not have any value as of this time"?
>
>Unless you specifically want this information, I believe you should just
>delete the column, rather than have an alternate value for NULL or create
>a composite column.
>
>Because in Cassandra that's the way deletion is dealt with. Putting NULLs
>is the way we deal with it in an RDBMS, because there we have a fixed
>number of columns which always have to have some value, even if it's NULL,
>and we have to have the same set of columns for every row.
>In Cassandra, we can delete the column, and in most scenarios that's
>what we should do, unless we specifically want to preserve some history
>that this column was turned null at this time… Each row can have different
>columns.
>
>Regards,
>Roshni
>
>From: Edward Capriolo
>mailto:edlinuxg...@gmail.com>>
>Reply-To: "user@cassandra.apache.org"
>mailto:user@cassandra.apache.org>>
>To: "user@cassandra.apache.org"
>mailto:user@cassandra.apache.org>>
>Subject: Re: Setting column to null
>
>Your best bet is to define the column as a composite column where one
>part represents is null and the other part is the data.
>
>On Friday, June 8, 2012, shashwat shriparv
>mailto:dwivedishash...@gmail.com>> wrote:
>> What you can do is define some specific sentinel value, like "NULLDATA",
>> to write into columns that do not have a value.
>>
>>
>> On Fri, Jun 8, 2012 at 11:58 PM, aaron morton
>>mailto:aa...@thelastpickle.com>> wrote:
>>
>> You don't need to set columns to null; delete the column instead.
>> Cheers
>> -
>> Aaron Morton
>> Freelance Developer
>> @aaronmorton
>> http://www.thelastpickle.com
>> On 8/06/2012, at 9:34 AM, Leonid Ilyevsky wrote:
>>
>> Is it possible to explicitly set a column value to null?
>>
>> I see that if insert statement does not include a specific column, that
>>column comes up as null (assuming we are creating a record with new
>>unique key).
>> But if we want to update a record, how we set it to null?
>>
>> Another situation is when I use prepared cql3 statement (in Java) and
>>send parameters when I execute it. If I want to leave some column
>>unassigned, I need a special statement without that column.
>> What I would like is, prepare one statement including all columns, and
>>then be able to set some of them to null. I tried to set corresponding
>>ByteBuffer parameter to null, obviously got an exception.
>> 
>> This email, along with any attachments, is confidential and may be
>>legally privileged or otherwise protected from disclosure. Any
>>unauthorized dissemination, copying or use of the contents of this email
>>is strictly prohibited and

Re: Much more native memory used by Cassandra then the configured JVM heap size

2012-06-11 Thread Jason Tang
Hi

I found some information on this issue.
It seems we can use a different data-access strategy to reduce mmap usage,
in order to use less memory.

But I didn't find documentation describing these parameters for Cassandra
1.x. Is this parameter a good way to reduce shared memory usage, and what is
the impact? (BTW, our data model is dynamic: although the throughput is
high, the life cycle of the data is short, one hour or less.)

"
# Choices are auto, standard, mmap, and mmap_index_only.
disk_access_mode: auto
"

http://comments.gmane.org/gmane.comp.db.cassandra.user/7390
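As a quick sanity check on how much of the resident set comes from mmapped SSTables rather than the JVM heap, the pmap output quoted in this thread can be summed directly. A sketch in Python, assuming pmap lines shaped like the listing above (with the wrapped perm/mapping fields rejoined onto one line):

```python
# Sum the RSS of r-xs mmapped Data.db SSTable mappings from pmap output,
# to separate mmap-backed memory from JVM heap in the "15G" total.

SAMPLE = """\
00067ae0 6326700K 6258664K 6258664K 6258664K  0K rwxp [anon]
7fbfa2918000 2097148K 1124464K 1124462K  0K  0K r-xs /var/cassandra/data/drc/queue-hb-219-Data.db
7fc022917000 2097156K 2096496K 2096492K  0K  0K r-xs /var/cassandra/data/drc/queue-hb-219-Data.db
"""

def mmapped_rss_kb(pmap_text):
    total = 0
    for line in pmap_text.splitlines():
        parts = line.split()
        # columns: START SIZE RSS PSS DIRTY SWAP PERM MAPPING
        if len(parts) >= 8 and parts[6] == "r-xs" and parts[7].endswith("-Data.db"):
            total += int(parts[2].rstrip("K"))  # RSS column
    return total

kb = mmapped_rss_kb(SAMPLE)
# 1124464 + 2096496 = 3220960 KB of RSS from mmapped SSTables
```

Memory counted this way is page cache the kernel can reclaim, which is why it inflates `top`'s RES without being JVM heap.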

2012/6/12 Jason Tang 

> See my post: I limit the JVM heap to 6G, but Cassandra actually uses more
> memory, which is not counted in the JVM heap.
>
> I use top to monitor total memory used by Cassandra.
>
> =
> -Xms6G -Xmx6G -Xmn1600M
>
> 2012/6/12 Jeffrey Kesselman 
>
>> Btw, I suggest you spin up JConsole, as it will give you much more detail
>> on what your VM is actually doing.
>>
>>
>>
>> On Mon, Jun 11, 2012 at 9:14 PM, Jason Tang  wrote:
>>
>>> Hi
>>>
>>> We have a problem with Cassandra memory usage: we configure the JVM
>>> heap at 6G, but after running Cassandra for several hours (insert, update,
>>> delete), the total memory used by Cassandra goes up to 15G, which causes
>>> the OS to run low on memory.
>>> So I wonder: is it normal for Cassandra to use this much memory?
>>>
>>> And how can we limit the native memory used by Cassandra?
>>>
>>>
>>> ===
>>> Cassandra 1.0.3, 64 bit jdk.
>>>
>>> Memory occupied by Cassandra: 15G
>>>   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
>>>  9567 casadm20   0 28.3g  15g 9.1g S  269 65.1 385:57.65 java
>>>
>>> =
>>> -Xms6G -Xmx6G -Xmn1600M
>>>
>>>  # ps -ef | grep  9567
>>> casadm9567 1 55 Jun11 ?05:59:44
>>> /opt/jdk1.6.0_29/bin/java -ea
>>> -javaagent:/opt/dve/cassandra/bin/../lib/jamm-0.2.5.jar
>>> -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms6G -Xmx6G
>>> -Xmn1600M -XX:+HeapDumpOnOutOfMemoryError -Xss128k -XX:+UseParNewGC
>>> -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8
>>> -XX:MaxTenuringThreshold=1 -XX:CMSInitiatingOccupancyFraction=75
>>> -XX:+UseCMSInitiatingOccupancyOnly -Djava.net.preferIPv4Stack=true
>>> -Dcom.sun.management.jmxremote.port=6080
>>> -Dcom.sun.management.jmxremote.ssl=false
>>> -Dcom.sun.management.jmxremote.authenticate=false
>>> -Daccess.properties=/opt/dve/cassandra/conf/access.properties
>>> -Dpasswd.properties=/opt/dve/cassandra/conf/passwd.properties
>>> -Dpasswd.mode=MD5 -Dlog4j.configuration=log4j-server.properties
>>> -Dlog4j.defaultInitOverride=true -cp
>>> /opt/dve/cassandra/bin/../conf:/opt/dve/cassandra/bin/../build/classes/main:/opt/dve/cassandra/bin/../build/classes/thrift:/opt/dve/cassandra/bin/../lib/Cassandra-Extensions-1.0.0.jar:/opt/dve/cassandra/bin/../lib/antlr-3.2.jar:/opt/dve/cassandra/bin/../lib/apache-cassandra-1.0.3.jar:/opt/dve/cassandra/bin/../lib/apache-cassandra-clientutil-1.0.3.jar:/opt/dve/cassandra/bin/../lib/apache-cassandra-thrift-1.0.3.jar:/opt/dve/cassandra/bin/../lib/avro-1.4.0-fixes.jar:/opt/dve/cassandra/bin/../lib/avro-1.4.0-sources-fixes.jar:/opt/dve/cassandra/bin/../lib/commons-cli-1.1.jar:/opt/dve/cassandra/bin/../lib/commons-codec-1.2.jar:/opt/dve/cassandra/bin/../lib/commons-lang-2.4.jar:/opt/dve/cassandra/bin/../lib/compress-lzf-0.8.4.jar:/opt/dve/cassandra/bin/../lib/concurrentlinkedhashmap-lru-1.2.jar:/opt/dve/cassandra/bin/../lib/guava-r08.jar:/opt/dve/cassandra/bin/../lib/high-scale-lib-1.1.2.jar:/opt/dve/cassandra/bin/../lib/jackson-core-asl-1.4.0.jar:/opt/dve/cassandra/bin/../lib/jackson-mapper-asl-1.4.0.jar:/opt/dve/cassandra/bin/../lib/jamm-0.2.5.jar:/opt/dve/cassandra/bin/../lib/jline-0.9.94.jar:/opt/dve/cassandra/bin/../lib/json-simple-1.1.jar:/opt/dve/cassandra/bin/../lib/libthrift-0.6.jar:/opt/dve/cassandra/bin/../lib/log4j-1.2.16.jar:/opt/dve/cassandra/bin/../lib/servlet-api-2.5-20081211.jar:/opt/dve/cassandra/bin/../lib/slf4j-api-1.6.1.jar:/opt/dve/cassandra/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/dve/cassandra/bin/../lib/snakeyaml-1.6.jar:/opt/dve/cassandra/bin/../lib/snappy-java-1.0.4.1.jar
>>> org.apache.cassandra.thrift.CassandraDaemon
>>>
>>> ==
>>> # nodetool -h 127.0.0.1 -p 6080 info
>>> Token: 85070591730234615865843651857942052864
>>> Gossip active: true
>>> Load : 20.59 GB
>>> Generation No: 1339423322
>>> Uptime (seconds) : 39626
>>> Heap Memory (MB) : 3418.42 / 5984.00
>>> Data Center  : datacenter1
>>> Rack : rack1
>>> Exceptions   : 0
>>>
>>> =
>>> All row cache and key cache are disabled by default
>>>
>>> Key cache: disabled
>>> Row cache: disabled
>>>
>>>
>>> ==
>>>
>>> # pmap 9567
>>> 9567: java
>>> START   SIZE RSS P

RESTful API for GET

2012-06-11 Thread James Pirz
Dear all,

I am trying to query the system, specifically performing a GET for a
specific key, through JMeter (or curl),
and I am wondering what is the best pure RESTful API for the system (with
the lowest overhead) that I can use.

Thanks,
James


Re: RESTful API for GET

2012-06-11 Thread Tamar Fraenkel
Hi!
I am using java and jersey.
Works fine,

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Tue, Jun 12, 2012 at 9:06 AM, James Pirz  wrote:

> Dear all,
>
> I am trying to query the system, specifically performing a GET for a
> specific key, through Jmeter (or CURL)
> and I am wondering what is the best pure RESTful API for the system (with
> the lowest overhead) that I can use.
>
> Thanks,
> James
>

Re: RESTful API for GET

2012-06-11 Thread James Pirz
Hi,

Thanks for the reply,
But can you tell me how you form your request URLs?
I mean, does Cassandra support a native RESTful API for talking to the
system? If yes, on which port does it listen for incoming requests, and
what format does it expect for the URLs?

Thanks in advance,

James

On Mon, Jun 11, 2012 at 11:09 PM, Tamar Fraenkel wrote:

> Hi!
> I am using java and jersey.
> Works fine,
>
> *Tamar Fraenkel *
> Senior Software Engineer, TOK Media
>
> [image: Inline image 1]
>
> ta...@tok-media.com
> Tel:   +972 2 6409736
> Mob:  +972 54 8356490
> Fax:   +972 2 5612956
>
>
>
>
>
> On Tue, Jun 12, 2012 at 9:06 AM, James Pirz  wrote:
>
>> Dear all,
>>
>> I am trying to query the system, specifically performing a GET for a
>> specific key, through Jmeter (or CURL)
>> and I am wondering what is the best pure RESTful API for the system (with
>> the lowest overhead) that I can use.
>>
>> Thanks,
>> James
>>
>
>

Re: RESTful API for GET

2012-06-11 Thread Tom

Hi James,

No, Cassandra doesn't support a RESTful API natively.

As Tamar points out, you have to supply this functionality yourself 
specifically for your data model.


When designing your RESTful server application:
- consider using a RESTful framework (for example: Jersey)
- use a cassandra client to access your Cassandra data (for example: 
astyanax)


Good luck,
Tom
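A minimal illustration of the GET routing such a wrapper application needs, with the Cassandra read stubbed out (a real implementation would call a client such as Hector or astyanax here; the URL scheme and names are hypothetical):

```python
# Sketch of REST-style GET routing for a Cassandra wrapper service.
# The storage lookup is stubbed with a dict standing in for Cassandra.
import re

FAKE_STORE = {("entries", "k1"): {"val": 42}}  # stand-in for Cassandra

ROUTE = re.compile(r"^/keyspaces/[^/]+/cf/(?P<cf>[^/]+)/rows/(?P<key>[^/]+)$")

def handle_get(path):
    """Map GET /keyspaces/<ks>/cf/<cf>/rows/<key> to a row lookup,
    returning an (http_status, body) pair."""
    m = ROUTE.match(path)
    if not m:
        return 404, None
    row = FAKE_STORE.get((m.group("cf"), m.group("key")))
    return (200, row) if row is not None else (404, None)

status, body = handle_get("/keyspaces/demo/cf/entries/rows/k1")
```

In a JAX-RS framework like Jersey, the route pattern becomes path annotations and `handle_get` becomes a resource method; the mapping logic is the same.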


On 06/11/2012 11:15 PM, James Pirz wrote:

Hi,

Thanks for the reply,
But can you tell me how do you form your request URLs,
I mean does Cassandra support a native RESTful api for talking to the
system, and if yes, on which specific port it is listening for the
coming requests ? and what does it expect for the format for the URLs ?

Thanks in advance,

James

On Mon, Jun 11, 2012 at 11:09 PM, Tamar Fraenkel mailto:ta...@tok-media.com>> wrote:

Hi!
I am using java and jersey.
Works fine,

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

Inline image 1

ta...@tok-media.com 
Tel: +972 2 6409736
Mob: +972 54 8356490
Fax: +972 2 5612956





On Tue, Jun 12, 2012 at 9:06 AM, James Pirz mailto:james.p...@gmail.com>> wrote:

Dear all,

I am trying to query the system, specifically performing a GET
for a specific key, through Jmeter (or CURL)
and I am wondering what is the best pure RESTful API for the
system (with the lowest overhead) that I can use.

Thanks,
James







Re: RESTful API for GET

2012-06-11 Thread Sasha Dolgy
https://github.com/hmsonline/virgil Brian O'Neill posted this a while ago
... sits on top of Cassandra to give you the RESTful API you want

Another option ... http://code.google.com/p/restish/

Or, you could simply build your own ...


On Tue, Jun 12, 2012 at 8:46 AM, Tom  wrote:

> Hi James,
>
> No, Cassandra doesn't supports a RESTful api.
>
> As Tamar points out, you have to supply this functionality yourself
> specifically for your data model.
>
> When designing your RESTful server application:
> - consider using a RESTful framework (for example: Jersey)
> - use a cassandra client to access your Cassandra data (for example:
> astyanax)
>
> Good luck,
> Tom
>
>
>
> On 06/11/2012 11:15 PM, James Pirz wrote:
>
>> Hi,
>>
>> Thanks for the reply,
>> But can you tell me how do you form your request URLs,
>> I mean does Cassandra support a native RESTful api for talking to the
>> system, and if yes, on which specific port it is listening for the
>> coming requests ? and what does it expect for the format for the URLs ?
>>
>> Thanks in advance,
>>
>> James
>>
>> On Mon, Jun 11, 2012 at 11:09 PM, Tamar Fraenkel > > wrote:
>>
>>Hi!
>>I am using java and jersey.
>>Works fine,
>>
>>*Tamar Fraenkel *
>>
>>Senior Software Engineer, TOK Media
>>
>>Inline image 1
>>
>>ta...@tok-media.com 
>>
>>Tel: +972 2 6409736
>>Mob: +972 54 8356490
>>Fax: +972 2 5612956
>>
>>
>>
>>
>>
>>On Tue, Jun 12, 2012 at 9:06 AM, James Pirz >> wrote:
>>
>>Dear all,
>>
>>I am trying to query the system, specifically performing a GET
>>for a specific key, through Jmeter (or CURL)
>>and I am wondering what is the best pure RESTful API for the
>>system (with the lowest overhead) that I can use.
>>
>>Thanks,
>>James
>>
>>
>>
>>
>


-- 
Sasha Dolgy
sasha.do...@gmail.com