Counter question

2012-03-29 Thread Tamar Fraenkel
Hi!
Asking again, as I didn't get responses :)

I have a ring with 3 nodes and replication factor of 2.
I have a counter CF with the following definition:

CREATE COLUMN FAMILY tk_counters
with comparator = 'UTF8Type'
and default_validation_class = 'CounterColumnType'
and key_validation_class = 'CompositeType(UTF8Type,UUIDType)'
and replicate_on_write = true;

In my code (Java, Hector), I increment a counter and then read it.
Is it possible that the value read will be the value before the increment?
If yes, how can I ensure it does not happen? All my reads and writes are
done with consistency level ONE.
If this is a consistency issue, can I use a higher consistency level only
for the operations on the tk_counters column family?
What does replicate_on_write mean? I thought it should help, but maybe
even if it replicates after the write, my read happens before replication
finishes and returns the value from a not-yet-updated node.

My increment code is:
Mutator<Composite> mutator =
    HFactory.createMutator(keyspace, CompositeSerializer.get());
mutator.incrementCounter(key, tk_counters, columnName, inc);
mutator.execute();

My read counter code is:
CounterQuery<Composite, String> query =
    createCounterColumnQuery(keyspace,
        CompositeSerializer.get(), StringSerializer.get());
query.setColumnFamily(tk_counters);
query.setKey(key);
query.setName(columnName);
QueryResult<HCounterColumn<String>> r = query.execute();
return r.get().getValue();

Thanks,
Tamar Fraenkel
Senior Software Engineer, TOK Media

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: Counter question

2012-03-29 Thread Shimi Kiviti
Like everything else in Cassandra, if you need full consistency you need to
make sure that you have the right combination of (write consistency level)
+ (read consistency level):

if
W = write consistency level
R = read consistency level
N = replication factor
then
W + R > N
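
For example, with a replication factor of 2, QUORUM for both reads and writes
gives 2 + 2 > 2, so a read issued after the increment completes should see it.
A minimal Hector sketch of setting that as the keyspace-wide default (assuming
Hector's ConfigurableConsistencyLevel policy; the cluster name, host and
keyspace name are placeholders, and the exact class/method names should be
checked against your Hector version):

// Sketch: make QUORUM the default read/write consistency for a keyspace.
import me.prettyprint.cassandra.model.ConfigurableConsistencyLevel;
import me.prettyprint.cassandra.service.CassandraHostConfigurator;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.HConsistencyLevel;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;

public class QuorumKeyspaceExample {
    public static Keyspace quorumKeyspace(String keyspaceName) {
        Cluster cluster = HFactory.getOrCreateCluster("Test Cluster",
                new CassandraHostConfigurator("localhost:9160"));

        ConfigurableConsistencyLevel policy = new ConfigurableConsistencyLevel();
        policy.setDefaultReadConsistencyLevel(HConsistencyLevel.QUORUM);
        policy.setDefaultWriteConsistencyLevel(HConsistencyLevel.QUORUM);

        // Every Mutator and Query created from this Keyspace now uses QUORUM.
        return HFactory.createKeyspace(keyspaceName, cluster, policy);
    }
}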

Shimi


opscenter

2012-03-29 Thread puneet loya
I am currently using the DataStax OpsCenter.

How do we add a column to a column family in OpsCenter?


Re: opscenter

2012-03-29 Thread R. Verlangen
As far as I'm aware, that is not possible using OpsCenter.

I recommend you use the cassandra-cli and perform an 'update column family'
statement.
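
For illustration, a cassandra-cli sketch of such an update (the column family
name 'users' and the column 'email' are placeholders; as far as I recall,
column_metadata is set as a whole by this statement, so include any existing
column definitions you want to keep):

update column family users
  with column_metadata = [{column_name: email, validation_class: UTF8Type}];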






-- 
With kind regards,

Robin Verlangen
www.robinverlangen.nl


Re: Counter question

2012-03-29 Thread Tamar Fraenkel
Can this be set on a per-CF basis?
Only this CF needs a higher consistency level.
Thanks,
Tamar
Tamar Fraenkel
Senior Software Engineer, TOK Media

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956






Re: Counter question

2012-03-29 Thread Paolo Bernardi

The consistency level is specified on each individual read/write call. This
means that you have to use the desired consistency level for every read/write
operation involving your CF.

Paolo

-- 
@bernarpa
http://paolobernardi.wordpress.com



Re: Counter question

2012-03-29 Thread Shimi Kiviti
You set the consistency level with every request.
Usually a client library will let you set a default one for all read/write
requests.
I don't know whether Hector lets you set a default consistency level per CF.
Take a look at the Hector docs or ask on the Hector mailing list.

Shimi
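
For what it's worth, Hector's ConfigurableConsistencyLevel policy appears to
accept per-column-family overrides. A minimal sketch (the
setReadCfConsistencyLevels/setWriteCfConsistencyLevels method names are taken
from Hector's API and should be verified against your Hector version; the CF
name is the one from the original mail):

// Sketch: keep ONE as the global default, but use QUORUM for tk_counters only.
import java.util.HashMap;
import java.util.Map;
import me.prettyprint.cassandra.model.ConfigurableConsistencyLevel;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.HConsistencyLevel;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;

public class PerCfConsistencyExample {
    public static Keyspace keyspaceWithCounterQuorum(Cluster cluster, String ksName) {
        Map<String, HConsistencyLevel> perCf = new HashMap<String, HConsistencyLevel>();
        perCf.put("tk_counters", HConsistencyLevel.QUORUM);

        ConfigurableConsistencyLevel policy = new ConfigurableConsistencyLevel();
        policy.setDefaultReadConsistencyLevel(HConsistencyLevel.ONE);
        policy.setDefaultWriteConsistencyLevel(HConsistencyLevel.ONE);
        policy.setReadCfConsistencyLevels(perCf);   // per-CF read overrides
        policy.setWriteCfConsistencyLevels(perCf);  // per-CF write overrides

        return HFactory.createKeyspace(ksName, cluster, policy);
    }
}

With RF = 2 this gives QUORUM reads and writes for tk_counters (2 + 2 > 2)
while everything else stays at ONE.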


Re: [BETA RELEASE] Apache Cassandra 1.1.0-beta2 released

2012-03-29 Thread Sylvain Lebresne
To be clear, the incompatibility we've talked about does *not* concern
any of the 1.0 releases (you'll want to refer to the NEWS file for the
details on the upgrade path for those versions).
The incompatibility here is only between 1.1.0-beta1 and 1.1.0-beta2.

--
Sylvain

On Thu, Mar 29, 2012 at 2:50 AM, Mohit Anchlia mohitanch...@gmail.com wrote:
 We are currently using 1.0.0-2  version. Do we still need to migrate to the
 latest release of 1.0 before migrating to 1.1? Looks like incompatibility is
 only between 1.0.3-1.0.8.


 On Tue, Mar 27, 2012 at 6:42 AM, Benoit Perroud ben...@noisette.ch wrote:

 Thanks for the quick feedback.

 I will drop the schema then.

 Benoit.


  On 27 March 2012 at 14:50, Sylvain Lebresne sylv...@datastax.com wrote:
   Actually, there were a few changes to the on-disk format of the schema
   between beta1 and beta2, so upgrading is not supported between those two
   beta versions.
   Sorry for any inconvenience.
 
  --
  Sylvain
 
  On Tue, Mar 27, 2012 at 12:57 PM, Benoit Perroud ben...@noisette.ch
  wrote:
  Hi All,
 
  Thanks a lot for the release.
  I just upgraded my 1.1-beta1 to 1.1-beta2, and I get the following
  error :
 
   INFO 10:56:17,089 Opening
  /app/cassandra/data/data/system/LocationInfo/system-LocationInfo-hc-18
  (74 bytes)
   INFO 10:56:17,092 Opening
  /app/cassandra/data/data/system/LocationInfo/system-LocationInfo-hc-17
  (486 bytes)
  ERROR 10:56:17,306 Exception encountered during startup
  java.lang.NullPointerException
         at
  org.apache.cassandra.utils.ByteBufferUtil.string(ByteBufferUtil.java:163)
         at
  org.apache.cassandra.utils.ByteBufferUtil.string(ByteBufferUtil.java:120)
         at
  org.apache.cassandra.cql.jdbc.JdbcUTF8.getString(JdbcUTF8.java:77)
         at
  org.apache.cassandra.cql.jdbc.JdbcUTF8.compose(JdbcUTF8.java:97)
         at
  org.apache.cassandra.db.marshal.UTF8Type.compose(UTF8Type.java:35)
         at
  org.apache.cassandra.cql3.UntypedResultSet$Row.getString(UntypedResultSet.java:87)
         at
  org.apache.cassandra.config.CFMetaData.fromSchemaNoColumns(CFMetaData.java:1008)
         at
  org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1053)
         at
  org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:261)
         at
  org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:242)
         at
  org.apache.cassandra.db.DefsTable.loadFromTable(DefsTable.java:158)
         at
  org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:514)
         at
  org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:182)
         at
  org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:353)
         at
  org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:106)
 
 
  Thanks for your support,
 
  Benoit.
 
 
   On 27 March 2012 at 11:55, Sylvain Lebresne sylv...@datastax.com wrote:
  The Cassandra team is pleased to announce the release of the second
  beta for
  the future Apache Cassandra 1.1.
 
  Note that this is beta software and as such is *not* ready for
  production use.
 
  The goal of this release is to give a preview of what will become
  Cassandra
  1.1 and to get wider testing before the final release. All help in
  testing
  this release would be therefore greatly appreciated and please report
  any
  problem you may encounter[3,4]. Have a look at the change log[1] and
  the
  release notes[2] to see where Cassandra 1.1 differs from the previous
  series.
 
  Apache Cassandra 1.1.0-beta2[5] is available as usual from the
  cassandra
  website (http://cassandra.apache.org/download/) and a debian package
  is
  available using the 11x branch (see
  http://wiki.apache.org/cassandra/DebianPackaging).
 
  Thank you for your help in testing and have fun with it.
 
  [1]: http://goo.gl/nX7UL (CHANGES.txt)
  [2]: http://goo.gl/TB9ro (NEWS.txt)
  [3]: https://issues.apache.org/jira/browse/CASSANDRA
  [4]: user@cassandra.apache.org
  [5]:
  http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=refs/tags/cassandra-1.1.0-beta2
 
 
 
  --
  sent from my Nokia 3210



 --
 sent from my Nokia 3210




Re: opscenter

2012-03-29 Thread puneet loya
Thank you :)

I am using cqlsh now.

It's working.

Can we add an auto-increment field using cqlsh?





Re: Counter question

2012-03-29 Thread Tamar Fraenkel
Thanks! will do.
Tamar
Tamar Fraenkel
Senior Software Engineer, TOK Media

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956






Re: Cassandra 06x debian repo GONE

2012-03-29 Thread Sylvain Lebresne
Not sure what happened in there. Anyway, I've regenerated the Packages
file so hopefully that should be fixed (but it may take up to an hour
to propagate to apache mirrors).

That being said, I wouldn't advise relying on the 0.6 packages to be
available forever. Apache is not fond of us keeping too many different
Cassandra versions on the main servers so we may have to remove them
at some point (old versions will always be available at
http://archive.apache.org/dist/cassandra/, but debian packages are not
guaranteed to be there forever).

--
Sylvain

On Thu, Mar 29, 2012 at 6:14 AM, Ashley Martens amart...@ngmoco.com wrote:
 Using this apt source list:

 deb     http://www.apache.org/dist/cassandra/debian 06x main
 deb-src http://www.apache.org/dist/cassandra/debian 06x main

 E: Package 'cassandra' has no installation candidate

 Has the apt source changed?

 On Wed, Mar 28, 2012 at 7:18 PM, Michael Shuler mich...@pbandjelly.org
 wrote:

 On 03/28/2012 07:45 PM, Ashley Martens wrote:
  Where the F^$% have the packages for 06x gone?

 Easy there, pardner.

  http://www.apache.org/dist/cassandra/debian/dists/06x/main/binary-amd64/
 
  Is empty. What gives?

 While the repository Packages list does appear to be empty, the 0.6.13
 package does exist and you can grab it out of the pool:

 $ wget

 http://www.apache.org/dist/cassandra/debian/pool/main/c/cassandra/cassandra_0.6.13_all.deb
 $ sudo dpkg -i cassandra_0.6.13_all.deb

 --
 Kind regards,
 Michael




 --
 gpg --keyserver pgpkeys.mit.edu --recv-keys 0x23e861255b0d6abb
 Key fingerprint = 0E9E 0E22 3957 BB04 DD72  B093 23E8 6125 5B0D 6ABB



Re: [BETA RELEASE] Apache Cassandra 1.1.0-beta2 released

2012-03-29 Thread Mohit Anchlia
This is from NEWS.txt. So my question is: if we are on the 1.0.0-2 release, do
we still need to upgrade, since this impacts releases between 1.0.3-1.0.5?
-
If you are running a multi datacenter setup, you should upgrade to
  the latest 1.0.x (or 0.8.x) release before upgrading.  Versions
  0.8.8 and 1.0.3-1.0.5 generate cross-dc forwarding that is incompatible
  with 1.1.
-




RE: Any improvements in Cassandra JDBC driver ?

2012-03-29 Thread Jeremiah Jordan
There is no such thing as a pure insert that will give an error if the row
already exists. Everything is really UPDATE OR INSERT. Whether you say
UPDATE or INSERT, it will act like UPDATE OR INSERT: if the row is there it
gets overwritten, and if it isn't there it gets inserted.

-Jeremiah



From: Dinusha Dilrukshi [sdddilruk...@gmail.com]
Sent: Wednesday, March 28, 2012 11:41 PM
To: user@cassandra.apache.org
Subject: Any improvements in Cassandra JDBC driver ?

Hi,

We are using the Cassandra JDBC driver (found in [1]) to talk to the Cassandra
server using CQL and JDBC calls. One of the main disadvantages is that this
driver is not available in a public Maven repository. Currently we have to
check out the source and build it ourselves. Is there any possibility of
hosting this driver in a Maven repository?

Another limitation of the driver is that it does not support the INSERT query.
If we need to do an insert, it can be done using the UPDATE statement, so
basically the same query is used for both UPDATE and INSERT. As an example, if
you execute the following query:
update USER set 'username'=?, 'password'=? where key = ?
and the provided KEY already exists in the column family, then it will update
the existing columns. If the provided KEY does not already exist, then it will
do an insert.
Is the INSERT query option now available in the latest driver?

Are there any other improvements or features added to this driver recently?

Is this driver compatible with Cassandra 1.1.0, and will the changes made to
the driver be backward compatible with older Cassandra versions (1.0.0)?

[1]: http://code.google.com/a/apache-extras.org/p/cassandra-jdbc/

Regards,
~Dinusha~



Re: Cassandra 06x debian repo GONE

2012-03-29 Thread Ashley Martens
Got it. Moving the packages I use to another repo now.





-- 
gpg --keyserver pgpkeys.mit.edu --recv-keys 0x23e861255b0d6abb
Key fingerprint = 0E9E 0E22 3957 BB04 DD72  B093 23E8 6125 5B0D 6ABB


Re: Any improvements in Cassandra JDBC driver ?

2012-03-29 Thread Dinusha Dilrukshi
What I wanted to say was that this driver does not use the INSERT keyword.
Since CQL supports the INSERT keyword, and it is the more natural keyword for
adding new records, it would be more user friendly to use INSERT to add new
records rather than UPDATE.

Regards,
~Dinusha~







Cassandra multi DC

2012-03-29 Thread Alexandru Sicoe
Hello everyone,
 How are people running multi-DC Cassandra across remote locations? Are
VPNs used? Or some dedicated application proxies? What is the norm here?

Any advice is much appreciated,
Alex


Re: Cassandra multi DC

2012-03-29 Thread David Schairer
We built multi-DC right from the start -- two equal locations about 1000 miles 
apart but on the same provider backbone so there was reasonably robust and 
consistent connectivity between them.  We did this using software ipsec to set 
up tunnels between the private networks in each location, but we did this a 
year ago when SSL on cassandra connections wasn't viable yet.  Today I would 
drop the private networks and use SSL and certificate validation between nodes, 
as it'll be more reliable and reduce a layer of complexity.

As for multi-DC, we've seen _very_ little trouble and it's been a life saver on
several occasions when we or our provider needs to do major local maintenance
or is suffering a DoS attack on the facility.  It lets you do several cheap DCs
rather than one very expensive one.

--DRS




another DataStax OpsCenter question

2012-03-29 Thread Alexandru Sicoe
Hello,
 I am planning on testing OpsCenter to see how it can monitor a multi DC
cluster. There are 2 DCs each on a different side of a firewall. I've
configured NAT on the firewall to allow the communication between all
Cassandra nodes on ports 7000, 7199 and 9160. The cluster works fine.
However, when I start OpsCenter (obviously on one side of the firewall), the
OpsCenter CF gives me two schema versions in the cluster and basically
messes everything up. Plus, I can only see the nodes on the same side.

What are the requirements to let the OpsCenter on one side see the
Cassandra nodes and the OpsCenter agents on the other, and vice versa?

Is it possible to use OpsCenter across a firewall?

Cheers,
Alex


cassandra gui

2012-03-29 Thread Tim Dunphy
hey all,

 I have a new cassandra node that I've set up so that I can get better
acquainted with this technology. Thus far I've been using the
cassandra-cli and it's been a fun experience so far. However I know
that there are a few cassandra GUIs out there and I was just
wondering which ones you've used, had a good experience with, and can
recommend?

 So far I've heard of (but not used) DataStax OpsCenter as well as the
Apollo GUI and I think there may be others. Ideally what I'd like to
be able to do is both manage nodes and enter data into the keyspaces
using a graphical user interface.

Thanks
tim
-- 
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B


Re: Cassandra multi DC

2012-03-29 Thread Eric Tamme



No VPN for us.  We do it all over pure IPv6 circuits that limit access
at the router level.


-Eric


Re: Any improvements in Cassandra JDBC driver ?

2012-03-29 Thread R. Verlangen
The best would be to not use update / insert at all, but set / put / save.

Cheers!






-- 
With kind regards,

Robin Verlangen
www.robinverlangen.nl


Re: cassandra gui

2012-03-29 Thread Mucklow, Blaine (GE Energy)
DataStax OpsCenter really is pretty awesome.  I haven't tried anything
else, but I have had no issues with OpsCenter.




Re: cassandra gui

2012-03-29 Thread Nick Bailey
Just so you know, OpsCenter is a good tool for managing your cluster
and viewing data in your keyspaces/column families (data other than string
data currently isn't displayed in an extremely user-friendly way; it will
display as hex). Currently you cannot insert data into your cluster using
the OpsCenter GUI though. That is hopefully planned for some point in the
future, but it is hard to say when it will be added.




Re: cassandra gui

2012-03-29 Thread Tim Dunphy
Cool guys, thanks. I'll certainly give it a try now that my cassandra
setup is functioning well. But what about the Apollo GUI for
Cassandra? Has anyone else had any experience with that, and maybe knows
if it supports entering data into the cluster?

tx!






-- 
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B


Re: [BETA RELEASE] Apache Cassandra 1.1.0-beta2 released

2012-03-29 Thread Mohit Anchlia
Any updates?






Re: another DataStax OpsCenter question

2012-03-29 Thread Nick Bailey
This setup may be possible although there are a few potential issues.
Firstly, see: 
http://www.datastax.com/docs/opscenter/configure_opscenter#configuring-firewall-port-access

Basically the agents and OpsCenter communicate on ports 61620 and
61621 by default (those can be configured though). The agents will
contact the OpsCenter machine on port 61620. You can specify the
interface the agents will use to connect to this port when
installing/setting up the agents.

The OpsCenter machine will contact the agents on port 61621. Right now
the OpsCenter machine will only talk to the nodes using the
listen_address configured in your cassandra conf. We have a task to
fix this in the future so that you can configure the interface that
opscenter will contact each agent on. In the meantime though OpsCenter
will need to be able to hit the listen_address for each node.



Re: [BETA RELEASE] Apache Cassandra 1.1.0-beta2 released

2012-03-29 Thread Sylvain Lebresne
As the NEWS file says, only versions 1.0.3-1.0.5 generate those
cross-dc forwarding messages that are incompatible with 1.1. If
you're on 1.0.0, you shouldn't have that problem. To be more precise,
1.0.0 does not generate cross-dc forwarding messages at all, so you're
safe on that side.

--
Sylvain


Re: [BETA RELEASE] Apache Cassandra 1.1.0-beta2 released

2012-03-29 Thread Mohit Anchlia
Is cross-dc forwarding different than replication?

Re: [BETA RELEASE] Apache Cassandra 1.1.0-beta2 released

2012-03-29 Thread Sylvain Lebresne
On Thu, Mar 29, 2012 at 10:37 PM, Mohit Anchlia mohitanch...@gmail.com wrote:


 On Thu, Mar 29, 2012 at 1:32 PM, Sylvain Lebresne sylv...@datastax.com
 wrote:

 As the NEWS file says, only versions 1.0.3-1.0.5 generate
 those cross-dc forwarding messages that are incompatible with 1.1. If
 you're on 1.0.0, you shouldn't have that problem. To be more precise,
 1.0.0 does not generate cross-dc forwarding messages at all, so you're
 safe on that side.

 Is cross-dc forwarding different than replication?

Here cross-dc forwarding refers to an optimization of cross-dc
replication. If we're in some DC1 and need to replicate a write to 3
replicas in DC2, we send just one message cross-DC and have that
one node in DC2 forward the message to the 2 other replicas, instead of
just sending 3 messages cross-DC.
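
To make the saving concrete, here is a small self-contained Java sketch (a toy
model, not actual Cassandra code; every name in it is made up) that counts the
messages each strategy would send when a write is replicated to 3 replicas in a
remote DC:

import java.util.Arrays;
import java.util.List;

// Toy model only: counts how many messages cross the DC boundary when a
// write is replicated to several replicas in a remote data center, with
// and without cross-dc forwarding.
public class CrossDcForwardingSketch {

    // Naive replication: the DC1 coordinator sends one cross-DC message
    // directly to every replica in DC2.
    static int crossDcMessagesWithoutForwarding(List<String> dc2Replicas) {
        return dc2Replicas.size();
    }

    // Cross-dc forwarding: the DC1 coordinator sends a single cross-DC
    // message to one DC2 node, which relays it to the remaining DC2
    // replicas over the local (cheap) network.
    static int crossDcMessagesWithForwarding(List<String> dc2Replicas) {
        return dc2Replicas.isEmpty() ? 0 : 1;
    }

    static int intraDcForwardsWithForwarding(List<String> dc2Replicas) {
        return dc2Replicas.isEmpty() ? 0 : dc2Replicas.size() - 1;
    }

    public static void main(String[] args) {
        List<String> dc2Replicas = Arrays.asList("dc2-node1", "dc2-node2", "dc2-node3");
        System.out.println("without forwarding: "
                + crossDcMessagesWithoutForwarding(dc2Replicas) + " cross-DC messages");
        System.out.println("with forwarding:    "
                + crossDcMessagesWithForwarding(dc2Replicas) + " cross-DC message and "
                + intraDcForwardsWithForwarding(dc2Replicas) + " intra-DC forwards");
    }
}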

--
Sylvain



 --
 Sylvain

 On Thu, Mar 29, 2012 at 9:33 PM, Mohit Anchlia mohitanch...@gmail.com
 wrote:
  Any updates?
 
 
  On Thu, Mar 29, 2012 at 7:31 AM, Mohit Anchlia mohitanch...@gmail.com
  wrote:
 
  This is from NEWS.txt. So my question is: if we are on the 1.0.0-2 release,
  do we still need to upgrade, since this impacts releases
  1.0.3-1.0.5?
  -
  If you are running a multi datacenter setup, you should upgrade to
  the latest 1.0.x (or 0.8.x) release before upgrading.  Versions
  0.8.8 and 1.0.3-1.0.5 generate cross-dc forwarding that is
  incompatible with 1.1.
  -
 
  On Thu, Mar 29, 2012 at 4:51 AM, Sylvain Lebresne sylv...@datastax.com wrote:
 
  To be clear, the incompatibility we've talked about does *not* concern
  any of the 1.0 releases (you'll want to refer to the NEWS file for any
  details on the upgrade path for those versions).
  The incompatibility here is only between 1.1.0-beta1 and 1.1.0-beta2.
 
  --
  Sylvain
 
  On Thu, Mar 29, 2012 at 2:50 AM, Mohit Anchlia mohitanch...@gmail.com wrote:
   We are currently using the 1.0.0-2 version. Do we still need to migrate
   to the latest 1.0 release before migrating to 1.1? It looks like the
   incompatibility is only between 1.0.3-1.0.8.
  
  
   On Tue, Mar 27, 2012 at 6:42 AM, Benoit Perroud ben...@noisette.ch
   wrote:
  
   Thanks for the quick feedback.
  
   I will drop the schema then.
  
   Benoit.
  
  
   On 27 March 2012 at 14:50, Sylvain Lebresne sylv...@datastax.com wrote:
 Actually, there were a few changes to the on-disk format of the schema
 between beta1 and beta2, so upgrading is not supported between those
 two beta versions.
 Sorry for any inconvenience.
   
--
Sylvain
   
 On Tue, Mar 27, 2012 at 12:57 PM, Benoit Perroud ben...@noisette.ch wrote:
Hi All,
   
Thanks a lot for the release.
 I just upgraded from 1.1-beta1 to 1.1-beta2, and I get the following
 error:
   
  INFO 10:56:17,089 Opening /app/cassandra/data/data/system/LocationInfo/system-LocationInfo-hc-18 (74 bytes)
  INFO 10:56:17,092 Opening /app/cassandra/data/data/system/LocationInfo/system-LocationInfo-hc-17 (486 bytes)
 ERROR 10:56:17,306 Exception encountered during startup
 java.lang.NullPointerException
        at org.apache.cassandra.utils.ByteBufferUtil.string(ByteBufferUtil.java:163)
        at org.apache.cassandra.utils.ByteBufferUtil.string(ByteBufferUtil.java:120)
        at org.apache.cassandra.cql.jdbc.JdbcUTF8.getString(JdbcUTF8.java:77)
        at org.apache.cassandra.cql.jdbc.JdbcUTF8.compose(JdbcUTF8.java:97)
        at org.apache.cassandra.db.marshal.UTF8Type.compose(UTF8Type.java:35)
        at org.apache.cassandra.cql3.UntypedResultSet$Row.getString(UntypedResultSet.java:87)
        at org.apache.cassandra.config.CFMetaData.fromSchemaNoColumns(CFMetaData.java:1008)
        at org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1053)
        at org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:261)
        at org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:242)
        at org.apache.cassandra.db.DefsTable.loadFromTable(DefsTable.java:158)
        at org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:514)
        at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:182)
        at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:353)
        at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:106)
   
   
Thanks for your support,
   
Benoit.
   
   
 On 27 March 2012 at 11:55, Sylvain Lebresne sylv...@datastax.com wrote:
The Cassandra team is pleased to announce the release of the
second
beta for
the future Apache Cassandra 1.1.
   
Note that this is beta 

Re: counter column family

2012-03-29 Thread Tyler Hobbs
On Tue, Mar 27, 2012 at 9:35 AM, puneet loya puneetl...@gmail.com wrote:

 Now I want to have a field that increments with every row insertion. How do I
 do it in Cassandra?


There's nothing that will do it automatically.  You need to increment it
yourself.
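
As an illustration, a client-side increment-and-read with the Hector client
could look like the sketch below. It is only a sketch: the cluster, keyspace,
column family, key and column names ("MyCluster", "MyKeyspace", "page_counts",
"totals", "inserted_rows") are made-up placeholders, and it assumes a counter
column family (default_validation_class = CounterColumnType) already exists:

import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.beans.HCounterColumn;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;
import me.prettyprint.hector.api.query.CounterQuery;
import me.prettyprint.hector.api.query.QueryResult;

public class RowInsertCounter {
    public static void main(String[] args) {
        // Hypothetical cluster/keyspace names, for illustration only.
        Keyspace keyspace = HFactory.createKeyspace("MyKeyspace",
                HFactory.getOrCreateCluster("MyCluster", "localhost:9160"));
        StringSerializer ss = StringSerializer.get();

        // After each row insertion, the application bumps the counter itself.
        Mutator<String> mutator = HFactory.createMutator(keyspace, ss);
        mutator.addCounter("totals", "page_counts",
                HFactory.createCounterColumn("inserted_rows", 1L));
        mutator.execute();

        // Read the counter back.
        CounterQuery<String, String> query =
                HFactory.createCounterColumnQuery(keyspace, ss, ss);
        query.setColumnFamily("page_counts");
        query.setKey("totals");
        query.setName("inserted_rows");
        QueryResult<HCounterColumn<String>> result = query.execute();
        System.out.println("inserted_rows = " + result.get().getValue());
    }
}

Each insertion would be followed by such an increment call; there is no
server-side auto-increment to rely on.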

-- 
Tyler Hobbs
DataStax http://datastax.com/


really bad select performance

2012-03-29 Thread Chris Hart
Hi,

I have the following cluster:

136112946768375385385349842972707284580
ip address  MountainView  RAC1  Up  Normal  1.86 GB  20.00%  0
ip address  MountainView  RAC1  Up  Normal  2.17 GB  33.33%  56713727820156410577229101238628035242
ip address  MountainView  RAC1  Up  Normal  2.41 GB  33.33%  113427455640312821154458202477256070485
ip address  Rackspace     RAC1  Up  Normal  3.9 GB   13.33%  136112946768375385385349842972707284580

The following query runs quickly on all nodes except 1 MountainView node:

 select * from Access_Log where row_loaded = 0 limit 1;

There is a secondary index on row_loaded.  The query usually doesn't complete 
(but sometimes does) on the bad node and returns very quickly on all other 
nodes.  I've upped the rpc timeout to a full minute (rpc_timeout_in_ms: 60000) 
in the yaml, but it still often doesn't complete within a minute.  It seems just as 
likely to complete, and takes about the same amount of time, whether the limit is 
1, 100 or 1000.
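
For reference, the equivalent client-side query through Hector's secondary-index
support might look like the sketch below. It is only a sketch: the cluster and
keyspace names are placeholders, and it assumes row_loaded is indexed and stored
as the UTF-8 string "0"; if the column uses an integer validation class, the
serializer and value would need to change accordingly:

import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.beans.OrderedRows;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.query.IndexedSlicesQuery;
import me.prettyprint.hector.api.query.QueryResult;

public class AccessLogIndexQuery {
    public static void main(String[] args) {
        // Hypothetical cluster/keyspace names, for illustration only.
        Keyspace keyspace = HFactory.createKeyspace("MyKeyspace",
                HFactory.getOrCreateCluster("MyCluster", "localhost:9160"));
        StringSerializer ss = StringSerializer.get();

        IndexedSlicesQuery<String, String, String> query =
                HFactory.createIndexedSlicesQuery(keyspace, ss, ss, ss);
        query.setColumnFamily("Access_Log");
        // Assumes row_loaded is indexed and its value is the string "0".
        query.addEqualsExpression("row_loaded", "0");
        query.setStartKey("");                  // start from the beginning of the key range
        query.setRange("", "", false, 100);     // up to 100 columns per row
        query.setRowCount(1);                   // equivalent of LIMIT 1

        QueryResult<OrderedRows<String, String, String>> result = query.execute();
        System.out.println("rows returned: " + result.get().getCount());
    }
}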


Thanks for any help,
Chris


Re: cassandra gui

2012-03-29 Thread Markus Wiesenbacher | Codefreun.de
Hi,

Yes, you can insert data into Cassandra with Apollo; just try the demo
center: http://www.codefreun.de/apolloUI/

You can log in by just pressing the login button (autologin) and playing around
with it.

More info: http://codefreun.de/en/apollo-en

If any feature is missing, please let me know!

Best regards
Markus


-Original Message-
From: Tim Dunphy [mailto:bluethu...@gmail.com] 
Sent: Thursday, March 29, 2012 21:07
To: user@cassandra.apache.org
Subject: Re: cassandra gui

Cool, guys, thanks. I'll certainly give it a try now that my Cassandra setup
is functioning well. But what about the Apollo GUI for Cassandra? Has anyone
else had any experience with it, and does anyone know whether it supports
entering data into the cluster?

tx!


On Thu, Mar 29, 2012 at 2:33 PM, Nick Bailey n...@datastax.com wrote:
 Just so you know, OpsCenter is a good tool for managing your cluster 
 and viewing data in your keyspaces/column families (data other than string 
 data currently isn't displayed in an especially user-friendly way; it will 
 be shown as hex). Currently you cannot insert data into your cluster using 
 the OpsCenter GUI, though. That is planned for some point in the future, 
 hopefully, but it is hard to say when it will be added.

 On Thu, Mar 29, 2012 at 1:25 PM, Mucklow, Blaine (GE Energy) 
 blaine.muck...@ge.com wrote:
 DataStax OpsCenter really is pretty awesome.  I haven't tried 
 anything else, but I have had no issues with OpsCenter.

 On 3/29/12 1:53 PM, Tim Dunphy bluethu...@gmail.com wrote:

hey all,

 I have a new Cassandra node that I've set up so that I can get better 
acquainted with this technology. Thus far I've been using the 
cassandra-cli and it's been a fun experience. However, I know 
that there are a few Cassandra GUIs out there, and I was just 
wondering which ones you've used, had a good experience 
with, and can recommend.

 So far I've heard of (but not used) DataStax OpsCenter as well as 
the Apollo GUI, and I think there may be others. Ideally what I'd like 
to be able to do is both manage nodes and enter data into the 
keyspaces using a graphical user interface.

Thanks
tim
--
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B




--
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B