Chronologic

2011-08-06 Thread Sal Fuentes
Hello Scott,

I noticed you had previously posted this snippet:
https://gist.github.com/832414

I was curious to know whether there are any plans to open-source Chronologic
in the future. If so, I think the Cassandra and Ruby communities would thank
you for it. :)

-- 
Sal


Re: How to solve this kind of schema disagreement...

2011-08-06 Thread Dikang Gu
I have tried this, but the schema still does not agree in the cluster:

[default@unknown] describe cluster;
Cluster Information:
   Snitch: org.apache.cassandra.locator.SimpleSnitch
   Partitioner: org.apache.cassandra.dht.RandomPartitioner
   Schema versions:
UNREACHABLE: [192.168.1.28]
75eece10-bf48-11e0--4d205df954a7: [192.168.1.9, 192.168.1.25]
5a54ebd0-bd90-11e0--9510c23fceff: [192.168.1.27]

Any other suggestions to solve this?

I have some production data saved in the Cassandra cluster, so I cannot
afford data loss...

Thanks.

On Fri, Aug 5, 2011 at 8:55 PM, Benoit Perroud  wrote:

> Based on http://wiki.apache.org/cassandra/FAQ#schema_disagreement,
> 75eece10-bf48-11e0--4d205df954a7 own the majority, so shutdown and
> remove the schema* and migration* sstables from both 192.168.1.28 and
> 192.168.1.27
>
>
> 2011/8/5 Dikang Gu :
> > [default@unknown] describe cluster;
> > Cluster Information:
> >Snitch: org.apache.cassandra.locator.SimpleSnitch
> >Partitioner: org.apache.cassandra.dht.RandomPartitioner
> >Schema versions:
> > 743fe590-bf48-11e0--4d205df954a7: [192.168.1.28]
> > 75eece10-bf48-11e0--4d205df954a7: [192.168.1.9, 192.168.1.25]
> > 06da9aa0-bda8-11e0--9510c23fceff: [192.168.1.27]
> >
> >  three different schema versions in the cluster...
> > --
> > Dikang Gu
> > 0086 - 18611140205
> >
>



-- 
Dikang Gu

0086 - 18611140205


strange json2sstable cast exception

2011-08-06 Thread Dan Kuebrich
Having run into a recurring compaction problem due to a corrupt sstable (the
perceived row size was 13 petabytes or something), I used sstable2json -x to
exclude the key, and am now trying to re-import the sstable without it.
However, I'm running into the following exception:

Importing 2882 keys...
java.lang.ClassCastException: org.apache.cassandra.db.ExpiringColumn cannot
be cast to org.apache.cassandra.db.SuperColumn
at
org.apache.cassandra.db.SuperColumnSerializer.serialize(SuperColumn.java:363)
 at
org.apache.cassandra.db.SuperColumnSerializer.serialize(SuperColumn.java:347)
at
org.apache.cassandra.db.ColumnFamilySerializer.serializeForSSTable(ColumnFamilySerializer.java:88)
 at
org.apache.cassandra.db.ColumnFamilySerializer.serializeWithIndexes(ColumnFamilySerializer.java:107)
at
org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:147)
 at
org.apache.cassandra.tools.SSTableImport.importUnsorted(SSTableImport.java:290)
at
org.apache.cassandra.tools.SSTableImport.importJson(SSTableImport.java:252)
 at org.apache.cassandra.tools.SSTableImport.main(SSTableImport.java:476)
ERROR: org.apache.cassandra.db.ExpiringColumn cannot be cast to
org.apache.cassandra.db.SuperColumn

The CF is a SuperColumnFamily, if that's relevant.

1. What should I do about this problem?

2. (Somewhat unrelated) Our usage of this SCF has moved away from requiring
"super"-ness. Aside from missing out on the potential for future secondary
indexes, are we suffering any sort of operational/performance hit from this
classification?
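For context, the dump-and-reimport round trip described above looks roughly like the commands below. The sstable paths, keyspace/CF names and hex row key are all hypothetical, and the effect of `-x` (excluding a row from the JSON dump) is simulated on a scratch file so the result is visible without a running cluster:

```shell
# Real tool invocations (hypothetical paths and key), shown as comments:
#   sstable2json Standard1-g-5-Data.db -x deadbeef > dump.json
#   json2sstable -K Keyspace1 -c Standard1 dump.json Standard1-g-6-Data.db

# Simulation of what -x leaves behind in the dump. sstable2json writes one
# "hexkey": [...] line per row; -x drops the named row at dump time.
DUMP=$(mktemp)
printf '%s\n' \
  '{' \
  '"deadbeef": [["c1", "v1", 1312600000000]],' \
  '"cafebabe": [["c1", "v1", 1312600000000]]' \
  '}' > "$DUMP"
grep -v '^"deadbeef"' "$DUMP" > "${DUMP}.filtered"   # the corrupt row is gone
grep -c cafebabe "${DUMP}.filtered"                  # the healthy row survives
```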


How to release a customised Cassandra from Eclipse?

2011-08-06 Thread Alvin UW
Hello,

I set up a Cassandra project in Eclipse following
http://wiki.apache.org/cassandra/RunningCassandraInEclipse
Then I made a few modifications to it to form a customised Cassandra.
But I don't know how to release this new Cassandra from Eclipse as a jar
file to use on EC2.

I tried the "ant release" command on the command line. It successfully builds
the .jar file.
Then I typed java -jar apache-cassandra-0.7.0-beta1-SNAPSHOT.jar

"Error: Failed to load Main-Class manifest attribute from "

I edited a MANIFEST.MF like:
Manifest-Version: 1.0
Ant-Version: Apache Ant 1.7.1
Created-By: 16.3-b01 (Sun Microsystems Inc.)
Implementation-Title: Cassandra
Implementation-Version: 0.7.0-beta1-SNAPSHOT
Implementation-Vendor: Apache
Main-Class: org.apache.cassandra.thrift.CassandraDaemon

and tried again. The error is below:

Exception in thread "main" java.lang.NoClassDefFoundError:
org/apache/thrift/transport/TTransportException
Caused by: java.lang.ClassNotFoundException:
org.apache.thrift.transport.TTransportException
at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:319)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
at java.lang.ClassLoader.loadClass(ClassLoader.java:264)
at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:332)
Could not find the main class: org.apache.cassandra.thrift.CassandraDaemon.
Program will exit.

So what's the problem?


Thanks.
Alvin


Re: strange json2sstable cast exception

2011-08-06 Thread Jonathan Ellis
You should probably upgrade; it looks like you have a version that
doesn't support sstable2json with expiring columns.

On Sat, Aug 6, 2011 at 9:29 AM, Dan Kuebrich  wrote:
> Having run into a recurring compaction problem due to a corrupt sstable
> (perceived row size was 13 petabytes or something), I sstable2json -x 'd
>  the key and am now trying to re-import the sstable without it.  However,
> I'm running into the following exception:
> Importing 2882 keys...
> java.lang.ClassCastException: org.apache.cassandra.db.ExpiringColumn cannot
> be cast to org.apache.cassandra.db.SuperColumn
> at
> org.apache.cassandra.db.SuperColumnSerializer.serialize(SuperColumn.java:363)
> at
> org.apache.cassandra.db.SuperColumnSerializer.serialize(SuperColumn.java:347)
> at
> org.apache.cassandra.db.ColumnFamilySerializer.serializeForSSTable(ColumnFamilySerializer.java:88)
> at
> org.apache.cassandra.db.ColumnFamilySerializer.serializeWithIndexes(ColumnFamilySerializer.java:107)
> at
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:147)
> at
> org.apache.cassandra.tools.SSTableImport.importUnsorted(SSTableImport.java:290)
> at
> org.apache.cassandra.tools.SSTableImport.importJson(SSTableImport.java:252)
> at org.apache.cassandra.tools.SSTableImport.main(SSTableImport.java:476)
> ERROR: org.apache.cassandra.db.ExpiringColumn cannot be cast to
> org.apache.cassandra.db.SuperColumn
> The CF is a SuperColumnFamily, if that's relevant.
> 1. What should I do about this problem?
> 2. (somewhat unrelated) Our usage of this SCF has moved away from requiring
> "super"-ness.  Aside from missing out on the potential for future seconary
> indexes, are we suffering any sort of operational/performance hit from this
> classification?



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com


Re: How to release a customised Cassandra from Eclipse?

2011-08-06 Thread Jonathan Ellis
Look at bin/cassandra; you can't just run it with "java -jar".
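A minimal sketch of what bin/cassandra does instead of `java -jar`: it assembles a classpath from conf/ plus every jar under lib/ and the build output, then launches the daemon class. The directory layout and jar names below are hypothetical stand-ins, and the final java invocation is shown as a comment:

```shell
# Hypothetical layout standing in for a source checkout; bin/cassandra
# builds CLASSPATH the same way from the real conf/, lib/ and build/ dirs.
CASSANDRA_HOME=$(mktemp -d)
mkdir -p "$CASSANDRA_HOME/conf" "$CASSANDRA_HOME/lib" "$CASSANDRA_HOME/build"
touch "$CASSANDRA_HOME/lib/libthrift.jar" \
      "$CASSANDRA_HOME/build/apache-cassandra-0.7.0-beta1-SNAPSHOT.jar"

# Start from conf/ (so cassandra.yaml and log config are found), then
# append every jar:
CLASSPATH="$CASSANDRA_HOME/conf"
for jar in "$CASSANDRA_HOME"/lib/*.jar "$CASSANDRA_HOME"/build/*.jar; do
    CLASSPATH="$CLASSPATH:$jar"
done
echo "$CLASSPATH"

# With the classpath assembled, bin/cassandra effectively runs:
#   java -cp "$CLASSPATH" $JVM_OPTS org.apache.cassandra.thrift.CassandraDaemon
```

This is why `java -jar` fails with NoClassDefFoundError even after adding a Main-Class: the manifest route ignores everything in lib/.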

On Sat, Aug 6, 2011 at 10:43 AM, Alvin UW  wrote:
> Hello,
>
> I set up a Cassandra project in Eclipse following
> http://wiki.apache.org/cassandra/RunningCassandraInEclipse
> Then, I made a few modifications on it to form a customised Cassandra.
> But I don't know how can I release this new Cassandra from Eclipse as a jar
> file to use in EC2.
>
> I tried "ant release" command in command line. It can successful build .jar
> file.
> Then I typed java -jar apache-cassandra-0.7.0-beta1-SNAPSHOT.jar
>
> "Error: Failed to load Main-Class manifest attribute from "
>
> I edited a MANIFEST.MF like:
> Manifest-Version: 1.0
> Ant-Version: Apache Ant 1.7.1
> Created-By: 16.3-b01 (Sun Microsystems Inc.)
> Implementation-Title: Cassandra
> Implementation-Version: 0.7.0-beta1-SNAPSHOT
> Implementation-Vendor: Apache
> Main-Class: org.apache.cassandra.thrift.CassandraDaemon
>
> and tried again. the error is like below:
>
> Exception in thread "main" java.lang.NoClassDefFoundError:
> org/apache/thrift/transport/TTransportException
> Caused by: java.lang.ClassNotFoundException:
> org.apache.thrift.transport.TTransportException
>     at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:319)
>     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:264)
>     at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:332)
> Could not find the main class: org.apache.cassandra.thrift.CassandraDaemon.
> Program will exit.
>
> So what's the problem?
>
>
> Thanks.
> Alvin
>
>
>
>
>
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com


Re: How to release a customised Cassandra from Eclipse?

2011-08-06 Thread Alvin UW
Thanks.

I am a beginner.
I checked the bin folder under myCassandra; there are only some classes,
without an executable file.
After "ant release", I got the jar file from the build folder.




2011/8/6 Jonathan Ellis 

> look at bin/cassandra, you can't just run it with "java -jar"
>
> On Sat, Aug 6, 2011 at 10:43 AM, Alvin UW  wrote:
> > Hello,
> >
> > I set up a Cassandra project in Eclipse following
> > http://wiki.apache.org/cassandra/RunningCassandraInEclipse
> > Then, I made a few modifications on it to form a customised Cassandra.
> > But I don't know how can I release this new Cassandra from Eclipse as a
> jar
> > file to use in EC2.
> >
> > I tried "ant release" command in command line. It can successful build
> .jar
> > file.
> > Then I typed java -jar apache-cassandra-0.7.0-beta1-SNAPSHOT.jar
> >
> > "Error: Failed to load Main-Class manifest attribute from "
> >
> > I edited a MANIFEST.MF like:
> > Manifest-Version: 1.0
> > Ant-Version: Apache Ant 1.7.1
> > Created-By: 16.3-b01 (Sun Microsystems Inc.)
> > Implementation-Title: Cassandra
> > Implementation-Version: 0.7.0-beta1-SNAPSHOT
> > Implementation-Vendor: Apache
> > Main-Class: org.apache.cassandra.thrift.CassandraDaemon
> >
> > and tried again. the error is like below:
> >
> > Exception in thread "main" java.lang.NoClassDefFoundError:
> > org/apache/thrift/transport/TTransportException
> > Caused by: java.lang.ClassNotFoundException:
> > org.apache.thrift.transport.TTransportException
> > at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
> > at java.security.AccessController.doPrivileged(Native Method)
> > at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
> > at java.lang.ClassLoader.loadClass(ClassLoader.java:319)
> > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
> > at java.lang.ClassLoader.loadClass(ClassLoader.java:264)
> > at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:332)
> > Could not find the main class:
> org.apache.cassandra.thrift.CassandraDaemon.
> > Program will exit.
> >
> > So what's the problem?
> >
> >
> > Thanks.
> > Alvin
> >
> >
> >
> >
> >
> >
>
>
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of DataStax, the source for professional Cassandra support
> http://www.datastax.com
>


Re: strange json2sstable cast exception

2011-08-06 Thread Dan Kuebrich
Forgot to mention: the node is a new install of 0.8.2, though the data was
streamed over from nodes that have been upgraded over time from 0.7.
On Aug 6, 2011 10:47 AM, "Jonathan Ellis"  wrote:
> You should probably upgrade, it looks like you have a version that
> doesn't support sstable2json with expiring columns.
>
> On Sat, Aug 6, 2011 at 9:29 AM, Dan Kuebrich 
wrote:
>> Having run into a recurring compaction problem due to a corrupt sstable
>> (perceived row size was 13 petabytes or something), I sstable2json -x 'd
>>  the key and am now trying to re-import the sstable without it.  However,
>> I'm running into the following exception:
>> Importing 2882 keys...
>> java.lang.ClassCastException: org.apache.cassandra.db.ExpiringColumn
cannot
>> be cast to org.apache.cassandra.db.SuperColumn
>> at
>>
org.apache.cassandra.db.SuperColumnSerializer.serialize(SuperColumn.java:363)
>> at
>>
org.apache.cassandra.db.SuperColumnSerializer.serialize(SuperColumn.java:347)
>> at
>>
org.apache.cassandra.db.ColumnFamilySerializer.serializeForSSTable(ColumnFamilySerializer.java:88)
>> at
>>
org.apache.cassandra.db.ColumnFamilySerializer.serializeWithIndexes(ColumnFamilySerializer.java:107)
>> at
>>
org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:147)
>> at
>>
org.apache.cassandra.tools.SSTableImport.importUnsorted(SSTableImport.java:290)
>> at
>>
org.apache.cassandra.tools.SSTableImport.importJson(SSTableImport.java:252)
>> at org.apache.cassandra.tools.SSTableImport.main(SSTableImport.java:476)
>> ERROR: org.apache.cassandra.db.ExpiringColumn cannot be cast to
>> org.apache.cassandra.db.SuperColumn
>> The CF is a SuperColumnFamily, if that's relevant.
>> 1. What should I do about this problem?
>> 2. (somewhat unrelated) Our usage of this SCF has moved away from
requiring
>> "super"-ness.  Aside from missing out on the potential for future
seconary
>> indexes, are we suffering any sort of operational/performance hit from
this
>> classification?
>
>
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of DataStax, the source for professional Cassandra support
> http://www.datastax.com


Re: Compacting large row

2011-08-06 Thread Patrik Modesto
On Fri, Aug 5, 2011 at 15:02, Jonathan Ellis  wrote:
> It's logging the actual key, not the md5.  It's just converting the
> key bytes to hex first to make sure it's printable.

Great! I'm using MD5 as a key so I didn't notice that.

Thanks,
P.


Re: How to release a customised Cassandra from Eclipse?

2011-08-06 Thread aaron morton
Have a look at this file in the source repo 
https://github.com/apache/cassandra/blob/trunk/bin/cassandra

Try using "ant artifacts" and look in the build/dist dir.

cheers
 
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 7 Aug 2011, at 03:58, Alvin UW wrote:

> 
> Thanks.
> 
> I am a beginner.
> I checked bin folder under myCassandra. There are only some classes without 
> executable file.
> after "ant release", I got the jar file from build folder.
> 
> 
> 
> 
> 2011/8/6 Jonathan Ellis 
> look at bin/cassandra, you can't just run it with "java -jar"
> 
> On Sat, Aug 6, 2011 at 10:43 AM, Alvin UW  wrote:
> > Hello,
> >
> > I set up a Cassandra project in Eclipse following
> > http://wiki.apache.org/cassandra/RunningCassandraInEclipse
> > Then, I made a few modifications on it to form a customised Cassandra.
> > But I don't know how can I release this new Cassandra from Eclipse as a jar
> > file to use in EC2.
> >
> > I tried "ant release" command in command line. It can successful build .jar
> > file.
> > Then I typed java -jar apache-cassandra-0.7.0-beta1-SNAPSHOT.jar
> >
> > "Error: Failed to load Main-Class manifest attribute from "
> >
> > I edited a MANIFEST.MF like:
> > Manifest-Version: 1.0
> > Ant-Version: Apache Ant 1.7.1
> > Created-By: 16.3-b01 (Sun Microsystems Inc.)
> > Implementation-Title: Cassandra
> > Implementation-Version: 0.7.0-beta1-SNAPSHOT
> > Implementation-Vendor: Apache
> > Main-Class: org.apache.cassandra.thrift.CassandraDaemon
> >
> > and tried again. the error is like below:
> >
> > Exception in thread "main" java.lang.NoClassDefFoundError:
> > org/apache/thrift/transport/TTransportException
> > Caused by: java.lang.ClassNotFoundException:
> > org.apache.thrift.transport.TTransportException
> > at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
> > at java.security.AccessController.doPrivileged(Native Method)
> > at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
> > at java.lang.ClassLoader.loadClass(ClassLoader.java:319)
> > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
> > at java.lang.ClassLoader.loadClass(ClassLoader.java:264)
> > at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:332)
> > Could not find the main class: org.apache.cassandra.thrift.CassandraDaemon.
> > Program will exit.
> >
> > So what's the problem?
> >
> >
> > Thanks.
> > Alvin
> >
> >
> >
> >
> >
> >
> 
> 
> 
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of DataStax, the source for professional Cassandra support
> http://www.datastax.com
> 



Re: How to solve this kind of schema disagreement...

2011-08-06 Thread aaron morton
After the restart, what was in the logs for the 1.27 machine from the
Migration.java logger? Some of the messages will start with "Applying
migration".

You should have shut down both of the nodes, then deleted the schema* and 
migration* system sstables, then restarted one of them and watched to see if it 
got to schema agreement. 
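The procedure being described (from FAQ#schema_disagreement) can be sketched as a shell session. The data directory and sstable filenames below are assumptions (check data_file_directories in cassandra.yaml for the real location; it is commonly /var/lib/cassandra/data), and the deletion step is simulated on a scratch directory:

```shell
# Simulated on a scratch directory standing in for <data_dir>/system.
DATA_DIR=$(mktemp -d)/system
mkdir -p "$DATA_DIR"
touch "$DATA_DIR/Schema-f-1-Data.db" \
      "$DATA_DIR/Migrations-f-1-Data.db" \
      "$DATA_DIR/LocationInfo-f-1-Data.db"

# 1. Shut down every node whose schema version is in the minority.
# 2. Delete ONLY the schema* and migration* sstables; keep everything else:
rm "$DATA_DIR"/Schema-* "$DATA_DIR"/Migrations-*

# 3. Restart one node at a time and watch "describe cluster" until the
#    restarted node picks up the majority schema version.
ls "$DATA_DIR"    # only LocationInfo-f-1-Data.db should remain
```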

Cheers
  
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 6 Aug 2011, at 22:56, Dikang Gu wrote:

> I have tried this, but the schema still does not agree in the cluster:
> 
> [default@unknown] describe cluster;
> Cluster Information:
>Snitch: org.apache.cassandra.locator.SimpleSnitch
>Partitioner: org.apache.cassandra.dht.RandomPartitioner
>Schema versions: 
>   UNREACHABLE: [192.168.1.28]
>   75eece10-bf48-11e0--4d205df954a7: [192.168.1.9, 192.168.1.25]
>   5a54ebd0-bd90-11e0--9510c23fceff: [192.168.1.27]
> 
> Any other suggestions to solve this?
> 
> Because I have some production data saved in the cassandra cluster, so I can 
> not afford data lost...
> 
> Thanks.
> 
> On Fri, Aug 5, 2011 at 8:55 PM, Benoit Perroud  wrote:
> Based on http://wiki.apache.org/cassandra/FAQ#schema_disagreement,
> 75eece10-bf48-11e0--4d205df954a7 own the majority, so shutdown and
> remove the schema* and migration* sstables from both 192.168.1.28 and
> 192.168.1.27
> 
> 
> 2011/8/5 Dikang Gu :
> > [default@unknown] describe cluster;
> > Cluster Information:
> >Snitch: org.apache.cassandra.locator.SimpleSnitch
> >Partitioner: org.apache.cassandra.dht.RandomPartitioner
> >Schema versions:
> > 743fe590-bf48-11e0--4d205df954a7: [192.168.1.28]
> > 75eece10-bf48-11e0--4d205df954a7: [192.168.1.9, 192.168.1.25]
> > 06da9aa0-bda8-11e0--9510c23fceff: [192.168.1.27]
> >
> >  three different schema versions in the cluster...
> > --
> > Dikang Gu
> > 0086 - 18611140205
> >
> 
> 
> 
> -- 
> Dikang Gu
> 
> 0086 - 18611140205
> 



Re: column metadata and sstable

2011-08-06 Thread aaron morton
AFAIK it just makes it easier for client APIs to understand what data type to
use, e.g. it can give your code a long rather than a str / byte array.

Personally I'm on the fence about using it. It has some advantages for the
client, but given that the server does not really need the information, it
feels a little like additional coupling that's not needed.
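For reference, this is the kind of declaration under discussion: a 0.7/0.8-era cassandra-cli schema fragment setting per-column validation classes. The keyspace, column family and column names are made up for illustration; it would be fed to a node with something like `cassandra-cli -h localhost -f <file>`:

```shell
# Write a hypothetical schema script; column_metadata attaches a
# validation_class (the data type hint being discussed) to named columns.
SCHEMA=$(mktemp)
cat > "$SCHEMA" <<'EOF'
create column family Users
    with comparator = UTF8Type
    and column_metadata = [
        {column_name: age,   validation_class: LongType},
        {column_name: email, validation_class: UTF8Type}
    ];
EOF
cat "$SCHEMA"
```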

Cheers

-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 6 Aug 2011, at 11:58, Yi Yang wrote:

> Dear all,
> 
> I'm wondering what's the advantage of assigning column metadata when NOT 
> using secondary indices.
> 
> I've gone through the SSTable internals and found that it won't do such a 
> conversion. Thus I think the only advantage we get via column metadata is 
> a data validation type, am I correct?
> 
> Thanks.
> Steve



Re: Planet Cassandra (an aggregation site for Cassandra News)

2011-08-06 Thread Edward Capriolo
On Thu, Aug 4, 2011 at 5:12 AM, Boris Yen  wrote:

> Looking forward to it. ^^
>
> On Thu, Aug 4, 2011 at 1:56 PM, Eldad Yamin  wrote:
>
>> Great! I hope it will be open soon!
>>
>>
>> On Wed, Aug 3, 2011 at 10:33 PM, Ed Anuff  wrote:
>>
>>> Awesome, great news!
>>>
>>>
>>> On Wed, Aug 3, 2011 at 11:53 AM, Lynn Bender  wrote:
>>>
 Greetings all,

 I just wanted to send a note out to let everyone know about Planet
 Cassandra -- an aggregation site for Cassandra news and blogs. Andrew
 Llavore from DataStax and I built the site.

 We are currently waiting for approval from the Apache Software
 Foundation before we publicly launch. However, in the meantime, we'd love 
 to
 hear from you. If you have any favorite Cassandra-related blogs, or blogs
 that frequently contain quality Cassandra content, please send us the URL,
 so that we can contact the author about including a site feed.

 If you have any questions or comments, please send them to
 pla...@geekaustin.org.

 -Lynn Bender

 --
 -Lynn Bender
 http://geekaustin.org
 http://linuxagainstpoverty.org
 http://twitter.com/linearb
 http://twitter.com/geekaustin




>>>
>>
>
I have started a blog to support the High Performance Cassandra Cookbook:

http://www.jointhegrid.com/highperfcassandra/

I am going to use the blog to continue writing about features and tips for
Cassandra in the writing style used for the book.

Lynn, please consider it for syndication. All others, please enjoy.


Re: Dropped messages

2011-08-06 Thread aaron morton
Just added this to the wiki 

http://wiki.apache.org/cassandra/FAQ#dropped_messages

Cheers

-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 6 Aug 2011, at 10:53, Philippe wrote:

> Hi,
> I see lines like this in my log file
>  INFO [ScheduledTasks:1] 2011-08-06 00:51:57,650 MessagingService.java (line 
> 586) 358 MUTATION messages dropped in server lifetime
>  INFO [ScheduledTasks:1] 2011-08-06 00:51:57,658 MessagingService.java (line 
> 586) 297 READ messages dropped in server lifetime
>  INFO [ScheduledTasks:1] 2011-08-06 00:51:57,658 MessagingService.java (line 
> 586) 4696 RANGE_SLICE messages dropped in server lifetime
> 
> How worried should I be?
> Are they reported as timeout exceptions on the client?
> 
> Thanks
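A quick way to tally these counters per message type, simulated here on the three log lines quoted above (on a real node the log path is an assumption, typically /var/log/cassandra/system.log):

```shell
# Reproduce the quoted log lines in a scratch file.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
 INFO [ScheduledTasks:1] 2011-08-06 00:51:57,650 MessagingService.java (line 586) 358 MUTATION messages dropped in server lifetime
 INFO [ScheduledTasks:1] 2011-08-06 00:51:57,658 MessagingService.java (line 586) 297 READ messages dropped in server lifetime
 INFO [ScheduledTasks:1] 2011-08-06 00:51:57,658 MessagingService.java (line 586) 4696 RANGE_SLICE messages dropped in server lifetime
EOF

# Sum the dropped-message counts per verb: the count and verb sit at fixed
# offsets from the end of each "messages dropped" line.
awk '/messages dropped/ { drops[$(NF-5)] += $(NF-6) }
     END { for (v in drops) print v, drops[v] }' "$LOG"
```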



Re: no stack trace :(

2011-08-06 Thread aaron morton
Do you have MX4J in the classpath?

It feels like an error from there: MalformedURLException is a checked exception
and it's only used in the Cassandra code when reading a file, and "agent" sounds
like a JMX agent.

I had a quick search through the MX4J source and, while I could not find an
exact match, there are a lot of places where MalformedURLException is thrown.

Hope that helps. 

   
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 6 Aug 2011, at 07:02, Dean Hiller wrote:

> Nope, just one, and then it exits and quits (won't start up), and I can't 
> compile the code since the repository is down :( 
> 
> I am working on web-tier setup and will come back hoping the Cloudera 
> repository goes back online in the next day (crossing my fingers).
> 
> Dean
> 
> On Fri, Aug 5, 2011 at 12:07 PM, mcasandra  wrote:
> Are you seeing a lot of these errors? Can you try -XX:-OmitStackTraceInFastThrow?
> 
> --
> View this message in context: 
> http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/no-stack-trace-tp6654590p6657485.html
> Sent from the cassandra-u...@incubator.apache.org mailing list archive at 
> Nabble.com.
> 
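The flag mcasandra mentions is spelled -XX:-OmitStackTraceInFastThrow; it re-enables full stack traces for "hot" exceptions that HotSpot would otherwise start throwing without a trace. In 0.7/0.8 it can be appended to JVM_OPTS in conf/cassandra-env.sh (path assumed; simulated here on a scratch copy):

```shell
# Scratch file standing in for conf/cassandra-env.sh.
ENV_FILE=$(mktemp)
echo 'JVM_OPTS="$JVM_OPTS -ea"' > "$ENV_FILE"

# Append the flag so the JVM keeps emitting full stack traces for
# frequently-thrown exceptions:
echo 'JVM_OPTS="$JVM_OPTS -XX:-OmitStackTraceInFastThrow"' >> "$ENV_FILE"
grep OmitStackTrace "$ENV_FILE"
```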



Re: How to solve this kind of schema disagreement...

2011-08-06 Thread Dikang Gu
I restarted both nodes, deleted the schema* and migration* sstables, and
restarted them.

The current cluster looks like this:
[default@unknown] describe cluster;
Cluster Information:
   Snitch: org.apache.cassandra.locator.SimpleSnitch
   Partitioner: org.apache.cassandra.dht.RandomPartitioner
   Schema versions:
75eece10-bf48-11e0--4d205df954a7: [192.168.1.28, 192.168.1.9,
192.168.1.25]
5a54ebd0-bd90-11e0--9510c23fceff: [192.168.1.27]

1.28 looks good, but 1.27 still cannot reach schema agreement...

I have tried several times, even deleting all the data on 1.27 and rejoining it
as a new node, but it is still unhappy.

And the ring looks like this:

Address         DC          Rack   Status  State    Load     Owns     Token
                                                                      127605887595351923798765477786913079296
192.168.1.28    datacenter1 rack1  Up      Normal   8.38 GB  25.00%   1
192.168.1.25    datacenter1 rack1  Up      Normal   8.55 GB  34.01%   57856537434773737201679995572503935972
192.168.1.27    datacenter1 rack1  Up      Joining  1.81 GB  24.28%   99165710459060760249270263771474737125
192.168.1.9     datacenter1 rack1  Up      Normal   8.75 GB  16.72%   127605887595351923798765477786913079296

1.27 seems unable to join the cluster; it just hangs there...

Any suggestions?

Thanks.


On Sun, Aug 7, 2011 at 10:01 AM, aaron morton wrote:

> After there restart you what was in the  logs for the 1.27 machine  from
> the Migration.java logger ? Some of the messages will start with "Applying
> migration"
>
> You should have shut down both of the nodes, then deleted the schema* and
> migration* system sstables, then restarted one of them and watched to see if
> it got to schema agreement.
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 6 Aug 2011, at 22:56, Dikang Gu wrote:
>
> I have tried this, but the schema still does not agree in the cluster:
>
> [default@unknown] describe cluster;
> Cluster Information:
>Snitch: org.apache.cassandra.locator.SimpleSnitch
>Partitioner: org.apache.cassandra.dht.RandomPartitioner
>Schema versions:
> UNREACHABLE: [192.168.1.28]
> 75eece10-bf48-11e0--4d205df954a7: [192.168.1.9, 192.168.1.25]
>  5a54ebd0-bd90-11e0--9510c23fceff: [192.168.1.27]
>
> Any other suggestions to solve this?
>
> Because I have some production data saved in the cassandra cluster, so I
> can not afford data lost...
>
> Thanks.
>
> On Fri, Aug 5, 2011 at 8:55 PM, Benoit Perroud  wrote:
>
>> Based on http://wiki.apache.org/cassandra/FAQ#schema_disagreement,
>> 75eece10-bf48-11e0--4d205df954a7 own the majority, so shutdown and
>> remove the schema* and migration* sstables from both 192.168.1.28 and
>> 192.168.1.27
>>
>>
>> 2011/8/5 Dikang Gu :
>> > [default@unknown] describe cluster;
>> > Cluster Information:
>> >Snitch: org.apache.cassandra.locator.SimpleSnitch
>> >Partitioner: org.apache.cassandra.dht.RandomPartitioner
>> >Schema versions:
>> > 743fe590-bf48-11e0--4d205df954a7: [192.168.1.28]
>> > 75eece10-bf48-11e0--4d205df954a7: [192.168.1.9, 192.168.1.25]
>> > 06da9aa0-bda8-11e0--9510c23fceff: [192.168.1.27]
>> >
>> >  three different schema versions in the cluster...
>> > --
>> > Dikang Gu
>> > 0086 - 18611140205
>> >
>>
>
>
>
> --
> Dikang Gu
>
> 0086 - 18611140205
>
>
>


-- 
Dikang Gu

0086 - 18611140205


Re: move one node for load re-balancing then it status stuck at "Leaving"

2011-08-06 Thread Yan Chunlu
Is it possible that the Cassandra implementation only counts live nodes?

For example:
"nodetool move node3" causes node3 to enter "Leaving"; Cassandra then iterates
over the endpoints and finds only node1 and node2. So there are 2 endpoints,
but RF=3, and the exception is raised.

Is that true?



On Fri, Aug 5, 2011 at 3:20 PM, Yan Chunlu  wrote:

> nothing...
>
> nodetool -h node3 netstats
> Mode: Normal
> Not sending any streams.
>  Nothing streaming from /10.28.53.11
> Pool Name     Active   Pending  Completed
> Commands      n/a      0        186669475
> Responses     n/a      0        117986130
>
>
> nodetool -h node3 compactionstats
> compaction type: n/a
> column family: n/a
> bytes compacted: n/a
> bytes total in progress: n/a
> pending tasks: 0
>
>
>
> On Fri, Aug 5, 2011 at 1:47 PM, mcasandra  wrote:
> > Check things like netstats, disk space etc to see why it's in Leaving
> state.
> > Anything in the logs that shows Leaving?
> >
> > --
> > View this message in context:
> http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/move-one-node-for-load-re-balancing-then-it-status-stuck-at-Leaving-tp6655168p6655326.html
> > Sent from the cassandra-u...@incubator.apache.org mailing list archive
> at Nabble.com.
> >
>


Re: move one node for load re-balancing then it status stuck at "Leaving"

2011-08-06 Thread Dikang Gu
Yes, I think you are right.

The "nodetool move" will move the keys on the node to the other two nodes,
but the required replication is 3 and you will only have 2 live nodes after
the move, so you get the exception.
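The reasoning above can be sketched as a trivial comparison. This is purely illustrative (not Cassandra's actual code): with one of three nodes Leaving, the live endpoint count falls below the replication factor, which is what surfaces as an Unavailable-style error:

```shell
RF=3      # replication factor required by the keyspace
LIVE=2    # node3 is Leaving, so only node1 and node2 count as live

# The check that conceptually fails during the move:
if [ "$LIVE" -lt "$RF" ]; then
    echo "cannot satisfy RF=$RF with only $LIVE live endpoints"
fi
```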


On Sun, Aug 7, 2011 at 2:03 PM, Yan Chunlu  wrote:

> is that possible that the implements of cassandra only calculate live
> nodes?
>
> for example:
> "node move node3" cause node3 "Leaving", then cassandra iterate over the
> endpoints and found node1 and node2. so the endpoints is 2, but RF=3,
> Exception raised.
>
> is that true?
>
>
>
> On Fri, Aug 5, 2011 at 3:20 PM, Yan Chunlu  wrote:
>
>> nothing...
>>
>> nodetool -h node3 netstats
>> Mode: Normal
>> Not sending any streams.
>>  Nothing streaming from /10.28.53.11
>> Pool NameActive   Pending  Completed
>> Commandsn/a 0  186669475
>> Responses   n/a 0  117986130
>>
>>
>> nodetool -h node3 compactionstats
>> compaction type: n/a
>> column family: n/a
>> bytes compacted: n/a
>> bytes total in progress: n/a
>> pending tasks: 0
>>
>>
>>
>> On Fri, Aug 5, 2011 at 1:47 PM, mcasandra  wrote:
>> > Check things like netstats, disk space etc to see why it's in Leaving
>> state.
>> > Anything in the logs that shows Leaving?
>> >
>> > --
>> > View this message in context:
>> http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/move-one-node-for-load-re-balancing-then-it-status-stuck-at-Leaving-tp6655168p6655326.html
>> > Sent from the cassandra-u...@incubator.apache.org mailing list archive
>> at Nabble.com.
>> >
>>
>
>


-- 
Dikang Gu

0086 - 18611140205


Re: Cassandra encountered an internal error processing this request: TApplicationError type: 6 message:Internal error

2011-08-06 Thread aaron morton
The NPE is fixed in 0.8.2 see 
https://github.com/apache/cassandra/blob/cassandra-0.8.2/CHANGES.txt#L13

Cheers

-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 5 Aug 2011, at 12:46, Dikang Gu wrote:

> Sure, I can find the stack trace for some exceptions:
> 
> ERROR [pool-2-thread-132] 2011-07-23 13:29:04,869 Cassandra.java (line 3210) 
> Internal error processing get_range_slices
> java.lang.NullPointerException
> at org.apache.cassandra.db.ColumnFamily.diff(ColumnFamily.java:298)
> at org.apache.cassandra.db.ColumnFamily.diff(ColumnFamily.java:406)
> at 
> org.apache.cassandra.service.RowRepairResolver.maybeScheduleRepairs(RowRepairResolver.java:103)
> at 
> org.apache.cassandra.service.RangeSliceResponseResolver$2.getReduced(RangeSliceResponseResolver.java:120)
> at 
> org.apache.cassandra.service.RangeSliceResponseResolver$2.getReduced(RangeSliceResponseResolver.java:85)
> at 
> org.apache.cassandra.utils.ReducingIterator.computeNext(ReducingIterator.java:74)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
> at 
> org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:715)
> at 
> org.apache.cassandra.thrift.CassandraServer.get_range_slices(CassandraServer.java:617)
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$get_range_slices.process(Cassandra.java:3202)
> at 
> org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889)
> at 
> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:636)
>  INFO [NonPeriodicTasks:1] 2011-07-23 13:38:23,284 ColumnFamilyStore.java 
> (line 1013) Enqueuing flush of Memtable-MessageKey@2036597133(5020/62750 
> serialized/live bytes, 61 ops)
> 
> But I cannot find it for some others:
> 
> ERROR [pool-2-thread-181] 2011-07-27 11:20:39,550 Cassandra.java (line 3210) 
> Internal error processing get_range_slices
> java.lang.NullPointerException
>  INFO [NonPeriodicTasks:1] 2011-07-27 11:22:43,561 ColumnFamilyStore.java 
> (line 1013) Enqueuing flush of Memtable-MessageKey@1288355086(74715/933937 
> serialized/live bytes, 773 ops)
> 
> Why does this happen?
> 
> Thanks.
> 
> On Fri, Aug 5, 2011 at 6:26 AM, aaron morton  wrote:
> The error log will contain a call stack, we need that. 
> 
> e.g. 
> 
> Failed with exception java.io.IOException:java.lang.NullPointerException
> ERROR 15:22:33,528 Failed with exception 
> java.io.IOException:java.lang.NullPointerException
> java.io.IOException: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:341)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:133)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1114)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:241)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:456)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.cassandra.hadoop.ColumnFamilyRecordReader.getCurrentKey(ColumnFamilyRecordReader.java:82)
>   at 
> org.apache.cassandra.hadoop.ColumnFamilyRecordReader.getCurrentKey(ColumnFamilyRecordReader.java:53)
>   at 
> org.apache.hadoop.hive.cassandra.input.HiveCassandraStandardColumnInputFormat$2.next(HiveCassandraStandardColumnInputFormat.java:164)
>   at 
> org.apache.hadoop.hive.cassandra.input.HiveCassandraStandardColumnInputFormat$2.next(HiveCassandraStandardColumnInputFormat.java:111)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:326)
>   ... 10 more
> 
> Cheers
> 
> -
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
> 
> On 4 Aug 2011, at 15:26, Dikang Gu wrote:
> 
>> Yes, I do find the error log! 
>> 
>> ERROR [pool-2-thread-63] 2011-08-04 13:23:54,138 Cassandra.java (line 3210) 
>> Internal error processing get_range_slices
>> java.lang.NullPointerException
>> 
>> I'm using the cassandra-0.8.1, is this a known bug?
>> 
>
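The trace above shows the NullPointerException firing inside ColumnFamily.diff() while RowRepairResolver compares the row versions returned by different replicas. A minimal sketch of that failure pattern, using hypothetical stand-in classes rather than Cassandra's actual ones (assumption: one replica returns no data for the key, so its version is null):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a row's column family contents; the class
// and method names here are illustrative, not Cassandra's real API.
class RowVersion {
    final Map<String, String> columns;

    RowVersion(Map<String, String> columns) {
        this.columns = columns;
    }

    // Shaped like ColumnFamily.diff(): return the columns 'other' is
    // missing relative to this version, or null if there are none.
    RowVersion diff(RowVersion other) {
        Map<String, String> missing = new HashMap<>(this.columns);
        // Throws NullPointerException when 'other' is null, i.e. when a
        // replica returned no data at all for the row being resolved.
        missing.keySet().removeAll(other.columns.keySet());
        return missing.isEmpty() ? null : new RowVersion(missing);
    }
}

public class DiffNpeDemo {
    public static void main(String[] args) {
        RowVersion replicaA = new RowVersion(Map.of("name", "v1"));
        RowVersion replicaB = null; // one replica had nothing for this key
        try {
            replicaA.diff(replicaB);
        } catch (NullPointerException e) {
            System.out.println("NPE while diffing replica responses");
        }
    }
}
```

This only illustrates the shape of the bug; the actual fix is the server-side change in 0.8.2 referenced in the next message, not a client-side workaround.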

Re: Cassandra encountered an internal error processing this request: TApplicationError type: 6 message:Internal error

2011-08-06 Thread Dikang Gu
That's great!

Thanks Aaron.

On Sun, Aug 7, 2011 at 2:21 PM, aaron morton wrote:

> The NPE is fixed in 0.8.2 see
> https://github.com/apache/cassandra/blob/cassandra-0.8.2/CHANGES.txt#L13
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 5 Aug 2011, at 12:46, Dikang Gu wrote:
>
> Sure, I can find the stack trace for some exceptions:
>
> ERROR [pool-2-thread-132] 2011-07-23 13:29:04,869 Cassandra.java (line
> 3210) Internal error processing get_range_slices
> java.lang.NullPointerException
> at org.apache.cassandra.db.ColumnFamily.diff(ColumnFamily.java:298)
> at org.apache.cassandra.db.ColumnFamily.diff(ColumnFamily.java:406)
> at
> org.apache.cassandra.service.RowRepairResolver.maybeScheduleRepairs(RowRepairResolver.java:103)
> at
> org.apache.cassandra.service.RangeSliceResponseResolver$2.getReduced(RangeSliceResponseResolver.java:120)
> at
> org.apache.cassandra.service.RangeSliceResponseResolver$2.getReduced(RangeSliceResponseResolver.java:85)
> at
> org.apache.cassandra.utils.ReducingIterator.computeNext(ReducingIterator.java:74)
> at
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
> at
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
> at
> org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:715)
> at
> org.apache.cassandra.thrift.CassandraServer.get_range_slices(CassandraServer.java:617)
> at
> org.apache.cassandra.thrift.Cassandra$Processor$get_range_slices.process(Cassandra.java:3202)
> at
> org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889)
> at
> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:636)
>  INFO [NonPeriodicTasks:1] 2011-07-23 13:38:23,284 ColumnFamilyStore.java
> (line 1013) Enqueuing flush of Memtable-MessageKey@2036597133(5020/62750
> serialized/live bytes, 61 ops)
>
> But I cannot find one for some others:
>
> ERROR [pool-2-thread-181] 2011-07-27 11:20:39,550 Cassandra.java (line
> 3210) Internal error processing get_range_slices
> java.lang.NullPointerException
>  INFO [NonPeriodicTasks:1] 2011-07-27 11:22:43,561 ColumnFamilyStore.java
> (line 1013) Enqueuing flush of Memtable-MessageKey@1288355086(74715/933937
> serialized/live bytes, 773 ops)
>
> Why does this happen?
>
> Thanks.
>
> On Fri, Aug 5, 2011 at 6:26 AM, aaron morton wrote:
>
>> The error log will contain a call stack, we need that.
>>
>> e.g.
>>
>> Failed with exception java.io.IOException:java.lang.NullPointerException
>> ERROR 15:22:33,528 Failed with exception
>> java.io.IOException:java.lang.NullPointerException
>> java.io.IOException: java.lang.NullPointerException
>>  at
>> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:341)
>> at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:133)
>>  at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1114)
>> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
>>  at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:241)
>> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:456)
>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>  at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> at java.lang.reflect.Method.invoke(Method.java:597)
>>  at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> Caused by: java.lang.NullPointerException
>> at
>> org.apache.cassandra.hadoop.ColumnFamilyRecordReader.getCurrentKey(ColumnFamilyRecordReader.java:82)
>>  at
>> org.apache.cassandra.hadoop.ColumnFamilyRecordReader.getCurrentKey(ColumnFamilyRecordReader.java:53)
>> at
>> org.apache.hadoop.hive.cassandra.input.HiveCassandraStandardColumnInputFormat$2.next(HiveCassandraStandardColumnInputFormat.java:164)
>>  at
>> org.apache.hadoop.hive.cassandra.input.HiveCassandraStandardColumnInputFormat$2.next(HiveCassandraStandardColumnInputFormat.java:111)
>> at
>> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:326)
>>  ... 10 more
>>
>> Cheers
>>
>>  -
>> Aaron Morton
>> Freelance Cassandra Developer
>> @aaronmorton
>> http://www.thelastpickle.com
>>
>> On 4 Aug 2011, at 15:26, Dikang Gu wrote:
>>
>>  Yes, I do find the error log!
>>
>> ERROR [pool-2-thread-63] 2011-08-04 13:23:54,138 Cassandra.java (line
>> 3210) Internal error processing get_range_slices
>> java.lang.NullPointerException
>>
>> I'm using the cassandra-0.8.1, is this a kno