The main issue is that Cassandra has two of everything: two access APIs,
two metadata systems, and two groups of users.

The group of users on the original systems (Thrift, CfMetaData), who
followed the advice of three years ago, has been labeled obsolete (did you
ever see that Twilight Zone episode?).

If you suggest a Thrift-only feature, get ready to fight. People seem
oblivious to the fact that you may have a 38-node cluster with 12 TB of
data under compact storage, and that you can't just snap your fingers and
adopt whatever new way of packing data someone comes up with.

Earlier in the thread I detailed a potential way to store collection-like
things in compact storage. You would just assume that, with all the
collective brain power in the project, somehow, some way, collections
could make their way into compact storage. Or that the new language would
offer similar features regardless of the storage chosen (say, the way
MariaDB offers the same SQL over InnoDB and its other storage engines).
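
For illustration, a rough sketch of the kind of thing I mean (the table and
column names here are made up for the example, and this is only the general
shape, not the exact design discussed earlier in the thread): a map-like
structure can be emulated under COMPACT STORAGE by making the map key a
clustering column.

-- Hypothetical sketch: emulate a per-partition map<blob, blob> under
-- COMPACT STORAGE.
CREATE TABLE IF NOT EXISTS test.fake_map (
    id   blob,   -- partition key ("row key")
    mkey blob,   -- map key, becomes the internal column name
    mval blob,   -- map value
    PRIMARY KEY (id, mkey)
) WITH COMPACT STORAGE;

-- "put" is an ordinary insert:
INSERT INTO test.fake_map (id, mkey, mval) VALUES (0x01, 0x02, 0x03);

-- reading the whole "map" returns the partition in key order:
SELECT mkey, mval FROM test.fake_map WHERE id = 0x01;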

The shelf life of Codd's normal forms has been what, 30 or 40 years, and
still going strong? I'm always rather pissed that three years after I
started using Cassandra everything has changed, that I'm not the future,
and that no one is really interested in supporting anything I used the
datastore for.


On Friday, February 21, 2014, Sylvain Lebresne <sylv...@datastax.com> wrote:
> On Thu, Feb 20, 2014 at 10:49 PM, Rüdiger Klaehn <rkla...@gmail.com> wrote:
>>
>> Hi Sylvain,
>>
>> I applied the patch to the cassandra-2.0 branch (this required some manual
>> work since I could not figure out which commit it was supposed to apply to,
>> and it did not apply to the head of cassandra-2.0).
>
> Yeah, some commit yesterday made the patch not apply cleanly anymore. In any
> case, it's now committed to the cassandra-2.0 branch and will be part of 2.0.6.
>>
>> The benchmark now runs in pretty much identical time to the Thrift-based
>> benchmark: ~30s for 1000 inserts of 10000 key/value pairs each. Great work!
>
> Glad that it helped.
>
>>
>> I still have some questions regarding the mapping. Please bear with me if
>> these are stupid questions. I am quite new to Cassandra.
>>
>> The basic Cassandra data model for a keyspace is something like this, right?
>>
>> SortedMap<byte[], SortedMap<byte[], Pair<Long, byte[]>>>
>>   - outer map key:  row key (determines which server(s) the rest is stored on)
>>   - inner map key:  column key
>>   - Long:           timestamp (latest one wins)
>>   - final byte[]:   value (can be size 0)
>
> It's a reasonable way to think of how things are stored internally, yes.
> Though as DuyHai mentioned, the first map is really sorted by token, and in
> practice that means you mostly rely on the sorting of the second map.
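
To illustrate that point with the test.wide table defined further down (my
own example, not from the original reply): partitions come back in token
order of the partition key, while names within one partition come back in
clustering order.

-- Partitions are returned in token (hash) order of "time", not in time order:
SELECT token(time), time, name FROM test.wide LIMIT 10;

-- Within one partition, the clustering column "name" is what is sorted, so
-- slicing on it is cheap:
SELECT name, value FROM test.wide WHERE time = 0x01 AND name >= 0x10 AND name < 0x20;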
>
>>
>> So if I have a table like the one in my benchmark (using blobs)
>>
>> CREATE TABLE IF NOT EXISTS test.wide (
>> time blob,
>> name blob,
>> value blob,
>> PRIMARY KEY (time,name))
>> WITH COMPACT STORAGE
>>
>> From reading http://www.datastax.com/dev/blog/thrift-to-cql3 it seems that
>>
>> - time maps to the row key and name maps to the column key without any overhead
>> - value directly maps to value in the model above without any prefix
>>
>> Is that correct, or is there some overhead involved in CQL over the raw
>> model as described above? If so, where exactly?
>
> That's correct.
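
To make that mapping concrete (my own illustration, not part of the original
answer), one CQL row in test.wide ends up as a single internal cell:

-- One CQL row under COMPACT STORAGE...
INSERT INTO test.wide (time, name, value) VALUES (0x01, 0x02, 0x03);
-- ...is stored as a single cell, roughly:
--   row key (outer map key)     = 0x01  (the "time" blob, as-is)
--   column name (inner map key) = 0x02  (the "name" blob, as-is)
--   cell value                  = 0x03  (the "value" blob, no prefix)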
> For completeness' sake, if you were to remove the COMPACT STORAGE, there
> would be some overhead in how it maps to the underlying column key, but that
> overhead would buy you much more flexibility in how you could evolve this
> table schema (you could add more CQL columns later if need be, have
> collections, or have static columns following CASSANDRA-6561, which comes in
> 2.0.6; none of which you can have with COMPACT STORAGE). Note that it's
> perfectly fine to use COMPACT STORAGE if you know you don't and won't need
> the additional flexibility, but I generally advise people to first check
> that using COMPACT STORAGE actually makes a concrete and meaningful
> difference for their use case (be careful with premature optimization,
> really). The difference in performance/storage space used is not always all
> that noticeable in practice (note that I didn't say it's never noticeable!)
> and is narrowing as Cassandra evolves (it's not impossible at all that we
> will get to "never noticeable" someday, while COMPACT STORAGE tables will
> never get the flexibility of normal tables because there are backwards
> compatibility issues). It's also my experience that more often than not
> (again, not always), flexibility turns out to be more important in the long
> run than squeezing out every bit of performance you can (if it comes at the
> price of that flexibility, that is). Do what you want with that advice :)
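
As a concrete sketch of the flexibility being traded away (a hypothetical
example of mine; the added columns are made up): without COMPACT STORAGE the
same table can later grow regular columns and collections.

-- Hypothetical non-compact variant of the same table:
CREATE TABLE IF NOT EXISTS test.wide_flexible (
    time  blob,
    name  blob,
    value blob,
    PRIMARY KEY (time, name)
);

-- Schema evolution that a COMPACT STORAGE table would not allow:
ALTER TABLE test.wide_flexible ADD created_at timestamp;
ALTER TABLE test.wide_flexible ADD tags set<text>;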
> --
> Sylvain
>
>>
>> kind regards and many thanks for your help,
>>
>> Rüdiger
>>
>>
>> On Thu, Feb 20, 2014 at 8:36 AM, Sylvain Lebresne <sylv...@datastax.com> wrote:
>>>
>>>
>>>
>>> On Wed, Feb 19, 2014 at 9:38 PM, Rüdiger Klaehn <rkla...@gmail.com> wrote:
>>>>
>>>> I have cloned the cassandra repo, applied the patch, and built it. But
>>>> when I want to run the benchmark I get an exception. See below. I tried
>>>> with a non-managed dependency to
>>>> cassandra-driver-core-2.0.0-rc3-SNAPSHOT-jar-with-dependencies.jar, which
>>>> I compiled from source because I read that that might help. But that did
>>>> not make a difference.
>>>>
>>>> So currently I don't know how to give the patch a try. Any ideas?
>>>>
>>>> cheers,
>>>>
>>>> Rüdiger
>>>>
>>>> Exception in thread "main" java.lang.IllegalArgumentException: replicate_on_write is not a column defined in this metadata
>>>>     at com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:273)
>>>>     at com.datastax.driver.core.ColumnDefinitions.getFirstIdx(ColumnDefinitions.java:279)
>>>>     at com.datastax.driver.core.Row.getBool(Row.java:117)
>>>>     at com.datastax.driver.core.TableMetadata$Options.<init>(TableMetadata.java:474)
>>>>     at com.datastax.driver.core.TableMetadata.build(TableMetadata.java:107)
>>>>     at com.datastax.driver.core.Metadata.buildTableMetadata(Metadata.java:128)
>>>>     at com.datastax.driver.core.Metadata.rebuildSchema(Metadata.java:89)
>>>>     at com.datastax.driver.core.ControlConnection.refreshSchema(ControlConnection.java:259)
>>>>     at com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:214)
>>>>     at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:161)
>>>>     at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
>>>>     at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:890)
>>>>     at com.datastax.driver.core.Cluster$Manager.newSession(Cluster.java:910)
>>>>     at com.datastax.driver.core.Cluster$Manager.access$200(Cluster.java:806)
>>>>     at com.datastax.driver.core.Cluster.connect(Cluster.java:158)
>>>>     at cassandra.CassandraTestMinimized$delayedInit$body.apply(CassandraTestMinimized.scala:31)
>>>>     at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
>>>>     at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
>>>>     at scala.App$$anonfun$main$1.apply(App.scala:71)
>>>>     at scala.App$$anonfun$main$1.apply(App.scala:71)
>>>>     at scala.collection.immutable.List.foreach(List.scala:318)
>>>>     at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
>>>>     at scala.App$class.main(App.scala:71)
>>>>     at cassandra.CassandraTestMinimized$.main(CassandraTestMinimized.scala:5)
>>>>     at cassandra.CassandraTestMinimized.main(CassandraTestMinimized.scala)
>>>
>>> I believe you've tried the cassandra trunk branch? trunk is basically the
>>> future Cassandra 2.1, and the driver is currently unhappy because the
>>> replicate_on_write option has been removed in that version. I'm supposed
>>> to have fixed that on the driver 2.0 branch like 2 days ago, so maybe
>>> you're also using a slightly old version of the driver sources in there?
>>> Or maybe I've screwed up my fix, I'll double check. But anyway, it would
>>> be overall simpler to test with the cassandra-2.0 branch of Cassandra,
>>> with which you shouldn't run into that.
>>> --
>>> Sylvain
>
>

-- 
Sorry this was sent from mobile. Will do less grammar and spell check than
usual.
