Thank you Eric,

I think we have a limited number of dynamically named columns, but I'm not inclined to add them to the schema.

I have just managed to do what I want with the schema below, but it cost me my secondary index on eventId: because the indexed column is now part of the clustering key, indexing it is not yet supported (Bad Request: Secondary index on CLUSTERING_KEY column name is not yet supported for compact table).

Of course, I'm not sure how this will affect our archaic application, so I have to test thoroughly before applying this change.




CREATE TABLE "EventKeys" (
  key ascii,
  name ascii,
  value ascii,
  PRIMARY KEY (key, "name")
);

SELECT * FROM "EventKeys" WHERE key='b5f0d4be-c0fc-4dc4-8e38-0a00e4552866';

 key                                  | name     | value
--------------------------------------+----------+--------------------------------------
 b5f0d4be-c0fc-4dc4-8e38-0a00e4552866 | actionId | 080abda2-3623-4a98-a84a-d33b6aecbe99
 b5f0d4be-c0fc-4dc4-8e38-0a00e4552866 |     code | var x = .....\n
 b5f0d4be-c0fc-4dc4-8e38-0a00e4552866 |  eventId | ce3b0c03-dcce-4522-a35a-864909cb024f
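One possible way to get back the lost eventId lookup (a sketch only, not something from the thread — the table name "EventKeysByEventId" is hypothetical, and the application would have to write to both tables on every insert) is a manual reverse-lookup table:

```sql
-- Hypothetical manual index table, maintained by the application
-- alongside "EventKeys".
CREATE TABLE "EventKeysByEventId" (
  "eventId" ascii,
  key ascii,
  PRIMARY KEY ("eventId", key)
);

-- This query replaces the lost secondary-index lookup:
SELECT key FROM "EventKeysByEventId"
 WHERE "eventId" = 'ce3b0c03-dcce-4522-a35a-864909cb024f';
```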



On 10/15/2015 06:09 PM, Eric Stevens wrote:
If the columns are not dynamically named (as in "actionId" and "code") you should be able to add that to your CQL table definition with ALTER TABLE, and those columns should be available in the query results.
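Eric's suggestion for the statically named columns might look like the following (a sketch using column names from the thread; note that COMPACT STORAGE tables restrict some schema changes, so depending on the Cassandra version these statements may be rejected and should be tested first):

```sql
-- Add the statically named columns to the existing CQL definition
-- so they appear in cqlsh query results.
ALTER TABLE "EventKeys" ADD "actionId" ascii;
ALTER TABLE "EventKeys" ADD code ascii;
```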

If the columns /are/ dynamically named, and you can't reasonably add every option to the CQL definition, your job gets a lot harder. If you're using composite names, there might be some hope if you happen to conform to the same standard as CQL collections (not impossible, but probably not super likely). You can create a test table with one of each collection type, insert a record, then look at the Thrift to see how those map.
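A minimal probe table for that comparison might look like this (the name collection_probe is made up for illustration):

```sql
-- One column of each collection type; insert a row, then inspect
-- the same row from cassandra-cli to see how the composite Thrift
-- column names are laid out on disk.
CREATE TABLE collection_probe (
  id int PRIMARY KEY,
  m map<text, text>,
  s set<text>,
  l list<text>
);

INSERT INTO collection_probe (id, m, s, l)
VALUES (1, {'k': 'v'}, {'a', 'b'}, ['x', 'y']);
```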

If your dynamically named columns are string concatenation or some other custom serialization format, then your only hope is basically a data migration from your thrift format to your CQL format. You should be able to accomplish all the same business functionality using CQL, but you might not be able to create a CQL schema that maps exactly to the data at rest for your historic schema.

On Thu, Oct 15, 2015 at 8:54 AM Onur Yalazı <onur.yal...@8digits.com> wrote:

    Hello,

    I have a Cassandra cluster from the pre-CQL era and I am having
    problems accessing data via cqlsh.
    As you can see below, I cannot reach the dynamic columns via cqlsh,
    but they are accessible via cassandra-cli.

    How can I make the data shown on cqlsh?


    cqlsh:automation> select * from "EventKeys" where
    key='b5f0d4be-c0fc-4dc4-8e38-0a00e4552866' ;

     key                                  | eventId
    --------------------------------------+--------------------------------------
     b5f0d4be-c0fc-4dc4-8e38-0a00e4552866 | ce3b0c03-dcce-4522-a35a-864909cb024f

    (1 rows)


    [default@keyspace] get
    EventKeys['b5f0d4be-c0fc-4dc4-8e38-0a00e4552866'];
    => (name=actionId, value=3038...............643333623661656362653939,
    timestamp=1431608711629002)
    => (name=code, value=b0a..........0a0a, timestamp=1431608711629003)
    => (name=eventId, value=ce3b0c03-dcce-4522-a35a-864909cb024f,
    timestamp=1431608711629000)
    Returned 3 results.


    ColumnFamily Description:
         ColumnFamily: EventKeys
           Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
           Default column value validator:
    org.apache.cassandra.db.marshal.BytesType
           Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
           GC grace seconds: 864000
           Compaction min/max thresholds: 4/32
           Read repair chance: 0.1
           DC Local Read repair chance: 0.0
           Populate IO Cache on flush: false
           Replicate on write: true
           Caching: KEYS_ONLY
           Default time to live: 0
           Bloom Filter FP chance: 0.01
           Index interval: 128
           Speculative Retry: 99.0PERCENTILE
           Built indexes: [EventKeys.eventkeys_eventid_idx]
           Column Metadata:
             Column Name: scenarioId
               Validation Class: org.apache.cassandra.db.marshal.AsciiType
               Index Name: eventkeys_eventid_idx
               Index Type: KEYS
               Index Options: {}
           Compaction Strategy:
    org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
           Compaction Strategy Options:
             min_threshold: 4
             max_threshold: 32
           Compression Options:
             sstable_compression:
    org.apache.cassandra.io.compress.SnappyCompressor


    CQL Desc of the table:

    CREATE TABLE "EventKeys" (
       key ascii,
       "eventId" ascii,
       PRIMARY KEY (key)
    ) WITH COMPACT STORAGE AND
       bloom_filter_fp_chance=0.010000 AND
       caching='KEYS_ONLY' AND
       comment='' AND
       dclocal_read_repair_chance=0.000000 AND
       gc_grace_seconds=864000 AND
       index_interval=128 AND
       read_repair_chance=0.100000 AND
       replicate_on_write='true' AND
       populate_io_cache_on_flush='false' AND
       default_time_to_live=0 AND
       speculative_retry='99.0PERCENTILE' AND
       memtable_flush_period_in_ms=0 AND
       compaction={'min_threshold': '4', 'class':
    'SizeTieredCompactionStrategy', 'max_threshold': '32'} AND
       compression={'sstable_compression': 'SnappyCompressor'};
    CREATE INDEX eventkeys_eventid_idx ON "EventKeys" ("eventId");

