I'm pleased to announce ScyllaDB Rust Driver 0.5.0, an asynchronous CQL
driver for Rust, fully compatible with Apache Cassandra™!
Cool, ever-growing open-source stats:
* over 38k downloads on crates.io;
* over 300 GitHub stars!
=== Notable changes ===
* Client-side timeouts are here! Request
I'm pleased to announce Scylla Rust Driver 0.4.0, an asynchronous CQL
driver for Rust, fully compatible with Apache Cassandra™!
We're also excited to shamelessly brag about:
* over 10k downloads on crates.io;
* over 200 GitHub stars!
=== Notable changes ===
Non-standard partitioners used
I'm pleased to announce Scylla Rust Driver 0.3.0, an asynchronous CQL
driver for Rust, fully compatible with Apache Cassandra™!
=== Notable changes ===
* Connection management is heavily revamped:
> a configurable pool of connections is kept per node or per shard
(Scylla only)
I'm pleased to announce Scylla Rust Driver 0.2.0, an asynchronous CQL
driver for Rust, fully compatible with Apache Cassandra™!
Our Rust driver now also has an official documentation page:
https://rust-driver.docs.scylladb.com/ , thanks to Laura Novich's and
David Garcia's contributions.
Thanks, Piotr & team. Fantastic contribution!
I'll request Constantia.io to get in contact with you to shortlist it in
next month's Changelog blog post. Cheers!
I am pleased to announce the first release of a brand new, asynchronous
CQL driver for Rust, fully compatible with Apache Cassandra™ - Scylla
Rust Driver 0.1.0.
Our new driver is capable of the following, and much more:
* Asynchronous API based on Tokio (tokio.rs)
* Token-aware routing
SELECT * FROM test WHERE client_id IN ? PER PARTITION LIMIT 1;
The "PER PARTITION LIMIT" option is documented here, although I do agree
it's a rather terse explanation:
https://cassandra.apache.org/doc/latest/cql/dml.html#limiting-results
What it does:
In your schema's case, for each client_id you will get a single 'when'
row. Just one, even when there are multiple rows (clustering keys) in
the partition.
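To make that concrete, here is a sketch against the schema quoted in the question (client_id is the partition key, when the clustering key; the literal IDs are made up for illustration):

```sql
-- One row per client_id: the newest 'when' in each partition, because
-- the table clusters by (when DESC) and PER PARTITION LIMIT caps the
-- number of rows returned from each partition.
SELECT client_id, when, md
FROM test
WHERE client_id IN (1, 2, 3)
PER PARTITION LIMIT 1;
```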
On Thu, May 7, 2020 at 12:14 AM Check Peck wrote:
>
> I have a scylla table as shown below:
>
>
> cqlsh:sampleks> describe table test;
I have a scylla table as shown below:
cqlsh:sampleks> describe table test;
CREATE TABLE test (
    client_id int,
    when timestamp,
    process_ids list,
    md text,
    PRIMARY KEY (client_id, when)
) WITH CLUSTERING ORDER BY (when DESC)
AND
Or, since this is a single node scenario, you could try sstable2json to
export the sstables (files on disk) into JSON, if that is a more
workable format for you.
Sean Durity – Staff Systems Engineer, Cassandra
-----Original Message-----
From: Marc Richter
Sent: Wednesday, April 22, 2020 6:22 AM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Re: Issues, understanding how CQL works
Hi Jeff,
thank you for your exhaustive and verbose answer!
Subject: [EXTERNAL] Re: Issues, understanding how CQL works
Hi Jeff,
thank you for your exhaustive and verbose answer!
Also, a very big "Thank you!" to all the other repliers; I hope you
understand that I summarize all your feedback in this single answer.
From what I understa
Marc,
DSE CQL offers an option called CAPTURE, which can save the output of a
query to a specified file. Maybe you can use that option to save all the
values you need in that file, to see all the signalids or whichever columns
you need. The file may grow big based on your dataset, so I am not sure what limit
I now need to know what is the most recent entry in it; the correct
column to learn this from would be "insertdate".
In SQL I would do something like this:
SELECT insertdate FROM tagdata.central
ORDER BY insertdate DESC LIMIT 1;
In CQL, however, I just can't get it to work.
What I have tried already is this:
SELECT insertdate FROM "tagdata.central"
ORDER BY insertdate DESC LIMIT 1;
As others have already pointed out, you need to design your data model to
support the queries you need. CQL is not SQL, and you cannot query the
data in arbitrary ways.
> The table is already round about 260 GB in size.
> I now need to know what is the most recent entry in it; the correct
> column to learn this from would be "insertdate".
>
> In SQL I would do something like this:
>
> SELECT insertdate FROM tagdata.central
> ORDER BY insertdate DESC LIMIT 1;
Marc, have you had any exposure to DynamoDB at all? The API approach is
different, but the fundamental concepts are similar. That’s actually a better
reference point to have than an RDBMS, because really it’s a small subset of
usage patterns that would overlap with CQL. If you were
The short answer is that CQL isn't SQL. It looks a bit like it, but the
structure of the data is totally different. Essentially (ignoring
secondary indexes, which have some issues in practice and I think are
generally not recommended) the only way to look the data up is by the
partition key
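To illustrate that point (a sketch only: tagdata.central and the column names come from this thread, while the bucket column is an invented workaround):

```sql
-- CQL can only ORDER BY clustering columns inside a known partition,
-- so "latest entry overall" needs a table whose partition key you can
-- name in the WHERE clause. A single dummy bucket works for moderate
-- write rates (all writes land in one partition, so it does not scale
-- indefinitely).
CREATE TABLE tagdata.central_by_insertdate (
    bucket int,            -- always 0; exists only to form the partition
    insertdate timestamp,
    signalid int,
    PRIMARY KEY (bucket, insertdate)
) WITH CLUSTERING ORDER BY (insertdate DESC);

SELECT insertdate FROM tagdata.central_by_insertdate
WHERE bucket = 0 LIMIT 1;   -- first row = most recent entry
```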
In SQL I would do something like this:
SELECT insertdate FROM tagdata.central
ORDER BY insertdate DESC LIMIT 1;
In CQL, however, I just can't get it to work.
What I have tried already is this:
SELECT insertdate FROM "tagdata.central"
ORDER BY insertdate DESC LIMIT 1;
But this gives me an
Subject: [EXTERNAL] n00b q re UPDATE v. INSERT in CQL
Hi folks,
I'm working on a clean-up task for some bad data in a cassandra db.
The bad data in this case are values with mixed case that will need to
be lowercased. In some tables the value that needs to be changed is a
primary key, in other cases it is not.
From the reading I've done, the
I had two queries run on the same row in parallel (that's a use case). While
Batch Query 2 completed successfully, Query 1 failed with an exception.
Following are the driver logs and the sequence of log events.
QUERY 1: STARTED
2019-04-30T13:14:50.858+ CQL update "EACH_QUORUM" "UPDATE dir
Hi,
JanusGraph currently supports Thrift and CQL to communicate with Cassandra,
but CQL is not yet supported for OLAP jobs executed on Spark [1].
If you're not familiar with JanusGraph: JanusGraph[2] is a scalable graph
database that uses different storage and index backends to store the data
and support
What you propose has been discussed in the past and it is something that is
currently unsupported.
Dinesh
On Tuesday, November 27, 2018, 11:05:32 PM PST, Shaurya Gupta
<shaurya.n...@gmail.com> wrote:
Hi,
We want to throttle maximum queries on any keyspace for clients connecting
via CQL native transport. This option is available for clients connecting
via thrift by property of request_scheduler in cassandra.yaml.
Is there some option available for clients connecting via the CQL native
transport?
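For reference, the Thrift-side throttle being described is configured along these lines in cassandra.yaml (values are illustrative; as the thread notes, there was no native-transport equivalent at the time):

```yaml
request_scheduler: org.apache.cassandra.scheduler.RoundRobinScheduler
request_scheduler_id: keyspace   # schedule requests round-robin per keyspace
request_scheduler_options:
    throttle_limit: 80           # active + queued requests before throttling
    default_weight: 5            # queue weight for keyspaces not listed
```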
I’ve never been a big fan of the “COPY” statement.
My preference for stuff like this (though I am definitely in the minority I
think!) — particularly for the amount of data you’re talking about — is to use
the open source tool “cassandradump” — which is similar to mysqldump but for
cassandra.
Hi Alain,
That is exactly what I did yesterday in the end. I ran the selects and
output the results to a file, I ran some greps on that file to leave myself
with just the data rows removing any white space and headers.
I then copied this data into a notepad on my local machine and saved it as
a
> Does anyone have any ideas of what I can do to generate inserts based on
> primary key numbers in an excel spreadsheet?
A quick thought:
What about using a column of the spreadsheet to actually store the SELECT
result and generate the INSERT statement (and I would probably do the
DELETE too)
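A minimal sketch of that spreadsheet trick, assuming the 4K keys sit in column A and hypothetical keyspace/table names my_ks.my_cf:

```
="DELETE FROM my_ks.my_cf WHERE key = " & A1 & ";"
```

Fill the formula down all 4K rows, copy the generated column into a .cql file, and feed it to cqlsh with `cqlsh -f deletes.cql`.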
Hi All,
I have a problem that I'm trying to work out and can't find anything online
that may help me.
I have been asked to delete 4K records from a Column Family that has a
total of 1.8 million rows. I have been given an excel spreadsheet with a
list of the 4K PRIMARY KEY numbers to be deleted.
So 3.0.16 has the fix, 3.9 doesn't have it, but 3.11.2 has it.
Best regards,
Yoshi
On Fri, Aug 10, 2018 at 17:10, thiranjith wrote:
Hi,
According to the documentation at
https://docs.datastax.com/en/cql/3.3/cql/cql_reference/cql_data_types_c.html#cql_data_types_c__cql_data_type_compatibility
we should not be able to change the column type from ascii to text. I have
had a mixed experience with conversions between data types
test cluster? :)
Dinesh
On Friday, June 22, 2018, 1:26:56 AM PDT, Fernando Neves
<fernando1ne...@gmail.com> wrote:
Hi guys,
We are running one of our Cassandra clusters under the 2.0.17 Thrift
version, and we started the 2.0.17 CQL migration plan through the
CQLSSTableWriter/sstableloader method.
Simple question, maybe someone has worked in a similar scenario: is there
any problem with doing the migration under the same Cassandra instances
(nodes) but in a different keyspace (ks_thrift to ks_cql), or should we
create another
Hi All,
I wanted to let the Cassandra community know that usql, a universal
SQL command-line client modeled after psql, now supports Cassandra +
CQL.
usql is written in Go, is MIT licensed, and has been around for a
little over a year. If you're familiar with (or simply a fan of)
PostgreSQL's
I have two queries. One that gives me the first page from a Cassandra table,
and another one that retrieves the successive pages. The first one is like:
select * from images_by_user where token(iduser) = token(5) limit 10 allow
filtering;
The successive ones are :
select * from
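The successive query is cut off above; one common manual-paging shape looks roughly like this (a sketch only: idimage is an assumed clustering column, and 550 stands in for the last value seen on the previous page):

```sql
select * from images_by_user
where token(iduser) = token(5)
  and idimage > 550          -- resume after the last row of page 1
limit 10 allow filtering;
```

In practice, drivers expose this via the protocol's paging state instead, which avoids hand-managing the resume point.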
CreateTableStatement.RawStatement cts =
    (CreateTableStatement.RawStatement) query;

CFMetaData
    .compile(stmt, cts.keyspace())
    .getColumnMetadata()
    .values()
    .stream()
    .forEach(cd -> System.out.println(cd));
it must be made in a development environment.
Lucas B. Dias
2018-02-22 8:27 GMT-03:00 Jonathan Baynes <jonathan.bay...@tradeweb.com>:
Hi Community,
Can anyone help me understand which class I need to set logging on if I
want to capture the CQL commands being run through the driver, similar to
how the profiler (MSSQL) would work? I need to see what's being run, and
whether the query is actually getting to Cassandra.
Has anyone
Hi Anant,
I just have a CQL CREATE TABLE statement as a string, and I want to extract
all the parts: table name, keyspace name, regular columns, partition key,
clustering key, clustering order, and so on. That's really it!
Thanks!
On Mon, Feb 5, 2018 at 1:50 PM, Rahul Singh <rahul.xavier
Hi All,
I have a need where I get a raw CQL CREATE TABLE statement as a String, and
I need to parse the keyspace, table name, columns and so on, so I can use it
for various queries and send it to C*. I used the example below from this
link <https://github.com/tacoo/cassandra-antlr-sample>.
Hi,
Sometimes it is nice to navigate your schema graphically. If you ever
need to recreate a diagram from a CQL schema, you can use this small tool:
https://github.com/lbruand/cql2plantuml
NB: It does not deal automatically with the relations ... You need to
define them in the PlantUML file
Thanks!
So assuming C* 3.0 and that my table stores only one collection, using
clustering keys will be more performant?
Extending this to sets - would doing something like this make sense?
(
    id UUID,
    val text,
    PRIMARY KEY (id, val)
);
SELECT count(*) FROM TABLE WHERE id = 123
In 3.0, clustering columns are not actually part of the column name anymore.
Yay. Aaron Morton wrote a detailed analysis of the 3.x storage engine here:
http://thelastpickle.com/blog/2016/03/04/introductiont-to-the-apache-cassandra-3-storage-engine.html
Yes, your remark is correct.
However, once CASSANDRA-7396 (right now in 4.0 trunk) gets released, you
will be able to get a slice of map values using their (sorted) keys:
SELECT map[fromKey ... toKey] FROM TABLE ...
Needless to say, it will be also possible to get a single element from the
map by
Hi,
What would be the tradeoffs between using
1) Map
(
    id UUID PRIMARY KEY,
    myMap map<int, text>
);
2) Clustering key
(
    id UUID,
    key int,
    val text,
    PRIMARY KEY (id, key)
);
My understanding is that maps are stored very similarly to clustering
columns, where the map key
On Mon, Sep 25, 2017 at 11:16:36AM +0200, Michiel Buddingh wrote:
LS,
We are currently in the process of migrating data from an old Cassandra
cluster to a new one. When querying data from a table that was copied
using sstableloader, we find that even at consistency level ALL, results
contain 200% duplicate entries, or worse, keep paginating and repeating
Hi,
In CQL binary protocol v4, is it guaranteed by the protocol that a fetch of
the next page will see writes which were ACKed before that fetch, but after
the whole query was started?
Regards,
Tomek
It doesn't work because of the white space. By default the NULL value is an
empty string, and extra white spaces are not trimmed automatically.
This should work:
ce98d62a-3666-4d3a-ae2f-df315ad448aa,Jonsson,Malcom,,2001-01-19
17:55:17+
You can change the string representing missing values with the NULL option
of COPY.
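As a sketch of that last point (the keyspace and table names are assumed; the column list matches the schema quoted in the question):

```sql
-- cqlsh's COPY accepts a NULL option naming the string that means
-- "missing", so a literal marker can be used instead of empty fields:
COPY ks.users (id, lastname, firstname, address_id, dateofbirth)
FROM 'users.csv' WITH NULL = 'null';
```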
Hi,
I am trying to copy a file of CSV data into a table, but I get an error
since sometimes one of the columns (which is a UUID) is empty. Is this a
bug, or what am I missing?
Here is how it looks like:
Table:
id uuid,
lastname text,
firstname text,
address_id uuid,
dateofbirth
[mailto:j...@jonhaddad.com]
Sent: Friday, May 12, 2017 04:21
To: @Nandan@ <nandanpriyadarshi...@gmail.com>; user@cassandra.apache.org
Subject: Re: Reg:- CQL SOLR Query Not gives result
This is a question for datastax support, not the Apache mailing list. Folks
here are more than happy to help with open source, Apache Cassandra
questions, if you've got one.
On Thu, May 11, 2017 at 9:06 PM, @Nandan@ wrote:
Hi,
In my table I have a few records, and I implemented Solr for partial
search, but I am not able to retrieve the data.
SELECT * from revall_book_by_title where solr_query = 'language:中';
SELECT * from revall_book_by_title where solr_query = 'language:中*';
Neither of them is working.
Any suggestions?
We're using Cassandra 2.2.
This document lists a number of CQL limits. I'm particularly interested in the
Collection limits for Set and List. If I've interpreted it correctly, the
document states that values in Sets are limited to 65535 bytes.
This limit, as far as I know, exists because
use `bigint` for long.
Regards,
Varun Barala
On Thu, Dec 8, 2016 at 10:32 AM, Check Peck <comptechge...@gmail.com> wrote:
What is the CQL data type I should use for long? I have to create a column
with long data type. Cassandra version is 2.0.10.
CREATE TABLE storage (
    key text,
    clientid int,
    deviceid long, // this is wrong I guess as I don't see long in CQL?
    PRIMARY KEY (topic, partition
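Applying the advice from the reply, a corrected sketch (the PRIMARY KEY clause is cut off in the original, so the one here is an assumption):

```sql
CREATE TABLE storage (
    key text,
    clientid int,
    deviceid bigint,   -- bigint is CQL's 64-bit signed integer; there is no "long"
    PRIMARY KEY (key, clientid)
);
```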
I have a column family using compact storage, and it has 3 columns which
are not part of the composite primary key (I know this is not allowed by
the CQL protocol.)
The version of my current Cassandra cluster is 2.0.17. The schema when I do
"show schema" using the thrift protocol comes to be:
create column family store
with column_type = 'Standard'
and comparator =
'Com
Hi,
I secured my C* cluster by having "authenticator:
org.apache.cassandra.auth.PasswordAuthenticator" in cassandra.yaml. I know
it secures the CQL native interface running on port 9042 because my code
uses that interface. Does this also secure the Thrift API interface running
on port 9160?
Thanks Stefania, we haven't tried that before, and I think the versions
don't match; we are still using:
[cqlsh 4.1.1 | Cassandra 2.1.11 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
On Thu, Oct 20, 2016 at 10:33 AM, Stefania Alborghetti <
stefania.alborghe...@datastax.com> wrote:
> Have you already tried using unset values?
 key            | column1     | column2 | column3 | column4 | value
----------------+-------------+---------+---------+---------+-------
 test by thrift | accessState |         |         |         |   0x5

But when we use CQL, we couldn't set this e
Hi all,
We use Cassandra 2.1.11 in our product, and we recently updated the Java
driver from Astyanax (Thrift API) to the DataStax Java Driver (CQL), but we
have encountered a difficult issue, as follows; please help us, thanks in
advance.
Previously we were using the Astyanax API, and we can insert empty
At this moment no, as this is a maven plugin. Extracting such code would be
relatively trivial.
-- Brice
On Fri, Oct 7, 2016 at 1:24 PM, Ali Akhtar <ali.rac...@gmail.com> wrote:
Is there a way to call this programatically such as from unit tests, to
create keyspace / table schema from a cql file?
On Fri, Oct 7, 2016 at 2:40 PM, Brice Dutheil <brice.duth...@gmail.com> wrote:
Hi there,
I’d like to share a very simple project around handling CQL files with
Maven. We were using the cassandra-maven-plugin before, but with
limitations on authentication and the use of the Thrift protocol. I was
tempted to write a replacement focused only on the execution of CQL
statements
Was it fixed then, but the issue resurfaced in a regression?
Could you please confirm one way or the other?
Thanks and Regards,
Samba
On Tue, Sep 6, 2016 at 6:34 PM, Samba <saas...@gmail.com> wrote:
> Hi,
>
> "CASSANDRA-5376: CQL IN clause on last key not working when schema
> includes set,lis
Hi,
"CASSANDRA-5376: CQL IN clause on last key not working when schema includes
set,list or map"
is marked resolved in 1.2.4 but i still see the issue (not an Assertion
Error, but an query validation message)
was the issue resolved only to report proper error message or was it fixed
Hello,
I have a question. I have changed the Cassandra driver (3.0.0) so we can
connect to and query two nodes (Cassandra 3.2.1), the local one and a
remote node, using the driver's native protocol (CQL, port 9042).
But it seems that when I enable Thrift in cassandra.yaml (start_rpc=true,
port 9160), I cannot subsequently connect remotely to the remote node using
CQL, from inside the driver.
Is there a way of having Thrift enabled on a node bu
Good stuff, thanks for sharing.
On Sun, Jun 5, 2016 at 12:45 PM, Benoît Canet <ben...@cloudius-systems.com>
wrote:
Hi List,
I am from ScyllaDB and took some time to iterate on the Wireshark CQL
dissector that was previously written by Aaron Ten Clay.
The result is that Wireshark upstream now has a fully working CQL v3
dissector, merged in the following commit:
https://github.com/wireshark/wireshark/commit
If I were you, I'd do both. If you're trying to build a multi-tenanted
system, it's probably a better idea to include tenant ID as the partition
key of every cross-tenant table. You can easily run Cassandra with a 4 gig
heap, but I'd never plan on doing so for a production use except for very
so I guess I have to 1) increase the heap size or 2) reduce the number of
keyspaces/column families.
Thanks for your confirmation.
On Tue, May 24, 2016 at 10:08 AM, Eric Stevens wrote:
> Large numbers of tables is generally recommended against. Each table has
> a fixed
Large numbers of tables is generally recommended against. Each table has a
fixed on-heap memory overhead, and by your description it sounds like you
might have as many as 12,000 total tables when you start running into
trouble.
With such a small heap to begin with, you've probably used up most
We are exploring cassandra's limit by creating a lot of keyspaces with
moderate number of column families (roughly 40 - 50) per keyspace and we
have a problem after we reach certain amount of keyspaces, that cqlsh
starts to time out when connecting to cassandra.
This is our cassandra setup. We
hi,
I have posted this as the latest "answer" (though it isn't really) with all
details on stackoverflow
http://stackoverflow.com/questions/27966440/normal-query-on-cassandra-using-datastax-enterprise-works-but-not-solr-query
. Hope someone from DS can help.
Thanks,
Joseph
On Tue, Apr 12, 2016
Hi,
I am new to Cassandra, Solr, and CQL. Can you share a Java program to
execute a Solr query in CQL?
Regards,
Rajesh