Well, the usual access goal for queries in C* is “one partition per query” -
maybe a handful of partitions in some cases.
That does not differ for aggregates since the read path is still the same.
Aggregates in C* are meant to move some computation (for example on the data in
a time-frame
From: [mailto:...@jonhaddad.com]
Sent: Monday, December 21, 2015 2:50 PM
To: user@cassandra.apache.org; dinesh.shanb...@isanasystems.com
Subject: Re: Cassandra 3.1 - Aggregation query failure
Generally speaking (both for Cassandra as well as for many other projects),
timestamps don't carry a timezone directly. A single point in time has a
consistent value for timestamp regardless of the timezone, and when you
convert a timestamp to a human-friendly value, you can attach a timezone to
it.
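To illustrate the point above (the values here are made up for the example): the stored instant is zone-free, and a timezone only enters the picture when you format it for display.

```python
from datetime import datetime, timezone, timedelta

# A Cassandra timestamp is just an instant (milliseconds since the
# epoch); it carries no timezone of its own.
instant = datetime(2015, 12, 21, 14, 50, tzinfo=timezone.utc)

# Attach whatever zone you want only when *displaying* the value.
ist = timezone(timedelta(hours=5, minutes=30))  # e.g. India Standard Time
local = instant.astimezone(ist)

print(instant.isoformat())  # 2015-12-21T14:50:00+00:00
print(local.isoformat())    # 2015-12-21T20:20:00+05:30

# Both representations denote the same point in time:
assert instant == local
```

So the answer to "insert with a specific timezone" is usually: insert the UTC instant, and convert per-user (or per-report) on the way out.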
Even if you get this to work for now, I really recommend using a different
tool, like Spark. Personally I wouldn't use UDAs outside of a single
partition.
On Mon, Dec 21, 2015 at 1:50 AM Dinesh Shanbhag <
dinesh.shanb...@isanasystems.com> wrote:
Thanks for the pointers! I edited jvm.options in
$CASSANDRA_HOME/conf/jvm.options to increase -Xms and -Xmx to 1536M.
The result is the same.
And in $CASSANDRA_HOME/logs/system.log, grep GC system.log produces this
(when jvm.options had not been changed):
INFO [Service Thread] 2015-12-1
https://datastax.github.io/java-driver/features/query_timestamps/
On Sun, Dec 20, 2015 at 9:48 PM, Harikrishnan A wrote:
> Hello,
>
Hello,
How do I set a timestamp value with a specific timezone in Cassandra? I
understand that it captures the timezone of the coordinator node while
inserting. What about if I want to insert and display the timezone that I
prefer instead of the default coordinator timezone?
Thanks & Regards,
On Fri, Dec 18, 2015 at 9:17 AM, DuyHai Doan wrote:
> Cassandra will perform a full table scan and fetch all the data in memory
> to apply the aggregate function.
Just to clarify for others on the list: when executing aggregation
functions, Cassandra *will* use paging internally, so at most one
Without restricting the
partition key in the query ("select late_flights(uniquecarrier, depdel15)
from flightsbydate;") Cassandra will perform a full table scan and fetch
all the data in memory to apply the aggregate function.
With a small Java HEAP size, there is a possibility that Cassandra runs out
of memory.
The first int in the tuple represents delayed flights
of the corresponding uniquecarrier. The second int represents total
flights of the uniquecarrier.
This aggregation query on a subset of the days of the month works:
cqlsh:flightdata> select late_flights(uniquecarrier, depdel15) from
flights
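The accumulation the late_flights UDA performs can be sketched client-side like this (column names are taken from the thread; the rows are made up for the example):

```python
# Client-side sketch of what the late_flights UDA accumulates per
# carrier: a (delayed, total) pair, where depdel15 == 1 marks a
# delayed departure. The rows below stand in for a partition's data.
rows = [
    ("AA", 1), ("AA", 0), ("DL", 0), ("AA", 1), ("DL", 1),
]

state = {}  # carrier -> (delayed, total)
for carrier, depdel15 in rows:
    delayed, total = state.get(carrier, (0, 0))
    state[carrier] = (delayed + (1 if depdel15 else 0), total + 1)

print(state)  # {'AA': (2, 3), 'DL': (1, 2)}
```

Nothing about this reduction requires the server: which is exactly why, past a single partition, Spark (which distributes the scan) is the better home for it.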
I agree with Jon. It's almost a statistical certainty that such updates
will be processed out of order some of the time because the clock sync
between machines will never be perfect.
Depending on how your actual code that shows this problem is structured,
there are ways to reduce or eliminate such
High volume updates to a single key in a distributed system that relies on
a timestamp for conflict resolution is not a particularly great idea. If
you ever do this from multiple clients you'll find unexpected results at
least some of the time.
On Tue, Dec 15, 2015 at 12:41 PM Paulo Motta
wrote:
> We are using 2.1.7.1
Then you should be able to use the java driver timestamp generators.
> So, we need to look for clock sync issues between nodes in our ring? How
close do they need to be?
millisecond precision since that is the server precision for timestamps, so
probably NTP should do the trick.
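The "timestamp generator" idea mentioned above can be sketched in a few lines (this is an illustrative sketch of the technique, not the driver's actual class): hand the driver timestamps that never tie or step backwards on a single client, even if the wall clock does.

```python
import time

class MonotonicTimestamps:
    """Sketch of a client-side timestamp generator: returns microseconds
    since the epoch, bumped by at least 1 each call, so successive writes
    from this client never tie or go backwards even if the system clock
    stalls or is stepped back by NTP."""
    def __init__(self, clock=time.time):
        self._clock = clock
        self._last = 0

    def next(self):
        now = int(self._clock() * 1_000_000)
        self._last = max(now, self._last + 1)
        return self._last

# With a frozen clock, timestamps still strictly increase:
gen = MonotonicTimestamps(clock=lambda: 1_450_000_000.0)
a, b, c = gen.next(), gen.next(), gen.next()
assert a < b < c
```

Note this only orders writes from one client; across clients you are still at the mercy of clock sync, which is the point made earlier in the thread.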
On Tue, Dec 15, 2015 at 2:57 PM Paulo Motta
wrote:
> What cassandra and driver versions are you running?
>
>
We are using 2.1.7.1
What cassandra and driver versions are you running?
It may be that the second update is getting the same timestamp as the
first, or even a lower timestamp if it's being processed by another server
with unsynced clock, so that update may be getting lost.
If you have high frequency updates in the s
We are encountering a situation in our environment (a 6-node Cassandra
ring) where we are trying to insert a row and then immediately update it,
using LOCAL_QUORUM consistency (replication factor = 3). I have replicated
the issue using the following code:
https://gist.github.com/jwcarman/72714e6d
    Session session = cluster.connect(KEYSPACE);
    try {
        session.execute("INSERT INTO " + TABLENAME + "(tnt_id, data) VALUES (5, 'cql test value')");
    } finally {
        session.close();
    }
} finally {
    cluster.close();
}
From: Carlos Alonso [mailto:i...@mrcalonso.com]
Sent: Monday, November 23, 2015 9:00 PM
To: user@cassandra.apache.org
Subject: Re: No query results while expecting results
Did you try to observe it using cassandra-cli? (the thrift client)
It shows the 'disk-layout' of the data
Michael,
Thanks for pointing that out. It is a driver issue affecting CQL export
(but not the execution API).
I created a ticket to track and resolve:
https://datastax-oss.atlassian.net/browse/PYTHON-447
Adam
On Sat, Nov 21, 2015 at 8:38 AM, Laing, Michael
wrote:
> Quickly reviewing this spec
Did you try to observe it using cassandra-cli? (the thrift client)
It shows the 'disk-layout' of the data and may help as well.
Otherwise, if you can reproduce it having a varint as the last part of the
partition key (or at any other location), this may well be a bug.
Carlos Alonso | Software Engineer
Hello Carlos,
On Mon, Nov 23, 2015 at 3:31 PM, Carlos Alonso wrote:
> Well, this makes me wonder how varints are compared in java vs python
> because the problem may be there.
>
> I'd suggest getting the token, to know which server contains the missing
> data. Go there and convert sstables to json
On 23 November 2015 at 13:55, Ramon Rockx wrote:
> Hello Prem,
>
> On Mon, Nov 23, 2015 at 2:36 PM, Prem Yadav wrote:
>
>> Can you run the trace again for the query "select * " without any
>> conditions and see if you are getting results for tnt
Hello Prem,
On Mon, Nov 23, 2015 at 2:36 PM, Prem Yadav wrote:
> Can you run the trace again for the query "select * " without any
> conditions and see if you are getting results for tnt_id=5?
> <http://www.iqnomy.com/>
Of course, here are the results, with tracing enabled:
Can you run the trace again for the query "select * " without any
conditions and see if you are getting results for tnt_id=5?
On Mon, Nov 23, 2015 at 1:23 PM, Ramon Rockx wrote:
> Hello Oded and Carlos,
>
> Many thanks for your tips. I modified the consistency level in c
... | 14:13:05,898 | 192.168.0.210 | 276
Executing single-partition query on te... | 14:13:05,898 | 192.168.0.211 | 259
Enqueuing data request to /192.168.0.211 | 14:13:05,898 | 192.168.0.210 | ...
> 62015032164 | 2063819251 | 105e7210-cfdb-11e4-85e9-000c2981ebb4 | 0 | {"v":1451221,"s":2130304,"r":104769,"u":"http://www.example.com"}
> 62015061055 | 2147429759 | 35b97470-0f68-11e5-8cc3-000c2981ebb4 | ...
It might be a consistency issue.
Assume your data for tnt 5 should be on nodes 1 and 2, but actually never got
to node 1 for various reasons, and the hint wasn’t replayed for some reason and
you didn’t run repairs. The data for tnt 5 is only on node 2.
A query without restrictions on the
Quickly reviewing this spec:
https://github.com/apache/cassandra/blob/trunk/doc/native_protocol_v4.spec
I see that column_name is a utf-8 encoded string.
So you should be able to pass unicode into the python driver and have it do
the "right thing".
If not, it's a bug IMHO.
On Sat, Nov 21, 2015
>
> All this pain we need to take because the column names have special
>> characters like " ' _- ( ) '' ¬ " etc.
>>
>
Hmm. I tried:
cqlsh:test> create table quoted_col_name ( pk int primary key, "'_-()""¬"
int);
cqlsh:test> select * from quoted_col_name;
 pk | '_-()"¬
----+---------
(0 rows)
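The cqlsh session above works because CQL quoted identifiers escape an embedded double quote by doubling it. A small helper sketching that rule (the function name is made up for illustration; when using the driver, prepared statements with quoted identifiers are the safer route):

```python
def quote_identifier(name):
    # CQL quoted identifiers: wrap in double quotes and double any
    # embedded double quote, so names like '_-()"¬ round-trip.
    return '"' + name.replace('"', '""') + '"'

col = '\'_-()"¬'
print(quote_identifier(col))  # "'_-()""¬"
```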
more new column names from
> this XML
> 3) check the system.schema_columns if these column_name(s) exist in the
> table
> 4) If the column doesn't exist in the table: "ALTER TABLE tablename ADD new
> column_name text"
> 5) Inject data into this new column: "UPDATE table name SET column_name = value
> WHERE id = blah"
We did try the map columns, but the query par
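Steps 3 and 4 of the workflow quoted above boil down to a set difference; a sketch (names and the table are hypothetical, and in real code 'existing' would come from querying system.schema_columns or the driver's table metadata):

```python
def missing_columns(wanted, existing):
    """Step 3/4 sketch: which of the XML-derived column names still need
    an ALTER TABLE ... ADD. 'existing' is stubbed here; real code would
    read it from system.schema_columns or driver metadata."""
    return [c for c in wanted if c not in existing]

existing = {"id", "payload"}
wanted = ["id", "payload", "new_attr1", "new_attr2"]
stmts = ['ALTER TABLE mytable ADD "%s" text' % c
         for c in missing_columns(wanted, existing)]
print(stmts)
```

Diffing first keeps the schema churn down to genuinely new columns, though concurrent ALTERs from several clients can still race (hence the schema-disagreement checks discussed later in this digest).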
INSERT INTO test_path3 (path_id, mdata) VALUES ('1', {key: 'applicable-security-policy', value: {'SOX'}});
INSERT INTO test_path3 (path_id, mdata) VALUES ('1', {key: 'applicable-security-policy', value: {'FOX'}});
Can I query something like:
cqlsh:mykeyspace> SELECT * FROM test_path3 WHERE mdata.value CONTAINS {'Mime'};
SyntaxException:
Thanks
regards
Neha
config value need to be updated for some parameter in
> cassandra.yaml
>
> Do you know which one?
From: Laing, Michael [michael.la...@nytimes.com]
Sent: 13 November 2015 12:26
To: user@cassandra.apache.org
Subject: Re: Getting code=2200 [Invalid query] message=Invalid column name ...
while executing ALTER statement
Dynamic schema changes
2015 11:55
To: user@cassandra.apache.org
Subject: Re: Getting code=2200 [Invalid query] message=Invalid column name ...
while executing ALTER statement
Maybe schema disagreement?
Run nodetool describecluster to discover
Carlos Alonso | Software Engineer | @calonso<https://twitter.com/calo
ALTER TABLE test.iau ADD col5 text
> ALTER TABLE test.iau ADD col6 text
> ALTER TABLE test.iau ADD col7 text
> ALTER TABLE test.iau ADD col8 text
> ALTER TABLE test.iau ADD col9 text
Traceback (most recent call last):
  File "UnitTests.py", line 313, in test_insert_data
    session.execute(sqlAlterStatement1)
  File "/usr/local/lib/python2.7/site-packages/cassandra/cluster.py", line 1405, in
Well...
I think that pretty much is showing the problem. The problem I'd say is a
bad data model. Your read query is perfect, it hits a single partition and
that's the best situation, but on the other hand, it turns out that there
are one or two huge partitions and fully reading them is
and in all nodes?
>
> I checked and on all nodes, the read latency and read local latency are
> within 15 to 40ms.
>
> I also noticed that C* was taking a fair bit of CPU on some of the nodes
> (ranging from 60% to 200%), looking at ttop output it was mostly taken by
> SharedPool-Worker threads, which I assume are the threads that are doing the
> real query work.
>
> Well, I'm puzzled, and I'll keep searching, thanks for your help!
> --
> Brice Figureau
>
Hi,
Thanks for your answer. Unfortunately since I wrote my e-mail, things
are a bit better.
This might be because I moved from openjdk 7 to oracle jdk 8 after
having seen a warning in the C* log about openjdk, and I also added a
node (for other reasons).
Now the query itself takes only 1.5s~2s
Carlos Alonso | Software Engineer | @calonso <https://twitter.com/calonso>
>
> On 17 October 2015 at 16:15, Brice Figureau <mailto:brice+cassan...@daysofwonder.com>> wrote:
> Hi,
>
> I've read all I could find on how cassandra works, I'm still wondering why
> the
Hi,
I've read all I could find on how cassandra works, I'm still wondering why the
following query takes more than 5s to return on a simple (and modest) 3 nodes
cassandra 2.1.9 cluster:
SELECT sequence_nr, used
FROM messages
WHERE persistence_id = 'session-SW' AND pa
On Thu, Oct 15, 2015 at 9:01 AM, Paulo Motta
wrote:
> (OP says:) So - isn't setting unchecked_tombstone_compaction to "true" a
>> dangerous setting? Won't it cause resurrections? What is the use case for
>> this knob, and when do I know I can set it to true safely?
>>
>
To expand slightly on Pa
Hello Deepak,
The dev@cassandra list is exclusive for development announcements and
discussions, so I will reply to users@cassandra as someone else might have
a similar question.
Basically, there is pre-check, that defines which sstables are eligible for
single-sstable tombstone compaction, and a
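The pre-check described above can be sketched as a simple threshold test (this is a rough illustration of the idea, not Cassandra's actual code path; 0.2 is the documented default for tombstone_threshold):

```python
def eligible_for_tombstone_compaction(droppable_ratio, threshold=0.2):
    """Rough sketch of the pre-check: an sstable is only considered for a
    single-sstable tombstone compaction when its estimated droppable
    tombstone ratio exceeds tombstone_threshold. The real check also
    consults overlaps with other sstables, which is exactly the part
    that unchecked_tombstone_compaction=true skips."""
    return droppable_ratio > threshold

assert not eligible_for_tombstone_compaction(0.05)
assert eligible_for_tombstone_compaction(0.5)
```

The overlap check it skips is the safety net against resurrection, which is why unchecked_tombstone_compaction deserves caution.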
Tracing session: 566477c0-6ebc-11e5-9493-9131aba66d63

 activity | timestamp | source | source_elapsed
----------+-----------+--------+----------------
 Execute CQL3 query | 2015-10-09 15:31:28.70 | 172.31.17.129 | 0
 Parsing select * from processinfometric_profile where profilecontext='GENERIC' and id='1...
ing schema
>> to hold the profile.
>>
>> CREATE TABLE myprofile (
>> id text,
>> month text,
>> day text,
>> hour text,
>> subthings text,
>> lastvalue double,
>> count int,
>> stddev double,
>>
ndra.yaml to avoid it.
If you still see performance problems after that, can you try tracing the
query with cqlsh?
On Fri, Oct 9, 2015 at 12:01 PM, Nazario Parsacala
wrote:
> So I upgraded to 2.2.2 and change the compaction strategy from
> DateTieredCompactionStrategy
> to LeveledCompa
...metrics that can be used in the context of the ‘thing’ or in the context of a
specific thing and subthing.
A profile can be defined as monthly, daily, hourly. So in case of monthly the
month will be set to the current month (i.e. ‘Oct’) and the day and hour will
be set to the empty ‘’ string.
The problem that we have observed is that over time (actually in just a matter
of hours) we will see a huge degradation of query response for the monthly
profile. At the start it will be responding in 10-100 ms and after a couple of
hours it will go to 2000-3000 ms. If you leave it for a couple of days you
will start
September 2015 12:35
To: user@cassandra.apache.org
Subject: Cassandra Query using UDF
Hello
I am wondering is it possible to execute a search using a Cassandra UDF.
Similarly to the way I can execute find queries in mongo using custom
javascript.
Thanks
Michael.
Hi All.
I'm trying to set up cassandra load testing and came up with the next YAML
config (https://gist.github.com/folex/d297cc8208a2e54a36d7) :
keyspace: stress
keyspace_definition: |
CREATE KEYSPACE stress WITH replication = {'class': 'SimpleStrategy',
'replication_factor': 3};
table: messa
Check Cassandra logs for tombstone threshold error
On Aug 3, 2015 7:32 PM, "Robert Coli" wrote:
> On Mon, Aug 3, 2015 at 2:48 PM, Sid Tantia > wrote:
On Mon, Aug 3, 2015 at 2:48 PM, Sid Tantia
wrote:
Hello,
Any select all or select count query on a particular table is timing out with
"Cassandra::Errors::TimeoutError: Timed out"
A “SELECT * FROM WHERE = ‘’ on the
table works, but a “SELECT * FROM LIMIT 1; does not work. All other
tables and queries work.
Any ideas as to why this might be happening?
To: user@cassandra.apache.org
Subject: Re: query statement return empty
After rewriting the test case in Java/C#, the results are consistent.
Is there any problem with the Python driver?
From: 鄢来琼
Sent: Friday, July 31, 2015 9:03 AM
To: 'user@cassandra.apache.org'
Subject: query statement return empty
What consistency level are you using with your query?
What replication factor are you using on your keyspace?
Have you run repair?
The most likely explanation is that you wrote with low consistency (ANY, ONE,
etc), and that one or more replicas does not have the cell. You’re then reading
with
Hi ALL
The result of the “select * from t_test where id = 1” statement is not
consistent. Could you tell me why?
Test case:
i = 0
while i < 5:
    result = cassandra_session.execute("select ratio from t_test where id = 1")
    print result
    i += 1
testing result:
[Row(ratio=Decimal('0.000'))]
[]
[Row(ratio=De
...Unavailable for query at consistency LOCAL_ONE (1 required but only 0 alive)
    at org.apache.cassandra.stress.Operation.error(Operation.java:216)
    at org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:188)
    at org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.ja
-Original Message-
> From: anton [mailto:anto...@gmx.de]
> Sent: Tuesday, July 21, 2015 7:54 PM
> To: user@cassandra.apache.org
> Subject: howto do sql query like in a relational database
>
> Hi,
>
> I have a simple (perhaps stupid) question.
>
> If I want to *sear
Hi,
I have a simple (perhaps stupid) question.
If I want to *search* data in Cassandra,
how could I find in a text field all records
which start with 'Cas'?
(in SQL I would do: select * from table where field like 'Cas%')
I know that this is not directly possible.
- But how is it possible?
- Do nobo
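One common workaround, where the text field is a clustering column (not the partition key, which is hashed), is to rewrite the LIKE 'Cas%' as a slice: everything >= the prefix and < its successor. A sketch of computing those bounds (the helper name is made up; this simple successor trick assumes the prefix doesn't end in the maximum code point):

```python
def prefix_bounds(prefix):
    """Emulate SQL LIKE 'Cas%' with a range [prefix, successor), where
    the successor bumps the prefix's last character. Usable as bounds on
    an ordered clustering column; a hashed partition key cannot be range
    scanned this way, so there you need an external index (Solr, Lucene,
    Spark, etc.)."""
    upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    return prefix, upper

lo, hi = prefix_bounds("Cas")
assert lo <= "Cassandra" < hi  # 'Cassandra' falls inside ['Cas', 'Cat')
print(lo, hi)  # Cas Cat
```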
Hi,
Is there any way to export the results of a query (e.g. select * from tbl1
where id ="aa" and loc ="bb") into a file as CSV format?
I tried to use the "COPY" command with "cqlsh", but the command does not work
when you have a "where" condition.
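Since COPY won't take a WHERE clause, a small script is the usual answer: run the filtered SELECT yourself and write the rows with the csv module. A sketch (the rows here are faked as dicts; with the Python driver you would iterate the result of session.execute(query) instead):

```python
import csv
import io

# Stand-in for rows returned by a filtered SELECT; real code would
# iterate session.execute("select ... where id = 'aa' and loc = 'bb'").
rows = [
    {"id": "aa", "loc": "bb", "val": 1},
    {"id": "aa", "loc": "bb", "val": 2},
]

out = io.StringIO()  # use open("export.csv", "w", newline="") for a file
writer = csv.DictWriter(out, fieldnames=["id", "loc", "val"])
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```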
Similarly, should we send multiple SELECT requests or a single one with a
SELECT...IN ?
On Wednesday, June 10, 2015 11:27 AM, Sotirios Delimanolis
wrote:
Will this "eventually they will all go through" behavior apply to the IN? How
is this query written to the commitl
No dispute about that. But the main design requirement Cassandra strives to
meet is to be a blazing fast transactional database - here's the key, give
me the data, and here's the key, write this data. Any additional query
requirements are a distant second at best. A big part of that tra
Hadoop for querying
by hive.
Example: “We found a few records with incorrect data. How many more records
like that are out there?”
Sean Durity
From: Peter Lin [mailto:wool...@gmail.com]
Sent: Wednesday, June 10, 2015 8:17 AM
To: user@cassandra.apache.org
Subject: Re: Support for ad-hoc query
Will this "eventually they will all go through" behavior apply to the IN? How
is this query written to the commitlog?
Do you mean prepare a query like DELETE FROM MastersOfTheUniverse WHERE
mastersID = ?; and execute it asynchronously 3000 times, or add 3000 of these
DELETE (bound
h where if one fails, all fail? If that
is the case, is there any reason to use a BATCH statement with multiple single
DELETE statement or should we always prefer a DELETE with an IN clause?
For example, given 3000 keys for rows I want to delete, should I issue a single
DELETE query and provide al
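The "execute it asynchronously 3000 times" option can be sketched like this (an illustrative sketch, not driver code: 'execute' here stands in for calling session.execute with a prepared DELETE, and a thread pool stands in for the driver's async futures):

```python
from concurrent.futures import ThreadPoolExecutor

def delete_many(execute, keys, workers=8):
    """Issue one DELETE per key concurrently, instead of a 3000-key IN
    or one giant BATCH, both of which funnel all the work through a
    single coordinator. 'execute' stands in for session.execute of a
    prepared 'DELETE ... WHERE mastersID = ?'."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Drain the iterator so exceptions from any delete propagate.
        list(pool.map(lambda k: execute(k), keys))

deleted = []
delete_many(deleted.append, range(10))
assert sorted(deleted) == list(range(10))
```

Per-key statements let each delete go straight to its replicas; a logged BATCH adds batchlog overhead and gives atomicity across the statements, not performance.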
o run. Allowing
arbitrary ad-hoc queries is a known anti-pattern for cassandra. If the
system needs to query multiple cf to derive/calculate some result, using
Cassandra alone isn't going to do it. You'll need some other system to give
you better query capabilities like Hive.
If you need dat
cluster?
3. How complex do you expect them to be - how many clauses and operators?
4. What is their net cardinality - are they selecting just a few rows or
many rows?
5. Do they have individual query clauses that select many rows even if the
net combination of all select clauses is not so many rows?
The
Thanks guys for the inputs.
By ad-hoc queries I mean that I don't know the queries during cf design
time. The data may be from single cf or multiple cf. (This feature maybe
required if I want to do analysis on the data stored in cassandra, do you
have any better ideas)?
Regards,
Seenu.
On Tue,
what do you mean by ad-hoc queries?
Do you mean simple queries against a single column family aka table?
Or do you mean MDX style queries that looks at multiple tables?
if it's MDX style queries, many people extract data from Cassandra into a
data warehouse that support multi-dimensional cubes.
From: Srinivasa T N
Reply-To:
Date: Tuesday, June 9, 2015 at 2:38 AM
To: "user@cassandra.apache.org"
Subject: Support for ad-hoc que
Hi All,
I have an web application running with my backend data stored in
cassandra. Now I want to do some analysis on the data stored which
requires some ad-hoc queries fired on cassandra. How can I do the same?
Regards,
Seenu.
Hi Jens,
thanks a lot for the link! Your ticket seems very similar to my request.
kind regards,
Christian
On Sat, May 2, 2015 at 2:25 PM, Jens Rantil wrote:
> Hi Christian,
>
> I just know Sylvain explicitly stated he wasn't a fan of exposing
> tombstones here:
> https://issues.apache.org/jir
Hi Christian,
I just know Sylvain explicitly stated he wasn't a fan of exposing
tombstones here:
https://issues.apache.org/jira/browse/CASSANDRA-8574?focusedCommentId=14292063&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14292063
Cheers,
Jens
On Wed, Apr 29, 2015
Hi
I have run into the following issue
https://issues.apache.org/jira/browse/CASSANDRA-6722 when running a query
(contains IN on the partition key and an ORDER BY ) using datastax driver
for Java.
However, I am able to run this query alright in cqlsh.
cqlsh:> show version;
[cqlsh 5.
Hi,
did anybody ever raise a feature request for selecting tombstones in
CQL/thrift?
It would be nice if I could use CQLSH to see where my tombstones are coming
from. This would be much more convenient than using sstable2json.
Maybe someone can point me to an existing jira-ticket, but I also
apprec
There's a ticket for range deletions in CQL here:
https://issues.apache.org/jira/plugins/servlet/mobile#issue/CASSANDRA-6237
On Apr 15, 2015 6:27 PM, "Dan Kinder" wrote:
>
> I understand that range deletes are currently not supported (
http://stackoverflow.com/questions/19390335/cassandra-cql-de
I understand that range deletes are currently not supported (
http://stackoverflow.com/questions/19390335/cassandra-cql-delete-using-a-less-than-operator-on-a-secondary-key
)
Since Cassandra now does have range tombstones is there a reason why it
can't be allowed? Is there a ticket for supporting
"maxBackupIndex" or
"maxFileSize" to make sure you keep enough log files around.
anishek
On Thu, Apr 2, 2015 at 11:53 AM, 鄢来琼 wrote:
Hi all,
Cassandra 2.1.2 is used in my project, but some nodes go down after executing
some query statements.
Could I configure the Cassandra to log all the executed statement?
Hope the log file can be used to identify the problem.
Thanks.
Peter
>>> ...person_idx ON PERSON(stargate) USING
>>> 'com.tuplejump.stargate.RowIndex' WITH options =
>>> {
>>>     'sg_options':'{
>>>         "fields":{
>>>             "eyeColor":{},