It limits the number of returned rows *per partition*. So,
for example, with your schema, if you have the following data:

cqlsh:ks1> SELECT client_id, when FROM test;

 client_id | when
-----------+---------------------------------
         1 | 2020-01-01 22:00:00.000000+0000
         1 | 2019-12-31 22:00:00.000000+0000

In your schema's case, for each client_id you will get a single 'when'
row. Just one, even when there are multiple rows (clustering keys)
in the partition.
On Thu, May 7, 2020 at 12:14 AM Check Peck wrote:
I have a scylla table as shown below:

cqlsh:sampleks> describe table test;

CREATE TABLE test (
    client_id int,
    when timestamp,
    process_ids list,
    md text,
    PRIMARY KEY (client_id, when)
) WITH CLUSTERING ORDER BY (when DESC)
  AND
...the coordinator needs to reach the hosts that hold
these partition keys, and this slows down the operation and adds an
additional load to the coordinating node. If you execute queries in
parallel (using async) for every combination of pk1 & pk2, and then
consolidate data application side - this could be faster than a query with IN.
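The fan-out-then-consolidate pattern described here can be sketched with asyncio (fetch_partition is a hypothetical stand-in for a driver's async execute of one prepared SELECT per partition; it is not a real driver call):

```python
import asyncio

# Hypothetical stand-in for one async SELECT per (pk1, pk2) pair;
# a real driver would route each request to a replica owning that partition.
async def fetch_partition(pk1, pk2):
    await asyncio.sleep(0)           # placeholder for the network round-trip
    return [(pk1, pk2, "row-data")]  # placeholder result rows

async def fetch_all(pairs):
    # Issue one request per partition key pair, all concurrently.
    results = await asyncio.gather(*(fetch_partition(a, b) for a, b in pairs))
    # Consolidate the per-partition result sets application side.
    return [row for partition_rows in results for row in partition_rows]

pairs = [(1, "a"), (1, "b"), (2, "a")]
rows = asyncio.run(fetch_all(pairs))
```

Each request is small and independently routable, which is exactly what a single large IN query denies the coordinator.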
Answer:
You need to pass a list as the value of temp - IN expects a list there...
during design time.
Sean Durity
From: Attila Wind
Sent: Friday, February 21, 2020 2:52 AM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Re: IN OPERATOR VS BATCH QUERY
Hi Sergio,
AFAIK you use batches when you want to get "all or nothing" approach from
Cassandra. So turning multiple statements into one atomic operation.
On Fri, Feb 21, 2020 at 2:12 PM Deepak Sharma
wrote:
Hi There,
We have a use case where we need to have two separate PreparedStatement
objects (one with RetryPolicy and the other without any retry policy) for
the same query string. And when we try to create two separate
PreparedStatements, we see only one PreparedStatement getting retained.
Hi Sergio,
AFAIK you use batches when you want to get "all or nothing" approach from
Cassandra. So turning multiple statements into one atomic operation.
One very typical use case for this is when you have denormalized data in
multiple tables (optimized for different queries) but you need to keep
them consistent by writing to all of them atomically.
The current approach is delete from key_value where id = whatever, and it is
performed asynchronously from the client.
I was thinking to reduce at least the network round-trips between client
and coordinator with that batch approach. :)
In any case, I would test whether it will improve things or not.
Batches aren't really meant for optimisation in the same way as RDBMS. If
anything, it will just put pressure on the coordinator having to fire off
multiple requests to lots of replicas. The IN operator falls into the same
category and I personally wouldn't use it with more than 2 or 3 partitions.
Hi guys!
Let's say we have a KEY-VALUE schema.
The goal is to delete the KEYS in batches, without burning the cluster, as
efficiently as possible.
I would like to know if it is better to run the query as DELETE FROM
KEY_VALUE_COLUMN_FAMILY WHERE KEY IN ('A','B','C'); with at most 10 keys
per IN clause.
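Capping each IN clause at a handful of keys comes down to chunking the key list client side. A minimal sketch (the table and column names follow the example above; the generated strings are illustrative only, a real client would use prepared statements with bind parameters):

```python
def chunked(keys, size=10):
    """Split a key list into chunks of at most `size` for bounded IN clauses."""
    return [keys[i:i + size] for i in range(0, len(keys), size)]

keys = [f"k{i}" for i in range(23)]
statements = [
    "DELETE FROM key_value WHERE key IN (%s);"
    % ", ".join("'%s'" % k for k in chunk)
    for chunk in chunked(keys)
]
```

Note that each DELETE still fans out from the coordinator to every replica of every listed key, which is why small chunks (or one async delete per key) are gentler on the cluster.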
To: "user@cassandra.apache.org"
Subject: Re: Query timeouts after Cassandra Migration
>
> So do you advise copying tokens in such cases ? What procedure is
> advisable ?
>
Specifically for your case with 3 nodes + RF=3, it won't make a difference
so leave it as it is.
> Latency increased on target cluster.
>
Have you tried to run a trace of the queries which are slow? It will
show where the time is being spent.
Thanks Eric.
So do you advise copying tokens in such cases? What procedure is advisable?
Latency increased on target cluster. I’d double check on storage disks but
it should be the same.
— Ankit
On Thu, Feb 6, 2020 at 9:07 PM Erick Ramirez wrote:
>
> I didn’t copy tokens since it’s an identical cluster and we have RF as 3
> on 3 node cluster. Is it still needed , why?
>
In C*, same number of nodes alone isn't enough. Clusters aren't really
identical unless token assignments are the same. In your case though, since
each node has a full copy of the data, it won't make a difference.
Hi Michael,
Thanks for your response.
I didn’t copy tokens since it’s an identical cluster and we have RF as 3 on
a 3 node cluster. Is it still needed, and why?
I don’t see anything in the cassandra log as such. I don’t have debug logs enabled.
Thanks & Regards,
Ankit
On Thu, Feb 6, 2020 at 1:47 PM Michael wrote:
Did you copy the tokens from cluster1 to the new cluster2? Same Cassandra
version, same instance type/size? What do the logs say on cluster2 that
looks different from the cluster1 norm? There are a number of
`nodetool` utilities that may help you see what is happening on the new cluster2.
Michael
Hi Folks,
I recently migrated Cassandra keyspace data from one Azure cluster (3
Nodes) to another (3 nodes different region) using simple sstable copy.
Post this, we are observing that overall response time has increased, with
timeouts every 20 mins.
Has anyone faced something like this in their experience?
...entries using TTL. Maps seem to be an excellent fit for that.
Furthermore, you cannot query the TTL for a single item in a collection,
and as distinct columns can have distinct TTLs, you cannot query the TTL
for the whole map (collection). As you cannot get the TTL for the whole
thing, nor query a single item of the collection, I guess there is no way
to do it.
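The reasoning above can be illustrated with a toy model of how collection cells are stored (this is a simplification for illustration, not Cassandra's actual storage code):

```python
# Toy model of a map column: each entry is its own cell with its own
# expiry timestamp, so different entries can carry different TTLs.
now = 1_000_000
cells = {
    "item-1": now + 3600,   # entry written with TTL 3600
    "item-2": now + 86400,  # later update written with TTL 86400
}

def whole_map_ttl(cells, now):
    """A single TTL for the map exists only if every cell happens to share one."""
    ttls = {expiry - now for expiry in cells.values()}
    return ttls.pop() if len(ttls) == 1 else None  # None: undefined

remaining = whole_map_ttl(cells, now)
```

Because the two cells disagree, there is no single answer CQL could return for "the TTL of the map", which is why the query is not supported.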
Hi everyone,
I'm struggling to understand how I can query the TTL on a row in a collection
(Cassandra 3.11.4).
Here is my schema:
CREATE TYPE item (
csn bigint,
name text
);
CREATE TABLE products (
    product_id bigint PRIMARY KEY,
    items map<bigint, frozen<item>>
);

And I'm creating records with TTL.
Raised a ticket https://issues.apache.org/jira/browse/CASSANDRA-15159 for
the same.
On Thu, Jun 13, 2019 at 3:55 PM Laxmikant Upadhyay
wrote:
This issue is reproducible on *3.11.4 and 2.1.21* as well. (not yet
checked on 3.0)
Range query could be: select * from test.table1; (In this case, read
repair actually sends the old mutation to the node which has the tombstone.)
I also ran a normal read and it also returns a row in this case.
This issue is reproducible on 2.1.17 as well.
I am attaching the steps to reproduce on 2.1.17 (with a minor change from
the previous steps to make sure one request must go to the node which has the old
mutation). I have also attached the trace of range read query.
Should I raise a jira for the same ?
On Wed, Jun 12, 2019 at 9:00 AM M wrote:
Does range query ignore purgable tombstone (which crossed grace period) in
some cases?
On Tue, Jun 11, 2019, 2:56 PM Laxmikant Upadhyay
wrote:
In a 3 node cassandra 2.1.16 cluster where one node has an old mutation and
two nodes have an evict-able (crossed gc grace period) tombstone produced by
TTL, a range read query with LOCAL_QUORUM returns the old mutation as the
result. However, the expected result should be empty.
On Tue, 30 Apr 2019 at 17:06, Marco Gasparini <
marco.gaspar...@competitoor.com> wrote:
> My guess is the initial query was causing a read repair so, on subsequent
queries, there were replicas of the data on every node and it still got
returned at consistency one
got it
> There are a number of ways the data could have become inconsistent in the
first place - eg badly overloaded or down nodes
My guess is the initial query was causing a read repair so, on subsequent
queries, there were replicas of the data on every node and it still got
returned at consistency one.
There are a number of ways the data could have become inconsistent in the
first place - e.g. badly overloaded or down nodes.
...I get the right results.
>After results are returned correctly are they then returned correctly for
all future runs?
Yes, it seems that after they are returned, I can access them on each run
of the same query, on every node I run it from.
> When was the data inserted (relative to your attempt to query it)?
Cheers
Ben
--
Ben Slater, Chief Product Officer, Instaclustr
Hi all,
I'm using Cassandra 3.11.3.5.
I have just noticed that when I perform a query I get 0 results, but if I
launch that same query after a few seconds I get the right result.
I have traced the query:
cqlsh> select event_datetime, id_url, uuid, num_pages from
mkp_history.mkp_lookup where id_
But now, when we do a count query on the system_distributed keyspace
parent_repair_history or repair_history tables, we get varying results
every time we do this query, even querying immediately after each other.
Sometimes the count is a bigger number, sometimes a smaller number.
The query:
select count
Hello all,
As per my knowledge, Spring Data Cassandra (recent versions) uses the
Cassandra client-side query timestamp by default.
I am just curious to know which one is preferable and recommended:
client-side or server-side query timestamps.
Also, is there any logical reason behind the choice?
On Tue, 9 Apr 2019 at 16:51, Mahesh Daksha wrote:
Hello,
I have configured the timestamp generator at cassandra client as below:
cluster.setTimestampGenerator(new AtomicMonotonicTimestampGenerator());
My Cassandra client is inserting and updating a few rows in a table.
My query is: where in the Cassandra debug logs can I see the query write
timestamp being applied?
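The guarantee an atomic monotonic timestamp generator gives can be sketched in Python (this is a simplified model of the behaviour the Java driver's AtomicMonotonicTimestampGenerator documents, not its actual implementation):

```python
import threading
import time

class MonotonicTimestampGenerator:
    """Client-side write timestamps in microseconds, strictly increasing
    even if the wall clock stalls or steps backwards."""

    def __init__(self):
        self._lock = threading.Lock()
        self._last = 0

    def next(self):
        with self._lock:
            now = time.time_ns() // 1000        # wall clock in microseconds
            self._last = max(self._last + 1, now)  # never repeat, never go back
            return self._last

gen = MonotonicTimestampGenerator()
stamps = [gen.next() for _ in range(1000)]
```

Each generated value becomes the mutation's write timestamp, so later writes from the same client always win last-write-wins resolution against its earlier ones.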
From the client host:

`cqlsh --cqlversion "3.4.0" -u cassandra_superuser -p my_password cassandra_address 9042`

The CQL commands will fail half of the time:

```
cassandra_vault_superuser@cqlsh> CREATE ROLE leo333 WITH PASSWORD = 'leo4' AND LOGIN=TRUE;
InvalidRequest: Error from server: code=2200 [Invalid query] message="org.apache.cassandra.auth
```
... (21.613MiB) for commitlog position ReplayPosition(segmentId=1525681566129, position=29370537)
ERROR [SharedPool-Worker-1] 2018-12-27 05:07:10,349 QueryMessage.java:135 - Unexpected error during query
com.google.common.util.concurrent.UncheckedExecutionException: java.lang.RuntimeException: org.apache.cassandra.exceptions.ReadTimeoutException
hi everyone,
I have activated DEBUG mode via nodetool setlogginglevel; now system.log
shows me slow queries (slower than 500 ms), but the log is showing me the
wrong query. My query is composed like this:
SELECT pkey, f1, f2, f3 FROM mykeyspace.mytable WHERE pkey='xxx' LIMIT 3000
Hi,
Day before yesterday, I had issued a full sequential repair on one of my
nodes in a 5 node cassandra cluster for a single table using the below
command.
nodetool repair -full -seq -tr <keyspace> <table>
Now the node on which the command was issued was repaired properly, as can
be inferred from the below.
Thanks a lot for the info :)
On Tue, Nov 6, 2018 at 11:11 AM DuyHai Doan wrote:
> Cassandra will execute such request using a Partition Range Scan.
>
> See more details here http://www.doanduyhai.com/blog/?p=13191, chapter E
> Cluster Read Path (look at the formula of Concurrency Factor)
Hi All,
If I run for example:
select * from myTable limit 3;
Does Cassandra do a full table scan regardless of the limit?
Thanks!
Hi, we built a simple system to migrate live cassandra data to other
databases, mainly by using these queries:
1. SELECT DISTINCT TOKEN(partition_key) FROM table WHERE
TOKEN(partition_key) > current_offset AND TOKEN(partition_key) <=
upper_bound LIMIT token_fetch_size
2. Any cql
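The paging loop that query 1 drives can be sketched in Python. Here the sorted in-memory token list stands in for the real `SELECT DISTINCT TOKEN(partition_key) ...` round-trips, and all names are illustrative:

```python
# Simulated, sorted partition tokens; in the real system each fetch would be:
#   SELECT DISTINCT TOKEN(partition_key) FROM table
#   WHERE TOKEN(partition_key) > ? AND TOKEN(partition_key) <= ? LIMIT ?
all_tokens = sorted([-900, -40, 3, 77, 1024, 5000, 90000])

def fetch_token_page(current_offset, upper_bound, limit):
    """Stand-in for one DISTINCT-token query against the cluster."""
    page = [t for t in all_tokens if current_offset < t <= upper_bound]
    return page[:limit]

def walk_tokens(lower, upper, page_size):
    """Walk every partition token in (lower, upper], one page at a time."""
    offset, seen = lower, []
    while True:
        page = fetch_token_page(offset, upper, page_size)
        if not page:
            return seen
        seen.extend(page)
        offset = page[-1]  # resume strictly after the last token we saw
```

Using the last token of each page as the next offset (strict `>`) is what keeps the walk from skipping or repeating partitions.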
Hi users,
Querying multiple primary keys can be achieved using the IN operator, but it
causes load on only a single node, which in turn causes read timeout issues.
Calling each primary key asynchronously is also not the right choice for a
big partition key.
Can anyone suggest the best practice to query this and, if Cassandra is not
the right fit for these types of queries, which NoSQL DB solves this problem?
Any help is highly appreciated.
Thanks
select * from images_by_user where token(iduser) = token(5) and imagekey >
90b18881-ccd3-4ed4-8cdf-d71eb99b3505 limit 10 allow filtering;
where the image key is the last one on the first page.
I have 13 rows in the table. The first query returns 10 rows, both in cqlsh
and in the application (consistency level ONE).
https://www.slideshare.net/doanduyhai/datastax-day-2016-cassandra-data-modeling-basics
On Thu, Mar 1, 2018 at 3:48 PM, Valentina Crisan <valentina.cri...@gmail.com
> wrote:
1) I created another table for Query #2/3. The partition key was startTime
and the clustering key was name. When I execute my queries, I get an exception
saying that I need to ALLOW FILTERING.
*Primary key (startTime, name) - the only queries that can be answered by
this model are: where startTime ...*
Thank you for your response.
I have been through the document and I have tried these techniques, but I
failed to model my queries correctly.
For example, I have already tried the following:
1) I created another table for Query #2/3. The partition key was startTime
and the clustering key was name.
...and the 2nd and 3rd are failing.
You might find this useful:
http://cassandra.apache.org/doc/latest/cql/dml.html#the-where-clause
There are several Cassandra handbooks available on Amazon; maybe it would be
helpful for you to use some of them as a starting point to understand aspects
of Cassandra data/query modeling.
Hi,
<https://stackoverflow.com/questions/49049760/cassandra-filter-with-ordering-query-modeling>
I am new to Cassandra and I am trying to model a table in Cassandra. My
queries look like the following:
Query #1: select * from TableA where Id = "123"
Query #2: sele
Rajesh Kishore <rajesh10si...@gmail.com> wrote:
Hi Rahul,
I cannot confirm the size wrt Cassandra, but usually in Berkeley DB, for *10
M records*, it takes around 120 GB. Any operation takes hardly 2 to 3 ms
when the query is performed on an indexed attribute.
Usually 10 to 12 columns are the OOTB behaviour but one can configure any
attribute
...whether this backend can fit with the requirement we have.
Now, if we want to use Cassandra, I broadly see one table which would
contain all the entries. Now, the question is: what should be the correct
partition keys?
The entity is:
Entry {
  id varchar,
  objectclasses list,
  sn,
  cn,
  ...
}
and the query can be anything like:
a) get all entries based on sn=*
b) get all entries based on sn=A and cn=b
c) get all entries based on sn=A OR objectclass contains person
...
Please suggest.
    for (MyClass resultToUpdate : resultsToUpdate) {
        Statement updateQuery =
                QueryBuilder.update(KEYSPACE, tableName)...;
        cassandraSession.execute(updateQuery);
    }
    TimeUnit.SECONDS.sleep(sleepSeconds); // Sleep for a few seconds to let the DB... breathe
}
I'm using consistency level ONE on both
...the same day, so that might be the reason for all this data
going into the same partition. I'll see if I can do something about this.

Thanks,
Dipan Shah

From: Nicolas Guyomar <nicolas.guyo...@gmail.com>
Sent: Wednesday, December 20, 2017 2:48 PM
To: user@cassandra.apache.org
Subject: Re: Error during select query - Found other issues with cluster too
Hi
...and after that I will update whether that solved my problem.
Thanks,
Dipan Shah
From: adama.diab...@orange.com <adama.diab...@orange.com>
Sent: Wednesday, December 20, 2017 3:43 PM
To: user@cassandra.apache.org; Dipan Shah
Subject: RE: Error during select query -
# change 1234 to your current Cassandra process id
$ ulimit -Hn ; ulimit -Sn
What are your OS and its version ?
Thanks,
Adama
From: Dipan Shah [mailto:dipan@hotmail.com]
Sent: Wednesday, December 20, 2017 07:34
To: User
Subject: Re: Error during select query - Found other
Min    0.00   0.00   0.00          150           0
Max    0.00   0.00   0.00  53142810146  1996099046

Thanks,
Dipan Shah

From: Dipan Shah <dipan@hotmail.com>
Sent: Wednesday, December 20, 2017 12:04 PM
To: User
Subject: Re: Error during select query - Found other
Sent: Wednesday, December 20, 2017 2:23 AM
To: User
Subject: Re: Error during select query

Can you send through the full stack trace as reported in the Cassandra
logs? Also, what version are you running?
On 19 Dec. 2017 9:23 pm, "Dipan Shah" <dipan@hotmail.com> wrote:
Hello,
I am getting an error message when I'm running a select query from one
particular node. The error is "ServerError: java.lang.IllegalStateException:
Unable to compute ceiling for max when histogram overflowed".
Has anyone faced this error earlier? I tried to search for this error.
I have a new requirement where I need to copy all the rows from one
table to another table in Cassandra, where the second table contains one
extra column.
I have written a python script which reads each row and inserts it. But the
problem is, in the stage environment I'm observing the select count query is
returning a different result each time I execute it, so the row count is
varying.
The base table contains 6 lakh rows. The stage cluster is of 5 instances
with a replication factor of 3.
I'm able to successfully run the query in the dev cluster with the same
data, where there are 3 instances.
Lately, we are seeing issues with apps not being able to communicate
with Cassandra nodes, returning the following errors (captured in
servicemix logs):

Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)
    at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:218)
    at com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:43)
I'd say that no, a range query probably isn't the best for monitoring, but
it really depends on how important it is that the range you select is
consistent.
From those traces it does seem that the bulk of the time spent was waiting
for responses from the replicas, which may indicate a network issue.
OK.
I suspect either that you have a node down in your cluster (or 4),
>
Nope, that’s not what is happening as a) we have monitoring on all nodes,
b) there is nothing in the logs.
> or your queries are gradually getting slower.
>
Perhaps, but we have query time metrics that don’t seem to indicate that.
You're correct in that the timeout is only driver side. The server will
have its own timeouts configured in the cassandra.yaml file.
I suspect either that you have a node down in your cluster (or 4), or your
queries are gradually getting slower. This kind of aligns with the slow
query statements
We have a monitoring service that runs on all of our Cassandra nodes which
performs different query types to ensure the cluster is healthy. We use
different consistency levels for the queries and alert if any of them
fail. All of our query types consistently succeed apart from our ALL range
query.
...is less than 10 MB/sec for each node.
Now when I run a read-intensive query (like select count(1) on that huge
table), obviously Cassandra is under pressure, CPU is high, etc. If that's
important, the query is done via Presto.
The query returns successfully after 40 minutes, but the cluster doesn't
return to normal.
Hi Folks,
I've been noticing some missing rows, anywhere from 20-40% missing, while
executing paging queries over my cluster.
Basically the query is to hit every row, subdividing the entire token range
into a few tens of token ranges to parallelize the work; there is no
wraparound involved.
While this is indeed a problem with DSE, your problem looks related to CJK
Lucene indexing, in this context I think your query does not make sense.
(see CJK: https://en.wikipedia.org/wiki/CJK_characters)
If you properly configured your indexing to handle CJK, as it looks like you’re
searching
This is a question for datastax support, not the Apache mailing list. Folks
here are more than happy to help with open source, Apache Cassandra
questions, if you've got one.
On Thu, May 11, 2017 at 9:06 PM @Nandan@
wrote:
Hi,
In my table I have a few records, and I implemented Solr for partial
search, but I am not able to retrieve the data.
SELECT * from revall_book_by_title where solr_query = 'language:中';
SELECT * from revall_book_by_title where solr_query = 'language:中*';
Neither of them is working.
Any suggestions?
1) I need to search by a few (3-4) columns like video_title, video_actor, etc.
2) If I implement Solr indexing on this single table, then we will be able
to query by other columns and much more, but is it going to affect my READ
and WRITE speed?
3) Will it be a good idea or not to implement Solr directly?
Please suggest on the above.
Thanks.
I'm not sure if there's any performance advantage to using geohashes, from a
Cassandra data model & query perspective, as I haven't spent much time with
them. Maybe someone who's done this can chime in.
On Tue, May 9, 2017 at 1:16 PM Jim Ancona <j...@anconafamily.com> wrote:
For example, a space of (1,1) could contain all x,y coordinates where x
and y are > 0 and <= 1. You would then have a table like:

CREATE TABLE geospatial (
    space text,
    x double,
    y double,
    item text,
    m1,
    m2,
    m3,
    ...