The rendering tool renders a portion of a very large image. It may fetch
different data each time from billions of rows.
So I don't think I can cache such large results, since the same results will
rarely be fetched again.
Also, do you know how I can do 2D range queries using Cassandra? Some other
users
Data won't change much, but the queries will be different.
I am not working on the rendering tool myself, so I don't know many details
about it.
Also, as you suggested, I tried fetching data in batches of 500 or 1000 with
the Java driver's auto-pagination.
It fails when the number of records is high (around
Why are you creating new tables dynamically? I would try to use a static
schema and use a collection (list / map / set) for storing arbitrary data.
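For example, a minimal sketch with the Java driver (keyspace, table, and
column names here are hypothetical, and the keyspace is assumed to exist):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class StaticSchemaExample {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect();
            // One static table; arbitrary per-row data goes into a map collection.
            // Note: collections are intended for small amounts of data per row.
            session.execute("CREATE TABLE IF NOT EXISTS mykeyspace.items ("
                    + " item_id uuid PRIMARY KEY,"
                    + " attributes map<text, text>)");
            session.execute("INSERT INTO mykeyspace.items (item_id, attributes)"
                    + " VALUES (uuid(), {'color': 'red', 'size': 'large'})");
            cluster.close();
        }
    }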
On Wed, Mar 18, 2015 at 2:52 PM, Ankit Agarwal agarwalankit.k...@gmail.com
wrote:
Hi,
I am new to Cassandra. We are planning to use Cassandra for
Yeah, it may be that the process is being limited by swap. This page:
https://gist.github.com/aliakhtar/3649e412787034156cbb#file-cassandra-install-sh-L42
Lines 42-48 list a few settings that you could try for increasing or
reducing the memory limits (assuming you're on Linux).
Also, are
Perhaps just fetch them in batches of 1000 or 2000? For 1m rows, it seems
like the difference would only be a few minutes. Do you have to do this all
the time, or only once in a while?
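Something along these lines with the Java driver, using the fetch size to
page through the result (table and column names are hypothetical):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import com.datastax.driver.core.Statement;

    public class PagedFetchExample {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("mykeyspace");
            Statement stmt = new SimpleStatement("SELECT id, payload FROM results");
            stmt.setFetchSize(1000); // pull 1000 rows per page instead of everything at once
            ResultSet rs = session.execute(stmt);
            for (Row row : rs) {
                // iterating transparently fetches the next page when needed
                System.out.println(row.getUUID("id"));
            }
            cluster.close();
        }
    }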
On Wed, Mar 18, 2015 at 12:34 PM, Mehak Mehta meme...@cs.stonybrook.edu
wrote:
Yes, it works for 1000 but not more than that.
Hi,
I am new to Cassandra. We are planning to use Cassandra for a cloud-based
application in our development environment, so I am looking for the best
strategies to sync the schema for microservices while deploying the
application on Cloud Foundry.
One way I could use is the Accessor interface.
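For context, the Accessor approach with the DataStax Java driver's mapping
module looks roughly like this (keyspace, table, and column names are
hypothetical):

    import java.util.UUID;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.mapping.annotations.Accessor;
    import com.datastax.driver.mapping.annotations.Param;
    import com.datastax.driver.mapping.annotations.Query;

    // Queries live next to the code; the schema itself stays static.
    @Accessor
    public interface UserAccessor {
        @Query("SELECT * FROM mykeyspace.users WHERE id = :id")
        ResultSet findById(@Param("id") UUID id);
    }

An instance is obtained via new MappingManager(session).createAccessor(UserAccessor.class).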
4g also seems small for the kind of load you are trying to handle (billions
of rows).
I would also try adding more nodes to the cluster.
On Wed, Mar 18, 2015 at 2:53 PM, Ali Akhtar ali.rac...@gmail.com wrote:
Yeah, it may be that the process is being limited by swap. This page:
How often does the data change?
I would still recommend a caching of some kind, but without knowing more
details (how often the data is changing, what you're doing with the 1m rows
after getting them, etc) I can't recommend a solution.
I did see your other thread. I would also vote for
Yes, I have a cluster of 10 nodes in total, but I am just testing with one
node currently.
The total data across all nodes will exceed 5 billion rows, but I may have
more memory on the other nodes.
On Wed, Mar 18, 2015 at 6:06 AM, Ali Akhtar ali.rac...@gmail.com wrote:
4g also seems small for the kind of load you are
We have a UI which needs this data for rendering,
so the efficiency of pulling this data matters a lot; it should be fetched
within a minute.
Is there a way to achieve such efficiency?
On Wed, Mar 18, 2015 at 4:06 AM, Ali Akhtar ali.rac...@gmail.com wrote:
Perhaps just fetch them in batches
I would probably do this in a background thread and cache the results; that
way, when you have to render, you can just cache the latest results.
I don't know why Cassandra doesn't seem to be able to fetch large batch
sizes; I've also run into these timeouts, but reducing the batch size to 2k
seemed
Sorry, meant to say that way when you have to render, you can just display
the latest cache.
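A minimal sketch of that pattern, assuming hypothetical table names and a
5-minute refresh interval:

    import java.util.List;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicReference;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import com.datastax.driver.core.Statement;

    public class BackgroundCache {
        private final AtomicReference<List<Row>> latest = new AtomicReference<>();
        private final Session session;

        public BackgroundCache(Session session) {
            this.session = session;
            ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
            exec.scheduleAtFixedRate(this::refresh, 0, 5, TimeUnit.MINUTES);
        }

        private void refresh() {
            Statement stmt = new SimpleStatement("SELECT * FROM mykeyspace.results");
            stmt.setFetchSize(1000);                 // page through the rows
            latest.set(session.execute(stmt).all()); // all() drains every page
        }

        // Called from the render path: returns instantly with the last snapshot.
        public List<Row> snapshot() { return latest.get(); }
    }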
On Wed, Mar 18, 2015 at 1:30 PM, Ali Akhtar ali.rac...@gmail.com wrote:
I would probably do this in a background thread and cache the results,
that way when you have to render, you can just cache the
Cassandra can certainly handle millions and even billions of rows, but...
it is a very clear anti-pattern to design a single query to return more
than a relatively small number of rows except through paging. How small?
Low hundreds is probably a reasonable limit. It is also an anti-pattern to
When I run nodetool compactionhistory, I'm only seeing the system
keyspace and the OpsCenter keyspace in the compactions. I only see one
mention of my own keyspace, but it's only for the smallest table within that
keyspace (containing only about 1k rows). My two other tables, containing
1.1m and 100k
From your description, it sounds like you have a single partition key with
millions of clustered values on the same partition. That's a very wide
partition. You may very likely be causing a lot of memory pressure in your
Cassandra node (especially at 4G) while trying to execute the query.
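If that's the case, one common fix is to add a bucket to the partition key
so each partition stays bounded; a rough sketch (all names and the bucketing
scheme are hypothetical):

    import com.datastax.driver.core.Session;

    public final class BucketedSchema {
        public static void create(Session session) {
            // The composite partition key (image_id, bucket) splits what was one
            // huge partition into many bounded ones; bucket could be derived
            // from a tile/grid coordinate, for example.
            session.execute("CREATE TABLE IF NOT EXISTS mykeyspace.points ("
                    + " image_id text,"
                    + " bucket int,"
                    + " x bigint, y bigint,"
                    + " payload blob,"
                    + " PRIMARY KEY ((image_id, bucket), x, y))");
        }
    }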
I have a table which is going to be storing temporary search results. The
results will be available for a short time (anywhere from 1 to 24 hours)
from the time of the search, and then should be deleted to clear up disk
space.
This is going to apply to all the rows within this table.
What would
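One mechanism that fits this is Cassandra's built-in TTL; a minimal sketch,
assuming hypothetical table and column names (86400 s is the 24-hour upper
bound mentioned above):

    import com.datastax.driver.core.Session;

    public final class ExpiringResults {
        public static void createTable(Session session) {
            // Table-wide default: every cell expires 24 hours after it is written.
            session.execute("CREATE TABLE IF NOT EXISTS mykeyspace.search_results ("
                    + " search_id uuid, result_id uuid, payload text,"
                    + " PRIMARY KEY (search_id, result_id))"
                    + " WITH default_time_to_live = 86400");
        }

        public static void insertShortLived(Session session) {
            // Per-insert TTL overrides the default when the lifetime varies (1 hour here).
            session.execute("INSERT INTO mykeyspace.search_results"
                    + " (search_id, result_id, payload) VALUES (uuid(), uuid(), 'r')"
                    + " USING TTL 3600");
        }
    }

Keep in mind that expired cells become tombstones until they are compacted away.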
Hi Fabien,
Thank you for the link! That's exactly what we want to do.
But before starting this, we need to clean up the mess in order to get a clean
cluster.
Thanks for your help.
Best regards,
David CHARBONNIER
Sysadmin
T : +33 411 934 200
Hi David,
There is an excellent article which describes exactly what you want to do
(i.e., migrate from one DC to another DC):
http://planetcassandra.org/blog/cassandra-migration-to-ec2/
2015-03-18 17:05 GMT+01:00 David CHARBONNIER david.charbonn...@rgsystem.com:
Hi,
We’re using Cassandra
Hello,
Finally, I have created my ring using Cassandra.
I'd like to store a file replicated 2 times in my cluster.
Is that possible? Can you please send me a link to a tutorial?
Thanks a lot.
Best Regards.
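The replication part is a keyspace setting; a naive sketch with the Java
driver, assuming a single-DC cluster and files small enough to fit in one
blob cell (all names and paths are hypothetical):

    import java.nio.ByteBuffer;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class ReplicatedFileStore {
        public static void main(String[] args) throws Exception {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect();
            // replication_factor = 2 keeps two copies of every row in the cluster.
            session.execute("CREATE KEYSPACE IF NOT EXISTS files WITH replication ="
                    + " {'class': 'SimpleStrategy', 'replication_factor': 2}");
            session.execute("CREATE TABLE IF NOT EXISTS files.blobs ("
                    + " name text PRIMARY KEY, data blob)");
            ByteBuffer data = ByteBuffer.wrap(Files.readAllBytes(Paths.get("/tmp/example.bin")));
            session.execute("INSERT INTO files.blobs (name, data) VALUES (?, ?)",
                    "example.bin", data);
            cluster.close();
        }
    }

Large files should be chunked across multiple rows rather than stored in a
single cell.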
After upgrading a 3-node Cassandra cluster from 1.2.19 to 2.0.12, I have an
event storm of SliceQueryFilter messages flooding the Cassandra system.log
file.
WARN [ReadStage:1043] 2015-03-18 15:14:12,708 SliceQueryFilter.java (line 231)
Read 201 live and 13539 tombstoned cells in
Hi,
Try setting the fetch size before querying. Assuming you don't set it too
high, and you don't have too many tombstones, that should do it.
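The fetch size can also be set cluster-wide instead of per statement; a
sketch (the contact point is hypothetical):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.QueryOptions;

    public class DefaultFetchSize {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder()
                    .addContactPoint("127.0.0.1")
                    // every query now pages in chunks of 1000 rows unless overridden
                    .withQueryOptions(new QueryOptions().setFetchSize(1000))
                    .build();
            cluster.connect();
            cluster.close();
        }
    }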
Cheers,
Jens
–
Sent from Mailbox
On Wed, Mar 18, 2015 at 2:58 AM, Mehak Mehta meme...@cs.stonybrook.edu
wrote:
Hi,
I have a requirement to fetch
Hi Jens,
I have tried with a fetch size of 1 and still it's not giving any results.
My expectation was that Cassandra could handle a million rows easily.
Is there any mistake in the way I am defining the keys or querying them?
Thanks,
Mehak
On Wed, Mar 18, 2015 at 3:02 AM, Jens Rantil
Have you tried a smaller fetch size, such as 5k - 2k?
On Wed, Mar 18, 2015 at 12:22 PM, Mehak Mehta meme...@cs.stonybrook.edu
wrote:
Hi Jens,
I have tried with a fetch size of 1 and still it's not giving any results.
My expectation was that Cassandra could handle a million rows easily.
Is
Yes, it works for 1000 but not more than that.
How can I fetch all rows efficiently this way?
On Wed, Mar 18, 2015 at 3:29 AM, Ali Akhtar ali.rac...@gmail.com wrote:
Have you tried a smaller fetch size, such as 5k - 2k?
On Wed, Mar 18, 2015 at 12:22 PM, Mehak Mehta meme...@cs.stonybrook.edu
Thanks a lot for your responses.
My question is: what are the best practices for database schema deployment
for a microservice in a cloud environment?
E.g., should we create it with the deployment of the microservice, or should
it be generated via code, or should it not be generated via code but instead should
Hi David,
Some input to get back to where you were:
a) Start with the French cluster only and get it working with DSE 4.5.1.
b) The OpsCenter keyspace is RF1 by default; alter the keyspace to RF3 (a
sketch follows below).
c) Take a full snapshot of all your nodes and copy the files to a safe
location on all the nodes.
To
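Step b) would look something like this (the strategy and datacenter name,
'DC1' here, depend on your actual topology):

    import com.datastax.driver.core.Session;

    public final class OpsCenterReplication {
        public static void raiseRf(Session session) {
            // Strategy and DC name must match the actual cluster topology.
            session.execute("ALTER KEYSPACE \"OpsCenter\" WITH replication ="
                    + " {'class': 'NetworkTopologyStrategy', 'DC1': 3}");
            // Run 'nodetool repair' afterwards so existing data is streamed
            // to the new replicas.
        }
    }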
On Wed, Mar 18, 2015 at 10:14 AM, Caraballo, Rafael
rafael.caraba...@twcable.com wrote:
After upgrading a 3-node Cassandra cluster from 1.2.19 to 2.0.12, I have
an event storm of “SliceQueryFilter” messages flooding the Cassandra
system.log file.
How can I stop this event storm?
As
Hello,
For the limit on the number of cells
(http://wiki.apache.org/cassandra/CassandraLimitations) (columns * rows)
per partition, I wonder what is meant by the number of columns, since different rows
may have different columns. Is the number of columns the number of columns of
the biggest row,
On Wed, Mar 18, 2015 at 12:43 PM, Ruebenacker, Oliver A
oliver.ruebenac...@altisource.com wrote:
For the limit on the number of cells
http://wiki.apache.org/cassandra/CassandraLimitations (columns * rows)
per partition, I wonder what is meant by the number of columns, since different
rows may have
On Wed, Mar 18, 2015 at 12:58 PM, Robert Coli rc...@eventbrite.com wrote:
On Wed, Mar 18, 2015 at 9:05 AM, David CHARBONNIER
david.charbonn...@rgsystem.com wrote:
- New nodes in the other country have been installed like the
French nodes, except for the Datastax Enterprise version (4.5.1
Generally, a concern about the limit on the number of columns is a concern
about storage for rows in a partition. Cassandra is a column-oriented database,
but this is really referring to its cell-oriented storage structure, with
each column name and column value pair being a single cell (except