Generally speaking (both for Cassandra and for many other projects),
timestamps don't carry a timezone directly. A single point in time has the
same timestamp value regardless of the timezone; a timezone is only
attached when you convert the timestamp to a human-friendly value.
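The point above can be sketched in Python (the epoch-milliseconds value is a hypothetical example of how Cassandra represents a timestamp internally):

```python
from datetime import datetime, timezone, timedelta

# A hypothetical instant, stored as epoch milliseconds -- note that no
# timezone is part of the stored value itself.
ts_millis = 1450684200000

# A timezone only gets attached when rendering a human-friendly form.
as_utc = datetime.fromtimestamp(ts_millis / 1000, tz=timezone.utc)
as_ist = as_utc.astimezone(timezone(timedelta(hours=5, minutes=30)))

print(as_utc.isoformat())  # rendered in UTC
print(as_ist.isoformat())  # rendered in IST

# Different renderings, but the same point in time.
assert as_utc == as_ist
```

The two `isoformat()` strings differ, yet the aware datetimes compare equal because they denote the same instant.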
Even if you get this to work for now, I really recommend using a different
tool, like Spark. Personally I wouldn't use UDAs outside of a single
partition.
On Mon, Dec 21, 2015 at 1:50 AM Dinesh Shanbhag <dinesh.shanb...@isanasystems.com> wrote:
>
> Thanks for the pointers! I edited jvm.options
>
> We do have a lot of keyspaces and column families.
Be careful, as C* (not just OpsCenter) will not run well with too many
tables. Usually 200 to 300 is a good upper bound, though I've seen folks
throw money at the problem and run more with special hardware (lots of RAM).
Most importantly,
The Cassandra team is pleased to announce the release of Apache Cassandra
version 3.0.2.
Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.
http://cassandra.apache.org/
Downloads of source and binary distributions are listed in our download
section.
The Cassandra team is pleased to announce the release of Apache Cassandra
version 3.1.1.
There has been some understandable confusion about our new Tick-Tock
release style. This thread should help explain it [4]. Since a critical
bug was discovered just after 3.1, we are releasing 3.1.1 to address it.
Why is the old node not able to restart?
If you're about to bring up a new node to replace the old dead one, it may
be simpler to just replace it:
https://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_replace_node_t.html
Hope it helps.
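For reference, the replacement procedure in the linked docs boils down to starting the new node with the `replace_address` flag. A minimal sketch, where 10.0.0.5 is a hypothetical IP of the dead node:

```shell
# On the REPLACEMENT node only, before its first start.
# 10.0.0.5 stands in for the dead node's IP address.
# Append to $CASSANDRA_HOME/conf/cassandra-env.sh:
JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.0.5"
```

Remove the flag again after the replacement node has bootstrapped, so later restarts behave normally.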
Carlos Alonso | Software Engineer | @calonso
For cross-cluster operation with the Spark/Cassandra connector, you can
look at this trick:
http://www.slideshare.net/doanduyhai/fast-track-to-getting-started-with-dse-max-ing/64
On Mon, Dec 21, 2015 at 1:14 PM, George Sigletos wrote:
> Roughly half TB of data.
>
> There is a timestamp column in
Roughly half a TB of data.
There is a timestamp column in the tables we migrated, and we did use that
to achieve incremental updates.
I don't know anything about kairosdb, but I can see from the docs that
there exists a row timestamp column. Could you maybe use that one?
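A minimal sketch of the watermark idea behind these incremental updates, using hypothetical rows as plain dicts with a `ts` column (not any particular driver API):

```python
# Hypothetical rows as plain dicts; "ts" stands in for the timestamp
# column mentioned above. Only rows written after the last successful
# migration pass are copied on the next run.
def incremental_rows(rows, last_run_ts):
    """Return rows written after the previous migration pass."""
    return [r for r in rows if r["ts"] > last_run_ts]

rows = [
    {"id": 1, "ts": 100},
    {"id": 2, "ts": 250},
    {"id": 3, "ts": 400},
]

# The first full run copies everything; incremental runs only pick up
# rows newer than the recorded watermark.
print(incremental_rows(rows, 0))    # all three rows
print(incremental_rows(rows, 250))  # only the row with ts=400
```

After each pass, the highest `ts` seen would be recorded as the new watermark for the next run.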
Kind regards,
George
On Mo
George Sigletos writes:
> Hello,
>
> We had a similar problem where we needed to migrate data from one cluster
> to another.
>
> We ended up using Spark to accomplish this. It is fast and reliable, but
> some downtime was still required.
>
> We minimized the downtime by doing a first run, and
Hello,
We had a similar problem where we needed to migrate data from one cluster
to another.
We ended up using Spark to accomplish this. It is fast and reliable, but
some downtime was still required.
We minimized the downtime by doing a first full run, and then running
incremental updates.
Kind regards,
George
Thanks for the pointers! I edited jvm.options in
$CASSANDRA_HOME/conf/jvm.options to increase -Xms and -Xmx to 1536M.
The result is the same.
And in $CASSANDRA_HOME/logs/system.log, grep GC system.log produces this
(when jvm.options had not been changed):
INFO [Service Thread] 2015-12-1
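For reference, the heap change described above in jvm.options syntax (the 1536M values are the ones from this message; the right size depends on the workload):

```
# $CASSANDRA_HOME/conf/jvm.options
-Xms1536M
-Xmx1536M
```

Setting -Xms and -Xmx to the same value is the usual practice, so the JVM doesn't spend time resizing the heap at runtime.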
Hello all,
We have two clusters X and Y with same keyspaces but distinct data sets.
We are planning to merge these into single cluster. What would be ideal
steps to achieve this without downtime for applications? We have time
series data stream continuously writing to Cassandra.
We have ruled ou