Hi Oleksandr,
Yes, that was always the case. All older versions are removed from the Debian
repo index :(
From: Oleksandr Shulgin
Reply-To: "user@cassandra.apache.org"
Date: Tuesday, April 2, 2019 at 20:04
To: User
Subject: How to install an older minor release?
Hello,
We've just noticed that we
Hi Vsevolod,
> Are there any workarounds to speed up the process? (e.g. doing cleanup only
> after all 4 new nodes joined cluster), or inserting multiple nodes
> simultaneously with specific settings?
e.g. doing cleanup only after all 4 new nodes joined cluster === allowed
inserting multiple nod
e is the severity, level INFO.
Generally, I don’t recommend running Cassandra with only 2GB of RAM, or for small
datasets that can easily fit in memory. Is there a reason why you’re picking
Cassandra for this dataset?
On Thu, Mar 7, 2019 at 8:04 AM Kyrylo Lebediev
<klebed...@conductor.com> wrote:
Hi All,
We have a tiny 3-node cluster
C* version 3.9 (I know 3.11 is better/more stable, but we can’t upgrade immediately)
HEAP_SIZE is 2G
JVM options are default
All settings in cassandra.yaml are default (file_cache_size_in_mb not set)
Data per node – just ~1 GByte
We’re getting the following error messag
As many people use Oracle JDK, I think it’s worth mentioning that, according to
the Oracle Support Roadmap, there are some changes in their policies for Java 11
and above (http://www.oracle.com/technetwork/java/eol-135779.html).
In particular:
“Starting with Java SE 9, in addition to providing Oracle JDK f
EPAIR 0
and yes, we are inserting data with TTL=60 seconds;
we have 200 vehicles and update this table every 5 or 10 seconds.
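For illustration, such a write would look like this (table and column names are
hypothetical, not from this thread):
INSERT INTO tracking.vehicle_location (vehicle_id, ts, lat, lon)
VALUES (42, toTimestamp(now()), 50.45, 30.52)
USING TTL 60;
With TTL=60 and updates every 5-10 seconds, only roughly the last 6-12 points
per vehicle are live at any moment.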
At 2018-08-31 17:10:50, "Kyrylo Lebediev"
<kyrylo_lebed...@epam.com.INVALID>
wrote:
Looks like you're querying the table at CL = ONE which is default for cqlsh.
If you run cqlsh on nodeX, it doesn't mean you retrieve data from this node.
What this means is that nodeX will be the coordinator, whereas the actual data
will be retrieved from any replica, based on token range + dynamic snitch data.
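If you want cqlsh to read at a stronger level, you can set it per session with
the CONSISTENCY command (keyspace/table names are placeholders):
CONSISTENCY QUORUM;
SELECT * FROM myks.mytable WHERE id = 1;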
Hi,
There is an instruction for how to switch to a different snitch in the DataStax docs:
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsSwitchSnitch.html
Personally I haven't tried to change Ec2 to Ec2Multi, but my understanding is
that as long as 'dc' and 'rack' values are left u
means there's lots of room for improvement though :)
On Thu, Aug 9, 2018 at 5:36 AM Kyrylo Lebediev
wrote:
Thank you Jon, great article as usual!
One topic that was discussed in the article is the filesystem cache, which is
traditionally leveraged for data caching in Cassandra (with row caching
disabled by default).
IIRC mmap() is used.
Some RDBMS and NoSQL DBs also use direct I/O + async I/O + m
Alain Rodriguez - @arodream -
al...@thelastpickle.com
France / Spain
The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com
2018-08-06 22:27 GMT+01:00 Kyrylo Lebediev
<kyrylo_lebed...@epam.com.invalid>:
Thank you for replying, Alain
---
Alain Rodriguez - @arodream -
al...@thelastpickle.com
France / Spain
The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com
2018-08-04 15:37 GMT+02:00 Kyrylo Lebediev
<kyrylo_lebed...@epam.com.invalid>:
Hello!
In c
A small gc_grace_seconds value lowers the maximum allowed node downtime, which is
15 minutes in your case. After 15 minutes of downtime you'll need to replace the
node, as you described. This interval looks too short to allow for planned
maintenance. So, in case you set a larger value for gc_grace_secon
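For reference, gc_grace_seconds is set per table; a minimal sketch (table name
is a placeholder; 864000 is the 10-day default):
ALTER TABLE myks.mytable WITH gc_grace_seconds = 864000;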
Go to: http://dl.bintray.com/apache/cassandra/pool/main/c/cassandra/ ,
download the deb packages for the versions you need, and install them with dpkg.
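For example (file names are illustrative; pick the version you need from that index):
wget http://dl.bintray.com/apache/cassandra/pool/main/c/cassandra/cassandra_3.11.3_all.deb
sudo dpkg -i cassandra_3.11.3_all.deb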
Regards,
Kyrill
From: R1 J1
Sent: Saturday, August 4, 2018 4:56:56 PM
To: user@cassandra.apache.org
Subject: How to downl
Hello!
In the case when dynamic snitching is enabled, data is read from 'the fastest
replica' and the other replicas send digests for CL=QUORUM/LOCAL_QUORUM.
When dynamic snitching is disabled, the concept of the fastest replica
disappears, so which rules are then used to choose which replica to read from?
There are two factors in terms of Cassandra that determine what's called
network topology: datacenter and rack.
rack - it's not necessarily a physical rack; it's rather a single point of
failure. For example, in the case of AWS, one availability zone is usually chosen
to be a Cassandra rack.
datace
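For illustration, with GossipingPropertyFileSnitch each node declares its
topology in cassandra-rackdc.properties; the AWS-style values here are just an
example:
dc=us-east-1
rack=us-east-1a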
Also it's worth noting that JBOD isn't recommended for older Cassandra
versions, as there are known issues with data imbalance on JBOD.
IIRC the JBOD data imbalance was fixed in some 3.x version (3.2?).
For older versions, creating one large filesystem on top of an md or lvm device
seems to be a be
If you've got small partitions/small reads you should test lowering your
compression chunk size on the table and disabling read ahead. This sounds like
it might just be a case of read amplification.
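A hedged sketch of both knobs for 2.1 (table name and device are placeholders;
newer versions spell the option chunk_length_in_kb):
ALTER TABLE myks.mytable
  WITH compression = {'sstable_compression': 'LZ4Compressor', 'chunk_length_kb': 4};
sudo blockdev --setra 0 /dev/xvdb    # disable read ahead
Smaller chunks mean less data is decompressed per point read, at the cost of a
larger compression-offsets map held in memory.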
On Tue., 8 May 2018, 05:43 Kyrylo Lebediev,
<kyrylo_lebed...@epam.com> wrote:
Dear Experts,
I'm observing strange behavior on a 2.1.20 cluster during compactions.
My setup is:
12 nodes m4.2xlarge (8 vCPU, 32G RAM) Ubuntu 16.04, 2T EBS gp2.
Filesystem: XFS, blocksize 4k, device read-ahead - 4k
/sys/block/vxdb/queue/nomerges = 0
SizeTieredCompactionStrategy
After da
Hi!
You are talking about messages like below, right?
INFO [epollEventLoopGroup-2-8] 2018-05-05 16:57:45,537 Message.java:623 -
Unexpected exception during request; channel = [id: 0xa4879fdd,
L:/10.175.20.112:9042 - R:/10.175.20.73:2508]
io.netty.channel.unix.Errors$NativeIoException: sysca
First of all, you need to identify the bottleneck in your case.
First things to check:
1) JVM: the heap is too small for such workloads.
Enable GC logging in /etc/cassandra/cassandra-env.sh, then analyze its output
under workload. In case you observe messages about long pauses or Full GC's,
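On 2.x, for example, this is usually just a matter of uncommenting the GC
logging lines that already ship in cassandra-env.sh (log path may differ per
distro):
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"
JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"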
Thank you, Rahul!
From: Rahul Singh
Sent: Saturday, April 21, 2018 3:02:11 PM
To: user@cassandra.apache.org
Subject: Re: copy from one table to another
That’s correct.
On Apr 21, 2018, 5:05 AM -0400, Kyrylo Lebediev ,
wrote:
You mean that correct table UUID
ent Guid — doing a hard link may work as long as the
sstable dir's guid is the same as the newly created table's in the system schema.
--
Rahul Singh
rahul.si...@anant.us
Anant Corporation
On Apr 19, 2018, 10:41 AM -0500, Kyrylo Lebediev ,
wrote:
The table is too large to be copied fast/effec
Kyrylo Lebediev
04/16/2018 10:37 AM
Please respond to
user@cassandra.apache.org
To
"user@cassandra.apache.org" ,
cc
Subject
Re: copy from one table to another
Any i
Any issues if we:
1) create a new empty table with the same structure as the old one
2) create hardlinks ("ln without -s"):
.../<keyspace>/<old_table>-<old_id>/<sstable-files>* --->
.../<keyspace>/<new_table>-<new_id>/<sstable-files>*
3) run nodetool refresh -- newkeyspacename newtable
and then query/modify both tables independently/simultaneously?
In theory, as SSTables
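A sketch of the idea with hypothetical names (the data directory layout and
sstable file naming vary by version, so check yours):
ln /var/lib/cassandra/data/ks/oldtable-<old_id>/* \
   /var/lib/cassandra/data/ks/newtable-<new_id>/
nodetool refresh -- ks newtable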
Niclas,
Here is Jeff's comment regarding this: https://stackoverflow.com/a/31690279
From: Niclas Hedhman
Sent: Friday, March 9, 2018 9:09:53 AM
To: user@cassandra.apache.org; Rahul Singh
Subject: Re: Adding disk to operating C*
I am curious about the side commen
Not sure where I heard this, but AFAIK data imbalance when multiple
data_directories are in use is a known issue for older versions of Cassandra.
This might be the root-cause of your issue.
Which version of C* are you using?
Unfortunately, I don't remember in which version this imbalance issue wa
daemons, however, since they may sync from
different sources. Whether you use ntpd, openntp, ntpsec, or chrony isn't
really important, since they are all just background daemons to sync the
system clock. There is nothing Cassandra-specific.
--
Kind regards,
Michael
On 03/08/2018 04:15 AM, Kyryl
Hi!
Recently Amazon announced launch of Amazon Time Sync Service
(https://aws.amazon.com/blogs/aws/keeping-time-with-amazon-time-sync-service/)
and now it's AWS-recommended way for time sync on EC2 instances
(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html). It's
stated there
Thanks for sharing, Dikang!
Impressive results.
Since you plugged in a different storage engine, it's interesting how you're
dealing with compactions in Rocksandra.
Is there still the concept of immutable SSTables + compaction strategies, or was
it changed somehow?
Best,
Kyrill
Hello!
Can't answer your question, but there is another one: "why do we need to
maintain counters, with their known limitations (and I've heard of some issues
with the implementation of counters in Cassandra), when there exist really
effective uuid generation algorithms which allow us to generate uni
2018 12:49:02 AM
To: user
Subject: Re: vnodes: high availability
I *strongly* recommend disabling dynamic snitch. I’ve seen it make latency
jump 10x.
dynamic_snitch: false is your friend.
On Jan 17, 2018, at 2:00 PM, Kyrylo Lebediev
<kyrylo_lebed...@epam.com> wrote:
Avi,
I
Hi!
The partition key (Id in your case) must be in the WHERE clause if not using
indexes (but indexes should be used carefully, not like in the case of
relational DBs). Also, only columns which belong to the primary key (= partition
key + clustering key) can be used in WHERE in such cases. That's why the 2nd and
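A small illustration with a hypothetical table:
CREATE TABLE ks.t (id int, ts timestamp, val text, PRIMARY KEY (id, ts));
SELECT * FROM ks.t WHERE id = 1 AND ts > '2018-01-01';  -- OK: partition key + clustering key
SELECT * FROM ks.t WHERE val = 'x';  -- rejected unless you add an index or ALLOW FILTERING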
Thank you so much, Eric!
From: Eric Evans
Sent: Wednesday, February 28, 2018 6:26:23 PM
To: user@cassandra.apache.org
Subject: Re: JMX metrics for CL
On Tue, Feb 27, 2018 at 2:26 AM, Kyrylo Lebediev
<kyrylo_lebed...@epam.com> wrote:
Hello!
implement the corresponding metric.
Then, in cassandra-env.sh you could specify to use your class using
-Dcassandra.custom_query_handler_class.
HTH,
Horia
On tis, 2018-02-27 at 08:26 +0000, Kyrylo Lebediev wrote:
Hello!
Is it possible to get counters from C* side regarding CQL queries executed
si
In terms of Cassandra, a rack is considered a single point of failure. So, using
a rack-aware snitch (GossipingPropertyFileSnitch would be the best for your
case), Cassandra won't place multiple replicas of the same range in the same
rack.
Basically, there are two requirements that should be
Hello!
Is it possible to get counters from C* side regarding CQL queries executed
since startup for each CL?
For example:
CL ONE: NNN queries
CL QUORUM: MMM queries
etc
Regards,
Kyrill
absolute hell.
You can only do this by adding a new DC and retiring the old.
On Feb 24, 2018, at 2:26 AM, Kyrylo Lebediev
<kyrylo_lebed...@epam.com> wrote:
> By the way, is it possible to migrate towards to smaller token ranges? What
> is the recommended way doing so?
- Didn't see this question answered. I think the easiest way to do this is to
add new C* nodes with fewer vnodes (8 or 16 instead of the default 256), then
decommission the old nodes with vnodes=256.
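That is, on the new nodes only, before they bootstrap, set something like this
in cassandra.yaml (16 is just an example):
num_tokens: 16
then decommission the old nodes once the new ones have joined.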
Yes, it's a 'known issue' with the style of Debian repos management.
Personally, I don't understand why all versions except the latest one are
intentionally removed from the repo index.
Because if we had several versions of a package in the repo, anyway 'apt-get
install cassandra cassandra-tools'
g 8 smaller nodes in production
cluster with 8 new nodes that are bigger in capacity, without downtime.
You can also create a new DC and then terminate the old one.
Sent from my iPhone
> On Feb 20, 2018, at 2:49 PM, Kyrylo Lebediev wrote:
Hi,
Consider using this approach, replacing nodes one by one:
https://mrcalonso.com/2016/01/26/cassandra-instantaneous-in-place-node-replacement/
Regards,
Kyrill
From: Leena Ghatpande
Sent: Tuesday, February 20, 2018 10:24:24 PM
To: user@cassandra.apache
Agree with you, Daniel, regarding the gaps in documentation.
---
At the same time, I disagree with the folks complaining in this thread that some
functionality like 'advanced backup' etc. is missing out of the box.
We all live in the time where there are literally tons of open-source tools
Hi,
Not sure what could be the reason, but the issue is obvious: the directory
/var/lib/cassandra/hints is missing, or the cassandra user doesn't have enough
permissions on it.
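If it's simply missing, something like this should fix it (assuming the default
packaging user/group):
sudo mkdir -p /var/lib/cassandra/hints
sudo chown cassandra:cassandra /var/lib/cassandra/hints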
Regards,
Kyrill
From: test user
Sent: Wednesday, February 7, 2018 8:14:59 AM
To: user@cass
Regards,
Kyrill
____
From: Kyrylo Lebediev
Sent: Saturday, February 3, 2018 12:23:15 PM
To: User
Subject: Re: Cassandra 2.1: replace running node without streaming
Thank you Oleksandr,
Just tested on 3.11.1 and it worked for me (you may see the logs below).
Just comprehended
it worked for the first time.
--
Alex
Am 03.02.2018 um 08:19 schrieb Oleksandr Shulgin
<oleksandr.shul...@zalando.de>:
On 3 Feb 2018 02:42, "Kyrylo Lebediev"
<kyrylo_lebed...@epam.com> wrote:
Thanks, Oleksandr,
In my case I'll need to replace all no
.1.x with
vnodes enabled?
Regards,
Kyrill
From: Oleksandr Shulgin
Sent: Friday, February 2, 2018 4:26:30 PM
To: User
Subject: Re: Cassandra 2.1: replace running node without streaming
On Fri, Feb 2, 2018 at 3:15 PM, Kyrylo Lebediev
<kyrylo_lebed...@epam.com> wrote:
Hello All!
I've got a pretty standard task: to replace a running C* node [version 2.1.15,
vnodes=256, Ec2Snitch] (the IP address will change after replacement; I have no
control over that).
There are 2 ways stated in the C* documentation for how this can be done:
1) Add a new node, then 'nodetool decommissio
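(For reference, the other commonly documented way uses the replace flag on the
new node, passed e.g. via cassandra-env.sh:
JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=<dead_node_ip>"
where <dead_node_ip> is the address of the node being replaced.)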
Haven't tried this myself, but I suspect that in order to add public addresses
to existing nodes for inter-DC communication, the following method might work
(just my guess):
- take a node, shut down C* there, change its private IP, add a public IP,
update cassandra.yaml accordingly (the same wa
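A sketch of the relevant cassandra.yaml settings (addresses are placeholders;
listen_on_broadcast_address is available on newer versions):
listen_address: 10.0.1.5          # private IP, used within the DC
broadcast_address: 203.0.113.10   # public IP, advertised to the other DC
listen_on_broadcast_address: true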
On 16 January 2018 at 21:54, Kyrylo Lebediev
mailto:kyrylo_lebed...@epam.com>> wrote:
Thank you for this valuable info, Jon.
I guess both you and Al
not a point
of interest for many C* devs plus probably a lot of us wouldn't remember enough
math to know how to approach it.
If you want to get out of this situation you'll need to do a DC migration to a
new DC with a better configuration of snitch/replication strategy/racks/tokens.
Hi Jerome,
I don't know the reason for this, but compactions do run during 'nodetool
decommission'.
Which C* version are you working with?
What is the reason you're decommissioning the node? Any issues with it?
Can you see any errors/warnings in system.log on the node being decommissioned?
Pending
imbalance depends on luck,
unfortunately.
I’m interested to hear your results using 4 tokens, would you mind letting the
ML know your experience when you’ve done it?
Jon
On Jan 16, 2018, at 9:40 AM, Kyrylo Lebediev
mailto:kyrylo_lebed...@epam.com>> wrote:
Agree with you, Jon.
Actually, this cluster w
of an entire availability zone
at QUORUM, or two if you queried at CL=ONE.
You are correct about 256 tokens causing issues, it’s one of the reasons why we
recommend 32. I’m curious how things behave going as low as 4, personally, but
I haven’t done the math / tested it yet.
On Jan 16, 2018
partial downtime, and that's something you must consider both when
designing your cluster (how it is segmented, how operations are performed), and
when designing your apps (how you will use the driver, how your apps will react
to failure).
Cheers,
On Tue, Jan 16, 2018 at 11:03 AM Kyr
it's always going to be the case that any 2 nodes going down
result in a loss of QUORUM for some token range.
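A back-of-the-envelope check (my own sketch, not from the thread): with 50
nodes, vnodes=256 and RF=3 there are 50 * 256 = 12,800 token ranges, each
replicated on 3 nodes. A given pair of nodes co-owns a given range with
probability roughly (3/50) * (2/49), i.e. about 0.24%, so the expected number of
ranges they share is about 12,800 * 0.0024 ≈ 31. Hence practically any pair of
down nodes breaks QUORUM for some range.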
On 15 January 2018 at 19:59, Kyrylo Lebediev
<kyrylo_lebed...@epam.com> wrote:
Thanks Alexander!
I don't have an MS in math either) Unfortunately.
Not sure, but it seem
odds will get slightly different).
That makes maintenance predictable because you can shut down as many nodes as
you want in a single rack without losing QUORUM.
Feel free to correct my numbers if I'm wrong.
Cheers,
On Mon, Jan 15, 2018 at 5:27 PM Kyrylo Lebediev
<kyrylo_lebed...@epam.com>
DataStax Java Driver and having
the client recognize a degraded cluster and operate temporarily in downgraded
consistency mode
http://docs.datastax.com/en/latest-java-driver-api/com/datastax/driver/core/policies/DowngradingConsistencyRetryPolicy.html
- Rahul
On Mon, Jan 15, 2018 at 10:04 AM, Kyrylo L
Hi,
Let's say we have a C* cluster with following parameters:
- 50 nodes in the cluster
- RF=3
- vnodes=256 per node
- CL for some queries = QUORUM
- endpoint_snitch = SimpleSnitch
Is it correct that any 2 nodes being down will cause unavailability of a
keyrange at CL=QUORUM?
Regards,
K
Nandan,
There are several options for how this can be done.
For example, you may configure 2 network adapters per VM:
1) NAT: so that the VM has access to the Internet
2) Host-only adapter: for internode communication (listen_address, rpc_address),
with a static IP configuration; see the sketch below
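For the host-only adapter, each node's cassandra.yaml would then carry its
static address (addresses below are placeholders from VirtualBox's default
host-only range):
listen_address: 192.168.56.101
rpc_address: 192.168.56.101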