Yes. Move the corrupt sstable, and run a repair on this node, so that it
gets in sync with its peers.
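A rough dry-run sketch of those two steps; the keyspace, table, and sstable generation below are made-up placeholders, so substitute the real ones from the corruption error in system.log (drop the echo wrappers to actually run it):

```shell
# Dry-run sketch -- keyspace/table names, data directory, and the sstable
# generation "jb-1234" are assumptions, not taken from the thread.
KS=my_keyspace; CF=my_table
echo "sudo service cassandra stop"
# move every component of the corrupt generation out of the data directory
echo "mv /var/lib/cassandra/data/$KS/$CF/$KS-$CF-jb-1234-* /var/backup/corrupt/"
echo "sudo service cassandra start"
echo "nodetool repair -pr $KS $CF"
```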
On Thu, Nov 2, 2017 at 6:12 PM, Shashi Yachavaram
wrote:
> We are on Cassandra 2.0.17 and have corrupted sstables. Ran offline
> sstablescrub but it fails with OOM. Increased the MAX_HEAP_SIZE to
are facing.
thanks
Sai
On Tue, Apr 11, 2017 at 10:29 AM, Martijn Pieters wrote:
> From: sai krishnam raju potturi
> > I got a similar error, and commenting out the below line helped.
> > JVM_OPTS="$JVM_OPTS -Djava.net.preferIPv4Stack=true"
>
I got a similar error, and commenting out the below line helped.
JVM_OPTS="$JVM_OPTS -Djava.net.preferIPv4Stack=true"
Did you also include "rpc_interface_prefer_ipv6: true" in the YAML file?
thanks
Sai
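A sketch of the two settings discussed in this thread. cassandra-env.sh is itself a shell script, so the JVM_OPTS line is verbatim; verify against your own env.sh before editing:

```shell
# cassandra-env.sh: comment out the IPv4 preference flag, or negate it.
JVM_OPTS=""
JVM_OPTS="$JVM_OPTS -Djava.net.preferIPv4Stack=false"
echo "$JVM_OPTS"
# and in cassandra.yaml:
#   rpc_interface_prefer_ipv6: true
```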
On Tue, Apr 11, 2017 at 6:37 AM, Martijn Pieters wrote:
> I’m having issues getting a
show as DOWN in Cassandra versions
> 2.1.12 - 2.1.16
>
> we've seen this issue on a few clusters, including on 2.1.7 and 2.1.8.
> Pretty sure it is a known issue in gossip; in later versions
> it seems to be fixed.
>
> On 24 Jan 2017 06:09, "sai k
In Cassandra versions 2.1.11 - 2.1.16, after we decommission a node or
datacenter, we observe the decommissioned nodes marked as DOWN in the
cluster when we run "nodetool describecluster". The nodes, however, do not
show up in the "nodetool status" command.
The decommissioned node also does not
> existing only in the truststore. If so, please share your experience and
> lessons learned. Would this impact client-to-node encryption as the
> certificates used in internode would not have the hostnames represented in
> CN?
>
> -- Jacob Shadix
>
> On Wed, Sep 21,
we faced a similar issue earlier, but that was more related to firewall
rules. The newly added datacenter was not able to communicate with the
existing datacenters on port 7000 (inter-node communication). Yours
might be a different issue, but just saying.
On Thu, Oct 20, 2016 at 4:12 PM, Jai
restarting the cassandra service helped get rid of those files in our
situation.
thanks
Sai
On Wed, Sep 28, 2016 at 3:15 PM, Anuj Wadehra
wrote:
> Hi,
>
> We are facing an issue where Cassandra has open file handles for deleted
> sstable files. These open file handles keep on increasing with ti
Sep 21, 2016 at 10:30 AM, Eric Evans
wrote:
> On Tue, Sep 20, 2016 at 12:57 PM, sai krishnam raju potturi
> wrote:
> > Due to the security policies in our company, we were asked to use 3rd
> party
> > signed certs. Since we'll require to manage 100's of individual c
thanks Robert; we followed the instructions mentioned in
http://thelastpickle.com/blog/2015/09/30/hardening-cassandra
-step-by-step-part-1-server-to-server.html. It worked great.
Due to the security policies in our company, we were asked to
use 3rd party signed certs. Since we'll requ
hi;
has anybody enabled SSL using a generic keystore for node-to-node
encryption? We're using 3rd party signed certificates, and want to avoid
the hassle of managing hundreds of certificates.
thanks
Sai
hi Laxmi;
what's the size of the data per node? If the data is really huge, then let
the decommission process continue. Otherwise, stop the cassandra process on
the decommissioning node and, from another node in the datacenter, run a
"nodetool removenode host-id". This might speed up the decommissioning
pr
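A dry run of that removenode path; the host ID below is a made-up example, so copy the real one from "nodetool status" on a live node:

```shell
# Dry-run sketch; the host ID is a placeholder, not a real node.
HOST_ID="11111111-2222-3333-4444-555555555555"
echo "nodetool status                # note the Host ID of the dead node"
echo "nodetool removenode $HOST_ID"
echo "nodetool removenode status     # monitor streaming progress"
```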
01PM +01:00 de sai krishnam raju potturi
> pskraj...@gmail.com:
>
>
> hi;
> will enabling SSL (node-to-node) cause an overhead in the performance of
> Cassandra? We have tried it out on a small test cluster while running
> Cassandra-stress tool, and did not see much difference
r cluster gets around 40-50K reads and writes per second.
>
> On 13 September 2016 at 12:01, sai krishnam raju potturi <
> pskraj...@gmail.com> wrote:
>
>> hi;
>> will enabling SSL (node-to-node) cause an overhead in the performance
>> of Cassandra? We have
hi;
will enabling SSL (node-to-node) cause an overhead in the performance of
Cassandra? We have tried it out on a small test cluster while running
Cassandra-stress tool, and did not see much difference in terms of read and
write latencies.
Could somebody throw some light regarding any impact
Make sure there is no spike in the load-avg on the existing nodes, as that
might affect your application read request latencies.
On Sun, Sep 11, 2016, 17:10 Jens Rantil wrote:
> Hi Bhuvan,
>
> I have done such expansion multiple times and can really recommend
> bootstrapping a new DC and pointin
31
> (But if I were you I would not rely on this. It's always better to be
> explicit.)
>
> Best,
>
> Romain
>
> On Wednesday, 10 August 2016 at 17:50, sai krishnam raju potturi <
> pskraj...@gmail.com> wrote:
>
>
> hi;
>if there are any missed at
hi;
if there are any missed attributes in the YAML file, will Cassandra pick
up default values for those attributes?
thanks
Good luck,
>
> C*heers,
> ---
> Alain Rodriguez - al...@thelastpickle.com
> France
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> 2016-07-28 0:00 GMT+02:00 sai krishnam raju potturi :
>
>> The read queries are co
on is like 220 kb in size.
>
>
>
> thanks
>
>
>
>
>
> On Wed, Jul 27, 2016 at 5:41 PM, sai krishnam raju potturi <
> pskraj...@gmail.com> wrote:
>
> it's set to 1800 Vinay.
>
>
>
> bloom_filter_fp_chance=0.01 AND
and the sstable in question is only about 220 KB in size.
thanks
On Wed, Jul 27, 2016 at 5:41 PM, sai krishnam raju potturi <
pskraj...@gmail.com> wrote:
> it's set to 1800 Vinay.
>
> bloom_filter_fp_chance=0.01 AND
> caching='KEYS_
'tombstone_compaction_interval': '1800', 'class':
'SizeTieredCompactionStrategy'} AND
compression={'sstable_compression': 'LZ4Compressor'};
thanks
On Wed, Jul 27, 2016 at 5:34 PM, Vinay Kumar Chella wrote:
> What is your GC_grace_seconds set t
compaction
>>
>> /usr/bin/java -jar cmdline-jmxclient-0.10.3.jar - localhost:${port} \
>>   org.apache.cassandra.db:type=CompactionManager \
>>   forceUserDefinedCompaction="'${KEYSPACE}','${SSTABLEFILENAME}'"
>>
hi;
we have a columnfamily that has around 1000 rows, one of which is really
huge (a million columns). 95% of that row is tombstones. Since there
is just one SSTable, no compaction is going to kick in. Is there any
way we can get rid of the tombstones in that row?
Userdefined compactio
SANDRA-10559
>
> On Thu, Jul 21, 2016 at 3:02 AM, sai krishnam raju potturi <
> pskraj...@gmail.com> wrote:
>
>> hi ;
>> if possible could someone shed some light on this. I followed a
>> post from the lastpickle which was very informative, but we had some
>
ate to the latest 2.1 first, you can make this a non-issue as
> 2.1.12 and above support simultaneous SSL and plain on the same port for
> exactly this use case:
> https://issues.apache.org/jira/browse/CASSANDRA-10559
>
> On Thu, Jul 21, 2016 at 3:02 AM, sai krishnam raju potturi <
s each.
thanks
Sai
-- Forwarded message --
From: sai krishnam raju potturi
Date: Mon, Jul 18, 2016 at 11:06 AM
Subject: Re : Recommended procedure for enabling SSL on a live production
cluster
To: user@cassandra.apache.org
Hi;
We have a Cassandra cluster ( version 2.0.14
Hi;
We have a Cassandra cluster (version 2.0.14) spanning across 4
datacenters with 50 nodes each. We are planning to enable SSL between the
datacenters. We are following the standard procedure for enabling SSL (
http://thelastpickle.com/blog/2015/09/30/hardening-cassandra-step-by-step-part-1-s
hi Adarsh;
were there any drawbacks to setting the bloom_filter_fp_chance to the
default value?
thanks
Sai
On Wed, May 18, 2016 at 2:21 AM, Adarsh Kumar wrote:
> Hi,
>
> What is the impact of setting bloom_filter_fp_chance < 0.01.
>
> During performance tuning I was trying to tune bloom_fi
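On the cost side, a back-of-the-envelope check using the standard bloom filter sizing formula (bits per key = -ln(p) / (ln 2)^2) shows how quickly memory grows as fp_chance drops below the 0.01 default:

```shell
# Bloom filter memory cost per key for a few fp_chance values,
# using the standard formula bits/key = -ln(p)/(ln 2)^2.
for p in 0.1 0.01 0.001 0.0001; do
  awk -v p="$p" 'BEGIN { printf "fp_chance=%-7s => %.1f bits per key\n", p, -log(p)/(log(2)^2) }'
done
```

So going from 0.01 to 0.0001 roughly doubles the off-heap bloom filter footprint for the same data.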
al...@thelastpickle.com
> France
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> 2016-04-29 16:39 GMT+02:00 sai krishnam raju potturi
> :
>
>> hi;
>> we are upgrading our cluster from apache-cassandra 2.0.14 to 2.0.17.
hi;
we are upgrading our cluster from apache-cassandra 2.0.14 to 2.0.17. We
have been facing a SYN flooding issue (port 9042) in our current version of
Cassandra at times. We are hoping to tackle the SYN flooding issues with
the following attributes in the YAML file for 2.0.17
native_transport_max
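One way to confirm the flooding is to watch for half-open (SYN_RECV) connections piling up on the native transport port; shown as a dry run, since ss/netstat availability varies by host:

```shell
# Dry run: count half-open connections on the native transport port.
PORT=9042
echo "ss -tn state syn-recv '( sport = :$PORT )' | wc -l"
```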
hi;
do we see any hung processes, like repairs, on those 3 nodes? What does
"nodetool netstats" show?
thanks
Sai
On Tue, Apr 19, 2016 at 8:24 AM, Erik Forsberg wrote:
> Hi!
>
> I have this problem where 3 of my 84 nodes misbehave with too long GC
> times, leading to them being marked as DN.
>
already which seems to be your case, it should be
> safe to use it in your case). I only used it to fix gossip status in the
> past or at some point when forcing a removenode was not working, followed
> by full repairs on remaining nodes.
>
> C*heers,
> -
> Alain
>https://issues.apache.org/jira/browse/CASSANDRA-10371. In this case
>upgrade to 2.1 or you can try the work arounds listed in the ticket.
>
> Ben
>
> On Tue, 16 Feb 2016 at 11:09 sai krishnam raju potturi <
> pskraj...@gmail.com> wrote:
>
>> hi;
>>
up in the "nodetool describecluster" and
"nodetool gossipinfo" in 2.0.14 version that we use in another cluster.
thanks
Sai
On Tue, Feb 16, 2016 at 2:08 PM, sai krishnam raju potturi <
pskraj...@gmail.com> wrote:
> hi;
> we have a 12 node cluster across 2 da
hi;
we have a 12 node cluster across 2 datacenters. We are currently using
cassandra 2.1.12 version.
SNITCH : GossipingPropertyFileSnitch
When we decommissioned a few nodes in a particular datacenter, we observed
the following:
nodetool status shows only the live nodes in the cluster.
nodeto
Suggestion: try the following command "lsof | grep DEL". If in the
output you see a lot of SSTable files, restart the node. The disk space
will be claimed back.
thanks
Sai
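For anyone curious what that "DEL" state means, here is a minimal Linux-only reproduction of the pattern (a file deleted on disk while a process still holds a handle to it, which is why the space is not freed):

```shell
# Reproduce a deleted-but-open file handle (Linux-only: relies on /proc).
tmp=$(mktemp)
exec 3>"$tmp"              # hold an open descriptor on the file
rm -f "$tmp"               # unlink it; the inode (and space) stays pinned
readlink "/proc/$$/fd/3"   # the link target now ends in " (deleted)"
exec 3>&-                  # closing the descriptor releases the space
```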
On Wed, Feb 10, 2016 at 9:59 AM, Ted Yu wrote:
> Hi,
> I am using DSE 4.8.4
> On one node, disk space is low where:
thanks a lot Robert. Greatly appreciate it.
thanks
Sai
On Tue, Feb 2, 2016 at 6:19 PM, Robert Coli wrote:
> On Tue, Feb 2, 2016 at 1:23 PM, sai krishnam raju potturi <
> pskraj...@gmail.com> wrote:
>
>> What is the possibility of using GossipingPropertyFileSnitch on
hi;
we have a multi-DC cluster spanning across our own private cloud and AWS.
We are currently using Propertyfile snitch across our cluster.
What is the possibility of using GossipingPropertyFileSnitch on datacenters
in our private cloud, and Ec2MultiRegionSnitch in AWS?
Thanks in advance for th
Could it have been that you expanded your cluster a while back, but did not
run cleanup then?
On Thu, Nov 26, 2015, 07:51 Luigi Tagliamonte wrote:
> I did it 2 times and in both times it freed a lot of space, don't think
> that it's just a coincidence.
> On Nov 26, 2015 10:56 AM, "Carlos Alonso" wr
Is that a seed node?
On Mon, Nov 16, 2015, 05:21 Anishek Agarwal wrote:
> Hello,
>
> We are having a 3 node cluster and one of the nodes went down due to what
> looks like a hardware memory failure. We followed the steps below after the
> node was down for more than the default value of *max_hint_wind
1, 2015, at 3:12 PM, sai krishnam raju potturi <
> pskraj...@gmail.com> wrote:
>
> yes Surbhi.
>
> On Sat, Oct 31, 2015 at 1:13 PM, Surbhi Gupta
> wrote:
>
>> Is the cluster using vnodes?
>>
>> Sent from my iPhone
>>
>> On Oct 31, 2015, at 9:1
yes Surbhi.
On Sat, Oct 31, 2015 at 1:13 PM, Surbhi Gupta
wrote:
> Is the cluster using vnodes?
>
> Sent from my iPhone
>
> On Oct 31, 2015, at 9:16 AM, sai krishnam raju potturi <
> pskraj...@gmail.com> wrote:
>
> yes Surbhi.
>
> On Sat, Oct 31, 2015 at 12:1
yes Surbhi.
On Sat, Oct 31, 2015 at 12:10 PM, Surbhi Gupta
wrote:
> So have you already done unsafe assassination ?
>
> On 31 October 2015 at 08:37, sai krishnam raju potturi <
> pskraj...@gmail.com> wrote:
>
>> it's dead; and we had to do unsafeassassinate as
it's dead, and we had to do unsafeassassinate as the other 2 methods did not
work
On Sat, Oct 31, 2015 at 11:30 AM, Surbhi Gupta
wrote:
> Whether the node is down or up which you want to decommission?
>
> Sent from my iPhone
>
> On Oct 31, 2015, at 8:24 AM, sai krishnam raj
tion factor .
> It is like forcing a node out from the cluster .
>
> Hope this helps.
>
> Sent from my iPhone
>
> > On Oct 31, 2015, at 5:12 AM, sai krishnam raju potturi <
> pskraj...@gmail.com> wrote:
> >
> > hi;
> > would unsafeassassinating a
hi;
we were trying to add a new node to the cluster. It fails during the
bootstrap process, unable to gossip with the seed nodes. We have not faced
this earlier.
2015-10-31 12:30:15,779 [HANDSHAKE-/X.X.X.X] INFO OutboundTcpConnection
Handshaking version with /X.X.X.X
2015-10-31 12:30:15,810 [HAN
hi;
would unsafeassassinating a dead node maintain the replication factor
like the decommission or removenode processes do?
thanks
hi;
we are working on a data backup and restore procedure to a new cluster.
We are following the datastax documentation. It mentions a step
"Restore the SSTable files snapshotted from the old cluster onto the new
cluster using the same directories"
http://docs.datastax.com/en/cassandra/2.0/cas
; takes so long for such a small keyspace leads me to believe you're using
> sequential repair ...
>
> -V
>
> On Thu, Oct 15, 2015 at 7:46 PM, Robert Coli wrote:
>
>> On Thu, Oct 15, 2015 at 10:24 AM, sai krishnam raju potturi <
>> pskraj...@gmail.com> wrote:
>
hi;
we are deploying a new cluster with 2 datacenters, 48 nodes in each DC.
For the system_auth keyspace, what should be the ideal replication_factor
set?
We tried setting the replication factor equal to the number of nodes in a
datacenter, and the repair for the system_auth keyspace took really
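The usual advice is a small fixed RF per DC for system_auth (commonly 3-5) rather than RF == node count; sketched as a dry run, with made-up DC names:

```shell
# Dry run: modest per-DC replication for system_auth. DC names are assumptions.
echo "ALTER KEYSPACE system_auth WITH replication ="
echo "  {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};"
echo "nodetool repair system_auth   # on each node, after changing the RF"
```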
running a proper snitch you could probably do an entire rack /
> AZ at a time.
>
>
> On Thu, Oct 8, 2015 at 3:08 PM sai krishnam raju potturi <
> pskraj...@gmail.com> wrote:
>
>> We plan to do it during non-peak hours when customer traffic is less.
>> That sum
traffic away from a DC just to run cleanup feels
> like overkill to me.
>
>
>
> On Thu, Oct 8, 2015 at 2:39 PM sai krishnam raju potturi <
> pskraj...@gmail.com> wrote:
>
>> hi;
>>our cassandra cluster currently uses DSE 4.6. The underlying cassandra
>
hi;
our cassandra cluster currently uses DSE 4.6. The underlying cassandra
version is 2.0.14.
We are planning on adding multiple nodes to one of our datacenters. This
requires "nodetool cleanup". The "nodetool cleanup" operation takes around
45 mins for each node.
Datastax documentation recomm
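One common compromise is to serialize cleanup, one node at a time, so the extra disk/CPU load never hits more than one replica at once; a dry-run sketch with made-up hostnames:

```shell
# Dry run: serialized cleanup across the DC. Hostnames are placeholders.
for host in cass-01 cass-02 cass-03; do
  echo "ssh $host nodetool cleanup"
done
```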
could you also provide the columnfamily schema?
On Thu, Oct 8, 2015 at 4:13 PM, Peddi, Praveen wrote:
> Hi,
>
> I am trying to understand this error message that CQL is throwing when I
> try to update 2 different rows with different values on same conditional
> columns. Doesn't CQL support that?
the below solution should work.
For each node in the cluster :
a : Stop cassandra service on the node.
b : manually delete data under $data_directory/system/peers/ directory.
c : In cassandra-env.sh file, add the line JVM_OPTS="$JVM_OPTS
-Dcassandra.load_ring_state=false".
d : Restart service
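The steps a-d above as a dry run (the data directory path is an assumption, the Debian/RHEL package default):

```shell
# Dry run of steps a-d; adjust the data directory to your install.
echo "sudo service cassandra stop"
echo "rm -rf /var/lib/cassandra/data/system/peers/*"
echo '# append to cassandra-env.sh: JVM_OPTS="$JVM_OPTS -Dcassandra.load_ring_state=false"'
echo "sudo service cassandra start"
```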
We have 2 clusters running DSE. On one of the clusters we recently added
additional nodes to a datacenter.
On the cluster where we added nodes, we are getting authentication issues
from clients. We are also unable to run "list users" on the system_auth
keyspace; it gets stuck.
InvalidRequestException
ing a key value pair, and 40ms latency may have been a concern.
Bottom line: the latency depends on how wide the row is.
On Tue, Sep 22, 2015 at 1:27 PM, sai krishnam raju potturi <
pskraj...@gmail.com> wrote:
> thanks for the information. Posting the query too would be of help.
>
>
thanks for the information. Posting the query too would be of help.
On Tue, Sep 22, 2015 at 11:56 AM, Jaydeep Chovatia <
chovatia.jayd...@gmail.com> wrote:
> Please find required details here:
>
> - Number of req/s
>
> 2k reads/s
>
> - Schema details
>
> create table test {
>
>
Once the new node is bootstrapped, you could remove the replace_address
flag from the env.sh file
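For reference, this is how the flag sits in cassandra-env.sh during the replacement (the IP is a placeholder; delete the line again once bootstrap completes):

```shell
# Sketch: replacement flag in cassandra-env.sh; the IP is an assumption.
JVM_OPTS=""
JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.0.27"
echo "$JVM_OPTS"
```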
On Tue, Sep 8, 2015, 13:27 Maciek Sakrejda wrote:
> According to the docs [1], when replacing a Cassandra node, I should start
> the replacement with cassandra.replace_address specified. Does that just
>
rgeting your specific environment.
>
> On Fri, Aug 28, 2015 at 1:12 PM, sai krishnam raju potturi <
> pskraj...@gmail.com> wrote:
>
>>
>> hi;
>> We have cassandra cluster with Vnodes spanning across 3 data
>> centers. We take backup of the snaps
We are using DSE on our clusters.
DSE version : 4.6.7
Cassandra version : 2.0.14
thanks
Sai Potturi
On Fri, Aug 28, 2015 at 3:40 PM, Robert Coli wrote:
> On Fri, Aug 28, 2015 at 11:32 AM, sai krishnam raju potturi <
> pskraj...@gmail.com> wrote:
>
>> we deco
hi;
we decommissioned nodes in a datacenter a while back. Those nodes keep
showing up in the logs, and are also sometimes marked as UNREACHABLE when
`nodetool describecluster` is run.
However these nodes do not show up in `nodetool status` and
`nodetool ring`.
Below are a couple lines fro
hi;
We have cassandra cluster with Vnodes spanning across 3 data centers.
We take backup of the snapshots from one datacenter.
In a doomsday scenario, we want to restore a downed datacenter, with
snapshots from another datacenter. We have same number of nodes in each
datacenter.
1 : We kno