I just wanted to verify the procedures to add and remove nodes in my
environment; please feel free to comment or advise.
I have a 3-node cluster (N1, N2, N3) with vnodes configured (num_tokens: 256)
on each node. All are in one data center.
1. Procedure to change node hardware or replace with a new node:
copy the snapshotted files to node 1's data directory for the column family
3. Perform nodetool refresh on node 1
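For reference, the copy-and-refresh step in a procedure like that usually looks something like this ("ks" and "cf" are placeholder keyspace/column family names; paths vary by install):

```shell
# Copy the snapshotted SSTables into the target column family's data directory
# ("ks" and "cf" are placeholder names; adjust paths for your install)
cp /backups/snapshots/ks/cf/* /var/lib/cassandra/data/ks/cf/
# Load the newly placed SSTables without restarting the node
nodetool refresh ks cf
```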
Any suggestions/advice?
ng
I am not worried about eventually consistent data. I just want rough data
that is a close approximation.
ng
On Wed, Jun 4, 2014 at 2:49 PM, Robert Coli rc...@eventbrite.com wrote:
On Wed, Jun 4, 2014 at 1:26 PM, ng pipeli...@gmail.com wrote:
Is there any reason you would like to take a snapshot?
I need to make sure that all the data is in SSTables before taking the snapshot.
I am thinking of
nodetool cleanup
nodetool repair
nodetool flush
nodetool snapshot
Am I missing anything else?
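Scripted per node, that sequence might look like the following ("ks" is a placeholder keyspace name; the snapshot tag is illustrative):

```shell
# Run on each node in turn; "ks" is a placeholder keyspace name
nodetool cleanup ks                 # drop data this node no longer owns
nodetool repair ks                  # make replicas consistent
nodetool flush ks                   # write memtables to SSTables so the snapshot sees them
nodetool snapshot -t pre_backup ks  # hard-link SSTables under snapshots/pre_backup
```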
Thanks in advance for the responses/suggestions.
ng
I want to discuss the question asked by Rene last year again.
http://www.mail-archive.com/user%40cassandra.apache.org/msg28465.html
Is the following a good backup solution?
Create two data-centers:
- A live data-center with multiple nodes (commodity hardware) (6 nodes with
a replication factor of
If I have a configuration of two data centers with one node each, and the
replication factor is also 1, will these two nodes be mirrored/replicated?
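For what it's worth, whether the two data centers mirror each other depends on the keyspace's replication settings, not just on having one node per DC. A sketch with NetworkTopologyStrategy ("ks" is a placeholder keyspace; the DC names must match your snitch configuration):

```sql
-- "ks" is a placeholder keyspace; 'DC1'/'DC2' must match the snitch's DC names.
-- RF 1 in each DC means every row exists once per data center,
-- so the two nodes would end up holding mirrored copies of the data.
CREATE KEYSPACE ks
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'DC1': 1,
    'DC2': 1
  };
```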
sstable2json tomcat-t5-ic-1-Data.db -e
gives me
0021
001f
0020
How do I convert this (hex) to the actual column value, so I can run the
query below?
select * from tomcat.t5 where c1='converted value';
Thanks in advance for the help.
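Assuming the column is an integer type, the hex strings from the sstable2json output above can be converted on the command line:

```shell
# Interpret each hex string from the sstable2json output as a big-endian integer
for h in 0021 001f 0020; do
  printf '%d\n' "0x$h"     # 0021 -> 33, 001f -> 31, 0020 -> 32
done
# If the column were text instead, decode the raw bytes:  echo 0021 | xxd -r -p
```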
range_request_timeout_in_ms: 1000
truncate_request_timeout_in_ms: 1000
I still have no luck. Any advice on how to achieve this? I am NOT limited to
the COPY command. What is the best way to achieve this? Thanks in advance for
the help.
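Since the thread is about exporting column family data and COPY isn't mandatory, two common approaches (table name taken from the sstable2json example earlier in the thread; output paths are illustrative):

```shell
# Option 1: cqlsh COPY (fine for small-to-medium tables)
cqlsh -e "COPY tomcat.t5 TO '/tmp/t5.csv' WITH HEADER = true;"

# Option 2: dump an SSTable to JSON offline
sstable2json /var/lib/cassandra/data/tomcat/t5/tomcat-t5-ic-1-Data.db > /tmp/t5.json
```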
ng
*From:* ng [mailto:pipeli...@gmail.com]
*Sent:* Wednesday, April 2, 2014 6:04 PM
*To:* user@cassandra.apache.org
*Subject:* Exporting column family data
is no option for us.
regards,
ondrej cernos
On Fri, Feb 14, 2014 at 8:53 PM, Frank Ng fnt...@gmail.com wrote:
Sorry, I have not had a chance to file a JIRA ticket. We have not
been able to resolve the issue. But since Joel mentioned that
upgrading to
Cassandra 2.0.X solved it for them
We have swap disabled. Can death by paging still happen?
On Thu, Feb 27, 2014 at 11:32 AM, Benedict Elliott Smith
belliottsm...@datastax.com wrote:
That sounds a lot like death by paging.
On 27 February 2014 16:29, Frank Ng fnt...@gmail.com wrote:
I just caught that a node was down
Sorry, I have not had a chance to file a JIRA ticket. We have not been
able to resolve the issue. But since Joel mentioned that upgrading to
Cassandra 2.0.X solved it for them, we may need to upgrade. We are
currently on Java 1.7 and Cassandra 1.2.8
On Thu, Feb 13, 2014 at 12:40 PM, Keith
All,
We've been having intermittent long application pauses (version 1.2.8) and
are not sure if it's a Cassandra bug. During these pauses, there are dropped
messages in the Cassandra log file, along with the node seeing other nodes
as down. We've turned on GC logging, and the following is an example
the safepoint.
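For context, GC/safepoint logging of that kind typically comes from flags like these in cassandra-env.sh (illustrative HotSpot 7 options, not necessarily the poster's exact settings):

```shell
# Illustrative JVM flags for diagnosing long pauses (cassandra-env.sh)
JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"  # total stopped time, incl. safepoints
JVM_OPTS="$JVM_OPTS -XX:+PrintSafepointStatistics"       # which safepoint op (e.g. RevokeBias) paused
```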
On Wed, Jan 29, 2014 at 1:20 PM, Shao-Chuan Wang
shaochuan.w...@bloomreach.com wrote:
We had similar latency spikes when pending compactions couldn't keep up or
repair/streaming took too many cycles.
On Wed, Jan 29, 2014 at 10:07 AM, Frank Ng fnt...@gmail.com wrote:
All
to aggregate into one log
message)
52s is a very extreme pause, and I would be surprised if revoke bias could
cause this. I wonder if the VM is swapping out.
On 29 January 2014 19:02, Frank Ng fnt...@gmail.com wrote:
Thanks for the update. Our logs indicated that there were 0 pending
Hi All,
We are using the Fat Client and notice that there are files written to the
commit log directory on the Fat Client. Does anyone know what these files
are storing? Are these hinted handoff data? The Fat Client has no files in
the data directory, as expected.
thanks
I am having the same issue in 1.0.7 with leveled compaction. It seems that
the repair is flaky. It either completes relatively fast in a TEST
environment (7 minutes) or gets stuck trying to receive a merkle tree from
a peer that is already sending it the merkle tree.
The only solution is to restart
I also noticed that if I use the -pr option, the repair process went down
from 30 hours to 9 hours. Is the -pr option safe to use if I want to run
repair processes in parallel on nodes that are not replication peers?
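A sketch of what running primary-range repairs across the ring looks like (host names are illustrative):

```shell
# -pr repairs only each node's primary range, so running it once per node
# covers the whole ring without repairing any range twice
for host in n1 n2 n3; do        # illustrative host names
  nodetool -h "$host" repair -pr
done
```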
thanks
On Thu, Apr 12, 2012 at 12:06 AM, Frank Ng berryt...@gmail.com wrote:
Thanks for the clarification. I'm running repairs as in case 2 (to avoid
deleted data coming back).
On Thu, Apr 12, 2012 at 10:59 AM, Sylvain Lebresne sylv...@datastax.com wrote:
On Thu, Apr 12, 2012 at 4:06 PM, Frank Ng buzzt...@gmail.com wrote:
I also noticed that if I use the -pr option
spots
when you think you should be balanced and repair never ends (I think there
is a 48 hour timeout).
On Tuesday, April 10, 2012, Frank Ng wrote:
I am not using size-tiered compaction.
On Tue, Apr 10, 2012 at 12:56 PM, Jonathan Rhone rh...@tinyco.com wrote:
Data size, number of nodes, RF
building the merkle hash tree.
Look at nodetool netstats. Is it streaming data? If so, all hash trees
have been calculated.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 12/04/2012, at 2:16 AM, Frank Ng wrote:
Can you expand
Hello,
I am on Cassandra 1.0.7. My repair processes are taking over 30 hours to
complete. Is it normal for the repair process to take this long? I wonder
if it's because I am using the ext3 file system.
thanks
Which part of the repair process is slow:
network streams or validation compactions? Use nodetool netstats or
compactionstats.
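Checking which phase repair is in, per the advice above:

```shell
nodetool netstats          # active streams => merkle trees are done, data is moving
nodetool compactionstats   # "Validation" entries => merkle trees still being built
```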
On 04/10/2012 05:16 PM, Frank Ng wrote:
Hello,
I am on Cassandra 1.0.7. My repair processes are taking over 30 hours to
complete. Is it normal for the repair process
Are you using size-tiered compaction on any of the column families that
hold a lot of your data?
Do your cassandra logs say you are streaming a lot of ranges?
zgrep -E "(Performing streaming repair|out of sync)"
On Tue, Apr 10, 2012 at 9:45 AM, Igor i...@4friends.od.ua wrote:
On 04/10/2012 07:16 PM, Frank Ng wrote:
Short answer - yes.
But you are asking the wrong question.
I think both processes are taking a while. When it starts up