--
-
Alexander Dejanovski
France
@alexanderdeja
Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com
logs
>
> On Tue, Nov 1, 2016 at 10:25 AM, Alexander Dejanovski <
> a...@thelastpickle.com> wrote:
>
> Do you have anything in the reaper logs that would show a failure of some
> sort?
> Also, can you tell me which version of Cassandra you're using?
>
> Thanks
>
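(For reference, a quick way to answer both questions from a node; a sketch,
and the Reaper log path is an assumption since it depends on how Reaper was
launched:)

    # Report the Cassandra version of this node
    nodetool version

    # Scan the Reaper log for failed segments or runs (path is an assumption)
    grep -i "fail" logs/reaper.log | tail -20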
"intensity": 0.900,
> "keyspace_name": "users",
> "last_event": "no events",
> "owner": "root",
> "pause_time": null,
> "repair_parallelism": "DATACENTER_AWARE",
> "seg
r
> cassandra-reaper.yaml
> 3. ./bin/spreaper repair production users
>
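(For anyone following along, the whole spreaper sequence looks roughly like
this; a sketch assuming the stock spreaper subcommands, with the seed host
name as a placeholder:)

    # Register the cluster with Reaper, pointing at any seed node
    ./bin/spreaper add-cluster cassandra-seed-host

    # Trigger a repair of keyspace "users" on cluster "production"
    ./bin/spreaper repair production users

    # Follow the progress of the run
    ./bin/spreaper list-runs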
?v=N3mGxgnUiRY
Slides:
http://www.slideshare.net/DataStax/myths-of-big-partitions-robert-stupp-datastax-cassandra-summit-2016
Cheers,
On Fri, Oct 28, 2016 at 4:09 PM Eric Evans <john.eric.ev...@gmail.com>
wrote:
> On Thu, Oct 27, 2016 at 4:13 PM, Alexander Dejanovski
> <a...@th
8, Vincent Rischmann <m...@vrischmann.me> wrote:
> Yeah, that particular table is badly designed; I intend to fix it when the
> roadmap allows us to do it :)
> What is the recommended maximum partition size?
>
> Thanks for all the information.
>
>
> On Thu, Oct 27, 2016,
> less big partitions are around 500MB and less.
>
>
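(For reference, per-table partition sizes can be checked from nodetool; a
sketch, with keyspace and table names as placeholders:)

    # Partition size percentiles, including the max, for one table
    # (tablehistograms on recent versions, cfhistograms on older ones)
    nodetool tablehistograms my_ks my_table

    # Compaction also warns about oversized partitions in the system log
    grep "Writing large partition" /var/log/cassandra/system.log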
> On Thu, Oct 27, 2016, at 05:37 PM, Alexander Dejanovski wrote:
>
> Oh right, that's what they advise :)
> I'd say that you should skip the full repair phase in the migration
> procedure, as that will obviously fail, and just mark the SSTables as
> repaired instead.
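(For context, marking SSTables as repaired is done offline with the
sstablerepairedset tool shipped in the Cassandra distribution's tools/bin; a
sketch, with the data path as a placeholder. The node must be stopped first:)

    # With the node down, collect the SSTables of the keyspace being migrated
    find /var/lib/cassandra/data/my_ks -name "*-Data.db" > sstables.txt

    # Mark them all as repaired, then restart the node
    tools/bin/sstablerepairedset --really-set --is-repaired -f sstables.txt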
or more. We were never able to run
> one to completion. I'm not sure it's a good idea to disable autocompaction
> for that long.
>
> But maybe I'm wrong. Is it possible to use incremental repairs on some
> column family only?
>
>
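(For reference, repair can be scoped to a single keyspace and table from
nodetool, so incremental repair can indeed be limited to one column family; a
sketch with placeholder names, assuming Cassandra 2.2+ where incremental is
the default mode:)

    # Incremental repair of a single table
    nodetool repair my_ks my_table

    # Force a full repair of the same table instead
    nodetool repair -full my_ks my_table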
> On Thu, Oct 27, 2016, at 05:02 PM, Alexa
ervice, but I'm wondering how other
Cassandra users manage repairs?
Vincent.
On 19 October 2016 at 17:13, Alexander Dejanovski <a...@thelastpickle.com>
> wrote:
>
> There aren't that many tools I know of to orchestrate repairs. We maintain
> a fork of Reaper, which was made by Spotify and handles incremental
> repair: https://github.com/thelastpickle/
, which should soon be merged to master.
On Wed, Oct 19, 2016 at 19:03, Kant Kodali <k...@peernova.com> wrote:
Also, any suggestions on a tool to orchestrate the incremental repair? Like,
say, the most commonly used one?
Sent from my iPhone
> On Oct 19, 2016, at 9:54 AM, Alexander Dejanovski <a...@thelastpickle.com>
> wrote:
>
> Hi Kant,
>
> subrange repair is a form of full repair, so it will just split the repair
> process into smaller yet sequential pieces of work (repair is started
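(A subrange repair is driven by passing explicit token bounds; a minimal
sketch, with the token values as placeholders:)

    # Repair a single token range of one keyspace
    nodetool repair -st -9223372036854775808 -et -3074457345618258603 my_ks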
e a way to do full repairs
> without continually anticompacting? If we do a full repair on each node
> with the -pr flag, will subsequent full repairs also force anticompaction
> on most (all?) sstables?
>
> Thanks,
>
> Sean
>
> On Wed, Oct 19, 2016 at 4:44 PM, Alexander Dejanovski <
> a...@thelastpickle.com> wrote:
ng the below error.
>
>
> A repair run already exist for the same cluster/keyspace/table but with a
> different incremental repair value. Requested value: true | Existing value:
> false
>
>
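(When this error shows up, the conflicting run can be located and stopped
before resubmitting with the new incremental value; a sketch, where the
subcommand names are assumptions to verify against ./bin/spreaper --help and
42 is a placeholder run id:)

    # Find the id of the existing run for this cluster/keyspace
    ./bin/spreaper list-runs

    # Abort it, then resubmit the repair with incremental=true
    ./bin/spreaper abort-repair 42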
There are some suggestions mentioned by *brstgt* which we can try on our
> side.
>
> On Thu, Sep 29, 2016 at 5:42 PM, Atul Saroha <atul.sar...@snapdeal.com>
> wrote:
>
>> Thanks Alexander.
>>
>> Will look into all these.
>>
>> On Thu, Sep 29, 2016 at
eady running a repair
> of the same partition range on another box. We saw a "validation failed"
> error with some IP, as a repair was already running for the same
> SSTable.
> Just a few days back, we had 2 DCs with 3 nodes each, and the replication
> factor was also 3. It means all data on
> partition...
>
> for some materialized view. Some have values over 500MB. How does this
> affect performance? What can/should be done? I suppose it is a problem in
> the schema design.
>
> Thanks,
> Robert Sicoie
>
>
> I just want help with verifying and debugging this issue. Any help will be
> appreciated.
>
>
> --
> Regards,
> Atul Saroha
>
For one there are 31 pending repairs. On others there are fewer
> pending repairs (min 12). Is there any recommendation for the restart order?
> The one with the fewest pending repairs first, perhaps?
>
> Thanks,
> Robert
>
> Robert Sicoie
>
> On Wed, Sep 28, 2016 at 5:35 PM, Al
dingCompactions on JMX.
>
> Is there another way I can find out whether any anticompaction is running
> on any node?
>
> Thanks a lot,
> Robert
>
> Robert Sicoie
>
> On Wed, Sep 28, 2016 at 4:44 PM, Alexander Dejanovski <
> a...@thelastpickle.com> wrote:
>
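(For reference: anticompaction shows up as a task type in the regular
compaction list, so it can be watched with nodetool; a sketch:)

    # Active compactions, including "Anticompaction after repair" tasks
    nodetool compactionstats

    # Refresh every 5 seconds until the queue drains
    watch -n 5 nodetool compactionstats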
java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:339)
> ~[na:1.8.0_60]
> at
> org.apache.cassandra.net.OutboundTcpConnection.enqueue(OutboundTcpConnection.java:168)
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> ... 6 common frames omitted
>
>
> Now if I run nodetool repair I get t
in the foreground before the response is returned to the client.
> So, at least from a single client's perspective, you get monotonic reads.
>
>
> --
> Tyler Hobbs
> DataStax <http://datastax.com/>
>
B, C respond --> conflict
> Because a quorum (2 nodes) responded, the coordinator will return the value
> with the latest timestamp and may issue a read repair depending on YAML
> settings.
>
> So where do you see only one client having this guarantee?
>
> Regards,
>
> James
>
> On
Hi,
the analysis is valid, and strong consistency the Cassandra way means that
one client writing at quorum, then reading at quorum, will always see its
previous write: the write and read quorums must overlap in at least one
replica, since W + R > RF.
Two different clients have no guarantee of seeing the same data when using
quorum, as illustrated in your example.
Only options
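(To make the single-client guarantee concrete, a minimal cqlsh sketch;
keyspace and table names are placeholders. With RF=3, QUORUM is 2 replicas,
and any write quorum intersects any read quorum because 2 + 2 > 3:)

    # Write, then read back, both at QUORUM: the read quorum must include
    # at least one replica that acknowledged the write
    cqlsh -e "CONSISTENCY QUORUM;
              INSERT INTO my_ks.users (id, name) VALUES (42, 'alice');
              SELECT name FROM my_ks.users WHERE id = 42;"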
Hi Paulo,
don't you think it might be better to keep applying the migration procedure
whatever the version?
Anticompaction is pretty expensive on big SSTables, and if the cluster has a
lot of data, the first run might be very long if the nodes are dense,
especially with a high number of
Hi Siddharth,
yes, we are sure token ranges will never overlap (I think the start token
in describering output is excluded and the end token included).
You can get per-host information in the DataStax Java driver using:
Set<TokenRange> rangesForKeyspace = cluster.getMetadata().getTokenRanges(keyspace, host);
Hi Siddharth,
I would recommend running "nodetool describering keyspace_name", as its
output is much simpler to reason about:
Schema Version:9a091b4e-3712-3149-b187-d2b09250a19b
TokenRange:
TokenRange(start_token:1943978523300203561, end_token:2137919499801737315,
endpoints:[127.0.0.3,
Reads at quorum in dc3 will involve dc1 and dc2, as they will require a
response from more than half the replicas throughout the cluster.
If you're using RF=3 in each DC, each read will need at least 5 responses
(three DCs at RF=3 give 9 replicas, and quorum is floor(9/2) + 1 = 5),
which DC3 cannot provide on its own.
You can run into trouble if DC3 has more than half
After running some tests I can confirm that using -pr leaves unrepaired
SSTables, while removing it shows repaired SSTables only once repair is
completed.
The purpose of -pr was to lighten the repair process by not repairing
ranges RF times, but just once. With incremental repair though, repaired
There are 2 main reasons I see for still having unrepaired sstables after
running nodetool repair -pr (see the sketch below to check which case
applies):
1- new data is still flowing into your database after the repair sessions
were launched, and thus hasn't been repaired
2- some repair sessions failed and left unrepaired data on your nodes.
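(To tell these two cases apart, the repaired state of each SSTable can be
inspected offline; a sketch, with the data path as a placeholder:)

    # "Repaired at: 0" means the SSTable is still unrepaired
    for f in /var/lib/cassandra/data/my_ks/my_table-*/*-Data.db; do
      echo "$f"
      tools/bin/sstablemetadata "$f" | grep "Repaired at"
    done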