Re: question on the code formatter

2017-09-14 Thread Murukesh Mohanan
The wiki seems to be outdated. See
https://github.com/apache/cassandra/blob/trunk/doc/source/development/ide.rst
:


> The project generated by the ant task ``generate-idea-files`` contains
> nearly everything you need to debug Cassandra and execute unit tests.
>
> * Run/debug defaults for JUnit
> * Run/debug configuration for Cassandra daemon
> * License header for Java source files
> * Cassandra code style
> * Inspections

You can just run the `generate-idea-files` ant task and then open the project in
IDEA. The code style settings should be picked up automatically.
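
For reference, the whole thing is roughly the following (assuming a trunk
checkout and ant on your PATH; the IDEA menu path is from memory):

    # from the root of the Cassandra checkout
    ant generate-idea-files
    # then File > Open... in IDEA and select the checkout directory;
    # the bundled code style and inspections are applied automatically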

On Fri, 15 Sep 2017 at 14:46 Tyagi, Preetika  wrote:

> Hi all,
>
> I was trying to configure the Cassandra code formatter and downloaded
> IntelliJ-codestyle.jar from this link:
> https://wiki.apache.org/cassandra/CodeStyle
>
> After extracting this JAR, I was able to import codestyle/Default_1_.xml
> into my project and formatting seemed to work.
>
> However, I'm wondering what the options/code.style.schemes.xml file is
> actually used for. Could anyone please give me an idea of whether I need to
> configure this as well?
>
> Thanks,
> Preetika
>
>
> --

Murukesh Mohanan,
Yahoo! Japan


question on the code formatter

2017-09-14 Thread Tyagi, Preetika
Hi all,

I was trying to configure the Cassandra code formatter and downloaded 
IntelliJ-codestyle.jar from this link: 
https://wiki.apache.org/cassandra/CodeStyle

After extracting this JAR, I was able to import codestyle/Default_1_.xml into 
my project and formatting seemed to work.

However, I'm wondering what the options/code.style.schemes.xml file is actually used 
for. Could anyone please give me an idea of whether I need to configure this as well?

Thanks,
Preetika




Re: Proposal: Closing old, unable-to-repro JIRAs

2017-09-14 Thread Jason Brown
Jeff, fantastic idea. +1

John, I'd prefer to get the non-PA (non-Patch-Available) ones out of the way
first. With PA, someone at least tried to improve the software.

I'm not at a computer now, but it would be instructive to have a breakdown
of those types of tickets by age. I can do that later.

Jason
On Thu, Sep 14, 2017 at 16:56 Eduard Tudenhoefner <eduard.tudenhoef...@datastax.com> wrote:

> +1, if it turns out that something is still valid/important, then people
> can always re-open.
>
> On Thu, Sep 14, 2017 at 4:50 PM, Jeff Jirsa  wrote:
>
> > There are a number of JIRAs that are old - sometimes very old - that
> > represent bugs that either don't exist in modern versions or don't have
> > sufficient information for us to repro, and whose reporters have gone away.
> >
> > Would anyone be offended if I start tagging these with the label
> > 'UnableToRepro' or 'Unresponsive' and start a 30-day timer to close them?
> > Anyone have a better suggestion?
> >
>
>
>
> --
>
> Eduard Tudenhoefner
> Software Engineer | +49 151 206 111 97 | eduard.tudenhoef...@datastax.com
>


Re: Proposal: Closing old, unable-to-repro JIRAs

2017-09-14 Thread Jonathan Haddad
I think it's a great idea. I'd like to do the same thing with the available
patches but I'm totally ok with doing that separately.
On Thu, Sep 14, 2017 at 4:50 PM Jeff Jirsa  wrote:

> There are a number of JIRAs that are old - sometimes very old - that
> represent bugs that either don't exist in modern versions or don't have
> sufficient information for us to repro, and whose reporters have gone away.
>
> Would anyone be offended if I start tagging these with the label
> 'UnableToRepro' or 'Unresponsive' and start a 30-day timer to close them?
> Anyone have a better suggestion?
>


Re: Proposal: Closing old, unable-to-repro JIRAs

2017-09-14 Thread Eduard Tudenhoefner
+1, if it turns out that something is still valid/important, then people
can always re-open.

On Thu, Sep 14, 2017 at 4:50 PM, Jeff Jirsa  wrote:

> There are a number of JIRAs that are old - sometimes very old - that
> represent bugs that either don't exist in modern versions or don't have
> sufficient information for us to repro, and whose reporters have gone away.
>
> Would anyone be offended if I start tagging these with the label
> 'UnableToRepro' or 'Unresponsive' and start a 30-day timer to close them?
> Anyone have a better suggestion?
>



-- 

Eduard Tudenhoefner
Software Engineer | +49 151 206 111 97 | eduard.tudenhoef...@datastax.com


Proposal: Closing old, unable-to-repro JIRAs

2017-09-14 Thread Jeff Jirsa
There are a number of JIRAs that are old - sometimes very old - that
represent bugs that either don't exist in modern versions or don't have
sufficient information for us to repro, and whose reporters have gone away.

Would anyone be offended if I start tagging these with the label
'UnableToRepro' or 'Unresponsive' and start a 30-day timer to close them?
Anyone have a better suggestion?
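
For concreteness, I'm thinking of JQL along these lines (the one-year cutoff
is only an example; the labels are the ones proposed above):

    project = CASSANDRA AND resolution = Unresolved AND updated <= -52w ORDER BY updated ASC

and, once tagged, something like this to find tickets whose 30-day timer has
run out:

    project = CASSANDRA AND labels in (UnableToRepro, Unresponsive) AND updated <= -30d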


Re: repair hangs, validation failed

2017-09-14 Thread Micha
OK, thanks, so I don't use the -pr option anymore (I used it on the
first node). It seems to take some time: it has been running for 150 minutes
and has completed 7% so far ...
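
For what it's worth, I'm just watching progress with the standard nodetool
views:

    nodetool compactionstats   # running validations/anticompactions and their progress
    nodetool netstats          # repair streaming between nodes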


Cheers,
 Michael



On 14.09.2017 15:40, Alexander Dejanovski wrote:
> There should be no migration needed, but if you have a lot of data,
> anticompaction could take a while the first time. The only way to make that
> fast would be to mark all sstables as repaired, and then start running
> incremental repair every day or so to have small datasets to anticompact.
> But all that data wouldn't really be repaired and no subsequent repair
> would actually repair it, unless you run a full repair.
> 
> Do note that full repair performs anticompaction too, and only subrange
> repair will skip that phase.
> 
> You should never use "-pr" with incremental repair as it won't mark all
> sstables as repaired. It's unnecessary anyway since incremental repair will
> skip already repaired tokens, which defeats the benefits of using "-pr".
> Just run "nodetool repair" on one node, wait for it to finish (check the
> logs for repair completion if nodetool loses the connection at some point).
> Only when it is fully finished on that node, move on to the next one.
> 




Re: repair hangs, validation failed

2017-09-14 Thread Alexander Dejanovski
There should be no migration needed, but if you have a lot of data,
anticompaction could take a while the first time. The only way to make that
fast would be to mark all sstables as repaired, and then start running
incremental repair every day or so to have small datasets to anticompact.
But all that data wouldn't really be repaired and no subsequent repair
would actually repair it, unless you run a full repair.
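
If you do go that route, the marking is done offline with sstablerepairedset,
something along these lines (node stopped first; the data path below is just
an example):

    # with the node stopped, mark every sstable of the table as repaired
    sstablerepairedset --really-set --is-repaired /var/lib/cassandra/data/my_ks/my_table-*/*-Data.db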

Do note that full repair performs anticompaction too, and only subrange
repair will skip that phase.

You should never use "-pr" with incremental repair as it won't mark all
sstables as repaired. It's unnecessary anyway since incremental repair will
skip already repaired tokens, which defeats the benefits of using "-pr".
Just run "nodetool repair" on one node, wait for it to finish (check the
logs for repair completion if nodetool loses the connection at some point).
Only when it is fully finished on that node, move on to the next one.
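
In practice, that's just something like this on each node in turn (the log
path is the default one, adjust to yours):

    nodetool repair                                  # incremental by default on 3.11, this node only
    grep -i repair /var/log/cassandra/system.log     # watch for the repair sessions and anticompactions to finish
    # only once they are done, move on to the next node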

Cheers,

On Thu, Sep 14, 2017 at 11:52 AM Micha  wrote:

>
> OK, I have restarted the cluster to stop all repairs.
> Is there no "migration" process to move to incremental repair in 3.11?
> So can I start "nodetool repair -pr" node after node, or just run "nodetool
> repair" on one node?
>
>  Cheers,
>  Michael
>
>
> On 14.09.2017 10:47, Alexander Dejanovski wrote:
> > Hi Micha,
> >
> > Are you running incremental repair?
> > If so, then validation fails when 2 repair sessions are running at the
> same
> > time, with one anticompacting an SSTable and the other trying to run a
> > validation compaction on it.
> >
> > If you check the logs of the node that is referred to in the "Validation
> > failed in ..." message, you should see error messages stating that an
> > sstable can't be part of 2 different repair sessions.
> >
> > If that happens (and you're indeed running incremental repair), you
> should
> > roll restart the cluster to stop all repairs and then process one node
> at a
> > time only.
> > Reaper does that, but you can handle it manually if you prefer. The plan
> > here is to wait for all anticompactions to be over before starting a
> repair
> > on the next node.
> >
> > In any case, check the logs of the node that failed to run validation
> > compaction in order to understand what failed.
> >
> > Cheers,
> >
> > On Thu, Sep 14, 2017 at 10:18 AM Micha  wrote:
> >
> >> Hi,
> >>
> >> I started a repair (7 nodes, C* 3.11), but I immediately get an exception in
> >> the log:
> >> "RepairException: [# on keyspace/table, [],
> >>  Validation failed in /ip"
> >>
> >> The started nodetool repair hangs (the whole day...), strace shows it's
> >> waiting...
> >>
> >> What's the reason for this exception, and what should I do now? If this is
> >> an error, why doesn't nodetool abort the command and show the error?
> >>
> >> thanks,
> >>  Michael
> >>
> >>
> >>
> >>
> >> --
> > -
> > Alexander Dejanovski
> > France
> > @alexanderdeja
> >
> > Consultant
> > Apache Cassandra Consulting
> > http://www.thelastpickle.com
> >
>
-- 
-
Alexander Dejanovski
France
@alexanderdeja

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com


Re: repair hangs, validation failed

2017-09-14 Thread Micha

OK, I have restarted the cluster to stop all repairs.
Is there no "migration" process to move to incremental repair in 3.11?
So can I start "nodetool repair -pr" node after node, or just run "nodetool
repair" on one node?

 Cheers,
 Michael


On 14.09.2017 10:47, Alexander Dejanovski wrote:
> Hi Micha,
> 
> Are you running incremental repair?
> If so, then validation fails when 2 repair sessions are running at the same
> time, with one anticompacting an SSTable and the other trying to run a
> validation compaction on it.
> 
> If you check the logs of the node that is referred to in the "Validation
> failed in ..." message, you should see error messages stating that an
> sstable can't be part of 2 different repair sessions.
> 
> If that happens (and you're indeed running incremental repair), you should
> roll restart the cluster to stop all repairs and then process one node at a
> time only.
> Reaper does that, but you can handle it manually if you prefer. The plan
> here is to wait for all anticompactions to be over before starting a repair
> on the next node.
> 
> In any case, check the logs of the node that failed to run validation
> compaction in order to understand what failed.
> 
> Cheers,
> 
> On Thu, Sep 14, 2017 at 10:18 AM Micha  wrote:
> 
>> Hi,
>>
>> I started a repair (7 nodes, C* 3.11), but I immediately get an exception in
>> the log:
>> "RepairException: [# on keyspace/table, [],
>>  Validation failed in /ip"
>>
>> The started nodetool repair hangs (the whole day...), strace shows it's
>> waiting...
>>
>> What's the reason for this exception, and what should I do now? If this is
>> an error, why doesn't nodetool abort the command and show the error?
>>
>> thanks,
>>  Michael
>>
>>
>>
>>
>> --
> -
> Alexander Dejanovski
> France
> @alexanderdeja
> 
> Consultant
> Apache Cassandra Consulting
> http://www.thelastpickle.com
> 




Re: repair hangs, validation failed

2017-09-14 Thread Alexander Dejanovski
Hi Micha,

Are you running incremental repair?
If so, then validation fails when 2 repair sessions are running at the same
time, with one anticompacting an SSTable and the other trying to run a
validation compaction on it.

If you check the logs of the node that is referred to in the "Validation
failed in ..." message, you should see error messages stating that an
sstable can't be part of 2 different repair sessions.

If that happens (and you're indeed running incremental repair), you should
roll restart the cluster to stop all repairs and then process one node at a
time only.
Reaper does that, but you can handle it manually if you prefer. The plan
here is to wait for all anticompactions to be over before starting a repair
on the next node.

In any case, check the logs of the node that failed to run validation
compaction in order to understand what failed.
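
Something as simple as this on that node usually surfaces it (default log
location, adjust if yours differs):

    grep -iE "validation|anticompaction|repair" /var/log/cassandra/system.log | tail -n 100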

Cheers,

On Thu, Sep 14, 2017 at 10:18 AM Micha  wrote:

> Hi,
>
> I started a repair (7 nodes, C* 3.11), but I immediately get an exception in
> the log:
> "RepairException: [# on keyspace/table, [],
>  Validation failed in /ip"
>
> The started nodetool repair hangs (the whole day...), strace shows it's
> waiting...
>
> What's the reason for this exception, and what should I do now? If this is
> an error, why doesn't nodetool abort the command and show the error?
>
> thanks,
>  Michael
>
>
>
>
> --
-
Alexander Dejanovski
France
@alexanderdeja

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com


repair hangs, validation failed

2017-09-14 Thread Micha
Hi,

I started a repair (7 nodes, C* 3.11), but I immediately get an exception in
the log:
"RepairException: [# on keyspace/table, [],
 Validation failed in /ip"

The started nodetool repair hangs (the whole day...), strace shows it's
waiting...

What's the reason for this exception, and what should I do now? If this is an
error, why doesn't nodetool abort the command and show the error?

thanks,
 Michael


