Re: [Gluster-devel] Coverity scan - how does it ignore dismissed defects & annotations?

2019-05-03 Thread Atin Mukherjee
On Fri, 3 May 2019 at 16:07, Amar Tumballi Suryanarayan 
wrote:

>
>
> On Fri, May 3, 2019 at 3:17 PM Atin Mukherjee  wrote:
>
>>
>>
>> On Fri, 3 May 2019 at 14:59, Xavi Hernandez  wrote:
>>
>>> Hi Atin,
>>>
>>> On Fri, May 3, 2019 at 10:57 AM Atin Mukherjee 
>>> wrote:
>>>
 I'm a bit puzzled by the way Coverity reports the open defects for the
 GD1 component. As you can see from [1], technically we have 6 open defects
 and all the rest are marked as dismissed. We tried to add some additional
 annotations in the code through [2] to see if Coverity becomes happy, but
 the result doesn't change. The report still complains about 25 open defects
 for GD1 (7 High, 18 Medium and 1 Low). More interestingly, yesterday's
 report claimed we fixed 8 defects and introduced 1, yet the overall count
 remained at 102. I'm not able to connect the dots of this puzzle; can
 anyone?

>>>
>>> Maybe we need to modify all dismissed CIDs so that Coverity considers
>>> them again and, hopefully, marks them as solved with the newer updates. They
>>> have been manually marked to be ignored, so they are still there...
>>>
>>
>> After yesterday’s run I set the severity for all of them to see if
>> modifications to these CIDs make any difference or not. So fingers crossed
>> till the next report comes :-) .
>>
>
> If you look at the previous day's report, it was 101 'Open defects' and 65
> 'Dismissed' (which means they are not 'fixed in code' but were dismissed as
> false positives or ignored in the CID dashboard).
>
> Now it is 57 'Dismissed', which means your patch has actually fixed 8
> defects.
>
>
>>
>>
>>> Just a thought, I'm not sure how this really works.
>>>
>>
>> Same here, I don’t understand the exact workflow either, and hence I’m
>> seeking additional ideas.
>>
>>
> Looks like we should consider overall open defects as Open + Dismissed.
>

This is why I’m concerned. There are defects which we clearly can’t or don’t
want to fix, and in that case, even though they are marked as dismissed, the
overall open defect count doesn’t come down. So we’d never be able to get
below the total number of dismissed defects :-( .

However, today’s report brings the overall count down to 97 from 102.
Coverity claimed we fixed 0 defects since the last scan, which means my
update to those dismissed GD1 defects somehow did the trick for 5 of them.
This continues to be a great puzzle for me!


>
>>
>>> Xavi
>>>
>>>

 [1] https://scan.coverity.com/projects/gluster-glusterfs/view_defects
 [2] https://review.gluster.org/#/c/22619/
 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 https://lists.gluster.org/mailman/listinfo/gluster-devel
>>>
>>> --
>> - Atin (atinm)
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>
> --
> Amar Tumballi (amarts)
>
-- 
- Atin (atinm)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Coverity scan - how does it ignore dismissed defects & annotations?

2019-05-03 Thread Amar Tumballi Suryanarayan
On Fri, May 3, 2019 at 3:17 PM Atin Mukherjee  wrote:

>
>
> On Fri, 3 May 2019 at 14:59, Xavi Hernandez  wrote:
>
>> Hi Atin,
>>
>> On Fri, May 3, 2019 at 10:57 AM Atin Mukherjee 
>> wrote:
>>
>>> I'm a bit puzzled by the way Coverity reports the open defects for the GD1
>>> component. As you can see from [1], technically we have 6 open defects and
>>> all the rest are marked as dismissed. We tried to add some additional
>>> annotations in the code through [2] to see if Coverity becomes happy, but
>>> the result doesn't change. The report still complains about 25 open defects
>>> for GD1 (7 High, 18 Medium and 1 Low). More interestingly, yesterday's
>>> report claimed we fixed 8 defects and introduced 1, yet the overall count
>>> remained at 102. I'm not able to connect the dots of this puzzle; can anyone?
>>>
>>
>> Maybe we need to modify all dismissed CIDs so that Coverity considers
>> them again and, hopefully, marks them as solved with the newer updates. They
>> have been manually marked to be ignored, so they are still there...
>>
>
> After yesterday’s run I set the severity for all of them to see if
> modifications to these CIDs make any difference or not. So fingers crossed
> till the next report comes :-) .
>

If you look at the previous day's report, it was 101 'Open defects' and 65
'Dismissed' (which means they are not 'fixed in code' but were dismissed as
false positives or ignored in the CID dashboard).

Now it is 57 'Dismissed', which means your patch has actually fixed 8
defects.
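
To make the arithmetic behind that explicit, here is a minimal sketch using
only the figures quoted in this thread (reading the dashboard total as
Open + Dismissed, as suggested further down, is an interpretation on our side
rather than documented Coverity behaviour):

    #include <stdio.h>

    int
    main(void)
    {
        /* Figures quoted in this thread. */
        int dismissed_before = 65; /* previous day's report */
        int dismissed_after = 57;  /* today's report */

        /* CIDs that left the 'Dismissed' bucket without reappearing as 'Open'
         * must have been fixed in code by the patch. */
        printf("fixed in code: %d\n", dismissed_before - dismissed_after); /* 8 */
        return 0;
    }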


>
>
>> Just a thought, I'm not sure how this really works.
>>
>
> Same here, I don’t understand the exact workflow either, and hence I’m
> seeking additional ideas.
>
>
Looks like we should consider overall open defects as Open + Dismissed.


>
>> Xavi
>>
>>
>>>
>>> [1] https://scan.coverity.com/projects/gluster-glusterfs/view_defects
>>> [2] https://review.gluster.org/#/c/22619/
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>> --
> - Atin (atinm)
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Coverity scan - how does it ignore dismissed defects & annotations?

2019-05-03 Thread Atin Mukherjee
On Fri, 3 May 2019 at 14:59, Xavi Hernandez  wrote:

> Hi Atin,
>
> On Fri, May 3, 2019 at 10:57 AM Atin Mukherjee 
> wrote:
>
>> I'm a bit puzzled by the way Coverity reports the open defects for the GD1
>> component. As you can see from [1], technically we have 6 open defects and
>> all the rest are marked as dismissed. We tried to add some additional
>> annotations in the code through [2] to see if Coverity becomes happy, but
>> the result doesn't change. The report still complains about 25 open defects
>> for GD1 (7 High, 18 Medium and 1 Low). More interestingly, yesterday's
>> report claimed we fixed 8 defects and introduced 1, yet the overall count
>> remained at 102. I'm not able to connect the dots of this puzzle; can anyone?
>>
>
> Maybe we need to modify all dismissed CIDs so that Coverity considers
> them again and, hopefully, marks them as solved with the newer updates. They
> have been manually marked to be ignored, so they are still there...
>

After yesterday’s run I set the severity for all of them to see if
modifications to these CIDs make any difference or not. So fingers crossed
till the next report comes :-) .


> Just a thought, I'm not sure how this really works.
>

Same here, I don’t understand the exact workflow either, and hence I’m
seeking additional ideas.


> Xavi
>
>
>>
>> [1] https://scan.coverity.com/projects/gluster-glusterfs/view_defects
>> [2] https://review.gluster.org/#/c/22619/
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
> --
- Atin (atinm)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Coverity scan - how does it ignore dismissed defects & annotations?

2019-05-03 Thread Xavi Hernandez
Hi Atin,

On Fri, May 3, 2019 at 10:57 AM Atin Mukherjee  wrote:

> I'm a bit puzzled by the way Coverity reports the open defects for the GD1
> component. As you can see from [1], technically we have 6 open defects and
> all the rest are marked as dismissed. We tried to add some additional
> annotations in the code through [2] to see if Coverity becomes happy, but
> the result doesn't change. The report still complains about 25 open defects
> for GD1 (7 High, 18 Medium and 1 Low). More interestingly, yesterday's
> report claimed we fixed 8 defects and introduced 1, yet the overall count
> remained at 102. I'm not able to connect the dots of this puzzle; can anyone?
>

Maybe we need to modify all dismissed CIDs so that Coverity considers them
again and, hopefully, marks them as solved with the newer updates. They have
been manually marked to be ignored, so they are still there...

Just a thought, I'm not sure how this really works.

Xavi


>
> [1] https://scan.coverity.com/projects/gluster-glusterfs/view_defects
> [2] https://review.gluster.org/#/c/22619/
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Coverity scan - how does it ignore dismissed defects & annotations?

2019-05-03 Thread Atin Mukherjee
I'm a bit puzzled by the way Coverity reports the open defects for the GD1
component. As you can see from [1], technically we have 6 open defects and
all the rest are marked as dismissed. We tried to add some additional
annotations in the code through [2] to see if Coverity becomes happy, but
the result doesn't change. The report still complains about 25 open defects
for GD1 (7 High, 18 Medium and 1 Low). More interestingly, yesterday's
report claimed we fixed 8 defects and introduced 1, yet the overall count
remained at 102. I'm not able to connect the dots of this puzzle; can anyone?

[1] https://scan.coverity.com/projects/gluster-glusterfs/view_defects
[2] https://review.gluster.org/#/c/22619/
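
For anyone unfamiliar with the annotation approach in [2]: Coverity honours
inline comments of the form /* coverity[<event-name>] */ placed on the line
just before the flagged statement, which marks that single occurrence as
intentional. Below is a minimal, hypothetical C sketch of the idea; the helper
function and the event name are illustrative and not taken from the actual
patch:

    #include <stdio.h>

    /* Illustrative helper only; this is not code from the GD1 patch in [2]. */
    static void
    best_effort_notify(const char *msg)
    {
        /* Failing to emit this message is acceptable, so a CHECKED_RETURN-style
         * finding here would be a false positive.  The annotation on the next
         * line (event name assumed) tells Coverity to dismiss that one event. */
        /* coverity[unchecked_value] */
        fputs(msg, stderr);
    }

    int
    main(void)
    {
        best_effort_notify("best-effort notification\n");
        return 0;
    }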
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Proposing to previous ganesha HA cluster solution back to gluster code as gluster-7 feature

2019-05-03 Thread Jiffin Tony Thottan


On 30/04/19 6:59 PM, Strahil Nikolov wrote:

Hi,

I'm posting this again as it got bounced.
Keep in mind that corosync/pacemaker is hard for new admins/users to set up
properly.

I'm still trying to remediate the effects of poor configuration at work.
Also, storhaug is nice for hyperconverged setups where the host is not only
hosting bricks but also other workloads.
Corosync/pacemaker require proper fencing to be set up, and most of the stonith
resources 'shoot the other node in the head'.
I would be happy to see an easy-to-deploy option (say, 'cluster.enable-ha-ganesha
true') with gluster bringing up the floating IPs and taking care of the
NFS locks, so that no disruption is felt by the clients.



It does take care of those, but certain prerequisites need to be followed.
Note that fencing won't be configured for this setup; we may think about
that in the future.
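
For anyone who has not used the pre-3.10 solution, it was driven by a small
/etc/ganesha/ganesha-ha.conf plus one gluster command; the sketch below shows
roughly what that looked like (the HA name, hostnames and addresses are
illustrative, and the exact syntax may change as the feature is restored):

    # /etc/ganesha/ganesha-ha.conf -- illustrative values only
    HA_NAME="ganesha-ha-demo"
    HA_CLUSTER_NODES="server1,server2"
    VIP_server1="192.168.1.101"
    VIP_server2="192.168.1.102"

With the gluster shared-storage volume enabled as a prerequisite, HA was then
switched on with a single command along the lines of 'gluster nfs-ganesha
enable'.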


--

Jiffin



Still, this will be a lot of work to achieve.

Best Regards,
Strahil Nikolov

On Apr 30, 2019 15:19, Jim Kinney  wrote:
   
+1!

I'm using nfs-ganesha in my next upgrade so my client systems can use NFS
instead of fuse mounts. Having an integrated, designed-in process to coordinate
multiple nodes into an HA cluster will be very welcome.

On April 30, 2019 3:20:11 AM EDT, Jiffin Tony Thottan  
wrote:
   
Hi all,


Some of you folks may be familiar with the HA solution provided for nfs-ganesha
by gluster using pacemaker and corosync.

That feature was removed in glusterfs 3.10 in favour of the common HA project
"Storhaug". However, Storhaug has not progressed much over the last two years
and its development is currently halted, hence we are planning to restore the
old HA ganesha solution back to the gluster code repository, with some
improvements, targeting the next gluster release (7).

I have opened up an issue [1] with the details and posted an initial set of
patches [2].

Please share your thoughts on the same


Regards,

Jiffin

[1] https://github.com/gluster/glusterfs/issues/663

[2] https://review.gluster.org/#/q/topic:rfc-663+(status:open+OR+status:merged)



--
Sent from my Android device with K-9 Mail. All tyopes are thumb related and 
reflect authenticity.


___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Proposing to previous ganesha HA cluster solution back to gluster code as gluster-7 feature

2019-05-03 Thread Jiffin Tony Thottan


On 30/04/19 6:41 PM, Renaud Fortier wrote:


IMO, you should keep storhaug and maintain it. At the beginning, we
were with pacemaker and corosync. Then we moved to storhaug with the
upgrade to gluster 4.1.x. Now you are talking about going back to how it
was. Maybe it will be better with pacemaker and corosync, but the
important thing is to have a solution that will be stable and maintained.




I agree it is very frustrating; there is no further development planned
for it unless someone picks it up and works on its stabilization and
improvement.


My plan is just to bring back what gluster and nfs-ganesha had before.

--

Jiffin


thanks

Renaud

*From:* gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] *On behalf of* Jim Kinney

*Sent:* 30 April 2019 08:20
*To:* gluster-us...@gluster.org; Jiffin Tony Thottan
; gluster-us...@gluster.org; Gluster Devel
; gluster-maintain...@gluster.org;
nfs-ganesha ; de...@lists.nfs-ganesha.org
*Subject:* Re: [Gluster-users] Proposing to previous ganesha HA cluster
solution back to gluster code as gluster-7 feature


+1!
I'm using nfs-ganesha in my next upgrade so my client systems can use
NFS instead of fuse mounts. Having an integrated, designed-in process
to coordinate multiple nodes into an HA cluster will be very welcome.


On April 30, 2019 3:20:11 AM EDT, Jiffin Tony Thottan 
mailto:jthot...@redhat.com>> wrote:


Hi all,

Some of you folks may be familiar with the HA solution provided for
nfs-ganesha by gluster using pacemaker and corosync.

That feature was removed in glusterfs 3.10 in favour of the common HA
project "Storhaug". However, Storhaug has not progressed much over the
last two years and its development is currently halted, hence we are
planning to restore the old HA ganesha solution back to the gluster
code repository, with some improvements, targeting the next gluster
release (7).

I have opened up an issue [1] with the details and posted an initial
set of patches [2].

Please share your thoughts on the same

Regards,

Jiffin

[1] https://github.com/gluster/glusterfs/issues/663

[2] https://review.gluster.org/#/q/topic:rfc-663+(status:open+OR+status:merged)


--
Sent from my Android device with K-9 Mail. All tyopes are thumb 
related and reflect authenticity.


___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel