Re: [Nfs-ganesha-devel] Announce GA of Ganesha V2.2.0

2015-04-23 Thread DENIEL Philippe
On 04/22/15 17:07, Frank Filz wrote:
>> I added 2 commits for updating rpm and debian changes. I push them on
>> gerrithub, both on refs/for/next and refs/for/master (I guess they should
>> land on master too).
>> Related reviews:
>>   https://review.gerrithub.io/#/c/230823/
>>   https://review.gerrithub.io/#/c/230824/
>>   https://review.gerrithub.io/#/c/230825/
>>   https://review.gerrithub.io/#/c/218025/
>>
>> I'll push rpms and tarballs on Sourceforge, they will include those
> commits.
>>   Regards
>>
>>   Philippe
> Oh grumble...
>
> Wish we had got this stuff in before I tagged V2.2.0 and opened V2.3...
In fact, those commits have to be the very last ones before GA, and I 
need src/ChangeLog updated before I can produce them. Once the 
changelog was generated, you should have told me that those commits were 
still to be done (or done them yourself, for they are pretty simple). 
Anyway, we will agree on the point that we failed to stay synchronized 
(we have 9 good excuses: each of the timezones between you and me ;-) ).
We do not release versions frequently enough and we lack a procedure 
for that. This means that we'll do better next time.

> We need something to automate this stuff, having to put changelog in several
> places is annoying.
I do agree on automating the process. But there will still be changelogs 
in different places: I do not see any easy and maintainable way of 
sharing changelogs between the rpm and debian packages.
This is an operation we do two or three times a year; do we need 
anything complicated? Let's set up a procedure in the wiki and apply 
it. Having the changelog in different places is no problem as long as 
the information is the same.
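The "same information, two formats" idea above can be sketched as a tiny generator: one record rendered into both packaging formats. This is a hypothetical sketch, not a project script; the function names, the sample values, and the placeholder e-mail address are all assumptions.

```shell
#!/bin/sh
# Hypothetical sketch: keep one changelog record and render it into both
# packaging formats, so the rpm %changelog and the debian changelog always
# carry identical information. All names/values below are assumptions.
VERSION="2.2.0"
AUTHOR="Philippe Deniel"
EMAIL="maintainer@example.org"                  # placeholder address
SUMMARY="GA release of nfs-ganesha V${VERSION}"

# RPM %changelog entry, e.g. "* Thu Apr 23 2015 ... - 2.2.0"
rpm_entry() {
    printf '* %s %s <%s> - %s\n- %s\n' \
        "$(date '+%a %b %d %Y')" "$AUTHOR" "$EMAIL" "$VERSION" "$SUMMARY"
}

# Debian changelog entry in the layout dch(1) produces
deb_entry() {
    printf '%s (%s) unstable; urgency=low\n\n  * %s\n\n -- %s <%s>  %s\n' \
        "nfs-ganesha" "$VERSION" "$SUMMARY" "$AUTHOR" "$EMAIL" "$(date -R)"
}

rpm_entry
deb_entry
```

A release procedure in the wiki could then be as short as "edit the record, run the generator, paste each entry into its file".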

 Regards

 Philippe

>
> Frank
>


--
BPM Camp - Free Virtual Workshop May 6th at 10am PDT/1PM EDT
Develop your own process in accordance with the BPMN 2 standard
Learn Process modeling best practices with Bonita BPM through live exercises
http://www.bonitasoft.com/be-part-of-it/events/bpm-camp-virtual- event?utm_
source=Sourceforge_BPM_Camp_5_6_15&utm_medium=email&utm_campaign=VA_SF
___
Nfs-ganesha-devel mailing list
Nfs-ganesha-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel


Re: [Nfs-ganesha-devel] nfs-ganesha and nfs-common

2015-04-23 Thread DENIEL Philippe
On 04/13/15 23:08, Matt W. Benjamin wrote:
> I thought the daemons (either of them) weren't needed on the Ganesha
> server side.
That's right: Ganesha does the idmap resolution by itself (using 
libnfsidmap or equivalent), and it does the GSS negotiation itself, so 
it does not require any gssd.

 Regards

 Philippe

>
> We don't have the gss backchannel atm, but many folks have verified
> the server and GSS in the fore channel, to the best of my knowledge
> without these running (on the non-client, Ganesha host).
>
> Matt
>
> - "J. Bruce Fields"  wrote:
>
>> On Mon, Apr 13, 2015 at 09:09:59AM -0700, Frank Filz wrote:
>>> Ganesha does need rpc.statd for NLM operations (for NFSv3
>>> locks)
> rpc.gssd is required for GSS (Kerberos) on both client and
>> server.
 What does it do on the server?  I thought this just terminated
 kernel client upcalls.
>>> Oh, yea, probably so... Does the kernel server still use
>> rpc.svcgssd?
>>
>> Note that if you want to support NFSv4.0 delegations over krb5 then
>> the
>> server needs to be able to initiate context negotiation and the
>> client
>> needs to be able to accept.  So you may end up needing both daemons
>> on
>> both side.
>>
>> Making Ganesha handle that (if it doesn't already) may be best left as
>> a
>> low priority: NFSv4.0/krb5 will still work fine, there just won't be
>> delegations granted.  And 4.1+ doesn't have this problem.  (The
>> client's
>> responsible for creating contexts for the sessions backchannel.)
>>
>> Another wrinkle is that gss-proxy should be replacing rpc.svcgssd on
>> newer distros.
>>
>> --b.
>>
>>> I haven't set up Kerberos in forever, so I'm really rusty... I
>> really
>>> need to try it again, I just remember it being a PITA.




[Nfs-ganesha-devel] Fwd: [nfs-ganesha.github.com] Update news to release V2.2 (#1)

2015-04-24 Thread DENIEL Philippe

Hi List,

have you seen that pull request (made apparently online from github.com)

Regards

Philippe


 Forwarded Message 
Subject:[nfs-ganesha.github.com] Update news to release V2.2 (#1)
Date:   Thu, 23 Apr 2015 14:48:06 -0700
From:   Timofey 
Reply-To:   nfs-ganesha/nfs-ganesha.github.com
To: nfs-ganesha/nfs-ganesha.github.com

   You can view, comment on, or merge this pull request online at:

https://github.com/nfs-ganesha/nfs-ganesha.github.com/pull/1


   Commit Summary

 * Update news to release V2.2


   File Changes

 * *M* index.html (2)


   Patch Links:

 * https://github.com/nfs-ganesha/nfs-ganesha.github.com/pull/1.patch
 * https://github.com/nfs-ganesha/nfs-ganesha.github.com/pull/1.diff

—
Reply to this email directly or view it on GitHub.






Re: [Nfs-ganesha-devel] [Nfs-ganesha-support] Troubles with configuration

2015-04-24 Thread DENIEL Philippe
Hi,

what does "showmount -e 172.20.252.12" say? That way, you'll see 
whether Ganesha was able to build the export entries.
My knowledge of Ceph is weak, so I am forwarding your question to 
nfs-ganesha-devel. In particular, I do not know whether:
 - CephFS supports the open_by_handle_at() syscall
 - FSAL_CEPH can work on a "regular" Ceph distribution. I remember 
that Matt and Adam made some changes to Ceph to ease the pNFS 
implementation, and I do not know if those commits went upstream. They 
will tell us about this.
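For the "access denied" on the v3 mount quoted below, one common cause is that no client entry matches the caller's address. A sketch of an export to try — the directive names follow the 2.2-era config syntax, but the CLIENT sub-block and the Squash value are assumptions to check against the export documentation for your version:

```
EXPORT
{
    Export_Id = 77;
    Path = /storage;
    Pseudo = /cephfs;
    Access_Type = RW;
    NFS_Protocols = 3;
    Transport_Protocols = TCP;
    Squash = None;          # rule out squashing as the cause while debugging

    # Explicitly grant the client's subnet; a v3 MOUNT request that
    # matches no client entry is typically refused.
    CLIENT
    {
        Clients = 172.20.0.0/16;
        Access_Type = RW;
    }

    FSAL
    {
        Name = VFS;
    }
}
```

If this mounts, tighten Squash and the Clients list back down step by step.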

 Regards

 Philippe

On 04/24/15 09:18, Timofey Titovets wrote:
> Good time of day, I've tried to set up an nfs-ganesha server on Ubuntu 15.04,
> built from the latest source from the master tree.
>
> 1. Can't be mounted with nfsv3
> Config:
> EXPORT
> {
>  # Export Id (mandatory, each EXPORT must have a unique Export_Id)
>  Export_Id = 77;
>  # Exported path (mandatory)
>  Path = /storage;
>  # Pseudo Path (required for NFS v4)
>  Pseudo = /cephfs;
>  Access_Type = RW;
>  NFS_Protocols = 3;
>  FSAL {
>  Name = VFS;
>  }
> }
> Or
> EXPORT
> {
> Export_ID = 1;
> Path = "/";
> Pseudo = "/cephfs";
> Access_Type = RW;
> NFS_Protocols = 3;
> Transport_Protocols = TCP;
> FSAL {
> Name = CEPH;
> }
> }
>
> On client I've get:
> # mount -o nfsvers=3 172.20.252.12:/ /mnt -v
> mount.nfs: timeout set for Fri Apr 24 09:47:50 2015
> mount.nfs: trying text-based options 'nfsvers=3,addr=172.20.252.12'
> mount.nfs: prog 13, trying vers=3, prot=6
> mount.nfs: trying 172.20.252.12 prog 13 vers 3 prot TCP port 2049
> mount.nfs: prog 15, trying vers=3, prot=17
> mount.nfs: trying 172.20.252.12 prog 15 vers 3 prot UDP port 40321
> mount.nfs: mount(2): Permission denied
> mount.nfs: access denied by server while mounting 172.20.252.12:/
>
> What did I do wrong? %)
>




Re: [Nfs-ganesha-devel] DISCUSSION: V2.3 workflow and how we proceed

2015-04-28 Thread DENIEL Philippe
Hi Frank,

reply is in the message body. I cut some pieces of your original message 
to avoid a too long message where information would be diluted.

On 04/24/15 23:59, Frank Filz wrote:
> 1. Abandon gerrithub, revert to using github branches for review and merge.
> This has a few problems.
The issues with github-based reviews are known. Avoiding a current 
issue by stepping back to formerly known problems may seem comfortable, 
but it is no evolution; it's clearly a regression.

> 2a. One solution here is use an e-mail review system (like the kernel
> process).
This is prehistory... The kernel community works like this because it 
has had this habit since the (Linux) world was new and nothing else 
existed. Can anyone seriously say it would be user-friendly? OK, mails 
are archived, but not indexed. Looking for a review would be like 
looking for a needle in a haystack. Once hundreds of mails have been 
sent, received and re-re-re-re-replied to, finding useful information 
will become impossible. Please don't bring us back to the Stone Age.

> 3. Change our workflow to work with gerrithub better (stop using the
> incremental patch process). One loss here would be the ability to bisect and
> hone in on a small set of changes that caused a bug.
It seems like a much better idea. People in my team are developing in 
the Lustre code. The Lustre community has worked with a private gerrit 
since the beginning. They have their best practices and their workflow.
In particular, they have "patch windows": when the window is open, 
people can submit patchsets. Once it closes, people review them, fix 
stuff, rebase code, and the branch is released. No new patchset comes 
in at this time. Then the window opens again and a new cycle starts. One 
important point: the "master repo" is the git inside gerrit and no 
other. This means that contributors would only fetch gerrithub to 
rebase their work; github would then become a simple mirror.
Clearly, the "merge/fix/rebase" process is longer than a week. We could 
work this way by simply abandoning the one-week cycle we are accustomed 
to. It's just a matter of adopting new, better adapted, rules and best 
practices.

> 3a. The most extreme option would be to abandon incremental patches. If you
> have a body of work for submission in a given week, you submit one patch for
> that work.
Again, I believe that the one-week cycle is the real issue; it's such 
a constraint for release management. You should open/close the 
submission window at will. It would ease your work a lot, wouldn't 
it? Remember that gerrit was designed to help the release manager; it's 
not meant to be this painful. We may just be using the tool the wrong way.

> 3c. A process implied by a post Matt found: Perform code review with
> incremental patches. Once submission is ready, squash the submission into a
> single patch and get final signoffs on that. Then the single patch is
> merged.
People can rebase their patchsets, even when submitted to gerrit, and I 
think they should keep the same Change-Id. Remember that the Change-Id is 
nothing but a mark on a commit that allows gerrit to keep patchset history. 
It's not a commit id.
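The mechanics behind "keep the same Change-Id" can be demonstrated locally: gerrit's commit-msg hook adds the trailer once, and "git commit --amend" reuses the message, so every new version of a patch carries the same Id and gerrit groups them as one patchset history. The hook below is a minimal stand-in written for this sketch, not the real hook served by a gerrit instance.

```shell
#!/bin/sh
# Demonstrate that an amended commit keeps its Change-Id trailer.
# The commit-msg hook here is a toy stand-in for gerrit's real hook.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email dev@example.org   # placeholder identity
git config user.name "Dev"

cat > .git/hooks/commit-msg <<'EOF'
#!/bin/sh
# Append a Change-Id only if the message does not already carry one.
grep -q '^Change-Id:' "$1" || \
    printf '\nChange-Id: I%s\n' "$(git hash-object "$1")" >> "$1"
EOF
chmod +x .git/hooks/commit-msg

echo v1 > file.c && git add file.c && git commit -qm "my patch"
id1=$(git log -1 --format=%B | grep '^Change-Id:')

# Rework the patch: the amended commit keeps the original Change-Id,
# which is exactly what lets gerrit track it as a new patchset version.
echo v2 > file.c && git add file.c && git commit -q --amend --no-edit
id2=$(git log -1 --format=%B | grep '^Change-Id:')

[ "$id1" = "$id2" ] && echo "preserved: $id1"
```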

>
> If we proceed with gerrit, Malahal and Vincent will need to re-submit their
> patches with new changeids
 From my point of view, changing the Change-Ids would clearly be a misuse 
of gerrit.

> 1. We need more participation in review and verification on a timely basis.
Yes. But the timeline can be refined.

> 2. We should make sure each patch that has significant impact in areas the
> author may not be able to test is verified by someone who is able to test
> that area, and then make sure we document that verification in the review
> history (here is where gerrit COULD shine).
Gerrit can trigger automated tests (as it does with checkpatch.pl). 
Github does not (and neither does email).


> 3. It would be helpful to be able to identify one or two critical reviewers
> for each patch, and then make sure those people are able to review the
> patch.
Yes.

>   For those patches that may need more than a couple people to review,
> we need to stage them at least a week ahead of when we expect them to be
> merged, and then somehow flag those patches as high priority for all
> required reviewers to actually review.
The question of the timeline comes back again. That's clearly part of the issue.

I am, and always was, a partisan of using gerrit. I will go and talk 
with my Lustre developers and come back with a summary of the 
workflow used for Lustre with their private gerrit. This may bring a 
new set of information to the current discussion.

 Regards

 Philippe


Re: [Nfs-ganesha-devel] DISCUSSION: V2.3 workflow and how we proceed

2015-04-29 Thread DENIEL Philippe
On 04/28/15 18:13, Frank Filz wrote:
(...)
> The problem with a longer that one  week cycle is that we get larger and
> larger volumes of patches. With the right tools, we can sustain a weekly
> cycle.
We introduced the 1-week cycle to regulate/sort patches, for we had no 
other way of doing it when using github.
I think that gerrit can help regulate the patch workflow and so removes 
the need for a 1-week cycle.
What we should do could look like this:
 - people submit patches and ask others to review them
 - gerrit runs automated tests on them (like checkpatch) and verifies 
the patches
 - reviewers add +1/-1... new versions of the patch are pushed, 
discussions are held... and finally the patch is ready
 - at this point, people contact the release manager (aka Frank ;-) 
) and ask for the patch to land
 - if the patch is mergeable (fast-forwardable) it is merged 
(cherry-pick button in the gerrit interface); if not, the owner will 
have to rebase it and restart the whole review process
This can be done on the fly, as patches arrive and are reviewed. This 
trades a constrained 1-week cycle for several, less constrained, 
cycles, one per patch. Keeping this in mind, a single "big and squashed" 
patch is clearly easier to manage. As Matt said, bigger patches are no 
issue. Whether I do one big patch or 10 small ones, all of my changed 
files will have to be reviewed, so the workload is the same. In 
fact, one big patch is a comfortable situation: it is easy to rebase, and 
it depends only on stuff already pushed and landed. If I publish 5 
patches, what if patches 1, 2 and 3 merge fine, but 4 produces a 
conflict? How could I rebase 4 without touching 1, 2 and 3? This leads 
to a dependency maze, and that is precisely the situation we fell into.

Regarding this, I believe we should do the following:
  - nfs-ganesha in gerrithub (Frank's home in gerrithub) should 
accept only fast-forwardable commits (that's a matter of clicking on the 
right button on the right page)
 - we should provide big and squashed patches, one per feature. For 
example, CEA will soon push a rework of FSAL_HPSS; this will be a single 
commit.
 - the git in gerrit is the reference. Forget github; at best it is a 
clone that exposes the code in a fancy way. This means that we stop 
fetching Frank's github and fetch Frank's gerrithub. This is a very 
important point. It seems to me that when a patch lands in gerrithub, the 
related github repository is automatically updated. This will save 
Frank from the not-funny work of getting things on one side to push them 
on the other. We use gerrit, so gerrit becomes our reference. It's as 
simple as that. Forget github, or use it to store work-in-progress stuff 
of your own.

> Note however that the review cycle of a patch set needs to be understood to
> not always be a week. Sometimes it will take several weeks of iterations.
As I said, the "one single 1-week cycle" is replaced by "several 
independent patch-related cycles". Some may take weeks, some may take 
days or less. The 1-week cycle is github-related. If we stop using 
github as a reference and use gerrithub instead, we have no need for 
this 1-week constraint.

> What I want to work on though is responsiveness when people submit patches
> that they get a first review in a timely fashion.
They can. People can then add stuff to the patch and squash the 
result into a new patch. If the Change-Id is preserved, gerrit will 
understand that this is a new version of an already known patchset and 
keep the same tracking.

> Also, with the changeid allowing review comments to be tracked across
> versions of a patch, I want to encourage posting patches for review earlier
> so the major features are getting reviewed as they are developed, not one
> huge review at the end that no one can keep track of.
100% agreed ;-)
This is a completely safe way of proceeding.

> We have to do it because once a changeid has been merged, it is marked
> closed, and can't be resubmitted. This happened because the way we are
> trying to use gerrithub is non-native and I messed up. This will never
> happen again (with the usual caveat, never say never).
This is the result of a mistake, and we are all on a learning curve in 
using gerrithub. It's no actual issue so far.
I understood that you spoke about modifying Change-Ids as part of the 
normal workflow. Nice to see you do not think this way ;-)
As far as possible, Change-Ids are to be preserved.

> Yes, that is goodness. Long term, we need it to automate testing of a set of
> temporarily merged patches, so if you and I have submitted patches, we test
> that they play well together. We also need to automate testing across a
> wider variety of platforms.
We do need automated tests. TravisCI can do compilation tests, but it 
seems too limited to run "client/server" tests on several machines (what 
Jenkins/Workflow does, or what I do with some of my Sigmund tests). 
Ideally, when someone submits a patch, before it is actually merged or 
even reviewed, automated 

[Nfs-ganesha-devel] Fwd: [Nfs-ganesha-support] nfs-ganesha pNFS over CephFS deployment

2015-05-11 Thread DENIEL Philippe

Hi,

I forward your message to the devel list. Here, you'll find people with 
the right answer.


Regards

Philippe


 Forwarded Message 
Subject:[Nfs-ganesha-support] nfs-ganesha pNFS over CephFS deployment
Date:   Thu, 7 May 2015 17:35:20 +0800
From:   莊尚豪 
To: nfs-ganesha-supp...@lists.sourceforge.net



Dear all,

I want to deploy a nfs-ganesha pNFS server based on CephFS

I already have a Ceph cluster (1 monitor + 1 mds + 3 osds) and a 
nfs-ganesha server (OS: Fedora 21)


However, I don't know how to deploy a pNFS architecture on these machines.

About EXPORT in the nfs-ganesha server, how do I set up a simple pNFS 
configuration?


Thanks for giving me a suggestion.

Best Regards,

Ben C.





[Nfs-ganesha-devel] About checkpatch.pl

2015-05-11 Thread DENIEL Philippe

Hi,

I just submitted a patch on gerrithub to update checkpatch.pl (we were 
using a pretty old version).

There are 2 fun facts to be known:
 1- checkpatch does not pass the checkpatch test: run checkpatch on 
itself and it will complain a lot. That does not bother me, but I guess 
the gerrit triggers will be quite unhappy.
 2- checkpatch currently has a bug (which is referenced on the kernel 
mailing list): if you have code like "int xdr_something(XDR *xdrs, 
something *objp);" it causes trouble, for checkpatch.pl won't identify 
XDR as a type and will believe that "XDR *xdrs" is actually a 
multiplication whose indentation should be "XDR * xdrs" (it thinks XDR 
is a variable name). I found no version of checkpatch.pl that fixes 
this. For want of that, my FlexFile patch triggers many checkpatch 
warnings.


Regards

Philippe



Re: [Nfs-ganesha-devel] CMake fails in 2.3-dev-3

2015-05-18 Thread DENIEL Philippe
Hi,

I met the same issue on my testbed.

 Philippe

On 05/18/15 15:34, Malahal Naineni wrote:
> Meghana Madhusudhan [mmadh...@redhat.com] wrote:
>> Hi,
>>
>> The latest tag has an error in CMakeLists.txt,
>> cmake -DUSE_FSAL_GLUSTER=ON -DCURSES_LIBRARY=/usr/lib64 
>> -DCURSES_INCLUDE_PATH=/usr/include/ncurses -DCMAKE_BUILD_TYPE=Maintainer 
>> -DUSE_DBUS=ON /root/nfs-ganesha/src/
>> CMake Error: Error in cmake code at
>> /root/nfs-ganesha/src/CMakeLists.txt:1310:
>> Parse error.  Function missing ending ")".  End of file reached.
>>
>>
>> This is due to a typo in line 38 of /src/CMakeLists.txt. The diff is pasted 
>> here,
>>
>> -set(GANESHA_EXTRA_VERSION -dev-3
>> +set(GANESHA_EXTRA_VERSION -dev-3)
> Looks like we have a process issue here. How come this wasn't detected
> prior to its making it there!
>
> Regards, Malahal.
>
>




[Nfs-ganesha-devel] This week's pull request

2015-05-21 Thread DENIEL Philippe
Hi,

3 commits in my pull_request branch this week. You already know 2 of them 
(FlexFiles layout and checkpatch.pl enhancements); the 3rd one is a 
small fix to nfs4_op_lookup. When running pynfs against V2.3-dev3.1, it 
appeared that a corner case was badly managed (NFSv4.1 should return 
NFS4ERR_REQ_TOO_BIG instead of NFS4ERR_NAME_TOO_LONG).
You may cherry-pick this patch. It's a pretty minor change anyway.

 Regards

 Philippe




Re: [Nfs-ganesha-devel] Configuring pNFS/spnfsd - Linux NFS

2015-06-03 Thread DENIEL Philippe

On 06/01/15 09:36, 孙俊 wrote:

http://wiki.linux-nfs.org/wiki/index.php/Configuring_pNFS/spnfsd

Can I use nfs-ganesha like spnfsd?


I wrote this wiki page a very long time ago, in the early years of 
NFSv4.1, when there were few implementations of pNFS and thus little 
pNFS support in Ganesha.
The spnfsd was a way to implement LAYOUT4_NFSV4_1_FILES using multiple 
NFSv4 servers. The very first version of pNFS in Ganesha worked 
like this, but the approach inside spnfsd (federating several independent 
filesystems, each of them local to one pNFS DS) has been dead for a 
long time.

Actually, I should ask the admins at linux-nfs.org to remove this wiki page.

Regards

Philippe


[Nfs-ganesha-devel] Pull Request

2015-06-25 Thread DENIEL Philippe

Hi Frank,

3 commits for this week:

   # git log --oneline V2.3-dev-8..HEAD
   01812a3 TRAVIS/CI: remove ceph from build made by Travis/CI
   fb4a5a7 Do not register NFS port is configuration file do not ask
   for NFS support
   2a43107 NFSv4.1/pNFS/Flexible Files: add Flexible Files Layout XDR
   definition


01812a3 removes libcephfs from .travis.yml, for libcephfs-devel no longer 
seems to be in the requested package repository, which caused the build 
to fail.
fb4a5a7 is a quick patch that prevents ganesha from registering the NFS 
dedicated port and RPC service when the configuration file does not ask 
for NFS (in my case I have 9p-only servers).

2a43107 is my pNFS/FlexFiles patch.

Regards

Philippe


Re: [Nfs-ganesha-devel] What are these files, are they useful

2015-08-20 Thread DENIEL Philippe
Hi,

all of those scripts/* files have been deprecated for a long time. They 
can be removed. Files under Docs/* may have "historical" value, but they 
have been deprecated since the wiki was created.

 regards

 Philippe

On 08/11/15 22:31, Frank Filz wrote:
> In doing some scrubbing for licenses, I'm curious about the following files
> and what they are used for?
>
> If any of these can be removed, I think that would be good.
>
> Thanks
>
> Frank
>
> scripts/test_pynfs/test.cd_junction.pynfs
> scripts/test_pynfs/test_nfs4_fs_localtions_request.pynfs
> scripts/test_pynfs/test_nfs4_mount_request.pynfs
> scripts/test_pynfs/test_get_root_entry_rawdev.pynfs
> scripts/test_pynfs/test.proxy.pynfs
> scripts/test_pynfs/test_get_root_entry_type.pynfs
> scripts/test_pynfs/test_get_root_entry_supported_attrs.pynfs
> scripts/test_pynfs/test_get_root_entry_all_type_for_pseudofs.pynfs
>
> scripts/test_through_mountpoint/test_rename.sh
> scripts/test_through_mountpoint/test_create.sh
> scripts/test_through_mountpoint/test_create_ls.sh
> scripts/test_through_mountpoint/test_create_rm.sh
> scripts/test_through_mountpoint/test_mkdircascade.sh
> scripts/test_through_mountpoint/test_rm_stat_readdir.sh
> scripts/test_through_mountpoint/test_createrenameunlink.sh
> scripts/test_through_mountpoint/test_rename2.sh
> scripts/test_through_mountpoint/test_createunlink.sh
> scripts/test_through_mountpoint/test_read_write.sh
>
> Also, I think the following Docs files are hopelessly out of date and should
> be removed:
>
> Docs/nfs-ganesha-adminguide.pdf
> Docs/ganesha_logging.pdf
> Docs/Resources.txt
> Docs/nfs-ganesha-userguide.rtf
> Docs/nfs-ganesha-userguide.pdf
> Docs/15to20-porting.txt
>
> Some other files that look hopelessly out of date and not useful
> ganesha.el
> tools/remove_rpc.pl
> scripts/reindenture
>




[Nfs-ganesha-devel] Testing NFSv4 ACL

2015-09-23 Thread DENIEL Philippe
Hi,

in order to add more tests to my non-regression test suite, I'd like to 
look at NFSv4 ACLs.
Currently, a POSIX FS manages ACLs via a special xattr named 
system.posix_acl_access (and uses getxattr()/setxattr() to deal with 
it). The xattr can be accessed either via {get,set}xattr() or through 
the usual ACL utilities.
NFSv4 ACLs are not POSIX ACLs; they rely on system.nfs4_acl (which 
can be managed by the {set,get}xattr() functions and by the utilities 
nfs4_setfacl and nfs4_getfacl, provided in the package nfs4-acl-tools).
Ganesha embeds ACL support for NFS: what tests are currently designed 
and used to check that the feature works correctly and does 
not regress?
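A non-regression check along these lines could be sketched with the nfs4-acl-tools utilities named above. This is a hedged sketch: MNT must point at an NFSv4 mount of a Ganesha export, and the principal (alice@example.com) and the "A::...:RW" ACE spelling are illustrative assumptions, not values taken from an existing suite. The script skips cleanly where the tools or the mount are absent.

```shell
#!/bin/sh
# Hedged sketch of an NFSv4 ACL round-trip check: set an ALLOW ACE with
# nfs4_setfacl, then verify nfs4_getfacl returns it. MNT and the
# principal are assumptions; adjust for the real test environment.
MNT=${MNT:-/mnt/ganesha}

run_acl_check() {
    command -v nfs4_setfacl >/dev/null 2>&1 || {
        echo "SKIP: nfs4-acl-tools not installed"; return 0; }
    [ -d "$MNT" ] || { echo "SKIP: $MNT is not available"; return 0; }

    f="$MNT/acl_test.$$"
    touch "$f" 2>/dev/null || {
        echo "SKIP: cannot create files under $MNT"; return 0; }

    # Add an ALLOW ACE (R/W aggregate permissions) and read it back.
    if nfs4_setfacl -a "A::alice@example.com:RW" "$f" &&
       nfs4_getfacl "$f" | grep -q "alice@example.com"; then
        echo "PASS: ACE stored and returned"
    else
        echo "FAIL: ACE missing or rejected"
    fi
    rm -f "$f"
}

run_acl_check
```

Variations (DENY ACEs, inheritance flags on directories, chmod interaction) would make natural additional cases in the same shape.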

 Regards

 Philippe





[Nfs-ganesha-devel] before we push 2.3

2015-10-28 Thread DENIEL Philippe
Frank,

we should not forget the last commit with update to %changelog (in rpm 
specfile) and in src/ChangeLog.
Ideally, it should be the last one before 2.3.0 tag.

 Regards

 Philippe



Re: [Nfs-ganesha-devel] before we push 2.3

2015-10-28 Thread DENIEL Philippe
Is it possible to do a last-chance rebase? The RPM changelog is the most 
important one.

 Regards

 Philippe

On 10/28/15 16:43, Frank Filz wrote:
> Done:
>
> https://github.com/ffilz/nfs-ganesha/commit/cf1e588944cedf54b1ec129e4b6776509a584d00
>
> Well, didn't change RPM specfile... sorry...
>
> Frank
>
>> we should not forget the last commit with update to %changelog (in rpm
>> specfile) and in src/ChangeLog.
>> Ideally, it should be the last one before 2.3.0 tag.
>>
>>   Regards
>>
>>   Philippe
>




Re: [Nfs-ganesha-devel] fid cloned in 9p and fd leak

2017-09-28 Thread DENIEL Philippe

On 09/28/17 02:18, Dominique Martinet wrote:

Frank Filz wrote on Wed, Sep 27, 2017 at 10:50:00AM -0700:

One source of “fd leaks” is that the global fd, which is used for
getattrs, is no longer well managed. I have some ideas in progress on
how to better manage this, but discussion has stalled due to folks
involvement in Bakeathon.



If there is also a genuine leak of state_t associated with fids, that
would be an issue. You might try running under Valgrind memcheck to
look for leaks.

No, this is definitely the former - I had a local patch that would not
open the global fd but use a temporary fd in vfs_fsal_open_and_stat(),
which worked ok-ish, but I seem to have misplaced it.
Basically, always going through the default case of the switch
statement in that function will make it not use the global fd for stats
and fix the leak Philippe sees, but it's not a proper solution.

Dominique, if you have the patch, send it to me and I'll apply and 
validate it (cthon / sigmund / kernel compilation) on top of a regular 
Ganesha 2.5/master.


    Philippe




Re: [Nfs-ganesha-devel] Implement a FSAL for S3-compatible storage

2018-01-04 Thread DENIEL Philippe

Hi Aurélien,

I can offer you an alternate solution, still nfs-ganesha based. For 
the needs of a project, I developed an open-source library that emulates a 
POSIX namespace using a KVS (for metadata) and an object store (for 
data). For example, you can use Redis and RADOS. I have written a FSAL 
for it (not yet pushed to the official branch), but it is not compliant 
with support_ex; it still uses the former FSAL semantics (so it should 
be ported to support_ex). If you are interested, I can give you some 
pointers (the code is on GitHub). You could use S3 as the data store, for 
example. In particular, I had to solve the same "inode" issue that you 
met. This solution has very little impact on nfs-ganesha code (it just adds 
a new FSAL).


 Regards

        Philippe

On 01/03/18 19:58, Aurelien RAINONE wrote:
To follow up on the development of an FSAL for S3, I have some doubts 
and questions I'd like to share.
Apart from its full path, S3 doesn't have the concept of a file 
descriptor; I mean, there's nothing other than the full path that 
I can provide to S3 in order to get the attributes or content of a specific object.
I have some doubts regarding the implementation of the S3 FSAL object 
handle (s3_fsal_obj_handle).
Should s3_fsal_obj_handle be very simple? For example, should it only 
contain a key that maps to the full S3 filename in a key-value store?
Or, on the contrary, should the handle implement a tree-like structure, 
as I saw in FSAL_MEM?
Or something in between, but what?
Having a very simple handle has some advantages, but may require more 
frequent network calls; for example, readdir won't have any kind of 
information about the content of the directory.
Having a whole tree-like structure in the handle would give direct 
access to directory content, but isn't it the role of the Ganesha cache 
to do that?
My questions probably show that I have trouble understanding the 
responsibility of my FSAL implementation with regard to the cache. 
Who does what, and who doesn't do what?
Good evening,
Aurélien







Re: [Nfs-ganesha-devel] Implement a FSAL for S3-compatible storage

2018-01-05 Thread DENIEL Philippe

On 01/04/18 13:34, Aurelien RAINONE wrote:

Hello Philippe,

Did you mean that I could directly use the FSAL you developed, 
modifying some code (or not?) in order to use S3 as storage? Or is it 
to share solutions you found to problems I will encounter during 
the development of my FSAL_S3?
My position is that object storage (I consider S3 an "object 
accessor") has its own semantics, which differ from the POSIX ones. 
Because of this, it's quite difficult to fit creatures from the object 
storage world into a FSAL. What I did is a lib in two sub-components that 
emulates a POSIX namespace using object storage as a backend. Since 
object stores have little metadata (and it is quite different from the 
POSIX metadata), a (potentially distributed) KVS is used to manage it. 
What you can try is taking this lib (look here: 
https://github.com/phdeniel/kvsns) and using Redis to store your metadata. 
You'll then have to implement access to storage (that's the "extstore" 
sub-library). In particular, this lib handles inodes and open fds (with 
management of the "open and deleted" case).
Once you have this, I do have a FSAL that makes the interface between 
nfs-ganesha and this lib, exposing the emulated namespace. As said 
previously, it has no support_ex; it relies on the former version of the 
FSAL API.
The simplest thing for you is to have a look at the code: 
https://github.com/phdeniel/kvsns




In both cases, I would certainly be happy to have a look at your project; 
thank you for that.


What do you mean by no compliance with support_ex? Does that imply a 
specific range of Ganesha versions? Other constraints?


Regards,

Aurélien












Re: [Nfs-ganesha-devel] Multiprotocol support in ganesha

2018-03-07 Thread DENIEL Philippe

Hi,

From a "stratospheric" point of view, I see a potentially big issue 
ahead for such a feature: the FSAL has been designed to stay quite close to 
POSIX behavior, while CIFS follows Microsoft file system semantics, which 
are quite different from POSIX.
My experience with 9P integration in Ganesha shows some issues in POSIX 
corner cases (like "delete on close" situations); I can't imagine what 
integrating CIFS support would mean.
Years ago, Tom Talpey came to a bake-a-thon (a few months after 
he joined Microsoft Research) and talked about the issues met by 
Microsoft in implementing NFSv4 support, because of Microsoft semantics. 
He found many, but was quite optimistic. Current state: Windows has no 
NFSv4 support, and the code developed at CITI (e.g. NFSv4 clients for Windows) 
was not pushed into Windows.
Microsoft is not POSIX and POSIX is not Microsoft. They live in two very 
different worlds, and it's probably better so ;-)


    Regards

        Philippe

On 03/06/18 18:20, Pradeep wrote:

Hello,

Are there plans to implement multiprotocol support (NFS and CIFS accessing 
the same export/share) in Ganesha? I believe the current FD cache will need 
changes to support that.


Thanks,
Pradeep




