Hi Frank,
Entries were supposed to gradually move to L2 independent of any other
process--the only connection was intended to be that the L1 -> L2 transition
might induce other compaction, hints, etc as a side effect.
Matt
- "Frank Filz" wrote:
> Take a look:
>
> https://review.gerrithu
Hi Bill,
Here's a thought: why not spend more time doing useful work, and less time
trying to find someone to blame for changed behavior your own work introduced.
Matt
- "William Allen Simpson" wrote:
> Last week, Daniel G had to revert one of my patches, because the CEA
> tests segfault
Very nice, thanks folks.
Matt
- "Supriti Singh" wrote:
> Hello all,
>
> As I said in the concall this week, at SUSE, we are working on
> salt-based deployment system for ceph, called Deepsea.
> https://github.com/SUSE/DeepSea
>
> Recently we added support to automatically deploy Ganesha s
Hi Karol,
- "Karol Mroz" wrote:
> Hi Matt,
>
> Thanks for the input. Please see inline.
>
> On Sun, Nov 27, 2016 at 10:58:34PM -0500, Matt W. Benjamin wrote:
> [...]
> > > 1. `echo` or `touch` a new file
> > >
> > > File creation vi
Hi Bill,
Ok, thanks for the heads up, I at least do support this change.
Matt
- "William Allen Simpson" wrote:
> Today, there was a small mixup on pulls. Malahal's was added to
> duplex-13, while mine were added to duplex-14.
>
> After consultation with Matt, I've changed the repository
Hi Karol,
- "Karol Mroz" wrote:
> Hi Ganesha developers,
>
> I've been playing with the RGW FSAL (ganesha-2.4.1) on a fairly
> recent
> Jewel-based (10.2.3-493-gb4c314a) Ceph cluster and wanted to share
> some
> observations.
>
> In a read-only scenario, the FSAL appears to function well!
Agree. I'm not certain nfs-ganesha has built against Heimdal, and I think the
credential cache bits may also conflict.
Matt
- "Daniel Gryniewicz" wrote:
> On 10/31/2016 12:19 PM, Wyllys Ingersoll wrote:
> > Has anyone worked on making nfs-ganesha build and run with Heimdal
> > Kerberos li
Hi,
- "Akira Hayakawa" wrote:
> Dispatch_Max_Reqs_Xprt is a settable attribute that controls
> stalling of the ganesha server when the outstanding requests in the
> queue
> exceed the set value. The default value is 512, and it can take 1 to
> 2048.
> Ganesha unstalls the server when the queuing r
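For reference, a sketch of how such a setting might appear in ganesha.conf. The parameter name comes from the message above; the enclosing block name and the value chosen here are illustrative assumptions, not verified configuration:

```
# Illustrative only: bounding outstanding requests per transport.
# Dispatch_Max_Reqs_Xprt accepts 1..2048; the default is 512.
NFS_CORE_PARAM
{
    Dispatch_Max_Reqs_Xprt = 1024;
}
```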
Hi Swen,
Yes, please do.
Matt
- "Swen Schillig" wrote:
> On Mo, 2016-10-10 at 11:46 -0400, Matt Benjamin wrote:
> > Hi Swen,
> >
> > Are you going to re-push your recent closed PR as 2 new PRs as
> > planned?
> >
> > Dan has made some time this week to review and test, so if you have
> > t
Hi Malahal,
Is this intended to be a PR?
Matt
- "Malahal Naineni" wrote:
> A umount/mount loop for a krb5 mount was sending RPCSEC_GSS_DESTROY
> calls at times. We end up with a huge number of contexts that are expired
> but
> not destroyed, eventually failing mounts after some time.
>
> Relea
Oh, it is.
Matt
- "Matt W. Benjamin" wrote:
> Hi Malahal,
>
> Is this intended to be a PR?
>
> Matt
>
> - "Malahal Naineni" wrote:
>
> > A umount/mount loop for a krb5 mount was sending RPCSEC_GSS_DESTROY
> > calls at times
Hi,
I don't notice anything offhand. If mount succeeded, the most likely thing is
that the Access Key you've used can't see them. Can you create new
buckets/objects?
Matt
- "yiming xie" wrote:
> env :centos7, nfs-ganesha 2.3, jewel
> nfs server :192.168.77.61
> 1.cmake && make&& make i
Submit a change; I don't know if RGW needs share reservations. Can we refuse
them?
Matt
- "Frank Filz" wrote:
> Crud, while working on the documentation for support_ex, just came
> across a
> method other folks implementing support_ex missed...
>
> There is the possibility that two open2(
Is RDMA feature complete for NFS3? That would be interesting.
Matt
- "Frank Filz" wrote:
> Matt, Could you rebase and submit your c++ compile patches?
>
> Assuming nothing crashes and burns, I will merge those and tag V2.4.0
> by
> tomorrow afternoon.
>
> On the concall, we discussed kee
I rebased yesterday--is there more to do?
Matt
- "Frank Filz" wrote:
> Matt, Could you rebase and submit your c++ compile patches?
>
> Assuming nothing crashes and burns, I will merge those and tag V2.4.0
> by
> tomorrow afternoon.
>
> On the concall, we discussed keeping things a bit in
Hi Frank,
I couldn't push your branch to gerrit, since it was ahead and contained an
email address that is invalid for my account. However, it's pushed to the
linuxbox2 nfs-ganesha repo, as branch rebase-c++.
thanks!
Matt
- "Frank Filz" wrote:
> I have posted a pre-merge for V2.4-rc6, it contains the
Sounds good, thanks Frank!
Matt
- "Frank Filz" wrote:
> Sorry, with a short week for vacation yesterday and today, I ran out
> of time
> to get a merge out. Will shoot for a merge Monday.
>
> And that merge will be rc1. This primarily is a statement that we are
> seeking to close in on a c
inline
- "Frank Filz" wrote:
> From: kanishk rastogi [mailto:kanishk...@gmail.com]
> Sent: Tuesday, July 26, 2016 10:22 PM
> To: Frank Filz
> Cc: Soumya Koduri ;
> nfs-ganesha-devel@lists.sourceforge.net
> Subject: Re: [Nfs-ganesha-devel] FSAL locks implementation
>
>
>
>
> If the only
VFS?
Matt
- "Marc Eshel" wrote:
> Do we have an FSAL that implemented multi-fd ?
>
>
>
> From: "Frank Filz"
> To: Marc Eshel/Almaden/IBM@IBMUS
> Cc:
> Date: 07/25/2016 04:10 PM
> Subject:RE: multi-fd
>
>
>
> > Why do we have reopen2 as one of the multi-fd support
:)
- "Marc Eshel" wrote:
> We are making progress, now we return the file layout attribute
>197.618970099 9141 TRACE_GANESHA: [work-65]
> nfs4_FSALattr_To_Fattr
> :NFS4 :F_DBG :Encoded attr 62, name = FATTR4_FS_LAYOUT_TYPES
>
> but we fail on the first layout get
>210.125317087
++
1. Did you really mean you think it should always be based on ctim?
2. We should probably allow FSALs to override Ganesha's default mtime
behavior; e.g., for AFS or DCE, every vnode has a monotonically increasing
version number (that's called out in RFC 5661 or an errata by David Noveck, I
c
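To make the ctime option concrete, here is a minimal sketch (not Ganesha's actual code; the helper name is hypothetical) of deriving an NFSv4 change value from ctime. It also shows why a ctime-derived value is only as monotonic as the clock, which is the reason an FSAL with a true version counter would want the override described above:

```c
#include <stdint.h>
#include <time.h>

/* Hypothetical helper: derive an NFSv4 change attribute from ctime by
 * packing seconds and nanoseconds into 64 bits.  The result increases
 * whenever ctime does -- but it is NOT monotonic if the clock steps
 * backwards, unlike an AFS/DCE-style per-vnode version counter. */
static inline uint64_t change_from_ctime(const struct timespec *ctim)
{
    return ((uint64_t)ctim->tv_sec << 32) | (uint32_t)ctim->tv_nsec;
}
```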
Hi Frank,
- "Frank Filz" wrote:
> As I am digging more into what actually happens with FSAL_MDCACHE, I
> am
> finding some structural issues that are probably best resolved by
> hastening
> the migration to the support_ex (multiple-fd) API.
>
> The biggest issue that I see is the disappeara
Hi Malahal,
IIRC, we're -mostly- relying on environmental settings. Improvements welcome.
Matt
- "Malahal Naineni" wrote:
> We seem to have some network (tcp/ip) related issues, one of the guys
> asked "Does Ganesha use tcp autotuning? Or fixed buffer?". Does
> anyone
> know answer to thi
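On the autotuning question: on Linux, a socket the server never calls setsockopt(SO_RCVBUF/SO_SNDBUF) on is auto-tuned per the tcp_rmem/tcp_wmem sysctls, while an explicit setsockopt pins a fixed size and disables autotuning for that socket. A small sketch (illustrative, not Ganesha code) for inspecting the effective buffer size:

```c
#include <sys/socket.h>

/* Sketch: read the effective receive-buffer size for a socket.
 * If the application never sets SO_RCVBUF, this reflects the
 * kernel's auto-tuned value rather than a fixed buffer. */
static int current_rcvbuf(int fd)
{
    int sz = 0;
    socklen_t len = sizeof(sz);
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &sz, &len) != 0)
        return -1;
    return sz;
}
```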
I think this is probably the best approach for now, ++.
Matt
- "Frank Filz" wrote:
> From: Swen Schillig [mailto:s...@vnet.ibm.com]
> > Regardless of what's decided on how to react to out of mem
> conditions, we
> > must check and detect them, fast and reliable, always.
> > It is not accept
Hi,
- "William Allen Simpson" wrote:
> Updated to today's ganesha V2.3-rc5.
>
> Frank, could you please change nfs-ganesha/ntirpc to default
> branch duplex-12? Matt forgot, and I've reminded him twice.
Ok, fine, I've done this. I don't think this was a blocker.
>
> Dan, you forgot to
Hi Malahal,
I'll send something in the next week or so.
Matt
- "Malahal Naineni" wrote:
> This is an old email but any progress on this?
>
> Regards, Malahal.
>
> Frank Filz [ffilz...@mindspring.com] wrote:
> >Hmm, that looks like a constant that needs to be replaced with a
> config
Hi Dirk,
Yes, Ganesha creates an expanded handle from your FSAL private handle and its
own steering data. See the logic in support/nfs_filehandle_mgmt.c.
Matt
- "Dirk Jagdmann" wrote:
> > The exportid is not meaningful to the client, however, since it is
> part of
> > the handle, and the
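A sketch of the handle-expansion idea described above. The field names and layout here are illustrative, not the actual struct in the Ganesha sources: the wire handle handed to clients is the FSAL's opaque handle prefixed with server "steering" data such as a format version and the export id:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical layout of an expanded wire handle: server steering
 * data followed by the FSAL-private opaque handle bytes. */
struct wire_handle {
    uint8_t  fhversion;     /* handle format version */
    uint16_t exportid;      /* which export the handle belongs to */
    uint8_t  fs_len;        /* length of the FSAL-private part */
    uint8_t  fsopaque[100]; /* FSAL-private handle bytes */
};

/* Wrap an FSAL-private handle; returns the wire handle's total size. */
static size_t wrap_handle(struct wire_handle *out, uint16_t exportid,
                          const void *fsal_data, uint8_t len)
{
    out->fhversion = 1;
    out->exportid = exportid;
    out->fs_len = len;
    memcpy(out->fsopaque, fsal_data, len);
    return offsetof(struct wire_handle, fsopaque) + len;
}
```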
Hi,
I think we're not staying completely on-point.
I agree with Wei's suggestion that different transports likely need different
affinity strategies, and presumably you do too.
There is probably no big issue with affinity syscalls--we should not do them
inline with work, but different pools (e
Hi,
The svc_init() call is intended to precede any call to tirpc_control, and
we do want this call to default-initialize.
Regards,
Matt
- fangw...@huawei.com wrote:
> From: Wei Fang
>
> ->warnx is assigned by default. If we assign here when
> SVC_INIT_WARNX is unset, the ->warnx assig
Hi,
If we permit conditional linkage against a static libntirpc, I would
strongly prefer that it be explicit (config option that defaults to
OFF).
I don't think static linkage actually adds much to ease of use, since
everybody should understand platform rules for shlibs.
Matt
- "Daniel Gryn
I'm not sure I believe that there is a hang case, but if there is, the
simple fix for it is to bound the offending wait.
I have a branch that uses a lock-free bounded MPMC queue based on work we
did last year in the Ceph codebase. I have something working, but not
efficiently, and I haven't had time t
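For context, bounded MPMC queues of the kind mentioned above are commonly implemented along the lines of Dmitry Vyukov's array-based design. This is a compact C11 sketch of that general technique, not the branch's actual code:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Vyukov-style bounded MPMC queue: each cell carries a sequence
 * number that tells producers/consumers whether it is free or full. */
typedef struct {
    atomic_size_t seq;
    void *data;
} cell_t;

typedef struct {
    cell_t *cells;
    size_t mask;        /* capacity - 1; capacity is a power of two */
    atomic_size_t enq;  /* next enqueue position */
    atomic_size_t deq;  /* next dequeue position */
} mpmc_q;

static bool mpmc_init(mpmc_q *q, size_t capacity)
{
    if (capacity < 2 || (capacity & (capacity - 1)))
        return false;
    q->cells = calloc(capacity, sizeof(cell_t));
    if (!q->cells)
        return false;
    for (size_t i = 0; i < capacity; i++)
        atomic_init(&q->cells[i].seq, i);
    q->mask = capacity - 1;
    atomic_init(&q->enq, 0);
    atomic_init(&q->deq, 0);
    return true;
}

static bool mpmc_push(mpmc_q *q, void *data)
{
    size_t pos = atomic_load_explicit(&q->enq, memory_order_relaxed);
    for (;;) {
        cell_t *c = &q->cells[pos & q->mask];
        size_t seq = atomic_load_explicit(&c->seq, memory_order_acquire);
        intptr_t diff = (intptr_t)seq - (intptr_t)pos;
        if (diff == 0) {
            /* Cell is free; claim the slot, then publish the data. */
            if (atomic_compare_exchange_weak_explicit(&q->enq, &pos, pos + 1,
                    memory_order_relaxed, memory_order_relaxed)) {
                c->data = data;
                atomic_store_explicit(&c->seq, pos + 1, memory_order_release);
                return true;
            }
        } else if (diff < 0) {
            return false;   /* queue full */
        } else {
            pos = atomic_load_explicit(&q->enq, memory_order_relaxed);
        }
    }
}

static bool mpmc_pop(mpmc_q *q, void **data)
{
    size_t pos = atomic_load_explicit(&q->deq, memory_order_relaxed);
    for (;;) {
        cell_t *c = &q->cells[pos & q->mask];
        size_t seq = atomic_load_explicit(&c->seq, memory_order_acquire);
        intptr_t diff = (intptr_t)seq - (intptr_t)(pos + 1);
        if (diff == 0) {
            /* Cell is full; claim it, then mark it free for reuse. */
            if (atomic_compare_exchange_weak_explicit(&q->deq, &pos, pos + 1,
                    memory_order_relaxed, memory_order_relaxed)) {
                *data = c->data;
                atomic_store_explicit(&c->seq, pos + q->mask + 1,
                                      memory_order_release);
                return true;
            }
        } else if (diff < 0) {
            return false;   /* queue empty */
        } else {
            pos = atomic_load_explicit(&q->deq, memory_order_relaxed);
        }
    }
}
```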
From memory.
The code isn't trying to eliminate that race, no. If/when it arises, and all
threads are idle, then the timeout will ensure progress.
In particular, if there is no work for -any- threads, it is desirable for at
least most to go idle. As work ramps up, threads will be woken.
The
Hi,
inline
- "Malahal Naineni" wrote:
>
> perf record shows that too much time is spent in malloc/free
> functions.
> Reported functions are alloc_nfs_request, alloc_nfs_res, and few
> objects
> in src/xdr_ioq.c file. alloc_nfs_res seems thread specific, so could
> be
> allocated one per t
Thanks, guys!
Matt
- "William Allen Simpson" wrote:
> On 8/3/15 5:13 AM, Dominique Martinet wrote:
> > Hi,
> >
> > William Allen Simpson wrote on Mon, Aug 03, 2015 at 04:45:33AM
> -0400:
> >> I've pushed some ntirpc changes to shutdown that might fix
> >> the problems reported by Dominique.
Bill,
Please don't make this change. We can discuss tuning changes at your leisure.
Thanks,
Matt
- "William Allen Simpson" wrote:
> As I've been fixing the problem with ntirpc *_cleanup(), have
> discovered that it all passes through ntirpc svc_rqst.[ch].
>
> Trying to grok it, it's all
Addressing this (and the sequence we'll address it in) is already in the work
plan, right?
Matt
- "William Allen Simpson" wrote:
> Every time there's an event, the current code re-arms the fd
> from the epoll.
>
> That means 3 system calls for each incoming event. Hopefully,
> I'm not doi
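For readers following along, the re-arm pattern being discussed looks roughly like this (a sketch, not the ntirpc code): with EPOLLONESHOT, each delivered event disables the fd, and the handler must issue an extra epoll_ctl(EPOLL_CTL_MOD) to re-arm it, which is where the per-event syscall overhead comes from. Edge-triggered (EPOLLET) registration avoids the re-arm at the cost of having to drain the fd fully on each wakeup:

```c
#include <sys/epoll.h>

/* Re-arm an fd that was registered with EPOLLONESHOT: after an event
 * is delivered, the fd stays disabled until this EPOLL_CTL_MOD runs. */
static int rearm_fd(int epfd, int fd)
{
    struct epoll_event ev = {
        .events = EPOLLIN | EPOLLONESHOT,
        .data.fd = fd,
    };
    return epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
}
```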
Hi Marc,
Probably. I was telling Malahal in IRC that we have code changes that
will reduce lock contention for xprt->xp_lock a LOT, and more changes coming
that will address latency in dispatch and reduce locking in SAL. The first
of those changes is hopefully still coming this week.
o SAL layer in some
cases), esp. since iirc an eventual step in Dan's MDCACHE work involved lock
reorganization, and a lot things will likely shake out from that?
Matt
----- "Matt W. Benjamin" wrote:
> This feels to be on the right track.
>
> Matt
>
> - "Da
This feels to be on the right track.
Matt
- "Daniel Gryniewicz" wrote:
> On 07/28/2015 03:51 PM, Frank Filz wrote:
>
> >
> > I was avoiding the free standing blob (at least for NFS, 9p it was
> not so easy), so for NFS state_t objects, the fsal_fd is allocated
> along with the state_t.
> >
Hi,
- "Frank Filz" wrote:
> > The problem is that "fd"s are an internal detail that we do not want
> leaking
> > out into the general server.
> >
> > There are several reasons for this adduced in my and Adam's comments
> in
> > gerrit.
> >
> > I may be missing something, but what is the ge
The problem is that "fd"s are an internal detail that we do not want
leaking out into the general server.
There are several reasons for this adduced in my and Adam's comments
in gerrit.
I may be missing something, but what is the generic (non-VFS or few
other FSAL-specific) concept that is being
Hi Krishna,
I don't have and haven't seen reports about issues. Please report any problems
you find.
Regards,
Matt
- "Krishna Harathi" wrote:
> Is there any read/write performance hit by using multiple CLIENT
> sections in configuration to filter client access?
>
>
> Also I assume IPv
Yup, will do. Was going to do it over the weekend, but got tied up.
Matt
- "Frank Filz" wrote:
> Looks like we can clear all of these. I think they should mostly all
> be marked as intentional, our use of flags to control when lock/unlock
> is done is hard for an automated tool to follow (
Hi Dirk,
Well, the proxy fsal proxies to another NFS server over TCP or UDP, so
you might find that helpful.
The Ceph fsal is a fairly minimal example of a fsal that delegates to
a library with a posix-like api.
The problem of how to do IPC between your fsal and a remote using a
custom protocol
Thanks, Frank
- "Frank Filz" wrote:
> There will be no concall this week.
>
> As of this time, I don't plan another dev-release this week. I just
> pushed
> Bill's RPC changes (with libntirpc update) as dev-10. Please make sure
> you
> rebase all patch submissions and make sure you update y
*thread*. You see two
> stack
> traces because I enabled valgrind to keep origins.
>
> Regards, Malahal.
>
> Matt W. Benjamin [m...@cohortfs.com] wrote:
> > Hi,
> >
> > So this is all happening in clnt_vc_call()? I'll have a look at
> it.
> &g
Summarizing from the call.
Presumably we'd normally fix this without taking larger reorganization
changes from duplex-12, but IIRC, there's a larger list of things, and we
agreed to
bring a list to the next call for discussion.
Matt
- "William Allen Simpson" wrote:
> On 6/24/15 1:24 PM, Malah
Hi,
So this is all happening in clnt_vc_call()? I'll have a look at it.
Matt
- "Malahal Naineni" wrote:
> Hi All,
>
> valgrind reported that clnt_vc_geterr() is accessing freed memory.
> Code review shows that nlm_send_async() calls clnt_call() and then
> calls
> clnt_sperror() on
Hi Bill,
This isn't the current work plan/design as I understand it, so I'd appreciate
it if you'd walk through your blocker with us here.
Also "I" is me, as I could have told you.
Thank you.
Matt
- "William Allen Simpson" wrote:
> I've been struggling with the fridge threads. For NFS/RDM
> >Thanks.
> >Regards.
> >Krishna Harathi
> >On Sat, May 16, 2015 at 8:36 AM, Matt W. Benjamin
> <[1]m...@cohortfs.com>
> >wrote:
> >
> > Hi Krishna,
> >
> > If you're using a VFS-like FSAL, I wouldn
Hi Krishna,
If you're using a VFS-like FSAL, I wouldn't think you'd want to be
hard limited to 4K open files.
It could be useful to have more flexibility in (choice of?) LRU
reclaim strategy, but to start with, I'd raise this limit (on
Linux, /proc/sys/fs/nr_open, etc).
Matt
- "Krishna Hara
Hi,
There clearly are people who think reviewing something large in
small chunks is categorically easier, but, I don't completely
understand the reasoning.
1) It seems to me that whether the small chunks are easier to
actually understand, depends on whether the chunks can really
be understood in
Hi,
- "GerritForge Support" wrote:
> >> From what I can tell, Frank finds problematic
> >
> > 1) a lack in gerrit of a "changeset" concept, plus, apparently
>
> It actually exists and is called “topic”. It will be very soon
> possible as well to merge a whole topic atomically with one clic
Hi,
As I mentioned in an IRC discussion, my recollection from when OpenAFS
implemented gerrit (also their own installation, but not to my
knowledge customized), it was basically mandated that developers
submit "one patch per change."
From what I can tell, Frank finds problematic
1) a lack in ge
It seems to me that openafs gerrit (dating back to 2008 or earlier) had an
ability to keep
sets of patches pushed together, as a kind of changeset.
I'll ask around.
Matt
- "Frank Filz" wrote:
> Ok, using gerrithub is really feeling like it's not going to work for
> me...
>
> Merging a pa
Hi,
The Ceph FSAL uses the libcephfs (and librados) to consume the cluster.
It's not re-exporting a mounted Ceph filesystem.
Matt
- "Timofey Titovets" wrote:
> showmount -e 172.20.252.12
> Export list for 172.20.252.12:
> / (everyone)
>
> 2015-04-24 10:33 GMT+03:00 DENIEL Philippe :
> > H