Hi Sriram,
Thanks for sharing this. Just one comment below.
On Tue, Mar 21, 2017 at 10:12 AM, wrote:
> Hi Raghavendra,
>
> Hi Raghavendra,
>
> My name is Sriram. I've been working with Rajesh on creating a plugin
> structure for snapshot functionality. Below is the document which Rajesh
> had created and I've edited the same with ideas and problems.
At the risk of repeating myself, the POSIX file system underpinnings are not a
concern – that part is understood and handled.
To be clear, I’m also not asking for help to solve this problem, and
SELinux is not an option. To summarize the point of my post:
I’ve gotten what I want to work.
On Tue, Mar 21, 2017 at 10:52 AM, Mackay, Michael
wrote:
> Thanks for the help and advice so far. It’s difficult at times to
> describe what the use case is, so I’ll try here.
>
>
>
> We need to make sure that no one can write to the physical volume in any
> way. We want to be able to be sure that it can't be corrupted.
I don't see how you could accomplish what you're describing purely through
the gluster code. The bricks are mounted on the servers as standard local
POSIX file systems, so there is always the chance that something could
change the data outside of Gluster's control.
This all seems overly restrictive
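To illustrate the point above: anything on the server with write access to a brick's local path can change the data directly, entirely outside Gluster's I/O path. A minimal sketch, using a temp directory as a stand-in for a hypothetical brick path such as /bricks/brick1:

```shell
# A temp dir stands in for a brick's backing directory (hypothetical path).
BRICK=$(mktemp -d)
# A direct write to the brick's backing store; Gluster never sees this.
echo tampered > "$BRICK/file.txt"
cat "$BRICK/file.txt"    # prints: tampered
rm -rf "$BRICK"
```

This is why enforcing read-only purely inside the gluster code cannot protect against local modification of the bricks.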
Hello, Poornima and Soumya,
Thanks for your kind reply.
As you said, for the "first" lookup (called in fuse_first_lookup), xdata will
carry "gfid-req", which will miss the md-cache and then be passed on to the
other translators. However, in fuse_getattr, when nodeid == 1, "gfid-req" is
set in xdata, too.
Thanks for the help and advice so far. It's difficult at times to describe
what the use case is, so I'll try here.
We need to make sure that no one can write to the physical volume in any way.
We want to be able to be sure that it can't be corrupted. We know from working
with Gluster that we
Hi Raghavendra,
My name is Sriram. I've been working with Rajesh on creating a plugin
structure for snapshot functionality. Below is the document which
Rajesh had created and I've edited the same with ideas and problems. Could
you have a look and review so that we could take it forward?
https://d
On 03/21/2017 07:33 PM, Mackay, Michael wrote:
“read-only xlator is loaded at gluster server (brick) stack. so once
the volume is in place, you'd need to enable read-only option using
volume set and then you should be able to mount the volume which would
provide you the read-only access.”
OK, so fair enough, but is the physical volume on wh
On Tue, Mar 21, 2017 at 6:06 PM, Mackay, Michael wrote:
> Samikshan,
>
> Thanks for your suggestion.
>
> From what I understand, the read-only feature (which I had seen and
> researched) is a client option for mounting the filesystem. Unfortunately,
> we need the filesystem itself to be set up read-only
On Tue, Mar 21, 2017 at 12:04:05PM +0800, Zorro Lang wrote:
> On Mon, Mar 20, 2017 at 11:45:17PM -0400, Niels de Vos wrote:
> > On Thu, Mar 16, 2017 at 01:28:19PM +0800, Zorro Lang wrote:
> > > Add basic GlusterFS support. Neither new GlusterFS specific tests
> > > nor related patches are included.
Samikshan,
Thanks for your suggestion.
From what I understand, the read-only feature (which I had seen and
researched) is a client option for mounting the filesystem. Unfortunately, we
need the filesystem itself to be set up read-only, so that no one can modify
it - in other words, we nee
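If the requirement is that the backing filesystem itself be read-only, rather than just the client mount, one OS-level approach is to remount the brick's backing filesystem read-only. A sketch (path hypothetical, requires root, and sits outside Gluster's control):

```shell
# Remount the brick's backing filesystem read-only (hypothetical path).
mount -o remount,ro /bricks/brick1
# Subsequent write attempts on that filesystem fail with EROFS.
```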
Hello folks,
I've upgraded Gerrit to 2.12.7; we run 2.12.2 in production. The changes are
mostly bug fixes; see the full release notes[1]. Please test out staging[2]
and let me know if you notice anything wrong.
[1]: https://www.gerritcodereview.com/releases/2.12.md#2.12.3
[
- Original Message -
> From: "Soumya Koduri"
> To: "Zhitao Li" , "Gluster Devel"
>
> Cc: "Zhitao Li" , 1318078...@qq.com, "Poornima
> Gurusiddaiah"
> Sent: Monday, March 20, 2017 2:21:12 PM
> Subject: Re: [Gluster-devel] What does xdata mean? "gfid-req"?
>
>
>
> On 03/18/2017 06:5