No graph changes either on client side or server side. The
> snap-view-server will detect availability of new snapshot from
> glusterd, and will spin up a new glfs_t for the corresponding snap,
> and start returning new list of "names" in readdir(), etc.
I asked if we were dynamically changing the client-side graph.
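To make the quoted mechanism concrete, here is a minimal sketch, assuming
plain libgfapi, of what "spin up a new glfs_t for the corresponding snap"
could amount to. The function name and host/port values below are
illustrative, not the actual snapview-server code:

#include <glusterfs/api/glfs.h>

/* Illustrative only: one glfs_t ("virtual mount") per snapshot volume. */
static glfs_t *
spin_up_snap_handle (const char *snap_volname)
{
        glfs_t *fs = glfs_new (snap_volname);
        if (!fs)
                return NULL;

        /* glusterd serves the snap volume's volfile like any other volume */
        glfs_set_volfile_server (fs, "tcp", "localhost", 24007);

        if (glfs_init (fs) != 0) {   /* fetch volfile, build graph, connect */
                glfs_fini (fs);
                return NULL;
        }

        return fs;
}

The daemon would keep one such handle per activated snapshot and route the
lookups/readdirs that land inside the virtual snapshot directory to the
matching handle.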
On Thu, May 8, 2014 at 12:20 PM, Jeff Darcy wrote:
> > They were: a) snap view generation requires privileged ops to
> > glusterd. So moving this task to the server side solves a lot of those
> > challenges.
>
> Not really. A server-side component issuing privileged requests
> whenever a client
> Overall, it seems like having clients connect *directly* to the
>> snapshot volumes once they've been started might have avoided some
>> complexity or problems. Was this considered?
> Yes this was considered. I have mentioned the two reasons why this was
> dropped in the other mail.
I look forward to reading it.
On Thu, May 8, 2014 at 11:48 AM, Jeff Darcy wrote:
> client graph is not dynamically modified. the snapview-client and
> protocol/server are inserted by volgen and no further changes are made on
> the client side. I believe Anand was referring to "Adding a protocol/client
> instance to connect to protocol/server at the daemon" as an action being
On Thu, May 8, 2014 at 4:53 AM, Jeff Darcy wrote:
> > > * How do clients find it? Are we dynamically changing the client
> > >side graph to add new protocol/client instances pointing to new
> > >snapview-servers, or is snapview-client using RPC directly? Are
> > >the snapview-server ports managed through the glusterd portmapper
> > >interface?
On Thu, May 8, 2014 at 4:48 AM, Jeff Darcy wrote:
>
> If snapview-server runs on all servers, how does a particular client
> decide which one to use? Do we need to do something to avoid hot spots?
>
> Overall, it seems like having clients connect *directly* to the snapshot
> volumes once they've been started might have avoided some complexity or
> problems. Was this considered?
On Thu, May 8, 2014 at 4:45 AM, Ira Cooper wrote:
> Also inline.
>
> - Original Message -
>
> > The scalability factor I mentioned simply had to do with the core
> > infrastructure (depending on very basic mechanisms like the epoll wait
> > thread, the entire end-to-end flow of a single fop like say, a lookup()
> > here). Even though this was contained
> > Overall, it seems like having clients connect *directly* to the
> > snapshot volumes once they've been started might have avoided some
> > complexity or problems. Was this considered?
>
> Can you explain this in more detail? Are you saying that the virtual
> namespace overlay used by the current
On 05/08/2014 05:18 PM, Jeff Darcy wrote:
* Since a snap volume will refer to multiple bricks, we'll need
more brick daemons as well. How are *those* managed?
This is infra handled by the "core" snapshot functionality/feature. When
a snap is created, it is treated not only as an lvm2 thin-lv but as a
glusterfs volume as well.
> > * How do clients find it? Are we dynamically changing the client
> >side graph to add new protocol/client instances pointing to new
> >snapview-servers, or is snapview-client using RPC directly? Are
> >the snapview-server ports managed through the glusterd portmapper
> >interface?
> > * Since a snap volume will refer to multiple bricks, we'll need
> >more brick daemons as well. How are *those* managed?
>
> This is infra handled by the "core" snapshot functionality/feature. When
> a snap is created, it is treated not only as an lvm2 thin-lv but as a
> glusterfs volume as well.
Also inline.
- Original Message -
> The scalability factor I mentioned simply had to do with the core
> infrastructure (depending on very basic mechanisms like the epoll wait
> thread, the entire end-to-end flow of a single fop like say, a lookup()
> here). Even though this was contained
Inline.
On 05/07/2014 10:59 PM, Ira Cooper wrote:
Anand, I also have a concern regarding the user-serviceable snapshot feature.
You rightfully call out the lack of scaling caused by maintaining the gfid ->
gfid mapping tables, and correctly point out that this will limit the use cases
this feature will be applicable to, on the client side.
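For context, the gfid -> gfid tables being referred to map the virtual gfid
exposed in the snapshot view to the real gfid of the object inside a
particular snapshot. A deliberately simplified sketch of such a table
(hypothetical names and layout, not the feature's actual data structures):

#include <stdlib.h>

typedef struct {
        unsigned char virtual_gfid[16]; /* gfid exposed in the snapshot view  */
        unsigned char real_gfid[16];    /* gfid of the object inside the snap */
        char          snap_name[64];    /* which snapshot it belongs to       */
} gfid_map_entry_t;

typedef struct {
        gfid_map_entry_t *entries;
        size_t            count;
        size_t            capacity;
} gfid_map_t;

/* Every object resolved under the snapshot view needs an entry, so the map
 * grows with the working set -- which is the scaling concern raised here. */
static int
gfid_map_insert (gfid_map_t *map, const gfid_map_entry_t *entry)
{
        if (map->count == map->capacity) {
                size_t new_cap = map->capacity ? map->capacity * 2 : 1024;
                gfid_map_entry_t *tmp = realloc (map->entries,
                                                 new_cap * sizeof (*tmp));
                if (!tmp)
                        return -1;
                map->entries  = tmp;
                map->capacity = new_cap;
        }
        map->entries[map->count++] = *entry;
        return 0;
}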
Answers inline.
On 05/07/2014 10:52 PM, Jeff Darcy wrote:
Attached is a basic write-up of the user-serviceable snapshot feature
design (Avati's). Please take a look and let us know if you have
questions of any sort...
A few.
The design creates a new type of daemon: snapview-server.
* Where is it started? One server (selected how) or all?
Hi,
On Wednesday 07 May 2014 10:52 PM, Jeff Darcy wrote:
Attached is a basic write-up of the user-serviceable snapshot feature
design (Avati's). Please take a look and let us know if you have
questions of any sort...
A few.
The design creates a new type of daemon: snapview-server.
* Where is it started? One server (selected how) or all?
- Original Message -
From: "Varun Shastry"
To: "Sobhan Samantaray" , ana...@redhat.com
Cc: gluster-devel@gluster.org, "gluster-users" ,
"Anand Avati"
Sent: Thursday, May 8, 2014 11:16:11 AM
Subject: Re: [Gluster-users] User-serviceable snapshots design
Hi Sobhan,
On Wednesday 07 May 2014 09:12 PM, Sobhan Samantaray wrote:
Hi Sobhan,
On Wednesday 07 May 2014 09:12 PM, Sobhan Samantaray wrote:
I think it's a good idea to include auto-removal of snapshots based on a
time or space threshold, as mentioned in the link below.
http://www.howtogeek.com/110138/how-to-back-up-your-linux-system-with-back-in-time/
Anand, I also have a concern regarding the user-serviceable snapshot feature.
You rightfully call out the lack of scaling caused by maintaining the gfid ->
gfid mapping tables, and correctly point out that this will limit the use cases
this feature will be applicable to, on the client side.
If
> Attached is a basic write-up of the user-serviceable snapshot feature
> design (Avati's). Please take a look and let us know if you have
> questions of any sort...
A few.
The design creates a new type of daemon: snapview-server.
* Where is it started? One server (selected how) or all?
* How do clients find it?
I think it's a good idea to include auto-removal of snapshots based on a
time or space threshold, as mentioned in the link below.
http://www.howtogeek.com/110138/how-to-back-up-your-linux-system-with-back-in-time/
- Original Message -
From: "Anand Subramanian"
To: "Paul Cuzner"
C
Hi Paul, that is definitely doable and a very nice suggestion. It is
just that we probably won't be able to get to that in the immediate code
drop (what we like to call phase-1 of the feature). But yes, let us try
to implement what you suggest for phase-2. Soon :-)
Regards,
Anand
On 05/06/2014, Paul Cuzner wrote:
3. It's good to mention the default value of the option of uss.
Regards
Sobhan
From: "Paul Cuzner"
To: ana...@redhat.com
Cc: gluster-devel@gluster.org, "gluster-users" , "Anand
Avati"
Sent: Tuesday, May 6, 2014 7:27:29 AM
Subject: Re: [Gluster-devel] [Gluster-users] User-serviceable snapshots design
Just one question relating to thoughts around how you apply a filter to the
snapshot view from a user's perspective.
3. It's good to mention the default value of the option of uss.
Regards
Sobhan
From: "Paul Cuzner"
To: ana...@redhat.com
Cc: gluster-devel@gluster.org, "gluster-users" ,
"Anand Avati"
Sent: Tuesday, May 6, 2014 7:27:29 AM
Subject: Re: [Gluster-devel] [Gluster-users] User-serviceable snapshots design
Just one question relating to thoughts around how you apply a filter to the
snapshot view from a user's perspective.
In the "considerations" section, it states - "We plan to introduce a
configurable option to limit the number of snapshots visible under the USS
feature."
Would it not be possible
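On the quoted consideration about a configurable cap, a minimal sketch of
what limiting the visible snapshot count could look like when the listing
is built for the user. The limit value and snapshot names are made up:

#include <stdio.h>

#define SNAP_VIEW_LIMIT 8   /* hypothetical default for the proposed option */

/* snaps[] is assumed to arrive sorted newest-first; only the first 'limit'
 * entries are surfaced in the user-visible listing. */
static void
list_visible_snaps (const char **snaps, int nsnaps, int limit)
{
        for (int i = 0; i < nsnaps && i < limit; i++)
                printf ("%s\n", snaps[i]);
}

int
main (void)
{
        const char *snaps[] = { "snap_2014_05_08", "snap_2014_05_07",
                                "snap_2014_05_06" };

        list_visible_snaps (snaps, 3, SNAP_VIEW_LIMIT);
        return 0;
}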