Cc'ing devel as well for some developer insight.
-- Forwarded message --
To: gluster-users
I had a question about the expected behaviour of simple distributed volumes
when a brick fails, for the following scenarios (as in, will the scenario
succeed for
Hi Rajesh,
I did not want to respond to the question that you'd posed on the zfs
snapshot code (about the volume backend backup) as I am not too familiar
with the code, and the person who coded it is no longer with us. This
was done in a bit of a hurry, so it could be that it was just kept for
I have not gone through this implementation, nor the new iscsi
implementation being worked on for 3.9, but I thought I'd share the design
behind a distributed iscsi implementation that we'd worked on some time
back, based on the istgt code with a libgfapi hook.
The implementation used the idea of
Second that. That kind of interface would be a great idea, although I don't
know how much work that would involve in the snapshot interface redesign.
On Tue, Jun 21, 2016 at 4:24 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> hi,
> Is there a plan to come up with an interface for other snapshotting
> mechanisms such as btrfs/lxd as well?
On Mon, Jun 20, 2016 at 3:16 PM, Rajesh Joseph <rjos...@redhat.com> wrote:
>
>
> On Mon, Jun 20, 2016 at 12:33 PM, Kaushal M <kshlms...@gmail.com> wrote:
>
>> On Mon, Jun 20, 2016 at 11:38 AM, B.K.Raghuram <bkr...@gma
I just wanted to check whether gluster replace-brick with commit force is
officially deprecated in 3.6? Is there any other way to do a planned
replacement of just one of the bricks in a replica pair? Add/remove-brick
requires that new bricks be added in replica-count multiples, which may not
always be feasible.
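For reference, the command sequence being asked about looks roughly like the following. The volume name and brick paths here are hypothetical examples, not from the thread:

```shell
# Replace one brick of a replica pair in place (hypothetical names).
# "commit force" performs the swap immediately; the self-heal daemon
# then repopulates the new brick from the surviving replica.
gluster volume replace-brick myvol \
    server1:/bricks/old server2:/bricks/new \
    commit force

# Optionally trigger a full heal so the new brick is synced promptly.
gluster volume heal myvol full
```

Whether this path remains supported in 3.6, as opposed to the add-brick/remove-brick route, is exactly the question being raised above.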
We have an interesting problem. In order to have a supported version of the
Linux kernel for our CPU (Intel's Avoton), we need kernel version 3.15.
To get this kernel version, we need to be on CentOS 7 (same with the later
versions of Ubuntu). CentOS 7 installs and requires Python 2.7, and if we