On Sat, May 13, 2017 at 8:44 AM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
>
> On Fri, May 12, 2017 at 8:04 PM, Pat Haley wrote:
>
>>
>> Hi Pranith,
>>
>> My question was about setting up a gluster volume on an ext4 partition.
>> I thought we had the bricks mounted as xfs for compatibility with gluster?
On Fri, May 12, 2017 at 8:04 PM, Pat Haley wrote:
>
> Hi Pranith,
>
> My question was about setting up a gluster volume on an ext4 partition. I
> thought we had the bricks mounted as xfs for compatibility with gluster?
>
Oh, that should not be a problem. It works fine.
>
> Pat
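For reference, a minimal sketch of what putting a brick on an ext4 partition looks like; the device name, hostname, and paths below (/dev/sdb1, server1, /data/brick1, gv0) are hypothetical:

```shell
# Hypothetical device/host/paths; ext4 works as a brick filesystem,
# though xfs is what the documentation usually recommends
mkfs.ext4 /dev/sdb1
mkdir -p /data/brick1
mount /dev/sdb1 /data/brick1
mkdir -p /data/brick1/gv0

# Create and start a single-brick volume on the ext4 mount
gluster volume create gv0 server1:/data/brick1/gv0
gluster volume start gv0
```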
On Mon, May 1, 2017, at 02:34 PM, Gandalf Corvotempesta wrote:
> I'm still thinking that saving (I don't know where, I don't know how)
> a mapping between
> files and bricks would solve many issues and add much more flexibility.
Every system we've discussed has a map. The differences are only
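For context: GlusterFS does not keep a stored per-file map; the distribute (DHT) translator computes the target brick from a hash of the file name plus per-directory hash ranges. A toy sketch of the idea (illustrative only — this uses md5 modulo brick count, not Gluster's actual Davies-Meyer hash or xattr-based ranges):

```shell
# Toy model of hash-based file-to-brick placement (illustrative only)
map_to_brick() {
    local name="$1" nbricks="$2"
    # first 8 hex digits of md5 interpreted as a 32-bit integer
    local h=$((16#$(printf '%s' "$name" | md5sum | cut -c1-8)))
    echo "brick-$((h % nbricks))"
}

# Deterministic: the same name always lands on the same brick
map_to_brick "file.txt" 3
```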
Can you please provide output of following from all the nodes:
cat /var/lib/glusterd/glusterd.info
cat /var/lib/glusterd/peers/*
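If it helps, collecting those two files from every node can be scripted (node names below are hypothetical):

```shell
# Gather glusterd identity and peer state from each node (hypothetical hostnames)
for host in node1 node2 node3; do
    echo "=== $host ==="
    ssh "$host" 'cat /var/lib/glusterd/glusterd.info; cat /var/lib/glusterd/peers/*'
done
```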
On Wed, May 10, 2017 at 5:02 PM, Pawan Alwandi wrote:
> Hello,
>
> I'm trying to upgrade gluster from 3.6.2 to 3.10.1 but don't see the
>
Hi Pranith,
My question was about setting up a gluster volume on an ext4 partition.
I thought we had the bricks mounted as xfs for compatibility with gluster?
Pat
On 05/11/2017 12:06 PM, Pranith Kumar Karampuri wrote:
On Thu, May 11, 2017 at 9:32 PM, Pat Haley wrote:
> I have the scenario to expand a single gluster server with no replica to a
> replica of 2 by adding a new server.
No sharding, right?
>
> Since I have many TBs of data, can I use the first gluster server while
> the data is being replicated to the second new brick or should I wait for
> it to finish?
Hi,
I have the scenario to expand a single gluster server with no replica to a
replica of 2 by adding a new server.
Since I have many TBs of data, can I use the first gluster server while
the data is being replicated to the second new brick or should I wait for
it to finish?
Thanks,
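For what it's worth, the usual sequence for this conversion looks roughly like the following; the volume name, hostname, and brick path are hypothetical, and the volume normally stays online while self-heal copies existing data to the new brick:

```shell
# Sketch: grow a single-brick volume to replica 2 (hypothetical names)
gluster peer probe server2

# Add the new brick while raising the replica count to 2
gluster volume add-brick gv0 replica 2 server2:/data/brick1/gv0

# Kick off a full self-heal so existing data is copied to the new brick,
# then monitor progress
gluster volume heal gv0 full
gluster volume heal gv0 info
```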
Hi Soumya,
Thank you very much for your last response – very useful.
I apologize for the delay; I had to find time for another round of testing.
I have updated the instructions that I provided in my previous e-mail; ***
marks a newly added step.
Instructions:
- Clean installation of CentOS 7.3 with all updates, 3x
On 12/05/2017 11:36, Niels de Vos wrote:
> On Thu, May 11, 2017 at 03:49:27PM +0200, Alessandro Briosi wrote:
>> On 11/05/2017 14:09, Niels de Vos wrote:
>>> On Thu, May 11, 2017 at 12:35:42PM +0530, Krutika Dhananjay wrote:
Niels,
Alessandro's configuration does not have shard enabled.
On Thu, May 11, 2017 at 03:49:27PM +0200, Alessandro Briosi wrote:
> On 11/05/2017 14:09, Niels de Vos wrote:
> > On Thu, May 11, 2017 at 12:35:42PM +0530, Krutika Dhananjay wrote:
> >> Niels,
> >>
> >> Alessandro's configuration does not have shard enabled. So it has
> >> definitely not got
On Thu, May 11, 2017 at 07:40:15PM +0530, Pranith Kumar Karampuri wrote:
> On Thu, May 11, 2017 at 5:39 PM, Niels de Vos wrote:
>
> > On Thu, May 11, 2017 at 12:35:42PM +0530, Krutika Dhananjay wrote:
> > > Niels,
> > >
> > > Alessandro's configuration does not have shard enabled.
On 09/05/17 19:18, hvjunk wrote:
On 03 May 2017, at 07:49, Jiffin Tony Thottan wrote:
On 02/05/17 15:27, hvjunk wrote:
Good day,
I’m busy setting up/testing NFS-HA with GlusterFS storage across VMs
running Debian 8. GlusterFS volume