Re: [Gluster-users] [ovirt-users] Replicated Glusterfs on top of ZFS

2017-03-06 Thread Sahina Bose
On Mon, Mar 6, 2017 at 3:21 PM, Arman Khalatyan  wrote:

>
>
> On Fri, Mar 3, 2017 at 7:00 PM, Darrell Budic 
> wrote:
>
>> Why are you using an arbitrator if all your HW configs are identical? I’d
>> use a true replica 3 in this case.
>>
>>
> This was just the GUI suggestion: when I was creating the cluster it asked
> for the 3 hosts, and I did not even know that an arbiter does not keep the
> data.
> I am not sure whether I can change the type of the gluster volume to a true
> replica 3 in the running system; probably I need to destroy the whole cluster.
>
>
>
>> Also in my experience with gluster and vm hosting, the ZIL/slog degrades
>> write performance unless it’s a truly dedicated disk. But I have 8 spinners
>> backing my ZFS volumes, so trying to share a sata disk wasn’t a good zil.
>> If yours is dedicated SAS, keep it, if it’s SATA, try testing without it.
>>
>>
> We have also had several huge systems running with ZFS quite successfully
> over the years. This was an idea to use ZFS + GlusterFS for an HA solution.
>
>
>> You don’t have compression enabled on your zfs volume, and I’d recommend
>> enabling relatime on it. Depending on the amount of RAM in these boxes, you
>> probably want to limit your zfs arc size to 8G or so (1/4 total ram or
>> less). Gluster just works volumes hard during a rebuild, what’s the problem
>> you’re seeing? If it’s affecting your VMs, using sharding and tuning client
>> & server threads can help avoid interruptions to your VMs while repairs are
>> running. If you really need to limit it, you can use cgroups to keep it
>> from hogging all the CPU, but it takes longer to heal, of course. There are
>> a couple older posts and blogs about it, if you go back a while.
>>
>
> Yes, I saw that glusterfs is CPU/RAM hungry! 99% of all 16 cores were used
> just for healing 500GB VM disks. It was taking almost forever compared with
> NFS storage (single disk + ZFS SSD cache; of course, one pays a penalty for
> the HA :) )
>

Is your gluster volume configured to use the sharding feature? Could you
provide the output of gluster vol info?
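
For reference, a minimal sketch of how to check and, if desired, enable
sharding and raise self-heal parallelism (the volume name is a placeholder;
the options assume glusterfs 3.8 or later, and sharding only applies to files
created after it is enabled):

  # Check whether sharding is already enabled on the volume.
  gluster volume info VOLNAME | grep -i shard

  # Enable sharding (only affects newly created files) and pick a shard size.
  gluster volume set VOLNAME features.shard on
  gluster volume set VOLNAME features.shard-block-size 64MB

  # Let the self-heal daemon heal more shards in parallel.
  gluster volume set VOLNAME cluster.shd-max-threads 4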


>
>
>

[Gluster-users] Deleting huge file from glusterfs hangs the cluster for a while

2017-03-06 Thread GEORGI MIRCHEV
Hi,

I have deleted two large files (around 1 TB each) via the gluster client
(mounted on the /mnt folder). I used a simple rm command, e.g. "rm
/mnt/hugefile". This resulted in a hang of the cluster (no I/O could be done,
the VM hung). After a few minutes my SSH connection to the gluster node was
disconnected and I had to reconnect, which was very strange, probably some
kind of timeout. Nothing in dmesg, so it was probably SSH that terminated the
connection.

After that the cluster works and everything seems fine; the file is gone on
the client, but the space is not reclaimed.

The deleted file is also gone from bricks, but the shards are still there and
use up all the space.

I need to reclaim the space. How do I delete the shards / other metadata for a
file that no longer exists?
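
For anyone in the same situation, a rough sketch (not an official procedure;
the paths and GFID below are placeholders) of how orphaned shards can be
located: shards are stored as <base-gfid>.<n> files under the hidden .shard
directory of each brick.

  # The base file's GFID can be read from the mount before deleting it, e.g.:
  #   getfattr -n glusterfs.gfid.string /mnt/hugefile
  BRICK=/data/brick1/gv0                           # placeholder brick path
  BASE_GFID=00000000-0000-0000-0000-000000000000   # placeholder GFID

  # List the leftover shard pieces for that GFID on this brick.
  ls -lh "$BRICK/.shard/" | grep "$BASE_GFID"

  # If the base GFID no longer exists under $BRICK/.glusterfs, those shard
  # files are orphaned and can be removed on each replica to reclaim space.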


Versions:
glusterfs-server-3.8.9-1.el7.x86_64
glusterfs-client-xlators-3.8.9-1.el7.x86_64
glusterfs-geo-replication-3.8.9-1.el7.x86_64
glusterfs-3.8.9-1.el7.x86_64
glusterfs-fuse-3.8.9-1.el7.x86_64
vdsm-gluster-4.19.4-1.el7.centos.noarch
glusterfs-cli-3.8.9-1.el7.x86_64
glusterfs-libs-3.8.9-1.el7.x86_64
glusterfs-api-3.8.9-1.el7.x86_64

--
Georgi Mirchev



Re: [Gluster-users] [ovirt-users] How to force glusterfs to use RDMA?

2017-03-06 Thread Denis Chaplygin
Hello!

On Fri, Mar 3, 2017 at 12:18 PM, Arman Khalatyan  wrote:

> I think there is a bug in the vdsmd checks;
>
> OSError: [Errno 2] Mount of `10.10.10.44:/GluReplica` at
> `/rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica` does not exist
>


>
> 10.10.10.44:/GluReplica.rdma   3770662912 407818240 3362844672  11%
> /rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica
>

I suppose that vdsm is not able to handle the .rdma suffix on the volume
path. Could you please file a bug for that issue so we can track it?
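
For reference, a hedged sketch of how RDMA can be forced from a plain
glusterfs mount outside of oVirt (assuming the volume was created with
transport tcp,rdma; the mount point is a placeholder):

  # Ask the FUSE client to use the RDMA transport explicitly...
  mount -t glusterfs -o transport=rdma 10.10.10.44:/GluReplica /mnt/glustertest

  # ...which is effectively the same as mounting the RDMA volfile directly.
  mount -t glusterfs 10.10.10.44:/GluReplica.rdma /mnt/glustertest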

Re: [Gluster-users] one brick vs multiple brick on the same ZFS zpool.

2017-03-06 Thread Dung Le
Hi,

How about hardware RAID with XFS? I assume it would be faster than ZFS RAID,
since the RAID controller has a physical cache for reads and writes.

Thanks,


> On Mar 6, 2017, at 3:08 PM, Gandalf Corvotempesta 
>  wrote:
> 
> Hardware RAID with ZFS should be avoided.
> ZFS needs direct access to the disks, and with hardware RAID you have a
> controller in the middle.
> 
> If you need ZFS, skip the hardware RAID and use ZFS's own RAID.
> 
> On 6 Mar 2017 at 9:23 PM, "Dung Le"  wrote:
> Hi,
> 
> Since I am new to Gluster, I need your advice. I have 2 different Gluster
> configurations:
> 
> Purpose: I need to create 5 Gluster volumes. I am running gluster version
> 3.9.0.
> 
> Config #1: 5 bricks from one zpool
> 3 storage nodes.
> Using hardware raid to create one array with raid5 (9+1) per storage node 
> Create a zpool on top of the array per storage node
> Create 5 ZFS shares (each share is a brick) per storage node
> Create 5 volumes with replica of 3 using 5 different bricks.
> 
> Config #2: 1 brick from one zpool
> 3 storage nodes.
> Using hardware raid to create one array with raid5 (9+1) per storage node
> Create a zpool on top of the array per storage node
> Create 1 ZFS share per storage node. Use the share as the brick.
> Create 5 volumes with replica of 3 using the same share.
> 
> 1) Is there any difference in performance between the two configs?
> 2) Will a single brick handle parallel writes as well as multiple bricks?
> 3) Since I am using a hardware RAID controller, are there any options I need
> to enable or disable for the gluster volume?
> 
> Best Regards,
> ~ Vic Le
> 
> 

Re: [Gluster-users] one brick vs multiple brick on the same ZFS zpool.

2017-03-06 Thread Gandalf Corvotempesta
Hardware RAID with ZFS should be avoided.
ZFS needs direct access to the disks, and with hardware RAID you have a
controller in the middle.

If you need ZFS, skip the hardware RAID and use ZFS's own RAID.

On 6 Mar 2017 at 9:23 PM, "Dung Le"  wrote:

> Hi,
>
> Since I am new to Gluster, I need your advice. I have 2 different Gluster
> configurations:
>
> *Purpose:* I need to create 5 Gluster volumes. I am running gluster
> version 3.9.0.
>
> *Config #1: 5 bricks from one zpool*
>
>- 3 storage nodes.
>- Using hardware raid to create one array with raid5 (9+1) per storage
>node
>- Create a zpool on top of the array per storage node
>- Create 5 ZFS shares (each share is a brick) per storage node
>- Create 5 volumes with replica of 3 using 5 different bricks.
>
>
> *Config #2: 1 brick from one zpool*
>
>- 3 storage nodes.
>- Using hardware raid to create one array with raid5 (9+1) per storage
>node
>- Create a zpool on top of the array per storage node
>- Create 1 ZFS share per storage node. Use the share as the brick.
>- Create 5 volumes with replica of 3 using the same share.
>
>
> 1) Is there any difference in performance between the two configs?
> 2) Will a single brick handle parallel writes as well as multiple bricks?
> 3) Since I am using a hardware RAID controller, are there any options I need
> to enable or disable for the gluster volume?
>
> Best Regards,
> ~ Vic Le
>
>

Re: [Gluster-users] How many clients can mount a single volume?

2017-03-06 Thread Robert Hajime Lanning

There is a lot of data missing...

Using FUSE clients:

For every write, there are three writes on the network (one per brick in
your x3 config).  So, outside of bandwidth requirements, you have TCP
connection limits, which come with file-descriptor limits.


If all three bricks are on the same server, then you have a max for that 
server divided by three. If each brick is on its own server, then you 
don't have that 1/3rd problem.


The idea of GlusterFS is horizontal scaling, so you would add more 
bricks on more hosts to scale when you reach that arbitrary limit.



Using NFS:

You have the single connection from the client to the NFS server, then 
the fan out to the bricks.
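
As a worked example with the numbers from this thread: each FUSE client keeps
one TCP connection per brick, so 30 clients on a 1x3 volume mean roughly 90
brick-side connections in total (plus management connections). A hedged
sketch of what to watch on the brick servers (the volume name is a
placeholder):

  # Clients currently connected to each brick of the volume.
  gluster volume status VOLNAME clients

  # File-descriptor headroom of a brick process.
  cat /proc/$(pgrep -f glusterfsd | head -n1)/limits | grep 'open files'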


On 03/02/17 21:14, Tamal Saha wrote:

Hi,
Anyone has any comments about this issue? Thanks again.

-Tamal

On Mon, Feb 27, 2017 at 8:34 PM, Tamal Saha  wrote:


Hi,
I am running a GlusterFS cluster in Kubernetes. This has a single
1x3 volume. But this volume is mounted by around 30 other docker
containers. Basically each docker container represents a separate
"user" in our multi-tenant application. As a result there is no
conflicting writes among the "user"s. Each user writes to their
own folder in the volume.

My question is how many clients can mount a GlusterFS volume
before it becomes a performance issue?

Thanks,
-Tamal






--
Mr. Flibble
King of the Potato People
http://www.linkedin.com/in/RobertLanning


Re: [Gluster-users] Any way to get rid of zfs error messages?

2017-03-06 Thread Dung Le
Hi Niels,

Did this bug get fixed in version 3.9.0?

Thanks,
~ Vic Le

> On Mar 6, 2017, at 12:36 PM, Niels de Vos  wrote:
> 
> On Mon, Mar 06, 2017 at 05:26:56PM +0100, Arman Khalatyan wrote:
>> hi, I have replicated glusterfs on 3 nodes with ZFS; the logs are flooded
>> with inode errors
>> 
>> [2017-03-06 16:24:15.019386] E [MSGID: 106419]
>> [glusterd-utils.c:5458:glusterd_add_inode_size_to_dict] 0-management: could
>> not find (null) to getinode size for zclei21/01 (zfs): (null) package
>> missing?
>> 
>> Any way to fix it?
>> 
>> glusterfs --version
>> glusterfs 3.8.9 built on Feb 13 2017 10:03:47
> 
> We'll need to add the zfs command to fetch the size of the inodes. The
> commands for the different filesystems that we currently recognize are
> in the 'fs_info' structure at:
>  
> https://github.com/gluster/glusterfs/blob/master/xlators/mgmt/glusterd/src/glusterd-utils.c#L5895
> 
> Could you file a bug and paste the command and output in there? We can
> then add it for a 3.8 update.
>  
> https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&version=3.8&component=glusterd
> 
> Of course, patches are welcome too :-)
>  
> http://gluster.readthedocs.io/en/latest/Developer-guide/Simplified-Development-Workflow/
> 
> Thanks,
> Niels


Re: [Gluster-users] Any way to get rid of zfs error messages?

2017-03-06 Thread Darrell Budic

> On Mar 6, 2017, at 2:36 PM, Niels de Vos  wrote:
> 
> On Mon, Mar 06, 2017 at 05:26:56PM +0100, Arman Khalatyan wrote:
>> hi, I have replicated glusterfs on 3 nodes with ZFS; the logs are flooded
>> with inode errors
>> 
>> [2017-03-06 16:24:15.019386] E [MSGID: 106419]
>> [glusterd-utils.c:5458:glusterd_add_inode_size_to_dict] 0-management: could
>> not find (null) to getinode size for zclei21/01 (zfs): (null) package
>> missing?
>> 
>> Any way to fix it?
>> 
>> glusterfs --version
>> glusterfs 3.8.9 built on Feb 13 2017 10:03:47
> 
> We'll need to add the zfs command to fetch the size of the inodes. The
> commands for the different filesystems that we currently recognize are
> in the 'fs_info' structure at:
>  
> https://github.com/gluster/glusterfs/blob/master/xlators/mgmt/glusterd/src/glusterd-utils.c#L5895
>  
> 

ZFS “doesn’t have inodes” and thus there is no way to get their size. Would it
be appropriate to treat it like Btrfs's dynamic inodes, gluster-wise? Would
adding “zfs, NULL, NULL, NULL, NULL” to the detection code avoid those errors?
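
For context, a hedged illustration of what the existing per-filesystem
detection runs (the brick path and device below are placeholders), and why
there is no obvious ZFS counterpart:

  # XFS and ext4 report a fixed inode size that glusterd can parse:
  xfs_info /bricks/brick1 | grep isize          # e.g. "isize=512"
  tune2fs -l /dev/sdb1 | grep 'Inode size'      # e.g. "Inode size:  256"

  # ZFS allocates dnodes dynamically, so there is no fixed inode size to
  # report; a NULL entry for zfs (as is done for btrfs) would simply skip
  # the lookup and silence the error.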

Re: [Gluster-users] Any way to get rid of zfs error messages?

2017-03-06 Thread Niels de Vos
On Mon, Mar 06, 2017 at 01:06:36PM -0800, Dung Le wrote:
> Hi Niels,
> 
> Did this bug get fixed in version 3.9.0?

No, it needs to get fixed in the master branch, and then we can backport
the fix to the maintained versions. GlusterFS 3.9 is not maintained
anymore since 3.10 is released, and I do not expect that there will be
any further 3.9 updates.

The current versions that we maintain are 3.8 and 3.10. More details
about the Long-Term-Maintenance and Short-Term-Maintenance versions can
be found on https://www.gluster.org/community/release-schedule/

HTH,
Niels


> 
> Thanks,
> ~ Vic Le
> 
> > On Mar 6, 2017, at 12:36 PM, Niels de Vos  wrote:
> > 
> > On Mon, Mar 06, 2017 at 05:26:56PM +0100, Arman Khalatyan wrote:
> >> hi, I have replicated glusterfs on 3 nodes with ZFS; the logs are flooded
> >> with inode errors
> >> 
> >> [2017-03-06 16:24:15.019386] E [MSGID: 106419]
> >> [glusterd-utils.c:5458:glusterd_add_inode_size_to_dict] 0-management: could
> >> not find (null) to getinode size for zclei21/01 (zfs): (null) package
> >> missing?
> >> 
> >> Any way to fix it?
> >> 
> >> glusterfs --version
> >> glusterfs 3.8.9 built on Feb 13 2017 10:03:47
> > 
> > We'll need to add the zfs command to fetch the size of the inodes. The
> > commands for the different filesystems that we currently recognize are
> > in the 'fs_info' structure at:
> >  
> > https://github.com/gluster/glusterfs/blob/master/xlators/mgmt/glusterd/src/glusterd-utils.c#L5895
> > 
> > Could you file a bug and paste the command and output in there? We can
> > then add it for a 3.8 update.
> >  
> > https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&version=3.8&component=glusterd
> > 
> > Of course, patches are welcome too :-)
> >  
> > http://gluster.readthedocs.io/en/latest/Developer-guide/Simplified-Development-Workflow/
> > 
> > Thanks,
> > Niels
> 



[Gluster-users] one brick vs multiple brick on the same ZFS zpool.

2017-03-06 Thread Dung Le
Hi,

Since I am new to Gluster, I need your advice. I have 2 different Gluster
configurations:

Purpose: I need to create 5 Gluster volumes. I am running gluster version
3.9.0.

Config #1: 5 bricks from one zpool
3 storage nodes.
Using hardware raid to create one array with raid5 (9+1) per storage node 
Create a zpool on top of the array per storage node
Create 5 ZFS shares (each share is a brick) per storage node
Create 5 volumes with replica of 3 using 5 different bricks.

Config #2: 1 brick from one zpool
3 storage nodes.
Using hardware raid to create one array with raid5 (9+1) per storage node
Create a zpool on top of the array per storage node
Create 1 ZFS share per storage node. Use the share as the brick.
Create 5 volumes with replica of 3 using the same share.

1) Is there any difference in performance between the two configs?
2) Will a single brick handle parallel writes as well as multiple bricks?
3) Since I am using a hardware RAID controller, are there any options I need
to enable or disable for the gluster volume?
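
For illustration, a hedged sketch of what Config #1 could look like on the
command line (pool, dataset, and host names are placeholders; the volume
create would be repeated for vol2 through vol5):

  # On each storage node: zpool on top of the hardware RAID array,
  # then one dataset per brick.
  zpool create tank /dev/sdb
  zfs create tank/brick1        # and likewise brick2 through brick5

  # From any one node: one replica-3 volume per brick set.
  gluster volume create vol1 replica 3 \
      node1:/tank/brick1/data node2:/tank/brick1/data node3:/tank/brick1/data
  gluster volume start vol1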

Best Regards,
~ Vic Le


Re: [Gluster-users] Any way to get rid of zfs error messages?

2017-03-06 Thread Niels de Vos
On Mon, Mar 06, 2017 at 05:26:56PM +0100, Arman Khalatyan wrote:
> hi, I have replicated glusterfs on 3 nodes with ZFS; the logs are flooded
> with inode errors
> 
> [2017-03-06 16:24:15.019386] E [MSGID: 106419]
> [glusterd-utils.c:5458:glusterd_add_inode_size_to_dict] 0-management: could
> not find (null) to getinode size for zclei21/01 (zfs): (null) package
> missing?
> 
> Any way to fix it?
> 
> glusterfs --version
> glusterfs 3.8.9 built on Feb 13 2017 10:03:47

We'll need to add the zfs command to fetch the size of the inodes. The
commands for the different filesystems that we currently recognize are
in the 'fs_info' structure at:
  
https://github.com/gluster/glusterfs/blob/master/xlators/mgmt/glusterd/src/glusterd-utils.c#L5895

Could you file a bug and paste the command and output in there? We can
then add it for a 3.8 update.
  
https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&version=3.8&component=glusterd

Of course, patches are welcome too :-)
  
http://gluster.readthedocs.io/en/latest/Developer-guide/Simplified-Development-Workflow/

Thanks,
Niels



[Gluster-users] documentation on georeplication failover

2017-03-06 Thread Joseph Lorenzini
Hi all,

I found this doc on geo-replication. I am on gluster 3.9. I am looking for
documentation that explains how to fail over between the master and slave
volumes.

http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Geo%20Replication/

How would someone handle the following scenario? (A rough sketch of the
commands involved follows the list.)

   1. two datacenters A and B. Each is running a gluster pool.
   2. In DC A , create gluster volume named gv0-dcA.
   3. In DC B, create gluster volume named gv0-dcB.
   4. Gluster volume gv0-dcA is the master and volume gv0-dcB is the slave.
   5. A couple GB of data is written to gv0-dcA and replicated to gv0-dcB.
   6. Initiate failover process so that:
  1. all writes are completed on gv0-dcA
  2. gv0-dcA is now read-only
  3. gv0-dcB becomes master and now data can be written to this volume
  4. gv0-dcA is now the slave volume and gv0-dcB is the master volume.
  Consequently, georeplication now happens from gv0-dcB to gv0-dcA.
   7. repeat all steps described in the previous step but now roles are
   switched where gv0-dcA becomes master and gv0-dcB becomes slave.
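
For orientation only, a hedged sketch of the kind of commands such a failover
involves (host names are placeholders, and the exact, supported procedure
should be taken from the geo-replication/disaster-recovery section of the
admin guide for your release):

  # 1. Stop the existing A -> B session.
  gluster volume geo-replication gv0-dcA dcB-host::gv0-dcB stop

  # 2. Freeze the old master and open the old slave for writes.
  gluster volume set gv0-dcA features.read-only on
  gluster volume set gv0-dcB features.read-only off

  # 3. Set up and start a session in the reverse direction (B -> A).
  gluster volume geo-replication gv0-dcB dcA-host::gv0-dcA create push-pem force
  gluster volume geo-replication gv0-dcB dcA-host::gv0-dcA start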


Thanks,
Joe

[Gluster-users] Any way to get rid of zfs error messages?

2017-03-06 Thread Arman Khalatyan
hi, I have replicated glusterfs on 3 nodes with ZFS; the logs are flooded
with inode errors

[2017-03-06 16:24:15.019386] E [MSGID: 106419]
[glusterd-utils.c:5458:glusterd_add_inode_size_to_dict] 0-management: could
not find (null) to getinode size for zclei21/01 (zfs): (null) package
missing?

Any way to fix it?

glusterfs --version
glusterfs 3.8.9 built on Feb 13 2017 10:03:47
thanks,
Arman

Re: [Gluster-users] [ovirt-users] How to force glusterfs to use RDMA?

2017-03-06 Thread Arman Khalatyan
https://bugzilla.redhat.com/show_bug.cgi?id=1428851

On Mon, Mar 6, 2017 at 12:56 PM, Mohammed Rafi K C 
wrote:

> I will see what we can do from gluster side to fix this. I will get back
> to you .
>
>
> Regards
>
> Rafi KC
>
> On 03/06/2017 05:14 PM, Denis Chaplygin wrote:
>
> Hello!
>
> On Fri, Mar 3, 2017 at 12:18 PM, Arman Khalatyan <arm2...@gmail.com> wrote:
>
>> I think there is a bug in the vdsmd checks;
>>
>> OSError: [Errno 2] Mount of `10.10.10.44:/GluReplica` at
>> `/rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica` does not exist
>>
>
>
>>
>> 10.10.10.44:/GluReplica.rdma   3770662912 407818240 3362844672  11%
>> /rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica
>>
>
> I suppose that vdsm is not able to handle the .rdma suffix on the volume
> path. Could you please file a bug for that issue so we can track it?
>
>
>

Re: [Gluster-users] [ovirt-users] How to force glusterfs to use RDMA?

2017-03-06 Thread Mohammed Rafi K C
I will see what we can do from gluster side to fix this. I will get back
to you .


Regards

Rafi KC


On 03/06/2017 05:14 PM, Denis Chaplygin wrote:
> Hello!
>
> On Fri, Mar 3, 2017 at 12:18 PM, Arman Khalatyan  wrote:
>
> I think there is a bug in the vdsmd checks;
>
> OSError: [Errno 2] Mount of `10.10.10.44:/GluReplica` at
> `/rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica` does not
> exist
>
>  
>
>
> 10.10.10.44:/GluReplica.rdma   3770662912 407818240 3362844672 
> 11% /rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica
>
>
> I suppose that vdsm is not able to handle the .rdma suffix on the volume
> path. Could you please file a bug for that issue so we can track it?


Re: [Gluster-users] Maximum bricks per volume recommendation

2017-03-06 Thread Serkan Çoban
It depends on your workload and expectations.
I have 1500 bricks in a single volume and I am happy with it.
I don't do metadata-heavy operations on it.
I also have a 4000-brick single volume planned, if it passes the evaluations.
Just do your tests and make sure it works before going to production.
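
To make the 10x3 option from the original question concrete, a hedged sketch
of the create command (host and brick paths are placeholders; only the first
two replica sets are shown, and the same pattern continues up to b10):

  # Bricks are listed in replica sets of 3; each bNN is one 4TB disk.
  gluster volume create bigvol replica 3 \
      node1:/bricks/b01/data node2:/bricks/b01/data node3:/bricks/b01/data \
      node1:/bricks/b02/data node2:/bricks/b02/data node3:/bricks/b02/data
  gluster volume start bigvol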

On Mon, Mar 6, 2017 at 11:54 AM, qingwei wei  wrote:
> Hi Serkan,
>
> Thanks for the information. So 150 bricks should still be fine. What
> number of bricks would be considered excessive?
>
> Cw
>
> On Mon, Mar 6, 2017 at 3:14 PM, Serkan Çoban  wrote:
>> Putting lots of bricks in a volume has side effects: slow metadata
>> operations, slow gluster command execution, etc.
>> But 150 bricks is not that much.
>>
>> On Mon, Mar 6, 2017 at 9:41 AM, qingwei wei  wrote:
>>> Hi,
>>>
>>> Is there a hard limit on the maximum number of bricks per Gluster
>>> volume? And if no such hard limit exists, is there any best
>>> practice for selecting the number of bricks per volume? For example, if I
>>> would like to create 200TB for my host, which config below is
>>> better?
>>>
>>> HDD: 4TB (1 brick on 1 physical disk)
>>> 1 Gluster volume = 10x3 (30 bricks in total)
>>> 5 Gluster volumes in total; the host will combine them as one
>>> logical volume
>>>
>>> or
>>>
>>> HDD: 4TB (1 brick on 1 physical disk)
>>> 1 Gluster volume = 50x3 (150 bricks in total)
>>> 1 Gluster volume is created in total
>>>
>>>
>>> Thanks.
>>>
>>> Cw

Re: [Gluster-users] [ovirt-users] Replicated Glusterfs on top of ZFS

2017-03-06 Thread Arman Khalatyan
On Fri, Mar 3, 2017 at 7:00 PM, Darrell Budic 
wrote:

> Why are you using an arbitrator if all your HW configs are identical? I’d
> use a true replica 3 in this case.
>
>
This was just the GUI suggestion: when I was creating the cluster it asked
for the 3 hosts, and I did not even know that an arbiter does not keep the
data.
I am not sure whether I can change the type of the gluster volume to a true
replica 3 in the running system; probably I need to destroy the whole cluster.



> Also in my experience with gluster and vm hosting, the ZIL/slog degrades
> write performance unless it’s a truly dedicated disk. But I have 8 spinners
> backing my ZFS volumes, so trying to share a sata disk wasn’t a good zil.
> If yours is dedicated SAS, keep it, if it’s SATA, try testing without it.
>
>
We have also had several huge systems running with ZFS quite successfully
over the years. This was an idea to use ZFS + GlusterFS for an HA solution.


> You don’t have compression enabled on your zfs volume, and I’d recommend
> enabling relatime on it. Depending on the amount of RAM in these boxes, you
> probably want to limit your zfs arc size to 8G or so (1/4 total ram or
> less). Gluster just works volumes hard during a rebuild, what’s the problem
> you’re seeing? If it’s affecting your VMs, using sharding and tuning client
> & server threads can help avoid interruptions to your VMs while repairs are
> running. If you really need to limit it, you can use cgroups to keep it
> from hogging all the CPU, but it takes longer to heal, of course. There are
> a couple older posts and blogs about it, if you go back a while.
>

Yes, I saw that glusterfs is CPU/RAM hungry! 99% of all 16 cores were used
just for healing 500GB VM disks. It was taking almost forever compared with
NFS storage (single disk + ZFS SSD cache; of course, one pays a penalty for
the HA :) )
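
For reference, a minimal sketch of the ZFS-side tuning Darrell suggests (the
pool name is a placeholder; the ARC limit can also be made persistent in
/etc/modprobe.d/zfs.conf with "options zfs zfs_arc_max=8589934592"):

  zfs set compression=lz4 tank     # cheap CPU-wise, usually a net win for VM images
  zfs set atime=on tank
  zfs set relatime=on tank         # relatime only takes effect with atime=on

  # Cap the ARC at 8G (roughly 1/4 of RAM or less) at runtime.
  echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max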

Re: [Gluster-users] Maximum bricks per volume recommendation

2017-03-06 Thread qingwei wei
Hi Serkan,

Thanks for the information. So 150 bricks should still be fine. What
number of bricks would be considered excessive?

Cw

On Mon, Mar 6, 2017 at 3:14 PM, Serkan Çoban  wrote:
> Putting lots of bricks in a volume has side effects: slow metadata
> operations, slow gluster command execution, etc.
> But 150 bricks is not that much.
>
> On Mon, Mar 6, 2017 at 9:41 AM, qingwei wei  wrote:
>> Hi,
>>
>> Is there a hard limit on the maximum number of bricks per Gluster
>> volume? And if no such hard limit exists, is there any best
>> practice for selecting the number of bricks per volume? For example, if I
>> would like to create 200TB for my host, which config below is
>> better?
>>
>> HDD: 4TB (1 brick on 1 physical disk)
>> 1 Gluster volume = 10x3 (30 bricks in total)
>> 5 Gluster volumes in total; the host will combine them as one
>> logical volume
>>
>> or
>>
>> HDD: 4TB (1 brick on 1 physical disk)
>> 1 Gluster volume = 50x3 (150 bricks in total)
>> 1 Gluster volume is created in total
>>
>>
>> Thanks.
>>
>> Cw

Re: [Gluster-users] GlusterFS Multitenancy -- supports multi-tenancy by partitioning users or groups into logical volumes on shared storage

2017-03-06 Thread Deepak Naidu
>> Idea of multi-tenancy is to have multiple tenants on the same volume. Maybe
>> I didn't understand your idea completely.

First, if you can help me understand how GlusterFS defines and implements
multi-tenancy, it would be helpful.


Second, multi-tenancy should have complete isolation of resources: disk,
network, and access. If I use the same volume for multiple tenants, how am I
isolating resources? I need that understanding for gluster. How can I
guarantee that the failure of a brick in that volume does not affect all the
tenants (if I accept your logic) sharing the same volume?


--
Deepak

> On Mar 5, 2017, at 11:51 PM, Pranith Kumar Karampuri  
> wrote:
> 
> Idea of multi-tenancy is to have multiple tenants on the same volume. Maybe
> I didn't understand your idea completely.


Re: [Gluster-users] Community Meeting 2017-03-01

2017-03-06 Thread Kaushal M
I'm late with the meeting notes again, but better late than never.
Here are the meeting notes for the community meeting on 2017-03-01.

We had lower attendance this week, with a lot of the regular attendees out
at conferences. We had one topic of discussion, on backports and
change-ids. Other than that we had an informal discussion around
maintainers and maintainership. More details about the discussions can
be found in the logs.

The next meeting is on 15th March, 1500UTC. The meeting pad is ready
for updates and topics for discussion at
https://bit.ly/gluster-community-meetings.

See you all next time.

~kaushal

- Logs :
- Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-03-01/community_meeting_2017-03-01.2017-03-01-15.00.html
- Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-03-01/community_meeting_2017-03-01.2017-03-01-15.00.txt
- Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-03-01/community_meeting_2017-03-01.2017-03-01-15.00.log.html

## Topics of Discussion

The meeting is an open floor, open for discussion of any topic entered below.

- Discuss backport tracking via gerrit Change-ID [shyam]
- Change-ID makes it easier to track backports.
- [Backport
guidelines](https://gluster.readthedocs.io/en/latest/Developer-guide/Backport-Guidelines/)
mention the need to use the same Change-ID for backports.
- But isn't enforced.
- How do we enforce it?
- Jenkins job that checks if Change-ID for new reviews on
release branches exists on master [nigelb]
- Yes [kshlm, shyam, vbellur]
- shyam will inform the lists before we proceed

### Next edition's meeting host

- kshlm (again)

## Updates

> NOTE : Updates will not be discussed during meetings. Any important or 
> noteworthy update will be announced at the end of the meeting

### Action Items from the last meeting

- jdarcy will work with nigelb to make the infra for reverts easier.
- Nothing happened here
- nigelb will document kkeithleys build process for glusterfs packages
- Or here.

### Releases

 GlusterFS 4.0

- Tracker bug :
https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-4.0
- Roadmap : https://www.gluster.org/community/roadmap/4.0/
- Updates:
- GD2
- Targeting to provide a preview release with 3.11
- Gave a code name "Rogue One"
- We started filling up tasks for Rogue One on Github
- https://github.com/gluster/glusterd2/projects/1
- We want to try [mgmt](https://github.com/purpleidea/mgmt) as
the internal orchestration and management engine
- Started a new PR to import mgmt into GD2, to allow
@purpleidea to show how we could use mgmt.
- https://github.com/gluster/glusterd2/pull/247
- SunRPC bits were refactored
- https://github.com/gluster/glusterd2/pull/242
- https://github.com/gluster/glusterd2/pull/245
- Portmap registry was implemented
- https://github.com/gluster/glusterd2/pull/246

 GlusterFS 3.11
- Maintainers: shyam, *TBD*
- Release: May 30th, 2017
- Branching: April 27th, 2017
- 3.11 Main focus areas
- Testing improvements in Gluster
- Merge all (or as much as possible) Facebook patches into master,
and hence into release 3.11
- We will still retain features that slipped 3.10 and hence were
moved to 3.11
- Release Scope: https://github.com/gluster/glusterfs/projects/1

 GlusterFS 3.10

- Maintainers : shyam, kkeithley, rtalur
- Current release : 3.10.0
- Next release : 3.10.1
- Target date: March 30, 2017
- Bug tracker: https://bugzilla.redhat.com/show_bug.cgi?glusterfs-3.10.1
- Updates:
  - 3.10.0 has finally been released.
  - http://blog.gluster.org/2017/02/announcing-gluster-3-10/

 GlusterFS 3.9

- Maintainers : pranithk, aravindavk, dblack
- Current release : 3.9.1
- Next release : EOL
- Updates:
  - EOLed. Announcement pending.
  - Bug cleanup pending.

 GlusterFS 3.8

- Maintainers : ndevos, jiffin
- Current release : 3.8.9
- Next release : 3.8.10
  - Release date : 10 March 2017
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.8.10
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2&id=glusterfs-3.8.10&hide_resolved=1
- Updates:
  - _Add updates here_

 GlusterFS 3.7

- Maintainers : kshlm, samikshan
- Current release : 3.7.20
- Next release : EOL
- Updates:
  - EOLed. Announcement pending.
  - Bug cleanup pending.

### Related projects and efforts

 Community Infra

- Reminder: Community cage outage on 14th and 15th March
- All smoke jobs run on Centos 7 in the community cage.
- We will slowly be moving jobs into the cage.

 Samba

- _None_

 Ganesha

- _None_

 Containers

- _None_

 Testing

- [nigelb] Glusto tests are completely green at the moment. They had
been broken for a while. We'll make sure their status is monitored
more carefully going forward.