[Gluster-users] 3.8 Release

2016-06-22 Thread Lindsay Mathieson
Hey all, have been away for two weeks and I see there has been a 3.8 
release with some fascinating new features and a slew of fixes.



Unfortunately I'm not in a good position to pre-test before rollout now; 
we've moved all our VMs to 3.7.11, where it is working very nicely.



Is 3.8.0 considered stable for production? Would I be safe to update to 
it, keep my current datastore on 3.7.11 features, but start a test 
datastore for 3.8 on the same nodes? Impossible to answer? :)



--
Lindsay Mathieson

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Gluster weekly community meeting minutes 22-Jun-2016

2016-06-22 Thread Kaleb S. KEITHLEY
Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-06-22/weekly_community_meeting_-_22-jun-2016.2016-06-22-12.01.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-06-22/weekly_community_meeting_-_22-jun-2016.2016-06-22-12.01.txt
Log:
https://meetbot.fedoraproject.org/gluster-meeting/2016-06-22/weekly_community_meeting_-_22-jun-2016.2016-06-22-12.01.log.html

Next week's meeting will be held at 12:00 UTC on 29 June 2016 in
#gluster-meeting on freenode. See you all next week.



#gluster-meeting: Weekly Community meeting - 22-Jun-2016



Meeting started by kshlm at 12:01:29 UTC. The full logs are available at
https://meetbot.fedoraproject.org/gluster-meeting/2016-06-22/weekly_community_meeting_-_22-jun-2016.2016-06-22-12.01.log.html
.



Meeting summary
---
* RollCall  (kshlm, 12:02:01)

* GlusterFS 4.0  (kshlm, 12:05:44)
  * LINK: http://review.gluster.org/#/c/14763/   (atinm, 12:10:25)

* GlusterFS 3.8  (kshlm, 12:12:14)
  * LINK: http://blog.gluster.org/2016/06/glusterfs-3-8-released/
(anoopcs, 12:14:48)
  * ACTION: jiffin will announce 3.8 on the mailing lists.  (kshlm,
12:15:28)
  * ACTION: aravindavk will ping amye to link release-notes to 3.8
release announcement on blog  (kshlm, 12:17:38)

* GlusterFS-3.9  (kshlm, 12:17:49)
  * LINK:
http://www.gluster.org/pipermail/maintainers/2016-June/000951.html
(aravindavk, 12:19:20)

* GlusterFS 3.7  (kshlm, 12:21:09)
  * AGREED: we will release 3.7.12 following the meeting  (kkeithley,
12:29:29)
  * AGREED: hagarth tentative release manager for 3.7.13  (kkeithley,
12:33:39)

* GlusterFS 3.6  (kkeithley, 12:34:24)
  * AGREED: we only fix critical bugs in 3.6  (kkeithley, 12:43:48)

* NFS-Ganesha + Gluster  (kkeithley, 12:45:49)

* Samba and GlusterFS  (kkeithley, 12:48:45)

* AIs from last week  (kkeithley, 12:50:11)

* Open Floor  (kkeithley, 12:55:31)
  * glusterfs-coreutils is now available in fedora 22, 23 and 24 stable
repositories  (kkeithley, 12:59:01)

Meeting ended at 13:02:40 UTC.




Action Items

* jiffin will announce 3.8 on the mailing lists.
* aravindavk will ping amye to link release-notes to 3.8 release
  announcement on blog




Action Items, by person
---
* aravindavk
  * aravindavk will ping amye to link release-notes to 3.8 release
announcement on blog
* jiffin
  * jiffin will announce 3.8 on the mailing lists.
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kkeithley (87)
* kshlm (85)
* anoopcs (10)
* aravindavk (9)
* atinm (6)
* partner (6)
* jiffin (4)
* zodbot (4)
* rjoseph (4)
* jdarcy (3)
* ira (3)
* mchangir (2)
* post-factum (1)
* karthik___ (1)
* hgowtham_ (1)
* msvbhat (1)
* samikshan (1)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot



-- 

Kaleb
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Multiple questions regarding monitoring of Gluster

2016-06-22 Thread ML mail
Where's the package for Debian?

On Wednesday, June 22, 2016 3:48 PM, "Glomski, Patrick" 
 wrote:
 

 If you're not opposed to another dependency, there is a glusterfs-nagios 
package (python-based) which presents the volumes in a much more useful format 
for monitoring.

http://download.gluster.org/pub/gluster/glusterfs-nagios/1.1.0/

Patrick

On Tue, Jun 21, 2016 at 10:28 AM, Malte Schmidt  wrote:

Under which conditions does "gluster volume status $volume detail" return
something other than a table?

Typical, expected output:

root@server1:~# gluster volume status vol0 detail
Status of volume: vol0
--
Brick : Brick server1:/data/glusterfs/vol0
TCP Port : 49152
RDMA Port : 0
Online : Y
Pid : 2942
File System : xfs
Device : /dev/mapper/glusterfs
Mount Options : rw,relatime,attr2,inode64,noquota
Inode Size : 512
Disk Space Free : 9.0GB
Total Disk Space : 20.0GB
Inode Count : 10485760
Free Inodes : 8774085
--
Brick : Brick server2:/data/glusterfs/vol0
TCP Port : 49152
RDMA Port : 0
Online : Y
Pid : 3275
File System : xfs
Device : /dev/mapper/glusterfs
Mount Options : rw,relatime,attr2,inode64,noquota
Inode Size : 512
Disk Space Free : 9.0GB
Total Disk Space : 20.0GB
Inode Count : 10485760
Free Inodes : 8774085

Are there any conditions under which that table is different? Better question:
what is the best way of getting this data for use in Nagios?
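
One way to consume this from Nagios, sketched below, is to scrape the plain-text
"detail" output shown above. The volume name and expected brick count here are
assumptions for illustration, and the gluster CLI also has an --xml flag on
recent versions if you prefer structured output:

#!/bin/bash
# Minimal Nagios-style check (sketch only): verify that every brick of a
# volume reports "Online : Y" in `gluster volume status <vol> detail`.
# VOLUME and EXPECTED_BRICKS are placeholders for this example.
VOLUME="vol0"
EXPECTED_BRICKS=2

DETAIL=$(gluster volume status "$VOLUME" detail 2>/dev/null) || {
    echo "CRITICAL: gluster CLI failed for volume $VOLUME"; exit 2; }

# Count brick sections and how many of them report being online.
bricks=$(echo "$DETAIL" | grep -c '^Brick ')
online=$(echo "$DETAIL" | grep -c '^Online[[:space:]]*:[[:space:]]*Y')

if [ "$bricks" -ne "$EXPECTED_BRICKS" ] || [ "$online" -ne "$bricks" ]; then
    echo "CRITICAL: $online/$bricks bricks online (expected $EXPECTED_BRICKS)"
    exit 2
fi
echo "OK: $online/$bricks bricks online"
exit 0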

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Small files performance

2016-06-22 Thread Gandalf Corvotempesta
On 21 Jun 2016 19:02, "Luciano Giacchetta"  wrote:
>
> Hi,
>
> I have a similar scenario: a car classifieds site with millions of small
> files, mounted with the gluster native client in a replica config.
> The gluster server has 16GB RAM and 4 cores and mounts the glusterfs with
> direct-io-mode=enable. Then I export to all servers (Windows included, with
> CIFS).
>
> performance.cache-refresh-timeout: 60
> performance.read-ahead: enable
> performance.write-behind-window-size: 4MB
> performance.io-thread-count: 64
> performance.cache-size: 12GB
> performance.quick-read: on
> performance.flush-behind: on
> performance.write-behind: on
> nfs.disable: on

What performance are you getting?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Small files performance

2016-06-22 Thread ML mail
Luciano, how do you enable direct-io-mode?

On Wednesday, June 22, 2016 7:09 AM, Luciano Giacchetta 
 wrote:
 

 Hi,

I have a similar scenario: a car classifieds site with millions of small files, 
mounted with the gluster native client in a replica config.
The gluster server has 16GB RAM and 4 cores and mounts the glusterfs with 
direct-io-mode=enable. Then I export to all servers (Windows included, with 
CIFS).

performance.cache-refresh-timeout: 60
performance.read-ahead: enable
performance.write-behind-window-size: 4MB
performance.io-thread-count: 64
performance.cache-size: 12GB
performance.quick-read: on
performance.flush-behind: on
performance.write-behind: on
nfs.disable: on
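
For reference, a minimal sketch of how settings like these are typically
applied; the volume name, server and mount point below are placeholders, and
direct-io-mode (asked about earlier in this thread) is a mount option of the
native FUSE client rather than a volume option:

# Sketch only: "myvol", "server1" and the mount point are placeholders.
# Volume-side tuning options are set with the gluster CLI:
gluster volume set myvol performance.cache-size 12GB
gluster volume set myvol performance.io-thread-count 64

# The native (FUSE) client enables direct I/O at mount time:
mount -t glusterfs -o direct-io-mode=enable server1:/myvol /mnt/myvol

# Or persistently via /etc/fstab:
# server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,direct-io-mode=enable  0 0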


-- Regards, LG
On Sat, May 28, 2016 at 6:46 AM, Gandalf Corvotempesta 
 wrote:

If I remember properly, each stat() on a file needs to be sent to all hosts in 
the replica to check whether they are in sync. Is this true for both the gluster 
native client and NFS-Ganesha?

Which is best for a shared hosting storage with many millions of small files? 
About 15,000,000 small files in 800GB? Or even for Maildir hosting?

Ganesha can be configured for HA and load balancing, so the biggest issue that 
was present in standard NFS is now gone. Is there any advantage of native 
gluster over Ganesha? Removing the FUSE requirement should also be a performance 
advantage for Ganesha over the native client.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS RDMA Support

2016-06-22 Thread Kaleb S. KEITHLEY
On 06/22/2016 08:17 AM, Atul Yadav wrote:
> Hi Team,
> 
> We installed glusterfs 3.8 in our HPC environment.
> 
> While configuring RDMA on glusterfs, an error comes up.
> 
> Is glusterfs RDMA compatible with the OFED, Intel and Mellanox drivers,
> or is it only compatible with the operating system's InfiniBand driver?

At one time it worked with the OFED driver.

AFAIK it still should.
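
For what it's worth, a minimal sketch of an RDMA-enabled setup; the volume
name, brick paths and server names are placeholders, and it assumes a working
InfiniBand/OFED stack on both servers and clients:

# Sketch only: volume, brick paths and server names are placeholders.
# Create a volume that offers both TCP and RDMA transports:
gluster volume create hpcvol transport tcp,rdma \
    server1:/bricks/hpcvol server2:/bricks/hpcvol
gluster volume start hpcvol

# Mount it over RDMA from a client:
mount -t glusterfs -o transport=rdma server1:/hpcvol /mnt/hpcvol

# If the mount fails, the client log (typically
# /var/log/glusterfs/<mountpoint>.log) usually shows the rdma errors.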

-- 

Kaleb
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] GlusterFS RDMA Support

2016-06-22 Thread Atul Yadav
Hi Team,

We installed glusterfs 3.8 in our HPC environment.

While configuring RDMA on glusterfs, an error comes up.

Is glusterfs RDMA compatible with the OFED, Intel and Mellanox drivers,
or is it only compatible with the operating system's InfiniBand driver?

Thank You
Atul Yadav
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster Startup Issue

2016-06-22 Thread Danny Lee
Thank you for responding, Heiko. I'm in the process of looking at the differences
between our two scripts. The first thing I noticed was that the notes state that
hosts "need to be defined in the /etc/hosts". Would using the IP address directly
be a problem?

On Tue, Jun 21, 2016 at 2:10 PM, Heiko L.  wrote:

> Am Di, 21.06.2016, 19:22 schrieb Danny Lee:
> > Hello,
> >
> >
> > We are currently figuring out how to add GlusterFS to our system to make
> > our systems highly available using scripts.  We are using Gluster 3.7.11.
> >
> > Problem:
> > Trying to migrate to GlusterFS from a non-clustered system to a 3-node
> > glusterfs replicated cluster using scripts.  Tried various things to
> > make this work, but it sometimes causes us to be in an
> > undesirable state where if you call "gluster volume heal  full",
> > we would get the error message, "Launching heal operation to perform
> > full self heal on volume  has been unsuccessful on bricks that are
> > down. Please check if all brick processes are running."  All the brick
> > processes are running based on the output of the command "gluster
> > volume status volname".
> >
> > Things we have tried:
> > Order of preference
> > 1. Create Volume with 3 Filesystems with the same data
> > 2. Create Volume with 2 empty filesystems and one with the data
> > 3. Create Volume with only one filesystem with data and then using
> > "add-brick" command to add the other two empty filesystems
> > 4. Create Volume with one empty filesystem, mounting it, and then copying
> > the data over to that one.  And then finally, using "add-brick" command
> to add the other two empty filesystems
> - should be working
> - read each file on /mnt/gvol, to trigger replication [2]
>
> > 5. Create Volume
> > with 3 empty filesystems, mounting it, and then copying the data over
> - my favorite
>
> >
> > Other things to note:
> > A few minutes after the volume is created and started successfully, our
> > application server starts up against it, so reads and writes may happen
> pretty quickly after the volume has started.  But there
> > is only about 50MB of data.
> >
> > Steps to reproduce (all in a script):
> > # This is run by the primary node with the IP address, , that has the data
> > systemctl restart glusterd
> > gluster peer probe 
> > gluster peer probe 
> > Wait for "gluster peer status" to all be in "Peer in Cluster" state
> > gluster volume create  replica 3 transport tcp ${BRICKS[0]} ${BRICKS[1]} ${BRICKS[2]} force
> > gluster volume set  nfs.disable true
> > gluster volume start 
> > mkdir -p $MOUNT_POINT
> > mount -t glusterfs :/volname $MOUNT_POINT
> > find $MOUNT_POINT | xargs stat
>
> I have written a script for 2 nodes. [1]
> but should be at least 3 nodes.
>
>
> I hope it helps you
> regards Heiko
>
> >
> > Note that, when we added sleeps around the gluster commands, there was a
> > higher probability of success, but not 100%.
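
A minimal sketch of the kind of wait loop being discussed, polling "gluster
peer status" instead of sleeping for a fixed time; the peer addresses, brick
paths and retry budget below are assumptions:

#!/bin/bash
# Sketch only: peer addresses, brick paths and retry budget are placeholders.
PEERS="server-ip-2 server-ip-3"
RETRIES=30

for peer in $PEERS; do
    gluster peer probe "$peer"
done

# Wait until both probed peers report "Peer in Cluster (Connected)"
# before creating the volume, rather than sleeping a fixed amount.
for i in $(seq "$RETRIES"); do
    if [ "$(gluster peer status | grep -c 'Peer in Cluster (Connected)')" -eq 2 ]; then
        break
    fi
    sleep 2
done

gluster volume create volname replica 3 transport tcp \
    server-ip-1:/data/brick server-ip-2:/data/brick server-ip-3:/data/brick force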
> >
> > # Once the volume is started, all the clients/servers will mount the
> > gluster filesystem by polling "mountpoint -q $MOUNT_POINT":
> > mkdir -p $MOUNT_POINT
> > mount -t glusterfs :/volname $MOUNT_POINT
> >
> >
> > Logs:
> > *etc-glusterfs-glusterd.vol.log* in *server-ip-1*
> >
> >
> > [2016-06-21 14:10:38.285234] I [MSGID: 106533]
> > [glusterd-volume-ops.c:857:__glusterd_handle_cli_heal_volume]
> 0-management:
> > Received heal vol req for volume volname
> > [2016-06-21 14:10:38.296801] E [MSGID: 106153]
> > [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Commit failed on
> > . Please check log file for details.
> >
> >
> >
> > *usr-local-volname-data-mirrored-data.log* in *server-ip-1*
> >
> >
> > [2016-06-21 14:14:39.233366] E [MSGID: 114058]
> > [client-handshake.c:1524:client_query_portmap_cbk] 0-volname-client-0:
> > failed to get the port number for remote subvolume. Please run 'gluster
> volume status' on server to see if brick process is
> > running. *I think this is caused by the self heal daemon*
> >
> >
> > *cmd_history.log* in *server-ip-1*
> >
> >
> > [2016-06-21 14:10:38.298800]  : volume heal volname full : FAILED :
> Commit
> > failed on . Please check log file for details.
> ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
>
> [1]
> http://www2.fh-lausitz.de/launic/comp/net/glusterfs/130620.glusterfs.create_brick_vol.howto.txt
>   - old, limit 2 nodes
>
>
> --
>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Glusterfs 3.7.11 with LibGFApi in Qemu on Ubuntu Xenial does not work

2016-06-22 Thread André Bauer
Hi Vijay,

I just used "tail -f /var/log/glusterfs/*.log" and also "tail -f
/var/log/glusterfs/bricks/glusterfs-vmimages.log" on all 4 nodes to
check for new log entries when trying to migrate a VM to the host.

There are no new log entries from start of vm migration until error.

Does anybody have this (qemu / libgfapi access) running in Ubuntu 16.04?

Regards
André



On 17.06.2016 at 04:44, Vijay Bellur wrote:
> On Wed, Jun 15, 2016 at 8:07 AM, André Bauer  wrote:
>> Hi Prasanna,
>>
>> On 15.06.2016 at 12:09, Prasanna Kalever wrote:
>>
>>>
>>> I think you have missed enabling bind insecure which is needed by
>>> libgfapi access, please try again after following below steps
>>>
>>> => edit /etc/glusterfs/glusterd.vol by adding "option
>>> rpc-auth-allow-insecure on" #(on all nodes)
>>> => gluster vol set $volume server.allow-insecure on
>>> => systemctl restart glusterd #(on all nodes)
>>>
>>
>> No, that's not the case. All services are up and running correctly,
>> allow-insecure is set and the volume works fine with libgfapi access
>> from my Ubuntu 14.04 KVM/Qemu servers.
>>
>> Just the server which was updated to Ubuntu 16.04 can't access the
>> volume via libgfapi anymore (fuse mount still works).
>>
>> GlusterFS logs are empty when trying to access the GlusterFS nodes, so I
>> think the requests are blocked on the client side.
>>
>> Maybe apparmor again?
>>
> 
> Might be worth a check again to see if there are any errors seen in
> glusterd's log file on the server. libvirtd seems to indicate that
> fetch of the volume configuration file from glusterd has failed.
> 
> If there are no errors in glusterd or glusterfsd (brick) logs, then we
> can possibly blame apparmor ;-).
> 
> Regards,
> Vijay
> 
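
In case it helps anyone else hitting this on 16.04, a minimal diagnostic sketch
for the two suspects above (the insecure-port settings and AppArmor); the
volume name and the AppArmor profile path are assumptions:

# Sketch only: "vmimages" and the profile path are placeholders.
# 1. Confirm the insecure-port settings are actually in place:
gluster volume info vmimages | grep allow-insecure
grep rpc-auth-allow-insecure /etc/glusterfs/glusterd.vol

# 2. Look for AppArmor denials on the KVM host while reproducing:
dmesg | grep -i apparmor
grep -i denied /var/log/syslog

# 3. Temporarily switch the libvirt/QEMU profile to complain mode
#    (apparmor-utils) and retry the migration:
aa-complain /etc/apparmor.d/usr.sbin.libvirtd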


-- 
Kind regards
André Bauer

MAGIX Software GmbH
André Bauer
Administrator
August-Bebel-Straße 48
01219 Dresden
GERMANY

tel.: 0351 41884875
e-mail: aba...@magix.net
www.magix.com 

Geschäftsführer | Managing Directors: Dr. Arnd Schröder, Klaus Schmidt
Amtsgericht | Commercial Register: Berlin Charlottenburg, HRB 127205

--
The information in this email is intended only for the addressee named
above. Access to this email by anyone else is unauthorized. If you are
not the intended recipient of this message any disclosure, copying,
distribution or any action taken in reliance on it is prohibited and
may be unlawful. MAGIX does not warrant that any attachments are free
from viruses or other defects and accepts no liability for any losses
resulting from infected email transmissions. Please note that any
views expressed in this email may be those of the originator and do
not necessarily represent the agenda of the company.
--
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users