Re: [Gluster-users] gluster and kvm livemigration

2014-01-23 Thread Samuli Heinonen

23.1.2014 17:27, Bernhard Glomm kirjoitti:



After migration the disks become read-only because

on migration the disk file's ownership changes from

libvirt-qemu to root.


What am I missing?


I'm not sure about this, but is it possible that this is because of 
different ownership and permissions on the bricks?


Can you try setting storage.owner-uid and storage.owner-gid to 
libvirt-qemu? To do that you have to stop the volume first.
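
As a sketch, setting those options looks something like this (the volume name "myvol" is an assumption; substitute your own, and check the libvirt-qemu uid/gid on your distribution):

```shell
# Owner options are changed while the volume is stopped, as noted above.
gluster volume stop myvol

# Have the bricks present files as owned by the libvirt-qemu user/group.
gluster volume set myvol storage.owner-uid "$(id -u libvirt-qemu)"
gluster volume set myvol storage.owner-gid "$(id -g libvirt-qemu)"

gluster volume start myvol
```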


-samuli

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Problem creating/editing files with samba gluster vfs module..

2014-01-23 Thread B.K.Raghuram
Hi,

We have built samba 4.1 from source with the gluster vfs module
enabled. I am able to access (read) and browse a volume from a Windows
machine. However, when I try to create or edit a file that resides on
the volume from a Windows box, it hangs forever. On the backend, I see
that many temporary files are being created. I suspect that the
Windows box is trying to create the file but is not getting
confirmation that it has been created, so it tries to create it
again. However, I do not have any problems with creating directories.
We are using gluster 3.4.1.

Any ideas on what may be the issue?

Thanks,
-Ram
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster and kvm livemigration

2014-01-23 Thread James
Not sure if it's related at all, but is there any chance this has anything
to do with:

https://bugzilla.redhat.com/show_bug.cgi?id=987555

It came to mind as something to do with glusterfs+libvirt+migration.

HTH,
James



On Thu, Jan 16, 2014 at 5:52 AM, Bernhard Glomm
wrote:

> I experienced a strange behavior of glusterfs during livemigration
> of a qemu-kvm guest
> using a 10GB file on a mirrored gluster 3.4.2 volume
> (both on ubuntu 13.04)
> I run
> virsh migrate --verbose --live --unsafe --p2p --domain atom01 --desturi
> qemu+ssh:///system
> and the migration works,
> the running machine is pingable and keeps sending pings.
> nevertheless, when I let the machine touch a file during migration
> it stops, complaining that its filesystem is read-only (from the moment
> the migration finished).
> A reboot from inside the machine fails:
> the machine goes down and comes up with an error
> "unable to write to sector xx on hd0"
> (then falling into the initrd).
> a
> virsh destroy VM && virsh start VM
> leads to a perfect running VM again,
> no matter on which of the two hosts I start the machine
> anybody better experience with livemigration?
> any hint on a procedure how to debug that?
> TIA
> Bernhard
>
> --
>   --
>   *Ecologic Institute*   *Bernhard Glomm*
> IT Administration
>
> Phone: +49 (30) 86880 134 | Fax: +49 (30) 86880 100 | Skype: bernhard.glomm.ecologic
> Ecologic
> Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin |
> Germany
> GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.:
> DE811963464
> Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
> --
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Antw: Re: MS Office/Samba/Glusterfs/Centos

2014-01-23 Thread Ira Cooper
Understood on CTDB.

Locking:

Volker has been doing research on robust mutexes based on initial research that 
we did a while ago.  (I believe you can find my initial work at 
http://git.samba.org/?p=ira/tdb.git;a=shortlog;h=refs/heads/locking , but that 
code is ancient.)

fcntl locks are chosen in Samba due to their semantics when processes die.  If 
a process dies without releasing its locks, the fcntl locks automatically 
release.

Robust mutexes have the same type of properties, but when I did the initial 
research I hit the same type of thing Volker did.  (You can see it in my commit 
messages: I hit issues, but pushed the code for others to see.)

Samba, in any current configuration I'd expect to see deployed, does not use 
spinlocks directly. It may use them indirectly via the implementation of fcntl 
in the kernel.
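
As an illustration of the die-and-release semantics described above (a standalone sketch, not Samba code; it assumes python3 is available to issue the fcntl calls):

```shell
#!/bin/sh
# Demonstrate that fcntl locks are released automatically by the kernel
# when the holding process dies without unlocking.
LOCKFILE=$(mktemp)

# A child takes an exclusive fcntl lock and then just sleeps.
python3 -c '
import fcntl, sys, time
f = open(sys.argv[1], "w")
fcntl.lockf(f, fcntl.LOCK_EX)
time.sleep(60)
' "$LOCKFILE" &
CHILD=$!
sleep 1

# Kill it hard -- it never gets a chance to unlock.
kill -9 "$CHILD" 2>/dev/null
wait "$CHILD" 2>/dev/null

# A non-blocking lock attempt now succeeds: the kernel dropped the lock.
RESULT=$(python3 -c '
import fcntl, sys
f = open(sys.argv[1], "r+")
fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)  # would raise if still held
print("lock acquired after holder died")
' "$LOCKFILE")
echo "$RESULT"
rm -f "$LOCKFILE"
```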

I want to make sure that users have the correct impression of how the database 
works, if they review this thread.

Thanks,

-Ira / ira@(redhat.com|samba.org)

- Original Message -
> Ira
> 
> In clustered mode Samba stores locking information in a TDB, so that's why you
> need to configure CTDB and tell Samba it's clustered in the config, so
> it will keep the databases in sync.
> 
> By the way, Samba does still do spinlocks, or at least it did last year;
> here's an interesting thread where they were talking about spinlocks,
> fcntl locks and TDB-based locks:
> https://groups.google.com/forum/#!topic/mailing.unix.samba-technical/hM-t2pBf_Hs
> 
> That said, spinlocks are just the first thing that came to mind
> (because that was part of many discussions back in the 2.x days) and
> was not the best choice of words on my part.
> 
> 
> 
> 
> On Thu, Jan 23, 2014 at 4:48 PM, Paul Robert Marino 
> wrote:
> > Check this article, it's fairly straightforward:
> > http://ctdb.samba.org/samba.html
> > If you don't get this configured properly then your locking won't work.
> >
> > By the way, oplocks need to be enabled; based on another post it looks
> > like you turned off all locking support.
> >
> >
> > On Thu, Jan 23, 2014 at 3:05 PM, Paul Robert Marino 
> > wrote:
> >> Are you using CTDB on a shared gluster volume instead of TDB on a local
> >> volume, which is Samba's default?
> >>
> >> If not, this may explain your issue, because Samba stores, or at least
> >> did at one time store, spinlocks in the TDB for speed.
> >>
> >>
> >>
> >> -- Sent from my HP Pre3
> >>
> >> 
> >> On Jan 23, 2014 10:31, Adrian Valeanu  wrote:
> >>
> >> Hi,
> >> I am trying to replicate some data that resides on two CentOS servers. The
> >> replicated data should be shared with the users via Samba. It should be
> >> avoided that two users
> >> try to work on the same file using MS Office. If the users access the same
> >> file on one server, one of them is not able to modify the file. This is
> >> possible (and the normal operation)
> >> if one uses one server with Samba. I assumed that this kind of lock would
> >> be
> >> replicated too.
> >>
> >> I did not know about libgfapi and had not used it yesterday. I used the
> >> fuse-mounted directory as the data source for Samba.
> >>
> >> Last night I updated both CentOS servers. They are CentOS 6.5 now.
> >> Now
> >> locking does not happen at all any more. After you mentioned libgfapi I
> >> found this:
> >> https://www.mail-archive.com/gluster-users@gluster.org/msg13033.html
> >>
> >> I managed to compile the module and switched to samba-glusterfs-vfs, but I
> >> still have no locking.
> >> My Samba configuration looks like this:
> >> [glusterdata-vfs]
> >>vfs object = glusterfs
> >>glusterfs:volume = gv0
> >>path = /
> >>glusterfs:loglevel = 2
> >>glusterfs:logfile = /var/log/samba/glusterdata-vfs.log
> >>
> >>read only = no
> >>browseable = yes
> >>guest ok = no
> >>printable = no
> >>nt acl support = yes
> >>acl map full control = yes
> >>
> >> Thank you for your attention.
> >>
> >>
> > Lalatendu Mohanty  22.01.2014 16:10 >>>
> >>>
> >> On 01/22/2014 08:30 PM, Adrian Valeanu wrote:
> >>
> >> Hi,
> >> I have set up glusterfs 3.4.2 over a 10Gig xfs filesystem on two CentOS 6
> >> servers. The gluster filesystem is shared through Samba on both servers.
> >> Replication is working like a charm but file locking is not. Is it
> >> possible
> >> to have file locking working in this configuration in a way that
> >> Microsoft
> >> Office 2010
> >> behaves as if the files were on the same server? Does somebody have
> >> such a configuration?
> >> I tried a lot of the Samba configurations found on the mailing list but
> >> none
> >> showed the expected results.
> >>
> >>
> >> Are you using Samba with libgfapi? I am not sure if I understand your
> >> expectation on locking through Samba. Some more context would be nice.
> >>
> >> -Lala
> >>
> >> Thank you
> >>
> >>
> >> ___
> >> Gluster-users mailing list
> >> Gluster-users@gluster.org
> >> http://superco

Re: [Gluster-users] Antw: Re: MS Office/Samba/Glusterfs/Centos

2014-01-23 Thread Paul Robert Marino
Ira

In clustered mode Samba stores locking information in a TDB, so that's why you
need to configure CTDB and tell Samba it's clustered in the config, so
it will keep the databases in sync.

By the way, Samba does still do spinlocks, or at least it did last year;
here's an interesting thread where they were talking about spinlocks,
fcntl locks and TDB-based locks:
https://groups.google.com/forum/#!topic/mailing.unix.samba-technical/hM-t2pBf_Hs

That said, spinlocks are just the first thing that came to mind
(because that was part of many discussions back in the 2.x days) and
was not the best choice of words on my part.
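
For readers skimming the thread, the clustered setup being discussed boils down to an smb.conf along these lines (a minimal sketch; CTDB itself is installed and configured separately, e.g. as described at http://ctdb.samba.org/samba.html):

```ini
; smb.conf (sketch) -- tell Samba it runs under CTDB
[global]
    clustering = yes
    ; with clustering enabled, CTDB keeps the clustered TDB
    ; databases consistent across all nodes
```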




On Thu, Jan 23, 2014 at 4:48 PM, Paul Robert Marino  wrote:
> Check this article, it's fairly straightforward: http://ctdb.samba.org/samba.html
> If you don't get this configured properly then your locking won't work.
>
> By the way, oplocks need to be enabled; based on another post it looks
> like you turned off all locking support.
>
>
> On Thu, Jan 23, 2014 at 3:05 PM, Paul Robert Marino  
> wrote:
>> Are you using CTDB on a shared gluster volume instead of TDB on a local
>> volume, which is Samba's default?
>>
>> If not, this may explain your issue, because Samba stores, or at least did
>> at one time store, spinlocks in the TDB for speed.
>>
>>
>>
>> -- Sent from my HP Pre3
>>
>> 
>> On Jan 23, 2014 10:31, Adrian Valeanu  wrote:
>>
>> Hi,
>> I am trying to replicate some data that resides on two CentOS servers. The
>> replicated data should be shared with the users via Samba. It should be
>> avoided that two users
>> try to work on the same file using MS Office. If the users access the same
>> file on one server, one of them is not able to modify the file. This is
>> possible (and the normal operation)
>> if one uses one server with Samba. I assumed that this kind of lock would be
>> replicated too.
>>
>> I did not know about libgfapi and had not used it yesterday. I used the
>> fuse-mounted directory as the data source for Samba.
>>
>> Last night I updated both CentOS servers. They are CentOS 6.5 now. Now
>> locking does not happen at all any more. After you mentioned libgfapi I
>> found this:
>> https://www.mail-archive.com/gluster-users@gluster.org/msg13033.html
>>
>> I managed to compile the module and switched to samba-glusterfs-vfs, but I
>> still have no locking.
>> My Samba configuration looks like this:
>> [glusterdata-vfs]
>>vfs object = glusterfs
>>glusterfs:volume = gv0
>>path = /
>>glusterfs:loglevel = 2
>>glusterfs:logfile = /var/log/samba/glusterdata-vfs.log
>>
>>read only = no
>>browseable = yes
>>guest ok = no
>>printable = no
>>nt acl support = yes
>>acl map full control = yes
>>
>> Thank you for your attention.
>>
>>
> Lalatendu Mohanty  22.01.2014 16:10 >>>
>>>
>> On 01/22/2014 08:30 PM, Adrian Valeanu wrote:
>>
>> Hi,
>> I have set up glusterfs 3.4.2 over a 10Gig xfs filesystem on two CentOS 6
>> servers. The gluster filesystem is shared through Samba on both servers.
>> Replication is working like a charm but file locking is not. Is it possible
>> to have file locking working in this configuration in a way that Microsoft
>> Office 2010
>> behaves as if the files were on the same server? Does somebody have
>> such a configuration?
>> I tried a lot of the Samba configurations found on the mailing list but none
>> showed the expected results.
>>
>>
>> Are you using Samba with libgfapi? I am not sure if I understand your
>> expectation on locking through Samba. Some more context would be nice.
>>
>> -Lala
>>
>> Thank you
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>
>>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Antw: Re: MS Office/Samba/Glusterfs/Centos

2014-01-23 Thread Paul Robert Marino
Check this article, it's fairly straightforward: http://ctdb.samba.org/samba.html
If you don't get this configured properly then your locking won't work.

By the way, oplocks need to be enabled; based on another post it looks
like you turned off all locking support.
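
As a hedged sketch, the locking-related smb.conf settings this sub-thread is referring to look like this (these happen to be Samba's defaults; listing them explicitly just makes the intent visible):

```ini
[global]
    locking = yes        ; byte-range locking (backed by fcntl)
    oplocks = yes        ; opportunistic locks for Windows clients
    level2 oplocks = yes
```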


On Thu, Jan 23, 2014 at 3:05 PM, Paul Robert Marino  wrote:
> Are you using CTDB on a shared gluster volume instead of TDB on a local
> volume, which is Samba's default?
>
> If not, this may explain your issue, because Samba stores, or at least did
> at one time store, spinlocks in the TDB for speed.
>
>
>
> -- Sent from my HP Pre3
>
> 
> On Jan 23, 2014 10:31, Adrian Valeanu  wrote:
>
> Hi,
> I am trying to replicate some data that resides on two CentOS servers. The
> replicated data should be shared with the users via Samba. It should be
> avoided that two users
> try to work on the same file using MS Office. If the users access the same
> file on one server, one of them is not able to modify the file. This is
> possible (and the normal operation)
> if one uses one server with Samba. I assumed that this kind of lock would be
> replicated too.
>
> I did not know about libgfapi and had not used it yesterday. I used the
> fuse-mounted directory as the data source for Samba.
>
> Last night I updated both CentOS servers. They are CentOS 6.5 now. Now
> locking does not happen at all any more. After you mentioned libgfapi I
> found this:
> https://www.mail-archive.com/gluster-users@gluster.org/msg13033.html
>
> I managed to compile the module and switched to samba-glusterfs-vfs, but I
> still have no locking.
> My Samba configuration looks like this:
> [glusterdata-vfs]
>vfs object = glusterfs
>glusterfs:volume = gv0
>path = /
>glusterfs:loglevel = 2
>glusterfs:logfile = /var/log/samba/glusterdata-vfs.log
>
>read only = no
>browseable = yes
>guest ok = no
>printable = no
>nt acl support = yes
>acl map full control = yes
>
> Thank you for your attention.
>
>
 Lalatendu Mohanty  22.01.2014 16:10 >>>
>>
> On 01/22/2014 08:30 PM, Adrian Valeanu wrote:
>
> Hi,
> I have set up glusterfs 3.4.2 over a 10Gig xfs filesystem on two CentOS 6
> servers. The gluster filesystem is shared through Samba on both servers.
> Replication is working like a charm but file locking is not. Is it possible
> to have file locking working in this configuration in a way that Microsoft
> Office 2010
> behaves as if the files were on the same server? Does somebody have
> such a configuration?
> I tried a lot of the Samba configurations found on the mailing list but none
> showed the expected results.
>
>
> Are you using Samba with libgfapi? I am not sure if I understand your
> expectation on locking through Samba. Some more context would be nice.
>
> -Lala
>
> Thank you
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Antw: Re: MS Office/Samba/Glusterfs/Centos

2014-01-23 Thread Ira Cooper
Samba hasn't stored spinlocks for a long time (at least since I've been using 
it!)

It does use fcntl locks, but those are a different beast.  I agree, your 
local TDB files in CTDB should be local.

Thanks,

-Ira

- Original Message -
> Are you using CTDB on a shared gluster volume instead of TDB on a local
> volume, which is Samba's default?
> 
> If not, this may explain your issue, because Samba stores, or at least did
> at one time store, spinlocks in the TDB for speed.
> 
> 
> 
> -- Sent from my HP Pre3
> 
> 
> On Jan 23, 2014 10:31, Adrian Valeanu  wrote:
> 
> Hi,
> I am trying to replicate some data that resides on two CentOS servers. The
> replicated data should be shared with the users via Samba. It should be
> avoided that two users
> try to work on the same file using MS Office. If the users access the same
> file on one server, one of them is not able to modify the file. This is
> possible (and the normal operation)
> if one uses one server with Samba. I assumed that this kind of lock would be
> replicated too.
> I did not know about libgfapi and had not used it yesterday. I used the
> fuse-mounted directory as the data source for Samba.
> Last night I updated both CentOS servers. They are CentOS 6.5 now. Now
> locking does not happen at all any more. After you mentioned libgfapi I
> found this:
> https://www.mail-archive.com/gluster-users@gluster.org/msg13033.html
> I managed to compile the module and switched to samba-glusterfs-vfs, but I
> still have no locking.
> My Samba configuration looks like this:
> [glusterdata-vfs]
> vfs object = glusterfs
> glusterfs:volume = gv0
> path = /
> glusterfs:loglevel = 2
> glusterfs:logfile = /var/log/samba/glusterdata-vfs.log
> read only = no
> browseable = yes
> guest ok = no
> printable = no
> nt acl support = yes
> acl map full control = yes
> Thank you for your attention.
> 
> 
> >>> Lalatendu Mohanty  22.01.2014 16:10 >>>
> > 
> On 01/22/2014 08:30 PM, Adrian Valeanu wrote:
> 
> 
> 
> Hi,
> I have set up glusterfs 3.4.2 over a 10Gig xfs filesystem on two CentOS 6
> servers. The gluster filesystem is shared through Samba on both servers.
> Replication is working like a charm but file locking is not. Is it possible
> to have file locking working in this configuration in a way that Microsoft
> Office 2010
> behaves as if the files were on the same server? Does somebody have such
> a configuration?
> I tried a lot of the Samba configurations found on the mailing list but none
> showed the expected results.
> 
> Are you using Samba with libgfapi? I am not sure if I understand your
> expectation on locking through Samba. Some more context would be nice.
> 
> -Lala
> 
> 
> 
> Thank you
> 
> 
> ___
> Gluster-users mailing list Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Antw: Re: MS Office/Samba/Glusterfs/Centos

2014-01-23 Thread Paul Robert Marino
Are you using CTDB on a shared gluster volume instead of TDB on a local volume, which is Samba's default?

If not, this may explain your issue, because Samba stores, or at least did at one time store, spinlocks in the TDB for speed.

-- Sent from my HP Pre3

On Jan 23, 2014 10:31, Adrian Valeanu wrote:

Hi,
I am trying to replicate some data that resides on two CentOS servers. The replicated data should be shared with the users via Samba. It should be avoided that two users
try to work on the same file using MS Office. If the users access the same file on one server, one of them is not able to modify the file. This is possible (and the normal operation)
if one uses one server with Samba. I assumed that this kind of lock would be replicated too.

I did not know about libgfapi and had not used it yesterday. I used the fuse-mounted directory as the data source for Samba.
 
Last night I updated both CentOS servers. They are CentOS 6.5 now. Now locking does not happen at all any more. After you mentioned libgfapi I found this:
https://www.mail-archive.com/gluster-users@gluster.org/msg13033.html
 
I managed to compile the module and switched to samba-glusterfs-vfs, but I still have no locking.
My Samba configuration looks like this:
[glusterdata-vfs]
   vfs object = glusterfs
   glusterfs:volume = gv0
   path = /
   glusterfs:loglevel = 2
   glusterfs:logfile = /var/log/samba/glusterdata-vfs.log

   read only = no
   browseable = yes
   guest ok = no
   printable = no
   nt acl support = yes
   acl map full control = yes

Thank you for your attention.

>>> Lalatendu Mohanty  22.01.2014 16:10 >>>
On 01/22/2014 08:30 PM, Adrian Valeanu wrote:


Hi,
I have set up glusterfs 3.4.2 over a 10Gig xfs filesystem on two CentOS 6 servers. The gluster filesystem is shared through Samba on both servers.
Replication is working like a charm but file locking is not. Is it possible to have file locking working in this configuration in a way that Microsoft Office 2010
behaves as if the files were on the same server? Does somebody have such a configuration?
I tried a lot of the Samba configurations found on the mailing list but none showed the expected results.

Are you using Samba with libgfapi? I am not sure if I understand your expectation on locking through Samba. Some more context would be nice.

-Lala

Thank you
 ___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster and kvm livemigration

2014-01-23 Thread BGM
Hi Paul,
thanks, nice report.
Did you file the bug?
Can you run
watch tree -pfungiA 
on both hosts,
with some VMs running, some stopped?
Start a machine,
trigger the migration:
at some point, the ownership of the vm image file flips from
libvirt-qemu (running machine) to root (normal permission, but only when stopped).
If the ownership/permission flips that way, 
libvirtd on the receiving side
can't write that file ... 
Does group/ACL permission flip likewise?
Regards
Bernhard
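
The check being suggested amounts to something like this (the image directory and file names are placeholders; the tree flags print permissions, full paths, user and group):

```shell
# On both hosts, watch the ownership/permissions of the image files
# while a migration runs (the directory path is an example):
watch -n1 tree -pfungiA /var/lib/libvirt/images

# Or poll the single image file being migrated with stat:
watch -n1 stat -c '%U:%G %A %n' /var/lib/libvirt/images/atom01.img
```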

On 23.01.2014, at 16:49, Paul Boven  wrote:

> Hi Bernhard,
> 
> I'm having exactly the same problem on Ubuntu 13.04 with the 3.4.1 packages 
> from semiosis. It worked fine with glusterfs-3.4.0.
> 
> We've been trying to debug this on the list, but haven't found the smoking 
> gun yet.
> 
> Please have a look at the URL below, and see if it matches what you are 
> experiencing?
> 
> http://epboven.home.xs4all.nl/gluster-migrate.html
> 
> Regards, Paul Boven.
> 
> On 01/23/2014 04:27 PM, Bernhard Glomm wrote:
>> 
>> I had/have problems with live-migrating a virtual machine on a 2-sided
>> replica volume.
>> 
>> I run ubuntu 13.04 and gluster 3.4.2 from semiosis
>> 
>> 
>> with network.remote-dio enabled I can use "cache mode = none" as the
>> performance option for the virtual disks,
>> 
>> so live migration works without "--unsafe"
>> 
>> I'm triggering the migration now through the "Virtual Machine Manager" as an
>> 
>> unprivileged user which is group member of libvirtd.
>> 
>> 
>> After migration the disks become read-only because
>> 
>> on migration the disk file changes ownership from
>> 
>> libvirt-qemu to root
>> 
>> 
>> What am I missing?
>> 
>> 
>> TIA
>> 
>> 
>> Bernhard
>> 
>> 
>> 
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 
> 
> -- 
> Paul Boven  +31 (0)521-596547
> Unix/Linux/Networking specialist
> Joint Institute for VLBI in Europe - www.jive.nl
> VLBI - It's a fringe science
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster and kvm livemigration

2014-01-23 Thread Vijay Bellur

On 01/23/2014 09:19 PM, Paul Boven wrote:

Hi Bernhard,

I'm having exactly the same problem on Ubuntu 13.04 with the 3.4.1
packages from semiosis. It worked fine with glusterfs-3.4.0.

We've been trying to debug this on the list, but haven't found the
smoking gun yet.

Please have a look at the URL below, and see if it matches what you are
experiencing?

http://epboven.home.xs4all.nl/gluster-migrate.html



I think it would be a good idea to track this as a bug report. Would it 
be possible to open a new bug at [1] with client and server log files?


Thanks,
Vijay

[1] https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS share authentication?

2014-01-23 Thread Peter B.
On 01/22/2014 10:47 PM, Dan Mons wrote:
> Others have made comments about separate networks, and that would
> probably be your best bet.  Gluster does technically listen on all
> interfaces, but with appropriate physical networking setup (completely
> separate network ranges on physically separate interfaces or VLANs)
> you could circumvent security issues there.

In my use case, you're right:
The error on my side was to assume that "in order to be able to access
the gluster storage, one must be able to reach it on the network".
I can restrict glusterfs to a certain NIC/VLAN and only connect the NIC
with the Samba share to the LAN.
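
A sketch of that separation (interface names, addresses and the volume name are assumptions):

```shell
# glusterd.vol on each server: bind the daemon to the storage VLAN only
#   option transport.socket.bind-address 10.10.10.11

# Optionally also restrict which client addresses may mount the volume:
gluster volume set gv0 auth.allow 10.10.10.*

# smb.conf: expose the share on the LAN-facing interface only
#   interfaces = eth0 10.0.0.0/24
#   bind interfaces only = yes
```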


Thanks to all of you for all your suggestions!
:)


Regards,
Pb
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster and kvm livemigration

2014-01-23 Thread Paul Boven

Hi Bernhard,

I'm having exactly the same problem on Ubuntu 13.04 with the 3.4.1 
packages from semiosis. It worked fine with glusterfs-3.4.0.


We've been trying to debug this on the list, but haven't found the 
smoking gun yet.


Please have a look at the URL below, and see if it matches what you are 
experiencing?


http://epboven.home.xs4all.nl/gluster-migrate.html

Regards, Paul Boven.

On 01/23/2014 04:27 PM, Bernhard Glomm wrote:


I had/have problems with live-migrating a virtual machine on a 2-sided
replica volume.

I run ubuntu 13.04 and gluster 3.4.2 from semiosis


with network.remote-dio enabled I can use "cache mode = none" as the
performance option for the virtual disks,

so live migration works without "--unsafe"

I'm triggering the migration now through the "Virtual Machine Manager" as an

unprivileged user which is group member of libvirtd.


After migration the disks become read-only because

on migration the disk file changes ownership from

libvirt-qemu to root


What am I missing?


TIA


Bernhard



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users




--
Paul Boven  +31 (0)521-596547
Unix/Linux/Networking specialist
Joint Institute for VLBI in Europe - www.jive.nl
VLBI - It's a fringe science
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster and kvm livemigration

2014-01-23 Thread Bernhard Glomm


I had/have problems with live-migrating a virtual machine on a 2-sided replica 
volume. I run ubuntu 13.04 and gluster 3.4.2 from semiosis.
With network.remote-dio enabled I can use "cache mode = none" as the performance 
option for the virtual disks, so live migration works without "--unsafe".
I'm triggering the migration now through the "Virtual Machine Manager" as an 
unprivileged user which is a group member of libvirtd.
After migration the disks become read-only because on migration the disk file 
changes ownership from libvirt-qemu to root.
What am I missing?
TIA
Bernhard
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Antw: Re: MS Office/Samba/Glusterfs/Centos

2014-01-23 Thread Adrian Valeanu
Hi,
I am trying to replicate some data that resides on two CentOS servers. The 
replicated data should be shared with the users via Samba. It should be avoided 
that two users 
try to work on the same file using MS Office. If the users access the same file 
on one server, one of them is not able to modify the file. This is possible 
(and the normal operation)
if one uses one server with Samba. I assumed that this kind of lock would be 
replicated too.
 
I did not know about libgfapi and had not used it yesterday. I used the 
fuse-mounted directory as the data source for Samba.
 
Last night I updated both CentOS servers. They are CentOS 6.5 now. Now 
locking does not happen at all any more. After you mentioned libgfapi I found 
this:
https://www.mail-archive.com/gluster-users@gluster.org/msg13033.html
 
I managed to compile the module and switched to samba-glusterfs-vfs, but I 
still have no locking.
My Samba configuration looks like this:
[glusterdata-vfs]
   vfs object = glusterfs
   glusterfs:volume = gv0
   path = /
   glusterfs:loglevel = 2
   glusterfs:logfile = /var/log/samba/glusterdata-vfs.log
 
   read only = no
   browseable = yes
   guest ok = no
   printable = no
   nt acl support = yes
   acl map full control = yes
 
Thank you for your attention.


>>> Lalatendu Mohanty  22.01.2014 16:10 >>>
> 
On 01/22/2014 08:30 PM, Adrian Valeanu wrote:


Hi,
I have set up glusterfs 3.4.2 over a 10Gig xfs filesystem on two CentOS 6 
servers. The gluster filesystem is shared through Samba on both servers.
Replication is working like a charm but file locking is not. Is it possible to 
have file locking working in this configuration in a way that Microsoft Office 
2010
behaves as if the files were on the same server? Does somebody have such 
a configuration?
I tried a lot of the Samba configurations found on the mailing list but none 
showed the expected results.

Are you using Samba with libgfapi? I am not sure if I understand your 
expectation on locking through Samba. Some more context would be nice.

-Lala


Thank you 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Several questions from replicas to performance

2014-01-23 Thread Dean Bruhn
Elias, 
I would suggest testing the performance of your bricks directly and 
seeing if you have an issue there; verify all of your building blocks and then 
worry about gluster. XFS is the suggested starting point. A lot of people have 
avoided EXT4 because of some upstream issues that happened with the kernel 
development a while back; my understanding is that it is stable and working, 
but most people still shy away from it because of the previous issue. 

 There is a writeup on ZFS on the wiki, I have not personally used it, but I 
have seen successful reports, and reports of better performance over XFS. Again 
unverified by me. 

- Dean 
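
A minimal version of that brick-level check (the path below is a placeholder; on a real setup you would run the same dd against the XFS brick directory, then against the FUSE mount, and compare the reported rates):

```shell
#!/bin/sh
# Write a test file with dd directly on the brick's filesystem; compare
# the reported rate with the same command run against the gluster mount.
BRICK=${BRICK:-/tmp/brick-test}   # placeholder; use the real brick path
mkdir -p "$BRICK"

# conv=fsync forces the data to disk so the rate isn't just page cache.
dd if=/dev/zero of="$BRICK/ddtest.bin" bs=1M count=64 conv=fsync 2>&1 | tail -n1

SIZE=$(wc -c < "$BRICK/ddtest.bin")
echo "wrote $SIZE bytes"
rm -f "$BRICK/ddtest.bin"
```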



On Jan 23, 2014, at 9:17 AM, Elías David  wrote:

> Thanks Dean, I've tested that this morning and sure enough it's better, but 
> still I think I'm missing something.
> 
> Having replica 2 I tested the same dd if=/dev/zero of=/mnt/data bs=1m 
> count=5 and it took 60 minutes to complete at 14 MB/s
> 
> Running iperf against the server I get something around 936 Mbits/sec, so I'm 
> wondering if there's something wrong with my setup (be it replica 2 or not); 
> could it be xfs? I've read of people who had more luck with ext4, for 
> instance. Could it be lack of tuning of the gluster vol file?
> 
> Thanks again!
> 
> On Jan 23, 2014 10:04 AM, "Dean Bruhn"  wrote:
> Elias,
> Looks like you’ve got your replication setup turned around. The 
> replica count is the number of times you want the data replicated across the 
> volume. So right now, for every file you write, it is being written to 5 
> bricks, I would suspect you want more of a replica 2. You would modify your 
> volume create command to look more like this
> 
> gluster vol create replica 2 server1:/gv0/vol1/data server2:/gv0/vol1/data 
> server3:/gv0/vol1/data server4:/gv0/vol1/data server5:/gv0/vol1/data 
> server1:/gv0/vol2/data server2:/gv0/vol2/data server3:/gv0/vol2/data 
> server4:/gv0/vol2/data server5:/gv0/vol2/data
> 
> - Dean
> 
> 
> 
> 
> On Jan 22, 2014, at 11:33 PM, Elías David  wrote:
> 
> > Hello everyone,
> >
> > The place I work for is starting to look at gluster to replace a current 
> > windows share we have. The amount of files and sizes varies a lot, from 
> > very small (~5kb) to somewhat large (~25GB).
> >
> > Since we're checking if gluster is a viable option for us, and we're still 
> > learning about the filesystem, we're pretty sure that a problem we're 
> > seeing right now is coming from our ignorance.
> >
> > Right now our biggest concern during our tests is a ridiculously low 
> > performance, I'm talking about a 'cp -R /home/user/* /mnt/data' where 
> > /mnt/data is: mount -t glusterfs 192.168.0.10:/VolName /mnt/data that took 
> > something ridiculous like 12 "hours" or more to transfer mere 226GB of data 
> > (from ISOs to documents to flat files).
> >
> > Right now our setup is this:
> > -We have 5 servers (peers) with each having two 2TB disks WD black 
> > formatted with xfs -i size=512, each disk is a brick so we have:
> >
> > 192.168.0.10 disk0 (2TB) on /gv0/vol1
> > 192.168.0.10 disk1 (2TB) on /gv0/vol2
> > 192.168.0.11 disk0 (2TB) on /gv0/vol1
> > 192.168.0.11 disk1 (2TB) on /gv0/vol2
> > and so on...
> >
> > Total: 2TB disk x 10 = 20TB
> >
> > Now, not being really sure yet how replica counts really work, we created a 
> > volume with "replica 5", as in:
> >
> > gluster vol create replica 5 server1:/gv0/vol1/data server2:/gv0/vol1/data 
> > server3:/gv0/vol1/data server4:/gv0/vol1/data server5:/gv0/vol1/data 
> > server1:/gv0/vol2/data server2:/gv0/vol2/data server3:/gv0/vol2/data 
> > server4:/gv0/vol2/data server5:/gv0/vol2/data
> >
> > Vol info goes like this:
> >
> > Volume Name: Data
> > Type: Distributed-Replicate
> > Volume ID: 2c938585-d2bd-43cf-98d8-caab70033750
> > Status: Started
> > Number of Bricks: 2 x 5 = 10
> > Transport-type: tcp
> > Bricks:
> > Brick1: 192.168.0.10:/gv0/vol1/data
> > Brick2: 192.168.0.11:/gv0/vol1/data
> > Brick3: 192.168.0.12:/gv0/vol1/data
> > Brick4: 192.168.0.13:/gv0/vol1/data
> > Brick5: 192.168.0.14:/gv0/vol1/data
> > Brick6: 192.168.0.10:/gv0/vol2/data
> > Brick7: 192.168.0.11:/gv0/vol2/data
> > Brick8: 192.168.0.12:/gv0/vol2/data
> > Brick9: 192.168.0.13:/gv0/vol2/data
> > Brick10: 192.168.0.14:/gv0/vol2/data
> >
> > The servers are not bad really, Intel(R) Xeon(R) CPU X5450  @ 3.00GHz 8 
> > cores, 32 GB of ram and 1GB link for gluster
> >
> > As I said earlier I mounted this vol on another machine in the lan using 
> > 'mount -t glusterfs 192.168.0.10:/Data /mnt/data' and I use a simple cp -R 
> > to put data on it, I also tested with 'dd if=/dev/zero 
> > of=/mnt/data/zerofile bs=1M count=5' and this dd process is running 
> > for about 5 hours now and I'm about to reach 24GB of 50GB file size...
> >
> > I'm pretty sure that this problem is solely caused from our ignorance of 
> > the filesystem and that's why I ask you guys
> >
> > The servers are all running CentOS 6.5, glusterfs-* packages from E

[Gluster-users] Antw: Re: Subject: MS Office/Samba/Glusterfs/Centos

2014-01-23 Thread Adrian Valeanu
Hi,
thank you for your answer. I have seen that thread and I already tried:
posix locking = no
 
and also
 
posix locking = no
kernel oplocks = no
oplocks = no
level2 oplocks = no
But it didn't make any difference. Locking is not working as I expected.
Bye
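 
For anyone trying to reproduce this, the options discussed above would sit in 
smb.conf roughly as follows (a sketch only; the share name and path are 
hypothetical, and "kernel oplocks" is a global option while the others can also 
be set per share):

```ini
[global]
    kernel oplocks = no

[gluster-share]              ; hypothetical share name
    path = /mnt/data         ; hypothetical glusterfs mount
    read only = no
    posix locking = no
    oplocks = no
    level2 oplocks = no
```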
 


>>> "Khoi Mai"  23.01.2014 15:03 >>>
Have you seen this thread?

source: https://bugzilla.redhat.com/show_bug.cgi?id=802423

Gluster-users Using Samba and MSOffice files
http://gluster.org/pipermail/gluster-users/2011-November/009127.html

add "posix locking = no" to the host's /etc/samba/smb.conf

Khoi Mai
Union Pacific Railroad
Distributed Engineering & Architecture
Project Engineer


**

This email and any attachments may contain information that is confidential 
and/or privileged for the sole use of the intended recipient. Any use, review, 
disclosure, copying, distribution or reliance by others, and any forwarding of 
this email or its contents, without the express permission of the sender is 
strictly prohibited by law. If you are not the intended recipient, please 
contact the sender immediately, delete the e-mail and destroy all copies.
**
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Several questions from replicas to performance

2014-01-23 Thread Elías David
Thanks Dean, I've tested that this morning and sure enough is better but
still I think I'm missing something.

Having replica 2 I tested the same dd if=/dev/zero of=/mnt/data bs=1m
count=5 and it took 60 minutes to complete at 14 MB/s

Running iperf against the server I get something around 936 Mbits/sec so
I'm wondering if there's something wrong with my setup (be it replica 2 or
not). Could it be xfs? I've read of people who were luckier with ext4,
for instance. Could it be lack of tuning of the gluster vol file?

Thanks again!
On Jan 23, 2014 10:04 AM, "Dean Bruhn" 
wrote:

> Elias,
> Looks like you've got your replication setup turned around. The
> replica count is the number of times you want the data replicated across
> the volume. So right now, for every file you write, it is being written to
> 5 bricks, I would suspect you want more of a replica 2. You would modify
> your volume create command to look more like this
>
> gluster vol create replica 2 server1:/gv0/vol1/data server2:/gv0/vol1/data
> server3:/gv0/vol1/data server4:/gv0/vol1/data server5:/gv0/vol1/data
> server1:/gv0/vol2/data server2:/gv0/vol2/data server3:/gv0/vol2/data
> server4:/gv0/vol2/data server5:/gv0/vol2/data
>
> - Dean
>
>
>
>
> On Jan 22, 2014, at 11:33 PM, Elías David 
> wrote:
>
> > Hello everyone,
> >
> > The place I work for is starting to look at gluster to replace a current
> windows share we have. The amount of files and sizes varies a lot, from
> very small (~5kb) to somewhat large (~25GB).
> >
> > Since we're checking if gluster is a viable option for us, and we're
> still learning about the filesystem, we're pretty sure that a problem we're
> seeing right now is coming from our ignorance.
> >
> > Right now our biggest concern during our tests is a ridiculously low
> performance, I'm talking about a 'cp -R /home/user/* /mnt/data' where
> /mnt/data is: mount -t glusterfs 192.168.0.10:/VolName /mnt/data that
> took something ridiculous like 12 "hours" or more to transfer mere 226GB of
> data (from ISOs to documents to flat files).
> >
> > Right now our setup is this:
> > -We have 5 servers (peers) with each having two 2TB disks WD black
> formatted with xfs -i size=512, each disk is a brick so we have:
> >
> > 192.168.0.10 disk0 (2TB) on /gv0/vol1
> > 192.168.0.10 disk1 (2TB) on /gv0/vol2
> > 192.168.0.11 disk0 (2TB) on /gv0/vol1
> > 192.168.0.11 disk1 (2TB) on /gv0/vol2
> > and so on...
> >
> > Total: 2TB disk x 10 = 20TB
> >
> > Now, not being really sure yet how replica counts really work, we
> created a volume with "replica 5", as in:
> >
> > gluster vol create replica 5 server1:/gv0/vol1/data
> server2:/gv0/vol1/data server3:/gv0/vol1/data server4:/gv0/vol1/data
> server5:/gv0/vol1/data server1:/gv0/vol2/data server2:/gv0/vol2/data
> server3:/gv0/vol2/data server4:/gv0/vol2/data server5:/gv0/vol2/data
> >
> > Vol info goes like this:
> >
> > Volume Name: Data
> > Type: Distributed-Replicate
> > Volume ID: 2c938585-d2bd-43cf-98d8-caab70033750
> > Status: Started
> > Number of Bricks: 2 x 5 = 10
> > Transport-type: tcp
> > Bricks:
> > Brick1: 192.168.0.10:/gv0/vol1/data
> > Brick2: 192.168.0.11:/gv0/vol1/data
> > Brick3: 192.168.0.12:/gv0/vol1/data
> > Brick4: 192.168.0.13:/gv0/vol1/data
> > Brick5: 192.168.0.14:/gv0/vol1/data
> > Brick6: 192.168.0.10:/gv0/vol2/data
> > Brick7: 192.168.0.11:/gv0/vol2/data
> > Brick8: 192.168.0.12:/gv0/vol2/data
> > Brick9: 192.168.0.13:/gv0/vol2/data
> > Brick10: 192.168.0.14:/gv0/vol2/data
> >
> > The servers are not bad really, Intel(R) Xeon(R) CPU X5450  @ 3.00GHz 8
> cores, 32 GB of ram and 1GB link for gluster
> >
> > As I said earlier I mounted this vol on another machine in the lan using
> 'mount -t glusterfs 192.168.0.10:/Data /mnt/data' and I use a simple cp
> -R to put data on it, I also tested with 'dd if=/dev/zero
> of=/mnt/data/zerofile bs=1M count=5' and this dd process is running
> for about 5 hours now and I'm about to reach 24GB of 50GB file size...
> >
> > I'm pretty sure that this problem is solely caused from our ignorance of
> the filesystem and that's why I ask you guys
> >
> > The servers are all running CentOS 6.5, glusterfs-* packages from EPEL
> repo, glusterfs version 3.4.2
> >
> > Another question I would like to add if I may is that, I run df -h after
> mounting the volume and I'm seeing as total volume capacity of 3.7
> terabytes when I expected something like 10 terabytes given 10 2TB disks in
> a replica 5 setup, is this normal or I'm misunderstanding the replica count
> thing?
> >
> > That's it, sorry for the long message just wanted to be clear, any input
> or info about this would be greatly appreciated.
> >
> > Thanks!
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>
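
The numbers in the message above are internally consistent, which points at a 
structural ceiling rather than a misconfiguration. A rough sketch of the 
arithmetic, assuming the native FUSE client sends each write to every replica 
over the same 1 GbE link (which is how client-side AFR replication behaves; 
the 936 Mbit/s figure is the iperf result from the thread):

```python
GBE_MBS = 936 / 8  # iperf result converted to megabytes/second, ~117 MB/s

def max_client_write_mbs(link_mbs: float, replica: int) -> float:
    """With client-side replication, each write is sent to all replicas,
    so usable write bandwidth is roughly link bandwidth / replica count."""
    return link_mbs / replica

# Ceiling at replica 5 (the original setup) vs replica 2.
print(round(max_client_write_mbs(GBE_MBS, 5), 1))  # ~23.4 MB/s
print(round(max_client_write_mbs(GBE_MBS, 2), 1))  # ~58.5 MB/s

# Sanity check: 60 minutes at the observed 14 MB/s is ~50 GB written,
# matching the dd test size reported above.
print(round(14 * 60 * 60 / 1000, 1))  # ~50.4 GB
```

The observed 14 MB/s is below even the replica-2 ceiling, so disk or tuning may 
still be a factor, but the replica count bounds what the link can ever deliver.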

Re: [Gluster-users] Puppet-Gluster, Vagrant, and Gluster Automation: Spotlight on James Shubin

2014-01-23 Thread John Walker
If anyone wants to be in the peanut gallery for today's spotlight, and I didn't 
send you a hangout invite, let me know.

On Jan 22, 2014 10:43 AM, John Mark Walker  wrote:
>
> Just a note that this was delayed until tomorrow at 3pm EST/noon PST/20:00 
> GMT, due to my 
> flight cancelation and rescheduling. 

-JM


- Original Message -
> 
> Date and Time: Wed, January 22, 2:00 PM EST/11:00am PST/19:00 GMT
> 
> Live and recorded video available at
> http://www.gluster.org/2014/01/gluster-hangout-james-shubin/
> 
> 
> James Shubin is known in the Gluster community for his work on the
> Puppet-Gluster module (https://forge.gluster.org/puppet-gluster/ )
> 
> Recently, he's begun to create powerful cocktails of Puppet and Vagrant to
> create recipes for automated Gluster deployments. See, eg.
> 
> https://ttboj.wordpress.com/2014/01/20/building-base-images-for-vagrant-with-a-makefile/
> 
> and
> 
> https://ttboj.wordpress.com/2014/01/16/testing-glusterfs-during-glusterfest/
> 
> This will be quite a fun hangout, and very much worth your while. As usual,
> follow along with the live broadcast in #gluster-meeting on the freenode IRC
> network.
> 
> ___
> Announce mailing list
> annou...@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/announce
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 


Re: [Gluster-users] Several questions from replicas to performance

2014-01-23 Thread Dean Bruhn
Elias, 
Looks like you've got your replication setup turned around. The replica 
count is the number of times you want the data replicated across the volume. So 
right now, for every file you write, it is being written to 5 bricks, I would 
suspect you want more of a replica 2. You would modify your volume create 
command to look more like this

gluster vol create replica 2 server1:/gv0/vol1/data server2:/gv0/vol1/data 
server3:/gv0/vol1/data server4:/gv0/vol1/data server5:/gv0/vol1/data 
server1:/gv0/vol2/data server2:/gv0/vol2/data server3:/gv0/vol2/data 
server4:/gv0/vol2/data server5:/gv0/vol2/data

- Dean 
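
The capacity question in the quoted message below follows from the same rule: 
usable space is raw space divided by the replica count, and df reports binary 
terabytes (TiB), so 20 TB of raw disk at replica 5 lands at roughly the 3.7T 
the original poster saw. A small sketch of that arithmetic:

```python
TB = 1000 ** 4   # decimal terabyte, as disks are sold
TIB = 1024 ** 4  # binary tebibyte, as df -h reports

def usable_tib(raw_bytes: int, replica: int) -> float:
    """Usable capacity of a distributed-replicated volume: every file is
    stored `replica` times, so raw capacity is divided by the replica count."""
    return raw_bytes / replica / TIB

raw = 10 * 2 * TB                    # ten 2 TB bricks
print(round(usable_tib(raw, 5), 1))  # replica 5 -> ~3.6 TiB (df shows ~3.7T)
print(round(usable_tib(raw, 2), 1))  # replica 2 -> ~9.1 TiB
```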




On Jan 22, 2014, at 11:33 PM, Elías David  wrote:

> Hello everyone,
> 
> The place I work for is starting to look at gluster to replace a current 
> windows share we have. The amount of files and sizes varies a lot, from very 
> small (~5kb) to somewhat large (~25GB).
> 
> Since we're checking if gluster is a viable option for us, and we're still 
> learning about the filesystem, we're pretty sure that a problem we're seeing 
> right now is coming from our ignorance.
> 
> Right now our biggest concern during our tests is a ridiculously low 
> performance, I'm talking about a 'cp -R /home/user/* /mnt/data' where 
> /mnt/data is: mount -t glusterfs 192.168.0.10:/VolName /mnt/data that took 
> something ridiculous like 12 "hours" or more to transfer mere 226GB of data 
> (from ISOs to documents to flat files).
> 
> Right now our setup is this:
> -We have 5 servers (peers) with each having two 2TB disks WD black formatted 
> with xfs -i size=512, each disk is a brick so we have:
> 
> 192.168.0.10 disk0 (2TB) on /gv0/vol1
> 192.168.0.10 disk1 (2TB) on /gv0/vol2
> 192.168.0.11 disk0 (2TB) on /gv0/vol1
> 192.168.0.11 disk1 (2TB) on /gv0/vol2
> and so on...
> 
> Total: 2TB disk x 10 = 20TB
> 
> Now, not being really sure yet how replica counts really work, we created a 
> volume with "replica 5", as in:
> 
> gluster vol create replica 5 server1:/gv0/vol1/data server2:/gv0/vol1/data 
> server3:/gv0/vol1/data server4:/gv0/vol1/data server5:/gv0/vol1/data 
> server1:/gv0/vol2/data server2:/gv0/vol2/data server3:/gv0/vol2/data 
> server4:/gv0/vol2/data server5:/gv0/vol2/data
> 
> Vol info goes like this:
> 
> Volume Name: Data
> Type: Distributed-Replicate
> Volume ID: 2c938585-d2bd-43cf-98d8-caab70033750
> Status: Started
> Number of Bricks: 2 x 5 = 10
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.0.10:/gv0/vol1/data
> Brick2: 192.168.0.11:/gv0/vol1/data
> Brick3: 192.168.0.12:/gv0/vol1/data
> Brick4: 192.168.0.13:/gv0/vol1/data
> Brick5: 192.168.0.14:/gv0/vol1/data
> Brick6: 192.168.0.10:/gv0/vol2/data
> Brick7: 192.168.0.11:/gv0/vol2/data
> Brick8: 192.168.0.12:/gv0/vol2/data
> Brick9: 192.168.0.13:/gv0/vol2/data
> Brick10: 192.168.0.14:/gv0/vol2/data
> 
> The servers are not bad really, Intel(R) Xeon(R) CPU X5450  @ 3.00GHz 8 
> cores, 32 GB of ram and 1GB link for gluster
> 
> As I said earlier I mounted this vol on another machine in the lan using 
> 'mount -t glusterfs 192.168.0.10:/Data /mnt/data' and I use a simple cp -R to 
> put data on it, I also tested with 'dd if=/dev/zero of=/mnt/data/zerofile 
> bs=1M count=5' and this dd process is running for about 5 hours now and 
> I'm about to reach 24GB of 50GB file size...
> 
> I'm pretty sure that this problem is solely caused from our ignorance of the 
> filesystem and that's why I ask you guys
> 
> The servers are all running CentOS 6.5, glusterfs-* packages from EPEL repo, 
> glusterfs version 3.4.2
> 
> Another question I would like to add if I may is that, I run df -h after 
> mounting the volume and I'm seeing as total volume capacity of 3.7 terabytes 
> when I expected something like 10 terabytes given 10 2TB disks in a replica 5 
> setup, is this normal or I'm misunderstanding the replica count thing?
> 
> That's it, sorry for the long message just wanted to be clear, any input or 
> info about this would be greatly appreciated.
> 
> Thanks!
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users



Re: [Gluster-users] Subject: MS Office/Samba/Glusterfs/Centos

2014-01-23 Thread Khoi Mai
Have you seen this thread?

source: https://bugzilla.redhat.com/show_bug.cgi?id=802423

Gluster-users Using Samba and MSOffice files
http://gluster.org/pipermail/gluster-users/2011-November/009127.html

add "posix locking = no" to the host's /etc/samba/smb.conf

Khoi Mai
Union Pacific Railroad
Distributed Engineering & Architecture
Project Engineer




Re: [Gluster-users] intresting issue of replication and self-heal

2014-01-23 Thread Mingfan Lu
I profiled node22 and found that most latency comes from setxattr, while on
node23 & node22 it comes from lookup and locks. Could anyone help?

 %-latency   Avg-latency   Min-Latency     Max-Latency     No. of calls         Fop
 ---------   -----------   -----------     -----------     ------------         ---
      0.00       0.00 us       0.00 us         0.00 us          2437540      FORGET
      0.00       0.00 us       0.00 us         0.00 us           252684     RELEASE
      0.00       0.00 us       0.00 us         0.00 us          2226292  RELEASEDIR
      0.00      38.00 us      37.00 us        40.00 us                4   FGETXATTR
      0.00      66.16 us      15.00 us     13139.00 us              596    GETXATTR
      0.00     239.14 us      58.00 us    126477.00 us             1967        LINK
      0.00      51.85 us      14.00 us      8298.00 us            19045        STAT
      0.00     165.50 us       9.00 us    212057.00 us            20544     READDIR
      0.00    1827.92 us     184.00 us    150298.00 us             2084      RENAME
      0.00      49.14 us      12.00 us      5908.00 us           189019      STATFS
      0.00      84.63 us      14.00 us     96016.00 us           163405        READ
      0.00   29968.76 us     156.00 us   1073902.00 us             3115      CREATE
      0.00    1340.25 us       6.00 us   7415357.00 us           248141       FLUSH
      0.00    1616.76 us      32.00 us  13865122.00 us           229190   FTRUNCATE
      0.01    1807.58 us      19.00 us  55480776.00 us           249569        OPEN
      0.01    1875.11 us      10.00 us   8842171.00 us           465197       FSTAT
      0.05  393296.28 us      52.00 us  56856581.00 us             9057      UNLINK
      0.07   32291.01 us     192.00 us   9638107.00 us           156081       RMDIR
      0.08   18339.18 us     140.00 us   5313885.00 us           337862       MKNOD
      0.09    2904.39 us      18.00 us  51724741.00 us          2226290     OPENDIR
      0.15    4708.15 us      27.00 us  55115760.00 us          2334864    SETXATTR
      0.18    8965.91 us      68.00 us  26465968.00 us          1513280    FXATTROP
      0.21    3465.29 us      74.00 us  58580783.00 us          4506602     XATTROP
      0.28    4801.16 us      44.00 us  49643138.00 us          4436847    READDIRP
      0.37    5935.92 us       7.00 us  56449083.00 us          4611760     ENTRYLK
      1.02    4226.58 us      33.00 us  63494729.00 us         18092335       WRITE
      1.50    2734.50 us       6.00 us 185109908.00 us         40971541     INODELK
      4.75  348602.30 us       5.00 us 2185602946.00 us         1019332    FINODELK
     14.98   33957.49 us      14.00 us  59261447.00 us         32998211      LOOKUP
     26.30  807063.74 us     150.00 us  68086266.00 us          2438422       MKDIR
     49.95  457402.30 us      20.00 us  67894186.00 us          8171751     SETATTR

Duration: 353678 seconds
   Data Read: 21110920120 bytes
Data Written: 2338403381483 bytes
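
One way to read a profile dump like the one above is to rank fops by total time 
(average latency times call count) rather than eyeballing columns; the 
%-latency column is derived exactly that way. A sketch using the three largest 
node22 entries, with the values copied from the dump:

```python
# (fop, avg latency in microseconds, number of calls) from the node22 profile
fops = [
    ("LOOKUP",   33957.49, 32998211),
    ("MKDIR",   807063.74,  2438422),
    ("SETATTR", 457402.30,  8171751),
]

# Total time spent inside each fop, converted to seconds.
totals = {name: avg_us * calls / 1e6 for name, avg_us, calls in fops}

# Sorted by total time: SETATTR dominates, then MKDIR, then LOOKUP,
# matching the 49.95 / 26.30 / 14.98 %-latency figures in the dump.
for name in sorted(totals, key=totals.get, reverse=True):
    print(name, round(totals[name]))
```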

here is  node23
 %-latency   Avg-latency   Min-Latency     Max-Latency     No. of calls         Fop
 ---------   -----------   -----------     -----------     ------------         ---
      0.00       0.00 us       0.00 us         0.00 us         22125898      FORGET
      0.00       0.00 us       0.00 us         0.00 us         89286732     RELEASE
      0.00       0.00 us       0.00 us         0.00 us         32865496  RELEASEDIR
      0.00      35.50 us      23.00 us        48.00 us                2   FGETXATTR
      0.00     164.04 us      29.00 us    749181.00 us            39320   FTRUNCATE
      0.00     483.71 us       8.00 us   2688755.00 us            39288          LK
      0.00     419.61 us      48.00 us   2183971.00 us           274939        LINK
      0.00     970.55 us     145.00 us   2471745.00 us           293435      RENAME
      0.00    1346.63 us      35.00 us   4462970.00 us           243238     SETATTR
      0.01     285.51 us      25.00 us   2588685.00 us          3459436    SETXATTR
      0.03     323.11 us       5.00 us   2074581.00 us          6977304     READDIR
      0.05   12200.60 us      84.00 us   3943421.00 us           287979       RMDIR
      0.07     592.75 us       7.00 us   3592073.00 us          8129847        STAT
      0.07    6938.50 us      49.00 us   3268036.00 us           705818      UNLINK
      0.08   19468.78 us     149.00 us   3664022.00 us           276310       MKNOD
      0.09     763.31 us       8.00 us   3396903.00 us          8731725      STATFS
      0.09    1715.79 us       4.00 us   5626912.00 us          3902746       FLUSH
      0.10    4614.74 us       9.00 us   5835691.00 us          1574923       FSTAT
      0.10    1189.55 us      13.00 us   6043163.00 us          6129885     OPENDIR
      0.10   19729.66 us     131.00 us   4112832.00 us           376286      CREATE
      0.13     328.26 us      24.00 us   2410049.00 us         29091424       WRITE
      0.20    2107.64 us      10.00 us   5765196.00 us          6675496    GETXATTR
      0.28    5317.38 us      14.00 us   7549301.00 us          3798543        OPEN
      0.71    7042.79 us      47.00 us   5848284.00 us          7125716    READDIRP
      0.80     743.88 us      10.00 us   7979373.00 us         76781383        READ
      0.93    1802.29 us      60.00 us  11040319.00 us         36501360    FXATTROP
      1.76   36083.12 us     141.00 us   3548175.00 us          3458135       MKDIR
      1.83    5046.35 us      70.00 us   8120221.00 us          2576561