Re: [Gluster-users] help, avoid glusterfs client from occupying rsync port 873

2013-03-12 Thread 符永涛
I have finished a patch on my local repo based on the glusterfs 3.3.0.5rhs
source rpm. The patch exposes the client-bind-insecure option to the fuse
mount.
To use it, the command looks like the following:
mount -t glusterfs -o client-bind-insecure <server>:<volname> <mountpoint>
This involves very few changes on the glusterfs client side and works for me now.
On my client machine there are several volumes, every volume has
tens of bricks, and local port conflicts keep happening.
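
For illustration only - the server, volume and mount point names below are
placeholders, not my actual setup:

    mount -t glusterfs -o client-bind-insecure gfs1.example.com:/vol0 /mnt/vol0
    # equivalent /etc/fstab line, assuming the patched mount.glusterfs
    # passes the option through:
    # gfs1.example.com:/vol0  /mnt/vol0  glusterfs  defaults,_netdev,client-bind-insecure  0 0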

2013/3/12, John Mark Walker :
> Was looking through your steps - it would be interesting to see if there are
> any unforeseen ramifications from doing this.
>
> Hoping to see some others chime in who've tried it.
>
> -JM
>
>
> - Original Message -
>> Given that the above steps are too complex and it's not easy to maintain
>> volume files, especially if the volume's backend bricks change, I intend
>> to make it a patch and make it configurable only through a mount option
>> on the client side. Does anybody agree with me?
>> Thank you.
>>
>> 2013/3/12, 符永涛 :
>> > Finally I found the answer; it's not very convenient, but it is doable.
>> > 1. sudo gluster volume set <volname> server.allow-insecure on
>> > 2. get the volume file from the glusterfs backend server;
>> > the file is <volname>-fuse.vol
>> > 3. edit the volume file: add the option client-bind-insecure and set its
>> > value to on for every protocol/client xlator
>> > 4. mount the volume using the customized volume file above:
>> > /usr/sbin/glusterfs --volfile=<volfile> --volfile-id=<volume>
>> > <mountpoint>
>> > Then the glusterfs client will use a local port larger than
>> > 1024 and the port conflict issue can be resolved.
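
For illustration, a protocol/client stanza in the <volname>-fuse.vol file with
this option added might look roughly like this (the volume, host and brick
names here are placeholders):

    volume vol0-client-0
        type protocol/client
        option remote-host gfs1.example.com
        option remote-subvolume /export/brick0
        option transport-type tcp
        option client-bind-insecure on
    end-volume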
>> >
>> > 2013/3/12, 符永涛 :
>> >> Dear gluster experts, any suggestions? Thank you very much.
>> >>
>> >> 2013/3/11, 符永涛 :
>> >>> Dear gluster experts,
>> >>> I recently ran into a problem where the glusterfs client process
>> >>> occupies rsync port 873. Since there are several volumes on our
>> >>> client machines, port conflicts may occur. Is there any easy way
>> >>> to solve this problem? Thank you very much.
>> >>>
>> >>> --
>> >>> 符永涛
>> >>>
>> >>
>> >>
>> >> --
>> >> 符永涛
>> >>
>> >
>> >
>> > --
>> > 符永涛
>> >
>>
>>
>> --
>> 符永涛
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>


-- 
符永涛
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Can't get gluster to mount on boot

2013-03-12 Thread Marcus Bointon
This is something I've been trying to get to work for the last year or so, but 
have still not found a working solution.

I have two servers in a simple replication config of a single volume running 
gluster 3.3.0 (I'm avoiding 3.3.1 because of its known NFS issues) on Ubuntu 
Lucid. As well as hosting a gluster server, each mounts the gluster volume via 
NFS over tcp on localhost. There is also a third server that mounts the NFS 
volume but is not running any gluster code (neither client nor server), and that 
displays exactly the same symptoms, so I don't think running gluster server or 
accessing via localhost is causing the problem.

Firstly, a manual mount works fine, so I know that the NFS settings, paths, 
permissions and the gluster volume itself are ok.

If I try to mount via fstab (even with _netdev), it hangs the machine on boot 
and it must be put into single-user mode in order to restore it, so I'm not 
trying that approach for now.
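
For reference, the kind of fstab entry in question looks roughly like this
(mirroring the NFS options from the autofs map below; a sketch rather than the
exact line):

    127.0.0.1:/shared  /var/lib/sitedata/aegir  nfs  vers=3,tcp,hard,intr,noexec,nosuid,nodev,rsize=32768,wsize=32768,noatime,_netdev  0  0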

This list suggested using autofs, so I've been trying that, with no success. 
I've created a /etc/auto.nfs file containing this:

/var/lib/sitedata/aegir -fstype=nfs,vers=3,hard,noexec,nosuid,nodev,rsize=32768,wsize=32768,intr,noatime,mountproto=tcp 127.0.0.1:/shared

and referred to it from /etc/auto.master, and commented out the '+auto.master' 
line, as suggested in docs I found (it doesn't work either way). autofs 
does do *something* here, but it appears to completely ignore NFS and instead 
creates and rebinds /shared locally at /var/lib/sitedata/aegir. I've tried with 
a minimal options list (just -fstype=nfs,mountproto=tcp) but it doesn't help.
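
For completeness: since the key in auto.nfs is an absolute path it is a direct
map, so the auto.master reference is a single line along these lines (assuming
the map file is /etc/auto.nfs):

    /-  /etc/auto.nfs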

gluster peer status and gluster volume status both say all is present and 
correct with respect to the underlying gluster volume. The only anomaly I am 
seeing is this happening once per second in syslog:

Mar 12 17:09:49 web2 init: glusterd main process ended, respawning
Mar 12 17:09:50 web2 init: glusterd main process (20365) terminated with status 
255

I've googled that and not found anything at all.

How does everyone else do automounts?

Marcus
-- 
Marcus Bointon
Synchromedia Limited: Creators of http://www.smartmessages.net/
UK info@hand CRM solutions
mar...@synchromedia.co.uk | http://www.synchromedia.co.uk/

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] help, avoid glusterfs client from occupying rsync port 873

2013-03-12 Thread John Mark Walker
Was looking through your steps - it would be interesting to see if there are 
any unforeseen ramifications from doing this. 

Hoping to see some others chime in who've tried it. 

-JM


- Original Message -
> Given that the above steps are too complex and it's not easy to maintain
> volume files, especially if the volume's backend bricks change, I intend
> to make it a patch and make it configurable only through a mount option
> on the client side. Does anybody agree with me?
> Thank you.
> 
> 2013/3/12, 符永涛 :
> > Finally I found the answer; it's not very convenient, but it is doable.
> > 1. sudo gluster volume set <volname> server.allow-insecure on
> > 2. get the volume file from the glusterfs backend server;
> > the file is <volname>-fuse.vol
> > 3. edit the volume file: add the option client-bind-insecure and set its
> > value to on for every protocol/client xlator
> > 4. mount the volume using the customized volume file above:
> > /usr/sbin/glusterfs --volfile=<volfile> --volfile-id=<volume>
> > <mountpoint>
> > Then the glusterfs client will use a local port larger than
> > 1024 and the port conflict issue can be resolved.
> >
> > 2013/3/12, 符永涛 :
> >> Dear gluster experts, any suggestions? Thank you very much.
> >>
> >> 2013/3/11, 符永涛 :
> >>> Dear gluster experts,
> >>> I recently ran into a problem where the glusterfs client process
> >>> occupies rsync port 873. Since there are several volumes on our
> >>> client machines, port conflicts may occur. Is there any easy way
> >>> to solve this problem? Thank you very much.
> >>>
> >>> --
> >>> 符永涛
> >>>
> >>
> >>
> >> --
> >> 符永涛
> >>
> >
> >
> > --
> > 符永涛
> >
> 
> 
> --
> 符永涛
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS performance

2013-03-12 Thread Torbjørn Thorsen
That is the same transfer rate I'm seeing using O_SYNC writes or using
a Gluster-backed file as loop device for a Xen instance.

I just did a quick "benchmark" (I just installed and tested, no
tuning) between Gluster + loop device, Ceph-RBD and NFS, and saw
pretty much the same transfer speed for all three technologies.
It was a bit faster over NFS, by about 20%, but that comparison might not be
fair considering the features of Gluster and Ceph.

Writes that are easier to buffer, such as a straight cp, are able to top
out both NICs, giving write speeds of around 100MB/s.
Using dd with conv=sync, or running sync, I see blocking, presumably
waiting for the buffers to flush.
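
A minimal version of the two cases, with /mnt/gluster standing in for the
actual FUSE mount point:

    # buffered write - the page cache absorbs it and both NICs stay busy
    dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=1024
    # direct (unbuffered) write - each block waits on the network round trip
    dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=1024 oflag=direct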

I'm not at all sure if these numbers reflect the potential performance
of the hardware or software, but the numbers seem consistent and maybe
not all that unreasonable.

On Tue, Mar 12, 2013 at 9:56 AM, Nikita A Kardashin
 wrote:
> Hello,
>
> I found another strange thing.
>
> On the dd-test (dd if=/dev/zero of=2testbin bs=1M count=1024 oflag=direct)
> my volume shows only 18-19MB/s.
> Full network speed is 90-110MB/s, storage speed - ~200MB/s.
>
> Volume type - replicated-distributed, 2 replicas, 4 nodes. Volumes mounted
> via fuse with direct-io=enable option.
>
> It's sooo slooow, right?
>
>
> 2013/3/5 harry mangalam 
>>
>> This kind of info is surprisingly hard to obtain.  The gluster docs do
>> contain
>> some of it, ie:
>>
>> 
>>
>> I also found well-described kernel tuning parameters in the FHGFS wiki (as
>> another distributed fs, they share some characteristics)
>>
>> http://www.fhgfs.com/wiki/wikka.php?wakka=StorageServerTuning
>>
>> and more XFS tuning filesystem params here:
>>
>> 
>>
>> and here:
>> <http://www.mysqlperformanceblog.com/2011/12/16/setting-up-xfs-the-simple-edition>
>>
>> But of course, YMMV and a number of these parameters conflict and/or have
>> serious tradeoffs, as you discovered.
>>
>> LSI recently loaned me a Nytro SAS controller (on-card SSD-cached) which
>> seems
>> pretty phenomenal on a single brick (and is predicted to perform well
>> based on
>> their profiling), but am waiting for another node to arrive before I can
>> test
>> it under true gluster conditions.  Anyone else tried this hardware?
>>
>> hjm
>>
>> On Tuesday, March 05, 2013 12:34:41 PM Nikita A Kardashin wrote:
>> > Hello all!
>> >
>> > This problem was solved by me today.
>> > The root of it all is the incompatibility between the gluster cache and the KVM cache.
>> >
>> > The bug reproduces if a KVM virtual machine is created with the
>> > cache=writethrough option (the default for OpenStack) and hosted on a
>> > GlusterFS volume. If any other cache mode is used (cache=writeback, or
>> > cache=none with direct-io), write performance to an existing file inside
>> > the VM is equal to the bare-storage write performance (from the host
>> > machine).
>> >
>> > I think this should be documented in Gluster, and maybe a bug should be filed.
>> >
>> > Another question: where can I read something about gluster tuning (optimal
>> > cache size, write-behind, flush-behind use cases, and so on)? I found
>> > only an options list, without any how-to or tested cases.
>> >
>> >
>> > 2013/3/5 Toby Corkindale 
>> >
>> > > On 01/03/13 21:12, Brian Candler wrote:
>> > >> On Fri, Mar 01, 2013 at 03:30:07PM +0600, Nikita A Kardashin wrote:
>> > >>> If I try to execute the above command inside the virtual machine (KVM),
>> > >>> the first time everything goes right - about 900MB/s (a cache effect,
>> > >>> I think), but if I run this test again on an existing file, the task
>> > >>> (dd) hangs and can be stopped only by Ctrl+C.
>> > >>> Overall virtual system latency is poor too. For example, apt-get
>> > >>> upgrade upgrades the system very, very slowly, freezing on "Unpacking
>> > >>> replacement" and other io-related steps.
>> > >>> Does glusterfs have any tuning options that can help me?
>> > >>
>> > >> If you are finding that processes hang or freeze indefinitely, this
>> > >> is
>> > >> not
>> > >> a question of "tuning", this is simply "broken".
>> > >>
>> > >> Anyway, you're asking the wrong person - I'm currently in the process
>> > >> of
>> > >> stripping out glusterfs, although I remain interested in the project.
>> > >>
>> > >> I did find that KVM performed very poorly, but KVM was not my main
>> > >> application and that's not why I'm abandoning it.  I'm stripping out
>> > >> glusterfs primarily because it's not supportable in my environment,
>> > >> because
>> > >> there is no documentation on how to analyse and recover from failure
>> > >> scenarios which can and do happen. This point in more detail:
>> > >> http://www.gluster.org/pipermail/gluster-users/2013-January/035118.html
>> > >>
>> > >> The other downside of gluster 

Re: [Gluster-users] CentOS 6.4 + selinux enforcing + mount.glusterfs == bad?

2013-03-12 Thread Rejy M Cyriac
On 03/12/2013 02:57 PM, Alan Orth wrote:
> All,
> 
> I just learned how to create a new module to allow this request.  In a
> nutshell, use audit2allow to check the audit log and create a new
> module, see [1] and [2].  My exact steps:
> 
> mkdir ~/selinux_gluster
> cd ~/selinux_gluster
> setenforce 0
> load_policy
> service netfs start
> audit2allow -M glusterd_centos64 -l -i /var/log/audit/audit.log
> setenforce 1
> semodule -i glusterd_centos64.pp
> service netfs start
> 
> More precisely, what you are doing is:
> 
>  1. setting selinux to permissive mode
>  2. re-loading the policy to get a clean "starting point"
>  3. performing the actions which are being denied
>  4. creating a module
>  5. re-enabling selinux enforcing mode
>  6. loading the new selinux module (which, after loading, is copied into
> /etc/selinux/targeted/modules/active/modules/ and will persist after
> reboot)
>  7. gluster should now be able to mount via /etc/fstab on boot, or via
> the netfs service, etc (ie, not manually as root).
> 
> Hope this helps some future traveler,
> 
> Alan
> 
> [1] http://fedorasolved.org/security-solutions/selinux-module-building
> [2] man audit2allow
> 
> On 03/12/2013 11:32 AM, Alan Orth wrote:
>> All,
>>
>> I've updated one of my GlusterFS clients from CentOS 6.3 to CentOS 6.4
>> and now my gluster volumes fail to mount at boot.  dmesg shows:
>>
>> type=1400 audit(1363004014.209:4): avc:  denied  { execute } for
>> pid=1150 comm="mount.glusterfs" name="glusterfsd" dev=sda1 ino=1315297
>> scontext=system_u:system_r:mount_t:s0
>> tcontext=system_u:object_r:glusterd_exec_t:s0 tclass=file
>>
>> Mounting manually as root works, but obviously isn't optimal.
>>
>> Does anyone know how to fix this?
>>
>> Thanks!
>>
> 
> -- 
> Alan Orth
> alan.o...@gmail.com
> http://alaninkenya.org
> http://mjanja.co.ke
> "I have always wished for my computer to be as easy to use as my telephone; 
> my wish has come true because I can no longer figure out how to use my 
> telephone." -Bjarne Stroustrup, inventor of C++
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 
This should be fixed with the latest SELinux policy update, which was
out for Red Hat Enterprise Linux today -
selinux-policy-targeted-3.7.19-195.el6_4.3.noarch,
selinux-policy-3.7.19-195.el6_4.3.noarch .
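
On a yum-based system, picking those up once they reach the distribution
repositories should just be:

    yum update selinux-policy selinux-policy-targeted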


-- 
Regards,

Rejy M Cyriac (rmc)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Slow read performance

2013-03-12 Thread Rodrigo Severo
Joe,


Understood. No problem at all.


Regards,

Rodrigo


On Mon, Mar 11, 2013 at 7:24 PM, Joe Julian  wrote:

>  I apologize. I normally tend to try to be much more eloquent with my
> debates.
>
> I woke up this morning to learn that the CentOS 6.4 rollout broke all my
> end-user stations (yes, I have to do automatic updates. I just don't have
> time to review every package and do everything else I need to do all by my
> self). Put 200 employees without computers on my shoulders and I tend to
> stress a little until it's resolved.
>
> I took a pot shot and it was uncalled for.
>
> Please forgive me.
>
>
> On 03/11/2013 12:10 PM, Rodrigo Severo wrote:
>
> On Mon, Mar 11, 2013 at 4:04 PM, Joe Julian  wrote:
>
>>  Which is why we don't run Rodigux
>>
>
> Oh Joe, that remark sounds rather inappropriate to me.
>
> Apparently we disagree on more levels that just kernel and applications
> compatibility policies.
>
>
> Regards,
>
> Rodrigo Severo
>
>
>>
>>
>> On 03/11/2013 12:02 PM, Rodrigo Severo wrote:
>>
>> On Mon, Mar 11, 2013 at 3:46 PM, Bryan Whitehead wrote:
>>
>>> This is clearly something Linus should support (forcing ext4 fix). There
>>> is an ethos Linus always champions and that is *never* break userspace.
>>> NEVER. Clearly this ext4 change has broken userspace. GlusterFS is not in
>>> the kernel at all and this change has broken it.
>>>
>>
>> Apparently, one year after the change made it into the kernel, you
>> believe this argument is still relevant. I don't, really don't.
>>
>>
>> Rodrigo Severo
>>
>>
>>>
>>>  On Mon, Mar 11, 2013 at 11:34 AM, Rodrigo Severo <
>>> rodr...@fabricadeideias.com> wrote:
>>>
 Whether you prefer to say that Linus's recent statement isn't pertinent to
 the Gluster vs. ext4 issue (as I do), or that the ext4 developers are being
 hypocritical / ignoring Linus's orientation (as you do), or anything similar,
 isn't really relevant any more.

 This argument could have been important in March 2012, the month the
 ext4 change was applied. Today, in March 2013, either the Gluster devs decide
 to assume it's incompatible with ext4 and state that clearly in the
 installation and migration documentation, or they fix the current issues
 with ext4. No matter what is done, it should have been done months ago.


 Regards,

 Rodrigo Severo




 On Mon, Mar 11, 2013 at 2:49 PM, John Mark Walker 
 wrote:

>
> --
>
> I know where this statement came from. I believe you are both:
>
>- trying to apply a statement to a context it's not pertinent to, and
>
>
>  No, it's actually quite applicable. I'm aware of the context of that
> statement by Linus, and it applies to this case. Kernel devs, at least the
> ext4 maintainers, are being hypocritical.
>
> There were a few exchanges between Ted T'so and Avati, among other
> people, on gluster-devel. I highly recommend you read them:
>
> http://lists.nongnu.org/archive/html/gluster-devel/2013-02/msg00050.html
>
>
>
>- fooling yourself and/or others by arguing that this issue
>will/should be fixed in the kernel.
>
>
>  This is probably true. I'm *this* close to declaring that, at least
> for the Gluster community, ext4 is considered harmful. There's a reason 
> Red
> Hat started pushing XFS over ext4 a few years ago.
>
> And Red Hat isn't alone here.
>
>  The ext4 hash size change was applied in the kernel a year ago. I
> don't believe it will be undone. Gluster developers could argue that this
> change was hard on them, and that it shouldn't have been backported to Enterprise
> kernels, but after one year, not having fixed it is on the Gluster developers.
> Arguing otherwise seems rather foolish to me.
>
>
>  I think that's a legitimate argument to make. This is a conversation
> that is worth taking up on gluster-devel. But I'm not sure what can be 
> done
> about it, seeing as how the ext4 maintainers are not likely to make the
> change.
>
> Frankly, dropping ext4 as an FS we can recommend solves a lot of
> headaches.
>
> -JM
>
>
>


  ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

>>>
>>>
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>
>
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS performance

2013-03-12 Thread Nikita A Kardashin
Pure transfer speed (from storage to storage, via scp): 80MB/s.

As I see it, gluster write performance in this situation (2 replica bricks)
must be about 40MB/s, yes?
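
As a rough sanity check, assuming the FUSE client sends each write to both
replicas over the same link (client-side replication):

    80MB/s network bandwidth / 2 replicas ~= 40MB/s expected write throughput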


2013/3/12 Nikita A Kardashin 

> Hello,
>
> I found another strange thing.
>
> On the dd-test (dd if=/dev/zero of=2testbin bs=1M count=1024 oflag=direct)
> my volume shows only 18-19MB/s.
> Full network speed is 90-110MB/s, storage speed - ~200MB/s.
>
> Volume type - replicated-distributed, 2 replicas, 4 nodes. Volumes mounted
> via fuse with direct-io=enable option.
>
> It's sooo slooow, right?
>
>
> 2013/3/5 harry mangalam 
>
>> This kind of info is surprisingly hard to obtain.  The gluster docs do
>> contain
>> some of it, ie:
>>
>> 
>>
>> I also found well-described kernel tuning parameters in the FHGFS wiki (as
>> another distributed fs, they share some characteristics)
>>
>> http://www.fhgfs.com/wiki/wikka.php?wakka=StorageServerTuning
>>
>> and more XFS tuning filesystem params here:
>>
>> 
>>
>> and here:
>> <http://www.mysqlperformanceblog.com/2011/12/16/setting-up-xfs-the-simple-edition>
>>
>> But of course, YMMV and a number of these parameters conflict and/or have
>> serious tradeoffs, as you discovered.
>>
>> LSI recently loaned me a Nytro SAS controller (on-card SSD-cached) which
>> seems
>> pretty phenomenal on a single brick (and is predicted to perform well
>> based on
>> their profiling), but am waiting for another node to arrive before I can
>> test
>> it under true gluster conditions.  Anyone else tried this hardware?
>>
>> hjm
>>
>> On Tuesday, March 05, 2013 12:34:41 PM Nikita A Kardashin wrote:
>> > Hello all!
>> >
>> > This problem was solved by me today.
>> > The root of it all is the incompatibility between the gluster cache and the KVM cache.
>> >
>> > The bug reproduces if a KVM virtual machine is created with the
>> > cache=writethrough option (the default for OpenStack) and hosted on a
>> > GlusterFS volume. If any other cache mode is used (cache=writeback, or
>> > cache=none with direct-io), write performance to an existing file inside
>> > the VM is equal to the bare-storage write performance (from the host
>> > machine).
>> >
>> > I think this should be documented in Gluster, and maybe a bug should be filed.
>> >
>> > Another question: where can I read something about gluster tuning (optimal
>> > cache size, write-behind, flush-behind use cases, and so on)? I found
>> > only an options list, without any how-to or tested cases.
>> >
>> >
>> > 2013/3/5 Toby Corkindale 
>> >
>> > > On 01/03/13 21:12, Brian Candler wrote:
>> > >> On Fri, Mar 01, 2013 at 03:30:07PM +0600, Nikita A Kardashin wrote:
>> > >>> If I try to execute the above command inside the virtual machine (KVM),
>> > >>> the first time everything goes right - about 900MB/s (a cache effect,
>> > >>> I think), but if I run this test again on an existing file, the task
>> > >>> (dd) hangs and can be stopped only by Ctrl+C.
>> > >>> Overall virtual system latency is poor too. For example, apt-get
>> > >>> upgrade upgrades the system very, very slowly, freezing on "Unpacking
>> > >>> replacement" and other io-related steps.
>> > >>> Does glusterfs have any tuning options that can help me?
>> > >>
>> > >> If you are finding that processes hang or freeze indefinitely, this
>> is
>> > >> not
>> > >> a question of "tuning", this is simply "broken".
>> > >>
>> > >> Anyway, you're asking the wrong person - I'm currently in the
>> process of
>> > >> stripping out glusterfs, although I remain interested in the project.
>> > >>
>> > >> I did find that KVM performed very poorly, but KVM was not my main
>> > >> application and that's not why I'm abandoning it.  I'm stripping out
>> > >> glusterfs primarily because it's not supportable in my environment,
>> > >> because
>> > >> there is no documentation on how to analyse and recover from failure
>> > >> scenarios which can and do happen. This point in more detail:
>> > >> http://www.gluster.org/pipermail/gluster-users/2013-January/035118.html
>> > >>
>> > >> The other downside of gluster was its lack of flexibility, in
>> particular
>> > >> the
>> > >> fact that there is no usage scaling factor on bricks, so that even
>> with a
>> > >> simple distributed setup all your bricks have to be the same size.
>>  Also,
>> > >> the object store feature which I wanted to use, has clearly had
>> hardly
>> > >> any
>> > >> testing (even the RPM packages don't install properly).
>> > >>
>> > >> I *really* wanted to deploy gluster, because in principle I like the
>> idea
>> > >> of
>> > >> a virtual distribution/replication system which sits on top of
>> existing
>> > >> local filesystems.  But for storage, I need something where
>> operational
>> > >> supportability is at the top of the pile.
>> > >
>> > > I ha

Re: [Gluster-users] CentOS 6.4 + selinux enforcing + mount.glusterfs == bad?

2013-03-12 Thread Alan Orth

All,

I just learned how to create a new module to allow this request.  In a 
nutshell, use audit2allow to check the audit log and create a new 
module, see [1] and [2].  My exact steps:


   mkdir ~/selinux_gluster
   cd ~/selinux_gluster
   setenforce 0
   load_policy
   service netfs start
   audit2allow -M glusterd_centos64 -l -i /var/log/audit/audit.log
   setenforce 1
   semodule -i glusterd_centos64.pp
   service netfs start

More precisely, what you are doing is:

1. setting selinux to permissive mode
2. re-loading the policy to get a clean "starting point"
3. performing the actions which are being denied
4. creating a module
5. re-enabling selinux enforcing mode
6. loading the new selinux module (which, after loading, is copied into
   /etc/selinux/targeted/modules/active/modules/ and will persist after
   reboot)
7. gluster should now be able to mount via /etc/fstab on boot, or via
   the netfs service, etc (ie, not manually as root).
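
To double-check afterwards, something like this should confirm the module is
loaded and that the denial is gone (module name as built above):

    semodule -l | grep glusterd_centos64
    grep denied /var/log/audit/audit.log | grep mount.glusterfs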

Hope this helps some future traveler,

Alan

[1] http://fedorasolved.org/security-solutions/selinux-module-building
[2] man audit2allow

On 03/12/2013 11:32 AM, Alan Orth wrote:

All,

I've updated one of my GlusterFS clients from CentOS 6.3 to CentOS 6.4 
and now my gluster volumes fail to mount at boot.  dmesg shows:


type=1400 audit(1363004014.209:4): avc:  denied  { execute } for 
pid=1150 comm="mount.glusterfs" name="glusterfsd" dev=sda1 ino=1315297 
scontext=system_u:system_r:mount_t:s0 
tcontext=system_u:object_r:glusterd_exec_t:s0 tclass=file


Mounting manually as root works, but obviously isn't optimal.

Does anyone know how to fix this?

Thanks!



--
Alan Orth
alan.o...@gmail.com
http://alaninkenya.org
http://mjanja.co.ke
"I have always wished for my computer to be as easy to use as my telephone; my wish 
has come true because I can no longer figure out how to use my telephone." -Bjarne 
Stroustrup, inventor of C++

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS performance

2013-03-12 Thread Nikita A Kardashin
Hello,

I found another strange thing.

On the dd-test (dd if=/dev/zero of=2testbin bs=1M count=1024 oflag=direct)
my volume shows only 18-19MB/s.
Full network speed is 90-110MB/s, storage speed - ~200MB/s.

Volume type - replicated-distributed, 2 replicas, 4 nodes. Volumes mounted
via fuse with direct-io=enable option.

It's sooo slooow, right?


2013/3/5 harry mangalam 

> This kind of info is surprisingly hard to obtain.  The gluster docs do
> contain
> some of it, ie:
>
> 
>
> I also found well-described kernel tuning parameters in the FHGFS wiki (as
> another distributed fs, they share some characteristics)
>
> http://www.fhgfs.com/wiki/wikka.php?wakka=StorageServerTuning
>
> and more XFS tuning filesystem params here:
>
> 
>
> and here:
> <http://www.mysqlperformanceblog.com/2011/12/16/setting-up-xfs-the-simple-edition>
>
> But of course, YMMV and a number of these parameters conflict and/or have
> serious tradeoffs, as you discovered.
>
> LSI recently loaned me a Nytro SAS controller (on-card SSD-cached) which
> seems
> pretty phenomenal on a single brick (and is predicted to perform well
> based on
> their profiling), but am waiting for another node to arrive before I can
> test
> it under true gluster conditions.  Anyone else tried this hardware?
>
> hjm
>
> On Tuesday, March 05, 2013 12:34:41 PM Nikita A Kardashin wrote:
> > Hello all!
> >
> > This problem was solved by me today.
> > The root of it all is the incompatibility between the gluster cache and the KVM cache.
> >
> > The bug reproduces if a KVM virtual machine is created with the
> > cache=writethrough option (the default for OpenStack) and hosted on a
> > GlusterFS volume. If any other cache mode is used (cache=writeback, or
> > cache=none with direct-io), write performance to an existing file inside
> > the VM is equal to the bare-storage write performance (from the host
> > machine).
> >
> > I think this should be documented in Gluster, and maybe a bug should be filed.
> >
> > Another question: where can I read something about gluster tuning (optimal
> > cache size, write-behind, flush-behind use cases, and so on)? I found only
> > an options list, without any how-to or tested cases.
> >
> >
> > 2013/3/5 Toby Corkindale 
> >
> > > On 01/03/13 21:12, Brian Candler wrote:
> > >> On Fri, Mar 01, 2013 at 03:30:07PM +0600, Nikita A Kardashin wrote:
> > >>> If I try to execute the above command inside the virtual machine (KVM),
> > >>> the first time everything goes right - about 900MB/s (a cache effect,
> > >>> I think), but if I run this test again on an existing file, the task
> > >>> (dd) hangs and can be stopped only by Ctrl+C.
> > >>> Overall virtual system latency is poor too. For example, apt-get
> > >>> upgrade upgrades the system very, very slowly, freezing on "Unpacking
> > >>> replacement" and other io-related steps.
> > >>> Does glusterfs have any tuning options that can help me?
> > >>
> > >> If you are finding that processes hang or freeze indefinitely, this is
> > >> not
> > >> a question of "tuning", this is simply "broken".
> > >>
> > >> Anyway, you're asking the wrong person - I'm currently in the process
> of
> > >> stripping out glusterfs, although I remain interested in the project.
> > >>
> > >> I did find that KVM performed very poorly, but KVM was not my main
> > >> application and that's not why I'm abandoning it.  I'm stripping out
> > >> glusterfs primarily because it's not supportable in my environment,
> > >> because
> > >> there is no documentation on how to analyse and recover from failure
> > >> scenarios which can and do happen. This point in more detail:
> > >> http://www.gluster.org/pipermail/gluster-users/2013-January/035118.html
> > >>
> > >> The other downside of gluster was its lack of flexibility, in
> particular
> > >> the
> > >> fact that there is no usage scaling factor on bricks, so that even
> with a
> > >> simple distributed setup all your bricks have to be the same size.
>  Also,
> > >> the object store feature which I wanted to use, has clearly had hardly
> > >> any
> > >> testing (even the RPM packages don't install properly).
> > >>
> > >> I *really* wanted to deploy gluster, because in principle I like the
> idea
> > >> of
> > >> a virtual distribution/replication system which sits on top of
> existing
> > >> local filesystems.  But for storage, I need something where
> operational
> > >> supportability is at the top of the pile.
> > >
> > > I have to agree; GlusterFS has been in use here in production for a
> while,
> > > and while it mostly works, it's been fragile and documentation has been
> > > disappointing. Despite 3.3 being in beta for a year, it still seems to
> > > have
> > > been poorly tested. For example, I can't believe almost no-one else noticed
> > > that
> > > the log files were 

[Gluster-users] CentOS 6.4 + selinux enforcing + mount.glusterfs == bad?

2013-03-12 Thread Alan Orth

All,

I've updated one of my GlusterFS clients from CentOS 6.3 to CentOS 6.4 
and now my gluster volumes fail to mount at boot.  dmesg shows:


type=1400 audit(1363004014.209:4): avc:  denied  { execute } for 
pid=1150 comm="mount.glusterfs" name="glusterfsd" dev=sda1 ino=1315297 
scontext=system_u:system_r:mount_t:s0 
tcontext=system_u:object_r:glusterd_exec_t:s0 tclass=file


Mounting manually as root works, but obviously isn't optimal.

Does anyone know how to fix this?

Thanks!

--
Alan Orth
alan.o...@gmail.com
http://alaninkenya.org
http://mjanja.co.ke
"I have always wished for my computer to be as easy to use as my telephone; my wish 
has come true because I can no longer figure out how to use my telephone." -Bjarne 
Stroustrup, inventor of C++

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users