Re: [Gluster-users] [Gluster-devel] GlusterFS and the logging framework

2014-05-06 Thread Nithya Balachandran
We have had some feedback/concerns raised regarding not including the messages 
in the header file. Some external products do include the message strings in 
the header files, which helps with documentation and makes editing easier.

Does anyone have any thoughts on this? The advantages are listed above, and the 
disadvantages were listed in earlier emails. If we decide to include messages 
in the header file, we will need to consolidate all messages that fall into 
various classes and come up with a single format string - currently there seem 
to be too many messages that mean the same thing but use different formats to 
say it.


Regards,
Nithya

- Original Message -
From: "Vijay Bellur" 
To: "Dan Lambright" , "Nithya Balachandran" 

Cc: "gluster-users" , gluster-de...@gluster.org
Sent: Thursday, 1 May, 2014 1:31:04 PM
Subject: Re: [Gluster-devel] GlusterFS and the logging framework

On 05/01/2014 04:07 AM, Dan Lambright wrote:
> Hello,
>
> In a previous job, an engineer in our storage group modified our I/O stack 
> logs in a manner similar to your proposal #1 (except he did not tell anyone, 
> and did it for DEBUG messages as well as ERRORS and WARNINGS, over the 
> weekend). Developers came to work Monday and found over a thousand log 
> message strings had been buried in a new header file, and any new logs 
> required a new message id, along with a new string entry in the header file.
>
> This did render the code harder to read. The ensuing uproar closely mirrored 
> the arguments (1) and (2) you listed. Logs are like comments. If you move 
> them out of the source, the code is harder to follow. And you probably want 
> fewer message IDs than comments.
>
> The developer retracted his work. After some debate, his V2 solution 
> resembled your "approach #2". Developers were once again free to use plain 
> text strings directly in logs, but the notion of "classes" (message ID) was 
> kept. We allowed multiple text strings to be used against a single class, and 
> any new classes went in a master header file. The "debug" message ID class 
> was a general purpose bucket and what most coders used day to day.
>
> So basically, your email sounded very familiar to me and I think your 
> proposal #2 is on the right track.
>

+1. Proposal #2 seems to be better IMO.

Thanks,
Vijay

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Repeated file stats to non-existent files causing gluster to consume excessive bandwidth.

2014-05-06 Thread max murphy
Hello,

I am currently using gluster replication to share a common filesystem
across multiple nodes. Part of my application is a directory watcher that
will end up doing a stat on a glob of file patterns. What I am seeing is
that gluster will start using roughly 300 kB/s per watched pattern. Will
gluster ever back off, or will it continue trying to force consistency even
though there are no files?

Here's a small bit of bash that will repro this:

for i in {1..1000}; do
    stat /path/to/files/that/do/not/*.exist
    sleep 2
done
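
A rough way to quantify the traffic, assuming a volume named gv0 (substitute
your own volume name), is gluster's built-in profiling:

  gluster volume profile gv0 start   # begin collecting per-brick FOP stats
  # ... run the stat loop above for a while ...
  gluster volume profile gv0 info    # dump call counts; watch LOOKUP grow
  gluster volume profile gv0 stop    # stop collecting when done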


Thanks!

-Max Murphy
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] du time on bricks vs mounted glusterfs via fuse

2014-05-06 Thread Gavin Henry
Hi,

Completely forgot to say this is a mount on the same server.
On 6 May 2014 21:17, "Gavin Henry"  wrote:

> Hi all,
>
> We use glusterfs with our SIP cluster for hosted SIP endpoints/phones.
> This is set up in replicated mode for lua scripts, voicemails (wav
> files), faxes (PDFs/TIFFs) and call recordings (wav/mp3). We're
> trying to find more information on the time difference between a "du"
> on a brick, i.e. the direct filesystem, vs a "du" on a mounted export:
>
> box1 # time /usr/local/nagios/libexec/check_subdirectory_size
> /opt/surevoip/recordings/
> WARNING: blah size is 34126MB
>
> real 0m20.788s
> user 0m0.178s
> sys 0m0.559s
>
>
> box2 # time /usr/local/nagios/libexec/check_subdirectory_size
> /data/glusterfs/surevoip/brick1/brick/recordings/
> WARNING: blah size is 34195MB
>
> real 0m0.577s
> user 0m0.074s
> sys 0m0.161s
>
> Can anyone point me to the right documentation to read?
>
> Thanks,
>
> Gavin.
>
> --
> Kind Regards,
>
> Gavin Henry.
> http://www.surevoip.co.uk
>
> Did you see our API? http://www.surevoip.co.uk/api
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] du time on bricks vs mounted glusterfs via fuse

2014-05-06 Thread Gavin Henry
Hi all,

We use glusterfs with our SIP cluster for hosted SIP endpoints/phones.
This is set up in replicated mode for lua scripts, voicemails (wav
files), faxes (PDFs/TIFFs) and call recordings (wav/mp3). We're
trying to find more information on the time difference between a "du"
on a brick, i.e. the direct filesystem, vs a "du" on a mounted export:

box1 # time /usr/local/nagios/libexec/check_subdirectory_size
/opt/surevoip/recordings/
WARNING: blah size is 34126MB

real 0m20.788s
user 0m0.178s
sys 0m0.559s


box2 # time /usr/local/nagios/libexec/check_subdirectory_size
/data/glusterfs/surevoip/brick1/brick/recordings/
WARNING: blah size is 34195MB

real 0m0.577s
user 0m0.074s
sys 0m0.161s
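
One rough way to see where the extra time goes, assuming /opt/surevoip is the
FUSE mount, is to count the syscalls du makes; on the mount each stat is a
network round trip (plus a replica check on replicated volumes), while on the
brick it is a purely local call:

  # -c tallies time per syscall; expect the stat family to dominate on the mount
  strace -c -f du -s /opt/surevoip/recordings/ 2>&1 | tail -n 20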

Can anyone point me to the right documentation to read?

Thanks,

Gavin.

-- 
Kind Regards,

Gavin Henry.
http://www.surevoip.co.uk

Did you see our API? http://www.surevoip.co.uk/api
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster 3.4 or 3.5 compatibility with 3.3.x

2014-05-06 Thread Cristiano Corsani
I will.

Cristiano Corsani, PhD
--
web: http://www.cryx.it
mail: ccors...@gmail.com
On 06 May 2014 at 19:08, "Kaushal M" wrote:

> Try disabling the open-behind translator. This should allow the 3.3
> clients to mount the volume.
> # gluster volume set <volname> performance.open-behind off
>
> ~kaushal
>
>
> On Tue, May 6, 2014 at 4:49 PM, Cristiano Corsani wrote:
>
>> > Is your 3.4 cluster newly deployed or upgraded from 3.3?
>> > If it is newly deployed, you cannot use a 3.3 client to mount it
>> > because the op-version is set to 2.
>>
>> It is a new one.
>>
>> > If it is upgraded, you can use a 3.3 client to mount it because the
>> > op-version is set to 1. The op-version was newly introduced in 3.4.
>>
>> Is there a way to manage it or make it work in 3.5?
>>
>> My problem is that I have over 100 clients whose home directories are
>> stored on old storage. I would like to migrate to glusterfs, but the
>> clients run an old distro, and plugging in glusterfs is the first step
>> to upgrading the clients too. Otherwise I will use NFS.
>>
>> --
>> Cristiano Corsani, PhD
>> -
>> http://www.cryx.it
>> i...@cryx.it
>> ccors...@gmail.com
>> --
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster 3.4 or 3.5 compatibility with 3.3.x

2014-05-06 Thread Kaushal M
Try disabling the open-behind translator. This should allow the 3.3 clients
to mount the volume.
# gluster volume set <volname> performance.open-behind off
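
A quick way to confirm the change took effect, assuming the same <volname>, is
to check that the option now shows up under "Options Reconfigured":

# gluster volume info <volname>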

~kaushal


On Tue, May 6, 2014 at 4:49 PM, Cristiano Corsani wrote:

> > Is your 3.4 cluster newly deployed or upgraded from 3.3?
> > If it is newly deployed, you cannot use a 3.3 client to mount it
> > because the op-version is set to 2.
>
> It is a new one.
>
> > If it is upgraded, you can use a 3.3 client to mount it because the
> > op-version is set to 1. The op-version was newly introduced in 3.4.
>
> Is there a way to manage it or make it work in 3.5?
>
> My problem is that I have over 100 clients whose home directories are
> stored on old storage. I would like to migrate to glusterfs, but the
> clients run an old distro, and plugging in glusterfs is the first step
> to upgrading the clients too. Otherwise I will use NFS.
>
> --
> Cristiano Corsani, PhD
> -
> http://www.cryx.it
> i...@cryx.it
> ccors...@gmail.com
> --
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] User-serviceable snapshots design

2014-05-06 Thread Sobhan Samantaray
Hi Anand,
   Thanks for coming up with the nice design. I have a couple of comments.

1. The design should mention the access protocols to be used (NFS/CIFS, etc.),
even though the requirements already state that.
2. Considerations section:
"Again, this is not a performance oriented feature. Rather, the goal is to 
allow a seamless user-experience by allowing easy and useful access to 
snapshotted volumes and individual data stored in those volumes".

If the introduction of this feature will not impact fops performance, that
should be clarified.

3. It would be good to mention the default value of the uss option.

Regards
Sobhan


From: "Paul Cuzner" 
To: ana...@redhat.com
Cc: gluster-de...@gluster.org, "gluster-users" , 
"Anand Avati" 
Sent: Tuesday, May 6, 2014 7:27:29 AM
Subject: Re: [Gluster-devel] [Gluster-users] User-serviceable snapshots design

Just one question about how you apply a filter to the snapshot view from a
user's perspective.

In the "considerations" section, it states - "We plan to introduce a 
configurable option to limit the number of snapshots visible under the USS 
feature." 
Would it not be possible to take the metadata from the snapshots to form a
tree hierarchy when the number of snapshots present exceeds a given threshold,
effectively organising the snaps by time? I think this would work better from
an end-user workflow perspective.

i.e.

.snaps
\/ Today
   +-- snap01_20140503_0800
   +-- snap02_20140503_1400
>  Last 7 days
>  7-21 days
>  21-60 days
>  60-180 days
>  180+ days


From: "Anand Subramanian"  
To: gluster-de...@nongnu.org, "gluster-users"  
Cc: "Anand Avati"  
Sent: Saturday, 3 May, 2014 2:35:26 AM 
Subject: [Gluster-users] User-serviceable snapshots design 

Attached is a basic write-up of the user-serviceable snapshot feature 
design (Avati's). Please take a look and let us know if you have 
questions of any sort... 

We have a basic implementation up now; reviews and upstream commit 
should follow very soon over the next week. 

Cheers, 
Anand 

___ 
Gluster-users mailing list 
Gluster-users@gluster.org 
http://supercolony.gluster.org/mailman/listinfo/gluster-users 


___
Gluster-devel mailing list
gluster-de...@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] tar_ssh.pem?

2014-05-06 Thread Venky Shankar
> I had seen the new "create push-pem" option and gave it a try today. I
> see that it does indeed create a different key with a different command
> in the authorized_keys file.
>
> One question remains though, and this stems back to bug #1091079.
> push-pem expects you to have set up passwordless SSH access already, so
> what is the point of adding further lines to authorized_keys when
> general access is already allowed? Surely this is bad for security?
> Wouldn't it be better for push-pem to prompt for a password so that
> only the required access is added?
>

push-pem expects passwordless SSH between the node where the CLI is executed
and a slave node (the slave endpoint used for session creation). It then adds
the master's SSH keys to *authorized_keys* on all slave nodes (prepended with
command=... to restrict access to gsyncd). As you said, prompting for a
password is definitely better and should be considered.
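
For illustration only, a restricted entry of the kind push-pem writes looks
roughly like this (the gsyncd path and key material below are made up, not
copied from a real deployment):

  # one line in the slave's ~/.ssh/authorized_keys; the forced command means
  # this key can only ever run gsyncd, never an interactive shell
  command="/usr/libexec/glusterfs/gsyncd" ssh-rsa AAAAB3Nza...hypothetical... root@master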

Non-root geo-replication does not work as of now (upstream/3.5). I'm in the
process of getting it to work (patch http://review.gluster.org/#/c/7658/ in
gerrit). Even with this you'd need passwordless SSH to one of the nodes on
the slave (to an unprivileged user in this case). Your argument for
prompting for a password still holds true here.

I see the document link you mentioned in BZ #1091079 (comment #2) still
points to old-style geo-replication (we'd need to correct that). Are you
following that in any case? Comment #1 points to the correct URL.

Thanks,
-venky
IRC: overclk on #freenode
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] tar_ssh.pem?

2014-05-06 Thread James Le Cuirot
On Wed, 30 Apr 2014 20:25:03 +0100
James Le Cuirot  wrote:

> > > On April 28, 2014 6:03:16 AM PDT, Venky Shankar
> > >  wrote:
> 
> > >> On 04/27/2014 11:55 PM, James Le Cuirot wrote:
> > >>> I'm new to Gluster but have successfully tried geo-rep with
> > >>> 3.5.0. I've read about the new tar+ssh feature and it sounds
> > >>> good but nothing has been said about the tar_ssh.pem file that
> > >>> gsyncd.conf references. Why is a separate key needed? Does it
> > >>> not use gsyncd on the other end? If not, what command should I
> > >>> lock it down to in authorized_keys, bug #1091079
> > >>> notwithstanding?
> 
> > >> geo-replication "create push-pem" command should add the keys on
> > >> the slave for tar+ssh to work. That is done as part of geo-rep
> > >> setup.
> 
> I had seen the new "create push-pem" option and gave it a try today. I
> see that it does indeed create a different key with a different
> command in the authorized_keys file.
> 
> One question remains though, and this stems back to bug #1091079.
> push-pem expects you to have set up passwordless SSH access already, so
> what is the point of adding further lines to authorized_keys when
> general access is already allowed? Surely this is bad for security?
> Wouldn't it be better for push-pem to prompt for a password so that
> only the required access is added?

Sorry for this, but could I please get an answer on the above? Security
is a very big deal for us, as it should be for everyone here. I gather
the mountbroker can be used to do this replication as non-root, which
helps, but general SSH access for this user is something I would still
like to avoid if it is not really necessary.

Regards,
James
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster 3.4 or 3.5 compatibility with 3.3.x

2014-05-06 Thread Cristiano Corsani
> Is your 3.4 cluster newly deployed or upgraded from 3.3?
> If it is newly deployed, you cannot use a 3.3 client to mount it because
> the op-version is set to 2.

It is a new one.

> If it is upgraded, you can use a 3.3 client to mount it because the
> op-version is set to 1. The op-version was newly introduced in 3.4.

Is there a way to manage it or make it work in 3.5?

My problem is that I have over 100 clients whose home directories are stored
on old storage. I would like to migrate to glusterfs, but the clients run an
old distro, and plugging in glusterfs is the first step to upgrading the
clients too. Otherwise I will use NFS.

-- 
Cristiano Corsani, PhD
-
http://www.cryx.it
i...@cryx.it
ccors...@gmail.com
--
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster 3.4 or 3.5 compatibility with 3.3.x

2014-05-06 Thread Mingfan Lu
Is your 3.4 cluster newly deployed or upgraded from 3.3?
If it is newly deployed, you cannot use a 3.3 client to mount it because the
op-version is set to 2.
If it is upgraded, you can use a 3.3 client to mount it because the
op-version is set to 1.
The op-version was newly introduced in 3.4.
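
A quick way to check which op-version a cluster runs, assuming the default
state directory /var/lib/glusterd, is to look at glusterd.info on any server
node:

  # prints e.g. operating-version=2 on a freshly deployed 3.4 cluster
  grep operating-version /var/lib/glusterd/glusterd.info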


On Tue, May 6, 2014 at 3:11 PM, Cristiano Corsani wrote:

> Hi all. I have many 3.3.1 clients (I can't upgrade because of an old
> distro version) and I would like to mount a 3.4 or 3.5 volume.
>
> From the documentation it seems that 3.3.x is compatible with 3.4, but it
> does not work. The error is "unable to get the volume file from
> server".
> My systems (server and client) are all x86.
>
> Thank you
>
> --
> Cristiano Corsani, PhD
> -
> http://www.cryx.it
> i...@cryx.it
> ccors...@gmail.com
> --
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] gluster 3.4 or 3.5 compatibility with 3.3.x

2014-05-06 Thread Cristiano Corsani
Hi all. I have many 3.3.1 clients (I can't upgrade because of an old
distro version) and I would like to mount a 3.4 or 3.5 volume.

From the documentation it seems that 3.3.x is compatible with 3.4, but it
does not work. The error is "unable to get the volume file from
server".
My systems (server and client) are all x86.

Thank you

-- 
Cristiano Corsani, PhD
-
http://www.cryx.it
i...@cryx.it
ccors...@gmail.com
--
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users