Re: [Gluster-users] Issue installing Gluster on CentOS 7.2

2017-04-22 Thread Eric K. Miller
> We build the GlusterFS packages against the current version of CentOS 7.
> The dependencies that are installed during the package building may not be
> correct for older CentOS versions. There is no guarantee that the compiled
> binaries work correctly on previous versions. The backwards compatibility
> only counts for new OS updates, so that old applications do not need to be
> recompiled. This request is the other way around, and is not something I
> think we can commit to keep working.
> 
> The correct solution would be to update all the dependencies (and their
> dependencies like a chain) to not run into unexpected and difficult to debug
> problems. Downgrading libvirt or keeping it on an older version should be
> possible.
> 
> HTH,
> Niels

Thanks Niels and Kaleb!

I have installed the older Gluster version 3.7 with no issues.  We will
wait for the libvirt issues to be worked out in CentOS 7.3 before
upgrading to Gluster version 3.8+.
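
For reference, a minimal way to pin CentOS 7 to the 3.7 series is the Storage
SIG release package, assuming centos-release-gluster37 is reachable from the
repositories you have enabled:

yum install centos-release-gluster37
yum install glusterfs-server glusterfs-fuse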

Eric



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Issue installing Gluster on CentOS 7.2

2017-04-19 Thread Eric K. Miller
We have a requirement to stay on CentOS 7.2 for a while (due to some
bugs in 7.3 components related to libvirt).  So we have the yum repos
set to CentOS 7.2, not 7.3.  When installing Gluster (latest version in
the repo, which turns out to be 3.8.10), a dependency exists for
firewalld-filesystem, which fails.  From what I have read,
firewalld-filesystem is only available in CentOS 7.3.

 

Has anyone else run into this?  Is there a workaround?  Or is this a bug
where we should consider installing an earlier version of Gluster?

 

I'm new to this list, so if there is a better list to post this issue
to, please let me know.

 

Thanks!

 

Eric

 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] changing gluster client log level - no effect

2015-05-07 Thread Eric Mortensen
Thanks for the suggestion,

I tried running: gluster --log-level=WARNING volume info
but it did not modify the log level of cli.log.

I read the comment on patch 9383; did I interpret it correctly that it
says cli.log logs at all levels regardless of any options?


On Thu, May 7, 2015 at 12:01 PM, Vijay Bellur  wrote:

> On 05/07/2015 03:25 PM, Kaushal M wrote:
>
>> You have to provide the log-level option to the `gluster` command
>> directly, as in `gluster --log-level=WARNING volume info`.
>>
>> I don't know how or why the log level of the CLI defaults to TRACE. It
>> used to be INFO earlier, but somehow it has changed to TRACE. I need to
>> investigate this further.
>>
>>
> We need a backport of [1] to release-3.6.
>
> -Vijay
>
> [1] http://review.gluster.org/9383
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] changing gluster client log level - no effect

2015-05-07 Thread Eric Mortensen
Sorry, I am running 3.6.2

On Thu, May 7, 2015 at 11:08 AM, Eric Mortensen  wrote:

> Hi
>
> Running 3.6.3
>
> My cli.log fills up with debug and trace info. I have run these commands
> on the server:
>
> gluster volume set gsfiles diagnostics.brick-log-level WARNING
> gluster volume set gsfiles diagnostics.client-log-level WARNING
>
> I am running glusterd like this:
>
> /usr/sbin/glusterd --no-daemon --log-level WARNING
>
> None of these seem to have any effect, as cli.log is still flooded with
> debug and trace info.
>
> Do I need to do anything else?
>
> Thanks,
> Eric
>
>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] changing gluster client log level - no effect

2015-05-07 Thread Eric Mortensen
Hi

Running 3.6.3

My cli.log fills up with debug and trace info. I have run these commands on
the server:

gluster volume set gsfiles diagnostics.brick-log-level WARNING
gluster volume set gsfiles diagnostics.client-log-level WARNING

I am running glusterd like this:

/usr/sbin/glusterd --no-daemon --log-level WARNING

None of these seem to have any effect, as cli.log is still flooded with
debug and trace info.

Do I need to do anything else?

Thanks,
Eric
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Synchronous replication, or no?

2015-04-09 Thread Eric Mortensen
Ok, that made a lot of sense. I guess what I was expecting was that the
writes were (close to) immediately consistent, but Gluster is rather
designed to be eventually consistent.

Thanks for explaining all that.

Eric


On Thu, Apr 9, 2015 at 5:45 PM, Jeff Darcy  wrote:

> > Jeff: I don't really understand how a write-behind translator could keep
> data
> > in memory before flushing to the replication module if the replication is
> > synchronous. Or put another way, from whose perspective is the
> replication
> > synchronous? The gluster daemon or the creating client?
>
> That's actually a more complicated question than many would think.  When we
> say "synchronous replication" we're talking about *durability* (i.e. does
> the disk see it) from the perspective of the replication module.  It does
> none of its own caching or buffering.  When it is asked to do a write, it
> does not report that write as complete until all copies have been updated.
>
> However, durability is not the same as consistency (i.e. do *other clients*
> see it) and the replication component does not exist in a vacuum.  There
> are other components both before and after that can affect durability and
> consistency.  We've already touched on the "after" part.  There might be
> caches at many levels that become stale as the result of a file being
> created and written.  Of particular interest here are "negative directory
> entries" which indicate that a file is *not* present.  Until those expire,
> it is possible to see a file as "not there" even though it does actually
> exist on disk.  We can control some of this caching, but not all.
>
> The other side is *before* the replication module, and that's where
> write-behind comes in.  POSIX does not require that a write be immediately
> durable in the absence of O_SYNC/fsync and so on.  We do honor those
> requirements where applicable.  However, the most common user expectation
> is that we will defer/batch/coalesce writes, because making every write
> individually immediate and synchronous has a very large performance impact.
> Therefore we implement write-behind, as a layer above replication.  Absent
> any specific request to perform a write immediately, data might sit there
> for an indeterminate (but usually short) time before the replication code
> even gets to see it.
>
> I don't think write-behind is likely to be the issue here, because it
> only applies to data within a file.  It will pass create(2) calls through
> immediately, so all servers should become aware of the file's existence
> right away.  On the other hand, various forms of caching on the *client*
> side (even if they're the same physical machines) could still prevent a
> new file from being seen immediately.
>
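
As a small, hypothetical shell illustration of the distinction drawn above
(paths are placeholders, GNU dd assumed): a plain write may sit briefly in
write-behind, while opening the file with O_SYNC pushes each write through the
replication module before dd's write() returns:

dd if=/dev/zero of=/mnt/gsfiles/probe bs=64k count=16
dd if=/dev/zero of=/mnt/gsfiles/probe bs=64k count=16 oflag=sync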
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Synchronous replication, or no?

2015-04-09 Thread Eric Mortensen
Hi all,

Thanks for the helpful responses everyone, I will look at the mounting
options suggested.

Just a quick question to confirm my understanding. When we all say
replication is synchronous, that does mean that each of the filesystem
operations on appserver1 that write a chunk of bytes to the file does not
return back to the caller until the same chunks of bytes have been written
to the OS filesystem cache/disk on the other appserver/brick, right? Or
does it mean something else?

Jeff: I don't really understand how a write-behind translator could keep
data in memory before flushing to the replication module if the replication
is synchronous. Or put another way, from whose perspective is the
replication synchronous? The gluster daemon or the creating client?

Eric



On Thu, Apr 9, 2015 at 4:07 PM, Vijay Bellur  wrote:

> On 04/09/2015 07:33 PM, Jeff Darcy wrote:
>
>> I was under the impression that gluster replication was synchronous, so
>>> the
>>> appserver would not return back to the client until the created file was
>>> replicated to the other server. But this does not seem to be the case,
>>> because sleeping a little bit always seems to make the read failures go
>>> away. Is there any other reason why a file created is not immediately
>>> available on a second request?
>>>
>>
>> It's quite possible that the replication is synchronous (the bits do hit
>> disk before returning) but that the results are not being seen immediately
>> due to caching at some level.  There are some GlusterFS mount options
>> (especially --negative-timeout) that might be relevant here, but it's also
>> possible that the culprit is somewhere above that in your app servers.
>>
>>
> Might be worth mounting with --entry-timeout=0 too.
>
> -Vijay
>
>
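
For concreteness, a sketch of how the knobs mentioned above can be passed to
the FUSE mount (server and volume names are placeholders; this assumes the
mount.glusterfs wrapper in this release accepts them as -o options, otherwise
the same values go on the glusterfs command line as --entry-timeout,
--negative-timeout and --attribute-timeout):

mount -t glusterfs -o entry-timeout=0,negative-timeout=0,attribute-timeout=0 server1:/volname /mnt/volname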
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Synchronous replication, or no?

2015-04-09 Thread Eric Mortensen
We have a Gluster replicated setup with 2 servers. Each server also runs an
app server that functions as a client of the gluster files. Client access
to the appservers are load balanced using round robin.

Sometimes, when a client creates a new file and then immediately tries to
read it, the read fails because the appserver cannot find it. If the client
sleeps for about 1 second between creating the file and reading it, the
read always succeeds.

I was under the impression that gluster replication was synchronous, so
the appserver would not return back to the client until the created file
was replicated to the other server. But this does not seem to be the case,
because sleeping a little bit always seems to make the read failures go
away. Is there any other reason why a file created is not immediately
available on a second request?

I am running 3.6.2 and have not configured anything special except
storage.owner-id and auth.allow.

Thanks!
Eric Mortensen
Appstax Technologies
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Missing gsyncd and dependency issues with gluster on AWS AMI host

2015-04-08 Thread Eric Berg
I rebuilt my AWS instances as Ubuntu and this problem is no longer valid.  For 
me.

Thanks.

From: Eric Berg <eric.b...@yodle.com>
Date: Wednesday, April 8, 2015 at 12:13 PM
To: "gluster-users@gluster.org" <gluster-users@gluster.org>
Subject: [Gluster-users] Missing gsyncd and dependency issues with gluster on 
AWS AMI host

I'm setting up geo-replication from a cluster in our colo facility to a gluster 
cluster in amazon.  I set up the amazon instances as AMI builds (probably a 
mistake), by enabling gluster epel repo from download.gluster.org.

While trying to set up geo-replication, I could not find gsyncd, so I tried to 
update the existing 3.6.2-1.el6 versions, and I get the following, indicating 
that a) it's looking at el7 packages, when 6 is clearly stated in the 
gluster-epel.repo file and b) there are unmet dependencies for systemd-units 
and rsyslog-mmjsonparse.

I'm thinking that I should just rebuild these instances as ubuntu 14.x boxes, 
like the ones in our datacenter, but it might be easier to solve this problem.

Anybody have any thoughts on the matter?

Here's the output of the install command:

# yum install -y glusterfs{-fuse,-server}
Loaded plugins: priorities, update-motd, upgrade-helper
epel/x86_64/metalink

 |  16 kB 00:01
glusterfs-epel/x86_64   

 | 2.9 kB 00:00
Not using downloaded repomd.xml because it is older than what we have:
  Current   : Fri Jan 23 06:00:21 2015
  Downloaded: Fri Jan 23 06:00:19 2015
glusterfs-noarch-epel   

 | 2.9 kB 00:00
Not using downloaded repomd.xml because it is older than what we have:
  Current   : Fri Jan 23 06:00:23 2015
  Downloaded: Fri Jan 23 06:00:20 2015
# Place this file in your /etc/yum.repos.d/ directory
975 packages excluded due to repository priority protections
Resolving Dependencies
--> Running transaction check
---> Package glusterfs-fuse.x86_64 0:3.6.2-1.el6 will be updated
---> Package glusterfs-fuse.x86_64 0:3.6.2-1.el7 will be an update
--> Processing Dependency: glusterfs = 3.6.2-1.el7 for package: 
glusterfs-fuse-3.6.2-1.el7.x86_64
---> Package glusterfs-server.x86_64 0:3.6.2-1.el6 will be updated
---> Package glusterfs-server.x86_64 0:3.6.2-1.el7 will be an update
--> Processing Dependency: glusterfs-libs = 3.6.2-1.el7 for package: 
glusterfs-server-3.6.2-1.el7.x86_64
--> Processing Dependency: glusterfs-cli = 3.6.2-1.el7 for package: 
glusterfs-server-3.6.2-1.el7.x86_64
--> Running transaction check
---> Package glusterfs.x86_64 0:3.6.2-1.el6 will be updated
--> Processing Dependency: glusterfs = 3.6.2-1.el6 for package: 
glusterfs-api-3.6.2-1.el6.x86_64
---> Package glusterfs.x86_64 0:3.6.2-1.el7 will be an update
--> Processing Dependency: systemd-units for package: 
glusterfs-3.6.2-1.el7.x86_64
--> Processing Dependency: systemd-units for package: 
glusterfs-3.6.2-1.el7.x86_64
---> Package glusterfs-cli.x86_64 0:3.6.2-1.el6 will be updated
---> Package glusterfs-cli.x86_64 0:3.6.2-1.el7 will be an update
---> Package glusterfs-libs.x86_64 0:3.6.2-1.el6 will be updated
---> Package glusterfs-libs.x86_64 0:3.6.2-1.el7 will be an update
--> Processing Dependency: rsyslog-mmjsonparse for package: 
glusterfs-libs-3.6.2-1.el7.x86_64
--> Running transaction check
---> Package glusterfs.x86_64 0:3.6.2-1.el7 will be an update
--> Processing Dependency: systemd-units for package: 
glusterfs-3.6.2-1.el7.x86_64
--> Processing Dependency: systemd-units for package: 
glusterfs-3.6.2-1.el7.x86_64
---> Package glusterfs-api.x86_64 0:3.6.2-1.el6 will be updated
---> Package glusterfs-api.x86_64 0:3.6.2-1.el7 will be an update
---> Package glusterfs-libs.x86_64 0:3.6.2-1.el7 will be an update
--> Processing Dependency: rsyslog-mmjsonparse for package: 
glusterfs-libs-3.6.2-1.el7.x86_64
--> Finished Dependency Resolution
Error: Package: glusterfs-3.6.2-1.el7.x86_64 (glusterfs-epel)
   Requires: systemd-units
Error: Package: glusterfs-libs-3.6.2-1.el7.x86_64 (glusterfs-epel)
   Requires: rsyslog-mmjsonparse
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

Thanks for any help you may provide.

Eric
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Missing gsyncd and dependency issues with gluster on AWS AMI host

2015-04-08 Thread Eric Berg
I'm setting up geo-replication from a cluster in our colo facility to a gluster 
cluster in amazon.  I set up the amazon instances as AMI builds (probably a 
mistake), by enabling gluster epel repo from download.gluster.org.

While trying to set up geo-replication, I could not find gsyncd, so I tried to 
update the existing 3.6.2-1.el6 versions, and I get the following, indicating 
that a) it's looking at el7 packages, when 6 is clearly stated in the 
gluster-epel.repo file and b) there are unmet dependencies for systemd-units 
and rsyslog-mmjsonparse.

I'm thinking that I should just rebuild these instances as ubuntu 14.x boxes, 
like the ones in our datacenter, but it might be easier to solve this problem.

Anybody have any thoughts on the matter?

Here's the output of the install command:

# yum install -y glusterfs{-fuse,-server}
Loaded plugins: priorities, update-motd, upgrade-helper
epel/x86_64/metalink

 |  16 kB 00:01
glusterfs-epel/x86_64   

 | 2.9 kB 00:00
Not using downloaded repomd.xml because it is older than what we have:
  Current   : Fri Jan 23 06:00:21 2015
  Downloaded: Fri Jan 23 06:00:19 2015
glusterfs-noarch-epel   

 | 2.9 kB 00:00
Not using downloaded repomd.xml because it is older than what we have:
  Current   : Fri Jan 23 06:00:23 2015
  Downloaded: Fri Jan 23 06:00:20 2015
# Place this file in your /etc/yum.repos.d/ directory
975 packages excluded due to repository priority protections
Resolving Dependencies
--> Running transaction check
---> Package glusterfs-fuse.x86_64 0:3.6.2-1.el6 will be updated
---> Package glusterfs-fuse.x86_64 0:3.6.2-1.el7 will be an update
--> Processing Dependency: glusterfs = 3.6.2-1.el7 for package: 
glusterfs-fuse-3.6.2-1.el7.x86_64
---> Package glusterfs-server.x86_64 0:3.6.2-1.el6 will be updated
---> Package glusterfs-server.x86_64 0:3.6.2-1.el7 will be an update
--> Processing Dependency: glusterfs-libs = 3.6.2-1.el7 for package: 
glusterfs-server-3.6.2-1.el7.x86_64
--> Processing Dependency: glusterfs-cli = 3.6.2-1.el7 for package: 
glusterfs-server-3.6.2-1.el7.x86_64
--> Running transaction check
---> Package glusterfs.x86_64 0:3.6.2-1.el6 will be updated
--> Processing Dependency: glusterfs = 3.6.2-1.el6 for package: 
glusterfs-api-3.6.2-1.el6.x86_64
---> Package glusterfs.x86_64 0:3.6.2-1.el7 will be an update
--> Processing Dependency: systemd-units for package: 
glusterfs-3.6.2-1.el7.x86_64
--> Processing Dependency: systemd-units for package: 
glusterfs-3.6.2-1.el7.x86_64
---> Package glusterfs-cli.x86_64 0:3.6.2-1.el6 will be updated
---> Package glusterfs-cli.x86_64 0:3.6.2-1.el7 will be an update
---> Package glusterfs-libs.x86_64 0:3.6.2-1.el6 will be updated
---> Package glusterfs-libs.x86_64 0:3.6.2-1.el7 will be an update
--> Processing Dependency: rsyslog-mmjsonparse for package: 
glusterfs-libs-3.6.2-1.el7.x86_64
--> Running transaction check
---> Package glusterfs.x86_64 0:3.6.2-1.el7 will be an update
--> Processing Dependency: systemd-units for package: 
glusterfs-3.6.2-1.el7.x86_64
--> Processing Dependency: systemd-units for package: 
glusterfs-3.6.2-1.el7.x86_64
---> Package glusterfs-api.x86_64 0:3.6.2-1.el6 will be updated
---> Package glusterfs-api.x86_64 0:3.6.2-1.el7 will be an update
---> Package glusterfs-libs.x86_64 0:3.6.2-1.el7 will be an update
--> Processing Dependency: rsyslog-mmjsonparse for package: 
glusterfs-libs-3.6.2-1.el7.x86_64
--> Finished Dependency Resolution
Error: Package: glusterfs-3.6.2-1.el7.x86_64 (glusterfs-epel)
   Requires: systemd-units
Error: Package: glusterfs-libs-3.6.2-1.el7.x86_64 (glusterfs-epel)
   Requires: rsyslog-mmjsonparse
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
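
One guess, untested here: on Amazon Linux AMIs $releasever does not expand to
"6", so a baseurl built from it in the gluster EPEL repo file can end up
resolving to the el7 tree even though "6" appears elsewhere in the file.
Hard-coding the release is one way to test that theory (repo file name
matched with a glob since the exact name may differ):

sed -i 's/\$releasever/6/g' /etc/yum.repos.d/gluster*epel*.repo
yum clean metadata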

Thanks for any help you may provide.

Eric
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] volume create: gsfiles: failed: /gfiles/data is already part of a volume

2015-02-17 Thread Eric Mortensen
Hi

I have somehow put my Gluster 3.6 in a state where I am unable to create any 
volumes. Even having unmounted all Gluster-related mounts, deleting all 
relevant files/directories I get the following error when attempting to create 
a volume. Here is what I am trying to do:

> mkdir -p /gfiles/data
> mount /dev/mapper/vg.files-lvfiles /gfiles -o 
> noatime,errors=remount-ro,data=writeback
> gluster volume create gsfiles replica 2 transport tcp 10.2.3.100:/gfiles/data 
>  10.2.4.100:/gfiles/data

volume create: gsfiles: failed: /gfiles/data is already part of a volume

It does not matter if I create completely new directories, mounts and volume 
names. I get the same error. I have re-installed Gluster. I have restarted all 
services.  
I am on CentOS6.4.

What could be going on here? Anything I can do?
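
The usual trigger for this particular error is leftover GlusterFS extended
attributes on the brick directory or one of its parents (including the mount
point itself), which re-creating subdirectories does not remove. A sketch of
how one might check for and clear them, assuming the old brick contents are
expendable; repeat the setfattr lines for any parent that shows the attributes:

getfattr -d -m . -e hex /gfiles/data /gfiles
setfattr -x trusted.glusterfs.volume-id /gfiles/data
setfattr -x trusted.gfid /gfiles/data
rm -rf /gfiles/data/.glusterfs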

Regards,
Eric Mortensen

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Client hangs during server reboot (2-node replicated setup)

2015-01-26 Thread Eric Mortensen
Hello! 

I created a 2-node replica cluster with:

Volume Name: gsfiles
Type: Replicate
Volume ID: e01f6dc3-eb73-4bea-a187-eda98fe2748a
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.2.3.100:/glusterdata/files
Brick2: 10.2.4.100:/glusterdata/files
Options Reconfigured:
auth.allow: 10.2.3.100,10.2.4.100

/etc/fstab:
/dev/mapper/vg.files-lvfiles  /glusterdata  ext4  noatime,errors=remount-ro,data=writeback
10.2.3.100:gsfiles  /files  glusterfs  defaults  0 0

Both 10.2.3.100 and 10.2.4.100 are clients as well as servers.

When I reboot 10.2.3.100, and try to access /files from 10.2.4.100 the latter 
shell hangs indefinitely until 10.2.3.100 is up again. If I first stop the 
glusterfs and glusterfsd services, then I can access the data on the other node 
while the server reboots.

How do I avoid this? How do I ensure the system as a whole serves client 
requests even if one node goes down? 
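
One knob that is often involved when a surviving replica hangs while its peer
is down is the client-side ping timeout (42 seconds by default). Whether it
explains a hang that lasts until the peer returns is another question, but it
is cheap to test by lowering it:

gluster volume set gsfiles network.ping-timeout 10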

Help greatly appreciated,
Eric Mortensen


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster volume not automounted when peer is down

2014-11-25 Thread Eric Ewanco
Hmmm.  I can see how that would work if you had an external client that wanted 
to connect to a cluster, and one of the nodes was down; it would need to pick 
another node in the cluster, yes.  But that's not my scenario.  My scenario is 
I have two nodes in a cluster, one node goes down, and if the surviving node 
reboots, it cannot mount the gluster volume *on itself* at startup, but mounts 
it fine after startup.  In other words, the mount -t glusterfs -a fails (and 
fstab contains localhost:/rel-vol  /home glusterfs  defaults,_netdev  0 0) on 
startup, but not afterwards.  I am not successfully "mapping" these answers to 
my situation -- if I change "localhost" to a round robin dns address with two 
addresses, it will either return the down node (which does us no good) or the 
current node, which is equivalent to localhost, and presumably it will do the 
same thing, won't it?

Confused,
Eric

From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Joe Julian
Sent: Tuesday, November 25, 2014 1:04 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster volume not automounted when peer is down

A much simpler answer is to assign a hostname to multiple IP addresses (round 
robin dns). When gethostbyname() returns multiple entries, the client will try 
them all until it's successful.
On 11/24/2014 06:23 PM, Paul Robert Marino wrote:
This is simple and can be handled in many ways.

Some background first.
The mount point is a single IP or host name. The only thing the client uses it 
for is to download a volfile describing all the bricks in the cluster. It then 
opens connections to all the nodes containing bricks for that volume.

So the answer is tell the client to connect to a virtual IP address.

I personally use keepalived for this, but you can use any one of the many IPVS 
or other tools that manage IPs for this.  I assign the VIP to a primary node, 
then have each node monitor the cluster processes; if they die on a node, it 
goes into a faulted state and cannot own the VIP.

As long as the clients are connecting to a running host in the cluster you are 
fine, even if that host doesn't own bricks in the volume but is aware of them as 
part of the cluster.

-- Sent from my HP Pre3


____
On Nov 24, 2014 8:07 PM, Eric Ewanco <eric.ewa...@genband.com> wrote:
Hi all,

We're trying to use gluster as a replicated volume.  It works OK when both 
peers are up but when one peer is down and the other reboots, the "surviving" 
peer does not automount glusterfs.  Furthermore, after the boot sequence is 
complete, it can be mounted without issue.  It automounts fine when the peer is 
up during startup.  I tried to google this and while I found some similar 
issues, I haven't found any solutions to my problem.  Any insight would be 
appreciated.  Thanks.

gluster volume info output (after startup):
Volume Name: rel-vol
Type: Replicate
Volume ID: 90cbe313-e9f9-42d9-a947-802315ab72b0
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.250.1.1:/export/brick1
Brick2: 10.250.1.2:/export/brick1

gluster peer status output (after startup):
Number of Peers: 1

Hostname: 10.250.1.2
Uuid: 8d49b929-4660-4b1e-821b-bfcd6291f516
State: Peer in Cluster (Disconnected)

Original volume create command:
gluster volume create rel-vol rep 2 transport tcp 10.250.1.1:/export/brick1 
10.250.1.2:/export/brick1

I am running Gluster 3.4.5 on OpenSuSE 12.2.
gluster --version:
glusterfs 3.4.5 built on Jul 25 2014 08:31:19
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General 
Public License.

The fstab line is:

localhost:/rel-vol  /home  glusterfs  defaults,_netdev  0 0

lsof -i :24007-24100:
COMMANDPID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
glusterd  4073 root6u  IPv4  82170  0t0  TCP s1:24007->s1:1023 
(ESTABLISHED)
glusterd  4073 root9u  IPv4  13816  0t0  TCP *:24007 (LISTEN)
glusterd  4073 root   10u  IPv4  88106  0t0  TCP s1:exp2->s2:24007 
(SYN_SENT)
glusterfs 4097 root8u  IPv4  16751  0t0  TCP s1:1023->s1:24007 
(ESTABLISHED)

This is shorter than it is when it works, but maybe that's because the mount 
spawns some more processes.


Some ports are down:



root@q50-s1:/root> telnet localhost 24007

Trying ::1...

telnet: connect to address ::1: Connection refused

Trying 127.0.0.1...

Connected to localhost.

Escape character is '^]'.



telnet> close

Connection closed.

root@q50-s1:/root> telnet localhost 24009

Trying ::1...

telnet: connect to address ::1: Connection refused

Trying 127.0.0.1...

telnet: connect to address 127.0.0.1: Connection refused

Re: [Gluster-users] Gluster volume not automounted when peer is down

2014-11-25 Thread Eric Ewanco
I very much appreciate the feedback.  I am a bit confused, though.  What do you 
mean by “tell the client to connect to a virtual IP address”; do you mean issue 
the mount command using a virtual IP address?  I’m presently mounting it on 
localhost, which should always be available.  The only clients accessing the 
volume are two gluster nodes, which are each holding replicas of a single 
volume, so the idea is each one accesses its own replica with gluster used to 
sync the replicas.  Thus when one goes down, the other relies on its own copy 
of the data.  So how would your idea work in this scenario:  The two nodes have 
a virtual IP address, each one mounts that IP address instead of localhost, so 
when one of them goes down the other assumes that IP address and … well, this 
is where I get confused?
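
To make the keepalived approach concrete, here is a rough sketch under the
assumption that a spare address (say 10.250.1.10) is reserved as the mount
target; the VIP, interface name and check script are all placeholders, and the
same configuration would go on both nodes with different priorities:

vrrp_script chk_glusterd {
    script "pidof glusterd"
    interval 2
}

vrrp_instance gluster_vip {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100              # use a lower priority, e.g. 90, on the other node
    advert_int 1
    virtual_ipaddress {
        10.250.1.10/24
    }
    track_script {
        chk_glusterd
    }
}

The fstab entry would then point at the VIP instead of localhost:

10.250.1.10:/rel-vol  /home  glusterfs  defaults,_netdev  0 0

Whether this actually helps the single-surviving-node boot case is a fair
question, since the surviving node would simply claim the VIP itself.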

Thanks again,
Eric

From: Paul Robert Marino [mailto:prmari...@gmail.com]
Sent: Monday, November 24, 2014 9:23 PM
To: Eric Ewanco; gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster volume not automounted when peer is down

This is simple and can be handled in many ways.

Some background first.
The mount point is a single IP or host name. The only thing the client uses it 
for is to download a volfile describing all the bricks in the cluster. It then 
opens connections to all the nodes containing bricks for that volume.

So the answer is tell the client to connect to a virtual IP address.

I personally use keepalived for this, but you can use any one of the many IPVS 
or other tools that manage IPs for this.  I assign the VIP to a primary node, 
then have each node monitor the cluster processes; if they die on a node, it 
goes into a faulted state and cannot own the VIP.

As long as the clients are connecting to a running host in the cluster you are 
fine, even if that host doesn't own bricks in the volume but is aware of them as 
part of the cluster.
-- Sent from my HP Pre3


On Nov 24, 2014 8:07 PM, Eric Ewanco <eric.ewa...@genband.com> wrote:
Hi all,

We’re trying to use gluster as a replicated volume.  It works OK when both 
peers are up but when one peer is down and the other reboots, the “surviving” 
peer does not automount glusterfs.  Furthermore, after the boot sequence is 
complete, it can be mounted without issue.  It automounts fine when the peer is 
up during startup.  I tried to google this and while I found some similar 
issues, I haven’t found any solutions to my problem.  Any insight would be 
appreciated.  Thanks.

gluster volume info output (after startup):
Volume Name: rel-vol
Type: Replicate
Volume ID: 90cbe313-e9f9-42d9-a947-802315ab72b0
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.250.1.1:/export/brick1
Brick2: 10.250.1.2:/export/brick1

gluster peer status output (after startup):
Number of Peers: 1

Hostname: 10.250.1.2
Uuid: 8d49b929-4660-4b1e-821b-bfcd6291f516
State: Peer in Cluster (Disconnected)

Original volume create command:
gluster volume create rel-vol rep 2 transport tcp 10.250.1.1:/export/brick1 
10.250.1.2:/export/brick1

I am running Gluster 3.4.5 on OpenSuSE 12.2.
gluster --version:
glusterfs 3.4.5 built on Jul 25 2014 08:31:19
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General 
Public License.

The fstab line is:

localhost:/rel-vol  /home  glusterfs  defaults,_netdev  0 0

lsof -i :24007-24100:
COMMANDPID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
glusterd  4073 root6u  IPv4  82170  0t0  TCP s1:24007->s1:1023 
(ESTABLISHED)
glusterd  4073 root9u  IPv4  13816  0t0  TCP *:24007 (LISTEN)
glusterd  4073 root   10u  IPv4  88106  0t0  TCP s1:exp2->s2:24007 
(SYN_SENT)
glusterfs 4097 root8u  IPv4  16751  0t0  TCP s1:1023->s1:24007 
(ESTABLISHED)

This is shorter than it is when it works, but maybe that’s because the mount 
spawns some more processes.


Some ports are down:



root@q50-s1:/root> telnet localhost 24007

Trying ::1...

telnet: connect to address ::1: Connection refused

Trying 127.0.0.1...

Connected to localhost.

Escape character is '^]'.



telnet> close

Connection closed.

root@q50-s1:/root> telnet localhost 24009

Trying ::1...

telnet: connect to address ::1: Connection refused

Trying 127.0.0.1...

telnet: connect to address 127.0.0.1: Connection refused

ps axww | fgrep glu:

4073 ?Ssl0:10 /usr/sbin/glusterd -p /run/glusterd.pid

4097 ?Ssl0:00 /usr/sbin/glusterfsd -s 10.250.1.1 --volfile-id 
rel-vol.10.250.1.1.export-brick1 -p 
/var/lib/glusterd/vols/rel-vol/run/10.250.1.1-export-brick1.pid -S 
/var/run/89ba432ed09e07e107723b4b266e18f9.socket --brick-name /export/brick1 -l 
/var/log/glusterfs/bricks/export-brick1.log --xlator-option 
*-posix.glusterd-uuid

[Gluster-users] Gluster volume not automounted when peer is down

2014-11-24 Thread Eric Ewanco
Hi all,

We're trying to use gluster as a replicated volume.  It works OK when both 
peers are up but when one peer is down and the other reboots, the "surviving" 
peer does not automount glusterfs.  Furthermore, after the boot sequence is 
complete, it can be mounted without issue.  It automounts fine when the peer is 
up during startup.  I tried to google this and while I found some similar 
issues, I haven't found any solutions to my problem.  Any insight would be 
appreciated.  Thanks.

gluster volume info output (after startup):
Volume Name: rel-vol
Type: Replicate
Volume ID: 90cbe313-e9f9-42d9-a947-802315ab72b0
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.250.1.1:/export/brick1
Brick2: 10.250.1.2:/export/brick1

gluster peer status output (after startup):
Number of Peers: 1

Hostname: 10.250.1.2
Uuid: 8d49b929-4660-4b1e-821b-bfcd6291f516
State: Peer in Cluster (Disconnected)

Original volume create command:
gluster volume create rel-vol rep 2 transport tcp 10.250.1.1:/export/brick1 
10.250.1.2:/export/brick1

I am running Gluster 3.4.5 on OpenSuSE 12.2.
gluster --version:
glusterfs 3.4.5 built on Jul 25 2014 08:31:19
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. 
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General 
Public License.

The fstab line is:

localhost:/rel-vol  /home  glusterfs  defaults,_netdev  0 0

lsof -i :24007-24100:
COMMANDPID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
glusterd  4073 root6u  IPv4  82170  0t0  TCP s1:24007->s1:1023 
(ESTABLISHED)
glusterd  4073 root9u  IPv4  13816  0t0  TCP *:24007 (LISTEN)
glusterd  4073 root   10u  IPv4  88106  0t0  TCP s1:exp2->s2:24007 
(SYN_SENT)
glusterfs 4097 root8u  IPv4  16751  0t0  TCP s1:1023->s1:24007 
(ESTABLISHED)

This is shorter than it is when it works, but maybe that's because the mount 
spawns some more processes.


Some ports are down:



root@q50-s1:/root> telnet localhost 24007

Trying ::1...

telnet: connect to address ::1: Connection refused

Trying 127.0.0.1...

Connected to localhost.

Escape character is '^]'.



telnet> close

Connection closed.

root@q50-s1:/root> telnet localhost 24009

Trying ::1...

telnet: connect to address ::1: Connection refused

Trying 127.0.0.1...

telnet: connect to address 127.0.0.1: Connection refused

ps axww | fgrep glu:

4073 ?Ssl0:10 /usr/sbin/glusterd -p /run/glusterd.pid

4097 ?Ssl0:00 /usr/sbin/glusterfsd -s 10.250.1.1 --volfile-id 
rel-vol.10.250.1.1.export-brick1 -p 
/var/lib/glusterd/vols/rel-vol/run/10.250.1.1-export-brick1.pid -S 
/var/run/89ba432ed09e07e107723b4b266e18f9.socket --brick-name /export/brick1 -l 
/var/log/glusterfs/bricks/export-brick1.log --xlator-option 
*-posix.glusterd-uuid=3b02a581-8fb9-4c6a-8323-9463262f23bc --brick-port 49152 
--xlator-option rel-vol-server.listen-port=49152

5949 ttyS0S+ 0:00 fgrep glu
These are the error messages I see in /var/log/gluster/home.log (/home is the 
mountpoint):

+--+

[2014-11-24 13:51:27.932285] E 
[client-handshake.c:1742:client_query_portmap_cbk] 0-rel-vol-client-0: failed 
to get the port number for remote subvolume. Please run 'gluster volume status' 
on server to see if brick process is running.

[2014-11-24 13:51:27.932373] W [socket.c:514:__socket_rwv] 0-rel-vol-client-0: 
readv failed (No data available)

[2014-11-24 13:51:27.932405] I [client.c:2098:client_rpc_notify] 
0-rel-vol-client-0: disconnected

[2014-11-24 13:51:30.818281] E [socket.c:2157:socket_connect_finish] 
0-rel-vol-client-1: connection to 10.250.1.2:24007 failed (No route to host)

[2014-11-24 13:51:30.818313] E [afr-common.c:3735:afr_notify] 
0-rel-vol-replicate-0: All subvolumes are down. Going offline until atleast one 
of them comes back up.

[2014-11-24 13:51:30.822189] I [fuse-bridge.c:4771:fuse_graph_setup] 0-fuse: 
switched to graph 0

[2014-11-24 13:51:30.822245] W [socket.c:514:__socket_rwv] 0-rel-vol-client-1: 
readv failed (No data available)

[2014-11-24 13:51:30.822312] I [fuse-bridge.c:3726:fuse_init] 0-glusterfs-fuse: 
FUSE inited with protocol versions: glusterfs 7.13 kernel 7.18

[2014-11-24 13:51:30.822562] W [fuse-bridge.c:705:fuse_attr_cbk] 
0-glusterfs-fuse: 2: LOOKUP() / => -1 (Transport endpoint is not connected)

[2014-11-24 13:51:30.835120] I [fuse-bridge.c:4630:fuse_thread_proc] 0-fuse: 
unmounting /home

[2014-11-24 13:51:30.835397] W [glusterfsd.c:1002:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x7f00f0f682bd] (-->/lib64/libpthread.so.0(+0x7e0e) [0x7f00f1603e0e] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xc5) [0x4075f5]))) 0-: received signum (15), shutting down

[2014-11-24 13:51:30.835416] I [fuse-bridge.c:5262:fini] 0-fuse: Unmounting 
'/home'.

Relevant section from /var/

[Gluster-users] noob: web server + file server

2014-09-03 Thread Eric Boo
Hi,

I need lots of space for my web-based application and decided that Glusterfs is 
the way to go.

As  a start, I was thinking of using my web server as a glusterfs server and 
another server with bigger space (call it Server B) to serve as a peer. Is this 
feasible or should I just set the web server as a peer and get Server B to be 
the glusterfs server?

Also, as my space requirements grow, before I add another node, I can increase 
the existing disk space of Server B.  When disk space is added, I can either 
resize the existing partition to use the free space, or create a new partition 
for the free space. I think it’s probably easier to create a new partition for 
the free space and then add it as another brick to glusterfs. Is this a good 
idea?
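
As a concrete sketch of that grow-by-adding-a-brick path for a distributed
volume (volume name and brick path are placeholders), the new space is attached
and then existing data is spread onto it with a rebalance:

gluster volume add-brick myvol serverB:/bricks/brick2
gluster volume rebalance myvol start
gluster volume rebalance myvol status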

Thanks!

-- 
Eric Boo

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] LSI Syncro CS with Glusterfs

2014-08-20 Thread Eric Horwitz
Okay so this is as I speculated. There would not be an automated way to do
this that is already built into gluster.

Maybe this is really a feature request then.

Wouldn't it be advantageous to be able to replicate the path through 2
servers to the same brick instead of replicating the data on the brick? The
assumption being there is an HA failover path built into the hardware to the
storage.

server1(M) -> /dev/brick1 <-- server2(S)
server3(M) -> /dev/brick2 <-- server4(S)

Active server nodes are server1 and server3
Slave server nodes are server2 and server4

If server1 went down server2 would take over

To build this volume, one would use syntax like:

# volume create glfs1 stripe 2 server1,server2:/dev/brick1 server3,server4:/dev/brick2
The point to all of this is cost savings by using active-active storage
without needing to replicate data. Active-active storage is more expensive
than typical JBODs; however, I wouldn't need 2 JBODs for the same space
with replication, thereby reducing the $/GiB cost.

Thoughts?

On Wed, Aug 20, 2014 at 6:08 AM, Vijay Bellur  wrote:

> On 08/20/2014 02:15 AM, Eric Horwitz wrote:
>
>> Well the idea is to build a dual server cluster in a box using hardware
>> meant more for Windows storage server 2012. This way we do not need to
>> replicate data across the nodes since the 2 servers see the same block
>> storage and you have active failover on all the hardware. Dataon has a
>> system for this and they even suggest using gluster however, I cannot
>> seem to figure out how to implement this model. All gluster nodes would
>> need to be active and there doesn't seem to be a master - slave failover
>> model. Thoughts?
>>
>>
> One way of doing this could be:
>
> - Both servers have to be part of the gluster trusted storage pool.
>
> - Create a distributed volume with a brick from one of the servers, say
> server1.
>
> - Upon server failover, replace/failover the brick by bringing in a new
> brick from server2. Both old and new bricks would need to refer to the same
> underlying block storage. I am not aware of what hooks Syncro provides to
> perform this failover. Brick replacement without any data migration can be
> achieved by:
>
> volume replace-brick <volname> <old-brick> <new-brick> commit force
>
> -Vijay
>
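
A command-level sketch of the approach outlined above (server names and brick
paths are placeholders; 3.5-era CLI syntax assumed):

gluster peer probe server2
gluster volume create glfs1 server1:/bricks/brick1
gluster volume start glfs1
# after the Syncro pair fails the shared storage over to server2:
gluster volume replace-brick glfs1 server1:/bricks/brick1 server2:/bricks/brick1 commit force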
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] LSI Syncro CS with Glusterfs

2014-08-19 Thread Eric Horwitz
Well the idea is to build a dual server cluster in a box using hardware
meant more for Windows storage server 2012. This way we do not need to
replicate data across the nodes since the 2 servers see the same block
storage and you have active failover on all the hardware. Dataon has a
system for this and they even suggest using gluster however, I cannot seem
to figure out how to implement this model. All gluster nodes would need to
be active and there doesn't seem to be a master - slave failover model.
Thoughts?
 On Aug 19, 2014 1:14 PM, "Justin Clift"  wrote:

> On 19/08/2014, at 2:02 PM, Eric Horwitz wrote:
> > Hey everyone,
> >
> > I was wondering if anyone has used the LSI Syncro CS controllers to
> build a HA node pair (CiB) to run glusterfs on so that you would not need
> to use replication? Trying to see if this is a viable solution considering
> it significantly reduces the cost for duplicated storage. Also reduces the
> need for Active-Active storage.
>
> Trying in my head to picture how this would work with GlusterFS
> architecture, but I'm not seeing it.
>
> How are you thinking of putting things together? :)
>
> Regards and best wishes,
>
> Justin Clift
>
> --
> GlusterFS - http://www.gluster.org
>
> An open source, distributed file system scaling to several
> petabytes, and handling thousands of clients.
>
> My personal twitter: twitter.com/realjustinclift
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] LSI Syncro CS with Glusterfs

2014-08-19 Thread Eric Horwitz
Hey everyone,

I was wondering if anyone has used the LSI Syncro CS controllers to build a
HA node pair (CiB) to run glusterfs on so that you would not need to use
replication? Trying to see if this is a viable solution considering it
significantly reduces the cost for duplicated storage. Also reduces the
need for Active-Active storage.

If you have any insight into building a HA glusterfs cluster without using
replication it would be greatly appreciated. Thanks
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] glusterfs 3.5.2-1 cannot read volume when stripe<2 NFS OSX 10.9

2014-08-11 Thread Eric Horwitz
Red Hat Bugzilla – Bug 1128820 Submitted
https://bugzilla.redhat.com/show_bug.cgi?id=1128820


On Wed, Aug 6, 2014 at 9:27 PM, Harshavardhana 
wrote:

> Would you mind opening up a bug and provide glusterfs logs server -
> also reproduce it while grabbing a tcpdump on server?
>
> Thanks
>
> On Wed, Aug 6, 2014 at 1:02 PM, Eric Horwitz  wrote:
> > I am having an issue where I can write to a glusterfs striped volume via
> NFS
> > but I cannot read from it. The stripe is 2
> >
> > ls -l hangs. I can rsync to the storage from the Mac.
> >
> > Can anyone help?
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>
>
> --
> Religious confuse piety with mere ritual, the virtuous confuse
> regulation with outcomes
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Glusterfs 3.5.2-1 NFS on OSX 10.9 cannot read

2014-08-07 Thread Eric Horwitz
I am having an issue where I can write to a glusterfs striped volume but I
cannot read from it. The stripe is 2

ls -l hangs. I can rsync to the storage from the Mac using NFS.

Can anyone help?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] glusterfs 3.5.2-1 cannot read volume when stripe<2 NFS OSX 10.9

2014-08-06 Thread Eric Horwitz
I am having an issue where I can write to a glusterfs striped volume via
NFS but I cannot read from it. The stripe is 2

ls -l hangs. I can rsync to the storage from the Mac.

Can anyone help?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Multiple Volumes (bricks), One Disk

2013-11-12 Thread Eric Johnson
I would suggest using different partitions for each brick.  We use LVM 
and start off with a relatively small amount of allocated space, then grow 
the partitions as needed.  If you were to place 2 bricks on the same 
partition, then the free space is not going to show correctly.  Example:


1TB partition 2 bricks on this partition

brick: vol-1-a   using 200GB
brick: vol-2-a   using 300GB.

Both volumes would show that they have ~500GB free, but in reality there 
would be ~500GB that either could use.  I don't know if there would be 
any other issues with putting 2 or more bricks on the same partition, 
but it doesn't seem like a good idea.  I had gluster setup that way when 
I was first testing it, and it seemed to work other than the free space 
issue, but I quickly realized it would be better to separate out the 
bricks on to their own partition.  Using LVM allows you to easily grow 
partitions as needed.
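
A minimal sketch of that growth path, with hypothetical VG/LV names and an XFS
brick (use resize2fs instead of xfs_growfs for ext4 bricks):

lvextend -L +100G /dev/vg_bricks/lv_vol_1_a
xfs_growfs /bricks/vol-1-a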


my 2 cents.


On 11/12/13, 9:31 AM, David Gibbons wrote:

Hi All,

I am interested in some feedback on putting multiple bricks on one 
physical disk. Each brick being assigned to a different volume. Here 
is the scenario:


4 disks per server, 4 servers, 2x2 distribute/replicate

I would prefer to have just one volume but need to do geo-replication 
on some of the data (but not all of it). My thought was to use two 
volumes, which would allow me to selectively geo-replicate just the 
data that I need to, by replicating only one volume.


A couple of questions come to mind:
1) Any implications of doing two bricks for different volumes on one 
physical disk?
2) Will the "free space" across each volume still calculate correctly? 
IE, if one volume takes up 2/3 of the total physical disk space, will 
the second volume still reflect the correct amount of used space?

3) Am I being stupid/missing something obvious?

Cheers,
Dave


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users



--
Eric Johnson
713-968-2546
VP of MIS
Internet America
www.internetamerica.com

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster for RRD Data.

2013-10-08 Thread Eric Rosel
Hi Peter,

I don't think gluster would be good for this use case.
Maybe try to use NFS instead of fuse. It is better for lots of small files.
It would be great if you could report back the test results using NFS.

HTH,
-eric
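
For comparison against the FUSE numbers, a typical way to point a client at the
Gluster-internal NFS server (server and volume names are placeholders; the
built-in server speaks NFSv3 only, and nolock is commonly suggested with it):

mount -t nfs -o vers=3,nolock,proto=tcp server1:/rrdvol /mnt/rrdvol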


On Tue, Oct 8, 2013 at 8:27 PM, Peter Mistich
wrote:

> Hi,
>
> I am new to gluster and have looked all over on how to tune gluster for
> rrd files and have come up empty.
>
> I have a default install of gluster and am using the fuse client.
>
> The issue is the performance of rrd updates are very slow.
>
> here are the test I ran one on the physical box and one from the clients.
>
> ---physical
> Create 10 rrds  3 c/s (0.00125 sdv)   Update 10 rrds 37672 u/s
> (0.1 sdv)
> Create 10 rrds  3 c/s (0.00046 sdv)   Update 20 rrds 40358 u/s
> (0.1 sdv)
> Create 20 rrds  2 c/s (0.00094 sdv)   Update 40 rrds 40088 u/s
> (0.1 sdv)
> Create 40 rrds  2 c/s (0.00190 sdv)   Update 80 rrds 39761 u/s
> (0.2 sdv)
> Create 80 rrds  2 c/s (0.00377 sdv)   Update160 rrds 38989 u/s
> (0.3 sdv)
> Create160 rrds  2 c/s (0.00761 sdv)   Update320 rrds 37603 u/s
> (0.5 sdv)
> Create320 rrds  2 c/s (0.01529 sdv)   Update640 rrds 35122 u/s
> (0.7 sdv)
> Create640 rrds  1 c/s (0.03026 sdv)   Update   1280 rrds 30607 u/s
> (0.00011 sdv)
> Create   1280 rrds  1 c/s (0.05992 sdv)   Update   2560 rrds 20996 u/s
> (0.00019 sdv)
>
>
>
>
> ---clients
> Create 10 rrds  1 c/s (0.06533 sdv)   Update 10 rrds 202 u/s
> (0.00083 sdv)
> Create 10 rrds  1 c/s (0.03981 sdv)   Update 20 rrds 194 u/s
> (0.00107 sdv)
> Create 20 rrds  2 c/s (0.41479 sdv)   Update 40 rrds 182 u/s
> (0.00154 sdv)
> Create 40 rrds  2 c/s (0.12405 sdv)   Update 80 rrds 171 u/s
> (0.00144 sdv)
> Create 80 rrds  1 c/s (0.27594 sdv)   Update160 rrds 159 u/s
> (0.00230 sdv)
> Create160 rrds  1 c/s (0.29361 sdv)   Update320 rrds 125 u/s
> (0.00273 sdv)
> Create320 rrds  2 c/s (0.29585 sdv)   Update640 rrds 110 u/s
> (0.00558 sdv)
> Create640 rrds  1 c/s (0.28813 sdv)   Update   1280 rrds 114 u/s
> (0.00362 sdv)
> Create   1280 rrds  1 c/s (0.28959 sdv)   Update   2560 rrds 110 u/s
> (0.00719 sdv)
>
>
> a little information about the setup.
>
> I am running rhel 6.3 on the gluster server and 6.4 on the clients with a
> 10 gig network between the boxes.
>
> any help would be great. I know there has to be something I can tune to
> get closer to speeds to the physical box.
>
> Thanks,
> Pete
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] volume info

2013-08-11 Thread Eric Rosel
Thanks for the reply.
Already been to that page, it only describes how to create volumes.
It does not explain how to interpret the output of "gluster volume info".

Thanks,
-eric


On Sun, Aug 11, 2013 at 11:07 AM, Vijay Bellur  wrote:

> On 08/11/2013 06:30 AM, Eric Rosel wrote:
>
>> Hi All,
>>
>> I have this volume on a test cluster:
>>
>> ==
>> [root@node01 ~]# gluster volume info
>> Volume Name: backups
>> Type: Distributed-Replicate
>> Volume ID: 26fe7c5f-c15b-4054-b7ef-bf6bfae828df
>> Status: Started
>> Number of Bricks: 2 x 2 = 4
>> Transport-type: tcp
>> Bricks:
>> Brick1: 192.168.2.1:/export/brick1
>> Brick2: 192.168.2.2:/export/brick1
>> Brick3: 192.168.2.1:/export/brick2
>> Brick4: 192.168.2.2:/export/brick2
>> ==
>>
>> How does one determine which bricks are "replicas" and which bricks are
>> used for "distribute"?  There are only 2 nodes here, would all the data
>> in the volume still be accessible if one node were to go down?
>>
>> Apologies if this has been discussed or documented before, please just
>> point me to the appropriate link.
>>
>>
> This might help:
>
> https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md#creating-distributed-replicated-volumes
>
> -Vijay
>
>
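
Assuming the volume was created with the bricks in the order shown (which is
the order "gluster volume info" preserves), replica 2 groups adjacent bricks:
Brick1/Brick2 form one replica pair and Brick3/Brick4 the other, with files
distributed across the two pairs. For example, such a layout would come from:

gluster volume create backups replica 2 192.168.2.1:/export/brick1 192.168.2.2:/export/brick1 192.168.2.1:/export/brick2 192.168.2.2:/export/brick2

Since each pair spans both nodes, every file has a copy on each node and the
data should remain readable if one node goes down.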
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] volume info

2013-08-10 Thread Eric Rosel
Hi All,

I have this volume on a test cluster:

==
[root@node01 ~]# gluster volume info

Volume Name: backups
Type: Distributed-Replicate
Volume ID: 26fe7c5f-c15b-4054-b7ef-bf6bfae828df
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.2.1:/export/brick1
Brick2: 192.168.2.2:/export/brick1
Brick3: 192.168.2.1:/export/brick2
Brick4: 192.168.2.2:/export/brick2
==

How does one determine which bricks are "replicas" and which bricks are
used for "distribute"?  There are only 2 nodes here, would all the data in
the volume still be accessible if one node were to go down?

Apologies if this has been discussed or documented before, please just
point me to the appropriate link.

Thanks,
-eric
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Tuning/Configuration: Specifying the TCP port opened by glusterfsd for each brick?

2012-09-26 Thread Eric
Too bad: That would be helpful with iptables rules.

Thanks anyhow, John.

Eric Pretorious
Truckee, CA
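
For anyone hitting the same iptables question, a sketch of the port ranges
involved (ranges are release-dependent: 3.3-era bricks allocate ports upward
from 24009, 3.4+ moved to 49152; widen the brick range to cover one port per
brick on the node):

iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT     # glusterd / management
iptables -A INPUT -p tcp --dport 24009:24029 -j ACCEPT     # brick ports, 3.3.x
iptables -A INPUT -p tcp --dport 49152:49172 -j ACCEPT     # brick ports, 3.4+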





>
> From: John Mark Walker 
>To: Eric  
>Cc: "gluster-users@gluster.org"  
>Sent: Wednesday, September 26, 2012 7:19 AM
>Subject: Re: [Gluster-users] Tuning/Configuration: Specifying the TCP port 
>opened by glusterfsd for each brick?
> 
>
>
>
>
>To: "gluster-users@gluster.org"  
>>>Sent: Thursday, September 6, 2012 6:21 PM
>>>Subject: [Gluster-users] Tuning/Configuration: Specifying the TCP port 
>>>opened by glusterfsd for each brick?
>>> 
>>>
>>>Is it possible to specify the TCP port opened by glusterfsd for each brick 
>>>(e.g., similar to the nfs.port setting)?
>>>
>>>
>
>Having taken a brief look, I don't see any obvious way to change the ports. It 
>would probably entail changing the source and recompiling. 
>
>-JM
>
>
>
>___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Tuning/Configuration: Specifying the TCP port opened by glusterfsd for each brick?

2012-09-26 Thread Eric
Anybody?

It's been three weeks and nobody's even made a suggestion!

Eric Pretorious
Truckee, CA



>
> From: Eric 
>To: "gluster-users@gluster.org"  
>Sent: Thursday, September 6, 2012 6:21 PM
>Subject: [Gluster-users] Tuning/Configuration: Specifying the TCP port opened 
>by glusterfsd for each brick?
> 
>
>Is it possible to specify the TCP port opened by glusterfsd for each brick 
>(e.g., similar to the nfs.port setting)?
>
>Eric Pretorious
>Truckee, CA
>
>___
>Gluster-users mailing list
>Gluster-users@gluster.org
>http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
>___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Re-provisioning a node and it's bricks

2012-09-08 Thread Eric
FYI: I don't know why, but the second server node required that the volume 
be stopped and restarted before the bricks would be marked as active.


HTH,

Eric P.
Truckee, CA



>
> From: Eric 
>To: "gluster-users@gluster.org"  
>Sent: Saturday, September 8, 2012 10:56 AM
>Subject: Re: [Gluster-users] Re-provisioning a node and it's bricks
> 
>
>There's a document describing the procedure for Gluster 3.2.x: 
>http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server
>
>The procedure for Gluster 3.3.0 is _remarkably_ simple:
>
>1. Start the glusterd daemon on the newly re-provisioned server node.
>2. Probe the surviving server node from the recovered/re-provisioned server 
>node.
>3. Restart the glusterd daemon on the  recovered/re-provisioned server node.
>
>NOTES:
>1. Do NOT remove the extended file system attributes from the bricks on the 
>server node being 
>   recovered/re-provisioned during recovery/re-provisioning.
>2. Verify that any/all partitions that are used as bricks are mounted before 
>performing these steps.
>3. Verify that any/all iptables firewall rules that are necessary for Gluster 
>   to communicate have been added  before performing these steps.
>
>HTH,
>Eric Pretorious
>
>Truckee, CA
>
>
>
>
>>
>> From: Kent Liu 
>>To: 'John Mark Walker' ; 'Eric'  
>>Cc: gluster-users@gluster.org 
>>Sent: Thursday, September 6, 2012 7:00 PM
>>Subject: RE: [Gluster-users] Re-provisioning a node and it's bricks
>> 
>>
>>It would be great if any suggestions from IRC can be shared on this list. 
>>Eric’s question is a common requirement.
>> 
>>Thanks,
>>Kent
>> 
>>From:gluster-users-boun...@gluster.org 
>>[mailto:gluster-users-boun...@gluster.org] On Behalf Of John Mark Walker
>>Sent: Thursday, September 06, 2012 3:02 PM
>>To: Eric
>>Cc: gluster-users@gluster.org
>>Subject: Re: [Gluster-users] Re-provisioning a node and it's bricks
>> 
>>Eric - was good to see you in San Diego. Glad to see you on the list. 
>> 
>>I would recommend trying the IRC channel tomorrow morning. Should be someone 
>>there who can help you. 
>> 
>>-JM
>> 
>>
>>
>>
>>I've created a distributed replicated volume:
>>>> gluster> volume info
>>>>  
>>>> Volume Name: Repositories
>>>> Type: Distributed-Replicate
>>>> Volume ID: 926262ae-2aa6-4bf7-b19e-cf674431b06c
>>> Status: Started
>>>> Number of Bricks: 2 x 2 = 4
>>>> Transport-type: tcp
>>>> Bricks:
>>>> Brick1: 192.168.1.1:/srv/sda7
>>>> Brick2: 192.168.1.2:/srv/sda7
>>>> Brick3: 192.168.1.1:/srv/sdb7
>>>> Brick4: 192.168.1.2:/srv/sdb7
>>>
>>>...by allocating physical partitions on each HDD of each node for the 
>>>volumes' bricks: e.g.,
>>>
>>>> [eric@sn1 srv]$ mount | grep xfs
>>>> /dev/sda7 on /srv/sda7 type xfs (rw)
>>>> /dev/sdb7 on /srv/sdb7 type xfs (rw)
>>>> /dev/sda8 on /srv/sda8 type xfs (rw)
>>>> /dev/sdb8 on /srv/sdb8 type xfs (rw)
>>>> /dev/sda9 on /srv/sda9 type xfs (rw)
>>> /dev/sdb9 on /srv/sdb9 type xfs (rw)
>>>> /dev/sda10 on /srv/sda10 type xfs (rw)
>>>> /dev/sdb10 on /srv/sdb10 type xfs (rw)
>>>
>>>I plan to re-provision both nodes (e.g., convert them  from CentOS -> SLES) 
>>>and need to preserve the data on the current bricks.
>>>
>>>It seems to me that the procedure for this endeavor would be to: detach the 
>>>node that will be re-provisioned; re-provision the node; add the node back 
>>>to the trusted storage pool, and then; add the bricks back to the volume - 
>>>*but* this plan fails at Step #1. i.e.,
>>>
>>> * When attempting to detach the second node from the volume, the Console 
>>>Manager 
>>>   complains "Brick(s) with the peer 192.168.1.2 exist in cluster".
>>> * When attempting to remove the second node's bricks from the volume, the 
>>>Console 
>>>   Manager complains "Bricks not from same subvol for replica".
>>>
>>>Is this even feasible? I've already verified that bricks can be *added* to the
>>>volume (by adding two additional local partitions to the volume) but I'm not
>>>sure where to begin preparing the nodes for re-provisioning.
>>>
>>>Eric Pretorious
>>>Truckee, CA
>>>
>>>___
>>>Gluster-users mailing list
>>>Gluster-users@gluster.org
>>>http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>> 
>>
>>
>___
>Gluster-users mailing list
>Gluster-users@gluster.org
>http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
>___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Re-provisioning a node and it's bricks

2012-09-08 Thread Eric
There's a document describing the procedure for Gluster 3.2.x:
http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server

The procedure for Gluster 3.3.0 is _remarkably_ simple:

1. Start the glusterd daemon on the newly re-provisioned server node.
2. Probe the surviving server node from the recovered/re-provisioned server 
node.
3. Restart the glusterd daemon on the  recovered/re-provisioned server node.

NOTES:
1. Do NOT remove the extended file system attributes from the bricks on the 
server node being 
   recovered/re-provisioned during recovery/re-provisioning.
2. Verify that any/all partitions that are used as bricks are mounted before 
performing these steps.
3. Verify that any/all iptables firewall rules that are necessary for Gluster 
to communicate have 
   been added  before performing these steps.
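
A rough shell sketch of the same sequence, run on the rebuilt node once its brick
partitions are mounted (the surviving peer's hostname, sn1, and the init-script
style are assumptions - adjust for your distribution):

    service glusterd start      # step 1: bring glusterd up on the rebuilt node
    gluster peer probe sn1      # step 2: probe the surviving node from the rebuilt one
    service glusterd restart    # step 3: restart so the synced peer/volume info is loaded
    gluster peer status         # sanity check: the pool should be healthy again
    gluster volume status       # sanity check: the bricks should come back online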

HTH,
Eric Pretorious

Truckee, CA




>
> From: Kent Liu 
>To: 'John Mark Walker' ; 'Eric'  
>Cc: gluster-users@gluster.org 
>Sent: Thursday, September 6, 2012 7:00 PM
>Subject: RE: [Gluster-users] Re-provisioning a node and it's bricks
> 
>
>It would be great if any suggestions from IRC can be shared on this list. 
>Eric’s question is a common requirement.
> 
>Thanks,
>Kent
> 
>From:gluster-users-boun...@gluster.org 
>[mailto:gluster-users-boun...@gluster.org] On Behalf Of John Mark Walker
>Sent: Thursday, September 06, 2012 3:02 PM
>To: Eric
>Cc: gluster-users@gluster.org
>Subject: Re: [Gluster-users] Re-provisioning a node and it's bricks
> 
>Eric - was good to see you in San Diego. Glad to see you on the list. 
> 
>I would recommend trying the IRC channel tomorrow morning. Should be someone 
>there who can help you. 
> 
>-JM
> 
>
>
>
>I've created a distributed replicated volume:
>>> gluster> volume info
>>>  
>>> Volume Name: Repositories
>>> Type: Distributed-Replicate
>>> Volume ID: 926262ae-2aa6-4bf7-b19e-cf674431b06c
>>> Status: Started
>>> Number of Bricks: 2 x 2 = 4
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: 192.168.1.1:/srv/sda7
>>> Brick2: 192.168.1.2:/srv/sda7
>>> Brick3: 192.168.1.1:/srv/sdb7
>>> Brick4: 192.168.1.2:/srv/sdb7
>>
>>...by allocating physical partitions on each HDD of each node for the 
>>volumes' bricks: e.g.,
>>
>>> [eric@sn1 srv]$ mount | grep xfs
>>> /dev/sda7 on /srv/sda7 type xfs (rw)
>>> /dev/sdb7 on /srv/sdb7 type xfs (rw)
>>> /dev/sda8 on /srv/sda8 type xfs (rw)
>>> /dev/sdb8 on /srv/sdb8 type xfs (rw)
>>> /dev/sda9 on /srv/sda9 type xfs (rw)
>>> /dev/sdb9 on /srv/sdb9 type xfs (rw)
>>> /dev/sda10 on /srv/sda10 type xfs (rw)
>>> /dev/sdb10 on /srv/sdb10 type xfs (rw)
>>
>>I plan to re-provision both nodes (e.g., convert them  from CentOS -> SLES) 
>>and need to preserve the data on the current bricks.
>>
>>It seems to me that the procedure for this endeavor would be to: detach the 
>>node that will be re-provisioned; re-provision the node; add the node back to 
>>the trusted storage pool, and then; add the bricks back to the volume - *but* 
>>this plan fails at Step #1. i.e.,
>>
>> * When attempting to detach the second node from the volume, the Console 
>>Manager 
>>   complains "Brick(s) with the peer 192.168.1.2 exist in cluster".
>> * When attempting to remove the second node's bricks from the volume, the 
>>Console 
>>   Manager complains "Bricks not from same subvol for replica".
>>
>>Is this even feasible? I've already verified that bricks can be *added* to 
>>the volume (by adding two additional local partitions to the volume) but I'm 
>>not sure where to begin preparing the nodes for re-provisioning.
>>
>>Eric Pretorious
>>Truckee, CA
>>
>>___
>>Gluster-users mailing list
>>Gluster-users@gluster.org
>>http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> 
>
>___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Feature Request, Console Manger: Command History

2012-09-06 Thread Eric
After installing the readline headers:

> [eric@sn2 glusterfs-3.3.0]$ rpm -qa | grep readline
> readline-6.0-4.el6.x86_64
> readline-devel-6.0-4.el6.x86_64


...uninstalling and re-compiling:

> [eric@sn2 glusterfs-3.3.0]$ sudo make uninstall && sudo make clean && 
> ./configure && make && sudo make install

...command history+editing seems to be working splendidly!

Eric Pretorious
Truckee, CA




>
> From: Eric 
>To: "gluster-users@gluster.org"  
>Sent: Thursday, September 6, 2012 5:42 PM
>Subject: Re: [Gluster-users] Feature Request, Console Manger: Command History
> 
>
>The readline library is installed:
>
>
>> [eric@sn2 glusterfs-3.3.0]$ ldconfig -p | grep readline
>>    libreadline.so.6 (libc6,x86-64) => /lib64/libreadline.so.6
>
>
>...but ./configure doesn't seem to detect it:
>
>
>> GlusterFS configure summary
>> ===
>> FUSE client    : yes
>> Infiniband verbs   : no
>> epoll IO multiplex : yes
>> argp-standalone    : no
>> fusermount : no
>> readline   : no
>> georeplication : yes
>
>
>Suggestions?
>
>
>Eric Pretorious
>Truckee, CA
>
>
>
>>
>> From: Brian Candler 
>>To: Eric  
>>Cc: "gluster-users@gluster.org"  
>>Sent: Thursday, September 6, 2012 3:26 AM
>>Subject: Re: [Gluster-users] Feature Request, Console Manger: Command History
>> 
>>On Wed, Sep 05, 2012 at 08:53:27PM -0700, Eric wrote:
>>>    I compiled Gluster by following the very simple directions that were
>>>    provided in ./INSTALL:
>>>    1. ./configure
>>>    2. make
>>>    3. make install
>>>    FWIW: There doesn't appear to be anything in the Makefile about
>>>    readline.
>>
>>$ grep readline configure.ac
>>AC_CHECK_LIB([readline -lcurses],[readline],[RLLIBS="-lreadline -lcurses"])
>>AC_CHECK_LIB([readline -ltermcap],[readline],[RLLIBS="-lreadline -ltermcap"])
>>AC_CHECK_LIB([readline -lncurses],[readline],[RLLIBS="-lreadline -lncurses"])
>>   AC_DEFINE(HAVE_READLINE, 1, [readline enabled CLI])
>>echo "readline           : $BUILD_READLINE"
>>
>>That is: the readline library is detected, and the output of ./configure
>>should have reported whether readline was found or not.
>>
>>$ grep readline glusterfs.spec.in
>>BuildRequires: ncurses-devel readline-devel openssl-devel
>>- Add readline and libtermcap dependencies
>>- Update to support readline
>>
>>That is: when you build an RPM package, readline-devel is automatically
>>required as a build-time dependency (was added Tue Jul 19 2011)
>>
>>
>>
>>
>___
>Gluster-users mailing list
>Gluster-users@gluster.org
>http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
>___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Tuning/Configuration: Specifying the TCP port opened by glusterfsd for each brick?

2012-09-06 Thread Eric
Is it possible to specify the TCP port opened by glusterfsd for each brick 
(e.g., similar to the nfs.port setting)?
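
(As far as I can tell there is no per-brick port option in this release - glusterd
simply hands out brick ports sequentially starting at 24009. Later releases appear
to let you shift the start of that range in /etc/glusterfs/glusterd.vol; treat the
option name below as an assumption to verify against your version's documentation:)

    # fragment of /etc/glusterfs/glusterd.vol, inside the "volume management" block:
    option base-port 49152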

Eric Pretorious
Truckee, CA
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Feature Request, Console Manger: Command History

2012-09-06 Thread Eric
The readline library is installed:

> [eric@sn2 glusterfs-3.3.0]$ ldconfig -p | grep readline
>    libreadline.so.6 (libc6,x86-64) => /lib64/libreadline.so.6


...but ./configure doesn't seem to detect it:

> GlusterFS configure summary
> ===
> FUSE client    : yes
> Infiniband verbs   : no
> epoll IO multiplex : yes
> argp-standalone    : no
> fusermount : no
> readline   : no
> georeplication : yes


Suggestions?

Eric Pretorious
Truckee, CA




>____
> From: Brian Candler 
>To: Eric  
>Cc: "gluster-users@gluster.org"  
>Sent: Thursday, September 6, 2012 3:26 AM
>Subject: Re: [Gluster-users] Feature Request, Console Manger: Command History
> 
>On Wed, Sep 05, 2012 at 08:53:27PM -0700, Eric wrote:
>>    I compiled Gluster by following the very simple directions that were
>>    provided in ./INSTALL:
>>    1. ./configure
>>    2. make
>>    3. make install
>>    FWIW: There doesn't appear to be anything in the Makefile about
>>    readline.
>
>$ grep readline configure.ac
>AC_CHECK_LIB([readline -lcurses],[readline],[RLLIBS="-lreadline -lcurses"])
>AC_CHECK_LIB([readline -ltermcap],[readline],[RLLIBS="-lreadline -ltermcap"])
>AC_CHECK_LIB([readline -lncurses],[readline],[RLLIBS="-lreadline -lncurses"])
>   AC_DEFINE(HAVE_READLINE, 1, [readline enabled CLI])
>echo "readline           : $BUILD_READLINE"
>
>That is: the readline library is detected, and the output of ./configure
>should have reported whether readline was found or not.
>
>$ grep readline glusterfs.spec.in
>BuildRequires: ncurses-devel readline-devel openssl-devel
>- Add readline and libtermcap dependencies
>- Update to support readline
>
>That is: when you build an RPM package, readline-devel is automatically
>required as a build-time dependency (was added Tue Jul 19 2011)
>
>
>
>___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Re-provisioning a node and it's bricks

2012-09-05 Thread Eric
I've created a distributed replicated volume:


> gluster> volume info
>  
> Volume Name: Repositories
> Type: Distributed-Replicate
> Volume ID: 926262ae-2aa6-4bf7-b19e-cf674431b06c
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.1.1:/srv/sda7
> Brick2: 192.168.1.2:/srv/sda7
> Brick3: 192.168.1.1:/srv/sdb7
> Brick4: 192.168.1.2:/srv/sdb7

...by allocating physical partitions on each HDD of each node for the volumes' 
bricks: e.g.,


> [eric@sn1 srv]$ mount | grep xfs
> /dev/sda7 on /srv/sda7 type xfs (rw)
> /dev/sdb7 on /srv/sdb7 type xfs (rw)
> /dev/sda8 on /srv/sda8 type xfs (rw)
> /dev/sdb8 on /srv/sdb8 type xfs (rw)
> /dev/sda9 on /srv/sda9 type xfs (rw)
> /dev/sdb9 on /srv/sdb9 type xfs (rw)
> /dev/sda10 on /srv/sda10 type xfs (rw)
> /dev/sdb10 on /srv/sdb10 type xfs (rw)

I plan to re-provision both nodes (e.g., convert them  from CentOS -> SLES) and 
need to preserve the data on the current bricks.

It seems to me that the procedure for this endeavor would be to: detach the 
node that will be re-provisioned; re-provision the node; add the node back to 
the trusted storage pool, and then; add the bricks back to the volume - *but* 
this plan fails at Step #1. i.e.,

 * When attempting to  detach the second node from the volume, the Console 
Manager 
   complains "Brick(s) with the peer 192.168.1.2 exist in cluster".
 * When attempting to remove the second node's bricks from the volume, the 
Console
   Manager complains "Bricks not from same subvol for replica".

Is this even feasible? I've already verified that bricks can be *added* to the 
volume (by adding two additional local partitions to the volume) but I'm not 
sure where to begin preparing the nodes for re-provisioning.

Eric Pretorious
Truckee, CA
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] migration operations: Stopping a migration

2012-09-05 Thread Eric
FYI: While working on another project, I discovered that the file system 
attributes had been restored to their original state (or at least what I 
remember their original state to be):

> [eric@sn1 srv]$ for x in sda7 sdb7 sda8 sdb8 ; do sudo getfattr -m - /srv/$x 
> 2> /dev/null ; done
> # file: srv/sda7
> trusted.afr.Repositories-client-0
> trusted.afr.Repositories-client-1
> trusted.gfid
> trusted.glusterfs.dht
> trusted.glusterfs.volume-id
> 
> # file: srv/sdb7
> trusted.afr.Repositories-client-2
> trusted.afr.Repositories-client-3
> trusted.gfid
> trusted.glusterfs.dht
> trusted.glusterfs.volume-id
> 
> ...

Eric Pretorious
Truckee, CA





>
> From: Eric 
>To: "gluster-users@gluster.org"  
>Sent: Wednesday, September 5, 2012 9:34 PM
>Subject: Re: [Gluster-users] migration operations: Stopping a migration
> 
>
>I chose to clear the attributes, leave the files intact, and replace the new 
>brick with the old brick:
>
>
>> [eric@sn1 srv]$ for x in `sudo getfattr -m - /srv/sda7 2> /dev/null | tail 
>> -n +2` ; do sudo setfattr -x $x /srv/sda7 ; done
>
>
>> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda8 
>> 192.168.1.1:/srv/sda7 start
>> replace-brick started successfully
>> 
>> gluster> volume
 replace-brick Repositories 192.168.1.1:/srv/sda8 192.168.1.1:/srv/sda7 status
>> Number of files migrated = 24631    Migration complete
>> 
>> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda8 
>> 192.168.1.1:/srv/sda7 commit
>> replace-brick commit successful
>> 
>> gluster> volume info Repositories
>>  
>> Volume Name: Repositories
>> Type: Distributed-Replicate
>> Volume ID: 926262ae-2aa6-4bf7-b19e-cf674431b06c
>> Status: Started
>> Number of Bricks: 2 x 2 = 4
>> Transport-type: tcp
>> Bricks:
>> Brick1: 192.168.1.1:/srv/sda7
>> Brick2: 192.168.1.2:/srv/sda7
>> Brick3: 192.168.1.1:/srv/sdb7
>> Brick4: 192.168.1.2:/srv/sdb7
>
>
>...but the file system attributes of /srv/sda7 (i.e., the old brick) are not 
>the same as they were when I began this journey:
>
>
>> [eric@sn1 srv]$ for x in sda7 sdb7 sda8 ; do sudo getfattr -m - /srv/$x 2> 
>> /dev/null ; done
>> # file: srv/sda7
>> trusted.gfid
>> trusted.glusterfs.volume-id
>> 
>> # file: srv/sdb7
>> trusted.afr.Repositories-client-2
>> trusted.afr.Repositories-client-3
>> trusted.gfid
>> trusted.glusterfs.dht
>> trusted.glusterfs.volume-id
>
>
>Why is this? What will the long-term effects of this change? Are there other 
>features that depend on those attributes?
>
>
>
>(The *original* attributes of /srv/sda7 are at the very bottom of this 
>message.)
>
>
>Eric Pretorious
>Truckee, CA
>
>
>>
>> From: Eric 
>>To: "gluster-users@gluster.org"  
>>Sent: Wednesday, September 5, 2012 9:08 PM
>>Subject: Re: [Gluster-users] migration operations: Stopping a migration
>> 
>>
>>On yet another related note: I  noticed that - consistent with the keeping 
>>the file system attributes - the files them selves are left intact on the old 
>>brick. Once I remove the file system attributes...
>>
>>
>>> # for x in `getfattr -m - /srv/sda7 2> /dev/null | tail -n +2` ; do 
>>> setfattr -x $x /srv/sda7 ; done
>>
>>
>>should I remove the old files? Or should I  leave the files intact and 
>>migrate back to the original brick:
>>
>>
>>> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda7 
>>> 192.168.1.1:/srv/sda8 start
>>
>>
>>
>>...and then  heal the volume?
>>
>>
>>Eric Pretorious
>>Truckee, CA
>>
>>
>>
>>
>>>
>>> From: Eric 
>>>To: "gluster-users@gluster.org"  
>>>Sent: Wednesday, September 5, 2012 5:42 PM
>>>Subject: Re: [Gluster-users] migration operations: Stopping a migration
>>> 
>>>
>>>On a related note: Now that the PoC has been completed, I'm not able to 
>>>migrate back to the original brick because of the undocumented, 
>>>save-you-from-yourself file system attribute feature:
>>>
>>>
>>>> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda8 
>>>> 192.168.1.1:/srv/sda7 start
>>>> /srv/sda7 or a prefix of it is already part of a volume
>>>
>>>
>>>
>>>Is there a simpler, more-direct method o

Re: [Gluster-users] migration operations: Stopping a migration

2012-09-05 Thread Eric
I chose to clear the attributes, leave the files intact, and replace the new 
brick with the old brick:

> [eric@sn1 srv]$ for x in `sudo getfattr -m - /srv/sda7 2> /dev/null | tail -n 
> +2` ; do sudo setfattr -x $x /srv/sda7 ; done


> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda8 
> 192.168.1.1:/srv/sda7 start
> replace-brick started successfully
> 
> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda8 
> 192.168.1.1:/srv/sda7 status
> Number of files migrated = 24631    Migration complete
> 
> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda8 
> 192.168.1.1:/srv/sda7 commit
> replace-brick commit successful
> 
> gluster> volume info Repositories
>  
> Volume Name: Repositories
> Type: Distributed-Replicate
> Volume ID: 926262ae-2aa6-4bf7-b19e-cf674431b06c
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.1.1:/srv/sda7
> Brick2: 192.168.1.2:/srv/sda7
> Brick3: 192.168.1.1:/srv/sdb7
> Brick4: 192.168.1.2:/srv/sdb7

...but the file system attributes of /srv/sda7 (i.e., the old brick) are not 
the same as they were when I began this journey:

> [eric@sn1 srv]$ for x in sda7 sdb7 sda8 ; do sudo getfattr -m - /srv/$x 2> 
> /dev/null ; done
> # file: srv/sda7
> trusted.gfid
> trusted.glusterfs.volume-id
> 
> # file: srv/sdb7
> trusted.afr.Repositories-client-2
> trusted.afr.Repositories-client-3
> trusted.gfid
> trusted.glusterfs.dht
> trusted.glusterfs.volume-id

Why is this? What will the long-term effects of this change? Are there other 
features that depend on those attributes?


(The *original* attributes of /srv/sda7 are at the very bottom of this message.)

Eric Pretorious
Truckee, CA


>
> From: Eric 
>To: "gluster-users@gluster.org"  
>Sent: Wednesday, September 5, 2012 9:08 PM
>Subject: Re: [Gluster-users] migration operations: Stopping a migration
> 
>
>On yet another related note: I  noticed that - consistent with the keeping the 
>file system attributes - the files them selves are left intact on the old 
>brick. Once I remove the file system attributes...
>
>
>> # for x in `getfattr -m - /srv/sda7 2> /dev/null | tail -n +2` ; do setfattr 
>> -x $x /srv/sda7 ; done
>
>
>should I remove the old files? Or should I  leave the files intact and 
>migrate back to the original brick:
>
>
>> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda7 
>> 192.168.1.1:/srv/sda8 start
>
>
>
>...and then  heal the volume?
>
>
>Eric Pretorious
>Truckee, CA
>
>
>
>
>>
>> From: Eric 
>>To: "gluster-users@gluster.org"  
>>Sent: Wednesday, September 5, 2012 5:42 PM
>>Subject: Re: [Gluster-users] migration operations: Stopping a migration
>> 
>>
>>On a related note: Now that the PoC has been completed, I'm not able to 
>>migrate back to the original brick because of the undocumented, 
>>save-you-from-yourself file system attribute feature:
>>
>>
>>> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda8 
>>> 192.168.1.1:/srv/sda7 start
>>> /srv/sda7 or a prefix of it is already part of a volume
>>
>>
>>
>>Is there a simpler, more-direct method of migrating back to the original 
>>brick or should I wipe the file system attributes manually? I only ask 
>>because:
>>
>>
>>1. the long-term effects of this strategy aren't addressed in the 
>>Administration Guide AFAICT, and;
>>
>>2. though the intent of the feature has merit, it lacks elegance. e.g., the 
>>addition of a "force" attribute
>>
>>   (like that of the commit feature) could be justified in this instance, 
>>IMHO.
>>
>>
>>   > gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda8 
>>192.168.1.1:/srv/sda7 start force
>>   > Usage: volume replace-brick
>>{start|pause|abort|status|commit [force]}
>>
>>
>>
>>Eric Pretorious
>>Truckee, CA
>>
>>
>>
>>
>>>
>>> From: Eric 
>>>To: "gluster-users@gluster.org"  
>>>Sent: Wednesday, September 5, 2012 5:27 PM
>>>Subject: Re: [Gluster-users] migration operations: Stopping a migration
>>> 
>>>
>>>On a hunch, I attempted the "volume replace-brick   
>>> commit" command and, without much fanfare, the volume 
>>>information was updated:
>>>
>>>
>>>> gluster> volume repla

Re: [Gluster-users] migration operations: Stopping a migration

2012-09-05 Thread Eric
On yet another related note: I noticed that - consistent with keeping the file
system attributes - the files themselves are left intact on the old brick. Once I
remove the file system attributes...

> # for x in `getfattr -m - /srv/sda7 2> /dev/null | tail -n +2` ; do setfattr 
> -x $x /srv/sda7 ; done

should I remove the old files? Or should I  leave the files intact and 
migrate back to the original brick:

> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda7 
> 192.168.1.1:/srv/sda8 start


...and then  heal the volume?
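
If it helps, a minimal sketch of kicking off the self-heal and then checking it
with the 3.3 CLI (volume name taken from this thread; run from any server in the
pool):

    gluster volume heal Repositories full    # crawl the whole volume and heal everything
    gluster volume heal Repositories info    # list entries still pending heal, per brick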

Eric Pretorious
Truckee, CA




>
> From: Eric 
>To: "gluster-users@gluster.org"  
>Sent: Wednesday, September 5, 2012 5:42 PM
>Subject: Re: [Gluster-users] migration operations: Stopping a migration
> 
>
>On a related note: Now that the PoC has been completed, I'm not able to 
>migrate back to the original brick because of the undocumented, 
>save-you-from-yourself file system attribute feature:
>
>
>> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda8 
>> 192.168.1.1:/srv/sda7 start
>> /srv/sda7 or a prefix of it is already part of a volume
>
>
>
>Is there a simpler, more-direct method of migrating back to the original brick 
>or should I wipe the file system attributes manually? I only ask because:
>
>
>1. the long-term effects of this strategy aren't addressed in the 
>Administration Guide AFAICT, and;
>
>2. though the intent of the feature has merit, it lacks elegance. e.g., the 
>addition of a "force" attribute
>
>   (like that of the commit feature) could be justified in this instance, IMHO.
>
>
>   > gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda8 
>192.168.1.1:/srv/sda7 start force
>   > Usage: volume replace-brick
>{start|pause|abort|status|commit [force]}
>
>
>
>Eric Pretorious
>Truckee, CA
>
>
>
>
>>
>> From: Eric 
>>To: "gluster-users@gluster.org"  
>>Sent: Wednesday, September 5, 2012 5:27 PM
>>Subject: Re: [Gluster-users] migration operations: Stopping a migration
>> 
>>
>>On a hunch, I attempted the "volume replace-brick   
>> commit" command and, without much fanfare, the volume information 
>>was updated:
>>
>>
>>> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda7 
>>> 192.168.1.1:/srv/sda8 commit
>>> replace-brick commit successful
>>> 
>>> gluster> volume info
>>>  
>>> Volume Name: Repositories
>>> Type:
 Distributed-Replicate
>>> Volume ID: 926262ae-2aa6-4bf7-b19e-cf674431b06c
>>> Status: Started
>>> Number of Bricks: 2 x 2 = 4
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: 192.168.1.1:/srv/sda8
>>> Brick2: 192.168.1.2:/srv/sda7
>>> Brick3: 192.168.1.1:/srv/sdb7
>>> Brick4: 192.168.1.2:/srv/sdb7
>>> 
>>> gluster> volume status
>>> Status of volume: Repositories
>>> Gluster process                        Port    Online    Pid
>>> --
>>> Brick 192.168.1.1:/srv/sda8                24012    Y    13796
>>> Brick 192.168.1.2:/srv/sda7               
 24009    Y    4946
>>> Brick 192.168.1.1:/srv/sdb7                24010    Y    5438
>>> Brick 192.168.1.2:/srv/sdb7                24010    Y    4951
>>> NFS Server on localhost                    38467    Y    13803
>>> Self-heal Daemon on localhost                N/A    Y    13808
>>> NFS Server on 192.168.1.2                38467    Y    7969
>>> Self-heal Daemon on 192.168.1.2           
     N/A    Y    7974
>>
>>
>>The XFS attributes are still intact on the old brick, however:
>>
>>
>>> [eric@sn1 ~]$ for x in sda7 sdb7 sda8 ; do sudo getfattr -m - /srv/$x 2> 
>>> /dev/null ; done
>>> # file: srv/sda7
>>> trusted.afr.Repositories-client-0
>>> trusted.afr.Repositories-client-1
>>> trusted.afr.Repositories-io-threads
>>> trusted.afr.Repositories-replace-brick
>>> trusted.gfid
>>> trusted.glusterfs.dht
>>> trusted.glusterfs.volume-id
>>> 
>>> # file: srv/sdb7
>>> trusted.afr.Repositories-client-2
>>> trusted.afr.Repositories-client-3
>>> trusted.gfid
>>> trusted.glusterfs.dht
>>> trusted.glusterfs.volume-id
>>> 
>>> # file:
 srv/sda8
>>> trusted.afr.Repositories-io-threads
>>> trusted.afr.Repositories-replace-brick
>>>

Re: [Gluster-users] Feature Request, Console Manger: Command History

2012-09-05 Thread Eric
I compiled Gluster by following the very simple directions that were provided 
in ./INSTALL:

1. ./configure
2. make
3. make install

FWIW: There doesn't appear to be anything in the Makefile about readline.
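
For what it's worth, a sketch of the rebuild that is reported to work elsewhere in
this thread, assuming a CentOS-style box (package names are the usual ones, but
verify them for your distribution):

    sudo yum install readline-devel ncurses-devel
    ./configure          # the summary should now show "readline : yes"
    make
    sudo make install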

Eric Pretorious
Truckee, CA




>
> From: Erez Lirov 
>To: John Jolet  
>Cc: Jules Wang ; Eric ; 
>"gluster-users@gluster.org"  
>Sent: Wednesday, September 5, 2012 8:11 PM
>Subject: Re: [Gluster-users] Feature Request, Console Manger: Command History
> 
>
>Did you compile with the readline lib?  I wonder if that adds a buffer...
>
>
>On Wed, Sep 5, 2012 at 10:23 PM, John Jolet  wrote:
>
>I hate it that there's no buffer. I usually do gluster commands at the command
>line instead of interactively. Use the bash buffer that way.
>
>Sent from my PANTECH Element™ on AT&T
>
>Jules Wang  wrote: 
>>hi, Eric: 
>>    You may find this thread useful:  
>>http://gluster.org/pipermail/gluster-users/2012-August/011207.html
>>
>>
Best Regards.
>>Jules Wang.
>>At 2012-09-06 08:13:25,Eric  wrote:
>>
>>I've noticed that when interacting with the Gluster Console Manager...
>>>
>>>
>>> * Commands are quite lengthy most of the time.
>>> * Some commands (e.g., commands that check the status of an operation) get 
>>>used over-and-over.
>>>
>>>
>>>So, before I submit a feature request, I thought that I'd ask first if 
>>>others might find the feature useful: Would other admin's also find it 
>>>useful for the Console Manager to maintain a history buffer so that commands 
>>>can be retrieved, edited, and/or re-used?
>>>
>>>
>>>Eric Pretorious
>>>Truckee, CA
>>>
>>
>>
>>___
>>Gluster-users mailing list
>>Gluster-users@gluster.org
>>http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
>>
>
>
>___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] migration operations: Stopping a migration

2012-09-05 Thread Eric
On a related note: Now that the PoC has been completed, I'm not able to migrate 
back to the original brick because of the undocumented, save-you-from-yourself 
file system attribute feature:

> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda8 
> 192.168.1.1:/srv/sda7 start
> /srv/sda7 or a prefix of it is already part of a volume


Is there a simpler, more-direct method of migrating back to the original brick 
or should I wipe the file system attributes manually? I only ask because:

1. the long-term effects of this strategy aren't addressed in the 
Administration Guide AFAICT, and;

2. though the intent of the feature has merit, it lacks elegance. e.g., the 
addition of a "force" attribute

   (like that of the commit feature) could be justified in this instance, IMHO.

   > gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda8 192.168.1.1:/srv/sda7 start force
   > Usage: volume replace-brick <VOLNAME> <BRICK> <NEW-BRICK> {start|pause|abort|status|commit [force]}
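
For reference, a minimal sketch of wiping the attributes by hand - this assumes it
is acceptable to discard the old brick's Gluster metadata, and that the
volume-id/gfid keys are what trigger the "already part of a volume" check (which is
my reading, not something the docs spell out):

    sudo setfattr -x trusted.glusterfs.volume-id /srv/sda7
    sudo setfattr -x trusted.gfid /srv/sda7
    sudo getfattr -m - /srv/sda7     # confirm which trusted.* keys remain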


Eric Pretorious
Truckee, CA




>____
> From: Eric 
>To: "gluster-users@gluster.org"  
>Sent: Wednesday, September 5, 2012 5:27 PM
>Subject: Re: [Gluster-users] migration operations: Stopping a migration
> 
>
>On a hunch, I attempted the "volume replace-brick   
> commit" command and, without much fanfare, the volume information 
>was updated:
>
>
>> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda7 
>> 192.168.1.1:/srv/sda8 commit
>> replace-brick commit successful
>> 
>> gluster> volume info
>>  
>> Volume Name: Repositories
>> Type:
 Distributed-Replicate
>> Volume ID: 926262ae-2aa6-4bf7-b19e-cf674431b06c
>> Status: Started
>> Number of Bricks: 2 x 2 = 4
>> Transport-type: tcp
>> Bricks:
>> Brick1: 192.168.1.1:/srv/sda8
>> Brick2: 192.168.1.2:/srv/sda7
>> Brick3: 192.168.1.1:/srv/sdb7
>> Brick4: 192.168.1.2:/srv/sdb7
>> 
>> gluster> volume status
>> Status of volume: Repositories
>> Gluster process                        Port    Online    Pid
>> --
>> Brick 192.168.1.1:/srv/sda8                24012    Y    13796
>> Brick 192.168.1.2:/srv/sda7               
 24009    Y    4946
>> Brick 192.168.1.1:/srv/sdb7                24010    Y    5438
>> Brick 192.168.1.2:/srv/sdb7                24010    Y    4951
>> NFS Server on localhost                    38467    Y    13803
>> Self-heal Daemon on localhost                N/A    Y    13808
>> NFS Server on 192.168.1.2                38467    Y    7969
>> Self-heal Daemon on 192.168.1.2           
     N/A    Y    7974
>
>
>The XFS attributes are still intact on the old brick, however:
>
>
>> [eric@sn1 ~]$ for x in sda7 sdb7 sda8 ; do sudo getfattr -m - /srv/$x 2> 
>> /dev/null ; done
>> # file: srv/sda7
>> trusted.afr.Repositories-client-0
>> trusted.afr.Repositories-client-1
>> trusted.afr.Repositories-io-threads
>> trusted.afr.Repositories-replace-brick
>> trusted.gfid
>> trusted.glusterfs.dht
>> trusted.glusterfs.volume-id
>> 
>> # file: srv/sdb7
>> trusted.afr.Repositories-client-2
>> trusted.afr.Repositories-client-3
>> trusted.gfid
>> trusted.glusterfs.dht
>> trusted.glusterfs.volume-id
>> 
>> # file:
 srv/sda8
>> trusted.afr.Repositories-io-threads
>> trusted.afr.Repositories-replace-brick
>> trusted.gfid
>> trusted.glusterfs.volume-id
>
>
>
>Is this intentional (i.e., leaving the the attributes intact)? Or  
>functionality that has yet to be implemented?
>
>
>Eric Pretorious
>Truckee, CA
>
>
>
>>
>> From: Eric 
>>To: "gluster-users@gluster.org"  
>>Sent: Wednesday, September 5, 2012 5:05 PM
>>Subject: [Gluster-users] migration operations: Stopping a migration
>> 
>>
>>I've created a distributed replicated volume:
>>
>>
>>> gluster> volume info
>>>  
>>> Volume Name: Repositories
>>> Type: Distributed-Replicate
>>> Volume ID: 926262ae-2aa6-4bf7-b19e-cf674431b06c
>>> Status: Started
>>> Number of Bricks: 2 x 2 = 4
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: 192.168.1.1:/srv/sda7
>>> Brick2: 192.168.1.2:/srv/sda7
>>> Brick3: 192.168.1.1:/srv/sdb7
>>> Brick4: 192.168.1.2:/srv/sdb7
>>
>>
>>
>>...and begun migrating data from one brick to another as a PoC:
>>
>>
>>> gluster> volume replace-brick Repositories 192.168.1.1:/srv/s

Re: [Gluster-users] migration operations: Stopping a migration

2012-09-05 Thread Eric
On a hunch, I attempted the "volume replace-brick <VOLNAME> <BRICK> <NEW-BRICK>
commit" command and, without much fanfare, the volume information was updated:

> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda7 
> 192.168.1.1:/srv/sda8 commit
> replace-brick commit successful
> 
> gluster> volume info
>  
> Volume Name: Repositories
> Type: Distributed-Replicate
> Volume ID: 926262ae-2aa6-4bf7-b19e-cf674431b06c
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.1.1:/srv/sda8
> Brick2: 192.168.1.2:/srv/sda7
> Brick3: 192.168.1.1:/srv/sdb7
> Brick4: 192.168.1.2:/srv/sdb7
> 
> gluster> volume status
> Status of volume: Repositories
> Gluster process                        Port    Online    Pid
> --
> Brick 192.168.1.1:/srv/sda8                24012    Y    13796
> Brick 192.168.1.2:/srv/sda7                24009    Y    4946
> Brick 192.168.1.1:/srv/sdb7                24010    Y    5438
> Brick 192.168.1.2:/srv/sdb7                24010    Y    4951
> NFS Server on localhost                    38467    Y    13803
> Self-heal Daemon on localhost                N/A    Y    13808
> NFS Server on 192.168.1.2                38467    Y    7969
> Self-heal Daemon on 192.168.1.2                N/A    Y    7974


The XFS attributes are still intact on the old brick, however:

> [eric@sn1 ~]$ for x in sda7 sdb7 sda8 ; do sudo getfattr -m - /srv/$x 2> 
> /dev/null ; done
> # file: srv/sda7
> trusted.afr.Repositories-client-0
> trusted.afr.Repositories-client-1
> trusted.afr.Repositories-io-threads
> trusted.afr.Repositories-replace-brick
> trusted.gfid
> trusted.glusterfs.dht
> trusted.glusterfs.volume-id
> 
> # file: srv/sdb7
> trusted.afr.Repositories-client-2
> trusted.afr.Repositories-client-3
> trusted.gfid
> trusted.glusterfs.dht
> trusted.glusterfs.volume-id
> 
> # file: srv/sda8
> trusted.afr.Repositories-io-threads
> trusted.afr.Repositories-replace-brick
> trusted.gfid
> trusted.glusterfs.volume-id


Is this intentional (i.e., leaving the attributes intact)? Or functionality that
has yet to be implemented?

Eric Pretorious
Truckee, CA


>
> From: Eric 
>To: "gluster-users@gluster.org"  
>Sent: Wednesday, September 5, 2012 5:05 PM
>Subject: [Gluster-users] migration operations: Stopping a migration
> 
>
>I've created a distributed replicated volume:
>
>
>> gluster> volume info
>>  
>> Volume Name: Repositories
>> Type: Distributed-Replicate
>> Volume ID: 926262ae-2aa6-4bf7-b19e-cf674431b06c
>> Status: Started
>> Number of Bricks: 2 x 2 = 4
>> Transport-type: tcp
>> Bricks:
>> Brick1: 192.168.1.1:/srv/sda7
>> Brick2: 192.168.1.2:/srv/sda7
>> Brick3: 192.168.1.1:/srv/sdb7
>> Brick4: 192.168.1.2:/srv/sdb7
>
>
>
>...and begun migrating data from one brick to another as a PoC:
>
>
>> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda7 
>> 192.168.1.1:/srv/sda8 start
>> replace-brick started successfully
>> 
>> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda7 
>> 192.168.1.1:/srv/sda8 status
>> Number of files migrated = 5147   Current file= 
>> /centos/5.8/os/x86_64/CentOS/gnome-pilot-conduits-2.0.13-7.el5.x86_64.rpm 
>> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda7
 192.168.1.1:/srv/sda8 status
>> Number of files migrated = 24631    Migration complete 
>
>
>After the migration is finished, though, the list of bricks is wrong:
>
>
>
>> gluster> volume heal Repositories 
>> info      
>> Heal operation on volume Repositories has been successful
>> 
>> Brick 192.168.1.1:/srv/sda7
>> Number of entries: 0
>> 
>> Brick 192.168.1.2:/srv/sda7
>> Number of entries: 0
>> 
>> Brick 192.168.1.1:/srv/sdb7
>> Number of entries: 0
>> 
>> Brick 192.168.1.2:/srv/sdb7
>> Number of entries: 0
>
>
>...and the XFS attributes are still intact on the old brick:
>
>
>> [eric@sn1 ~]$ for x in sda7 sdb7 sda8 ; do sudo getfattr -m - /srv/$x 2> 
>> /dev/null ; done
>> # file: srv/sda7
>> trusted.afr.Repositories-client-0
>> trusted.afr.Repositories-client-1
>> trusted.afr.Repositories-io-threads
>> trusted.afr.Repositories-replace-brick
>> trusted.gfid
>> trusted.glusterfs.dht
>> trusted.glusterfs.pump-path
>> trusted.glusterfs.volume-id
>> 
>> # file:
 srv/sdb7
>> tru

[Gluster-users] Feature Request, Console Manger: Command History

2012-09-05 Thread Eric
I've noticed that when interacting with the Gluster Console Manager...

 * Commands are quite lengthy most of the time.
 * Some commands (e.g., commands that check the status of an operation) get 
used over-and-over.

So, before I submit a feature request, I thought that I'd ask first if others 
might find the feature useful: Would other admins also find it useful for the 
Console Manager to maintain a history buffer so that commands can be retrieved, 
edited, and/or re-used?

Eric Pretorious
Truckee, CA
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] migration operations: Stopping a migration

2012-09-05 Thread Eric
I've created a distributed replicated volume:

> gluster> volume info
>  
> Volume Name: Repositories
> Type: Distributed-Replicate
> Volume ID: 926262ae-2aa6-4bf7-b19e-cf674431b06c
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.1.1:/srv/sda7
> Brick2: 192.168.1.2:/srv/sda7
> Brick3: 192.168.1.1:/srv/sdb7
> Brick4: 192.168.1.2:/srv/sdb7


...and begun migrating data from one brick to another as a PoC:

> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda7 
> 192.168.1.1:/srv/sda8 start
> replace-brick started successfully
> 
> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda7 
> 192.168.1.1:/srv/sda8 status
> Number of files migrated = 5147   Current file= 
> /centos/5.8/os/x86_64/CentOS/gnome-pilot-conduits-2.0.13-7.el5.x86_64.rpm 
> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda7 
> 192.168.1.1:/srv/sda8 status
> Number of files migrated = 24631    Migration complete 


After the migration is finished, though, the list of bricks is wrong:


> gluster> volume heal Repositories 
> info  
> Heal operation on volume Repositories has been successful
> 
> Brick 192.168.1.1:/srv/sda7
> Number of entries: 0
> 
> Brick 192.168.1.2:/srv/sda7
> Number of entries: 0
> 
> Brick 192.168.1.1:/srv/sdb7
> Number of entries: 0
> 
> Brick 192.168.1.2:/srv/sdb7
> Number of entries: 0

...and the XFS attributes are still intact on the old brick:

> [eric@sn1 ~]$ for x in sda7 sdb7 sda8 ; do sudo getfattr -m - /srv/$x 2> 
> /dev/null ; done
> # file: srv/sda7
> trusted.afr.Repositories-client-0
> trusted.afr.Repositories-client-1
> trusted.afr.Repositories-io-threads
> trusted.afr.Repositories-replace-brick
> trusted.gfid
> trusted.glusterfs.dht
> trusted.glusterfs.pump-path
> trusted.glusterfs.volume-id
> 
> # file: srv/sdb7
> trusted.afr.Repositories-client-2
> trusted.afr.Repositories-client-3
> trusted.gfid
> trusted.glusterfs.dht
> trusted.glusterfs.volume-id
> 
> # file: srv/sda8
> trusted.afr.Repositories-io-threads
> trusted.afr.Repositories-replace-brick
> trusted.gfid
> trusted.glusterfs.volume-id


Have I missed a step? Or: Is this (i.e., clean-up) a bug or functionality that 
hasn't been implemented yet?
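
For comparison, the full replace-brick sequence as it ends up being used elsewhere
in this thread - the commit step is what actually updates the brick list (brick
paths below are the ones from this PoC):

    gluster volume replace-brick Repositories 192.168.1.1:/srv/sda7 192.168.1.1:/srv/sda8 start
    gluster volume replace-brick Repositories 192.168.1.1:/srv/sda7 192.168.1.1:/srv/sda8 status
    gluster volume replace-brick Repositories 192.168.1.1:/srv/sda7 192.168.1.1:/srv/sda8 commit
    # or, to stop a migration instead of finishing it:
    gluster volume replace-brick Repositories 192.168.1.1:/srv/sda7 192.168.1.1:/srv/sda8 abort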

Eric Pretorious
Truckee, CA
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Configuring ldconfig

2012-07-24 Thread Eric
Is it enough to add the path /usr/local/lib to ldconfig, or will I need to add
/usr/local/lib/gluster too?
> [eric@sn1 glusterfs-3.3.0]$ ldconfig -p | grep \/usr\/local\/lib
>     libglusterfs.so.0 (libc6,x86-64) => /usr/local/lib/libglusterfs.so.0
>     libglusterfs.so (libc6,x86-64) => /usr/local/lib/libglusterfs.so
>     libgfxdr.so.0 (libc6,x86-64) => /usr/local/lib/libgfxdr.so.0
>     libgfxdr.so (libc6,x86-64) => /usr/local/lib/libgfxdr.so
>     libgfrpc.so.0 (libc6,x86-64) => /usr/local/lib/libgfrpc.so.0
>     libgfrpc.so (libc6,x86-64) => /usr/local/lib/libgfrpc.so
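
A minimal sketch of the ld.so side, assuming a source install under /usr/local (the
translator modules under /usr/local/lib/glusterfs are loaded by full path at run
time, so as far as I can tell they should not need their own ldconfig entry):

    echo /usr/local/lib | sudo tee /etc/ld.so.conf.d/glusterfs.conf
    sudo ldconfig
    ldconfig -p | grep -E 'libglusterfs|libgfrpc|libgfxdr'   # confirm the libraries resolve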


Eric Pretorious
Truckee, CA
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] /etc/exports functionality?

2012-06-12 Thread Eric
Did you receive an answer to your question, Scot?
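
For the subdirectory-export question quoted below, one approach on the Gluster NFS
side is the nfs.export-dir family of volume options - a sketch, with VOLNAME, the
subdirectories, and the network as placeholders, and option names that should be
double-checked against `gluster volume set help` for your release:

    gluster volume set VOLNAME nfs.export-dirs on
    gluster volume set VOLNAME nfs.export-dir "/projects,/homes"
    gluster volume set VOLNAME nfs.rpc-auth-allow 192.168.1.*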

Eric Pretorious
Truckee, CA




>
> From: Scot Kreienkamp 
>To: "gluster-users@gluster.org"  
>Sent: Thursday, May 31, 2012 12:05 PM
>Subject: [Gluster-users] /etc/exports functionality?
> 
>
> 
>Hi,
> 
>I have an NFS server that I would like to move over to gluster.  I have a test 
>3.3 gluster setup (thanks for the shiny new release!), and I'm looking to 
>replicate the setup I have now with my exports file.  I have one main 
>directory and many subdirectories that are remotely mounted via NFS to various 
>hosts.  I don't want to mount the entire filesystem on the remote hosts as 
>that leaves too much room for error and mischief.  Is there a way to control 
>access to a specific shared subdirectory either via NFS or the FUSE client?  
> 
>Thanks!
> 
>Scot Kreienkamp
>skre...@la-z-boy.com
> 
>
>
>This message is intended only for the individual or entity to which it is 
>addressed. It may contain privileged, confidential information which is exempt 
>from disclosure under applicable laws. If you are not the intended recipient, 
>please note that you are strictly
 prohibited from disseminating or distributing this information (other than to 
the intended recipient) or copying this information. If you have received this 
communication in error, please notify us immediately by e-mail or by telephone 
at the above number.
 Thank you. 
>___
>Gluster-users mailing list
>Gluster-users@gluster.org
>http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
>___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster client can't connect to Gluster volume

2012-05-05 Thread Eric
Thanks, David:

That's fine. If it's just a version-mismatch I can accept that. I'm sure that 
the developers at Mageia are working on catching-up in Mageia, Version 2. I 
just figured that, if it were a version-mismatch, I'd see something to that 
effect in the servers' log files. That's all.


Eric Pretorious
Truckee, CA


>
> From: David Coulson 
>To: Eric  
>Cc: "gluster-users@gluster.org"  
>Sent: Saturday, May 5, 2012 10:46 AM
>Subject: Re: [Gluster-users] Gluster client can't connect to Gluster volume
> 
>
>There's probably not a whole lot of 'fixing' since the two versions are so 
>different.
>
>You can tell the package maintainer to make a 3.2.6 build. That'll
help.
>
>On 5/5/12 1:43 PM, Eric wrote: 
>Thanks, David:
>>
>>But I'd rather see if I can fix the problem with the Mageia
client (glusterfs-client-3.0.0-2.mga1):
>>  * This will allow me to file a bug report with the package maintainer 
>> (if necessary).
>>  * I already know that the systems that have Gluster 3.2.6 from source 
>> [i.e., the storage nodes] are able to mount the volume.
>>  * I'd rather keep my daily-driver (i.e., the host system) 100% Mageia.
>>Eric Pretorious
>>Truckee, CA
>>
>>
>>
>>>
>>> From: David Coulson 
>>>To: Eric  
>>>Cc: "gluster-users@gluster.org"  
>>>Sent: Saturday, May 5, 2012 9:16 AM
>>>Subject: Re: [Gluster-users] Gluster client can't connect to Gluster volume
>>> 
>>>
>>>That's way old documentation.
>>>
>>>Start by installing 3.2.6 on your client and see if
it works then. I don't think anyone expects 3.2 and
3.0 to work correctly.
>>>
>>>On 5/5/12 12:09 PM, Eric wrote: 
>>>Thanks, David:
>>>>
>>>>
>>>>Yes...
>>>>* iptables has been disabled on all three systems.
>>>>* SELinux is set to permissive on the two systems that employ it - the 
>>>> two CentOS nodes.
>>>>* Port #6996 is referenced in the Troubleshooting section of the 
>>>> Gluster User Guide.
>>>>FWIW: All of this except the SELinux question is already documented in  my 
>>>>post on the Mageia Forum.
>>>>
>>>>
>>>>Eric Pretorious
>>>>Truckee, CA
>>>>
>>>>
>>>>
>>>>>
>>>>> From: David Coulson 
>>>>>To: Eric  
>>>>>Cc: "gluster-users@gluster.org"  
>>>>>Sent: Saturday, May 5, 2012 5:44 AM
>>>>>Subject: Re: [Gluster-users] Gluster client can't connect to Gluster volume
>>>>> 
>>>>>
>>>>>Do you have any firewall rules enabled? I'd start by disabling iptables 
>>>>>(or at least setting everything to ACCEPT) and as someone else suggested 
>>>>>setting selinux to permissive/disabled.
>>>>>
>>>>>Why are your nodes and client using different versions of Gluster? Why not
>>>>>just use the 3.2.6 version for everything? Also, I'm not sure where port
>>>>>6996 comes from - Gluster uses 24007 for its core communications and ports
>>>>>above that for individual bricks.
>>>>>
>>>>>David
>>>>>
>>>>>On 5/5/12 12:27 AM, Eric wrote: 
>>>>>Hi, All:
>>>>>>
>>>>>>I've built a Gluster-based storage cluster on a pair of CentOS 5.7 (i386)
>>>>>>VM's. The nodes are using Gluster 3.2.6 (from source) and the host is using
>>>>>>Gluster 3.0.0 (from the Mageia package repositories):
>>>>>>
>>>>>>
>>>>>>[eric@node1 ~]$ sudo /usr/local/sbin/gluster --version
>>>>>>glusterfs 3.2.6 built on
May  3 2012 15:53:02
>>>>>>
>>>>>>[eric@localhost ~]$ rpm -qa | grep glusterfs
>>>>>>glusterfs-c

Re: [Gluster-users] Gluster client can't connect to Gluster volume

2012-05-05 Thread Eric
Thanks, David:

But I'd rather see if I can fix the problem with the Mageia client 
(glusterfs-client-3.0.0-2.mga1):
* This will allow me to file a bug report with the package maintainer 
(if necessary).
* I already know that the systems that have Gluster 3.2.6 from source 
[i.e., the storage nodes] are able to mount the volume.
* I'd rather keep my daily-driver (i.e., the host system) 100% Mageia.
Eric Pretorious
Truckee, CA



>
> From: David Coulson 
>To: Eric  
>Cc: "gluster-users@gluster.org"  
>Sent: Saturday, May 5, 2012 9:16 AM
>Subject: Re: [Gluster-users] Gluster client can't connect to Gluster volume
> 
>
>That's way old documentation.
>
>Start by installing 3.2.6 on your client and see if it works then. I
don't think anyone expects 3.2 and 3.0 to work correctly.
>
>On 5/5/12 12:09 PM, Eric wrote: 
>Thanks, David:
>>
>>
>>Yes...
>>  * iptables has been disabled on all three systems.
>>  * SELinux is set to permissive on the two systems that employ it - the 
>> two CentOS nodes.
>>  * Port #6996 is referenced in the Troubleshooting section of the 
>> Gluster User Guide.
>>FWIW: All of this except the SELinux question is already documented in  my 
>>post on the Mageia Forum.
>>
>>
>>Eric Pretorious
>>Truckee, CA
>>
>>
>>
>>>
>>> From: David Coulson 
>>>To: Eric  
>>>Cc: "gluster-users@gluster.org"  
>>>Sent: Saturday, May 5, 2012 5:44 AM
>>>Subject: Re: [Gluster-users] Gluster client can't connect to Gluster volume
>>> 
>>>
>>>Do you have any firewall rules enabled? I'd start by disabling iptables (or 
>>>at least setting everything to ACCEPT) and as someone else suggested setting 
>>>selinux to permissive/disabled.
>>>
>>>Why are your nodes and client using different versions of Gluster? Why not just
>>>use the 3.2.6 version for everything? Also, I'm not sure where port 6996 comes
>>>from - Gluster uses 24007 for its core communications and ports above that for
>>>individual bricks.
>>>
>>>David
>>>
>>>On 5/5/12 12:27 AM, Eric wrote: 
>>>Hi, All:
>>>>
>>>>I've built a Gluster-based storage cluster on
      a pair of CentOS 5.7 (i386) VM's. The nodes
  are using Gluster 3.2.6 (from source) and the
  host is using Gluster 3.0.0 (from the Mageia
  package repositories):
>>>>
>>>>
>>>>[eric@node1 ~]$ sudo /usr/local/sbin/gluster --version
>>>>glusterfs 3.2.6 built on May  3 2012
15:53:02
>>>>
>>>>[eric@localhost ~]$ rpm -qa | grep glusterfs
>>>>glusterfs-common-3.0.0-2.mga1
>>>>glusterfs-client-3.0.0-2.mga1
>>>>glusterfs-server-3.0.0-2.mga1
>>>>libglusterfs0-3.0.0-2.mga1
>>>>
>>>>
>>>>None of the systems (i.e., neither the two storage nodes nor the client) 
>>>>can connect to Port 6996 of the cluster (node1.example.com & 
>>>>node2.example.com) but the two storage nodes can mount the shared volume 
>>>>using the Gluster helper and/or NFS:
>>>>
>>>>
>>>>[eric@node1 ~]$ sudo /sbin/lsmod | grep fuse
>>>>
>>>>[eric@node1 ~]$ sudo /sbin/modprobe fuse
>>>>
>>>>[eric@node1 ~]$ sudo /sbin/lsmod | grep fuse
>>>>fuse                   49237  0 
>>>>
>>>>[eric@node1 ~]$ sudo mount -t glusterfs
node1:/mirror-1 /mnt
>>>>
>>>>[eric@node1 ~]$ sudo grep gluster /etc/mtab 
>>>>glusterfs#node1:/mirror-1 /mnt fuse
rw,allow_other,default_permissions,max_read=131072
0 0
>>>>
>>>>
>>>>...but the host system is only able to connect using NFS:
>>>>
>>>>
>>>>[eric@localhost ~]$ sudo glusterfs --debug -f /tmp/glusterfs.vol /mnt
>>>>[2012-05-04 19:09:09] D
[glusterfsd.c:424:_get_specfp] glusterfs:
loading volume file /tmp/glusterfs.vol
>>>>
>>>>Version      : glusterfs 3.0.0 built on Apr
10 2011 19:12:54
>>>>git

Re: [Gluster-users] Gluster client can't connect to Gluster volume

2012-05-05 Thread Eric
Thanks, Xavier:

SELinux is set to permissive on the two systems that have it.
Eric Pretorious

Truckee, CA



>
> From: Xavier Normand 
>To: Eric  
>Cc: "gluster-users@gluster.org"  
>Sent: Friday, May 4, 2012 9:57 PM
>Subject: Re: [Gluster-users] Gluster client can't connect to Gluster volume
> 
>
>Hi
>
>
>Do you have selinux enable?
>
>Envoyé de mon iPhone
>
>Le 2012-05-05 à 00:27, Eric  a écrit :
>
>
>Hi, All:
>>
>>I've built a Gluster-based storage cluster on a pair of CentOS 5.7 (i386) 
>>VM's. The nodes are using Gluster 3.2.6 (from source) and the host is using 
>>Gluster 3.0.0 (from the Mageia package repositories):
>>
>>
>>[eric@node1 ~]$ sudo /usr/local/sbin/gluster --version
>>glusterfs 3.2.6 built on May  3 2012 15:53:02
>>
>>[eric@localhost ~]$ rpm -qa | grep glusterfs
>>glusterfs-common-3.0.0-2.mga1
>>glusterfs-client-3.0.0-2.mga1
>>glusterfs-server-3.0.0-2.mga1
>>libglusterfs0-3.0.0-2.mga1
>>
>>
>>None of the systems (i.e., neither  the two storage nodes nor the client) can 
>>connect to Port 6996 of the cluster (node1.example.com & node2.example.com) 
>>but the two storage nodes can mount the shared volume using the Gluster 
>>helper and/or NFS:
>>
>>
>>[eric@node1 ~]$ sudo /sbin/lsmod | grep fuse
>>
>>[eric@node1 ~]$ sudo /sbin/modprobe fuse
>>
>>[eric@node1 ~]$ sudo /sbin/lsmod | grep fuse
>>fuse                   49237  0 
>>
>>[eric@node1 ~]$ sudo mount -t glusterfs node1:/mirror-1 /mnt
>>
>>[eric@node1 ~]$ sudo grep gluster /etc/mtab 
>>glusterfs#node1:/mirror-1 /mnt fuse 
>>rw,allow_other,default_permissions,max_read=131072 0 0
>>
>>
>>...but the host system is only able to connect using NFS:
>>
>>
>>[eric@localhost ~]$ sudo glusterfs --debug -f /tmp/glusterfs.vol /mnt
>>[2012-05-04 19:09:09] D [glusterfsd.c:424:_get_specfp] glusterfs: loading 
>>volume file /tmp/glusterfs.vol
>>
>>Version      : glusterfs 3.0.0 built on Apr 10 2011 19:12:54
>>git: 2.0.1-886-g8379edd
>>Starting Time: 2012-05-04 19:09:09
>>Command line : glusterfs --debug -f /tmp/glusterfs.vol /mnt 
>>PID          : 30159
>>System name  : Linux
>>Nodename     : localhost.localdomain
>>Kernel Release : 2.6.38.8-desktop586-10.mga
>>Hardware Identifier: i686
>>
>>Given volfile:
>>+--+
>>  1: volume mirror-1
>>  2:  type
 protocol/client
>>  3:  option transport-type tcp
>>  4:  option remote-host node1.example.com
>>  5:  option remote-subvolume mirror-1
>>  6: end-volume
>>+--+
>>[2012-05-04 19:09:09] D [glusterfsd.c:1335:main] glusterfs: running in pid 
>>30159
>>[2012-05-04 19:09:09] D [client-protocol.c:6581:init] mirror-1: defaulting 
>>frame-timeout to 30mins
>>[2012-05-04 19:09:09] D [client-protocol.c:6592:init] mirror-1: defaulting 
>>ping-timeout to 42
>>[2012-05-04
 19:09:09] D [transport.c:145:transport_load] transport: attempt to load
 file /usr/lib/glusterfs/3.0.0/transport/socket.so
>>[2012-05-04 
19:09:09] D [transport.c:145:transport_load] transport: attempt to load 
file /usr/lib/glusterfs/3.0.0/transport/socket.so
>>[2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1: got 
>>GF_EVENT_PARENT_UP, attempting connect on transport
>>[2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1: got 
>>GF_EVENT_PARENT_UP, attempting connect on transport
>>[2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1: got 
>>GF_EVENT_PARENT_UP, attempting connect on transport
>>[2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1: got 
>>GF_EVENT_PARENT_UP, attempting connect on transport
>>[2012-05-04 19:09:09] N [glusterfsd.c:1361:main] glusterfs: Successfully 
>>started
>>[2012-05-04 19:09:09] E [socket.c:760:socket_connect_finish] mirror-1: 
>>connection to  failed (Connection refused)
>>[2012-05-04
 19:09:09] D [fuse-bridge.c:3079:fuse_thread_proc] fuse:  
pthread_cond_timedout returned non zero value ret: 0 errno: 0
>>[2012-05-04
 19:09:09] N [fuse-bridge.c:2931:fuse_init] glusterfs-fuse: FUSE inited 
with protocol versions: glusterfs 7.13 kernel 7.16
>>[2012-05-04 19:09:09] E [socket.c:760:socket_connect_finish] mirror-1: 
>>connection to  failed (Connection refused)
>>
&

Re: [Gluster-users] Gluster client can't connect to Gluster volume

2012-05-05 Thread Eric
Thanks, David:

Yes...
* iptables has been disabled on all three systems.
* SELinux is set to permissive on the two systems that employ it - the 
two CentOS nodes.
* Port #6996 is referenced in the Troubleshooting section of the 
Gluster User Guide.
FWIW: All of this except the SELinux question is already documented in  my post 
on the Mageia Forum.

Eric Pretorious
Truckee, CA



>
> From: David Coulson 
>To: Eric  
>Cc: "gluster-users@gluster.org"  
>Sent: Saturday, May 5, 2012 5:44 AM
>Subject: Re: [Gluster-users] Gluster client can't connect to Gluster volume
> 
>
>Do you have any firewall rules enabled? I'd start by disabling iptables (or at 
>least setting everything to ACCEPT) and as someone else suggested setting 
>selinux to permissive/disabled.
>
>Why are your nodes and client using different versions of Gluster? Why not just
>use the 3.2.6 version for everything? Also, I'm not sure where port 6996 comes
>from - Gluster uses 24007 for its core communications and ports above that for
>individual bricks.
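
A sketch of the corresponding iptables rules, using the defaults mentioned above
plus the NFS ports that show up elsewhere in these threads (the exact brick range
depends on how many bricks the node hosts - check `gluster volume status`):

    iptables -A INPUT -p tcp --dport 24007:24047 -j ACCEPT   # glusterd + brick processes
    iptables -A INPUT -p tcp --dport 38465:38467 -j ACCEPT   # Gluster NFS server
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # portmapper (NFS mounts)
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    service iptables save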
>
>David
>
>On 5/5/12 12:27 AM, Eric wrote: 
>Hi, All:
>>
>>I've built a Gluster-based storage cluster on a pair of CentOS 5.7 (i386) VM's.
>>The nodes are using Gluster 3.2.6 (from source) and the host is using Gluster
>>3.0.0 (from the Mageia package repositories):
>>
>>
>>[eric@node1 ~]$ sudo /usr/local/sbin/gluster --version
>>glusterfs 3.2.6 built on May  3 2012 15:53:02
>>
>>[eric@localhost ~]$ rpm -qa | grep glusterfs
>>glusterfs-common-3.0.0-2.mga1
>>glusterfs-client-3.0.0-2.mga1
>>glusterfs-server-3.0.0-2.mga1
>>libglusterfs0-3.0.0-2.mga1
>>
>>
>>None of the systems (i.e., neither the two storage nodes nor the client) can 
>>connect to Port 6996 of the cluster (node1.example.com & node2.example.com) 
>>but the two storage nodes can mount the shared volume using the Gluster 
>>helper and/or NFS:
>>
>>
>>[eric@node1 ~]$ sudo /sbin/lsmod | grep fuse
>>
>>[eric@node1 ~]$ sudo /sbin/modprobe fuse
>>
>>[eric@node1 ~]$ sudo /sbin/lsmod | grep fuse
>>fuse                   49237  0 
>>
>>[eric@node1 ~]$ sudo mount -t glusterfs node1:/mirror-1 /mnt
>>
>>[eric@node1 ~]$ sudo grep gluster /etc/mtab 
>>glusterfs#node1:/mirror-1 /mnt fuse
rw,allow_other,default_permissions,max_read=131072 0 0
>>
>>
>>...but the host system is only able to connect using NFS:
>>
>>
>>[eric@localhost ~]$ sudo glusterfs --debug -f /tmp/glusterfs.vol /mnt
>>[2012-05-04 19:09:09] D [glusterfsd.c:424:_get_specfp]
glusterfs: loading volume file /tmp/glusterfs.vol
>>
>>Version      : glusterfs 3.0.0 built on Apr 10 2011 19:12:54
>>git: 2.0.1-886-g8379edd
>>Starting Time: 2012-05-04 19:09:09
>>Command line : glusterfs --debug -f /tmp/glusterfs.vol /mnt 
>>PID          : 30159
>>System name  : Linux
>>Nodename     : localhost.localdomain
>>Kernel Release : 2.6.38.8-desktop586-10.mga
>>Hardware Identifier: i686
>>
>>Given volfile:
>>+--+
>>  1: volume mirror-1
>>  2:  type protocol/client
>>  3:  option transport-type tcp
>>  4:  option remote-host node1.example.com
>>  5:  option remote-subvolume mirror-1
>>  6: end-volume
>>+--+
>>[2012-05-04 19:09:09] D [glusterfsd.c:1335:main] glusterfs:
running in pid 30159
>>[2012-05-04 19:09:09] D [client-protocol.c:6581:init]
mirror-1: defaulting frame-timeout to 30mins
>>[2012-05-04 19:09:09] D [client-protocol.c:6592:init]
mirror-1: defaulting ping-timeout to 42
>>[2012-05-04 19:09:09] D [transport.c:145:transport_load]
transport: attempt to load file
/usr/lib/glusterfs/3.0.0/transport/socket.so
>>[2012-05-04 19:09:09] D [transport.c:145:transport_load]
transport: attempt to load file
/usr/lib/glusterfs/3.0.0/transport/socket.so
>>[2012-05-04 19:09:09] D [client-protocol.c:7005:notify]
mirror-1: got GF_EVENT_PARENT_UP, attempting connect on
transport
>>[2012-05-04 19:09:09] D [client-protocol.c:7005:notify]
mirror-1: got GF_EVENT_PARENT_UP, attempting connect on
transport
>>[2012-05-04 19:09:09] D [client-protocol.c:7005:notify]
mirror-1: got GF_EVENT_PARENT_UP, attempting connect on
transport
>>[20

[Gluster-users] Gluster client can't connect to Gluster volume

2012-05-04 Thread Eric
Hi, All:

I've built a Gluster-based storage cluster on a pair of CentOS 5.7 (i386) VM's. 
The nodes are using Gluster 3.2.6 (from source) and the host is using Gluster 
3.0.0 (from the Mageia package repositories):

[eric@node1 ~]$ sudo /usr/local/sbin/gluster --version
glusterfs 3.2.6 built on May  3 2012 15:53:02

[eric@localhost ~]$ rpm -qa | grep glusterfs
glusterfs-common-3.0.0-2.mga1
glusterfs-client-3.0.0-2.mga1
glusterfs-server-3.0.0-2.mga1
libglusterfs0-3.0.0-2.mga1

None of the systems (i.e., neither  the two storage nodes nor the client) can 
connect to Port 6996 of the cluster (node1.example.com & node2.example.com) but 
the two storage nodes can mount the shared volume using the Gluster helper 
and/or NFS:

[eric@node1 ~]$ sudo /sbin/lsmod | grep fuse

[eric@node1 ~]$ sudo /sbin/modprobe fuse

[eric@node1 ~]$ sudo /sbin/lsmod | grep fuse
fuse                   49237  0 

[eric@node1 ~]$ sudo mount -t glusterfs node1:/mirror-1 /mnt

[eric@node1 ~]$ sudo grep gluster /etc/mtab 
glusterfs#node1:/mirror-1 /mnt fuse 
rw,allow_other,default_permissions,max_read=131072 0 0

...but the host system is only able to connect using NFS:

[eric@localhost ~]$ sudo glusterfs --debug -f /tmp/glusterfs.vol /mnt
[2012-05-04 19:09:09] D [glusterfsd.c:424:_get_specfp] glusterfs: loading 
volume file /tmp/glusterfs.vol

Version      : glusterfs 3.0.0 built on Apr 10 2011 19:12:54
git: 2.0.1-886-g8379edd
Starting Time: 2012-05-04 19:09:09
Command line : glusterfs --debug -f /tmp/glusterfs.vol /mnt 
PID          : 30159
System name  : Linux
Nodename     : localhost.localdomain
Kernel Release : 2.6.38.8-desktop586-10.mga
Hardware Identifier: i686

Given volfile:
+--+
  1: volume mirror-1
  2:  type protocol/client
  3:  option transport-type tcp
  4:  option remote-host node1.example.com
  5:  option remote-subvolume mirror-1
  6: end-volume
+--+
[2012-05-04 19:09:09] D [glusterfsd.c:1335:main] glusterfs: running in pid 30159
[2012-05-04 19:09:09] D [client-protocol.c:6581:init] mirror-1: defaulting 
frame-timeout to 30mins
[2012-05-04 19:09:09] D [client-protocol.c:6592:init] mirror-1: defaulting 
ping-timeout to 42
[2012-05-04 19:09:09] D [transport.c:145:transport_load] transport: attempt to 
load file /usr/lib/glusterfs/3.0.0/transport/socket.so
[2012-05-04 19:09:09] D [transport.c:145:transport_load] transport: attempt to 
load file /usr/lib/glusterfs/3.0.0/transport/socket.so
[2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1: got 
GF_EVENT_PARENT_UP, attempting connect on transport
[2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1: got 
GF_EVENT_PARENT_UP, attempting connect on transport
[2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1: got 
GF_EVENT_PARENT_UP, attempting connect on transport
[2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1: got 
GF_EVENT_PARENT_UP, attempting connect on transport
[2012-05-04 19:09:09] N [glusterfsd.c:1361:main] glusterfs: Successfully started
[2012-05-04 19:09:09] E [socket.c:760:socket_connect_finish] mirror-1: 
connection to  failed (Connection refused)
[2012-05-04 19:09:09] D [fuse-bridge.c:3079:fuse_thread_proc] fuse: 
pthread_cond_timedout returned non zero value ret: 0 errno: 0
[2012-05-04 19:09:09] N [fuse-bridge.c:2931:fuse_init] glusterfs-fuse: FUSE 
inited with protocol versions: glusterfs 7.13 kernel 7.16
[2012-05-04 19:09:09] E [socket.c:760:socket_connect_finish] mirror-1: 
connection to  failed (Connection refused)

I've read through the Troubleshooting section of the Gluster Administration 
Guide and the Gluster User Guide but can't seem to resolve the problem. (See my 
post on the Mageia Forum for all the troubleshooting details: 
https://forums.mageia.org/en/viewtopic.php?f=7&t=2358&p=17517)
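
One thing worth checking before anything else: which ports the Gluster daemons 
are actually listening on. GlusterFS 3.1 and later replaced the old 6996 
listener with glusterd on TCP 24007 plus per-brick ports from 24009 upward, so 
a 3.0-style client volfile pointed at 6996 will see "Connection refused" even 
when the 3.2.6 servers are healthy. A hedged sketch only -- the hostnames are 
the ones from this post, and the port numbers should be confirmed locally with 
netstat:

[eric@node1 ~]$ sudo netstat -tlnp | grep gluster        # ports glusterd/glusterfsd actually bound
[eric@localhost ~]$ telnet node1.example.com 24007       # glusterd port used by 3.1+ servers
[eric@localhost ~]$ telnet node1.example.com 6996        # old pre-3.1 default, for comparison

Even with the right port, a 3.0.0 client is unlikely to interoperate with 
3.2.6 servers, since the RPC protocol changed in 3.1; matching the client 
package to the server version (or mounting over NFS, which already works here) 
is the safer path.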

What might be causing this?

TIA,
Eric Pretorious
Truckee, CA


https://forums.mageia.org/en/viewtopic.php?f=7&t=2358&p=17517
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] issue with GlusterFS to store KVM guests

2011-07-29 Thread Eric
this worked, thank you!
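
For anyone landing on this thread later, the fix amounts to setting the disk 
cache mode when the image sits on a FUSE-mounted Gluster volume. A minimal 
sketch of the virt-install form, reusing the path and names from the original 
post quoted below (install-source options omitted for brevity) -- the cache= 
disk sub-option is the only intended change, and on virtinst builds that lack 
it the same attribute can be set on the disk <driver> element via virsh edit:

sudo virt-install --name testvm --ram 4096 --os-type='linux' \
  --os-variant=rhel5.4 \
  --disk path=/mnt/myreplicatestvolume/testvm.img,size=50,cache=writethrough \
  --network bridge:br0 --accelerate --vnc --vcpus=4

cache=writethrough (or cache=writeback) keeps qemu from opening the image with 
O_DIRECT, which is the usual source of the "Invalid argument" open error shown 
below.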

On Fri, Jul 29, 2011 at 3:47 PM, Michael  wrote:

> try to start the kvm disk image with cache=writeback or writethrough.
> if cache=none then you will get this error.
>
> Running kvm on gluster 3.3beta1 on debian x64 squeeze. so far so good.
> live-migration, failover works ok.
>
> On Sat, Jul 30, 2011 at 10:22 AM, Eric  wrote:
>
>> i'm having difficulty running KVM virtual machines off of a glusterFS
>> volume mounted using the glusterFS client. i am running centOS 6, 64-bit.
>>
>> i am using virt-install to create my images but encountering the following
>> error:
>>
>> qemu: could not open disk image /mnt/myreplicatestvolume/testvm.img:
>> Invalid argument
>> (see below for a more lengthy version of the error)
>>
>> i have found an example of someone else having this issue as well but no
>> solution was offered (
>> http://gluster.org/pipermail/gluster-users/2011-February/006731.html). i
>> also found the following FAQ entry that states that NFS should be used to
>> store VM images (
>> http://www.gluster.org/faq/index.php?sid=36929&lang=en&action=artikel&cat=3&id=5&artlang=en
>> )
>>
>> i would prefer to use the glusterFS client for the automatic fail-over
>> that it offers. is this possible?
>>
>> i am creating the VM as follows:
>>
>> sudo virt-install --name testvm --ram 4096 --os-type='linux'
>> --os-variant=rhel5.4 --disk path=/mnt/myreplicatestvolume/testvm.img,size=50
>> --network bridge:br0 --accelerate --vnc --mac=54:52:00:A1:B2:C3 --location
>> http://server/centos/5.4/os/x86_64/ -x "ks=http://server/kscfg/testvm.cfg"
>> --uuid=----0001 --vcpus=4
>>
>> full text of error:
>>
>> Starting install...
>> Retrieving file vmlinuz... | 3.7 MB 00:00 ...
>> Retrieving file initrd.img... |  14 MB 00:00 ...
>> ERROR    internal error process exited while connecting to monitor: char
>> device redirected to /dev/pts/5
>> qemu: could not open disk image /mnt/myreplicatestvolume/testvm.img:
>> Invalid argument
>>
>> Domain installation does not appear to have been
>>  successful.  If it was, you can restart your domain
>>  by running 'virsh start testvm'; otherwise, please
>>  restart your installation.
>> ERROR    internal error process exited while connecting to monitor: char
>> device redirected to /dev/pts/5
>> qemu: could not open disk image /mnt/myreplicatestvolume/testvm.img:
>> Invalid argument
>> Traceback (most recent call last):
>>   File "/usr/bin/virt-install", line 1054, in <module>
>> main()
>>   File "/usr/bin/virt-install", line 936, in main
>> start_time, guest.start_install)
>>   File "/usr/bin/virt-install", line 978, in do_install
>> dom = install_func(conscb, progresscb, wait=(not wait))
>>   File "/usr/lib/python2.6/site-packages/virtinst/Guest.py", line 973, in
>> start_install
>> return self._do_install(consolecb, meter, removeOld, wait)
>>   File "/usr/lib/python2.6/site-packages/virtinst/Guest.py", line 1038, in
>> _do_install
>> "install")
>>   File "/usr/lib/python2.6/site-packages/virtinst/Guest.py", line 1009, in
>> _create_guest
>> dom = self.conn.createLinux(start_xml, 0)
>>   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1270, in
>> createLinux
>> if ret is None:raise libvirtError('virDomainCreateLinux() failed',
>> conn=self)
>> libvirtError: internal error process exited while connecting to monitor:
>> char device redirected to /dev/pts/5
>> qemu: could not open disk image /mnt/myreplicatestvolume/testvm.img:
>> Invalid argument
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
>>
>
>
> --
> --
> Michael
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] issue with GlusterFS to store KVM guests

2011-07-29 Thread Eric
i'm having difficulty running KVM virtual machines off of a glusterFS volume
mounted using the glusterFS client. i am running centOS 6, 64-bit.

i am using virt-install to create my images but encountering the following
error:

qemu: could not open disk image /mnt/myreplicatestvolume/testvm.img: Invalid
argument
(see below for a more lengthy version of the error)
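
One quick way to confirm what the EINVAL is before changing anything: qemu 
opens the image with O_DIRECT when the disk uses cache=none, and a FUSE mount 
that cannot honour O_DIRECT fails that open with "Invalid argument". A hedged 
sketch of a direct-I/O test against the mount point used in this post (the 
file names are arbitrary):

# expected to fail with "Invalid argument" if the FUSE mount rejects O_DIRECT
dd if=/dev/zero of=/mnt/myreplicatestvolume/odirect-test bs=1M count=1 oflag=direct
# the same write without O_DIRECT should succeed
dd if=/dev/zero of=/mnt/myreplicatestvolume/buffered-test bs=1M count=1
rm -f /mnt/myreplicatestvolume/odirect-test /mnt/myreplicatestvolume/buffered-test

If the first dd fails while the second succeeds, switching the guest disk to 
cache=writethrough or cache=writeback (see the follow-up in this thread), or 
serving the images over NFS, works around it.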

i have found an example of someone else having this issue as well but no
solution was offered (
http://gluster.org/pipermail/gluster-users/2011-February/006731.html). i
also found the following FAQ entry that states that NFS should be used to
store VM images (
http://www.gluster.org/faq/index.php?sid=36929&lang=en&action=artikel&cat=3&id=5&artlang=en
)

i would prefer to use the glusterFS client for the automatic fail-over that
it offers. is this possible?

i am creating the VM as follows:

sudo virt-install --name testvm --ram 4096 --os-type='linux'
--os-variant=rhel5.4 --disk path=/mnt/myreplicatestvolume/testvm.img,size=50
--network bridge:br0 --accelerate --vnc --mac=54:52:00:A1:B2:C3 --location
http://server/centos/5.4/os/x86_64/ -x "ks=http://server/kscfg/testvm.cfg"
--uuid=----0001 --vcpus=4

full text of error:

Starting install...
Retrieving file vmlinuz... | 3.7 MB 00:00 ...
Retrieving file initrd.img... |  14 MB 00:00 ...
ERROR    internal error process exited while connecting to monitor: char
device redirected to /dev/pts/5
qemu: could not open disk image /mnt/myreplicatestvolume/testvm.img: Invalid
argument

Domain installation does not appear to have been
 successful.  If it was, you can restart your domain
 by running 'virsh start testvm'; otherwise, please
 restart your installation.
ERROR    internal error process exited while connecting to monitor: char
device redirected to /dev/pts/5
qemu: could not open disk image /mnt/myreplicatestvolume/testvm.img: Invalid
argument
Traceback (most recent call last):
  File "/usr/bin/virt-install", line 1054, in <module>
main()
  File "/usr/bin/virt-install", line 936, in main
start_time, guest.start_install)
  File "/usr/bin/virt-install", line 978, in do_install
dom = install_func(conscb, progresscb, wait=(not wait))
  File "/usr/lib/python2.6/site-packages/virtinst/Guest.py", line 973, in
start_install
return self._do_install(consolecb, meter, removeOld, wait)
  File "/usr/lib/python2.6/site-packages/virtinst/Guest.py", line 1038, in
_do_install
"install")
  File "/usr/lib/python2.6/site-packages/virtinst/Guest.py", line 1009, in
_create_guest
dom = self.conn.createLinux(start_xml, 0)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1270, in
createLinux
if ret is None:raise libvirtError('virDomainCreateLinux() failed',
conn=self)
libvirtError: internal error process exited while connecting to monitor:
char device redirected to /dev/pts/5
qemu: could not open disk image /mnt/myreplicatestvolume/testvm.img: Invalid
argument
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Disaster Recovery

2010-08-18 Thread Eric Alvarado
Greetings Gluster Community,

I am brand new to the list and am probably asking the same questions that were 
asked and answered before. I am looking for information specifically centered 
around disaster recovery with the Gluster system configured in a mirror format. 
The docs on the Gluster site do not seem to delve into recovering failed drives 
or nodes.
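
For a replicated ("mirror") volume, the broad shape of recovery in the 
3.0/3.1-era releases is: bring the failed drive or node back with an empty 
brick at the same path, mount the volume, and walk the tree so the replicate 
translator rebuilds each file from the surviving copy. A hedged sketch with 
placeholder server and volume names -- exact steps vary by release, and later 
versions add explicit heal and replace-brick commands:

# on any client, once the replacement brick is back in place and empty
mount -t glusterfs server1:/VOLNAME /mnt/VOLNAME
# touching every file triggers self-heal from the good replica
find /mnt/VOLNAME -print0 | xargs -0 stat > /dev/null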

Thanks in advance.

Cheers,
Eric Alvarado
Director of Academic Technologies
The George Washington University
(o) 202-994-3131
(e) alvar...@gwu.edu

Sent from my iPhone 4
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users