Re: [Gluster-users] GlusterFS VS DRBD + power failure ?

2014-10-27 Thread Daniel Müller
For sure, a power failure on both nodes can corrupt DRBD as well. A power
failure across a clustered datacenter is the worst-case single point of failure.


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 



From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On behalf of Tytus Rogalewski
Sent: Monday, 27 October 2014 21:35
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] GlusterFS VS DRBD + power failure ?

Hi guys,
I wanted to ask what happens in case of a power failure.
I have a 2-node Proxmox cluster with GlusterFS on sdb1 (XFS), mounted on each
node as localhost:/glusterstorage.
I am storing VMs on it as qcow2 (with an ext4 filesystem inside).
Live migration works, wow... everything works fine.
But tell me: will something bad happen when power fails across the whole
datacenter?
Will data be corrupted, and would the same thing happen if I were using DRBD?
DRBD doesn't give me as much flexibility (because I can't use qcow2 or store
files like ISOs or backups on DRBD), but GlusterFS does!
Anyway, yesterday I created GlusterFS on ext4, with a VM qcow2 (ext4 inside it),
and when I ran "reboot -f" (I assume this is the same as pulling the power
cord?), after the node came back online the VM data was corrupted and I had
many ext errors inside the VM.
Tell me: was that because I used ext4 as the brick filesystem on sdb1, or
would the same happen with XFS?
Is DRBD better protection in case of a power failure?

Anyway, a second question: suppose I have 2 nodes with GlusterFS,
node1 is changing file1.txt,
node2 is changing file2.txt,
then I disconnect the GlusterFS network link while data keeps changing on both
nodes.
After I reconnect GlusterFS, how will this go?
Will the newer file1 changed on node1 overwrite file1 on node2,
and the newer file2 changed on node2 overwrite file2 on node1?
Am I correct?

Thx for answer :)


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] FailOver

2014-10-27 Thread Punit Dambiwal
Hi Vijay,

Thanks. Is "replace-brick commit force" also possible through the Ovirt
Admin Portal, or do I need to use the command line to run it?

Also, I am looking at HA for glusterfs... let me explain
my architecture a bit more:

1. I have a 4-node glusterfs cluster, and the same 4 nodes I am using as Ovirt
HV nodes (storage + compute).
2. Now if I mount it in Ovirt... it will mount against one node's IP address,
and if this node goes down... all my VMs will pause...
3. I want to use HA or LB through CTDB or HAProxy... so that even if any node
goes down the VMs will not be affected...
4. I am ready to put in two additional nodes for HA/LB purposes.

Please suggest a good way to achieve this...
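
One possible approach, sketched with assumed hostnames (node1/node2/node3) and
an assumed volume name (myvol), is to pass backup volfile servers at mount time
(backupvolfile-server on older releases). The named server is only needed to
fetch the volume layout at mount time; the native client then talks to all
bricks directly, so losing the mount server does not take the volume down:

  mount -t glusterfs -o backup-volfile-servers=node2:node3 node1:/myvol /mnt/myvol

CTDB or keepalived floating a virtual IP is mainly needed for NFS/SMB access,
which lacks this built-in failover.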

On Mon, Oct 27, 2014 at 6:19 PM, Vijay Bellur  wrote:

> On 10/23/2014 01:35 PM, Punit Dambiwal wrote:
>
>> On Mon, Oct 13, 2014 at 11:54 AM, Punit Dambiwal <hypu...@gmail.com> wrote:
>>
>> Hi,
>>
>> I have one question regarding gluster failover... let me
>> explain my current architecture; I am using Ovirt with gluster...
>>
>> 1. One Ovirt Engine (Ovirt 3.4)
>> 2. 4 * Ovirt Node as well as Gluster storage node...with 12
>> brick in one node...(Gluster Version 3.5)
>> 3. All 4 node in distributed replicated structure with replica
>> level =2...
>> 4. I have one spare node with 12 brick for Failover purpose..
>>
>> Now there is two questions :-
>>
>> 1. If any brick fails... how can I fail over this brick... how
>> do I remove the failed brick and replace it with another one?
>> Do I need to replace the whole node, or can I replace the single
>> brick?
>>
>>
> Failure of a single brick can be addressed by performing "replace-brick
> commit force" to replace the failed brick with a new brick and then triggering
> self-heal to rebuild data on the replaced brick.
>
>  2. If a whole node with 12 bricks goes down and cannot come
>> up... how can I replace it with a new one... do I need to add
>> two nodes to maintain the replication level?
>>
>>
> You can add a replacement node to the cluster and use "replace-brick
> commit force" to adjust the volume topology. Self-healing will rebuild data
> on the new node. You might want to replace one/few bricks at a time to
> ensure that your servers do not get bogged down by self-healing.
>
> -Vijay
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Is RAID necessary/recommended?

2014-10-27 Thread Lindsay Mathieson
I have a 2-node Proxmox cluster running 16 VMs off a NAS NFS share
over 1GbE.

Looking at moving to local storage shared between the nodes using
Gluster replication, over a dedicated 1GbE link. 3TB WD Red drives on
each node.

I was going to set up RAID1 (2*3TB) on each node, but is that overkill?

Read performance would be improved, but the bottleneck for writes is
going to be the 1GbE link anyway.
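
For what it's worth, a minimal sketch of the mirror being considered (device
names are assumptions, not a tested recipe):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

The main benefit alongside Gluster replication is that a single failed disk
rebuilds locally at disk speed instead of re-healing the whole brick over the
1GbE link.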


Data is backed up online offsite nightly and DR backups are done
weekly to external drives, because as we all know, RAID is not a
backup :)

-- 
Lindsay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Recommended underlying filesystem

2014-10-27 Thread Lindsay Mathieson
On Mon, 27 Oct 2014 04:15:56 PM Nux! wrote:
> Hi,
> 
> XFS is the filesystem recommended by Red Hat, AFAIK.
> 
> Lucian

Thanks All.


-- 
Lindsay

signature.asc
Description: This is a digitally signed message part.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS VS DRBD + power failure ?

2014-10-27 Thread Tytus Rogalewski
Hi guys,
I wanted to ask what happens in case of a power failure.
I have a 2-node Proxmox cluster with GlusterFS on sdb1 (XFS), mounted on
each node as localhost:/glusterstorage.
I am storing VMs on it as qcow2 (with an ext4 filesystem inside).
Live migration works, wow... everything works fine.
But tell me: will something bad happen when power fails across the whole
datacenter?
Will data be corrupted, and would the same thing happen if I were using DRBD?
DRBD doesn't give me as much flexibility (because I can't use qcow2 or store
files like ISOs or backups on DRBD), but GlusterFS does!
Anyway, yesterday I created GlusterFS on ext4, with a VM qcow2 (ext4 inside
it), and when I ran "reboot -f" (I assume this is the same as pulling the
power cord?), after the node came back online the VM data was corrupted and I
had many ext errors inside the VM.
Tell me: was that because I used ext4 as the brick filesystem on sdb1, or
would the same happen with XFS?
Is DRBD better protection in case of a power failure?
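
A sketch of settings that may reduce (not eliminate) this risk, assuming the
volume is named glusterstorage as above: disable client-side write caching so
acknowledged writes are actually on the bricks:

  gluster volume set glusterstorage performance.write-behind off
  gluster volume set glusterstorage performance.flush-behind off

and, on the QEMU/Proxmox side, use a write-through cache mode for the virtual
disk (cache=writethrough) so guest flushes reach stable storage.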

Anyway, a second question: suppose I have 2 nodes with GlusterFS,
node1 is changing file1.txt,
node2 is changing file2.txt,
then I disconnect the GlusterFS network link while data keeps changing on
both nodes.
After I reconnect GlusterFS, how will this go?
Will the newer file1 changed on node1 overwrite file1 on node2,
and the newer file2 changed on node2 overwrite file2 on node1?
Am I correct?
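
For reference: files changed on only one side of the partition will self-heal
from that side after reconnect, but a file written on both sides becomes
split-brain, which Gluster will not resolve by newest-timestamp on its own.
The heal state can be inspected per volume (volume name assumed):

  gluster volume heal glusterstorage info
  gluster volume heal glusterstorage info split-brain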

Thx for answer :)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] feature request - improved auto re-connect?

2014-10-27 Thread Kingsley
Hi,

(using CentOS 6.5 / Gluster 3.6.0beta3).

I've been testing various failure scenarios and have managed to upset
gluster by changing which bricks are visible to the client.

I've created a volume gv0 with 2 replicas across 2 bricks,
test1:/data/brick/gv0 and test2:/data/brick/gv0.

I edited iptables on the brick servers test1 and test2 such that only
test2 was visible to client machine test5, and then made some changes.
Some hours later, I edited iptables on test1 and test2 so that now, only
test1 was visible to client machine test5. This seemed to upset it (see
transcript below).

Would it be possible for clients to cache a full list of brick servers
and then use that cached list to do $clever_things to maintain
connectivity / re-connect automatically?
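
In the meantime, listing extra volfile servers at mount time covers the
mount-time half of this (a sketch reusing this test setup's names); it will
not revive an already-wedged mount like the one below:

  mount -t glusterfs -o backup-volfile-servers=test2 test1:gv0 /mnt/gv0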

Transcript follows (this was just after I switched iptables round on the
bricks after the test5 client had been using just the one accessible
brick for a while):

(I was sitting in a subdirectory of /mnt/gv0 initially):
test5# ls -l
ls: cannot open directory .: Transport endpoint is not connected

test5# df
df: `/mnt/gv0-slave': Transport endpoint is not connected
df: `/mnt/gv0': Transport endpoint is not connected
Filesystem   1K-blocksUsed Available Use% Mounted on
/dev/mapper/VolGroup-lv_root
   5716804 1842760   3583640  34% /
tmpfs   508140   0508140   0% /dev/shm
/dev/xvda1  495844  121241349003  26% /boot
test1:gv1  5716736 2299008   3127424  43% /mnt/gv1

test5# mount -t glusterfs test1:gv0 /mnt/gv0
ERROR: Mount point does not exist.
Usage:  mount.glusterfs : -o  

Options:
man 8 mount.glusterfs

To display the version number of the mount helper:
mount.glusterfs --version

test5# cd /
test5# mount -t glusterfs test1:gv0 /mnt/gv0
ERROR: Mount point does not exist.
Usage:  mount.glusterfs : -o  

Options:
man 8 mount.glusterfs

To display the version number of the mount helper:
mount.glusterfs --version

test5# ls -l /mnt/
ls: cannot access /mnt/gv0: Transport endpoint is not connected
ls: cannot access /mnt/gv0-slave: Transport endpoint is not connected
total 4
d? ? ??   ?? gv0
d? ? ??   ?? gv0-slave
drwxr-xr-x 3 root root 4096 Oct 24 12:29 gv1

test5# mount
/dev/mapper/VolGroup-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/xvda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
none on /proc/xen type xenfs (rw)
test4:gv0-slave on /mnt/gv0-slave type fuse.glusterfs 
(rw,default_permissions,allow_other,max_read=131072)
test1:gv0 on /mnt/gv0 type fuse.glusterfs 
(rw,default_permissions,allow_other,max_read=131072)
test1:gv1 on /mnt/gv1 type fuse.glusterfs 
(rw,default_permissions,allow_other,max_read=131072)

test5# umount /mnt/gv0-slave

test5# mount
/dev/mapper/VolGroup-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/xvda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
none on /proc/xen type xenfs (rw)
test1:gv0 on /mnt/gv0 type fuse.glusterfs 
(rw,default_permissions,allow_other,max_read=131072)
test1:gv1 on /mnt/gv1 type fuse.glusterfs 
(rw,default_permissions,allow_other,max_read=131072)

test5# umount /mnt/gv0
test5# ls -l /mnt
total 12
drwxr-xr-x 2 root root 4096 Sep 23 15:55 gv0
drwxr-xr-x 2 root root 4096 Oct 13 15:09 gv0-slave
drwxr-xr-x 3 root root 4096 Oct 24 12:29 gv1

-- 
Cheers,
Kingsley.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Small files

2014-10-27 Thread Jeff Darcy
> To what extent is Gluster a good choice for the "many small files" scenario,
> as opposed to HDFS? Last I checked, HDFS would consume humongous memory
> resources if the cluster has many small files, given its architecture. There
> are some hackish solutions on top of HDFS for the case of many small files
> rather than huge files, but it would be nice to find a file system that
> matches that scenario well as is. So I wonder how Gluster would do when
> files are typically small.

We're not as bad as HDFS, but it's still not what I'd call a good
scenario for us.  While we have good space efficiency for small files,
and we don't have a single-metadata-server SPOF either, the price we pay
is a hit to our performance for creates (and renames).  There are
several efforts under way to improve this, but there's only so much we
can do when directory contents must be consistent across the volume
despite being spread across many bricks (or replica sets). More details
on those efforts are here:

http://www.gluster.org/community/documentation/index.php/Features/Feature_Smallfile_Perf
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Small files

2014-10-27 Thread Matan Safriel
Hi,

To what extent is Gluster a good choice for the "many small files"
scenario, as opposed to HDFS? Last I checked, HDFS would consume humongous
memory resources if the cluster has many small files, given its
architecture. There are some hackish solutions on top of HDFS for the case of
many small files rather than huge files, but it would be nice to find a
file system that matches that scenario well as is. So I wonder how
Gluster would do when files are typically small.

Thanks!
Matan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] gluster writes millions of lines: WRITE => -1 (Transport endpoint is not connected)

2014-10-27 Thread Raghavendra G
It seems there were ongoing write operations. We log on every error, so the
network disconnect produced these log entries.
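
Until the underlying disconnect is fixed, one stopgap is to cap log growth
with logrotate; a minimal sketch (the drop-in file name and thresholds are
assumptions, not shipped defaults):

  # /etc/logrotate.d/glusterfs-size-cap (hypothetical drop-in)
  /var/log/glusterfs/*.log {
      size 100M
      rotate 7
      compress
      missingok
      notifempty
  }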

On Mon, Oct 27, 2014 at 7:21 PM, Sergio Traldi 
wrote:

> Hi all,
> One server Redhat 6 with this rpms set:
>
> [ ~]# rpm -qa | grep gluster | sort
> glusterfs-3.5.2-1.el6.x86_64
> glusterfs-api-3.5.2-1.el6.x86_64
> glusterfs-cli-3.5.2-1.el6.x86_64
> glusterfs-fuse-3.5.2-1.el6.x86_64
> glusterfs-geo-replication-3.5.2-1.el6.x86_64
> glusterfs-libs-3.5.2-1.el6.x86_64
> glusterfs-server-3.5.2-1.el6.x86_64
>
> I have a gluster volume with 1 server and 1 brick:
>
> [ ~]# gluster volume info volume-nova-pp
> Volume Name: volume-nova-pp
> Type: Distribute
> Volume ID: b5ec289b-9a54-4df1-9c21-52ca556aeead
> Status: Started
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.61.100:/brick-nova-pp/mpathc
> Options Reconfigured:
> storage.owner-gid: 162
> storage.owner-uid: 162
>
> There are four clients attached to this volume with same O.S. and same
> fuse gluster rpms set:
> [ ~]# rpm -qa | grep gluster | sort
> glusterfs-3.5.0-2.el6.x86_64
> glusterfs-api-3.5.0-2.el6.x86_64
> glusterfs-fuse-3.5.0-2.el6.x86_64
> glusterfs-libs-3.5.0-2.el6.x86_6
>
> Last week, but it happens also two weeks ago, I found the disk almost full
> and I found the gluster logs /var/log/glusterfs/var-lib-nova-instances.log
> of 68GB:
> In the log there was the starting problem:
>
> [2014-10-10 07:29:43.730792] W [socket.c:522:__socket_rwv] 0-glusterfs:
> readv on 192.168.61.100:24007 failed (No data available)
> [2014-10-10 07:29:54.022608] E [socket.c:2161:socket_connect_finish]
> 0-glusterfs: connection to 192.168.61.100:24007 failed (Connection
> refused)
> [2014-10-10 07:30:05.271825] W [client-rpc-fops.c:866:client3_3_writev_cbk]
> 0-volume-nova-pp-client-0: remote operation failed: Input/output error
> [2014-10-10 07:30:08.783145] W [fuse-bridge.c:2201:fuse_writev_cbk]
> 0-glusterfs-fuse: 3661260: WRITE => -1 (Input/output error)
> [2014-10-10 07:30:08.783368] W [fuse-bridge.c:2201:fuse_writev_cbk]
> 0-glusterfs-fuse: 3661262: WRITE => -1 (Input/output error)
> [2014-10-10 07:30:08.806553] W [fuse-bridge.c:2201:fuse_writev_cbk]
> 0-glusterfs-fuse: 3661649: WRITE => -1 (Input/output error)
> [2014-10-10 07:30:08.844415] W [fuse-bridge.c:2201:fuse_writev_cbk]
> 0-glusterfs-fuse: 3662235: WRITE => -1 (Input/output error)
>
> and a lot of these lines:
>
> [2014-10-15 14:41:15.895105] W [fuse-bridge.c:2201:fuse_writev_cbk]
> 0-glusterfs-fuse: 951700230: WRITE => -1 (Transport endpoint is not
> connected)
> [2014-10-15 14:41:15.896205] W [fuse-bridge.c:2201:fuse_writev_cbk]
> 0-glusterfs-fuse: 951700232: WRITE => -1 (Transport endpoint is not
> connected)
>
> This second log line, with a different request number each time, was written
> every millisecond, so in about 1 minute 1GB was written to the O.S. disk.
>
> I searched for a solution but didn't find anybody with the same problem.
>
> I think there was a network problem, but why does gluster write millions of
> lines like this in the logs:
> [2014-10-15 14:41:15.895105] W [fuse-bridge.c:2201:fuse_writev_cbk]
> 0-glusterfs-fuse: 951700230: WRITE => -1 (Transport endpoint is not
> connected) ?
>
> Thanks in advance.
> Cheers
> Sergio
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Raghavendra G
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Recommended underlying filesystem

2014-10-27 Thread Nux!
Hi,

XFS is the filesystem recommended by Red Hat, AFAIK.

Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

- Original Message -
> From: "Lindsay Mathieson" 
> To: "gluster-users" 
> Sent: Monday, 27 October, 2014 12:52:01
> Subject: [Gluster-users] Recommended underlying filesystem

> Looking at setting up a two node replicated gluster filesystem. Base hard
> disks on each node are 2*2TB in RAID1. It will be used for serving VM Images.
> 
> Does the underlying filesystem particularly matter? EXT4? XFS?
> 
> 
> thanks,
> 
> --
> Lindsay
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Recommended underlying filesystem

2014-10-27 Thread James
On Mon, Oct 27, 2014 at 8:52 AM, Lindsay Mathieson
 wrote:
> Looking at setting up a two node replicated gluster filesystem. Base hard
> disks on each node are 2*2TB in RAID1. It will be used for serving VM Images.
>
> Does the underlying filesystem particularly matter? EXT4? XFS?
>
>
> thanks,


Something with xattrs. XFS is most tested/supported.
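
For reference, the commonly documented XFS brick format uses a 512-byte inode
size so Gluster's extended attributes fit inside the inode (device name
assumed):

  mkfs.xfs -i size=512 /dev/sdb1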
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] RHEL6.6 provides Gluster 3.6.0-28.2 ?

2014-10-27 Thread Kaleb S. KEITHLEY

On 10/27/2014 10:05 AM, Prasun Gera wrote:

Just wanted to check if this issue was reproduced by Red Hat. I don't see
any updates on Satellite yet that resolve this.


It's being addressed. I'm not involved with the side that will put
updates on Satellite, so I can't say when they'll appear.




On Fri, Oct 17, 2014 at 4:55 AM, Prasun Gera <prasun.g...@gmail.com> wrote:

Actually, that's not such a great idea and will likely cause issues
in the future. I wouldn't recommend it. I'm not using this in
production, and was just tinkering around.

On Fri, Oct 17, 2014 at 4:27 AM, Prasun Gera <prasun.g...@gmail.com> wrote:

I actually managed to remove all the conflicted packages
using rpm --erase --nodeps (in case it helps someone). This is
what worked for me:

rpm --erase --nodeps augeas-libs glusterfs glusterfs-api
glusterfs-fuse glusterfs-libs glusterfs-rdma libsmbclient samba
samba-client samba-common samba-winbind samba-winbind-clients
(This saves /etc/samba/smb.conf.rpmsave)
yum install augeas-libs glusterfs glusterfs-api glusterfs-fuse
glusterfs-libs glusterfs-rdma libsmbclient samba samba-client
samba-common samba-winbind samba-winbind-clients
cp /etc/samba/smb.conf /etc/samba/smb.conf.bkp
cp /etc/samba/smb.conf.rpmsave /etc/samba/smb.conf

This seems to be working all right, although I haven't tested
this much yet.


On Fri, Oct 17, 2014 at 4:21 AM, RAGHAVENDRA TALUR
<raghavendra.ta...@gmail.com> wrote:



On Fri, Oct 17, 2014 at 3:59 AM, Kaleb KEITHLEY
<kkeit...@redhat.com> wrote:

On 10/16/2014 06:25 PM, Prasun Gera wrote:

Thanks. Like I said, I'm not using the glusterfs
public/epel
repositories.


Oh. Sorry. No, I didn't see that.

Do you mean that I should add the public repos ?


Nope. If you weren't already using the public repos then
don't add them now. Sorry for any confusion.

I don't
have any packages from the public repo. So I thought
that my system
should be internally consistent since all the
packages that it has are
from RHN. It's the base OS channel that is causing
the problems.
Disabling the RHSS storage channel doesn't fix it.


I'll alert Red Hat's release engineering. Sounds like
they borked something.


Prasun,

The conflict is between different versions of samba and
glusterfs in rhs channel and rhel channel.
Temporary workaround for you should be
yum update --exclude="glusterfs*" --exclude="samba*"
--exclude="libsmb*"

Let us know if that works.

Raghavendra Talur



--

Kaleb

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

--

Kaleb
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Can GlusterFS be installed on SUSE 10 Enterprise Version

2014-10-27 Thread Kaleb S. KEITHLEY

On 10/27/2014 08:30 AM, xia...@neusoft.com wrote:

Hi, every one:

 Can GlusterFS be installed on SUSE 10 Enterprise Version ?

 System info is following:

 # cat /proc/version
 Linux version 2.6.16.60-0.21-smp (geeko@buildhost) (gcc version 4.1.2 
20070115 (SUSE Linux)) #1 SMP Tue May 6 12:41:02 UTC 2008
 # gcc -v
 gcc version 4.1.2 20070115 (SUSE Linux)



It builds on RHEL5, although things like python in RHEL5 are too old for 
Gluster, so you don't get geo-replication.


It'll probably build on SLES 10 SP4. It might take some work. Try it and 
see.
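
A rough sketch of such a source build, leaving out geo-replication to sidestep
the old Python; the configure flag is an assumption worth checking against
./configure --help in your tarball:

  tar xzf glusterfs-3.5.2.tar.gz && cd glusterfs-3.5.2
  ./configure --disable-georeplication
  make && make install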


--

Kaleb
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] RHEL6.6 provides Gluster 3.6.0-28.2 ?

2014-10-27 Thread Prasun Gera
Just wanted to check if this issue was reproduced by Red Hat. I don't see
any updates on Satellite yet that resolve this.

On Fri, Oct 17, 2014 at 4:55 AM, Prasun Gera  wrote:

> Actually, that's not such a great idea and will likely cause issues in the
> future. I wouldn't recommend it. I'm not using this in production, and was
> just tinkering around.
>
> On Fri, Oct 17, 2014 at 4:27 AM, Prasun Gera 
> wrote:
>
>> I actually managed to remove all the conflicted packages using rpm
>> --erase --nodeps (in case it helps someone). This is what worked for me:
>>
>> rpm --erase --nodeps augeas-libs glusterfs glusterfs-api glusterfs-fuse
>> glusterfs-libs glusterfs-rdma libsmbclient samba samba-client samba-common
>> samba-winbind samba-winbind-clients
>> (This saves /etc/samba/smb.conf.rpmsave)
>> yum install augeas-libs glusterfs glusterfs-api glusterfs-fuse
>> glusterfs-libs glusterfs-rdma libsmbclient samba samba-client samba-common
>> samba-winbind samba-winbind-clients
>> cp /etc/samba/smb.conf /etc/samba/smb.conf.bkp
>> cp /etc/samba/smb.conf.rpmsave /etc/samba/smb.conf
>>
>> This seems to be working all right, although I haven't tested this much
>> yet.
>>
>>
>> On Fri, Oct 17, 2014 at 4:21 AM, RAGHAVENDRA TALUR <
>> raghavendra.ta...@gmail.com> wrote:
>>
>>>
>>>
>>> On Fri, Oct 17, 2014 at 3:59 AM, Kaleb KEITHLEY 
>>> wrote:
>>>
 On 10/16/2014 06:25 PM, Prasun Gera wrote:

> Thanks. Like I said, I'm not using the glusterfs public/epel
> repositories.
>

 Oh. Sorry. No, I didn't see that.

  Do you mean that I should add the public repos ?
>

 Nope. If you weren't already using the public repos then don't add them
 now. Sorry for any confusion.

  I don't
> have any packages from the public repo. So I thought that my system
> should be internally consistent since all the packages that it has are
> from RHN. It's the base OS channel that is causing the problems.
> Disabling the RHSS storage channel doesn't fix it.
>

 I'll alert Red Hat's release engineering. Sounds like they borked
 something.
>>>
>>>
>>> Prasun,
>>>
>>> The conflict is between different versions of samba and glusterfs in rhs
>>> channel and rhel channel.
>>> Temporary workaround for you should be
>>> yum update --exclude="glusterfs*" --exclude="samba*" --exclude="libsmb*"
>>>
>>> Let us know if that works.
>>>
>>> Raghavendra Talur
>>>
>>>
>>>


 --

 Kaleb

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

>>>
>>>
>>>
>>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] gluster writes millions of lines: WRITE => -1 (Transport endpoint is not connected)

2014-10-27 Thread Sergio Traldi

Hi all,
One server Redhat 6 with this rpms set:

[ ~]# rpm -qa | grep gluster | sort
glusterfs-3.5.2-1.el6.x86_64
glusterfs-api-3.5.2-1.el6.x86_64
glusterfs-cli-3.5.2-1.el6.x86_64
glusterfs-fuse-3.5.2-1.el6.x86_64
glusterfs-geo-replication-3.5.2-1.el6.x86_64
glusterfs-libs-3.5.2-1.el6.x86_64
glusterfs-server-3.5.2-1.el6.x86_64

I have a gluster volume with 1 server and 1 brick:

[ ~]# gluster volume info volume-nova-pp
Volume Name: volume-nova-pp
Type: Distribute
Volume ID: b5ec289b-9a54-4df1-9c21-52ca556aeead
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.61.100:/brick-nova-pp/mpathc
Options Reconfigured:
storage.owner-gid: 162
storage.owner-uid: 162

There are four clients attached to this volume with same O.S. and same 
fuse gluster rpms set:

[ ~]# rpm -qa | grep gluster | sort
glusterfs-3.5.0-2.el6.x86_64
glusterfs-api-3.5.0-2.el6.x86_64
glusterfs-fuse-3.5.0-2.el6.x86_64
glusterfs-libs-3.5.0-2.el6.x86_6

Last week, but it happens also two weeks ago, I found the disk almost 
full and I found the gluster logs 
/var/log/glusterfs/var-lib-nova-instances.log of 68GB:

In the log there was the starting problem:

[2014-10-10 07:29:43.730792] W [socket.c:522:__socket_rwv] 0-glusterfs: 
readv on 192.168.61.100:24007 failed (No data available)
[2014-10-10 07:29:54.022608] E [socket.c:2161:socket_connect_finish] 
0-glusterfs: connection to 192.168.61.100:24007 failed (Connection refused)
[2014-10-10 07:30:05.271825] W 
[client-rpc-fops.c:866:client3_3_writev_cbk] 0-volume-nova-pp-client-0: 
remote operation failed: Input/output error
[2014-10-10 07:30:08.783145] W [fuse-bridge.c:2201:fuse_writev_cbk] 
0-glusterfs-fuse: 3661260: WRITE => -1 (Input/output error)
[2014-10-10 07:30:08.783368] W [fuse-bridge.c:2201:fuse_writev_cbk] 
0-glusterfs-fuse: 3661262: WRITE => -1 (Input/output error)
[2014-10-10 07:30:08.806553] W [fuse-bridge.c:2201:fuse_writev_cbk] 
0-glusterfs-fuse: 3661649: WRITE => -1 (Input/output error)
[2014-10-10 07:30:08.844415] W [fuse-bridge.c:2201:fuse_writev_cbk] 
0-glusterfs-fuse: 3662235: WRITE => -1 (Input/output error)


and a lot of these lines:

[2014-10-15 14:41:15.895105] W [fuse-bridge.c:2201:fuse_writev_cbk] 
0-glusterfs-fuse: 951700230: WRITE => -1 (Transport endpoint is not 
connected)
[2014-10-15 14:41:15.896205] W [fuse-bridge.c:2201:fuse_writev_cbk] 
0-glusterfs-fuse: 951700232: WRITE => -1 (Transport endpoint is not 
connected)


This second log line, with a different request number each time, was written
every millisecond, so in about 1 minute 1GB was written to the O.S. disk.


I searched for a solution but didn't find anybody with the same problem.

I think there was a network problem, but why does gluster write millions of
lines like this in the logs:
[2014-10-15 14:41:15.895105] W [fuse-bridge.c:2201:fuse_writev_cbk] 
0-glusterfs-fuse: 951700230: WRITE => -1 (Transport endpoint is not 
connected) ?


Thanks in advance.
Cheers
Sergio
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Re: Can GlusterFS be installed on SUSE 10 Enterprise Version

2014-10-27 Thread xia...@neusoft.com
Full System Info is as following:

cat /proc/version
Linux version 2.6.16.60-0.85.1-smp (geeko@buildhost) (gcc version 4.1.2 
20070115 (SUSE Linux)) #1 SMP Thu Mar 17 11:45:06 UTC 2011

uname -a
Linux WH_PL3S_SJTB4 2.6.16.60-0.85.1-smp #1 SMP Thu Mar 17 11:45:06 UTC 2011 
x86_64 x86_64 x86_64 GNU/Linux
uname -r
2.6.16.60-0.85.1-smp

lsb_release -a
LSB Version:
core-2.0-noarch:core-3.0-noarch:core-2.0-x86_64:core-3.0-x86_64:desktop-3.1-amd64:desktop-3.1-noarch:graphics-2.0-amd64:graphics-2.0-noarch:graphics-3.1-amd64:graphics-3.1-noarch
Distributor ID: SUSE LINUX
Description:SUSE Linux Enterprise Server 10 (x86_64)
Release:10
Codename:   n/a



From: xia...@neusoft.com
Sent: 27 October 2014 20:30
To: gluster-users
Subject: Can GlusterFS be installed on SUSE 10 Enterprise Version

Hi, every one:

Can GlusterFS be installed on SUSE 10 Enterprise Version ?

System info is following:

# cat /proc/version
Linux version 2.6.16.60-0.21-smp (geeko@buildhost) (gcc version 4.1.2 
20070115 (SUSE Linux)) #1 SMP Tue May 6 12:41:02 UTC 2008
# gcc -v
gcc version 4.1.2 20070115 (SUSE Linux)

Thanks a lot !
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Recommended underlying filesystem

2014-10-27 Thread Lindsay Mathieson
Looking at setting up a two node replicated gluster filesystem. Base hard 
disks on each node are 2*2TB in RAID1. It will be used for serving VM Images.

Does the underlying filesystem particularly matter? EXT4? XFS?


thanks,

-- 
Lindsay

signature.asc
Description: This is a digitally signed message part.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Gluster on SPARC / Solaris

2014-10-27 Thread Gene Liverman
I have a SPARC server that I'd like to utilize Gluster on and was wondering
if there is any support for that architecture?  I am game to run Linux or
Solaris or whatever on the box. Thanks!

--
*Gene Liverman*
Systems Administrator
Information Technology Services
University of West Georgia
glive...@westga.edu

ITS: Making Technology Work for You!



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Can GlusterFS be installed on SUSE 10 Enterprise Version

2014-10-27 Thread xia...@neusoft.com
Hi, every one:

Can GlusterFS be installed on SUSE 10 Enterprise Version ? 

System info is following:

# cat /proc/version 
Linux version 2.6.16.60-0.21-smp (geeko@buildhost) (gcc version 4.1.2 
20070115 (SUSE Linux)) #1 SMP Tue May 6 12:41:02 UTC 2008
# gcc -v
gcc version 4.1.2 20070115 (SUSE Linux)

Thanks a lot !

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] FailOver

2014-10-27 Thread Vijay Bellur

On 10/23/2014 01:35 PM, Punit Dambiwal wrote:

On Mon, Oct 13, 2014 at 11:54 AM, Punit Dambiwal <hypu...@gmail.com> wrote:

Hi,

I have one question regarding gluster failover... let me
explain my current architecture; I am using Ovirt with gluster...

1. One Ovirt Engine (Ovirt 3.4)
2. 4 * Ovirt Node as well as Gluster storage node...with 12
brick in one node...(Gluster Version 3.5)
3. All 4 node in distributed replicated structure with replica
level =2...
4. I have one spare node with 12 brick for Failover purpose..

Now there is two questions :-

1. If any brick fails... how can I fail over this brick... how
do I remove the failed brick and replace it with another one?
Do I need to replace the whole node, or can I replace the single
brick?



Failure of a single brick can be addressed by performing "replace-brick
commit force" to replace the failed brick with a new brick and then
triggering self-heal to rebuild data on the replaced brick.
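
A concrete sketch of that sequence (volume name and brick paths assumed):

  gluster volume replace-brick myvol node1:/bricks/b1 node1:/bricks/b1-new commit force
  gluster volume heal myvol full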



2. If a whole node with 12 bricks goes down and cannot come
up... how can I replace it with a new one... do I need to add
two nodes to maintain the replication level?



You can add a replacement node to the cluster and use "replace-brick 
commit force" to adjust the volume topology. Self-healing will rebuild 
data on the new node. You might want to replace one/few bricks at a time 
to ensure that your servers do not get bogged down by self-healing.


-Vijay

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Deduplication

2014-10-27 Thread Vijay Bellur

[Adding gluster-devel]

On 10/22/2014 06:50 PM, Michael Lessard wrote:

Hi list,

Deduplication is mentioned on the Gluster project page :
http://gluster.org/community/documentation/index.php/Projects

I'm wondering what the priority is for this feature to be added to
Gluster? Is it already on the roadmap?
As most of the NAS vendors out there offer this feature, it would be a
nice addition to Gluster.



Deduplication is a feature which we intend to get into a future release. 
Deduplication in GlusterFS can possibly build on the work being done for 
bitrot now [1].


If anybody is interested in taking up this feature for GlusterFS 3.7, we 
can definitely offer assistance for implementing this feature.


Thanks,
Vijay

[1] http://www.gluster.org/community/documentation/index.php/Features/BitRot

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users