Re: [Gluster-users] Split-brain seen with [0 0] pending matrix and io-cache page errors

2014-10-17 Thread Pranith Kumar Karampuri

hi,
  Could you see if the size of the file mismatches?
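
For example, comparing what each brick reports directly (the brick paths
and the file name below are placeholders):

  stat -c '%s %n' /<brick-on-replica-1>/SECLOG/20140908.d/<filename>
  stat -c '%s %n' /<brick-on-replica-2>/SECLOG/20140908.d/<filename>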

Pranith

On 10/18/2014 04:20 AM, Anirban Ghoshal wrote:

Hi everyone,

I have this really confusing split-brain here that's bothering me. I 
am running glusterfs 3.4.2 over Linux 2.6.34. I have a replica 2 
volume 'testvol'. It seems I cannot read/stat/edit the file in 
question, and `gluster volume heal testvol info split-brain` shows 
nothing. Here are the logs from the FUSE mount for the volume:


[2014-09-29 07:53:02.867111] W [fuse-bridge.c:1172:fuse_err_cbk] 
0-glusterfs-fuse: 4560969: FLUSH() ERR => -1 (Input/output error)
[2014-09-29 07:54:16.007799] W [page.c:991:__ioc_page_error] 
0-testvol-io-cache: page error for page = 0x7fd5c8529d20 & waitq = 
0x7fd5c8067d40
[2014-09-29 07:54:16.007854] W [fuse-bridge.c:2089:fuse_readv_cbk] 
0-glusterfs-fuse: 4561103: READ => -1 (Input/output error)
[2014-09-29 07:54:16.008018] W [page.c:991:__ioc_page_error] 
0-testvol-io-cache: page error for page = 0x7fd5c8607ee0 & waitq = 
0x7fd5c8067d40
[2014-09-29 07:54:16.008056] W [fuse-bridge.c:2089:fuse_readv_cbk] 
0-glusterfs-fuse: 4561104: READ => -1 (Input/output error)
[2014-09-29 07:54:16.008233] W [page.c:991:__ioc_page_error] 
0-testvol-io-cache: page error for page = 0x7fd5c8066f30 & waitq = 
0x7fd5c8067d40
[2014-09-29 07:54:16.008269] W [fuse-bridge.c:2089:fuse_readv_cbk] 
0-glusterfs-fuse: 4561105: READ => -1 (Input/output error)
[2014-09-29 07:54:16.008800] W [page.c:991:__ioc_page_error] 
0-testvol-io-cache: page error for page = 0x7fd5c860bcf0 & waitq = 
0x7fd5c863b1f0
[2014-09-29 07:54:16.008839] W [fuse-bridge.c:2089:fuse_readv_cbk] 
0-glusterfs-fuse: 4561107: READ => -1 (Input/output error)
[2014-09-29 07:54:16.009365] W [page.c:991:__ioc_page_error] 
0-testvol-io-cache: page error for page = 0x7fd5c85fd120 & waitq = 
0x7fd5c8067d40
[2014-09-29 07:54:16.009413] W [fuse-bridge.c:2089:fuse_readv_cbk] 
0-glusterfs-fuse: 4561109: READ => -1 (Input/output error)
[2014-09-29 07:54:16.040549] W [afr-open.c:213:afr_open] 
0-testvol-replicate-0: failed to open as split brain seen, returning EIO
[2014-09-29 07:54:16.040594] W [fuse-bridge.c:915:fuse_fd_cbk] 
0-glusterfs-fuse: 4561142: OPEN() 
/SECLOG/20140908.d/SECLOG_00427425_.log => 
-1 (Input/output error)


Could somebody please give me a clue about where to begin? I checked 
the xattrs on 
/SECLOG/20140908.d/SECLOG_00427425_.log and 
it seems the changelogs are [0, 0] on both replicas, and the gfids match.


Thank you very much for any help on this.
Anirban





___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Split-brain seen with [0 0] pending matrix and io-cache page errors

2014-10-17 Thread Anirban Ghoshal
Hi everyone,

I have this really confusing split-brain here that's bothering me. I am running 
glusterfs 3.4.2 over Linux 2.6.34. I have a replica 2 volume 'testvol'. It seems 
I cannot read/stat/edit the file in question, and `gluster volume heal 
testvol info split-brain` shows nothing. Here are the logs from the FUSE mount 
for the volume:

[2014-09-29 07:53:02.867111] W [fuse-bridge.c:1172:fuse_err_cbk] 
0-glusterfs-fuse: 4560969: FLUSH() ERR => -1 (Input/output error) 
[2014-09-29 07:54:16.007799] W [page.c:991:__ioc_page_error] 
0-testvol-io-cache: page error for page = 0x7fd5c8529d20 & waitq = 
0x7fd5c8067d40 
[2014-09-29 07:54:16.007854] W [fuse-bridge.c:2089:fuse_readv_cbk] 
0-glusterfs-fuse: 4561103: READ => -1 (Input/output error) 
[2014-09-29 07:54:16.008018] W [page.c:991:__ioc_page_error] 
0-testvol-io-cache: page error for page = 0x7fd5c8607ee0 & waitq = 
0x7fd5c8067d40 
[2014-09-29 07:54:16.008056] W [fuse-bridge.c:2089:fuse_readv_cbk] 
0-glusterfs-fuse: 4561104: READ => -1 (Input/output error) 
[2014-09-29 07:54:16.008233] W [page.c:991:__ioc_page_error] 
0-testvol-io-cache: page error for page = 0x7fd5c8066f30 & waitq = 
0x7fd5c8067d40 
[2014-09-29 07:54:16.008269] W [fuse-bridge.c:2089:fuse_readv_cbk] 
0-glusterfs-fuse: 4561105: READ => -1 (Input/output error) 
[2014-09-29 07:54:16.008800] W [page.c:991:__ioc_page_error] 
0-testvol-io-cache: page error for page = 0x7fd5c860bcf0 & waitq = 
0x7fd5c863b1f0 
[2014-09-29 07:54:16.008839] W [fuse-bridge.c:2089:fuse_readv_cbk] 
0-glusterfs-fuse: 4561107: READ => -1 (Input/output error) 
[2014-09-29 07:54:16.009365] W [page.c:991:__ioc_page_error] 
0-testvol-io-cache: page error for page = 0x7fd5c85fd120 & waitq = 
0x7fd5c8067d40 
[2014-09-29 07:54:16.009413] W [fuse-bridge.c:2089:fuse_readv_cbk] 
0-glusterfs-fuse: 4561109: READ => -1 (Input/output error) 
[2014-09-29 07:54:16.040549] W [afr-open.c:213:afr_open] 0-testvol-replicate-0: 
failed to open as split brain seen, returning EIO 
[2014-09-29 07:54:16.040594] W [fuse-bridge.c:915:fuse_fd_cbk] 
0-glusterfs-fuse: 4561142: OPEN() 
/SECLOG/20140908.d/SECLOG_00427425_.log => -1 
(Input/output error)


Could somebody please give me a clue about where to begin? I checked the xattrs 
on /SECLOG/20140908.d/SECLOG_00427425_.log and 
it seems the changelogs are [0, 0] on both replicas, and the gfids match.
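
For reference, such a check looks roughly like this on each brick (the
brick path and file name are placeholders; on a replica 2 volume named
testvol the changelog attributes are typically trusted.afr.testvol-client-0
and trusted.afr.testvol-client-1):

  getfattr -d -m . -e hex /<brick-path>/SECLOG/20140908.d/<filename>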

Thank you very much for any help on this.
Anirban
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] NFS not start on localhost

2014-10-17 Thread Anirban Ghoshal
It happens to me sometimes. Try `tail -n 20 /var/log/glusterfs/nfs.log`. You 
will probably find something there that will help your cause. In general, if you 
just wish to start the thing up without going into the why of it, try `gluster 
volume set engine nfs.disable on` followed by `gluster volume set engine 
nfs.disable off`. It does the trick quite often for me because it is a polite 
way to ask mgmt/glusterd to try and respawn the NFS server process if need be. 
But keep in mind that this will cause an (albeit small) service interruption for 
all clients accessing the volume 'engine' over NFS.
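
A minimal sketch of that sequence for volume 'engine' (the final status
check is just to confirm the NFS server has come back with a PID):

  tail -n 20 /var/log/glusterfs/nfs.log
  gluster volume set engine nfs.disable on
  gluster volume set engine nfs.disable off
  gluster volume status engine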

Thanks, 
Anirban


On Saturday, 18 October 2014 1:03 AM, Demeter Tibor  wrote:
 




Hi,

I have set up glusterfs with NFS support.

I don't know why, but after a reboot NFS does not listen on localhost, only 
on gs01.


[root@node0 ~]# gluster volume info engine

Volume Name: engine
Type: Replicate
Volume ID: 2ea009bf-c740-492e-956d-e1bca76a0bd3
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gs00.itsmart.cloud:/gluster/engine0
Brick2: gs01.itsmart.cloud:/gluster/engine1
Options Reconfigured:
storage.owner-uid: 36
storage.owner-gid: 36
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
auth.allow: *
nfs.disable: off

[root@node0 ~]# gluster volume status engine
Status of volume: engine
Gluster process                            Port   Online  Pid
--------------------------------------------------------------
Brick gs00.itsmart.cloud:/gluster/engine0  50158  Y       3250
Brick gs01.itsmart.cloud:/gluster/engine1  50158  Y       5518
NFS Server on localhost                    N/A    N       N/A
Self-heal Daemon on localhost              N/A    Y       3261
NFS Server on gs01.itsmart.cloud           2049   Y       5216
Self-heal Daemon on gs01.itsmart.cloud     N/A    Y       5223



Can anybody help me?

Thanks in advance.

Tibor
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] NFS not start on localhost

2014-10-17 Thread Niels de Vos
On Fri, Oct 17, 2014 at 09:33:06PM +0200, Demeter Tibor wrote:
> 
> Hi, 
> 
> I have make a glusterfs with nfs support. 
> 
> I don't know why, but after a reboot the nfs does not listen on localhost, 
> only on gs01. 

You should be able to find some hints in /var/log/glusterfs/nfs.log.

HTH,
Niels

> 
> 
> 
> 
> [root@node0 ~]# gluster volume info engine 
> 
> Volume Name: engine 
> Type: Replicate 
> Volume ID: 2ea009bf-c740-492e-956d-e1bca76a0bd3 
> Status: Started 
> Number of Bricks: 1 x 2 = 2 
> Transport-type: tcp 
> Bricks: 
> Brick1: gs00.itsmart.cloud:/gluster/engine0 
> Brick2: gs01.itsmart.cloud:/gluster/engine1 
> Options Reconfigured: 
> storage.owner-uid: 36 
> storage.owner-gid: 36 
> performance.quick-read: off 
> performance.read-ahead: off 
> performance.io-cache: off 
> performance.stat-prefetch: off 
> cluster.eager-lock: enable 
> network.remote-dio: enable 
> cluster.quorum-type: auto 
> cluster.server-quorum-type: server 
> auth.allow: * 
> nfs.disable: off 
> 
> 
> [root@node0 ~]# gluster volume status engine 
> Status of volume: engine 
> Gluster process Port Online Pid 
> --
>  
> Brick gs00.itsmart.cloud:/gluster/engine0 50158 Y 3250 
> Brick gs01.itsmart.cloud:/gluster/engine1 50158 Y 5518 
> NFS Server on localhost N/A N N/A 
> Self-heal Daemon on localhost N/A Y 3261 
> NFS Server on gs01.itsmart.cloud 2049 Y 5216 
> Self-heal Daemon on gs01.itsmart.cloud N/A Y 5223 
> 
> 
> 
> 
> 
> 
> Does anybody help me? 
> 
> 
> Thanks in advance. 
> 
> 
> 
> 
> Tibor 

> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] geo-replication breaks on CentOS 6.5 + gluster 3.6.0 beta3

2014-10-17 Thread M S Vishwanath Bhat

Hi,

Right now, distributed geo-rep has a bunch of known issues with deletes 
and renames. Part of the issue was solved with a patch sent upstream 
recently, but that still doesn't solve the complete issue.

So, long story short, dist-geo-rep still has issues with short-lived 
renames where the renamed file is hashed to a different subvolume 
(brick). If the renamed file is hashed to the same brick, then the issue 
should not be seen (hopefully).


Using volume set, we can force the renamed file to be hashed to the same 
brick: gluster volume set <VOLNAME> cluster.extra-hash-regex '<regex>'


For example, if you open a file in vi, it will rename the file to 
filename.txt~, so the regex should be

gluster volume set VOLNAME cluster.extra-hash-regex '^(.+)~$'

But for this to work, the format of the files created by your 
application has to be identified. Does your application create files in 
an identifiable format which can be specified in a regex? Is this a 
possibility?
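
As a purely hypothetical illustration, if your application wrote files as 
name.part and then renamed them to name, the option would look like

  gluster volume set VOLNAME cluster.extra-hash-regex '^(.+)\.part$'

so that name.part hashes on the captured "name" and the renamed file lands 
on the same brick.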



Best Regards,
Vishwanath

On 15/10/14 15:41, Kingsley wrote:

I have added a comment to that bug report (a paste of my original
email).

Cheers,
Kingsley.

On Tue, 2014-10-14 at 22:10 +0100, James Payne wrote:

Just adding that I have verified this as well with the 3.6 beta; I added a
log to the ticket regarding this.

https://bugzilla.redhat.com/show_bug.cgi?id=1141379

Please feel free to add to the bug report; I think we are seeing the same
issue. It isn't present in the 3.4 series, which is the one I'm testing
currently (no distributed geo-rep though).

Regards
James

-Original Message-
From: Kingsley [mailto:glus...@gluster.dogwind.com]
Sent: 13 October 2014 16:51
To: gluster-users@gluster.org
Subject: [Gluster-users] geo-replication breaks on CentOS 6.5 + gluster
3.6.0 beta3

Hi,

I have a small script to simulate file activity for an application we have.
It breaks geo-replication within about 15 - 20 seconds when I try it.

This is on a small Gluster test environment running in some VMs running
CentOS 6.5 and using gluster 3.6.0 beta3. I have 6 VMs - test1, test2,
test3, test4, test5 and test6. test1, test2, test3 and test4 are gluster
servers while test5 and test6 are the clients. test3 is actually not used in
this test.


Before the test, I had a single gluster volume as follows:

test1# gluster volume status
Status of volume: gv0
Gluster process                  Port   Online  Pid
------------------------------------------------------
Brick test1:/data/brick/gv0      49168  Y       12017
Brick test2:/data/brick/gv0      49168  Y       11835
NFS Server on localhost          2049   Y       12032
Self-heal Daemon on localhost    N/A    Y       12039
NFS Server on test4              2049   Y       7934
Self-heal Daemon on test4        N/A    Y       7939
NFS Server on test3              2049   Y       11768
Self-heal Daemon on test3        N/A    Y       11775
NFS Server on test2              2049   Y       11849
Self-heal Daemon on test2        N/A    Y       11855

Task Status of Volume gv0
------------------------------------------------------
There are no active volume tasks


I created a new volume and set up geo-replication as follows (as these are
test machines I only have one file system on each, hence using "force" to
create the bricks in the root FS):

test4# date ; gluster volume create gv0-slave test4:/data/brick/gv0-slave force; date
Mon Oct 13 15:03:14 BST 2014
volume create: gv0-slave: success: please start the volume to access data
Mon Oct 13 15:03:15 BST 2014

test4# date ; gluster volume start gv0-slave; date
Mon Oct 13 15:03:36 BST 2014
volume start: gv0-slave: success
Mon Oct 13 15:03:39 BST 2014

test4# date ; gluster volume geo-replication gv0 test4::gv0-slave create push-pem force ; date
Mon Oct 13 15:05:59 BST 2014
Creating geo-replication session between gv0 & test4::gv0-slave has been successful
Mon Oct 13 15:06:11 BST 2014


I then mount volume gv0 on one of the client machines. I can create files
within the gv0 volume and can see the changes being replicated to the
gv0-slave volume, so I know that geo-replication is working at the start.

When I run my script (which quickly creates, deletes and renames files),
geo-replication breaks within a very short time. The test script output is
in http://gluster.dogwind.com/files/georep20141013/test6_script-output.log
(I interrupted the script once I saw that geo-replication was broken).
Note that when it deletes a file, it renames any later-numbered file so that
the file numbering remains sequential with no gaps; this simulates a real
world application that we use.
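
For reference, a rough bash sketch of that create/delete/rename pattern 
(this is not the actual script, which is linked below; the mount point, 
file names and counts are illustrative):

  #!/bin/bash
  # Work inside the FUSE mount of the replicated volume (path is illustrative).
  cd /mnt/gv0 || exit 1
  # Create an initial run of sequentially numbered files.
  for i in $(seq 1 10); do
      echo "data $i" > "file_$(printf '%04d' "$i")"
  done
  # Repeatedly delete the lowest-numbered file, shift the later files down
  # by one so numbering stays sequential with no gaps, then recreate the
  # highest number.
  while true; do
      rm -f file_0001
      n=2
      while [ -e "file_$(printf '%04d' "$n")" ]; do
          mv "file_$(printf '%04d' "$n")" "file_$(printf '%04d' $((n-1)))"
          n=$((n+1))
      done
      echo "new data" > "file_$(printf '%04d' $((n-1)))"
  done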

If you want a copy of the test script, it's here:
http://gluster.dogwind.com/files/georep20141013/test_script.tar.gz


The various gluster log files can be downlo

[Gluster-users] NFS not start on localhost

2014-10-17 Thread Demeter Tibor

Hi, 

I have set up glusterfs with NFS support.

I don't know why, but after a reboot NFS does not listen on localhost, only 
on gs01.




[root@node0 ~]# gluster volume info engine 

Volume Name: engine 
Type: Replicate 
Volume ID: 2ea009bf-c740-492e-956d-e1bca76a0bd3 
Status: Started 
Number of Bricks: 1 x 2 = 2 
Transport-type: tcp 
Bricks: 
Brick1: gs00.itsmart.cloud:/gluster/engine0 
Brick2: gs01.itsmart.cloud:/gluster/engine1 
Options Reconfigured: 
storage.owner-uid: 36 
storage.owner-gid: 36 
performance.quick-read: off 
performance.read-ahead: off 
performance.io-cache: off 
performance.stat-prefetch: off 
cluster.eager-lock: enable 
network.remote-dio: enable 
cluster.quorum-type: auto 
cluster.server-quorum-type: server 
auth.allow: * 
nfs.disable: off 


[root@node0 ~]# gluster volume status engine 
Status of volume: engine 
Gluster process                            Port   Online  Pid
--------------------------------------------------------------
Brick gs00.itsmart.cloud:/gluster/engine0  50158  Y       3250
Brick gs01.itsmart.cloud:/gluster/engine1  50158  Y       5518
NFS Server on localhost                    N/A    N       N/A
Self-heal Daemon on localhost              N/A    Y       3261
NFS Server on gs01.itsmart.cloud           2049   Y       5216
Self-heal Daemon on gs01.itsmart.cloud     N/A    Y       5223






Can anybody help me?


Thanks in advance. 




Tibor 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] geo-replication with lots of folders and files

2014-10-17 Thread Michael Rauch
Hello,

We use GlusterFS version 3.4.5-1.el6.x86_64.
No errors appear in the logs.

Regards,

Michael


From: Aravinda [mailto:avish...@redhat.com]
Sent: Thursday, 16 October 2014 06:48
To: Michael Rauch; gluster-users@gluster.org
Subject: Re: [Gluster-users] geo-replication with lots of folders and files

Hello,

Please let us know the version of GlusterFS you are using. Do you see any 
errors in the log files about sync failures? (Logs will be in the 
/var/log/glusterfs/geo-replication directory on each master node and 
/var/log/glusterfs/geo-replication-slaves on each slave node.)
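
For example, something like this on the master nodes should surface them 
(the exact log file layout under that directory varies by session):

  grep -iE 'error|fail' /var/log/glusterfs/geo-replication/*/*.log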

--
regards
Aravinda
http://aravindavk.in


On 10/15/2014 09:18 PM, Michael Rauch wrote:
Hello all,

We are trying to set up geo-replication for a volume with 200,000 folders and
about 1,500,000 evenly distributed small files. The geo-replication is built
over a VPN connection.

Initially, when the geo-replication starts, both sides have the same files
and directories. Files written randomly on side A appear after a few
seconds on side B. The write load is about 200,000 small files
per day. All works as expected.

We face a problem when the VPN connection has an issue for a few
minutes. During the connectivity problem, writes on side A continue.

After connectivity is stable again, geo-replication is not able
to get both sides back in sync.

Is this behavior already known?

Thanks,

Michael




___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] RHEL6.6 provides Gluster 3.6.0-28.2 ?

2014-10-17 Thread Prasun Gera
Actually, that's not such a great idea and will likely cause issues in the
future. I wouldn't recommend it. I'm not using this in production, and was
just tinkering around.

On Fri, Oct 17, 2014 at 4:27 AM, Prasun Gera  wrote:

> I actually managed to remove all the conflicted packages using rpm --erase
> --nodeps (in case it helps someone). This is what worked for me:
>
> rpm --erase --nodeps augeas-libs glusterfs glusterfs-api glusterfs-fuse
> glusterfs-libs glusterfs-rdma libsmbclient samba samba-client samba-common
> samba-winbind samba-winbind-clients
> (This saves /etc/samba/smb.conf.rpmsave)
> yum install augeas-libs glusterfs glusterfs-api glusterfs-fuse
> glusterfs-libs glusterfs-rdma libsmbclient samba samba-client samba-common
> samba-winbind samba-winbind-clients
> cp /etc/samba/smb.conf /etc/samba/smb.conf.bkp
> cp /etc/samba/smb.conf.rpmsave /etc/samba/smb.conf
>
> This seems to be working all right, although I haven't tested this much
> yet.
>
>
> On Fri, Oct 17, 2014 at 4:21 AM, RAGHAVENDRA TALUR <
> raghavendra.ta...@gmail.com> wrote:
>
>>
>>
>> On Fri, Oct 17, 2014 at 3:59 AM, Kaleb KEITHLEY 
>> wrote:
>>
>>> On 10/16/2014 06:25 PM, Prasun Gera wrote:
>>>
 Thanks. Like I said, I'm not using the glusterfs public/epel
 repositories.

>>>
>>> Oh. Sorry. No, I didn't see that.
>>>
>>>  Do you mean that I should add the public repos ?

>>>
>>> Nope. If you weren't already using the public repos then don't add them
>>> now. Sorry for any confusion.
>>>
>>>  I don't
 have any packages from the public repo. So I thought that my system
 should be internally consistent since all the packages that it has are
 from RHN. It's the base OS channel that is causing the problems.
 Disabling the RHSS storage channel doesn't fix it.

>>>
>>> I'll alert Red Hat's release engineering. Sounds like they borked
>>> something.
>>
>>
>> Prasun,
>>
>> The conflict is between different versions of samba and glusterfs in rhs
>> channel and rhel channel.
>> Temporary workaround for you should be
>> yum update --exclude="glusterfs*" --exclude="samba*" --exclude="libsmb*"
>>
>> Let us know if that works.
>>
>> *Raghavendra Talur *
>>
>>
>>
>>>
>>>
>>> --
>>>
>>> Kaleb
>>>
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>>
>>
>>
>>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] RHEL6.6 provides Gluster 3.6.0-28.2 ?

2014-10-17 Thread Prasun Gera
I actually managed to remove all the conflicted packages using rpm --erase
--nodeps (in case it helps someone). This is what worked for me:

rpm --erase --nodeps augeas-libs glusterfs glusterfs-api glusterfs-fuse
glusterfs-libs glusterfs-rdma libsmbclient samba samba-client samba-common
samba-winbind samba-winbind-clients
(This saves /etc/samba/smb.conf.rpmsave)
yum install augeas-libs glusterfs glusterfs-api glusterfs-fuse
glusterfs-libs glusterfs-rdma libsmbclient samba samba-client samba-common
samba-winbind samba-winbind-clients
cp /etc/samba/smb.conf /etc/samba/smb.conf.bkp
cp /etc/samba/smb.conf.rpmsave /etc/samba/smb.conf

This seems to be working all right, although I haven't tested this much yet.


On Fri, Oct 17, 2014 at 4:21 AM, RAGHAVENDRA TALUR <
raghavendra.ta...@gmail.com> wrote:

>
>
> On Fri, Oct 17, 2014 at 3:59 AM, Kaleb KEITHLEY 
> wrote:
>
>> On 10/16/2014 06:25 PM, Prasun Gera wrote:
>>
>>> Thanks. Like I said, I'm not using the glusterfs public/epel
>>> repositories.
>>>
>>
>> Oh. Sorry. No, I didn't see that.
>>
>>  Do you mean that I should add the public repos ?
>>>
>>
>> Nope. If you weren't already using the public repos then don't add them
>> now. Sorry for any confusion.
>>
>>  I don't
>>> have any packages from the public repo. So I thought that my system
>>> should be internally consistent since all the packages that it has are
>>> from RHN. It's the base OS channel that is causing the problems.
>>> Disabling the RHSS storage channel doesn't fix it.
>>>
>>
>> I'll alert Red Hat's release engineering. Sounds like they borked
>> something.
>
>
> Prasun,
>
> The conflict is between different versions of samba and glusterfs in rhs
> channel and rhel channel.
> Temporary workaround for you should be
> yum update --exclude="glusterfs*" --exclude="samba*" --exclude="libsmb*"
>
> Let us know if that works.
>
> *Raghavendra Talur *
>
>
>
>>
>>
>> --
>>
>> Kaleb
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>
>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] RHEL6.6 provides Gluster 3.6.0-28.2 ?

2014-10-17 Thread RAGHAVENDRA TALUR
On Fri, Oct 17, 2014 at 3:59 AM, Kaleb KEITHLEY  wrote:

> On 10/16/2014 06:25 PM, Prasun Gera wrote:
>
>> Thanks. Like I said, I'm not using the glusterfs public/epel
>> repositories.
>>
>
> Oh. Sorry. No, I didn't see that.
>
>  Do you mean that I should add the public repos ?
>>
>
> Nope. If you weren't already using the public repos then don't add them
> now. Sorry for any confusion.
>
>  I don't
>> have any packages from the public repo. So I thought that my system
>> should be internally consistent since all the packages that it has are
>> from RHN. It's the base OS channel that is causing the problems.
>> Disabling the RHSS storage channel doesn't fix it.
>>
>
> I'll alert Red Hat's release engineering. Sounds like they borked
> something.


Prasun,

The conflict is between different versions of samba and glusterfs in the RHS
channel and the RHEL channel.
A temporary workaround for you should be:
yum update --exclude="glusterfs*" --exclude="samba*" --exclude="libsmb*"

Let us know if that works.

Raghavendra Talur



>
>
> --
>
> Kaleb
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users