Re: [Gluster-users] glusterfs on Microsoft Azure?

2015-09-24 Thread Daniel Müller
Did you run: gluster peer probe SERVER?
What is the probe telling you?
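
For reference, a minimal sequence looks roughly like this (SERVER is a
placeholder for the peer's hostname or IP):

gluster peer probe SERVER
gluster peer status    # both nodes should report "Peer in Cluster (Connected)"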


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 



From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Aki Ketolainen
Sent: Thursday, 24 September 2015 11:51
To: gluster-users@gluster.org
Subject: [Gluster-users] glusterfs on Microsoft Azure?

Hi,

I was trying to set up glusterfs 3.7.4-2 on CentOS 6.7 on Microsoft Azure.
We are fortunate to receive service from the Microsoft BizSpark program for
startups.
Unfortunately, we are forced to run the virtual machines on separate Azure
accounts.
This in turn means the VMs are not able to ping each other.

Reading the gluster documentation at
http://gluster.readthedocs.org/en/latest/Install-Guide/Configure/
it says "For the Gluster to communicate within a cluster either the
firewalls have to be turned off or enable communication for each server."

In my case this is impossible because the VMs under different Azure
accounts are unable to ping each other. This restriction is a design choice
Microsoft has made. The needed glusterfs service ports are open between the
VMs.

What could be a remedy to this problem?

Here's detailed information about my setup:

(on hostA)
# gluster peer status
Number of Peers: 1

Hostname: hostB.domain
Uuid: 7e468940-97e5-4595-a835-5eeedae07770
State: Peer in Cluster (Connected)

(on hostB)
# gluster peer status
Number of Peers: 1

Hostname: hostA.domain
Uuid: 1a5a62a8-4e08-4f06-9f5d-e159a2f7e5c9
State: Peer in Cluster (Connected)

(on hostA)
# gluster volume create home replica 2 hostA.domain:/data/home/gv0
hostB.domain:/data/home/gv0
volume create: home: failed: Host hostA.domain is not in 'Peer in Cluster'
state

(/var/log/glusterfs/etc-glusterfs-glusterd.vol.log)
[2015-09-22 07:54:36.374674] I [MSGID: 106487]
[glusterd-handler.c:1402:__glusterd_handle_cli_list_friends] 0-glusterd:
Received cli list req
[2015-09-22 07:59:08.515867] E [MSGID: 106452]
[glusterd-utils.c:5569:glusterd_new_brick_validate] 0-management: Host
hostA.domain is not in 'Peer in Cluster' state
[2015-09-22 07:59:08.515896] E [MSGID: 106536]
[glusterd-volume-ops.c:1273:glusterd_op_stage_create_volume] 0-management:
Host hostA.domain is not in 'Peer in Cluster' state
[2015-09-22 07:59:08.515908] E [MSGID: 106301]
[glusterd-syncop.c:1274:gd_stage_op_phase] 0-management: Staging of
operation 'Volume Create' failed on localhost : Host hostA.domain is not in
'Peer in Cluster' state

Best regards,

Aki




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Need help making a decision choosing MS DFS or Gluster+SAMBA+CTDB

2015-08-10 Thread Daniel Müller
An example of a working share on samba4:

You can choose to work with vfs objects= glusterfs
glusterfs:volume=yourvolume
glusterfs:volfile_server=Your.server
For me it turned out to be too buggy (a rough sketch of that variant follows
the share example below).


Instead I just used path=/path/to/your/mountedgluster

You will need this:
posix locking =NO
kernel share modes = No

 [edv]
comment=edv s4master verzeichnis auf gluster node1
vfs objects= recycle
##vfs objects= recycle, glusterfs
recycle:repository= /%P/Papierkorb
##glusterfs:volume= sambacluster
##glusterfs:volfile_server = XXX..
recycle:exclude = *.tmp,*.temp,*.log,*.ldb,*.TMP,?~$*,~$*,Thumbs.db
recycle:keeptree = Yes
recycle:exclude_dir = .Papierkorb,Papierkorb,tmp,temp,profile,.profile
recycle:touch_mtime = yes
recycle:versions = Yes
recycle:minsize = 1
msdfs root=yes
path=/mnt/glusterfs/ads/wingroup/edv
read only=no
posix locking =NO
kernel share modes = No
access based share enum=yes
hide unreadable=yes
hide unwriteable files=yes
veto files = Thumbs.db
delete veto files = yes
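
For comparison, a rough sketch of the vfs-based variant mentioned above (volume
and server names are placeholders, and as said, that route was too buggy for me):

[edv-vfs]
vfs objects = glusterfs
glusterfs:volume = yourvolume
glusterfs:volfile_server = your.server
path = /
read only = no
posix locking = No
kernel share modes = No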

Greetings
Daniel


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 



-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Dan Mons
Sent: Monday, 10 August 2015 09:08
To: Mathieu Chateau
Cc: gluster-users; David
Subject: Re: [Gluster-users] Need help making a decision choosing MS DFS or
Gluster+SAMBA+CTDB

If you're looking at a Gluster+Samba setup of any description for people
extensively using Microsoft Office tools (either Windows or Mac clients), I
*strongly* suggest exhaustive testing of Microsoft Word and Excel.

I've yet to find a way to make these work 100% on Gluster. Strange
client-side locking behaviour with these tools often makes documents
completely unusable when hosted off Gluster. We host our large
production files (VFX industry) off Gluster, but have a separate Windows
Server VM purely for administration to host the legacy Microsoft Office
documents (we've since migrated largely to Google Apps + Google Drive for
that stuff, but the legacy requirement remains for a handful of users).

-Dan


Dan Mons - R&D Sysadmin
Cutting Edge
http://cuttingedge.com.au


On 10 August 2015 at 15:42, Mathieu Chateau  wrote:
> Hello,
>
> What do you mean by "true" clustering?
> We can do a Windows Failover cluster (1 virtual IP, 1 virtual name),
> but this means using shared storage like a SAN.
>
> Then it depends on your network topology. If you have multiple
> geographical sites / datacenters, then DFS-R behaves a lot better than
> Gluster in replicated mode. Users won't notice any latency, at the
> price that replication is async.
>
>
> Cordialement,
> Mathieu CHATEAU
> http://www.lotp.fr
>
> 2015-08-10 7:26 GMT+02:00 Ira Cooper :
>>
>> Mathieu Chateau  writes:
>>
>> > I do have DFS-R in production, that replaced sometimes netapp ones.
>> > But no similar workload as my current GFS.
>> >
>> > In active/active, the most common issue is file changed on both 
>> > side (no global lock) Will users access same content from linux & 
>> > windows ?
>>
>> If you want to go active/active, I'd recommend Samba + CTDB + Gluster.
>>
>> You want true clustering, and a system that can handle the locking etc.
>>
>> I'd layer normal DFS to do "namespace" control, and to help with 
>> handling failover, or just use round robin DNS.
>>
>> Thanks,
>>
>> -Ira
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusteFS VS DRBD + power failure ?

2014-10-27 Thread Daniel Müller
For sure a power failure on both nodes will corrupt DRBD as well. Power failure
across a clustered datacenter is the worst case and a single point of failure.


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 



From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Tytus Rogalewski
Sent: Monday, 27 October 2014 21:35
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] GlusteFS VS DRBD + power failure ?

Hi guys,
I wanted to ask you what happens in case of a power failure.
I have a 2-node Proxmox cluster with glusterfs on sdb1 (XFS), mounted on each
node as localhost/glusterstorage.
I am storing VMs on it as qcow2 (with an ext4 filesystem inside).
Live migration works fine - WOW. Everything works fine.
But tell me, will something bad happen when the power fails in the whole
datacenter?
Will data be corrupted, and would the same thing happen if I were using DRBD?
DRBD doesn't give me as much flexibility (because I can't use qcow2 or store
files like ISOs or backups on DRBD), but glusterfs does give me much more
flexibility!
Anyway, yesterday I created glusterfs with ext4, and a VM qcow2 with ext4 on it,
and when I did "reboot -f" (I assume this is the same as pulling the power cord
off?) - after the node came online again, the VM data was corrupted and I had
many ext failures inside the VM.
Tell me, was that because I used ext4 on top of the sdb1 glusterfs storage, or
would it behave the same with XFS?
Is DRBD better protection in case of a power failure?

Anyway, a second question: if I have 2 nodes with glusterfs,
node1 is changing file1.txt,
node2 is changing file2.txt,
then I disconnect glusterfs on the network, and data keeps changing on both
nodes.
After I reconnect glusterfs, how will this go?
Will the newer file1 changed on node1 overwrite file1 on node2,
and the newer file2 changed on node2 overwrite file2 on node1?
Am I correct?

Thx for the answer :)


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] logfiles get flooded by warnings

2014-09-19 Thread Daniel Müller
I had the same issue. Restarting gluster did not solve it. I just set up a
cron job to truncate the logs because my partition was nearly full.
Then, some days after restarting both replica servers, the warning was gone.
No real solution!??
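
A stopgap along these lines can at least keep the partition from filling up
(log path and size threshold are assumptions, adjust to your setup):

#!/bin/sh
# /etc/cron.daily/trim-gluster-logs -- workaround only, not a fix
find /var/log/glusterfs -name '*.log' -size +500M -exec truncate -s 0 {} \;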


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 



From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Bernhard Glomm
Sent: Friday, 19 September 2014 10:26
To: gluster-users@gluster.org
Subject: [Gluster-users] logfiles get flooded by warnings

Hi all,

I'm running 
#: glusterfs -V
#: glusterfs 3.4.5 built on Aug  6 2014 19:15:07
on ubuntu 14.04 from the semiosis ppa

I have a replica 2 with 2 servers
another client does a fuse mount of a volume.
On rsyncing a bit of data onto the fuse mount,
I get an entry like the one below on the client - for each file that is copied
onto the volume:

[2014-09-19 07:57:39.877806] W 
[client-rpc-fops.c:1232:client3_3_removexattr_cbk] 0--client-0: 
remote operation failed: No data available
[2014-09-19 07:57:39.877963] W 
[client-rpc-fops.c:1232:client3_3_removexattr_cbk] 0--client-1: 
remote operation failed: No data available
[2014-09-19 07:57:39.878462] W [fuse-bridge.c:1172:fuse_err_cbk] 
0-glusterfs-fuse: 21741144: REMOVEXATTR() 
//. => -1 (No data available)

The data itself is present and accessible on the volume and on both bricks.

So three questions:
a.) What kind of data is not available? What is the client complaining about?
b.) Since it is a warning and the data seems to be okay - is there anything I
need to fix?
c.) How can I cut down the amount of log lines? It's more than 3GB/day.

TIA

Bernhard

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] mysql on gluster storage

2014-08-19 Thread Daniel Müller
Hm, it seems you need to set your volume options right. With Samba 4 I had
issues and set the volume like this:
performance.stat-prefetch: off


Volume Name: sambacluster
Type: Replicate
Volume ID: 4fd0da03-8579-47cc-926b-d7577dac56cf
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: s4master:/raid5hs/glusterfs/samba
Brick2: s4slave:/raid5hs/glusterfs/samba
Options Reconfigured:
performance.quick-read: on
network.ping-timeout: 5
performance.stat-prefetch: off

It worked for me
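
For reference, the option itself is set with the usual volume set command, e.g.:

gluster volume set sambacluster performance.stat-prefetch off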

EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de



-Original Message-
From: c...@kruemel.org [mailto:c...@kruemel.org]
Sent: Tuesday, 19 August 2014 16:30
To: muel...@tropenklinik.de
Cc: gluster-users@gluster.org
Subject: Re: AW: [Gluster-users] mysql on gluster storage

> I know this issue: if you are running Proxmox for virtual machines and you
> store them on gluster, then you need to set "write through" in the settings
> of your HDD (vmdk or else).

Thanks, Daniel. I have that, and I believe it is the default setting for 
KVM/QEMU (other than in Proxmox, where it is "none"). I.e. the errors occur 
despite "writethrough".

Cheers, Chris

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Anyone using GlusterFS 3.5 in production environment?

2014-08-19 Thread Daniel Müller
2 Samba 4 file servers on "glusterfs 3.5.1 built on Jun 24 2014 15:09:41" on
CentOS 6.5, 2-node replication
as backup.


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 



From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Chuck
Sent: Tuesday, 19 August 2014 09:08
To: gluster-users
Subject: [Gluster-users] Anyone using GlusterFS 3.5 in production
environment?

Hi, Guys,
 
We are thinking of deploying 3.5.
On the website, it says 3.5 is the stable release,
but 3.4 is recommended for production use.
Has anyone already deployed 3.5.x in their production environment?
 
Thanks,
Chuck
 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] mysql on gluster storage

2014-08-18 Thread Daniel Müller
I know this issue: if you are running Proxmox for virtual machines and you
store them on gluster, then you need to set "write through" in the settings of
your HDD (vmdk or else).



EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de

"Der Mensch ist die Medizin des Menschen" 




-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of c...@kruemel.org
Sent: Tuesday, 19 August 2014 08:10
To: gluster-users@gluster.org
Subject: [Gluster-users] mysql on gluster storage

Hi, I'm seeing some strange errors running mysql on gluster:

I'm sharing Gluster volumes from a storage server to a virtual machine
running on the storage server. The VM runs mysql. I have mounted
/var/lib/mysql via glusterfs. /var/lib/mysql is mounted before mysql starts,
and unmounted after it has ended (did that manually to be sure).

Starting up the VM (from the first reboot after installation onwards), I get
this for every database:

140819  7:24:29  InnoDB: Error: table `test`.`test_cache` does not exist in
the InnoDB internal
InnoDB: data dictionary though MySQL is trying to drop it.
InnoDB: Have you copied the .frm file of the table to the
InnoDB: MySQL database directory from another database?
InnoDB: You can look for further help from
InnoDB: 
http://dev.mysql.com/doc/refman/5.5/en/innodb-troubleshooting.html

I can then feed my database backup into mysql, and everything is running
fine until I reboot.

My guess is that some files are not written properly, some buffers are not
flushed properly, ... before reboot, even when manually unmounting after
manually stopping mysql.

Would I need to do a manual flush or something before unmounting/rebooting?

Technical details: storage server and VM both run Ubuntu 12.04, glusterfs
3.4.2 installed from ppa:semiosis/ubuntu-glusterfs-3.4.

The resource was created on the storage server with

gluster volume create mysql transport tcp machine:/mnt/raid/mysql

Chris
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] WG: Strange issu concerning glusterfs 3.5.1 on centos 6.5

2014-07-31 Thread Daniel Müller
Working 172.17.2.30,
[root@centclust2 glusterfs]# telnet 172.17.2.30 49152
Trying 172.17.2.30...
Connected to 172.17.2.30.
Escape character is '^]'.

Working 192.168.135.36,

[root@centclust1 ssl]# telnet 192.168.135.36 49152
Trying 192.168.135.36...
Connected to centclust1 (192.168.135.36).
Escape character is '^]'.

Working 172.17.2.31,

[root@centclust1 ~]# telnet 172.17.2.31 49152
Trying 172.17.2.31...
Connected to centclust2 (172.17.2.31).
Escape character is '^]'.


Working 192.168.135.46,

[root@centclust1 ~]# telnet 192.168.135.46 49152
Trying 192.168.135.46...
Connected to centclust2 (192.168.135.46).
Escape character is '^]'.

This did work even before!

Greetings
Daniel


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de





-Original Message-
From: Vijaikumar M [mailto:vmall...@redhat.com]
Sent: Thursday, 31 July 2014 13:33
To: muel...@tropenklinik.de
Cc: 'Krishnan Parthasarathi'; gluster-devel-boun...@gluster.org;
gluster-users@gluster.org
Subject: Re: [Gluster-users] WG: Strange issu concerning glusterfs 3.5.1 on
centos 6.5

Hi Daniel,

Check if telnet works on the brick port from both interfaces.

telnet 172.17.2.30
telnet 192.168.135.36 

telnet 172.17.2.31
telnet 192.168.135.46 


Thanks,
Vijay


On Thursday 31 July 2014 04:37 PM, Daniel Müller wrote:
> So,
>
> [root@centclust1 ~]# ifconfig
> eth0  Link encap:Ethernet  Hardware Adresse 00:25:90:80:D9:E8
>inet Adresse:172.17.2.30  Bcast:172.17.2.255  Maske:255.255.255.0
>inet6 Adresse: fe80::225:90ff:fe80:d9e8/64 
> Gültigkeitsbereich:Verbindung
>UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>RX packets:3506528 errors:0 dropped:0 overruns:0 frame:0
>TX packets:169905 errors:0 dropped:0 overruns:0 carrier:0
>Kollisionen:0 Sendewarteschlangenlänge:1000
>RX bytes:476128477 (454.0 MiB)  TX bytes:18788266 (17.9 MiB)
>Speicher:fe86-fe88
>
> eth1  Link encap:Ethernet  Hardware Adresse 00:25:90:80:D9:E9
>inet Adresse:192.168.135.36  Bcast:192.168.135.255  
> Maske:255.255.255.0
>inet6 Adresse: fe80::225:90ff:fe80:d9e9/64 
> Gültigkeitsbereich:Verbindung
>UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>RX packets:381664693 errors:0 dropped:0 overruns:0 frame:0
>TX packets:380924973 errors:0 dropped:0 overruns:0 carrier:0
>Kollisionen:0 Sendewarteschlangenlänge:1000
>RX bytes:477454156923 (444.6 GiB)  TX bytes:476729269342 (443.9 
> GiB)
>Speicher:fe8e-fe90
>
> loLink encap:Lokale Schleife
>inet Adresse:127.0.0.1  Maske:255.0.0.0
>inet6 Adresse: ::1/128 Gültigkeitsbereich:Maschine
>UP LOOPBACK RUNNING  MTU:16436  Metric:1
>RX packets:93922879 errors:0 dropped:0 overruns:0 frame:0
>TX packets:93922879 errors:0 dropped:0 overruns:0 carrier:0
>Kollisionen:0 Sendewarteschlangenlänge:0
>RX bytes:462579764180 (430.8 GiB)  TX bytes:462579764180 
> (430.8 GiB)
>
>
> [root@centclust2 ~]# ifconfig
> eth0  Link encap:Ethernet  Hardware Adresse 00:25:90:80:EF:00
>inet Adresse:172.17.2.31  Bcast:172.17.2.255  Maske:255.255.255.0
>inet6 Adresse: fe80::225:90ff:fe80:ef00/64 
> Gültigkeitsbereich:Verbindung
>UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>RX packets:1383117 errors:0 dropped:0 overruns:0 frame:0
>TX packets:45828 errors:0 dropped:0 overruns:0 carrier:0
>Kollisionen:0 Sendewarteschlangenlänge:1000
>RX bytes:185634714 (177.0 MiB)  TX bytes:5357926 (5.1 MiB)
>Speicher:fe86-fe88
>
> eth1  Link encap:Ethernet  Hardware Adresse 00:25:90:80:EF:01
>inet Adresse:192.168.135.46  Bcast:192.168.135.255  
> Maske:255.255.255.0
>inet6 Adresse: fe80::225:90ff:fe80:ef01/64 
> Gültigkeitsbereich:Verbindung
>UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>RX packets:340364283 errors:0 dropped:0 overruns:0 frame:0
>TX packets:59930672 errors:0 dropped:0 overruns:0 carrier:0
>Kollisionen:0 Sendewarteschlangenlänge:1000
>RX bytes:473823738544 (441.2 GiB)  TX bytes:9973035418 (9.2 GiB)
>Speicher:fe8e-fe90
>
> loLink encap:Lokale Schleife
>inet Adresse:127.0.0.1  Maske:255.0.0.0
>inet6 Adresse: ::1/128 Gültigkeitsbereich:Maschine
>UP LOOPBACK RUNNING  MTU:164

Re: [Gluster-users] WG: Strange issu concerning glusterfs 3.5.1 on centos 6.5

2014-07-31 Thread Daniel Müller
 0.0.0.0 255.255.0.0 U 1003   00 eth1
0.0.0.0 192.168.135.230 0.0.0.0 UG0  00 eth1



EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de





-Original Message-
From: Krishnan Parthasarathi [mailto:kpart...@redhat.com]
Sent: Thursday, 31 July 2014 12:55
To: muel...@tropenklinik.de
Cc: gluster-devel-boun...@gluster.org; gluster-users@gluster.org
Subject: Re: [Gluster-users] WG: Strange issu concerning glusterfs 3.5.1 on
centos 6.5

Daniel,

Could you provide the following details from your original two-NIC setup,
probed using hostnames?
1) output of ifconfig for the two NICs on both nodes.
2) output of route from both nodes.

~KP
- Original Message -
> Hello and thank you so far,
> What I have recognized is that having more than one NIC running confuses
> glusterfs 3.5. I never saw this on my glusterfs 3.4 and 3.2 systems, which
> are still working.
> So I cleanly erased gluster with yum erase glusterfs* and did:
> Logged in to both my nodes in the 135 subnet, e.g.:
> ssh 192.168.135.36 (centclust1)  (172.17.2.30 is the 2nd NIC)
> ssh 192.168.135.46 (centclust2)  (172.17.2.31 is the 2nd NIC)
> Started gluster on both nodes: service glusterd start.
> Did the peer probe on 192.168.135.36/centclust1:
> gluster peer probe 192.168.135.46 //Formerly I did gluster peer probe
> centclust2
> This results in:
> [root@centclust1 ~]# gluster peer status Number of Peers: 1
> 
> Hostname: 192.168.135.46
> Uuid: c395c15d-5187-4e5b-b680-57afcb88b881
> State: Peer in Cluster (Connected)
> 
> [root@centclust2 backup]# gluster peer status Number of Peers: 1
> 
> Hostname: 192.168.135.36
> Uuid: 94d5903b-ebe9-40d6-93bf-c2f2e92909a0
> State: Peer in Cluster (Connected)
> The significant difference: gluster now shows the IPs of both nodes.
> 
> Now I created the replicating volume:
> gluster volume create smbcluster replica 2 transport tcp 
> 192.168.135.36:/sbu/glusterfs/export  
> 192.168.135.46:/sbu/glusterfs/export
> started the volume
> gluster volume status
> Status of volume: smbcluster
> Gluster process PortOnline  Pid
> --
> Brick 192.168.135.36:/sbu/glusterfs/export  49152   Y   27421
> Brick 192.168.135.46:/sbu/glusterfs/export  49152   Y   12186
> NFS Server on localhost 2049Y   27435
> Self-heal Daemon on localhost   N/A Y   27439
> NFS Server on 192.168.135.462049Y   12200
> Self-heal Daemon on 192.168.135.46  N/A Y   12204
> 
> Task Status of Volume smbcluster
> --
> 
> There are no active volume tasks
> 
> Mounted the volumes:
> 
> Centclust1:mount -t glusterfs 192.168.135.36:/smbcluster /mntgluster -o acl
> Centclust2:mount -t glusterfs 192.168.135.46:/smbcluster /mntgluster -o acl
> 
> And BINGO up and running!!!
> 
> 
> EDV Daniel Müller
> 
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: muel...@tropenklinik.de
> Internet: www.tropenklinik.de
> 
> 
> 
> 
> -Original Message-
> From: Krishnan Parthasarathi [mailto:kpart...@redhat.com]
> Sent: Wednesday, 30 July 2014 16:52
> To: muel...@tropenklinik.de
> Cc: gluster-devel-boun...@gluster.org; gluster-users@gluster.org
> Subject: Re: [Gluster-users] WG: Strange issu concerning glusterfs 
> 3.5.1 on centos 6.5
> 
> Daniel,
> 
> I didn't get a chance to follow up with debugging this issue. I will 
> look into this and get back to you. I suspect that there is something 
> different about the network layer behaviour in your setup.
> 
> ~KP
> 
> - Original Message -
> > Just another other test:
> > [root@centclust1 sicherung]# getfattr -d -e hex -m . /sicherung/bu
> > getfattr: Entferne führenden '/' von absoluten Pfadnamen # file:
> > sicherung/bu
> > security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696
> > c6
> > 55f743a733000
> > trusted.afr.smbbackup-client-0=0x
> > trusted.afr.smbbackup-client-1=0x00020001
> > trusted.gfid=0x0001
> > trusted.glusterfs.dht=0x0001
> > trusted.glust

Re: [Gluster-users] WG: Strange issu concerning glusterfs 3.5.1 on centos 6.5

2014-07-31 Thread Daniel Müller
Hello and thank you so far,
What I have recognized is that having more than one NIC running confuses
glusterfs 3.5. I never saw this on my glusterfs 3.4 and 3.2 systems, which are
still working.
So I cleanly erased gluster with yum erase glusterfs* and did:
Logged in to both my nodes in the 135 subnet, e.g.:
ssh 192.168.135.36 (centclust1)  (172.17.2.30 is the 2nd NIC)
ssh 192.168.135.46 (centclust2)  (172.17.2.31 is the 2nd NIC)
Started gluster on both nodes: service glusterd start.
Did the peer probe on 192.168.135.36/centclust1:
gluster peer probe 192.168.135.46 //Formerly I did gluster peer probe centclust2
This results in:
[root@centclust1 ~]# gluster peer status
Number of Peers: 1

Hostname: 192.168.135.46
Uuid: c395c15d-5187-4e5b-b680-57afcb88b881
State: Peer in Cluster (Connected)

[root@centclust2 backup]# gluster peer status
Number of Peers: 1

Hostname: 192.168.135.36
Uuid: 94d5903b-ebe9-40d6-93bf-c2f2e92909a0
State: Peer in Cluster (Connected)
The significant difference: gluster now shows the IPs of both nodes.

Now I created the replicating volume:
gluster volume create smbcluster replica 2 transport tcp  
192.168.135.36:/sbu/glusterfs/export  192.168.135.46:/sbu/glusterfs/export
started the volume
gluster volume status
Status of volume: smbcluster
Gluster process PortOnline  Pid
--
Brick 192.168.135.36:/sbu/glusterfs/export  49152   Y   27421
Brick 192.168.135.46:/sbu/glusterfs/export  49152   Y   12186
NFS Server on localhost 2049Y   27435
Self-heal Daemon on localhost   N/A Y   27439
NFS Server on 192.168.135.462049Y   12200
Self-heal Daemon on 192.168.135.46  N/A Y   12204

Task Status of Volume smbcluster
--
There are no active volume tasks

Mounted the volumes:

Centclust1:mount -t glusterfs 192.168.135.36:/smbcluster /mntgluster -o acl
Centclust2:mount -t glusterfs 192.168.135.46:/smbcluster /mntgluster -o acl

And BINGO up and running!!!


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de




-Original Message-
From: Krishnan Parthasarathi [mailto:kpart...@redhat.com]
Sent: Wednesday, 30 July 2014 16:52
To: muel...@tropenklinik.de
Cc: gluster-devel-boun...@gluster.org; gluster-users@gluster.org
Subject: Re: [Gluster-users] WG: Strange issu concerning glusterfs 3.5.1 on 
centos 6.5

Daniel,

I didn't get a chance to follow up with
debugging this issue. I will look into this and get back to you. I suspect that 
there is something different about the network layer behaviour in your setup.

~KP

- Original Message -
> Just another test:
> [root@centclust1 sicherung]# getfattr -d -e hex -m . /sicherung/bu
> getfattr: Entferne führenden '/' von absoluten Pfadnamen # file: 
> sicherung/bu 
> security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c6
> 55f743a733000 
> trusted.afr.smbbackup-client-0=0x
> trusted.afr.smbbackup-client-1=0x00020001
> trusted.gfid=0x0001
> trusted.glusterfs.dht=0x0001
> trusted.glusterfs.volume-id=0x6f51d002e634437db58d9b952693f1df
> 
> [root@centclust2 glusterfs]# getfattr -d -e hex -m . /sicherung/bu
> getfattr: Entferne führenden '/' von absoluten Pfadnamen # file: 
> sicherung/bu 
> security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c6
> 55f743a733000
> trusted.afr.smbbackup-client-0=0x00020001
> trusted.afr.smbbackup-client-1=0x
> trusted.gfid=0x0001
> trusted.glusterfs.dht=0x0001
> trusted.glusterfs.volume-id=0x6f51d002e634437db58d9b952693f1df
> 
> Is this ok?
> 
> After long testing and doing an /etc/init.d/network restart, the
> replication started once for a short time, then stopped!?
> Any idea???
> 
> 
> EDV Daniel Müller
> 
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: muel...@tropenklinik.de
> Internet: www.tropenklinik.de
> 
> "Der Mensch ist die Medizin des Menschen"
> 
> 
> 
> 
> -Original Message-
> From: Krishnan Parthasarathi [mailto:kpart...@redhat.com]
> Sent: Wednesday, 30 July 2014 11:09
> To: muel...@tropenklinik.de
> Cc: gluster-devel-boun...@gluster.org; gluster-users@glus

Re: [Gluster-users] WG: Strange issu concerning glusterfs 3.5.1 on centos 6.5

2014-07-30 Thread Daniel Müller
Just another test:
[root@centclust1 sicherung]# getfattr -d -e hex -m . /sicherung/bu
getfattr: Entferne führenden '/' von absoluten Pfadnamen
# file: sicherung/bu
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
trusted.afr.smbbackup-client-0=0x
trusted.afr.smbbackup-client-1=0x00020001
trusted.gfid=0x0001
trusted.glusterfs.dht=0x0001
trusted.glusterfs.volume-id=0x6f51d002e634437db58d9b952693f1df

[root@centclust2 glusterfs]# getfattr -d -e hex -m . /sicherung/bu
getfattr: Entferne führenden '/' von absoluten Pfadnamen
# file: sicherung/bu
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
trusted.afr.smbbackup-client-0=0x00020001
trusted.afr.smbbackup-client-1=0x
trusted.gfid=0x0001
trusted.glusterfs.dht=0x0001
trusted.glusterfs.volume-id=0x6f51d002e634437db58d9b952693f1df

Is this ok?

After long testing and doing an /etc/init.d/network restart, the replication
started once for a short time,
then stopped!?
Any idea???


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de

"Der Mensch ist die Medizin des Menschen" 




-Original Message-
From: Krishnan Parthasarathi [mailto:kpart...@redhat.com]
Sent: Wednesday, 30 July 2014 11:09
To: muel...@tropenklinik.de
Cc: gluster-devel-boun...@gluster.org; gluster-users@gluster.org
Subject: Re: [Gluster-users] WG: Strange issu concerning glusterfs 3.5.1 on 
centos 6.5

Could you provide the output of the following command?

netstat -ntap | grep gluster

This should tell us if glusterfsd processes (bricks) are listening on all 
interfaces.

~KP

- Original Message -
> Just one idea:
> I added a second NIC with a 172.17.2... address on both machines.
> Could this cause the trouble!?
> 
> EDV Daniel Müller
> 
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: muel...@tropenklinik.de
> Internet: www.tropenklinik.de
> 
> 
> 
> 
> -Original Message-
> From: Krishnan Parthasarathi [mailto:kpart...@redhat.com]
> Sent: Wednesday, 30 July 2014 09:29
> To: muel...@tropenklinik.de
> Cc: gluster-devel-boun...@gluster.org; gluster-users@gluster.org
> Subject: Re: [Gluster-users] WG: Strange issu concerning glusterfs 
> 3.5.1 on centos 6.5
> 
> Daniel,
> 
> From a quick look, I see that glustershd and the nfs client are unable
> to connect to one of the bricks. This results in data from mounts
> being written to the local bricks only.
> I should have asked this before: could you provide the brick logs as well?
> 
> Could you also try to connect to the bricks using telnet?
> For example, from centclust1: telnet centclust2 .
> 
> ~KP
> 
> - Original Message -
> > So my logs. I disable ssl meanwhile but it is the same situation. No 
> > replication!?
> > 
> > 
> > 
> > EDV Daniel Müller
> > 
> > Leitung EDV
> > Tropenklinik Paul-Lechler-Krankenhaus Paul-Lechler-Str. 24
> > 72076 Tübingen
> > Tel.: 07071/206-463, Fax: 07071/206-499
> > eMail: muel...@tropenklinik.de
> > Internet: www.tropenklinik.de
> > 
> > 
> > 
> > 
> > 
> > -Original Message-
> > From: Krishnan Parthasarathi [mailto:kpart...@redhat.com]
> > Sent: Wednesday, 30 July 2014 08:56
> > To: muel...@tropenklinik.de
> > Cc: gluster-users@gluster.org; gluster-devel-boun...@gluster.org
> > Subject: Re: [Gluster-users] WG: Strange issu concerning glusterfs
> > 3.5.1 on centos 6.5
> > 
> > Could you attach the entire mount and glustershd log files to this thread?
> > 
> > ~KP
> > 
> > - Original Message -
> > > NO ONE!??
> > > This is an entry of my glustershd.log:
> > > [2014-07-30 06:40:59.294334] W
> > > [client-handshake.c:1846:client_dump_version_cbk] 0-smbbackup-client-1:
> > > received RPC status error
> > > [2014-07-30 06:40:59.294352] I [client.c:2229:client_rpc_notify]
> > > 0-smbbackup-client-1: disconnected from 172.17.2.31:49152. Client 
> > > process will keep trying to connect to glusterd until brick's port 
> > > is available
> > > 
> > > 
> > > This is from mnt-sicherung.log:
> > > [2014-07-30 06:40:38.259850] E [socket.c:2820:socket_connect]
> > > 1-smbbackup-client-0: connect

Re: [Gluster-users] WG: Strange issu concerning glusterfs 3.5.1 on centos 6.5

2014-07-30 Thread Daniel Müller
Just one idea:
I added a second NIC with a 172.17.2... address on both machines.
Could this cause the trouble!?

EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de




-Original Message-
From: Krishnan Parthasarathi [mailto:kpart...@redhat.com]
Sent: Wednesday, 30 July 2014 09:29
To: muel...@tropenklinik.de
Cc: gluster-devel-boun...@gluster.org; gluster-users@gluster.org
Subject: Re: [Gluster-users] WG: Strange issu concerning glusterfs 3.5.1 on 
centos 6.5

Daniel,

From a quick look, I see that glustershd and the nfs client are unable to
connect to one of the bricks. This results in data from mounts being
written to the local bricks only.
I should have asked this before: could you provide the brick logs as well?

Could you also try to connect to the bricks using telnet?
For example, from centclust1: telnet centclust2 .

~KP

- Original Message -
> So my logs. I disable ssl meanwhile but it is the same situation. No 
> replication!?
> 
> 
> 
> EDV Daniel Müller
> 
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: muel...@tropenklinik.de
> Internet: www.tropenklinik.de
> 
> 
> 
> 
> 
> -Original Message-
> From: Krishnan Parthasarathi [mailto:kpart...@redhat.com]
> Sent: Wednesday, 30 July 2014 08:56
> To: muel...@tropenklinik.de
> Cc: gluster-users@gluster.org; gluster-devel-boun...@gluster.org
> Subject: Re: [Gluster-users] WG: Strange issu concerning glusterfs 
> 3.5.1 on centos 6.5
> 
> Could you attach the entire mount and glustershd log files to this thread?
> 
> ~KP
> 
> - Original Message -
> > NO ONE!??
> > This is an entry of my glustershd.log:
> > [2014-07-30 06:40:59.294334] W
> > [client-handshake.c:1846:client_dump_version_cbk] 0-smbbackup-client-1:
> > received RPC status error
> > [2014-07-30 06:40:59.294352] I [client.c:2229:client_rpc_notify]
> > 0-smbbackup-client-1: disconnected from 172.17.2.31:49152. Client 
> > process will keep trying to connect to glusterd until brick's port 
> > is available
> > 
> > 
> > This is from mnt-sicherung.log:
> > [2014-07-30 06:40:38.259850] E [socket.c:2820:socket_connect]
> > 1-smbbackup-client-0: connection attempt on 172.17.2.30:24007 
> > failed, (Connection timed out) [2014-07-30 06:40:41.275120] I 
> > [rpc-clnt.c:1729:rpc_clnt_reconfig]
> > 1-smbbackup-client-0: changing port to 49152 (from 0)
> > 
> > [root@centclust1 sicherung]# gluster --remote-host=centclust1  peer 
> > status Number of Peers: 1
> > 
> > Hostname: centclust2
> > Uuid: 4f15e9bd-9b5a-435b-83d2-4ed202c66b11
> > State: Peer in Cluster (Connected)
> > 
> > [root@centclust1 sicherung]# gluster --remote-host=centclust2  peer 
> > status Number of Peers: 1
> > 
> > Hostname: 172.17.2.30
> > Uuid: 99fe6a2c-df7e-4475-a7bc-a35abba620fb
> > State: Peer in Cluster (Connected)
> > 
> > [root@centclust1 ssl]# ps aux | grep gluster
> > root 13655  0.0  0.0 413848 16872 ?Ssl  08:10   0:00
> > /usr/sbin/glusterd --pid-file=/var/run/glusterd.pid
> > root 13958  0.0  0.0 12139920 44812 ?  Ssl  08:11   0:00
> > /usr/sbin/glusterfsd -s centclust1.tplk.loc --volfile-id 
> > smbbackup.centclust1.tplk.loc.sicherung-bu -p 
> > /var/lib/glusterd/vols/smbbackup/run/centclust1.tplk.loc-sicherung-bu.
> > pid -S /var/run/4c65260e12e2d3a9a5549446f491f383.socket --brick-name 
> > /sicherung/bu -l /var/log/glusterfs/bricks/sicherung-bu.log
> > --xlator-option
> > *-posix.glusterd-uuid=99fe6a2c-df7e-4475-a7bc-a35abba620fb
> > --brick-port
> > 49152 --xlator-option smbbackup-server.listen-port=49152
> > root 13972  0.0  0.0 815748 58252 ?Ssl  08:11   0:00
> > /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p 
> > /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S 
> > /var/run/ee6f37fc79b9cb1968eca387930b39fb.socket
> > root 13976  0.0  0.0 831160 29492 ?Ssl  08:11   0:00
> > /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p 
> > /var/lib/glusterd/glustershd/run/glustershd.pid -l 
> > /var/log/glusterfs/glustershd.log -S 
> > /var/run/aa970d146eb23ba7124e6c4511879850.socket --xlator-option 
> > *replicate*.node-uuid=99fe6a2c-df7e-4475-a7bc-a35abba620fb
> > root 15781  0.0  0.0 105308   932 pts/1S+   08:47   0:00 grep
> > gluster
> > root 29283  0.0  0.0 451116 56812 ?  

[Gluster-users] WG: Strange issu concerning glusterfs 3.5.1 on centos 6.5

2014-07-29 Thread Daniel Müller
Jul29   0:21
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/lib/glusterd/glustershd/run/glustershd.pid -l
/var/log/glusterfs/glustershd.log -S
/var/run/de1427ce373c792c76c38b12c106f029.socket --xlator-option
*replicate*.node-uuid=83e6d78c-0119-4537-8922-b3e731718864




Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de



-Original Message-
From: Daniel Müller [mailto:muel...@tropenklinik.de]
Sent: Tuesday, 29 July 2014 16:02
To: 'gluster-users@gluster.org'
Subject: Strange issu concerning glusterfs 3.5.1 on centos 6.5

Dear all,

there is a strange issue with CentOS 6.5 and glusterfs 3.5.1:

 glusterd -V
glusterfs 3.5.1 built on Jun 24 2014 15:09:41
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation

I am trying to set up a replicated 2-brick volume on two CentOS 6.5 servers.
I can probe fine and my nodes are reporting no errors:
 
[root@centclust1 mnt]# gluster peer status
Number of Peers: 1

Hostname: centclust2
Uuid: 4f15e9bd-9b5a-435b-83d2-4ed202c66b11
State: Peer in Cluster (Connected)

[root@centclust2 sicherung]# gluster peer status
Number of Peers: 1

Hostname: 172.17.2.30
Uuid: 99fe6a2c-df7e-4475-a7bc-a35abba620fb
State: Peer in Cluster (Connected)

Now I set up a replicating volume on an XFS disk: /dev/sdb1 on /sicherung type
xfs (rw)

gluster volume create smbbackup replica 2 transport tcp
centclust1.tplk.loc:/sicherung/bu  centclust2.tplk.loc:/sicherung/bu

gluster volume status smbbackup reports OK:

[root@centclust1 mnt]# gluster volume status smbbackup
Status of volume: smbbackup
Gluster process                             Port    Online  Pid
------------------------------------------------------------------------------
Brick centclust1.tplk.loc:/sicherung/bu     49152   Y       31969
Brick centclust2.tplk.loc:/sicherung/bu     49152   Y       2124
NFS Server on localhost                     2049    Y       31983
Self-heal Daemon on localhost               N/A     Y       31987
NFS Server on centclust2                    2049    Y       2138
Self-heal Daemon on centclust2              N/A     Y       2142

Task Status of Volume smbbackup
------------------------------------------------------------------------------
There are no active volume tasks

[root@centclust2 sicherung]# gluster volume status smbbackup
Status of volume: smbbackup
Gluster process                             Port    Online  Pid
------------------------------------------------------------------------------
Brick centclust1.tplk.loc:/sicherung/bu     49152   Y       31969
Brick centclust2.tplk.loc:/sicherung/bu     49152   Y       2124
NFS Server on localhost                     2049    Y       2138
Self-heal Daemon on localhost               N/A     Y       2142
NFS Server on 172.17.2.30                   2049    Y       31983
Self-heal Daemon on 172.17.2.30             N/A     Y       31987

Task Status of Volume smbbackup
------------------------------------------------------------------------------
There are no active volume tasks

I mounted the vol on both servers with:

mount -t glusterfs centclust1.tplk.loc:/smbbackup  /mnt/sicherung -o acl
mount -t glusterfs centclust2.tplk.loc:/smbbackup  /mnt/sicherung -o acl

But when I write to /mnt/sicherung, the files are not replicated to the other
node in any way!??

They stay on the local server in /mnt/sicherung and /sicherung/bu,
on each node separately:
[root@centclust1 sicherung]# pwd
/mnt/sicherung

[root@centclust1 sicherung]# touch test.txt
[root@centclust1 sicherung]# ls
test.txt
[root@centclust2 sicherung]# pwd
/mnt/sicherung
[root@centclust2 sicherung]# ls
more.txt
[root@centclust1 sicherung]# ls -la /sicherung/bu
insgesamt 0
drwxr-xr-x.  3 root root  38 29. Jul 15:56 .
drwxr-xr-x.  3 root root  15 29. Jul 14:31 ..
drw---. 15 root root 142 29. Jul 15:56 .glusterfs
-rw-r--r--.  2 root root   0 29. Jul 15:56 test.txt
[root@centclust2 sicherung]# ls -la /sicherung/bu
insgesamt 0
drwxr-xr-x. 3 root root 38 29. Jul 15:32 .
drwxr-xr-x. 3 root root 15 29. Jul 14:31 ..
drw---. 7 root root 70 29. Jul 15:32 .glusterfs
-rw-r--r--. 2 root root  0 29. Jul 15:32 more.txt



Greetings
Daniel



EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet

[Gluster-users] Strange issu concerning glusterfs 3.5.1 on centos 6.5

2014-07-29 Thread Daniel Müller
Dear all,

there is a strange issue with CentOS 6.5 and glusterfs 3.5.1:

 glusterd -V
glusterfs 3.5.1 built on Jun 24 2014 15:09:41
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation

I am trying to set up a replicated 2-brick volume on two CentOS 6.5 servers.
I can probe fine and my nodes are reporting no errors:
 
[root@centclust1 mnt]# gluster peer status
Number of Peers: 1

Hostname: centclust2
Uuid: 4f15e9bd-9b5a-435b-83d2-4ed202c66b11
State: Peer in Cluster (Connected)

[root@centclust2 sicherung]# gluster peer status
Number of Peers: 1

Hostname: 172.17.2.30
Uuid: 99fe6a2c-df7e-4475-a7bc-a35abba620fb
State: Peer in Cluster (Connected)

Now I set up a replicating volume on an XFS disk: /dev/sdb1 on /sicherung type
xfs (rw)

gluster volume create smbbackup replica 2 transport tcp
centclust1.tplk.loc:/sicherung/bu  centclust2.tplk.loc:/sicherung/bu

gluster volume status smbbackup reports OK:

[root@centclust1 mnt]# gluster volume status smbbackup
Status of volume: smbbackup
Gluster process                             Port    Online  Pid
------------------------------------------------------------------------------
Brick centclust1.tplk.loc:/sicherung/bu     49152   Y       31969
Brick centclust2.tplk.loc:/sicherung/bu     49152   Y       2124
NFS Server on localhost                     2049    Y       31983
Self-heal Daemon on localhost               N/A     Y       31987
NFS Server on centclust2                    2049    Y       2138
Self-heal Daemon on centclust2              N/A     Y       2142

Task Status of Volume smbbackup
------------------------------------------------------------------------------
There are no active volume tasks

[root@centclust2 sicherung]# gluster volume status smbbackup
Status of volume: smbbackup
Gluster process                             Port    Online  Pid
------------------------------------------------------------------------------
Brick centclust1.tplk.loc:/sicherung/bu     49152   Y       31969
Brick centclust2.tplk.loc:/sicherung/bu     49152   Y       2124
NFS Server on localhost                     2049    Y       2138
Self-heal Daemon on localhost               N/A     Y       2142
NFS Server on 172.17.2.30                   2049    Y       31983
Self-heal Daemon on 172.17.2.30             N/A     Y       31987

Task Status of Volume smbbackup
------------------------------------------------------------------------------
There are no active volume tasks

I mounted the vol on both servers with:

mount -t glusterfs centclust1.tplk.loc:/smbbackup  /mnt/sicherung -o acl
mount -t glusterfs centclust2.tplk.loc:/smbbackup  /mnt/sicherung -o acl

But when I write to /mnt/sicherung, the files are not replicated to the other
node in any way!??

They stay on the local server in /mnt/sicherung and /sicherung/bu,
on each node separately:
[root@centclust1 sicherung]# pwd
/mnt/sicherung

[root@centclust1 sicherung]# touch test.txt
[root@centclust1 sicherung]# ls
test.txt
[root@centclust2 sicherung]# pwd
/mnt/sicherung
[root@centclust2 sicherung]# ls
more.txt
[root@centclust1 sicherung]# ls -la /sicherung/bu
insgesamt 0
drwxr-xr-x.  3 root root  38 29. Jul 15:56 .
drwxr-xr-x.  3 root root  15 29. Jul 14:31 ..
drw---. 15 root root 142 29. Jul 15:56 .glusterfs
-rw-r--r--.  2 root root   0 29. Jul 15:56 test.txt
[root@centclust2 sicherung]# ls -la /sicherung/bu
insgesamt 0
drwxr-xr-x. 3 root root 38 29. Jul 15:32 .
drwxr-xr-x. 3 root root 15 29. Jul 14:31 ..
drw---. 7 root root 70 29. Jul 15:32 .glusterfs
-rw-r--r--. 2 root root  0 29. Jul 15:32 more.txt



Greetings
Daniel



EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de




___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Samba-VFS-Glusterfs issues

2014-07-17 Thread Daniel Müller
Then try the following: just mount your gluster volume on CentOS. Do not use
the VFS!!! Just point your path to the mounted glusterfs
and try to write to it from Samba. This should work without any issue.

Ex: my mount point--172.17.1.1:/sambacluster on /mnt/glusterfs type 
fuse.glusterfs (rw,allow_other,max_read=131072)

My path/share in samba :path=/mnt/glusterfs...
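
A minimal share along those lines might look like this (the share name is just
a placeholder, the path is the fuse mount point from above):

[glustershare]
path = /mnt/glusterfs
read only = no
posix locking = No
kernel share modes = No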


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de




-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Jon Archer
Sent: Thursday, 17 July 2014 17:17
To: 'gluster-users'
Subject: Re: [Gluster-users] Samba-VFS-Glusterfs issues

For reference, I am running CentOS 6.5 but have also tried this on Fedora 20 
with the exact same results.

Jon

On 2014-07-17 15:34, Jon Archer wrote:
> Yes I can mount the gluster volume at the shell and read/write from/to 
> it so there is no issue with Gluster or the volume. It seems to be 
> between samba and gluster from what I can gather.
> 
> 
> Cheers
> 
> Jon
> 
> 
> On 2014-07-17 15:04, Daniel Müller wrote:
>> With samba 4.1.7 on centos 6.5, glusterfs 3.4.1qa2
>> 
>> 
>> gluster volume status
>> Status of volume: sambacluster
>> Gluster process                         Port    Online  Pid
>> ------------------------------------------------------------
>> Brick s4master:/raid5hs/glusterfs/samba 49152   Y       2674
>> Brick s4slave:/raid5hs/glusterfs/samba  49152   Y       2728
>> NFS Server on localhost                 2049    Y       2906
>> Self-heal Daemon on localhost           N/A     Y       2909
>> NFS Server on s4slave                   2049    Y       3037
>> Self-heal Daemon on s4slave             N/A     Y       3041
>> 
>> There are no active volume tasks
>> 
>> Mounts
>> 
>> /dev/sdb1 on /raid5hs type xfs (rw)
>> And
>> 
>> 172.17.1.1:/sambacluster on /mnt/glusterfs type fuse.glusterfs
>> (rw,allow_other,max_read=131072)
>> 
>> 
>> 
>> 
>> 
>> [home]
>> comment=home s4master vfs objects= recycle
>> vfs objects= recycle, glusterfs
>> recycle:repository= /%P/%U/.Papierkorb
>> glusterfs:volume= sambacluster
>> glusterfs:volfile_server = 172.17.1.1  <---set my ip there
>> recycle:exclude = *.tmp,*.temp,*.log,*.ldb,*.TMP,?~$*,~$*
>> recycle:keeptree = Yes
>> recycle:exclude = ?~$*,~$*,*.tmp,*.temp,*.TMP,Thumbs.db
>> recycle:exclude_dir = .Papierkorb,Papierkorb,tmp,temp,profile,.profile
>> recycle:touch_mtime = yes
>> recycle:versions = Yes
>> recycle:minsize = 1
>> msdfs root=yes
>> path= /ads/home ##<---set path that will be accessible on my gluster vol
>> read only=no
>> posix locking =NO  ##<--gluster can work
>> kernel share modes = No  ##gluster can work
>> access based share enum=yes
>> hide unreadable=yes
>> 
>> 
>> Can you read and write to the mounted gluster-vol from the shell?
>> 
>> 
>> EDV Daniel Müller
>> 
>> Leitung EDV
>> Tropenklinik Paul-Lechler-Krankenhaus Paul-Lechler-Str. 24
>> 72076 Tübingen
>> Tel.: 07071/206-463, Fax: 07071/206-499
>> eMail: muel...@tropenklinik.de
>> Internet: www.tropenklinik.de
>> 
>> 
>> 
>> 
>> 
>> -Original Message-
>> From: gluster-users-boun...@gluster.org
>> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Jon Archer
>> Sent: Thursday, 17 July 2014 15:42
>> To: gluster-users
>> Subject: [Gluster-users] Samba-VFS-Glusterfs issues
>> 
>> Hi all,
>> 
>> I'm currently testing out the samba-vfs-glusterfs configuration to 
>> look into replacing fuse mounted volumes.
>> 
>> I've got a server configured as per:
>> http://lalatendumohanty.wordpress.com/2014/02/11/using-glusterfs-with
>> -samba- and-samba-vfs-plugin-for-glusterfs-on-fedora-20/
>> but am seeing an issue:
>> "Failed to set volfile_server..."
>> 
>> I have a gluster volume share:
>> 
>> Volume Name: share
>> Type: Replicate
>> Volume ID: 06d1eb42-873d-43fe-ae94-562e975cca9a
>> Status: Started
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: storenode1:/gluster/br

Re: [Gluster-users] Samba-VFS-Glusterfs issues

2014-07-17 Thread Daniel Müller
With samba 4.1.7 on centos 6.5, glusterfs 3.4.1qa2


gluster volume status
Status of volume: sambacluster
Gluster process PortOnline  Pid

--
Brick s4master:/raid5hs/glusterfs/samba 49152   Y   2674
Brick s4slave:/raid5hs/glusterfs/samba  49152   Y   2728
NFS Server on localhost 2049Y   2906
Self-heal Daemon on localhost   N/A Y   2909
NFS Server on s4slave   2049Y   3037
Self-heal Daemon on s4slave N/A Y   3041

There are no active volume tasks

Mounts

/dev/sdb1 on /raid5hs type xfs (rw)
And

172.17.1.1:/sambacluster on /mnt/glusterfs type fuse.glusterfs
(rw,allow_other,max_read=131072)





[home]
comment=home s4master vfs objects= recycle
vfs objects= recycle, glusterfs
recycle:repository= /%P/%U/.Papierkorb
glusterfs:volume= sambacluster
glusterfs:volfile_server = 172.17.1.1  <---set my ip there
recycle:exclude = *.tmp,*.temp,*.log,*.ldb,*.TMP,?~$*,~$*
recycle:keeptree = Yes
recycle:exclude = ?~$*,~$*,*.tmp,*.temp,*.TMP,Thumbs.db
recycle:exclude_dir = .Papierkorb,Papierkorb,tmp,temp,profile,.profile
recycle:touch_mtime = yes
recycle:versions = Yes
recycle:minsize = 1
msdfs root=yes
path= /ads/home ##<---set path that will be accessible on my gluster vol
read only=no
posix locking =NO  ##<--gluster can work
kernel share modes = No  ##gluster can work
access based share enum=yes
hide unreadable=yes


Can you read and write to the mounted gluster-vol from the shell?
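
For example, something quick like this (using the mount point from above):

touch /mnt/glusterfs/test.txt && ls -l /mnt/glusterfs/test.txt && rm /mnt/glusterfs/test.txt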


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de





-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Jon Archer
Sent: Thursday, 17 July 2014 15:42
To: gluster-users
Subject: [Gluster-users] Samba-VFS-Glusterfs issues

Hi all,

I'm currently testing out the samba-vfs-glusterfs configuration to look into
replacing fuse mounted volumes.

I've got a server configured as per:
http://lalatendumohanty.wordpress.com/2014/02/11/using-glusterfs-with-samba-
and-samba-vfs-plugin-for-glusterfs-on-fedora-20/
but am seeing an issue:
"Failed to set volfile_server..."

I have a gluster volume share:

Volume Name: share
Type: Replicate
Volume ID: 06d1eb42-873d-43fe-ae94-562e975cca9a
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: storenode1:/gluster/bricks/share/brick1
Brick2: storenode2:/gluster/bricks/share/brick1
Options Reconfigured:
server.allow-insecure: on


and this is then added as a share in samba:
[share]
comment = Gluster and CTDB based share
path = /
read only = no
guest ok = yes
valid users = jon
vfs objects = glusterfs
glusterfs:loglevel = 10
glusterfs:volume = share


The share is visible at the top level, just not accessible. If I do try to
access it, I get a samba client log entry of:
[2014/07/17 13:31:56.084620, 0]
../source3/modules/vfs_glusterfs.c:253(vfs_gluster_connect)
Failed to set volfile_server localhost


I've tried setting the glusterfs:volfile_server option in smb.conf to the IP,
localhost and the hostname, with the same response, just with the IP, localhost
or hostname named in the error message.

Packages installed are:

rpm -qa|grep gluster
glusterfs-libs-3.5.1-1.el6.x86_64
glusterfs-cli-3.5.1-1.el6.x86_64
glusterfs-server-3.5.1-1.el6.x86_64
glusterfs-3.5.1-1.el6.x86_64
glusterfs-fuse-3.5.1-1.el6.x86_64
samba-vfs-glusterfs-4.1.9-2.el6.x86_64
glusterfs-api-3.5.1-1.el6.x86_64

rpm -qa|grep samba
samba-4.1.9-2.el6.x86_64
samba-common-4.1.9-2.el6.x86_64
samba-winbind-modules-4.1.9-2.el6.x86_64
samba-vfs-glusterfs-4.1.9-2.el6.x86_64
samba-winbind-clients-4.1.9-2.el6.x86_64
samba-libs-4.1.9-2.el6.x86_64
samba-winbind-4.1.9-2.el6.x86_64
samba-client-4.1.9-2.el6.x86_64


For testing purposes I have disabled SELinux and flushed all firewall
rules.

Does anyone have any ideas on this error and how to resolve?

Thanks

Jon
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Strange problem using samba_vfs_gluster

2014-06-26 Thread Daniel Müller
glusterfs:volume= yourmountedglustervol
glusterfs:volfile_server = 172.17.1.1 <--sometimes the IP is needed here

Good Luck
Daniel

EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de





-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Lalatendu Mohanty
Sent: Friday, 27 June 2014 06:45
To: Michael DePaulo; Volnei
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Strange problem using samba_vfs_gluster

On 06/27/2014 06:04 AM, Michael DePaulo wrote:
> On Thu, Jun 26, 2014 at 8:20 AM, Volnei  wrote:
>> Hi,
>>
>> If I use the configuration method "vfs" I can see the share, but I
>> cannot write anything into it.
>> On Windows 7, for example, the following message is shown:
>>
>> Error 0x8007045D: the request could not be performed because of an
>> I/O device error, whenever I try to create a folder or a file.
>>
>> If I use the configuration at the mount point, then everything works
>> perfectly.
>>
>> Is this a bug, or am I doing something wrong?
>>
>> Thanks a lot
>>
>> (smb.conf)
>>
>> [GV0-GLUSTERFS]
>>  comment = For samba share of volume gv0
>>  path = /
>>  writable = Yes
>>  read only = No
>>  guest ok = Yes
>>  browseable = yes
>>  create mask = 0777
>>  kernel share modes = No
>>  vfs objects = glusterfs
>>  glusterfs:loglevel = 7
>>  glusterfs:logfile = /var/log/samba/glusterfs-testvol.log
>>  glusterfs:volume = gv0
>>
>> # This works fine
>> #[GLUSTERFS_DATAS]
>> #comment = glusterfs via mountpoint
>> #path = /mnt/dados
>> #writable = Yes
>> #guest ok = Yes
>> #browseable = yes
>> #create mask = 0777
>>
>>
>> Erro message when I use vfs_gluster
>>
>> [2014-06-26 12:06:57.293530] E
>> [afr-self-heal-common.c:233:afr_sh_print_split_brain_log]
0-gv0-replicate-0:
>> Unable to self-heal contents of '/.' (possible split-brain). Please 
>> delete the file from all but the preferred subvolume.- Pending 
>> matrix:  [ [ 0 2 ] [
>> 2 0 ] ]
>> [2014-06-26 12:06:57.294679] E
>> [afr-self-heal-common.c:2859:afr_log_self_heal_completion_status]
>> 0-gv0-replicate-0:  metadata self heal  failed,   on /.
>>
>>
>>
>> Versions:
>>
>> samba-4.1.9-3.fc20.x86_64
>> samba-vfs-glusterfs-4.1.9-3.fc20.x86_64
>> samba-common-4.1.9-3.fc20.x86_64
>> samba-winbind-4.1.9-3.fc20.x86_64
>> samba-winbind-clients-4.1.9-3.fc20.x86_64
>> samba-libs-4.1.9-3.fc20.x86_64
>> samba-winbind-modules-4.1.9-3.fc20.x86_64
>> samba-winbind-krb5-locator-4.1.9-3.fc20.x86_64
>> samba-client-4.1.9-3.fc20.x86_64
>>
>> glusterfs-fuse-3.5.0-3.fc20.x86_64
>> glusterfs-server-3.5.0-3.fc20.x86_64
>> glusterfs-libs-3.5.0-3.fc20.x86_64
>> glusterfs-api-3.5.0-3.fc20.x86_64
>> glusterfs-cli-3.5.0-3.fc20.x86_64
>> glusterfs-3.5.0-3.fc20.x86_64
>>
> On a related note, has anyone had success using samba_vfs_gluster with 
> glusterfs 3.5.x instead of 3.4.x? Is it even supported?
>
> The example post uses 3.4.x:
> http://lalatendumohanty.wordpress.com/2014/02/11/using-glusterfs-with-
> samba-and-samba-vfs-plugin-for-glusterfs-on-fedora-20/
> ___
>
I think I had tested early versions of glusterfs 3.5.0 with samba vfs plugin
and it had worked fine. It should work as mentioned in the above blog post.
If it is not working, then it might be a genuine bug or configuration issue.

-Lala
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] WG: Planing Update gluster 3.4 to gluster 3.5 on centos 6.4

2014-05-23 Thread Daniel Müller
Hi again,
can anyone confirm that the update runs fine on a live production system
without damaging my replicating volumes!?
I need to do this update without stopping the gluster service.
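
For reference, what I have in mind is a rolling, node-by-node update along
these lines (only a sketch; it assumes the 3.5 yum repo is already configured
and <volname> is a placeholder) -- does that look sane?

# on one node at a time, while the other replica keeps serving:
service glusterd stop
pkill glusterfs; pkill glusterfsd
yum update glusterfs*
service glusterd start
gluster volume heal <volname> info   # wait until self-heal is done before moving to the next node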



EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de




-Ursprüngliche Nachricht-
Von: Daniel Müller [mailto:muel...@tropenklinik.de] 
Gesendet: Mittwoch, 14. Mai 2014 08:40
An: 'gluster-users@gluster.org'
Betreff: Planing Update gluster 3.4 to gluster 3.5 on centos 6.4

Hello to all,
I am planning to update gluster 3.4 to the recent version 3.5. Are there any
issues concerning my replicating vols, or can I simply yum install...

Greetings
Daniel


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de





___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Planing Update gluster 3.4 to gluster 3.5 on centos 6.4

2014-05-13 Thread Daniel Müller
Hello to all,
I am planning to update gluster 3.4 to the recent version 3.5. Are there any
issues concerning my replicating vols, or can I simply yum install...

Greetings
Daniel


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de





___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] RPMs for Samba 4.1.3 w/ Gluster VFS plug-in for RHEL, CentOS, etc., now available

2014-02-17 Thread Daniel Müller
For your shares, also set:  posix locking = No
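
I.e. a minimal vfs_glusterfs share along these lines (volume and server names
are placeholders, the rest is taken from the settings that worked here):

[share]
vfs objects = glusterfs
glusterfs:volume = yourvolume
glusterfs:volfile_server = your.server
path = /
read only = No
posix locking = No
kernel share modes = No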


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 
"Der Mensch ist die Medizin des Menschen"




-Ursprüngliche Nachricht-
Von: Dan Mons [mailto:dm...@cuttingedge.com.au] 
Gesendet: Montag, 17. Februar 2014 01:02
An: muel...@tropenklinik.de
Cc: l...@redhat.com; Kaleb KEITHLEY; gluster-users; Jose Rivera; Gluster
Devel
Betreff: Re: [Gluster-users] [Gluster-devel] RPMs for Samba 4.1.3 w/ Gluster
VFS plug-in for RHEL, CentOS, etc., now available

More testing of vfs_glusterfs reveals problems with Microsoft and Adobe
software, and their silly locking.  (For example, Microsoft Excel opening a
.xlsx, or Adobe Photoshop opening a .jpg or .psd).
Dozens of other production applications we use are all fine, for what it's
worth (the bigger the vendor, the more likely it seems their software is
broken.  How amusing).

The following config (without vfs_glusterfs) is the only way I can make
these particular applications play ball.  Still playing with "fake oplocks"
as well to see what broader effect that has (see the note after the share below).

[prodbackup]
vfs object = streams_xattr
path = /prodbackup
Comment = prodbackup
browseable = yes
writable = yes
guest ok = no
valid users = +prod
create mask = 0660
force create mode = 0660
directory mask = 0770
force directory mode = 0770
hide dot files = no
security mask = 0660
force security mode = 0660
directory security mask = 0770
force directory security mode = 0770
kernel share modes = no
kernel oplocks = no
ea support = yes
oplocks = yes
level2 oplocks = yes
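
For the "fake oplocks" experiment mentioned above, the change would be just
one extra line in the same share (untested here, so treat it as a sketch only):

fake oplocks = yes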

-Dan


Dan Mons
Skunk Works
Cutting Edge
http://cuttingedge.com.au


On 17 February 2014 07:58, Dan Mons  wrote:
> Thank you Daniel and Lala,
>
> "kernel share modes = no" was the magic incantation.  This is working 
> nicely for me now.  Much appreciated.
>
> -Dan
>
> 
> Dan Mons
> Skunk Works
> Cutting Edge
> http://cuttingedge.com.au
>
>
> On 14 February 2014 16:52, Daniel Müller  wrote:
>> HI again,
>>
>> I had the same issue. I got it working by:
>> Adding -- kernel share modes = No to the shares
>> EX.:
>> [home]
>> comment=gluster test
>> vfs objects=glusterfs
>> glusterfs:volume= sambacluster
>> glusterfs:volfile_server = 172.17.1.1
>> path=/ads/home
>> read only=no
>> posix locking =NO
>> kernel share modes = No
>>
>> By the way running Centos6.4 Samba 4.1.4, gluster 3.4.1qa2.
>>
>>
>> Good Luck
>> Daniel
>>
>> EDV Daniel Müller
>>
>> Leitung EDV
>> Tropenklinik Paul-Lechler-Krankenhaus Paul-Lechler-Str. 24
>> 72076 Tübingen
>> Tel.: 07071/206-463, Fax: 07071/206-499
>> eMail: muel...@tropenklinik.de
>> Internet: www.tropenklinik.de
>> "Der Mensch ist die Medizin des Menschen"
>>
>>
>>
>>
>> Von: gluster-users-boun...@gluster.org
>> [mailto:gluster-users-boun...@gluster.org] Im Auftrag von Lalatendu
Mohanty
>> Gesendet: Freitag, 14. Februar 2014 07:25
>> An: Dan Mons; Kaleb KEITHLEY
>> Cc: gluster-users@gluster.org; Jose Rivera; Gluster Devel
>> Betreff: Re: [Gluster-users] [Gluster-devel] RPMs for Samba 4.1.3 w/
Gluster
>> VFS plug-in for RHEL, CentOS, etc., now available
>>
>> On 02/14/2014 06:37 AM, Dan Mons wrote:
>> Hi,
>>
>> This is failing for me.  I've had the same problems after trying to build
my
>> own vfs_glusterfs from source.  I'm certain I'm doing something stupid.
>>
>> Client is a Windows Server 2008R2 64bit machine with AD authentication.
>>
>> Server is CentOS 6.4, Gluster 3.4.1GA, Samba 4.1.4 with matching
>> samba_vfs_glusterfs as per this thread, AD authentication (sssd for the
>> Linux/PAM/nsswitch side, and Samba is configured as a member server).
The
>> Gluster volume is working fine (this is our production test/backup
cluster,
>> and has been in operation for over a year).
>>
>> Samba works fine when pointing to a local FUSE mount (this is how we run
in
>> production today for Windows and Mac clients).  When I change to
>> vfs_glusterfs, it all goes wrong.
>>
>> Client errors include:
>>
>> Action: On Windows regular windows explorer directory browsing
>> Result: All good.  Much faster than regular Samba to FUSE.
>>
>> Action: On Windows: Right click -> New ->  Text Document
>> Result: "Unable to create the file "New T

Re: [Gluster-users] [Gluster-devel] RPMs for Samba 4.1.3 w/ Gluster VFS plug-in for RHEL, CentOS, etc., now available

2014-02-13 Thread Daniel Müller
HI again,

I had the same issue. I got it working by:
Adding -- kernel share modes = No to the sahres
EX.:
[home]
comment=gluster test
vfs objects=glusterfs
glusterfs:volume= sambacluster
glusterfs:volfile_server = 172.17.1.1
path=/ads/home
read only=no
posix locking =NO
kernel share modes = No

By the way running Centos6.4 Samba 4.1.4, gluster 3.4.1qa2.
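
A quick way to check that the share is really served through the vfs module
(user name is just an example) is to connect locally and watch the Samba log
for the "Initialized volume from server" line:

smbclient //localhost/home -U youruser -c 'ls'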


Good Luck
Daniel

EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 
"Der Mensch ist die Medizin des Menschen"




Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Lalatendu Mohanty
Gesendet: Freitag, 14. Februar 2014 07:25
An: Dan Mons; Kaleb KEITHLEY
Cc: gluster-users@gluster.org; Jose Rivera; Gluster Devel
Betreff: Re: [Gluster-users] [Gluster-devel] RPMs for Samba 4.1.3 w/ Gluster
VFS plug-in for RHEL, CentOS, etc., now available

On 02/14/2014 06:37 AM, Dan Mons wrote:
Hi,

This is failing for me.  I've had the same problems after trying to build my
own vfs_glusterfs from source.  I'm certain I'm doing something stupid. 

Client is a Windows Server 2008R2 64bit machine with AD authentication.

Server is CentOS 6.4, Gluster 3.4.1GA, Samba 4.1.4 with matching
samba_vfs_glusterfs as per this thread, AD authentication (sssd for the
Linux/PAM/nsswitch side, and Samba is configured as a member server).  The
Gluster volume is working fine (this is our production test/backup cluster,
and has been in operation for over a year).

Samba works fine when pointing to a local FUSE mount (this is how we run in
production today for Windows and Mac clients).  When I change to
vfs_glusterfs, it all goes wrong.

Client errors include:

Action: On Windows regular windows explorer directory browsing
Result: All good.  Much faster than regular Samba to FUSE.

Action: On Windows: Right click -> New ->  Text Document
Result: "Unable to create the file "New Text Document.txt".  The system
cannot find the file specified.

Action: 
On a Linux box: dmesg > test.txt
On Windows: double-click test.txt
Result: The process cannot access the file because it is in use by another
process

Action: On Windows: drag and drop a text file to a share
Result: nothing (file not copied, no error dialog).

Samba logs:
[2014/02/14 10:44:52.972999,  0]
../source3/modules/vfs_glusterfs.c:292(vfs_gluster_connect)
  prodbackup: Initialized volume from server localhost
[2014/02/14 10:46:31.020793,  0]
../source3/modules/vfs_glusterfs.c:627(vfs_gluster_stat)
  glfs_stat(./..) failed: No data available
[2014/02/14 10:47:03.326100,  0]
../source3/modules/vfs_glusterfs.c:292(vfs_gluster_connect)
  prodbackup: Initialized volume from server localhost
[2014/02/14 10:47:08.449040,  0]
../source3/modules/vfs_glusterfs.c:627(vfs_gluster_stat)
  glfs_stat(./..) failed: No data available
[2014/02/14 10:48:21.007241,  0]
../source3/modules/vfs_glusterfs.c:292(vfs_gluster_connect)
  prodbackup: Initialized volume from server localhost
[2014/02/14 10:48:21.068066,  0]
../source3/modules/vfs_glusterfs.c:627(vfs_gluster_stat)
  glfs_stat(./..) failed: No data available
[2014/02/14 10:51:36.683883,  0]
../source3/modules/vfs_glusterfs.c:292(vfs_gluster_connect)
  prodbackup: Initialized volume from server localhost
[2014/02/14 10:51:36.743577,  0]
../source3/modules/vfs_glusterfs.c:627(vfs_gluster_stat)
  glfs_stat(./..) failed: No data available
[2014/02/14 10:53:14.160588,  0]
../source3/modules/vfs_glusterfs.c:627(vfs_gluster_stat)
  glfs_stat(./..) failed: No data available
[2014/02/14 10:53:57.229060,  0]
../source3/modules/vfs_glusterfs.c:292(vfs_gluster_connect)
  prodbackup: Initialized volume from server localhost
[2014/02/14 10:53:57.288750,  0]
../source3/modules/vfs_glusterfs.c:627(vfs_gluster_stat)
  glfs_stat(./..) failed: No data available
[2014/02/14 10:54:47.062171,  0]
../source3/modules/vfs_glusterfs.c:292(vfs_gluster_connect)
  prodbackup: Initialized volume from server localhost
[2014/02/14 10:54:47.121809,  0]
../source3/modules/vfs_glusterfs.c:627(vfs_gluster_stat)
  glfs_stat(./..) failed: No data available
[2014/02/14 10:55:16.602058,  0]
../source3/modules/vfs_glusterfs.c:292(vfs_gluster_connect)
  prodbackup: Initialized volume from server localhost
[2014/02/14 10:55:16.670562,  0]
../source3/modules/vfs_glusterfs.c:627(vfs_gluster_stat)
  glfs_stat(./..) failed: No data available



The log seems similar to bug
https://bugzilla.redhat.com/show_bug.cgi?id=1062674

Please put "kernel share modes = No" for the shares and let us know if it
works for you.

-Lala

/etc/samba/smb.conf:

[global]
        workgroup = BLAH
        server string = Samba Server Version %v
        log file = /var/log/samba/log.%m
        max log size = 50
        security = ads
        passdb backend = tdbsam
        realm = BLAH
        domain master

Re: [Gluster-users] Gluster, Samba, and VFS

2014-02-11 Thread Daniel Müller
That did the trick. Thank you all!!!

 

Greetings

Daniel 

 



EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 

Tel.: 07071/206-463, Fax: 07071/206-499
eMail:  <mailto:muel...@tropenklinik.de> muel...@tropenklinik.de
Internet: www.tropenklinik.de 

"Der Mensch ist die Medizin des Menschen"

 

 



 

Von: Xavier Hernandez [mailto:xhernan...@datalab.es] 
Gesendet: Dienstag, 11. Februar 2014 09:48
An: muel...@tropenklinik.de
Cc: m...@mattandtiff.net; gluster-users@gluster.org
Betreff: Re: [Gluster-users] Gluster, Samba, and VFS

 

Hi Daniel,

have you tried to set the following option into the share definition of 
smb.conf as Lalatendu said (see the bug report 
https://bugzilla.redhat.com/show_bug.cgi?id=1062674) ?

kernel share modes = no

I had a very similar problem and this option solved it.

Best regards,

Xavi

El 11/02/14 07:42, Daniel Müller ha escrit:

No, not really:
Look at my thread: samba vfs objects glusterfs is it now working?
I am just waiting for an answer to fix this.
The only way I succeeded in making it work is how you described (exporting
the fuse mount thru samba)
 
 
 
EDV Daniel Müller
 
Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 
"Der Mensch ist die Medizin des Menschen"
 
 
 
 
Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Matt Miller
Gesendet: Montag, 10. Februar 2014 16:43
An: gluster-users@gluster.org
Betreff: [Gluster-users] Gluster, Samba, and VFS
 
Stumbled upon
https://forge.gluster.org/samba-glusterfs/samba-glusterfs-vfs/commits/master
when trying to find info on how to make Gluster and Samba play nice as a
general purpose file server.  I have had severe performance problems in the
past with mounting the Gluster volume as a Fuse mount, then exporting the
Fuse mount thru Samba.  As I found out after setting up the cluster this is
somewhat expected when serving out lots of small files.  Was hoping VFS
would provide better performance when serving out lots and lots of small
files.
Is anyone using VFS extensions in production?  Is it ready for prime time? 
I could not find a single reference to it on Gluster's main website (maybe I
am looking in the wrong place), so not sure of the stability or
supported-ness of this.
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster, Samba, and VFS

2014-02-10 Thread Daniel Müller
No, not really:
Look at my thread: samba vfs objects glusterfs is it now working?
I am just waiting for an answer to fix this.
The only way I succeeded in making it work is how you described (exporting
the fuse mount thru samba)



EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 
"Der Mensch ist die Medizin des Menschen"




Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Matt Miller
Gesendet: Montag, 10. Februar 2014 16:43
An: gluster-users@gluster.org
Betreff: [Gluster-users] Gluster, Samba, and VFS

Stumbled upon
https://forge.gluster.org/samba-glusterfs/samba-glusterfs-vfs/commits/master
when trying to find info on how to make Gluster and Samba play nice as a
general purpose file server.  I have had severe performance problems in the
past with mounting the Gluster volume as a Fuse mount, then exporting the
Fuse mount thru Samba.  As I found out after setting up the cluster this is
somewhat expected when serving out lots of small files.  Was hoping VFS
would provide better performance when serving out lots and lots of small
files.
Is anyone using VFS extensions in production?  Is it ready for prime time? 
I could not find a single reference to it on Gluster's main website (maybe I
am looking in the wrong place), so not sure of the stability or
supported-ness of this.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Samba vfs objects glusterfs is it now working?

2014-02-06 Thread Daniel Müller
Dear all,

I am trying to get the glusterfs vfs object working.
My samba 4.1.4 share definition:

[home]
comment=gluster test
vfs objects=glusterfs
glusterfs:volume= sambacluster
glusterfs:volfile_server = 172.17.1.1
path=/ads/home
read only=no
posix locking =NO


Mount gluster in fstab:

172.17.1.1:/sambacluster   /mnt/glusterfs glusterfs
defaults,acl,__netdev  0  0

Under /dev/sdb1: mkfs.xfs -i size=512 /dev/sdb1  
/dev/sdb1   /raid5hs   xfs defaults   1 2

[root@s4master ~]# gluster volume info

Volume Name: sambacluster
Type: Replicate
Volume ID: 4fd0da03-8579-47cc-926b-d7577dac56cf
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: s4master:/raid5hs/glusterfs/samba
Brick2: s4slave:/raid5hs/glusterfs/samba
Options Reconfigured:
performance.quick-read: on
network.ping-timeout: 5
performance.stat-prefetch: off

Samba is reporting to load the vol:

Feb  7 08:20:52 s4master GlusterFS[6867]: [2014/02/07 08:20:52.455982,  0]
../source3/modules/vfs_glusterfs.c:292(vfs_gluster_connect)
Feb  7 08:20:52 s4master GlusterFS[6867]:   sambacluster: Initialized volume
from server 172.17.1.1

But when I try to write Office files and txt files from a Windows client to the
share, there is an error: "the file: x could not be created, the system could
not find the file". But after a refresh the file has been created anyway.
These files cannot be changed, but they can be deleted.
Directories can be created without this issue!?

Can you give me a hint to make things work? Or is the plugin not working in
this state?


Greetings
Daniel 



EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 
"Der Mensch ist die Medizin des Menschen"






___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Samba cluster without CTDB?

2014-02-04 Thread Daniel Müller
You can manage your shares and keep them in sync across your file servers. I have a
PDC/BDC/OpenLDAP master/master Ucarp setup running in such an environment.
The smb.conf files just differ in the settings for being PDC or BDC. Changes have
to be edited manually in your smb.conf.


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 
"Der Mensch ist die Medizin des Menschen"





-Ursprüngliche Nachricht-
Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Nux!
Gesendet: Dienstag, 4. Februar 2014 11:53
An: Gluster-users@gluster.org
Betreff: [Gluster-users] Samba cluster without CTDB?

Hello,

On my GlusterFS nodes I already run VRRP (keepalived), installing CTDB would
conflict with this. Is there any way to get Samba gluster-aware without
messing up my HA/VIP setup?

Regards,
Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs looses extendeed attributes Samba 4

2013-09-24 Thread Daniel Müller
The same error after installing 3.4.1qa2!


---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Ursprüngliche Nachricht-
Von: Vijay Bellur [mailto:vbel...@redhat.com] 
Gesendet: Montag, 23. September 2013 15:01
An: muel...@tropenklinik.de
Cc: 'RAGHAVENDRA TALUR'; samba-boun...@lists.samba.org;
gluster-users@gluster.org
Betreff: Re: AW: AW: [Gluster-users] Glusterfs looses extendeed attributes
Samba 4

On 09/23/2013 01:59 PM, Daniel Müller wrote:
> Where do I get: 3.4.1qa2?
>
>
http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/CentOS/epel-6.4/
> x86_64/ <--not available

You can pick it up from here:

http://bits.gluster.org/pub/gluster/glusterfs/3.4.1qa2/x86_64/

-Vijay



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs looses extendeed attributes Samba 4

2013-09-23 Thread Daniel Müller
Where do I get: 3.4.1qa2?

http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/CentOS/epel-6.4/
x86_64/ <--not available

Daniel
---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Ursprüngliche Nachricht-
Von: Vijay Bellur [mailto:vbel...@redhat.com] 
Gesendet: Freitag, 20. September 2013 09:26
An: muel...@tropenklinik.de
Cc: 'RAGHAVENDRA TALUR'; samba-boun...@lists.samba.org;
gluster-users@gluster.org
Betreff: Re: AW: [Gluster-users] Glusterfs looses extendeed attributes Samba
4

On 09/19/2013 05:25 PM, Daniel Müller wrote:
> Bingo! The extended attributes are on and working.
> There seems only a little problem with windows (dsa.msc) complaining 
> about that it could not create the folder but it does?

Can you please try with 3.4.1qa2 and let us know if the issue happens there?

Providing the samba and vfs plugin logs would be useful to debug the problem
further.

Regards,
Vijay

> Thank you
> Daniel
>
> -------
> EDV Daniel Müller
>
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen
>
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: muel...@tropenklinik.de
> Internet: www.tropenklinik.de
> ---
>
> -Ursprüngliche Nachricht-
> Von: Vijay Bellur [mailto:vbel...@redhat.com]
> Gesendet: Donnerstag, 19. September 2013 12:12
> An: muel...@tropenklinik.de
> Cc: 'RAGHAVENDRA TALUR'; samba-boun...@lists.samba.org; 
> gluster-users@gluster.org
> Betreff: Re: [Gluster-users] Glusterfs looses extendeed attributes 
> Samba 4
>
> On 09/19/2013 01:30 PM, Daniel Müller wrote:
>> I logon to the samba4 domain. To set the home directory with the 
>> dsa.msc tool from a windows client-->there profile
>> even the tool complaining the directory could not be created. It 
>> is on my linux box on the glusterd root /raid5hs..
>> The home directories are on a gluster volume exported using samba 4:
>>[home]
>> path= /mnt/glusterfs/ads/home < my glusterfs-client mount
>>   readonly = No
>>posix locking =NO
>>  vfs objects = acl_xattr <--- set but no function
>>
>> I  do a "mount -t glusterfs 172.17.1.1:/sambacluster /mnt/glusterfs -o
ac"
>> no result.
>> I think this is an issue of the glusterfs 3.4 client
>
> Do you observe the same behavior if stat-prefetch is disabled in the
volume?
> stat-prefetch translator can be disabled through the following 
> configuration
> command:
>
> #gluster volume set  stat-prefetch off
>
> -Vijay
>
>
>
>


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs looses extendeed attributes Samba 4

2013-09-19 Thread Daniel Müller
Bingo! The extended attributes are on and working.
There seems only a little problem with windows (dsa.msc) complaining about
that it could not create the folder but it does?
Thank you
Daniel

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Ursprüngliche Nachricht-
Von: Vijay Bellur [mailto:vbel...@redhat.com] 
Gesendet: Donnerstag, 19. September 2013 12:12
An: muel...@tropenklinik.de
Cc: 'RAGHAVENDRA TALUR'; samba-boun...@lists.samba.org;
gluster-users@gluster.org
Betreff: Re: [Gluster-users] Glusterfs looses extendeed attributes Samba 4

On 09/19/2013 01:30 PM, Daniel Müller wrote:
> I logon to the samba4 domain. To set the home directory with the 
> dsa.msc tool from a windows client-->there profile
>even the tool complaining the directory could not be created. It is 
> on my linux box on the glusterd root /raid5hs..
> The home directories are on a gluster volume exported using samba 4:
>   [home]
> path= /mnt/glusterfs/ads/home < my glusterfs-client mount
>  readonly = No
>   posix locking =NO
> vfs objects = acl_xattr <--- set but no function
>
> I  do a "mount -t glusterfs 172.17.1.1:/sambacluster /mnt/glusterfs -o ac"
> no result.
> I think this is an issue of the glusterfs 3.4 client

Do you observe the same behavior if stat-prefetch is disabled in the volume?
stat-prefetch translator can be disabled through the following configuration
command:

#gluster volume set <volname> stat-prefetch off
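
For the volume used in this thread that would be, for example (using the full
option name as shown in the volume info above):

#gluster volume set sambacluster performance.stat-prefetch off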

-Vijay


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs looses extendeed attributes Samba 4

2013-09-19 Thread Daniel Müller
I log on to the samba4 domain to set the home directory with the dsa.msc
tool from a Windows client --> their profile.
Even though the tool complains that the directory could not be created, it is
created on my linux box under the glusterd root /raid5hs..
The home directories are on a gluster volume exported using samba 4:
 [home]
path= /mnt/glusterfs/ads/home < my glusterfs-client mount
readonly = No
 posix locking =NO
   vfs objects = acl_xattr <--- set but no function

I  do a "mount -t glusterfs 172.17.1.1:/sambacluster /mnt/glusterfs -o ac"
no result.
I think this is an issue of the glusterfs 3.4 client

I cannot see any extended attributes from a windows client!
Strange!!!


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 

Von: RAGHAVENDRA TALUR [mailto:raghavendra.ta...@gmail.com] 
Gesendet: Donnerstag, 19. September 2013 09:04
An: muel...@tropenklinik.de
Cc: samba-boun...@lists.samba.org; gluster-users@gluster.org
Betreff: Re: [Gluster-users] Glusterfs looses extendeed attributes Samba 4

Hi Daniel,

If I understand correctly, you are doing this:
1. Using a windows client, set acls on files on a gluster volume exported
using samba 4.
2. Mount the same volume using native glusterfs mount option.
3. Check for the acl set from windows client.

Result is that you are not able to see the acls from glusterfs mount.

My questions:
1. Do you still see the extended attributes on a windows client?
2. Can you try doing a glusterfs mount with the acl option and tell us if that
works?
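
I.e. something along these lines, with the server and volume from your earlier
mails:

mount -t glusterfs -o acl 172.17.1.1:/sambacluster /mnt/glusterfs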
 
Thanks,
Raghavendra Talur

On Wed, Sep 18, 2013 at 7:46 PM, Daniel Müller 
wrote:
No one!?
How do I get the extended attributes passed through to the glusterfs client?
Samba4 (with the Microsoft ADS tool) does indeed create the directories on the
gluster brick /raid5hs/glusterfs/samba/ads/home
the right way, but complains that it could not.
But the glusterfs mount: xxx.xxx.xxx:/sambacluster on /mnt/glusterfs type
fuse.glusterfs (rw,allow_other,max_read=131072)
does not show the extended attributes!?

-------
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Ursprüngliche Nachricht-
Von: Daniel Müller [mailto:muel...@tropenklinik.de]
Gesendet: Montag, 16. September 2013 10:24
An: 'samba-boun...@lists.samba.org'
Cc: 'gluster-users@gluster.org'
Betreff: Glusterfs looses extendeed attributes Samba 4

Hello to all,

There is a strange problem with samba 4 and glusterfs 3.4 on centos 6.4.
While the glusterd XFS filesystem mounted on my /raid5hs has the right
extended attributes:

The xfs Server-Gluster-Partition mounted on /raid5hs ls -la
/raid5hs/glusterfs/samba/ads/home/Administrator insgesamt 0
drwxrwxr-x+ 2 300 users  6 23. Aug 10:35

[root@s4master bricks]# getfacl
/raid5hs/glusterfs/samba/ads/home/Administrator
getfacl: Entferne führende '/' von absoluten Pfadnamen
# file: raid5hs/glusterfs/samba/ads/home/Administrator
# owner: 300
# group: users
user::rwx
user:root:rwx
group::rwx
group:users:rwx
group:300:rwx
mask::rwx
other::r-x
default:user::rwx
default:user:root:rwx
default:user:300:rwx
default:group::r-x
default:group:users:r-x
default:mask::rwx
default:other::r-x

The gluster brick, re-mounted via the glusterfs client on /mnt/glusterfs on the same
host, has lost these attributes:

ls -la   /mnt/glusterfs/ads/home/Administrator
insgesamt 0
drwxrwxr-x 2 300 users  6 23. Aug 10:35 .


[root@s4master bricks]# getfacl  /mnt/glusterfs/ads/home/Administrator
getfacl: Entferne führende '/' von absoluten Pfadnamen
# file: mnt/glusterfs/ads/home/Administrator
# owner: 300
# group: users
user::rwx
group::rwx
other::r-x


The brick is mounted like this --> mount -t glusterfs
172.17.1.1:/sambacluster /mnt/glusterfs -o ac

My smb.conf:

# Global parameters
[global]
        workgroup = TPLK
        realm = tplk.loc
        netbios name = S4MASTER
        server role = active directory domain controller
        server services = s3fs, rpc, nbt, wrepl, ldap, cldap, kdc, drepl,
winbind, ntp_signd, kcc, dnsupdate
        idmap_ldb:use rfc2307 = yes
        allow dns updates = yes
        follow symlinks = yes
        unix extensions = no


[netlogon]
        path = /usr/local/samba/var/locks/sysvol/tplk.loc/scripts
        read only = No

[sysvol]
        path = /usr/local/samba/var/locks/sysvol
        read only = No

[home]
path= /mnt/glusterfs/ads/home
readonly = No
posix locking =NO

Any idea!?

Greetings

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-4

Re: [Gluster-users] Mounting same replica-volume on multiple clients. ????

2013-09-18 Thread Daniel Müller
Just try without "-o backupvolfile-server".
What are your logs telling you?
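
I.e. on each client something like this (names as in your mail), then watch
the client log for that mount point (the log file name follows the mount
point, so adjust it if yours differs):

mount -t glusterfs GFS01:/testvol /data
tail -f /var/log/glusterfs/data.log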
---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Ursprüngliche Nachricht-
Von: Bobby Jacob [mailto:bobby.ja...@alshaya.com] 
Gesendet: Donnerstag, 19. September 2013 08:31
An: Bobby Jacob; muel...@tropenklinik.de; gluster-users@gluster.org
Betreff: RE: [Gluster-users] Mounting same replica-volume on multiple
clients. 

Hi All,

Any idea as to why the below issue happens. ? 

Mounting the volume on Client1   :   mount -t glusterfs -o
backupvolfile-server=GFS02 GFS01:/testvol /data 
Mounting the volume on Client2   :   mount -t glusterfs -o
backupvolfile-server=GFS01 GFS02:/testvol /data

the gluster volume status on both the gluster servers are as follows :

Volume Name: testvol
Type: Replicate
Volume ID: db249daf-174c-493d-b976-58bf9a862d2f
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: GFS01/B01
Brick2: GFS02/B01

Thanks & Regards,
Bobby Jacob

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Bobby Jacob
Sent: Wednesday, September 18, 2013 8:48 AM
To: muel...@tropenklinik.de; gluster-users@gluster.org
Subject: Re: [Gluster-users] Mounting same replica-volume on multiple
clients. 

Exactly. !! BUT I am writing through the volume mount-point from the
clients. !! NOT directly into the bricks. !!

I'm using GlusterFS 3.3.2 with Centos6.4 . !

Thanks & Regards,
Bobby Jacob

-Original Message-----
From: Daniel Müller [mailto:muel...@tropenklinik.de]
Sent: Wednesday, September 18, 2013 8:46 AM
To: Bobby Jacob; gluster-users@gluster.org
Subject: AW: [Gluster-users] Mounting same replica-volume on multiple
clients. 

Hello,
this is the behavior you would see if you wrote directly into the glusterd
directory/partition and not to the remounted replicating bricks!?


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 

Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Bobby Jacob
Gesendet: Mittwoch, 18. September 2013 07:36
An: gluster-users@gluster.org
Betreff: [Gluster-users] Mounting same replica-volume on multiple clients.


HI,

I have 2 gluster nodes (GFS01/GFS02) each with a single brick (B01/B01). I
have created a simple replica volume with these bricks. 
Bricks    : GFS01/B01 and GFS02/B01.
Volume: TestVol

I have 2 clients (C01/C02) which will mount this "testvol" for simultaneous
read/write. The 2 clients run the same application, which is load-balanced,
so user requests are sent to both client servers, which read/write data
to the same volume.

Mounting the volume on C1   :   mount -t glusterfs -o backupvolfile-server=GFS02 GFS01:/testvol /data
Mounting the volume on C2   :   mount -t glusterfs -o backupvolfile-server=GFS01 GFS02:/testvol /data

Is this the appropriate way to be followed.? 

At times, I notice that when I write data through C1-mount point the data is
written only to GFS01/B01 and if data is written through C2-mount point the
data is written only to GFS02/B01.

Please advise. !!


Thanks & Regards,
Bobby Jacob

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs looses extendeed attributes Samba 4

2013-09-18 Thread Daniel Müller
No one!?
How do I get the extended attributes passed through to the glusterfs client?
Samba4 (with the Microsoft ADS tool) does indeed create the directories on the
gluster brick /raid5hs/glusterfs/samba/ads/home
the right way, but complains that it could not.
But the glusterfs mount: xxx.xxx.xxx:/sambacluster on /mnt/glusterfs type
fuse.glusterfs (rw,allow_other,max_read=131072)
does not show the extended attributes!?

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Ursprüngliche Nachricht-
Von: Daniel Müller [mailto:muel...@tropenklinik.de] 
Gesendet: Montag, 16. September 2013 10:24
An: 'samba-boun...@lists.samba.org'
Cc: 'gluster-users@gluster.org'
Betreff: Glusterfs looses extendeed attributes Samba 4

Hello to all,

There is a strange problem with samba 4 and glusterfs 3.4 on centos 6.4.
While the glusterd XFS filesystem mounted on my /raid5hs has the right
extended attributes:

The xfs Server-Gluster-Partition mounted on /raid5hs ls -la
/raid5hs/glusterfs/samba/ads/home/Administrator insgesamt 0
drwxrwxr-x+ 2 300 users  6 23. Aug 10:35

[root@s4master bricks]# getfacl
/raid5hs/glusterfs/samba/ads/home/Administrator
getfacl: Entferne führende '/' von absoluten Pfadnamen
# file: raid5hs/glusterfs/samba/ads/home/Administrator
# owner: 300
# group: users
user::rwx
user:root:rwx
group::rwx
group:users:rwx
group:300:rwx
mask::rwx
other::r-x
default:user::rwx
default:user:root:rwx
default:user:300:rwx
default:group::r-x
default:group:users:r-x
default:mask::rwx
default:other::r-x

The gluster brick, re-mounted via the glusterfs client on /mnt/glusterfs on the same
host, has lost these attributes:

ls -la   /mnt/glusterfs/ads/home/Administrator
insgesamt 0
drwxrwxr-x 2 300 users  6 23. Aug 10:35 .


[root@s4master bricks]# getfacl  /mnt/glusterfs/ads/home/Administrator
getfacl: Entferne führende '/' von absoluten Pfadnamen
# file: mnt/glusterfs/ads/home/Administrator
# owner: 300
# group: users
user::rwx
group::rwx
other::r-x
 

The brick is mounted like this --> mount -t glusterfs
172.17.1.1:/sambacluster /mnt/glusterfs -o ac

My smb.conf:

# Global parameters
[global]
workgroup = TPLK
realm = tplk.loc
netbios name = S4MASTER
server role = active directory domain controller
server services = s3fs, rpc, nbt, wrepl, ldap, cldap, kdc, drepl,
winbind, ntp_signd, kcc, dnsupdate
idmap_ldb:use rfc2307 = yes
allow dns updates = yes
follow symlinks = yes
unix extensions = no


[netlogon]
path = /usr/local/samba/var/locks/sysvol/tplk.loc/scripts
read only = No

[sysvol]
path = /usr/local/samba/var/locks/sysvol
read only = No

[home]
path= /mnt/glusterfs/ads/home
readonly = No
posix locking =NO

Any idea!?

Greetings

-------
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Mounting same replica-volume on multiple clients. ????

2013-09-18 Thread Daniel Müller
What about 
gluster volume info on both nodes!?

Ex.:
Volume Name: sambacluster
Type: Replicate
Volume ID: 4fd0da03-8579-47cc-926b-d7577dac56cf
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: s4master:/raid5hs/glusterfs/samba
Brick2: s4slave:/raid5hs/glusterfs/samba
Options Reconfigured:
network.ping-timeout: 5
performance.quick-read: on

What are your log files telling you?
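
With 3.3 you can also check whether files are pending self-heal on only one of
the bricks (volume name as in your setup):

gluster volume heal testvol info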

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Ursprüngliche Nachricht-
Von: Bobby Jacob [mailto:bobby.ja...@alshaya.com] 
Gesendet: Mittwoch, 18. September 2013 07:48
An: muel...@tropenklinik.de; gluster-users@gluster.org
Betreff: RE: [Gluster-users] Mounting same replica-volume on multiple
clients. 

Exactly. !! BUT I am writing through the volume mount-point from the
clients. !! NOT directly into the bricks. !!

I'm using GlusterFS 3.3.2 with Centos6.4 . !

Thanks & Regards,
Bobby Jacob

-Original Message-----
From: Daniel Müller [mailto:muel...@tropenklinik.de] 
Sent: Wednesday, September 18, 2013 8:46 AM
To: Bobby Jacob; gluster-users@gluster.org
Subject: AW: [Gluster-users] Mounting same replica-volume on multiple
clients. 

Hello,
this is the behavior you would see if you wrote directly into the glusterd
directory/partition and not to the remounted replicating bricks!?


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 

Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Bobby Jacob
Gesendet: Mittwoch, 18. September 2013 07:36
An: gluster-users@gluster.org
Betreff: [Gluster-users] Mounting same replica-volume on multiple clients.


HI,

I have 2 gluster nodes (GFS01/GFS02) each with a single brick (B01/B01). I
have created a simple replica volume with these bricks. 
Bricks    : GFS01/B01 and GFS02/B01.
Volume: TestVol

I have 2 clients (C01/C02) which will mount this "testvol" for simultaneous
read/write. The 2 clients run the same application, which is load-balanced,
so user requests are sent to both client servers, which read/write data
to the same volume.

Mounting the volume on C1   :   mount -t glusterfs -o backupvolfile-server=GFS02 GFS01:/testvol /data
Mounting the volume on C2   :   mount -t glusterfs -o backupvolfile-server=GFS01 GFS02:/testvol /data

Is this the appropriate way to be followed.? 

At times, I notice that when I write data through C1-mount point the data is
written only to GFS01/B01 and if data is written through C2-mount point the
data is written only to GFS02/B01.

Please advise. !!


Thanks & Regards,
Bobby Jacob


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Mounting same replica-volume on multiple clients. ????

2013-09-17 Thread Daniel Müller
Hello,
this is the behavior you would see if you wrote directly into the glusterd
directory/partition and not to the remounted replicating bricks!?


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 

Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Bobby Jacob
Gesendet: Mittwoch, 18. September 2013 07:36
An: gluster-users@gluster.org
Betreff: [Gluster-users] Mounting same replica-volume on multiple clients.


HI,

I have 2 gluster nodes (GFS01/GFS02) each with a single brick (B01/B01). I
have created a simple replica volume with these bricks. 
Bricks    : GFS01/B01 and GFS02/B01.
Volume: TestVol

I have 2 clients (C01/C02) which will mount this "testvol" for simultaneous
read/write. The 2 clients run the same application, which is load-balanced,
so user requests are sent to both client servers, which read/write data
to the same volume.

Mounting the volume on C1   :   mount -t glusterfs -o backupvolfile-server=GFS02 GFS01:/testvol /data
Mounting the volume on C2   :   mount -t glusterfs -o backupvolfile-server=GFS01 GFS02:/testvol /data

Is this the appropriate way to be followed.? 

At times, I notice that when I write data through C1-mount point the data is
written only to GFS01/B01 and if data is written through C2-mount point the
data is written only to GFS02/B01.

Please advise. !!


Thanks & Regards,
Bobby Jacob

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Glusterfs looses extendeed attributes Samba 4

2013-09-16 Thread Daniel Müller
Hello to all,

There is a strange problem with samba 4 and glusterfs 3.4 on centos 6.4.
While the glusterd XFS filesystem mounted on my /raid5hs
has the right extended attributes:

The xfs Server-Gluster-Partition mounted on /raid5hs 
ls -la  /raid5hs/glusterfs/samba/ads/home/Administrator insgesamt 0
drwxrwxr-x+ 2 300 users  6 23. Aug 10:35

[root@s4master bricks]# getfacl
/raid5hs/glusterfs/samba/ads/home/Administrator
getfacl: Entferne führende '/' von absoluten Pfadnamen
# file: raid5hs/glusterfs/samba/ads/home/Administrator
# owner: 300
# group: users
user::rwx
user:root:rwx
group::rwx
group:users:rwx
group:300:rwx
mask::rwx
other::r-x
default:user::rwx
default:user:root:rwx
default:user:300:rwx
default:group::r-x
default:group:users:r-x
default:mask::rwx
default:other::r-x

The gluster brick, re-mounted via the glusterfs client on /mnt/glusterfs on the same
host, has lost these attributes:

ls -la   /mnt/glusterfs/ads/home/Administrator
insgesamt 0
drwxrwxr-x 2 300 users  6 23. Aug 10:35 .


[root@s4master bricks]# getfacl  /mnt/glusterfs/ads/home/Administrator
getfacl: Entferne führende '/' von absoluten Pfadnamen
# file: mnt/glusterfs/ads/home/Administrator
# owner: 300
# group: users
user::rwx
group::rwx
other::r-x
 

The brick is mounted like this --> mount -t glusterfs
172.17.1.1:/sambacluster /mnt/glusterfs -o ac

My smb.conf:

# Global parameters
[global]
workgroup = TPLK
realm = tplk.loc
netbios name = S4MASTER
server role = active directory domain controller
server services = s3fs, rpc, nbt, wrepl, ldap, cldap, kdc, drepl,
winbind, ntp_signd, kcc, dnsupdate
idmap_ldb:use rfc2307 = yes
allow dns updates = yes
follow symlinks = yes
unix extensions = no


[netlogon]
path = /usr/local/samba/var/locks/sysvol/tplk.loc/scripts
read only = No

[sysvol]
path = /usr/local/samba/var/locks/sysvol
read only = No

[home]
path= /mnt/glusterfs/ads/home
readonly = No
posix locking =NO

Any idea!?

Greetings

-------
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] compiling samba vfs module

2013-09-10 Thread Daniel Müller
Hi again,

I did not manage to bring Samba 4 up with Samba's glusterfs vfs module on
CentOS 6.4 with glusterfs-3.4.0 either.
I think it only works with a specific version combination.

Greetings
Daniel


---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Ursprüngliche Nachricht-
Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Tamas Papp
Gesendet: Dienstag, 10. September 2013 23:57
An: gluster-users@gluster.org
Betreff: [Gluster-users] compiling samba vfs module

hi All,

The system is Ubuntu 12.04

I download and extracted source packages of samba and glusterfs and I built
glusterfs, so I get the right necessary structure:
glusterfs version is 3.4 and it's from ppa.

# ls
/data/gluster/glusterfs-3.4.0final/debian/tmp/usr/include/glusterfs/api/glfs
.h
/data/gluster/glusterfs-3.4.0final/debian/tmp/usr/include/glusterfs/api/glfs
.h


Unfortunately I'm getting this error:

# ./configure --with-samba-source=/data/samba/samba-3.6.3/
--with-glusterfs=/data/gluster/glusterfs-3.4.0final/debian/tmp/usr/include/g
lusterfs/
checking for gcc... gcc
checking whether the C compiler works... yes checking for C compiler default
output file name... a.out checking for suffix of executables...
checking whether we are cross compiling... no checking for suffix of object
files... o checking whether we are using the GNU C compiler... yes checking
whether gcc accepts -g... yes checking for gcc option to accept ISO C89...
none needed checking for a BSD-compatible install... /usr/bin/install -c
checking build system type... x86_64-unknown-linux-gnu checking host system
type... x86_64-unknown-linux-gnu checking how to run the C preprocessor...
gcc -E checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E checking for ANSI C header files... yes
checking for sys/types.h... yes checking for sys/stat.h... yes checking for
stdlib.h... yes checking for string.h... yes checking for memory.h... yes
checking for strings.h... yes checking for inttypes.h... yes checking for
stdint.h... yes checking for unistd.h... yes checking api/glfs.h
usability... no checking api/glfs.h presence... no checking for
api/glfs.h... no

Cannot find api/glfs.h. Please specify --with-glusterfs=dir if necessary



I don't see, what its problem is?


Thanks,
tamas
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Error when creating volume

2013-08-22 Thread Daniel Müller
Just use another directory than export. Or:

setfattr -x trusted.glusterfs.volume-id $brick_path
setfattr -x trusted.gfid $brick_path
rm -rf $brick_path/.glusterfs
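
For example, for the /export brick from your create command (run this on every
server that reports the error, then retry the volume create):

setfattr -x trusted.glusterfs.volume-id /export
setfattr -x trusted.gfid /export
rm -rf /export/.glusterfs
getfattr -d -m . -e hex /export   # verify no trusted.* attributes are left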

EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 

Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Olivier Desport
Gesendet: Donnerstag, 22. August 2013 15:07
An: gluster-users@gluster.org
Betreff: [Gluster-users] Error when creating volume

Hello,

I've removed a volume and I can't re-create it :

gluster volume create gluster-export gluster-6:/export gluster-5:/export
gluster-4:/export gluster-3:/export
/export or a prefix of it is already part of a volume

I've formatted the partition and reinstalled the 4 gluster servers and the
error still appears.

Any idea ?

Thanks.
-- 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Volume creation fails with "prefix of it is already part of a volume"

2013-08-21 Thread Daniel Müller
In this case I did not have any other brick running. It was my first try.
Just a fresh install and then set up my 
first replicating vol with 3.4:
On node1  /raid5hs/glusterfs/export
On node2 /raid5hs/glusterfs/export
Just did on the first node: gluster volume create sambacluster replica 2
transport tcp  s4master:/raid5hs/glusterfs/export
s4slave:/raid5hs/glusterfs/export

And voila the error occurred!!
I think this is something more serious in 3.4. I did not have that with
3.2!!

One behaviour I noticed with this error: only on one node was the .glusterfs
directory written in /raid5hs/glusterfs/export; on the other node it was
missing!!
This corresponds to the /etc/glusterfs/glusterd.vol file that was missing later
on when starting the brick!
I had to copy it over from the other node to start it successfully.


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 

Von: Anand Avati [mailto:anand.av...@gmail.com] 
Gesendet: Donnerstag, 22. August 2013 08:23
An: Stroppa Daniele (strp)
Cc: muel...@tropenklinik.de; gluster-users@gluster.org
Betreff: Re: [Gluster-users] Volume creation fails with "prefix of it is
already part of a volume"

This is intentional behavior. We specifically brought this because creating
volumes with directories which are subdirectories of other bricks, or if a
subdirectory belongs to another brick can result in dangerous corruption of
your data.

Please create volumes with brick directories which are cleanly separate in
the namespace.

Avati

On Wed, Aug 21, 2013 at 7:51 AM, Stroppa Daniele (strp) 
wrote:
Thanks Daniel.

I'm indeed running Gluster 3.4 on CentOS 6.4.

I've tried your suggestion, it does work for me too, but it's not an
optimal solution.

Maybe someone could shed some light on this behaviour?

Thanks,
--
Daniele Stroppa
Researcher
Institute of Information Technology
Zürich University of Applied Sciences
http://www.cloudcomp.ch <http://www.cloudcomp.ch/>







On 21/08/2013 09:00, "Daniel Müller"  wrote:

>Are you running with gluster 3.4?
>I had the same issue. I solved it by deleting my subfolders again and then
>create new ones. In your case brick1 and brick2.
>Then create new subfolders in the place,ex.: mkdir /mnt/bricknew1  and
>/mnt/bricknew2 .
>This solved the problem for me, not knowing why gluster 3.4 behave like
>this.
>Good Luck
>
>
>EDV Daniel Müller
>
>Leitung EDV
>Tropenklinik Paul-Lechler-Krankenhaus
>Paul-Lechler-Str. 24
>72076 Tübingen
>Tel.: 07071/206-463, Fax: 07071/206-499
>eMail: muel...@tropenklinik.de
>Internet: www.tropenklinik.de
>
>Von: gluster-users-boun...@gluster.org
>[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Stroppa Daniele
>(strp)
>Gesendet: Dienstag, 20. August 2013 21:51
>An: gluster-users@gluster.org
>Betreff: [Gluster-users] Volume creation fails with "prefix of it is
>already
>part of a volume"
>
>Hi All,
>
>I'm setting up a small test cluster: 2 nodes (gluster-node1 and
>gluster-node4) with 2 bricks each (/mnt/brick1 and /mnt/brick2) and one
>volume (vol_icclab). When I issue the create volume command I get the
>following error:
>
># gluster volume create vol_icclab replica 2 transport tcp
>gluster-node4.test:/mnt/brick1/vol_icclab
>gluster-node1.test:/mnt/brick1/vol_icclab
>gluster-node4.test:/mnt/brick2/vol_icclab
>gluster-node1.test:/mnt/brick2/vol_icclab
>volume create: vol_icclab: failed: /mnt/brick1/vol_icclab or a prefix of
>it
>is already part of a volume
>
>I checked and found this [1], but in my case the issue it's happening when
>creating the volume for the first time, not after removing/adding a brick
>to
>a volume.
>
>Any suggestions?
>
>[1]
>http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-p
>art-of-a-volume/
>
>Thanks,
>--
>Daniele Stroppa
>Researcher
>Institute of Information Technology
>Zürich University of Applied Sciences
>http://www.cloudcomp.ch
>
>

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Volume creation fails with "prefix of it is already part of a volume"

2013-08-21 Thread Daniel Müller
There is another bug concerning 3.4 on CentOS 6.4. If you create a
replicating vol, sometimes the glusterd.vol
file is not created in /etc/glusterfs on one of the nodes. Then you have to copy
it over from one of the working nodes.
Or you lose it on one host; then do the same.
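
The copy step is just something like this (hostnames are from my own setup,
adjust to yours), run on the node where the file is missing:

scp s4master:/etc/glusterfs/glusterd.vol /etc/glusterfs/glusterd.vol
service glusterd restart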

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Ursprüngliche Nachricht-
Von: Stroppa Daniele (strp) [mailto:s...@zhaw.ch] 
Gesendet: Mittwoch, 21. August 2013 16:52
An: muel...@tropenklinik.de; gluster-users@gluster.org
Betreff: Re: AW: [Gluster-users] Volume creation fails with "prefix of it is
already part of a volume"

Thanks Daniel.

I'm indeed running Gluster 3.4 on CentOS 6.4.

I've tried your suggestion, it does work for me too, but it's not an optimal
solution.

Maybe someone could shed some light on this behaviour?

Thanks,
--
Daniele Stroppa
Researcher
Institute of Information Technology
Zürich University of Applied Sciences
http://www.cloudcomp.ch <http://www.cloudcomp.ch/>







On 21/08/2013 09:00, "Daniel Müller"  wrote:

>Are you running with gluster 3.4?
>I had the same issue. I solved it by deleting my subfolders again and 
>then create new ones. In your case brick1 and brick2.
>Then create new subfolders in the place,ex.: mkdir /mnt/bricknew1  and
>/mnt/bricknew2 .
>This solved the problem for me, not knowing why gluster 3.4 behave like 
>this.
>Good Luck
>
>
>EDV Daniel Müller
>
>Leitung EDV
>Tropenklinik Paul-Lechler-Krankenhaus
>Paul-Lechler-Str. 24
>72076 Tübingen
>Tel.: 07071/206-463, Fax: 07071/206-499
>eMail: muel...@tropenklinik.de
>Internet: www.tropenklinik.de
>
>Von: gluster-users-boun...@gluster.org
>[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Stroppa 
>Daniele
>(strp)
>Gesendet: Dienstag, 20. August 2013 21:51
>An: gluster-users@gluster.org
>Betreff: [Gluster-users] Volume creation fails with "prefix of it is 
>already part of a volume"
>
>Hi All,
>
>I'm setting up a small test cluster: 2 nodes (gluster-node1 and
>gluster-node4) with 2 bricks each (/mnt/brick1 and /mnt/brick2) and one 
>volume (vol_icclab). When I issue the create volume command I get the 
>following error:
>
># gluster volume create vol_icclab replica 2 transport tcp 
>gluster-node4.test:/mnt/brick1/vol_icclab
>gluster-node1.test:/mnt/brick1/vol_icclab
>gluster-node4.test:/mnt/brick2/vol_icclab
>gluster-node1.test:/mnt/brick2/vol_icclab
>volume create: vol_icclab: failed: /mnt/brick1/vol_icclab or a prefix 
>of it is already part of a volume
>
>I checked and found this [1], but in my case the issue it's happening 
>when creating the volume for the first time, not after removing/adding 
>a brick to a volume.
>
>Any suggestions?
>
>[1]
>http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-
>p
>art-of-a-volume/
>
>Thanks,
>--
>Daniele Stroppa
>Researcher
>Institute of Information Technology
>Zürich University of Applied Sciences
>http://www.cloudcomp.ch
>
>


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Volume creation fails with "prefix of it is already part of a volume"

2013-08-21 Thread Daniel Müller
Are you running with gluster 3.4?
I had the same issue. I solved it by deleting my subfolders and creating
new ones, in your case brick1 and brick2.
Then create new subfolders in their place, e.g. mkdir /mnt/bricknew1 and
/mnt/bricknew2.
This solved the problem for me, though I do not know why gluster 3.4 behaves
like this.
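If you do not want to recreate the directories, the blog post linked as [1] in
your mail below describes clearing the gluster xattrs on the brick instead.
An untested sketch (the brick path is only an example; run it on every affected
brick at your own risk):

setfattr -x trusted.glusterfs.volume-id /mnt/brick1
setfattr -x trusted.gfid /mnt/brick1
rm -rf /mnt/brick1/.glusterfs
service glusterd restart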
Good Luck


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 

Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Stroppa Daniele
(strp)
Gesendet: Dienstag, 20. August 2013 21:51
An: gluster-users@gluster.org
Betreff: [Gluster-users] Volume creation fails with "prefix of it is already
part of a volume"

Hi All,

I'm setting up a small test cluster: 2 nodes (gluster-node1 and
gluster-node4) with 2 bricks each (/mnt/brick1 and /mnt/brick2) and one
volume (vol_icclab). When I issue the create volume command I get the
following error:

# gluster volume create vol_icclab replica 2 transport tcp
gluster-node4.test:/mnt/brick1/vol_icclab
gluster-node1.test:/mnt/brick1/vol_icclab
gluster-node4.test:/mnt/brick2/vol_icclab
gluster-node1.test:/mnt/brick2/vol_icclab
volume create: vol_icclab: failed: /mnt/brick1/vol_icclab or a prefix of it
is already part of a volume

I checked and found this [1], but in my case the issue it's happening when
creating the volume for the first time, not after removing/adding a brick to
a volume.

Any suggestions?

[1] http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/

Thanks,
--
Daniele Stroppa
Researcher
Institute of Information Technology
Zürich University of Applied Sciences
http://www.cloudcomp.ch


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] standby-server

2013-08-18 Thread Daniel Müller
What is the reason for a "spare server"? With gluster you just replicate, so you
have the real state all the time!?
Or just do geo-replication to a non-gluster machine!?
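Roughly like this (only a sketch, volume and host names are made up; check the
geo-replication syntax of your gluster version):

gluster volume geo-replication yourvol backuphost:/data/backup-dir start
gluster volume geo-replication yourvol backuphost:/data/backup-dir status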

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Ursprüngliche Nachricht-
Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Ted Miller
Gesendet: Freitag, 16. August 2013 18:29
An: gluster-users@gluster.org
Betreff: [Gluster-users] standby-server

I am looking at glusterfs for a HA application w/o local tech support (don't
ask, third-world country, techs are hard to find).

My current plan is to do a replica-4 + hot spare server.  Of the four in-use
bricks, two will be on "servers" and the other two will be on a "client" 
machine and a "hot-backup" client machine.  No striping, all content on each
local machine, each machine using its own disk for all reading.

Part of my plan is to have a cold-spare server in the rack, not powered on.

(This server will also be a cold spare for another server).

I am wondering if this would be a viable way to set up this configuration:

Set up glusterfs as replica-5.

1. server1
2. server2
3. client
4. client-standby
5. server-spare

Initialize and set up glusterfs with all 5 bricks in the system (no file
content).
Install system at client site, and test with all 5 bricks in system.

Shut down spare server.

Once a month, power up spare server, run full heal, shut down.
Power up server-spare for any software updates.

If server1 or server2 dies (or needs maintenance), tell them to power up
server-spare, and let it heal.
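(A rough sketch of what that heal step could look like, assuming a replica
volume named vol0; I have not tested this exact layout:)

gluster volume heal vol0 full    # trigger a full self-heal from the live bricks
gluster volume heal vol0 info    # check that nothing is left to heal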

It seems to me that this would be easier than setting up a replica-4 system
and then jumping through all the hoops to replace a server from scratch.

Comments, reactions, pot-shots welcome.
Ted Miller

--
"He is no fool who gives what he cannot keep, to gain what he cannot lose."
- - Jim Elliot For more information about Jim Elliot and his unusual life,
see http://www.christianliteratureandliving.com/march2003/carolyn.html.

Ted Miller
Design Engineer
HCJB Global Technology Center
2830 South 17th St
Elkhart, IN  46517
574--970-4272 my desk
574--970-4252 receptionist


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Status of the SAMBA module

2013-07-28 Thread Daniel Müller
But you need to have gluster installed!? Which version?
Samba 4.1 does not compile with the latest glusterfs 3.4 on CentOS 6.4.



EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 

Von: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von RAGHAVENDRA TALUR
Gesendet: Freitag, 26. Juli 2013 08:26
An: gluster-de...@nongnu.org; gluster-users@gluster.org
Betreff: Re: [Gluster-users] Status of the SAMBA module

Hello Everyone,

Thank you for showing interest in this feature.

We are actively working on this and recently fixed a few bugs. We are testing 
the patches and they will be upstream after that.(estimating it to be fairly 
tested in 2 weeks).

Here is a quick overview of the new integration method

 ------------------------
   Windows Client
 ------------------------
          ||
          ||
 ------------------------
   SAMBA (version 3.6)
          V
   VFS PLUGIN (a Samba VFS plugin; Samba allows you to write a plugin
   which becomes the recipient of calls that were originally supposed
   to go to the VFS layer. It is activated on a per-share basis.)
          V
   GFAPI
 ------------------------
          ||
          ||
 ------------------------
   GLUSTERFSD
 ------------------------

 Legend:
 --- : boundaries of a box/node
 ||  : network path
 V   : data transfer in both directions within the same box


The change regarding usage is that you need to have the gluster shares defined
in smb.conf in this way. The entry should look like:
[gluster-volname]
comment = For samba export of volume 
vfs objects = glusterfs
glusterfs:volume = 
path = /
read only =no
guest ok = yes
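Just as an illustration (host, share and user names below are made up, not part
of the guide), once such a share is defined you can test it with a normal SMB
client:

smbclient //samba-server/gluster-volname -U someuser
# or mount it from a Linux client:
mount -t cifs //samba-server/gluster-volname /mnt/share -o username=someuser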

We will upload the detailed user-guide and other documentation soon.

Thanks
Raghavendra Talur

On Thu, Jul 25, 2013 at 7:13 AM, haiwei.xie-soulinfo  
wrote:

When will this new feature be released?

Thanks,
terrs

On Wed, 24 Jul 2013 21:57:18 +0200
Tamas Papp  wrote:

> On 07/24/2013 09:41 PM, John Mark Walker wrote:
> > It enables Samba to connect via libgfapi to Gluster volumes, thus giving 
> > SMB clients a way of
> > easily connecting to GlusterFS.
> >
> > I've CC'd Chris Hertel, who should be able to direct you to the right area.
>
> I'm also interested very much.
>
> Chris, could you send the information to the list?
>
>
> Thanks,
> tamas
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

--
谢海威
Software Project Manager
Tel:  +86 10-68920588
Mobile:  +86 13911703586
Email:  haiwei@soulinfo.com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users




-- 
Raghavendra Talur 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster brick on drbd device

2013-07-28 Thread Daniel Müller
With gluster running on replicated volumes, why do you need drbd!?
Plain xfs or ext3/4 underneath is sufficient.

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Ursprüngliche Nachricht-
Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Fiorenza Meini
Gesendet: Freitag, 26. Juli 2013 11:25
An: gluster-users@gluster.org
Betreff: [Gluster-users] Gluster brick on drbd device

Hi there,
has someone experienced glusterfs with bricks configured on drbd device?

Thanks and regards
Fiorenza
-- 

Fiorenza Meini
Spazio Web S.r.l.

V. Dante Alighieri, 10 - 13900 Biella
Tel.: 015.2431982 - 015.9526066
Fax: 015.2522600
Reg. Imprese, CF e P.I.: 02414430021
Iscr. REA: BI - 188936
Iscr. CCIAA: Biella - 188936
Cap. Soc.: 30.000,00 Euro i.v.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Crash in glusterfsd 3.4.0 beta1 and "Transport endpoint is not connected"

2013-05-23 Thread Daniel Müller
I had the same problem with gluster 3.2 syncing two bricks.
Look at your glusterfs-export.log, mount-glusterfs.log or something like
this.
For me the cause was some files that triggered the issue:
-->  [2013-04-25 12:36:19.127124] E
[afr-self-heal-metadata.c:521:afr_sh_metadata_fix] 0-sambavol-replicate-0:
Unable to self-heal permissions/ownership of
'/windows/winuser/x/xxx/xxx/xxx 2013/xxx.xls' (possible split-brain).
Please fix the file on all backend volumes

After removing these files everything was up and running again.

Good Luck
Daniel

-------
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Ursprüngliche Nachricht-
Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Alessandro De
Salvo
Gesendet: Donnerstag, 23. Mai 2013 10:18
An: gluster-users@gluster.org
Betreff: [Gluster-users] Crash in glusterfsd 3.4.0 beta1 and "Transport
endpoint is not connected"

Hi,
I have a replicated volume among two fedora 18 machines using glusterfs
3.4.0 beta1 from rawhide. All is fine with glusterd, and the replication is
perfomed correctly, but every time I try to access any file from the fuse
mounts I see this kind of errors in /var/log/glusterfs/.log,
leading to "Transport endpoint is not connected" so the filesystems get
unmounted:

[2013-05-23 08:06:24.302332] I [afr-common.c:3709:afr_notify]
0-adsroma1-gluster-data01-replicate-0: Subvolume
'adsroma1-gluster-data01-client-1' came back up; going online.
[2013-05-23 08:06:24.302706] I
[client-handshake.c:450:client_set_lk_version_cbk]
0-adsroma1-gluster-data01-client-1: Server lk version = 1
[2013-05-23 08:06:24.316318] I
[client-handshake.c:1658:select_server_supported_programs]
0-adsroma1-gluster-data01-client-0: Using Program GlusterFS 3.3, Num
(1298437), Version (330)
[2013-05-23 08:06:24.336718] I
[client-handshake.c:1456:client_setvolume_cbk]
0-adsroma1-gluster-data01-client-0: Connected to 127.0.0.1:49157, attached
to remote volume '/gluster/data01/files'.
[2013-05-23 08:06:24.336732] I
[client-handshake.c:1468:client_setvolume_cbk]
0-adsroma1-gluster-data01-client-0: Server and Client lk-version numbers are
not same, reopening the fds
[2013-05-23 08:06:24.344178] I [fuse-bridge.c:4723:fuse_graph_setup] 0-fuse:
switched to graph 0
[2013-05-23 08:06:24.344372] I
[client-handshake.c:450:client_set_lk_version_cbk]
0-adsroma1-gluster-data01-client-0: Server lk version = 1
[2013-05-23 08:06:24.344502] I [fuse-bridge.c:3680:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel
7.21
[2013-05-23 08:06:24.345008] I
[afr-common.c:2059:afr_set_root_inode_on_first_lookup]
0-adsroma1-gluster-data01-replicate-0: added root inode
[2013-05-23 08:06:24.345240] I [afr-common.c:2122:afr_discovery_cbk]
0-adsroma1-gluster-data01-replicate-0: selecting local read_child
adsroma1-gluster-data01-client-0




pending frames:
frame : type(1) op(READ)
frame : type(1) op(OPEN)
frame : type(0) op(0)

patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2013-05-23 08:08:20configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.4.0beta1
/usr/lib64/libc.so.6[0x3c51035b50]
/usr/lib64/glusterfs/3.4.0beta1/xlator/performance/io-cache.so(ioc_open_cbk+
0x8b)[0x7fb93cd2bc4b]
/usr/lib64/glusterfs/3.4.0beta1/xlator/performance/read-ahead.so(ra_open_cbk
+0x1c1)[0x7fb93cf3a951]
/usr/lib64/glusterfs/3.4.0beta1/xlator/cluster/distribute.so(dht_open_cbk+0x
e0)[0x7fb93d37f890]
/usr/lib64/glusterfs/3.4.0beta1/xlator/cluster/replicate.so(afr_open_cbk+0x2
9c)[0x7fb93d5bf60c]
/usr/lib64/glusterfs/3.4.0beta1/xlator/protocol/client.so(client3_3_open_cbk
+0x174)[0x7fb93d82f5c4]
/usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)[0x3c5300e880]
/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x101)[0x3c5300ea81]
/usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x3c5300b0d3]
/usr/lib64/glusterfs/3.4.0beta1/rpc-transport/socket.so(socket_event_poll_in
+0x34)[0x7fb93eefa6a4]
/usr/lib64/glusterfs/3.4.0beta1/rpc-transport/socket.so(socket_event_handler
+0x11c)[0x7fb93eefa9dc]
/usr/lib64/libglusterfs.so.0[0x3c5285923b]
/usr/sbin/glusterfs(main+0x3a4)[0x4049d4]
/usr/lib64/libc.so.6(__libc_start_main+0xf5)[0x3c51021c35]
/usr/sbin/glusterfs[0x404d49]
-


The volume is defined as follows:

Volume Name: adsroma1-gluster-data01
Type: Replicate
Volume ID: 1ca608c7-8a9d-4d8c-ac05-fabc2d2c2565
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: pc-ads-02.roma1.infn.it:/gluster/data01/files
Brick2: pc-ads-03.roma1.infn.it:/gluster/data01/files

Is it a known problem with thi

Re: [Gluster-users] Testing VM Storage with Fedora 19

2013-05-14 Thread Daniel Müller
Try cache=writethrough
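For illustration only, a sketch of how the gluster:// URL and the cache mode
are usually combined once qemu is actually built with gluster support, reusing
the names from your error message (untested here):

# create the image through libgfapi
qemu-img create -f qcow2 gluster://test1/vmstore/testvm.qcow2 20G
# attach it to a guest with writethrough caching
qemu-kvm -drive file=gluster://test1/vmstore/testvm.qcow2,if=virtio,cache=writethrough ...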

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Ursprüngliche Nachricht-
Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Andrew
Niemantsverdriet
Gesendet: Dienstag, 14. Mai 2013 16:00
Cc: gluster-users
Betreff: Re: [Gluster-users] Testing VM Storage with Fedora 19

It looks as though the qemu package is built without Gluster support.
So I added the --with-glusterfs flag and tried to rebuild the RPM. I am now
getting this error:

ERROR
ERROR: User requested feature GlusterFS backend support
ERROR: configure was not able to find it ERROR

What is configure looking for that it can't find? Again glusterfs-server and
glusterfs-devel packages are installed.

On Tue, May 14, 2013 at 12:05 AM, Daniel Müller 
wrote:
> Hi,
> I am serving several vmware vmdk on a proxmox gluster using ext4 in a 
> gluster replicating volume without any problem.
>
> -------
> EDV Daniel Müller
>
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen
>
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: muel...@tropenklinik.de
> Internet: www.tropenklinik.de
> ---
> -Ursprüngliche Nachricht-
> Von: gluster-users-boun...@gluster.org 
> [mailto:gluster-users-boun...@gluster.org] Im Auftrag von Andrew 
> Niemantsverdriet
> Gesendet: Dienstag, 14. Mai 2013 04:21
> Cc: gluster-users
> Betreff: Re: [Gluster-users] Testing VM Storage with Fedora 19
>
> The exact error is qemu-img: Unknown protocol 
> 'gluster://test1:/vmstore/testvm
>
> I don't think is is related to the backing file system, I think it is 
> more of some sort of incompatibility with qemu-img and GlusterFS. But 
> for completeness sake the backing file system is ext4. I can't create 
> an image so I don't am not sure what the cache mode is.
>
> Thanks,
>  _
> /-\ ndrew
>
> On Mon, May 13, 2013 at 8:05 PM, Jacob Yundt  wrote:
>>> However when trying to create the image with qemu-img create I get 
>>> an
> unknown protocal error.
>>
>> What errors are you getting?
>>
>> What cache mode are you using for qemu images?
>>
>> What is the backing filesystem on your gluster bricks? [ext4, xfs, 
>> etc]
>>
>> I'm troubleshooting a problem
>> (https://bugzilla.redhat.com/show_bug.cgi?id=958781) with xfs backed 
>> filesystems + KVM/qemu and I'm wondering if you are getting the same 
>> errors.
>>
>> -Jacob
>
>
>
> --
>  _
> /-\ ndrew Niemantsverdriet
> Linux System Administrator
> Academic Computing
> (406) 238-7360
> Rocky Mountain College
> 1511 Poly Dr.
> Billings MT, 59102
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>



--
 _
/-\ ndrew Niemantsverdriet
Linux System Administrator
Academic Computing
(406) 238-7360
Rocky Mountain College
1511 Poly Dr.
Billings MT, 59102
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Testing VM Storage with Fedora 19

2013-05-13 Thread Daniel Müller
Hi,
I am serving several VMware vmdk files on Proxmox from a replicated gluster
volume using ext4, without any problem.

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Ursprüngliche Nachricht-
Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Andrew
Niemantsverdriet
Gesendet: Dienstag, 14. Mai 2013 04:21
Cc: gluster-users
Betreff: Re: [Gluster-users] Testing VM Storage with Fedora 19

The exact error is qemu-img: Unknown protocol
'gluster://test1:/vmstore/testvm

I don't think it is related to the backing file system; I think it is more
of some sort of incompatibility between qemu-img and GlusterFS. But for
completeness' sake, the backing file system is ext4. I can't create an image,
so I am not sure what the cache mode is.

Thanks,
 _
/-\ ndrew

On Mon, May 13, 2013 at 8:05 PM, Jacob Yundt  wrote:
>> However when trying to create the image with qemu-img create I get an
unknown protocal error.
>
> What errors are you getting?
>
> What cache mode are you using for qemu images?
>
> What is the backing filesystem on your gluster bricks? [ext4, xfs, 
> etc]
>
> I'm troubleshooting a problem
> (https://bugzilla.redhat.com/show_bug.cgi?id=958781) with xfs backed 
> filesystems + KVM/qemu and I'm wondering if you are getting the same 
> errors.
>
> -Jacob



--
 _
/-\ ndrew Niemantsverdriet
Linux System Administrator
Academic Computing
(406) 238-7360
Rocky Mountain College
1511 Poly Dr.
Billings MT, 59102
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs gives up with endpoint not connected

2013-04-09 Thread Daniel Müller
So what I did to make it more stable, without my volumes going "endpoint not
connected", is to run rsync not on my mounted volumes but directly on my
glusterfs/export and the underlying directories. This seems to be more
stable. The only thing I
must watch is that none of my scripts writes to glusterfs/export.
In any case there is an urgent need for this "snapshot" feature.
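A minimal sketch of what I mean (paths are only examples; read from the brick,
never write to it outside of gluster):

rsync -a --delete /glusterfs/export/ /backup/glusterfs-export/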

-------
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Ursprüngliche Nachricht-
Von: Hans Lambermont [mailto:h...@shapeways.com] 
Gesendet: Dienstag, 9. April 2013 16:45
An: Daniel Müller
Cc: gluster-users@gluster.org
Betreff: Re: [Gluster-users] Glusterfs gives up with endpoint not connected

Hi Daniel,

Daniel Müller wrote on 20130409:

> After some testing I found out that my backup using rsync caused the 
> error 'endpoint not connected'.

I see this too from time to time. I have to kill all processes that have an
open filedescriptor on the mountpoint, then umount and mount again.

> Is there a way to take backup snapshots from the volumes by gluster
itself?

Not yet, it's on the feature roadmap :
http://www.gluster.org/community/documentation/index.php/Features/snapshot

-- Hans
--
Hans Lambermont | Senior Architect
(t) +31407370104 (w) www.shapeways.com

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs gives up with endpoint not connected

2013-04-09 Thread Daniel Müller
After some testing I found out that my backup using rsync caused the error
'endpoint not connected'.
After stopping the cron job everything seemed to be OK.
Is there a way to take backup snapshots of the volumes with gluster itself?



---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Ursprüngliche Nachricht-
Von: Daniel Müller [mailto:muel...@tropenklinik.de] 
Gesendet: Donnerstag, 28. März 2013 14:17
An: 'muel...@tropenklinik.de'; 'Pranith Kumar K'
Cc: 'Reinhard Marstaller'; 'gluster-users@gluster.org'
Betreff: AW: [Gluster-users] Glusterfs gives up with endpoint not connected

The third part, the output of /var/log/messages concerning raid5 HS:

[root@tuepdc /]# tail -f /var/log/messages
Mar 28 13:21:32 tuepdc kernel: SCSI device sdd: drive cache: write back
Mar 28 13:21:32 tuepdc kernel: SCSI device sde: 1953525168 512-byte hdwr sectors (1000205 MB)
Mar 28 13:21:32 tuepdc kernel: sde: Write Protect is off
Mar 28 13:21:32 tuepdc kernel: SCSI device sde: drive cache: write back
Mar 28 13:21:32 tuepdc kernel: SCSI device sdf: 1953525168 512-byte hdwr sectors (1000205 MB)
Mar 28 13:21:32 tuepdc kernel: sdf: Write Protect is off
Mar 28 13:21:32 tuepdc kernel: SCSI device sdf: drive cache: write back
Mar 28 13:21:32 tuepdc kernel: SCSI device sdg: 1953525168 512-byte hdwr sectors (1000205 MB)
Mar 28 13:21:32 tuepdc kernel: sdg: Write Protect is off
Mar 28 13:21:32 tuepdc kernel: SCSI device sdg: drive cache: write back

-------
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Ursprüngliche Nachricht-
Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Daniel Müller
Gesendet: Donnerstag, 28. März 2013 13:57
An: 'Pranith Kumar K'
Cc: 'Reinhard Marstaller'; gluster-users@gluster.org
Betreff: Re: [Gluster-users] Glusterfs gives up with endpoint not connected

Now part two of raid5hs-glusterfs-export.log:

attr (utimes) on
/raid5hs/glusterfs/export/windows/winuser/schneider/schneider/V
erwaltung/baummanahmen/Bauvorhaben
Umsetzung/Parkierung/SKIZZE_TPLK_Lageplan
.pdf failed: Read-only file system
pending frames:

patchset: v3.2.0
signal received: 11
time of crash: 2013-03-25 22:50:46
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.2.0
/lib64/libc.so.6[0x30c0a302d0]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/features/marker.so(marker_
setattr_cbk+0x139)[0x2ba9de79]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/performance/io-threads.so(
iot_setattr_cbk+0x88)[0x2b88d718]
/opt/glusterfs/3.2.0/lib64/libglusterfs.so.0(default_setattr_cbk+0x88)[0x2b1
a834a5f28]
/opt/glusterfs/3.2.0/lib64/libglusterfs.so.0(default_setattr_cbk+0x88)[0x2b1
a834a5f28]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/storage/posix.so(posix_set
attr+0x1fc)[0x2b2560bc]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/features/access-control.so
(ac_setattr_resume+0xe9)[0x2b469039]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/features/access-control.so
(ac_setattr+0x49)[0x2b46a979]
/opt/glusterfs/3.2.0/lib64/libglusterfs.so.0(default_setattr+0xe9)[0x2b1a834
9f659]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/performance/io-threads.so(
iot_setattr_wrapper+0xe9)[0x2b890749]
/opt/glusterfs/3.2.0/lib64/libglusterfs.so.0(call_resume+0xd81)[0x2b1a834b01
91]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/performance/io-threads.so(
iot_worker+0x119)[0x2b894229]
/lib64/libpthread.so.0[0x30c160673d]
/lib64/libc.so.6(clone+0x6d)[0x30c0ad44bd]
-
[2013-03-26 08:04:48.577056] W [socket.c:419:__socket_keepalive] 0-socket:
failed to set keep idle on socket 8
[2013-03-26 08:04:48.613068] W [socket.c:1846:socket_server_event_handler]
0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
[2013-03-26 08:04:49.187484] W [graph.c:274:gf_add_cmdline_options]
0-sambavol-server: adding option 'listen-port' for volume 'sambavol-server'
with value '24009'
[2013-03-26 08:04:49.253395] W [rpc-transport.c:447:validate_volume_options]
0-tcp.sambavol-server: option 'listen-port' is deprecated, preferred is
'transport.socket.listen-port', continuing with correction
[2013-03-26 08:04:49.287651] E [posix.c:4369:init] 0-sambavol-posix:
Directory '/raid5hs/glusterfs/export' doesn't exist, exiting.
[2013-03-26

Re: [Gluster-users] Glusterfs gives up with endpoint not connected

2013-03-28 Thread Daniel Müller
The third part, the output of /var/log/messages concerning raid5 HS:

[root@tuepdc /]# tail -f /var/log/messages
Mar 28 13:21:32 tuepdc kernel: SCSI device sdd: drive cache: write back
Mar 28 13:21:32 tuepdc kernel: SCSI device sde: 1953525168 512-byte hdwr
sectors (1000205 MB)
Mar 28 13:21:32 tuepdc kernel: sde: Write Protect is off
Mar 28 13:21:32 tuepdc kernel: SCSI device sde: drive cache: write back
Mar 28 13:21:32 tuepdc kernel: SCSI device sdf: 1953525168 512-byte hdwr
sectors (1000205 MB)
Mar 28 13:21:32 tuepdc kernel: sdf: Write Protect is off
Mar 28 13:21:32 tuepdc kernel: SCSI device sdf: drive cache: write back
Mar 28 13:21:32 tuepdc kernel: SCSI device sdg: 1953525168 512-byte hdwr
sectors (1000205 MB)
Mar 28 13:21:32 tuepdc kernel: sdg: Write Protect is off
Mar 28 13:21:32 tuepdc kernel: SCSI device sdg: drive cache: write back

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Ursprüngliche Nachricht-
Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Daniel Müller
Gesendet: Donnerstag, 28. März 2013 13:57
An: 'Pranith Kumar K'
Cc: 'Reinhard Marstaller'; gluster-users@gluster.org
Betreff: Re: [Gluster-users] Glusterfs gives up with endpoint not connected

Now part two of raid5hs-glusterfs-export.log:

attr (utimes) on
/raid5hs/glusterfs/export/windows/winuser/schneider/schneider/V
erwaltung/baummanahmen/Bauvorhaben
Umsetzung/Parkierung/SKIZZE_TPLK_Lageplan
.pdf failed: Read-only file system
pending frames:

patchset: v3.2.0
signal received: 11
time of crash: 2013-03-25 22:50:46
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.2.0
/lib64/libc.so.6[0x30c0a302d0]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/features/marker.so(marker_
setattr_cbk+0x139)[0x2ba9de79]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/performance/io-threads.so(
iot_setattr_cbk+0x88)[0x2b88d718]
/opt/glusterfs/3.2.0/lib64/libglusterfs.so.0(default_setattr_cbk+0x88)[0x2b1
a834a5f28]
/opt/glusterfs/3.2.0/lib64/libglusterfs.so.0(default_setattr_cbk+0x88)[0x2b1
a834a5f28]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/storage/posix.so(posix_set
attr+0x1fc)[0x2b2560bc]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/features/access-control.so
(ac_setattr_resume+0xe9)[0x2b469039]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/features/access-control.so
(ac_setattr+0x49)[0x2b46a979]
/opt/glusterfs/3.2.0/lib64/libglusterfs.so.0(default_setattr+0xe9)[0x2b1a834
9f659]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/performance/io-threads.so(
iot_setattr_wrapper+0xe9)[0x2b890749]
/opt/glusterfs/3.2.0/lib64/libglusterfs.so.0(call_resume+0xd81)[0x2b1a834b01
91]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/performance/io-threads.so(
iot_worker+0x119)[0x2b894229]
/lib64/libpthread.so.0[0x30c160673d]
/lib64/libc.so.6(clone+0x6d)[0x30c0ad44bd]
-
[2013-03-26 08:04:48.577056] W [socket.c:419:__socket_keepalive] 0-socket:
failed to set keep idle on socket 8
[2013-03-26 08:04:48.613068] W [socket.c:1846:socket_server_event_handler]
0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
[2013-03-26 08:04:49.187484] W [graph.c:274:gf_add_cmdline_options]
0-sambavol-server: adding option 'listen-port' for volume 'sambavol-server'
with value '24009'
[2013-03-26 08:04:49.253395] W [rpc-transport.c:447:validate_volume_options]
0-tcp.sambavol-server: option 'listen-port' is deprecated, preferred is
'transport.socket.listen-port', continuing with correction
[2013-03-26 08:04:49.287651] E [posix.c:4369:init] 0-sambavol-posix:
Directory '/raid5hs/glusterfs/export' doesn't exist, exiting.
[2013-03-26 08:04:49.287709] E [xlator.c:1390:xlator_init] 0-sambavol-posix:
Initialization of volume 'sambavol-posix' failed, review your volfile again
[2013-03-26 08:04:49.287721] E [graph.c:331:glusterfs_graph_init]
0-sambavol-posix: initializing translator failed
[2013-03-26 08:04:49.287731] E [graph.c:503:glusterfs_graph_activate]
0-graph: init failed
[2013-03-26 08:04:49.287982] W [glusterfsd.c:700:cleanup_and_exit]
(-->/opt/glusterfs/3.2.0/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa2)
[0x2b38e21d63d2] (--:

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Ursprüngliche Nachricht-
Von: Pranith Kumar K [ma

Re: [Gluster-users] Glusterfs gives up with endpoint not connected

2013-03-28 Thread Daniel Müller
Now part two of raid5hs-glusterfs-export.log:

attr (utimes) on
/raid5hs/glusterfs/export/windows/winuser/schneider/schneider/V
erwaltung/baummanahmen/Bauvorhaben
Umsetzung/Parkierung/SKIZZE_TPLK_Lageplan
.pdf failed: Read-only file system
pending frames:

patchset: v3.2.0
signal received: 11
time of crash: 2013-03-25 22:50:46
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.2.0
/lib64/libc.so.6[0x30c0a302d0]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/features/marker.so(marker_
setattr_cbk+0x139)[0x2ba9de79]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/performance/io-threads.so(
iot_setattr_cbk+0x88)[0x2b88d718]
/opt/glusterfs/3.2.0/lib64/libglusterfs.so.0(default_setattr_cbk+0x88)[0x2b1
a834a5f28]
/opt/glusterfs/3.2.0/lib64/libglusterfs.so.0(default_setattr_cbk+0x88)[0x2b1
a834a5f28]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/storage/posix.so(posix_set
attr+0x1fc)[0x2b2560bc]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/features/access-control.so
(ac_setattr_resume+0xe9)[0x2b469039]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/features/access-control.so
(ac_setattr+0x49)[0x2b46a979]
/opt/glusterfs/3.2.0/lib64/libglusterfs.so.0(default_setattr+0xe9)[0x2b1a834
9f659]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/performance/io-threads.so(
iot_setattr_wrapper+0xe9)[0x2b890749]
/opt/glusterfs/3.2.0/lib64/libglusterfs.so.0(call_resume+0xd81)[0x2b1a834b01
91]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/performance/io-threads.so(
iot_worker+0x119)[0x2b894229]
/lib64/libpthread.so.0[0x30c160673d]
/lib64/libc.so.6(clone+0x6d)[0x30c0ad44bd]
-
[2013-03-26 08:04:48.577056] W [socket.c:419:__socket_keepalive] 0-socket:
failed to set keep idle on socket 8
[2013-03-26 08:04:48.613068] W [socket.c:1846:socket_server_event_handler]
0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
[2013-03-26 08:04:49.187484] W [graph.c:274:gf_add_cmdline_options]
0-sambavol-server: adding option 'listen-port' for volume 'sambavol-server'
with value '24009'
[2013-03-26 08:04:49.253395] W [rpc-transport.c:447:validate_volume_options]
0-tcp.sambavol-server: option 'listen-port' is deprecated, preferred is
'transport.socket.listen-port', continuing with correction
[2013-03-26 08:04:49.287651] E [posix.c:4369:init] 0-sambavol-posix:
Directory '/raid5hs/glusterfs/export' doesn't exist, exiting.
[2013-03-26 08:04:49.287709] E [xlator.c:1390:xlator_init] 0-sambavol-posix:
Initialization of volume 'sambavol-posix' failed, review your volfile again
[2013-03-26 08:04:49.287721] E [graph.c:331:glusterfs_graph_init]
0-sambavol-posix: initializing translator failed
[2013-03-26 08:04:49.287731] E [graph.c:503:glusterfs_graph_activate]
0-graph: init failed
[2013-03-26 08:04:49.287982] W [glusterfsd.c:700:cleanup_and_exit]
(-->/opt/glusterfs/3.2.0/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa2)
[0x2b38e21d63d2] (--:

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Ursprüngliche Nachricht-
Von: Pranith Kumar K [mailto:pkara...@redhat.com] 
Gesendet: Donnerstag, 28. März 2013 12:34
An: muel...@tropenklinik.de
Cc: gluster-users@gluster.org; Reinhard Marstaller
Betreff: Re: [Gluster-users] Glusterfs gives up with endpoint not connected

On 03/28/2013 03:48 PM, Daniel Müller wrote:
> Dear all,
>
> Right out of the blue glusterfs is not working fine any more every now 
> end the it stops working telling me, Endpoint not connected and 
> writing core files:
>
> [root@tuepdc /]# file core.15288
> core.15288: ELF 64-bit LSB core file AMD x86-64, version 1 (SYSV), 
> SVR4-style, from 'glusterfs'
>
> My Version:
> [root@tuepdc /]# glusterfs --version
> glusterfs 3.2.0 built on Apr 22 2011 18:35:40 Repository revision: 
> v3.2.0 Copyright (c) 2006-2010 Gluster Inc. <http://www.gluster.com> 
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> You may redistribute copies of GlusterFS under the terms of the GNU 
> Affero General Public License.
>
> My /var/log/glusterfs/bricks/raid5hs-glusterfs-export.log
>
> [2013-03-28 10:47:07.243980] I [server.c:438:server_rpc_notify]
> 0-sambavol-server: disconnected connection from 192.168.130.199:1023
> [2013-03-28 10:47:07.244000] I
> [server-helpers.c:783:server_connection_destroy] 0-sambavol-server:
> destroyed connection of
> tuepdc.local-16600-2013/03/28-09:32:28:258428-sambavol-client-0
>
>
> [root@tuepdc bricks]# gluster volume info
>
> Volume Name: sambavol
> T

Re: [Gluster-users] Glusterfs gives up with endpoint not connected

2013-03-28 Thread Daniel Müller
trace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.2.0
/lib64/libc.so.6[0x30c0a302d0]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/performance/io-cache.so(io
c_open_cbk+0x9b)[0x2ba4d7fb]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/performance/read-ahead.so(
ra_open_cbk+0x205)[0x2b842935]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/performance/write-behind.s
o(wb_open_cbk+0xf4)[0x2b632784]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/cluster/replicate.so(afr_o
pen_cbk+0x232)[0x2b3f8a32]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/xlator/protocol/client.so(client3
_1_open_cbk+0x19f)[0x2b1bfdaf]
/opt/glusterfs/3.2.0/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa2)[0x2b7ef
e01b3d2]
/opt/glusterfs/3.2.0/lib64/libgfrpc.so.0(rpc_clnt_notify+0x8d)[0x2b7efe01b5c
d]
/opt/glusterfs/3.2.0/lib64/libgfrpc.so.0(rpc_transport_notify+0x27)[0x2b7efe
0162e7]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/rpc-transport/socket.so(socket_ev
ent_poll_in+0x3f)[0x2ad705af]
/opt/glusterfs/3.2.0/lib64/glusterfs/3.2.0/rpc-transport/socket.so(socket_ev
ent_handler+0x188)[0x2ad70758]
/opt/glusterfs/3.2.0/lib64/libglusterfs.so.0[0x2b7efdddb811]
/opt/glusterfs/3.2.0/sbin/glusterfs(main+0x407)[0x405577]
/lib64/libc.so.6(__libc_start_main+0xf4)[0x30c0a1d994]
/opt/glusterfs/3.2.0/sbin/glusterfs[0x4036f9]
-
[2013-03-27 12:14:26.544930] W [write-behind.c:3023:init]
0-sambavol-write-behind: disabling write-behind for first 0 bytes
[2013-03-27 12:14:26.544980] I [client.c:1987:build_client_config]
0-sambavol-client-1: setting ping-timeout to 5
[2013-03-27 12:14:26.547225] I [client.c:1987:build_client_config]
0-sambavol-client-0: setting ping-timeout to 5
[2013-03-27 12:14:26.549258] I [client.c:1935:notify] 0-sambavol-client-0:
parent translators are ready, attempting connect on transport
[2013-03-27 12:14:26.553418] I [client.c:1935:notify] 0-sambavol-client-1:
parent translators are ready, attempting connect on transport
Given volfile:
+---
---+
  1: volume sambavol-client-0
  2: type protocol/client
  3: option remote-host 192.168.130.199
  4: option remote-subvolume /raid5hs/glusterfs/export
  5: option transport-type tcp




Shall I look for something more specific?




-------
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Ursprüngliche Nachricht-
Von: Pranith Kumar K [mailto:pkara...@redhat.com] 
Gesendet: Donnerstag, 28. März 2013 12:34
An: muel...@tropenklinik.de
Cc: gluster-users@gluster.org; Reinhard Marstaller
Betreff: Re: [Gluster-users] Glusterfs gives up with endpoint not connected

On 03/28/2013 03:48 PM, Daniel Müller wrote:
> Dear all,
>
> Right out of the blue glusterfs is not working fine any more every now 
> end the it stops working telling me, Endpoint not connected and 
> writing core files:
>
> [root@tuepdc /]# file core.15288
> core.15288: ELF 64-bit LSB core file AMD x86-64, version 1 (SYSV), 
> SVR4-style, from 'glusterfs'
>
> My Version:
> [root@tuepdc /]# glusterfs --version
> glusterfs 3.2.0 built on Apr 22 2011 18:35:40 Repository revision: 
> v3.2.0 Copyright (c) 2006-2010 Gluster Inc. <http://www.gluster.com> 
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> You may redistribute copies of GlusterFS under the terms of the GNU 
> Affero General Public License.
>
> My /var/log/glusterfs/bricks/raid5hs-glusterfs-export.log
>
> [2013-03-28 10:47:07.243980] I [server.c:438:server_rpc_notify]
> 0-sambavol-server: disconnected connection from 192.168.130.199:1023
> [2013-03-28 10:47:07.244000] I
> [server-helpers.c:783:server_connection_destroy] 0-sambavol-server:
> destroyed connection of
> tuepdc.local-16600-2013/03/28-09:32:28:258428-sambavol-client-0
>
>
> [root@tuepdc bricks]# gluster volume info
>
> Volume Name: sambavol
> Type: Replicate
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.130.199:/raid5hs/glusterfs/export
> Brick2: 192.168.130.200:/raid5hs/glusterfs/export
> Options Reconfigured:
> network.ping-timeout: 5
> performance.quick-read: on
>
> Gluster is running on ext3 raid5 HS on both hosts [root@tuepdc 
> bricks]# mdadm  --detail /dev/md0
> /dev/md0:
>  Version : 0.90
>Creation Time : Wed May 11 10:08:30 2011
>   Raid Level : raid5
>   Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
>  

[Gluster-users] Glusterfs gives up with endpoint not connected

2013-03-28 Thread Daniel Müller
Dear all,

Right out of the blue glusterfs is not working fine any more; every now and
then it stops working, telling me
"Endpoint not connected" and writing core files:

[root@tuepdc /]# file core.15288
core.15288: ELF 64-bit LSB core file AMD x86-64, version 1 (SYSV),
SVR4-style, from 'glusterfs'

My Version:
[root@tuepdc /]# glusterfs --version
glusterfs 3.2.0 built on Apr 22 2011 18:35:40
Repository revision: v3.2.0
Copyright (c) 2006-2010 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU Affero
General Public License.

My /var/log/glusterfs/bricks/raid5hs-glusterfs-export.log

[2013-03-28 10:47:07.243980] I [server.c:438:server_rpc_notify]
0-sambavol-server: disconnected connection from 192.168.130.199:1023
[2013-03-28 10:47:07.244000] I
[server-helpers.c:783:server_connection_destroy] 0-sambavol-server:
destroyed connection of
tuepdc.local-16600-2013/03/28-09:32:28:258428-sambavol-client-0


[root@tuepdc bricks]# gluster volume info

Volume Name: sambavol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.130.199:/raid5hs/glusterfs/export
Brick2: 192.168.130.200:/raid5hs/glusterfs/export
Options Reconfigured:
network.ping-timeout: 5
performance.quick-read: on

Gluster is running on ext3 raid5 HS on both hosts
[root@tuepdc bricks]# mdadm  --detail /dev/md0
/dev/md0:
Version : 0.90
  Creation Time : Wed May 11 10:08:30 2011
 Raid Level : raid5
 Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Thu Mar 28 11:13:21 2013
  State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

 Layout : left-symmetric
 Chunk Size : 64K

   UUID : c484e093:018a2517:56e38f5e:1a216491
 Events : 0.250

Number   Major   Minor   RaidDevice State
   0   8   490  active sync   /dev/sdd1
   1   8   651  active sync   /dev/sde1
   2   8   972  active sync   /dev/sdg1

   3   8   81-  spare   /dev/sdf1

[root@tuepdc glusterfs]# tail -f  mnt-glusterfs.log
[2013-03-28 10:57:40.882566] I [rpc-clnt.c:1531:rpc_clnt_reconfig]
0-sambavol-client-0: changing port to 24009 (from 0)
[2013-03-28 10:57:40.883636] I [rpc-clnt.c:1531:rpc_clnt_reconfig]
0-sambavol-client-1: changing port to 24009 (from 0)
[2013-03-28 10:57:44.806649] I
[client-handshake.c:1080:select_server_supported_programs]
0-sambavol-client-0: Using Program GlusterFS-3.1.0, Num (1298437), Version
(310)
[2013-03-28 10:57:44.806857] I [client-handshake.c:913:client_setvolume_cbk]
0-sambavol-client-0: Connected to 192.168.130.199:24009, attached to remote
volume '/raid5hs/glusterfs/export'.
[2013-03-28 10:57:44.806876] I [afr-common.c:2514:afr_notify]
0-sambavol-replicate-0: Subvolume 'sambavol-client-0' came back up; going
online.
[2013-03-28 10:57:44.811557] I [fuse-bridge.c:3316:fuse_graph_setup] 0-fuse:
switched to graph 0
[2013-03-28 10:57:44.811773] I [fuse-bridge.c:2897:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel
7.10
[2013-03-28 10:57:44.812139] I [afr-common.c:836:afr_fresh_lookup_cbk]
0-sambavol-replicate-0: added root inode
[2013-03-28 10:57:44.812217] I
[client-handshake.c:1080:select_server_supported_programs]
0-sambavol-client-1: Using Program GlusterFS-3.1.0, Num (1298437), Version
(310)
[2013-03-28 10:57:44.812767] I [client-handshake.c:913:client_setvolume_cbk]
0-sambavol-client-1: Connected to 192.168.130.200:24009, attached to remote
volume '/raid5hs/glusterfs/export'.




How can I fix this issue!??

Daniel

-------
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Switch experiences

2012-11-05 Thread Daniel Müller
I do not have any special switches and everything is running fine. GB Network 
is ok for me.

EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 

Von: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Runar Ingebrigtsen
Gesendet: Montag, 5. November 2012 20:04
An: Gluster-users@gluster.org
Betreff: [Gluster-users] Switch experiences

Hi, 

I would like to know what experiences you all have with different brands of 
switches used with Gluster. I don't know why I should buy a Cisco or HP premium 
switch, when all I want is a Gb network. No VPN or any advanced features. 
-- 
Runar Ingebrigtsen

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS on a two-node setup

2012-05-20 Thread Daniel Müller
I am running a two-node gluster in replication mode. From my experience in a
samba PDC/BDC environment I can tell you: if one node is down, the other
serves the clients as
if nothing had happened.

Greetings
Daniel

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Ursprüngliche Nachricht-
Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von John Jolet
Gesendet: Montag, 21. Mai 2012 02:47
An: Ramon Diaz-Uriarte
Cc: ; Brian Candler
Betreff: Re: [Gluster-users] GlusterFS on a two-node setup


On May 20, 2012, at 4:55 PM, Ramon Diaz-Uriarte wrote:

> 
> 
> 
> On Sun, 20 May 2012 20:38:02 +0100,Brian Candler 
wrote:
>> On Sun, May 20, 2012 at 01:26:51AM +0200, Ramon Diaz-Uriarte wrote:
>>> Questions:
>>> ==
>>> 
>>> 1. Is using GlusterFS an overkill? (I guess the alternative would be 
>>> to use  NFS from one of the nodes to the other)
> 
>> In my opinion, the other main option you should be looking at is DRBD 
>> (www.drbd.org).  This works at the block level, unlike glusterfs 
>> which works at the file level.  Using this you can mirror your disk
remotely.
> 
> 
> Brian, thanks for your reply. 
> 
> 
> I might have to look at DRBD more carefully, but I do not think it 
> fits my
> needs: I need both nodes to be working (and thus doing I/O) at the 
> same time. These are basically number crunching nodes and data needs 
> to be accessible from both nodes (e.g., some jobs will use MPI over 
> the CPUs/cores of both nodes ---assuming both nodes are up, of course ;-).
> 
> 
> 
> 
>> If you are doing virtualisation then look at Ganeti: this is an 
>> environment which combines LVM plus DRBD and allows you to run VMs on 
>> either node and live-migrate them from one to the other.
>> http://docs.ganeti.org/ganeti/current/html/
> 
> I am not doing virtualisation. I should have said that explicitly. 
> 
> 
>> If a node fails, you just restart the VMs on the other node and away you
go.
> 
>>> 2. I plan on using a dedicated partition from each node as a brick. 
>>> Should  I use replicated or distributed volumes?
> 
>> A distributed volume will only increase the size of storage available
(e.g. 
>> combining two 600GB drives into one 1.2GB volume - as long as no 
>> single file is too large).  If this is all you need, you'd probably 
>> be better off buying bigger disks in the first place.
> 
>> A replicated volume allows you to have a copy of every file on both 
>> nodes simultaneously, kept in sync in real time, and gives you 
>> resilience against one of the nodes failing.
> 
> 
> But from the docs and the mailing list I get the impression that 
> replication has severe performance penalties when writing and some 
> penalties when reading. And with a two-node setup, it is unclear to me 
> that, even with replication, if one node fails, gluster will continue 
> to work (i.e., the other node will continue to work). I've not been 
> able to find what is the recommended procedure to continue working, 
> with replicated volumes, when one of the two nodes fails. So that is 
> why I am wondering what would replication really give me in this case.
> 
> 
replicated volumes have a performance penalty on the client.  for instance,
i have a replicated volume, with one replica on each of two nodes.  I'm
front ending this with an ubuntu box running samba for cifs sharing.  if my
windows client sends 100MB to the cifs server, the cifs server will send
100MB to each node in the replica set.  As for what you have to do to
continue working if a node went down, i have tested this.  Not on purpose,
but one of my nodes was accidentally downed.  my client saw no difference.
however, running 3.2.x, in order to get the client to use the downed node
after it was brought back up, i had to remount the share on the cifs server.
this is supposedly fixed in 3.3.

It's important to note that self-healing will create files created while the
node was offline, but does not DELETE files deleted while the node was
offline.  not sure what the official line is there, but my use is archival,
so it doesn't matter enough to me to run down (if they'd delete files, i
wouldn't need gluster..)


> Best,
> 
> R.
> 
> 
> 
> 
>> Regards,
> 
>> Brian.
> --
> Ramon Diaz-Uriarte
> Department of Biochemistry, Lab B-25
> Facultad de Medicina
> Universidad Autónoma de Madrid
> Arzobispo Morcillo, 4
&g

[Gluster-users] Compiling an running glusterfs on cygwin, host w2008 64X

2012-04-16 Thread Daniel Müller
Hello to all,

Some thoughts: did anyone succeed in compiling and running glusterfs on
Cygwin/W2008?!

Greetings 
Daniel

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster as a storage backend to samba

2012-03-26 Thread Daniel Müller
Try:


[share]
comment = share on glusterfs brick
path = /mnt/glusterfs/windows/wingroup/share
 posix locking =NO
readonly=no
valid users = @"Domain Admins" @shareusergroup
directory mask=2770
force directory mode=2770
create mask = 2770
force create mode=2770
force security mode=2770
force directory security mode=2770
force group = shareusergroup
browseable = no

EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 

Von: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von lejeczek
Gesendet: Montag, 26. März 2012 13:28
An: gluster-users@gluster.org
Betreff: Re: [Gluster-users] gluster as a storage backend to samba

actually I get "locking problem" very frequently, even if only copying files 
from samba/gluster net share,
is there a fix/solution for these problems?

On 26/03/12 12:14, lejeczek wrote: 
dear all,

how does as in the subject work,
I can see users run into number of different troubles, like here:
http://gluster.org/pipermail/gluster-users/2011-February/006603.html

has anybody got it worked out? m$ office stuff, does it work with gluster?

cheers
Pawel



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster as a storage backend to samba

2012-03-26 Thread Daniel Müller
Yes,
it is working for me.

Greetings
Daniel

EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 

Von: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von lejeczek
Gesendet: Montag, 26. März 2012 13:14
An: gluster-users@gluster.org
Betreff: [Gluster-users] gluster as a storage backend to samba

dear all,

how does as in the subject work,
I can see users run into number of different troubles, like here:
http://gluster.org/pipermail/gluster-users/2011-February/006603.html

has anybody got it worked out? m$ office stuff, does it work with gluster?

cheers
Pawel

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] iscsi with gluster i want to make it

2012-02-19 Thread Daniel Müller
Why use iscsi??? You can use gluster in replication on as many nodes as you
want.
It works well over samba/cifs too, so you can mount the brick on every Windows
server.
For snapshots use rsnapshot on CentOS. With OpenSSH for Windows you can
integrate snapshots from your Windows servers too.
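A very rough rsnapshot sketch (excerpt of /etc/rsnapshot.conf; fields must be
TAB separated, paths and retention are only examples):

snapshot_root   /backup/snapshots/
interval        daily   7
backup  /mnt/glustervol/        gluster/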


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 

Von: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Patrick Irvine
Gesendet: Samstag, 18. Februar 2012 07:19
An: gluster-users@gluster.org
Cc: viraj mastermind
Betreff: [Gluster-users] Re: iscsi with gluster i want to make it

Hi Viraj,

Gluster Ver. 3.x supports replication over WAN but currently very limited.  I 
assume it will expand as time moves on.

As for ISCSI.  I doubt Glusterfs will ever support ISCSI.  ISCSI operates at 
the block level.  Gluster only works on the filesystem level.  The only way to 
have iscsi on Gluster would be to export an iscsi target that is a file on 
gluster. 

As for snapshot and deduplication in CIFS, I really don't know,  I'll pass that 
on to the other Gluster-users :)

Hope this helped,

Pat

On 16/02/2012 9:28 PM, viraj mastermind wrote: 
Hi Pat,
 
Actually we are looking for product that will do replicated between server in 
WAN 
but it should also support iscsi as we will be mounting it on windows server, 
as we will be replicating data of windows volume
so i thought that if i could create a iscsi lun on linux system(centos) and 
mount it on windows server as a volume and then those iscsi lun can be 
replicated across the many server. actually we are preparing for DR.
we will create the DR server in AWS EC2
we tried some products and they are quite expensive so we are doing some 
research in open source.
 
Also does glusterfs support snapshot and deduplication in CIFS?
and when could glusterfs start supporting iSCSI any timeline given.
 
Thanks,
Viraj
On Tue, Feb 14, 2012 at 8:44 PM, Patrick Irvine  wrote:
Hi Viraj,
 
My experience with LVM is quite limited, but I think I can help a bit.  
Glusterfs is a file system only.  It does not contain the block level hooks 
required by LVM, so you will not be able to directly mount it as a block level 
device.  If you really want to use glusterfs as the back end for the LVM you 
would need to mount glusterfs into a directory and then use a loopback file on 
the glusterfs mount point and then use that loopback file for the LVM mount.
 
This approach seems like a lot of work.  What is your desired end product?  Do 
you want gluster to serve a filesystem that sits on top of an iscsi target?
 
Let me know if I have completely misunderstood what you where asking ☺
 
Pat.
 

From: viraj mastermind [mailto:virajmasterm...@gmail.com] 
Sent: Tuesday, February 14, 2012 5:50 AM
To: p...@cybersites.ca
Subject: iscsi with gluster i want to make it
 
Hi Patrick,
 
the gluster community is not replying to my question 
can you tell me how to mount the lvm in the gluster fs as i have made two node 
in glusterfs
i have added two harddisk to mode the node and made it as a lvm for iscsi 
target to work
now i want to mount lvm on the glusterfs as 
some what like
mount -t glusterfs server1:/test-volume /dev/vg1/lvs
instead of mounting on to a folder
mount -t glusterfs server1:/test-volume /mnt/glusterfs
Please assist
Thanks and have a nice day
Viraj



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Parallel cp?

2012-02-05 Thread Daniel Müller
Don't you run your bricks in replication mode? Then you do not have to copy
anything by hand or by batch.
Ex: gluster volume create yourvol replica 2 transport tcp
xxx.xxx.xxx.xxx:/glusterfs/export yyy.yyy.yyy.yyy:/glusterfs/export

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Ursprüngliche Nachricht-
Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Brian Candler
Gesendet: Sonntag, 5. Februar 2012 00:12
An: gluster-users@gluster.org
Betreff: [Gluster-users] Parallel cp?

I reckon that to quickly copy one glusterfs volume to another, I will need a
multi-threaded 'cp'.  That is, something which will take the list of files
from readdir() and copy batches of N of them in parallel.  This is so I can
keep all the component spindles busy.
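Roughly what I have in mind, as a crude sketch using GNU find/xargs (mount
points are placeholders, and this ignores empty directories):

cd /mnt/srcvol
find . -type f -print0 | xargs -0 -n 16 -P 8 cp --parents -t /mnt/dstvol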

Question 1: does such a thing exist already in the open source world?

Question 2: for a DHT volume, does readdir() return the files in a
round-robin fashion, i.e. one from brick 1, one from brick 2, one from brick
3 etc? Or does it return all the results from one brick, followed by all the
results from the second brick, and so on? Or something indeterminate?

Alternatively: is it possible to determine for each file which brick it
resides on?

(I don't think it's in an extended attribute; I tried 'getfattr -d' on a
file, both on the GlusterFS mount and on the underlying brick, and couldn't
see anything)
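The only crude workaround I can see is to check each brick's backend directory
directly, since a plain DHT file should live on exactly one brick; something
like (hostnames and brick paths are placeholders):

for h in server1 server2 server3; do
    ssh $h "test -e /data/brick/path/to/file && echo file is on $h"
done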

Thanks,

Brian.

P.S. I did look in the source, and I couldn't figure out how dht_do_readdir
works.  But it does have a slightly disconcerting comment:

/* TODO: do proper readdir */
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Using Samba and MSOffice files

2011-11-07 Thread Daniel Müller
You need to do some tricks to get msoffice files with glusterfs and samba
working.
In your samba share definition:

[example]
path = /your/path/to share
 posix locking =NO  <--- important setting
readonly=no
valid users =@yourgroup
directory mask=2770  <--set sticky bit for your group
force directory mode=2770
create mask = 2770
force create mode=2770
force security mode=2770
force directory security mode=2770
force group = yourgroup

Its working for me.

Greetings
Daniel
---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Ursprüngliche Nachricht-
Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Kemal Taskin
Gesendet: Freitag, 4. November 2011 21:11
An: gluster-users@gluster.org
Betreff: [Gluster-users] Using Samba and MSOffice files

I cannot use a GlusterFS (3.2.4) Samba export to edit MS Office files (Word and
Excel) on the Windows side. I see that others have complained about this issue
before, but I do not see any solution. I use Samba to export other local disks
and they have no issues, so I think the problem is related to GlusterFS.

Regards,
Brian.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] trying to create a 3 brick CIFS NAS server

2011-10-31 Thread Daniel Müller
First of all:

EX: mount -t glusterfs 172.22.0.53:/gluster-volume /mnt/glustervol

Then, if you have Samba running, you just point your share to /mnt/glustervol,
e.g. a share called [test].

Then from the client you do: mount -t cifs //172.22.0.53/test /your/mountpoint

This should work.
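A minimal sketch of the share definition in smb.conf (untested as written; the
share name and paths are just examples):

[test]
    path = /mnt/glustervol
    read only = no
    # posix locking off is the setting that usually matters on gluster-backed shares
    posix locking = no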


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 

Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Dave Therrien
Gesendet: Donnerstag, 20. Oktober 2011 18:25
An: gluster-users@gluster.org
Betreff: [Gluster-users] trying to create a 3 brick CIFS NAS server

Hi all
 
I am having problems connecting to a 3 brick volume from a Windows client
via samba/cifs. 

Volume Name: gluster-volume
Type: Distribute
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 172.22.0.53:/data
Brick2: 172.22.0.23:/data
Brick3: 172.22.0.35:/data

I created a /mnt/glustervol folder and then tried to mount the
gluster-volume to it using: 

mount -t cifs 172.22.0.53:/gluster-volume /mnt/glustervol

I get this error 

Retrying with upper case share name
mount error(6): No such device or address
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

What am I doing wrong here ? 
 
Thanks
Dave


-- 
Dave Therrien
Producer/Recording Engineer
5 Good Ears Recording Studio
Nashua NH 03062
www.5goodears.com 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] UCARP with NFS

2011-09-08 Thread Daniel Müller
Cmd on slave: 
/usr/sbin/ucarp -z -B -M -b 1 -i bond0:0

Did you try "-b 7" at your cmd start? This solved things for me in
another configuration.




EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 

Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von anthony garnier
Gesendet: Donnerstag, 8. September 2011 15:55
An: whit.glus...@transpect.com
Cc: gluster-users@gluster.org
Betreff: Re: [Gluster-users] UCARP with NFS

Whit,

Here is my conf file : 
#
# Location of the ucarp pid file
UCARP_PIDFILE=/var/run/ucarp0.pid

# Define if this host is the preferred MASTER (this adds or removes the -P option)
UCARP_MASTER="yes"

#
# ucarp base, interval monitoring time 
# lower number will be preferred master
# set to same to have master stay alive as long as possible
UCARP_BASE=1

#Priority [0-255]
#lower number will be preferred master
ADVSKEW=0


#
# Interface for Ipaddress
INTERFACE=bond0:0

#
# Instance id
# any number from 1 to 255
# Master and Backup need to be the same
INSTANCE_ID=42

#
# Password so servers can trust who they are talking to
PASSWORD=glusterfs

#
# The Application Address that will failover
VIRTUAL_ADDRESS=10.68.217.3
VIRTUAL_BROADCAST=10.68.217.255
VIRTUAL_NETMASK=255.255.255.0
#

#Script for configuring interface
UPSCRIPT=/etc/ucarp/script/vip-up.sh
DOWNSCRIPT=/etc/ucarp/script/vip-down.sh

# The Maintenance Address of the local machine
SOURCE_ADDRESS=10.68.217.85


Cmd on master : 
/usr/sbin/ucarp -z -B -P -b 1 -i bond0:0 -v 42 -p glusterfs -k 0 -a
10.68.217.3 -s 10.68.217.85 --upscript=/etc/ucarp/script/vip-up.sh
--downscript=/etc/ucarp/script/vip-down.sh

Cmd on slave: 
/usr/sbin/ucarp -z -B -M -b 1 -i bond0:0 -v 42 -p glusterfs -k 50 -a
10.68.217.3 -s 10.68.217.86 --upscript=/etc/ucarp/script/vip-up.sh
--downscript=/etc/ucarp/script/vip-down.sh
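The up/down scripts referenced above just add and remove the VIP on the
interface (ucarp passes the interface name as the first argument); a minimal
sketch, with the address and prefix hardcoded to match my config:

#!/bin/sh
# /etc/ucarp/script/vip-up.sh
/sbin/ip addr add 10.68.217.3/24 dev "$1"

#!/bin/sh
# /etc/ucarp/script/vip-down.sh
/sbin/ip addr del 10.68.217.3/24 dev "$1"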


To me, having a preferred master is necessary because I'm using RR DNS and I
want to do a kind of "active/active" failover. I'll explain the whole idea: 

SERVER 1 <---> SERVER 2
VIP1           VIP2

When I access the URL glusterfs.preprod.inetpsa.com, RRDNS gives me one of
the VIPs (load balancing). The main problem is that if I use only RRDNS and a
server goes down, the clients currently bound to this server will fail too. So
to avoid that I need VIP failover. 
This way, if a server goes down, all the clients on this server will be
bound to the other one. Because I want load balancing, I need a preferred
master, so by default VIP 1 stays on server 1 and VIP 2 stays on
server 2.
Currently I am trying to make it work with one VIP only.


Anthony

> Date: Thu, 8 Sep 2011 09:32:59 -0400
> From: whit.glus...@transpect.com
> To: sokar6...@hotmail.com
> CC: gluster-users@gluster.org
> Subject: Re: [Gluster-users] UCARP with NFS
> 
> On Thu, Sep 08, 2011 at 01:02:41PM +, anthony garnier wrote:
> 
> > I got a client mounted on the VIP, when the Master fall, the client
switch
> > automaticaly on the Slave with almost no delay, it works like a charm.
But when
> > the Master come back up, the mount point on the client freeze.
> > I've done a monitoring with tcpdump, when the master came up, the client
send
> > paquets on the master but the master seems to not establish the TCP
connection.
> 
> Anthony,
> 
> Your UCARP command line choices and scripts would be worth looking at
here.
> There are different UCARP behavior options for when the master comes back
> up. If the initial failover works fine, it may be that you'll have better
> results if you don't have a preferred master. That is, you can either have
> UCARP set so that the slave relinquishes the IP back to the master when
the
> master comes back up, or you can have UCARP set so that the slave becomes
> the new master, until such time as the new master goes down, in which case
> the former master becomes master again.
> 
> If you're doing it the first way, there may be a brief overlap, where both
> systems claim the VIP. That may be where your mount is failing. By doing
it
> the second way, where the VIP is held by whichever system has it until
that
> system actually goes down, there's no overlap. There shouldn't be a
reason,
> in the Gluster context, to care which system is master, is there?
> 
> Whit

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] UCARP with NFS

2011-09-08 Thread Daniel Müller
Yes,
it is working for me on CentOS 5.6 with samba/glusterfs/ext3.
It seems to me ucarp is not configured to work master/slave:
Master:

ID=1
BIND_INTERFACE=eth1
#Real IP
SOURCE_ADDRESS=xxx.xxx.xxx.xxx
#slaveconfig,OPTIONS="--shutdown --preempt  -b 1 -k 50"
OPTIONS="--shutdown --preempt  -b 1"
#Virtual IP used by ucarp
VIP_ADDRESS="zzz.zzz.zzz.zzz"
#Ucarp Password
PASSWORD=

On you slave you need: OPTIONS="--shutdown --preempt  -b 1 -k 50"

Good luck
Daniel



EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 

Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von anthony garnier
Gesendet: Donnerstag, 8. September 2011 15:03
An: gluster-users@gluster.org
Betreff: [Gluster-users] UCARP with NFS

Hi all,

I'm currently trying to deploy ucarp with GlusterFS, especially for NFS
access.
Ucarp works well when I ping the VIP and shut down the master (and also when
the master comes back up), but I face a problem with NFS connections.

I have a client mounted on the VIP; when the master falls, the client switches
automatically to the slave with almost no delay, it works like a charm. But
when the master comes back up, the mount point on the client freezes.
I've done some monitoring with tcpdump: when the master comes up, the client
sends packets to the master but the master does not seem to establish the TCP
connection.


My  volume config : 

Volume Name: hermes
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: ylal2950:/users/export
Brick2: ylal2960:/users/export
Options Reconfigured:
performance.cache-size: 1GB
performance.cache-refresh-timeout: 60
network.ping-timeout: 25
nfs.port: 2049

As Craig wrote previously, I did probe the hosts and create the volume with
their real IPs; I only used the VIP on the client.
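For reference, the client mount against the VIP is just a plain NFSv3 mount,
something like (options may need adjusting for your client):

mount -t nfs -o vers=3,tcp,nolock 10.68.217.3:/hermes /mnt/hermes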

Does anyone have experience with UCARP and GlusterFS?

Anthony 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] HW raid or not

2011-08-08 Thread Daniel Müller
I am using RAID 5 with 1 spare disk without any problem on CentOS 5.6.


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 

Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Uwe Kastens
Gesendet: Montag, 8. August 2011 08:55
An: Gluster-users@gluster.org
Betreff: [Gluster-users] HW raid or not

Hi,

I know that there is no general answer to this question :)

Is it better to use HW RAID, LVM, or raw disks as the Gluster backend?

Regards

Uwe

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster usable as open source solution

2011-08-04 Thread Daniel Müller
I am using Gluster together with a failover Samba domain,
but only 2 nodes with replicated bricks, without any problems.
We have around 100 PCs and 80 users.


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 

Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Uwe Kastens
Gesendet: Donnerstag, 4. August 2011 15:52
An: Scott Nolin
Cc: Gluster-users@gluster.org
Betreff: Re: [Gluster-users] gluster usable as open source solution

Scott,

That sounds interesting.
It hasn't been going well so far. Currently we are struggling with
performance problems that seem to be linked to certain versions of the linux
kernel.

The lack of in depth and current documentation is one of the most glaring
and obvious problems.


What is your feeling about the stability of the solution?

Kind Regards

Uwe

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] hardware raid controller

2011-07-17 Thread Daniel Müller
Hello again,

I think driving your RAID controller is the job of your OS.
Gluster just serves on top of your file system, nothing else.
Gluster 3.2 is working on top of my RAID controller (RAID 5 with 1 spare disk)
without any problems.


On Fri, 15 Jul 2011 10:55:11 +0200, Derk Roesink 
wrote:
> Hello!
> 
> I'm trying to install my first Gluster Storage Platform server.
> 
> It has a Jetway JNF99FL-525-LF motherboard with an internal RAID
> controller (based on an Intel ICH9R chipset) which has 4x 1 TB drives for
> data that I would like to run in a RAID5 configuration.
> 
> It seems Gluster doesn't support the RAID controller, because I still see
> the 4 disks as 'servers' in the WebUI.
> 
> Any ideas?!
> 
> Kind Regards,
> 
> Derk
>  
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster in a windows environment - Replacing MS fileserver?

2011-06-07 Thread Daniel Müller
On Mon, 06 Jun 2011 09:34:14 +1000, Jonathan Collingridge  wrote:

Hi,
I've read just about everything I can find out there on Gluster, and
waded through a lot of the old posts on the user list. I'm wondering
what the current state of Gluster is for interoperability with Windows 7
clients. 
I work for a company that deals with images - a lot of them are
JPEGs around 2 MB and raw camera images at 14-20 MB. We generate about
4-6 TB of these each year.
Currently we use expensive (fibre channel disk) block-based storage
over a 4G fibre SAN; attached to this is a PolyServe (an HP product
now) active/active cluster of 4 Windows servers. All of this is due
for a refresh this year (servers with faster I/O and a 10GbE network),
and I'm looking at all the alternatives. HP have an (IBRIX-sourced)
array with 2 front-end servers that runs Red Hat, with a feature set
similar to Gluster.
I'm wondering if you have any experience with a Gluster config and a
similar workload? What about Windows 7 Enterprise clients - use NFS
or Samba?
I'm considering Gluster nodes as HP DL180 G6 with 25x 2.5" 300 GB 10K
SAS drives in each (to keep the number of spindles up), and a pair of
10GbE cards for connectivity. Do you have any thoughts on this
possible config? If you were building a Gluster system from scratch
with a medium/large business budget, how would you do it?
Thanks for any help you can give with my questions.
-- 
   Jonathan Collingridge
   IT Manager
   jonathan.collingri...@msp.com.au
   +61 2 6933 7746
   skype: jonathancollingridge
   PO Box 8459 Kooringal NSW 2650

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] mounting issues with CentOS 5.4

2011-05-19 Thread Daniel Müller
I think you need :

yum groupinstall 'Development Tools'
yum groupinstall 'Development Libraries'
yum install libibverbs-devel fuse* fuse-devel
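After that I would check whether the fuse kernel module is actually available
for the running kernel, roughly:

ls /lib/modules/$(uname -r)/kernel/fs/fuse/    # fuse.ko should show up here
modprobe fuse && lsmod | grep fuse

If fuse.ko is missing, you need a kernel (or an add-on kernel module package)
that ships it.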

-------
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Ursprüngliche Nachricht-
Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Karim Latouche
Gesendet: Donnerstag, 19. Mai 2011 12:41
An: Gluster Users
Betreff: [Gluster-users] mounting issues with CentOS 5.4

Hi All,

I'm trying to mount a volume using this command

mount -t glusterfs host:/vol   /mnt

and I get

fusermount: fuse device not found, try 'modprobe fuse' first
fusermount: fuse device not found, try 'modprobe fuse' first
Mount failed. Please check the log file for more details.


modprobe fuse  gives

FATAL: Module fuse not found

glusterfs-fuse-3.2.0-1.x86_64.rpm   ist installed on this machine and 
the volume mounts well from an ubuntu 11.04 client

Any clues ?

Thx

KL
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] WG: Strange behaviour glusterd 3.1

2011-03-15 Thread Daniel Müller
I updated my version to 3.1.2:
glusterfs --version
glusterfs 3.1.2 built on Jan 14 2011 19:21:08
Repository revision: v3.1.1-64-gf2a067c
Copyright (c) 2006-2010 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU Affero
General Public License.

And the misbehaviour is gone.

But there is another question: if I have a two-node setup and a file is opened
on both servers by different users, how does gluster manage the locking then?
I tried opening a file; there was no "the file is already open" warning. I
could change it, and only the last saved change was replicated.




---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Ursprüngliche Nachricht-
Von: Daniel Müller [mailto:muel...@tropenklinik.de] 
Gesendet: Donnerstag, 10. März 2011 12:18
An: 'Mohit Anchlia'; 'gluster-users@gluster.org'
Betreff: AW: [Gluster-users] Strange behaviour glusterd 3.1

This is the error in  mnt-glusterfs.log
W [fuse-bridge.c:405:fuse_attr_cbk] glusterfs-fuse: 20480: FSTAT()
/windows/test/start.xlsx => -1 (File descriptor in bad state)

-----------
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Ursprüngliche Nachricht-
Von: Mohit Anchlia [mailto:mohitanch...@gmail.com] 
Gesendet: Mittwoch, 9. März 2011 20:10
An: muel...@tropenklinik.de; gluster-users@gluster.org
Betreff: Re: [Gluster-users] Strange behaviour glusterd 3.1

If it's a problem can you please enable the debug on that volume and
then see what gets logged in the logs.

I suggest creating a bug since it sounds critical. What if it was production
:)

On Wed, Mar 9, 2011 at 10:42 AM, Daniel Müller 
wrote:
>
>
> did set gluster volume set samba-vol performance.quick-read off .
>
> vim
> a new file on node1. ssh node2 . ls new file -> read error, file not
> found.
>
> did set gluster volume set samba-vol performance.quick-read on.
>
>
> I can ls, change content one time. Then the same again. No
> solution!!!
>
> Should I delete the VOl? and build a new one?
>
> I am
> glad that it is no production environment. It would be a mess.
>
> On Wed, 09
> Mar 2011 23:10:21 +0530, Vijay Bellur  wrote: On Wednesday 09 March 2011
> 04:00 PM, Daniel Müller wrote:
> /mnt/glusterfs is the mount point of the
> client where the samba-vol (backend:/glusterfs/export) is mounted on.
> So it
> should work. And it did work until last week.
>
>  Can you please check by
> disabling quick-read translator in your setup via the following
> command:
>
> #gluster volume set samba-vol performance.quick-read off
>
> You may be
> hitting bug 2027 with 3.1.0
>
> Thanks,
> Vijay
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Strange behaviour glusterd 3.1

2011-03-10 Thread Daniel Müller
This is the error in  mnt-glusterfs.log
W [fuse-bridge.c:405:fuse_attr_cbk] glusterfs-fuse: 20480: FSTAT()
/windows/test/start.xlsx => -1 (File descriptor in bad state)

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Ursprüngliche Nachricht-
Von: Mohit Anchlia [mailto:mohitanch...@gmail.com] 
Gesendet: Mittwoch, 9. März 2011 20:10
An: muel...@tropenklinik.de; gluster-users@gluster.org
Betreff: Re: [Gluster-users] Strange behaviour glusterd 3.1

If it's a problem can you please enable the debug on that volume and
then see what gets logged in the logs.

I suggest creating a bug since it sounds critical. What if it was production
:)

On Wed, Mar 9, 2011 at 10:42 AM, Daniel Müller 
wrote:
>
>
> did set gluster volume set samba-vol performance.quick-read off .
>
> vim
> a new file on node1. ssh node2 . ls new file -> read error, file not
> found.
>
> did set gluster volume set samba-vol performance.quick-read on.
>
>
> I can ls, change content one time. Then the same again. No
> solution!!!
>
> Should I delete the VOl? and build a new one?
>
> I am
> glad that it is no production environment. It would be a mess.
>
> On Wed, 09
> Mar 2011 23:10:21 +0530, Vijay Bellur  wrote: On Wednesday 09 March 2011
> 04:00 PM, Daniel Müller wrote:
> /mnt/glusterfs is the mount point of the
> client where the samba-vol (backend:/glusterfs/export) is mounted on.
> So it
> should work. And it did work until last week.
>
>  Can you please check by
> disabling quick-read translator in your setup via the following
> command:
>
> #gluster volume set performance.quick-read off
>
> You may be
> hitting bug 2027 with 3.1.0
>
> Thanks,
> Vijay
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Automatic Failover

2011-03-09 Thread Daniel Müller
That is part of Apache --> an .htaccess file will do that for you, see:
http://httpd.apache.org/docs/2.0/howto/htaccess.html
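A minimal sketch of basic auth (file locations and the user name are just
examples, and the directory needs AllowOverride AuthConfig in the main Apache
config):

# create a password file once:
htpasswd -c /etc/httpd/conf/.htpasswd someuser

# .htaccess in the directory you want to protect:
AuthType Basic
AuthName "Restricted"
AuthUserFile /etc/httpd/conf/.htpasswd
Require valid-user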

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Ursprüngliche Nachricht-
Von: Mohit Anchlia [mailto:mohitanch...@gmail.com] 
Gesendet: Mittwoch, 9. März 2011 20:40
An: muel...@tropenklinik.de; gluster-users@gluster.org
Betreff: Re: [Gluster-users] Automatic Failover

Thanks that's what I was thinking initially! May be this question
doesn't belong in this forum but how do I protect files form being
accessed by unauthorized users in this case.

On Wed, Mar 9, 2011 at 10:48 AM, Daniel Müller 
wrote:
> It is sufficient then to point your htmldocs (root of your apache) to a
> glusterfs client share.
> That's all.
> ex: mount -t glusterfs 192.168.132.58:/samba-vol /srv/www/htmldocs
>
> On Wed, 9 Mar 2011 09:25:22 -0800, Mohit Anchlia 
> wrote:
>> This is providing http service to clients using apache to serve them
>> with documents/images etc. on demand. These clients can be web browser
>> or desktop.
>>
>> On Tue, Mar 8, 2011 at 11:51 PM, Daniel Müller 
>> wrote:
>>> What exactly are you trying to do?
>>> Do you mean apache glusterfs over webdav??!!
>>>
>>> ---
>>> EDV Daniel Müller
>>>
>>> Leitung EDV
>>> Tropenklinik Paul-Lechler-Krankenhaus
>>> Paul-Lechler-Str. 24
>>> 72076 Tübingen
>>>
>>> Tel.: 07071/206-463, Fax: 07071/206-499
>>> eMail: muel...@tropenklinik.de
>>> Internet: www.tropenklinik.de
>>> ---
>>> -Ursprüngliche Nachricht-
>>> Von: Mohit Anchlia [mailto:mohitanch...@gmail.com]
>>> Gesendet: Dienstag, 8. März 2011 19:45
>>> An: muel...@tropenklinik.de; gluster-users@gluster.org
>>> Betreff: Re: [Gluster-users] Automatic Failover
>>>
>>> Thanks! I am still trying to figure out how it works with HTTP. In
>>> some docs I see that GFS supports HTTP and I am not sure how that
>>> works. Does anyone know how that works or what has to be done to make
>>> it available over HTTP?
>>>
>>> On Mon, Mar 7, 2011 at 11:14 PM, Daniel Müller
> 
>>> wrote:
>>>> If you use a http-server,samba... ex:apache you need ucarp. There is a
>>>> little blog about it:
>>>>
>>>
>
http://www.misdivision.com/blog/setting-up-a-highly-available-storage-cluste
>>>> r-using-glusterfs-and-ucarp
>>>>
>>>> Ucarp gives you a virt. Ip for your httpd. if the one is down the
> second
>>>> with the same IP is still alive and serving. Glusterfs serves
>>>> the data in ha mode. Try it it works like a charm.
>>>>
>>>> ---
>>>> EDV Daniel Müller
>>>>
>>>> Leitung EDV
>>>> Tropenklinik Paul-Lechler-Krankenhaus
>>>> Paul-Lechler-Str. 24
>>>> 72076 Tübingen
>>>>
>>>> Tel.: 07071/206-463, Fax: 07071/206-499
>>>> eMail: muel...@tropenklinik.de
>>>> Internet: www.tropenklinik.de
>>>> ---
>>>>
>>>> -Ursprüngliche Nachricht-
>>>> Von: Mohit Anchlia [mailto:mohitanch...@gmail.com]
>>>> Gesendet: Montag, 7. März 2011 18:47
>>>> An: muel...@tropenklinik.de; gluster-users@gluster.org
>>>> Betreff: Re: [Gluster-users] Automatic Failover
>>>>
>>>> Thanks! Can you please give me some scenarios where applications will
>>>> need ucarp, are you referring to nfs clients?
>>>>
>>>> Does glusterfs support http also? can't find any documentation.
>>>>
>>>> On Mon, Mar 7, 2011 at 12:34 AM, Daniel Müller
>>>> 
>>>> wrote:
>>>>> Yes the normal way the clients will follow the failover but some
>>>>> applications and services running on top
>>>>> Of glusterfs will not and will need this.
>>>>>
>>>>> ---
>>>>> EDV Daniel Müller
>>>>>
>>>>> Leitung EDV
>>>>> Tropenklinik Paul-Lechler-Krankenhaus
>>>>> Paul-Lechler-Str. 24
>>>>> 72076 Tübingen

Re: [Gluster-users] Strange behaviour glusterd 3.1

2011-03-09 Thread Daniel Müller


I did set gluster volume set samba-vol performance.quick-read off.

I vim'ed a new file on node1, ssh'ed to node2, and did an ls on the new file
-> read error, file not found.

I did set gluster volume set samba-vol performance.quick-read on.

I can ls and change the content one time. Then the same thing happens again.
No solution!!!

Should I delete the vol and build a new one?

I am glad that it is not a production environment. It would be a mess.

On Wed, 09 Mar 2011 23:10:21 +0530, Vijay Bellur wrote:
> On Wednesday 09 March 2011 04:00 PM, Daniel Müller wrote:
>> /mnt/glusterfs is the mount point of the client where the samba-vol
>> (backend:/glusterfs/export) is mounted on. So it should work. And it did
>> work until last week.
>
> Can you please check by disabling the quick-read translator in your setup
> via the following command:
>
> # gluster volume set samba-vol performance.quick-read off
>
> You may be hitting bug 2027 with 3.1.0
>
> Thanks,
> Vijay

 ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Strange behaviour glusterd 3.1

2011-03-09 Thread Daniel Müller
/mnt/glusterfs is the mount point of the client where the samba-vol 
(backend:/glusterfs/export) is mounted on.
So it should work. And it did work until last week.

Greetings
Daniel


---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Ursprüngliche Nachricht-
Von: Pranith Kumar. Karampuri [mailto:prani...@gluster.com] 
Gesendet: Mittwoch, 9. März 2011 10:05
An: muel...@tropenklinik.de
Cc: gluster-users@gluster.org
Betreff: Re: AW: [Gluster-users] Strange behaviour glusterd 3.1

Directly editing the files on the backend is not supported. Most editors 
delete the original file and create a new one when you edit a file.
So the extended attributes that are stored on the old file are gone.
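You can see those attributes on the backend with getfattr, roughly (the brick
path here is just an example):

getfattr -d -m . -e hex /glusterfs/export/windows/test/start.xlsx

A file recreated directly on the brick will be missing them.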

Pranith
- Original Message -
From: "Daniel Müller" 
To: "Pranith Kumar. Karampuri" 
Cc: gluster-users@gluster.org
Sent: Wednesday, March 9, 2011 2:24:14 PM
Subject: AW: [Gluster-users] Strange behaviour glusterd 3.1

Both server running. I edit the file on one node1-server (in /mnt/glusterfs/) 
changed the content, saved the file. Ssh to the node2-server there still old 
content. Did " find /mnt/glusterfs -print0 | xargs --null stat >/dev/null", 
still the same old content.
Need to restart both servers the the new content is written to node2-server.
And this equal behavior on both servers.
A few days ago all was ok.

-----------
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Ursprüngliche Nachricht-
Von: Pranith Kumar. Karampuri [mailto:prani...@gluster.com] 
Gesendet: Mittwoch, 9. März 2011 09:31
An: muel...@tropenklinik.de
Cc: gluster-users@gluster.org
Betreff: Re: [Gluster-users] Strange behaviour glusterd 3.1

"But after vim one.txt and changing the content of that file on one node.", 
what exactly do you mean by this?. Did you edit the file on backend?. (or) 
Brought one server down and after editing brought the second server back up?.

Pranith
- Original Message -
From: "Daniel Müller" 
To: gluster-users@gluster.org
Sent: Wednesday, March 9, 2011 1:58:39 PM
Subject: [Gluster-users] Strange behaviour glusterd 3.1

Dear all,

after some weeks of testing gluster the replication of my two nodes stopped
working the way it used to be.

My version:
glusterfs --version
glusterfs 3.1.0 built on Oct 13 2010 10:06:10
Repository revision: v3.1.0
Copyright (c) 2006-2010 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU Affero
General Public License
My Nodes:

gluster peer status
Number of Peers: 1

Hostname: 192.168.132.56
Uuid: 5ecf561e-f766-48b0-836f-17624586a39a
State: Peer in Cluster (Connected)

My replicating VOLS:

gluster peer status
Number of Peers: 1

Hostname: 192.168.132.56
Uuid: 5ecf561e-f766-48b0-836f-17624586a39a
State: Peer in Cluster (Connected)
[root@ctdb2 test]# gluster volume info

Volume Name: samba-vol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.132.56:/glusterfs/export
Brick2: 192.168.132.57:/glusterfs/export
Options Reconfigured:
network.ping-timeout: 5

Now my mount point for the client is /mnt/glusterfs in fstab:
192.168.132.57:/samba-vol  /mnt/glusterfs  glusterfs  defaults  0  0

Commandline mount succeeds with:

glusterfs#192.168.132.57:/samba-vol on /mnt/glusterfs type fuse
(rw,allow_other,default_permissions,max_read=131072)

Now glusterd worked for many weeks. But now when I create a file (ex:
one.txt) in /mnt/glusterfs/ the file is replicated to the other node well.
But after vim one.txt and changing the content of that file on one node. The
changes are not replicated to the other node
Until I restart both nodes.
I did made a " find /mnt/glusterfs -print0 | xargs --null stat >/dev/null"
with no success.
Any Idea???

Greetings Daniel

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Strange behaviour glusterd 3.1

2011-03-09 Thread Daniel Müller
Both servers are running. I edit the file on the node1 server (in /mnt/glusterfs/), 
change the content, and save the file. I ssh to the node2 server and there is 
still the old content. I did "find /mnt/glusterfs -print0 | xargs --null stat >/dev/null", 
still the same old content.
I need to restart both servers before the new content is written to the node2 server.
And the behavior is the same on both servers.
A few days ago all was OK.

-------
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Ursprüngliche Nachricht-
Von: Pranith Kumar. Karampuri [mailto:prani...@gluster.com] 
Gesendet: Mittwoch, 9. März 2011 09:31
An: muel...@tropenklinik.de
Cc: gluster-users@gluster.org
Betreff: Re: [Gluster-users] Strange behaviour glusterd 3.1

"But after vim one.txt and changing the content of that file on one node.", 
what exactly do you mean by this?. Did you edit the file on backend?. (or) 
Brought one server down and after editing brought the second server back up?.

Pranith
- Original Message -----
From: "Daniel Müller" 
To: gluster-users@gluster.org
Sent: Wednesday, March 9, 2011 1:58:39 PM
Subject: [Gluster-users] Strange behaviour glusterd 3.1

Dear all,

after some weeks of testing gluster the replication of my two nodes stopped
working the way it used to be.

My version:
glusterfs --version
glusterfs 3.1.0 built on Oct 13 2010 10:06:10
Repository revision: v3.1.0
Copyright (c) 2006-2010 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU Affero
General Public License
My Nodes:

gluster peer status
Number of Peers: 1

Hostname: 192.168.132.56
Uuid: 5ecf561e-f766-48b0-836f-17624586a39a
State: Peer in Cluster (Connected)

My replicating VOLS:

gluster peer status
Number of Peers: 1

Hostname: 192.168.132.56
Uuid: 5ecf561e-f766-48b0-836f-17624586a39a
State: Peer in Cluster (Connected)
[root@ctdb2 test]# gluster volume info

Volume Name: samba-vol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.132.56:/glusterfs/export
Brick2: 192.168.132.57:/glusterfs/export
Options Reconfigured:
network.ping-timeout: 5

Now my mount point for the client is /mnt/glusterfs in fstab:
192.168.132.57:/samba-vol  /mnt/glusterfs  glusterfs  defaults  0  0

Commandline mount succeeds with:

glusterfs#192.168.132.57:/samba-vol on /mnt/glusterfs type fuse
(rw,allow_other,default_permissions,max_read=131072)

Now glusterd worked for many weeks. But now when I create a file (ex:
one.txt) in /mnt/glusterfs/ the file is replicated to the other node well.
But after vim one.txt and changing the content of that file on one node. The
changes are not replicated to the other node
Until I restart both nodes.
I did made a " find /mnt/glusterfs -print0 | xargs --null stat >/dev/null"
with no success.
Any Idea???

Greetings Daniel

-------
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Strange behaviour glusterd 3.1

2011-03-09 Thread Daniel Müller
Dear all,

after some weeks of testing gluster the replication of my two nodes stopped
working the way it used to be.

My version:
glusterfs --version
glusterfs 3.1.0 built on Oct 13 2010 10:06:10
Repository revision: v3.1.0
Copyright (c) 2006-2010 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU Affero
General Public License
My Nodes:

gluster peer status
Number of Peers: 1

Hostname: 192.168.132.56
Uuid: 5ecf561e-f766-48b0-836f-17624586a39a
State: Peer in Cluster (Connected)

My replicating VOLS:

gluster peer status
Number of Peers: 1

Hostname: 192.168.132.56
Uuid: 5ecf561e-f766-48b0-836f-17624586a39a
State: Peer in Cluster (Connected)
[root@ctdb2 test]# gluster volume info

Volume Name: samba-vol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.132.56:/glusterfs/export
Brick2: 192.168.132.57:/glusterfs/export
Options Reconfigured:
network.ping-timeout: 5

Now my mount point for the client is /mnt/glusterfs in fstab:
192.168.132.57:/samba-vol  /mnt/glusterfs  glusterfs  defaults  0  0

Commandline mount succeeds with:

glusterfs#192.168.132.57:/samba-vol on /mnt/glusterfs type fuse
(rw,allow_other,default_permissions,max_read=131072)

Now glusterd has worked for many weeks. But now, when I create a file (e.g.
one.txt) in /mnt/glusterfs/, the file is replicated to the other node fine.
But after vim one.txt and changing the content of that file on one node, the
changes are not replicated to the other node
until I restart both nodes.
I did a "find /mnt/glusterfs -print0 | xargs --null stat >/dev/null"
with no success.
Any idea???

Greetings Daniel

-------
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Automatic Failover

2011-03-08 Thread Daniel Müller
What exactly are you trying to do?
Do you mean apache glusterfs over webdav??!!

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Ursprüngliche Nachricht-
Von: Mohit Anchlia [mailto:mohitanch...@gmail.com] 
Gesendet: Dienstag, 8. März 2011 19:45
An: muel...@tropenklinik.de; gluster-users@gluster.org
Betreff: Re: [Gluster-users] Automatic Failover

Thanks! I am still trying to figure out how it works with HTTP. In
some docs I see that GFS supports HTTP and I am not sure how that
works. Does anyone know how that works or what has to be done to make
it available over HTTP?

On Mon, Mar 7, 2011 at 11:14 PM, Daniel Müller 
wrote:
> If you use a http-server,samba... ex:apache you need ucarp. There is a
> little blog about it:
>
http://www.misdivision.com/blog/setting-up-a-highly-available-storage-cluste
> r-using-glusterfs-and-ucarp
>
> Ucarp gives you a virt. Ip for your httpd. if the one is down the second
> with the same IP is still alive and serving. Glusterfs serves
> the data in ha mode. Try it it works like a charm.
>
> -------
> EDV Daniel Müller
>
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen
>
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: muel...@tropenklinik.de
> Internet: www.tropenklinik.de
> ---
>
> -Ursprüngliche Nachricht-
> Von: Mohit Anchlia [mailto:mohitanch...@gmail.com]
> Gesendet: Montag, 7. März 2011 18:47
> An: muel...@tropenklinik.de; gluster-users@gluster.org
> Betreff: Re: [Gluster-users] Automatic Failover
>
> Thanks! Can you please give me some scenarios where applications will
> need ucarp, are you referring to nfs clients?
>
> Does glusterfs support http also? can't find any documentation.
>
> On Mon, Mar 7, 2011 at 12:34 AM, Daniel Müller 
> wrote:
>> Yes the normal way the clients will follow the failover but some
>> applications and services running on top
>> Of glusterfs will not and will need this.
>>
>> ---
>> EDV Daniel Müller
>>
>> Leitung EDV
>> Tropenklinik Paul-Lechler-Krankenhaus
>> Paul-Lechler-Str. 24
>> 72076 Tübingen
>>
>> Tel.: 07071/206-463, Fax: 07071/206-499
>> eMail: muel...@tropenklinik.de
>> Internet: www.tropenklinik.de
>> ---
>>
>> -Ursprüngliche Nachricht-
>> Von: Mohit Anchlia [mailto:mohitanch...@gmail.com]
>> Gesendet: Freitag, 4. März 2011 18:36
>> An: muel...@tropenklinik.de; gluster-users@gluster.org
>> Betreff: Re: [Gluster-users] Automatic Failover
>>
>> Thanks! I thought for native clients failover is inbuilt. Isn't that
true?
>>
>> On Thu, Mar 3, 2011 at 11:46 PM, Daniel Müller 
>> wrote:
>>> I use ucarp to make a real failover for my gluster-vols
>>> For the clients:
>>> Ex:In  my ucarp config, vip-001.conf on both nodes (or nnn... nodes):
>>>
>>> vip-001.conf on node1
>>> #ID
>>> ID=001
>>> #Network Interface
>>> BIND_INTERFACE=eth0
>>> #Real IP
>>> SOURCE_ADDRESS=192.168.132.56
>>> #Virtual IP used by ucarp
>>> VIP_ADDRESS=192.168.132.58
>>> #Ucarp Password
>>> PASSWORD=Password
>>>
>>> On node2
>>>
>>> #ID
>>> ID=002
>>> #Network Interface
>>> BIND_INTERFACE=eth0
>>> #Real IP
>>> SOURCE_ADDRESS=192.168.132.57
>>> #Virtual IP used by ucarp
>>> VIP_ADDRESS=192.168.132.58
>>> #Ucarp Password
>>> PASSWORD=Password
>>>
>>> Then
>>> mount -t glusterfs 192.168.132.58:/samba-vol /mnt/glusterfs
>>> Set enries in fstab:
>>>
>>> 192.168.132.58:/samba-vol /mnt/glusterfs glusterfs defaults 0 0
>>>
>>> Now one Server fails there is still the same service on for the clients.
>>>
>>> ---
>>> EDV Daniel Müller
>>>
>>> Leitung EDV
>>> Tropenklinik Paul-Lechler-Krankenhaus
>>> Paul-Lechler-Str. 24
>>> 72076 Tübingen
>>>
>>> Tel.: 07071/206-463, Fax: 07071/206-499
>>> eMail: muel...@tropenklinik.de
>>> Internet: www.tropenklinik.de
>>> --

Re: [Gluster-users] glusterfs 3.1.2 CIFS problem

2011-03-08 Thread Daniel Müller
Could it be possible, too, to point out the issue with MS Office files? So far
there is no hint anywhere that they cannot be managed.

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Ursprüngliche Nachricht-
Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von
schro...@iup.physik.uni-bremen.de
Gesendet: Dienstag, 8. März 2011 08:31
An: Amar Tumballi
Cc: gluster-users@gluster.org
Betreff: Re: [Gluster-users] glusterfs 3.1.2 CIFS problem

Zitat von Amar Tumballi :

Hi Amar,

thanks for clearing that up.
Could it be possible to get this onto the webpage as well?

http://gluster.com/community/documentation/index.php/Gluster_3.1:_Using_CIFS_with_Gluster

To a "newbie" it sounds like CIFS is built in (native), the same as NFS.

Thanks and Regards
Heiko


> Hi Heiko,
>
> GlusterFS doesn't support native CIFS protocol (like NFS), instead what we
> say is, you can export the glusterfs mount point as the samba export, and
> then mount it using CIFS protocol.
>
> Regards,
> Amar
>
> On Mon, Mar 7, 2011 at 10:28 PM, 
wrote:
>
>> Hello,
>>
>> i'am testing glusterfs 3.1.2.
>>
>> AFAIK CIFS and NFS is incorporated in the glusterd.
>> But i could only get the NFS mount working.
>>
>> Running an "smbclient -L //glusterfs_server/..." will only produce
>> "Connection to cfstester failed (Error NT_STATUS_CONNECTION_REFUSED)".
>> So it looks like that there is no SMB/CIFS port to connect to.
>>
>> The docs are not very specific about how to use the CIFS interface.
>>
>> What has to be done from the CLI to get the CIFS protocol up and running
?
>> Is there anything special inside the vol configs to take care of ?
>>
>> Thanks and Regards
>> Heiko
>>
>> 
>> This message was sent using IMP, the Internet Messaging Program.
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
>




This message was sent using IMP, the Internet Messaging Program.


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs 3.1.2 CIFS problem

2011-03-08 Thread Daniel Müller
Hello,
ON centos 5.5 X64
[root@ctdb1 ~]# smbclient  -L //192.168.132.56 <--- my glusterserver1
Enter root's password:
Domain=[LDAP.NET] OS=[Unix] Server=[Samba 3.5.6]

Sharename   Type  Comment
-     ---
share1  Disk
office  Disk
IPC$IPC   IPC Service (Samba 3.5.6)
rootDisk  Heimatverzeichnis root
Domain=[LDAP.NET] OS=[Unix] Server=[Samba 3.5.6]

Server   Comment
----
CTDB1Samba 3.5.6
CTDB2Samba 3.5.6

WorkgroupMaster
----
LDAP.NET CTDB1

[root@ctdb1 ~]# smbclient  -L //192.168.132.57<---my glusterserver2
Enter root's password:
Domain=[LDAP.NET] OS=[Unix] Server=[Samba 3.5.6]

Sharename   Type  Comment
-     ---
share1  Disk
IPC$IPC   IPC Service (Samba 3.5.6)
rootDisk  Heimatverzeichnis root
Domain=[LDAP.NET] OS=[Unix] Server=[Samba 3.5.6]

Server   Comment
----
CTDB1Samba 3.5.6
CTDB2Samba 3.5.6

WorkgroupMaster
----
LDAP.NET CTDB1

You need Samba to run and a share pointing to a gluster volume mount.
But be aware of MS Office files: I never managed to get saving and changing
them on a glusterfs share to work as it should.
This is the only reason why I do not use gluster in my production
environment.

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Ursprüngliche Nachricht-
Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von
schro...@iup.physik.uni-bremen.de
Gesendet: Montag, 7. März 2011 17:59
An: gluster-users@gluster.org
Betreff: [Gluster-users] glusterfs 3.1.2 CIFS problem

Hello,

I'm testing glusterfs 3.1.2.

AFAIK CIFS and NFS are incorporated in glusterd,
but I could only get the NFS mount working.

Running an "smbclient -L //glusterfs_server/..." will only produce  
"Connection to cfstester failed (Error NT_STATUS_CONNECTION_REFUSED)".
So it looks like there is no SMB/CIFS port to connect to.

The docs are not very specific about how to use the CIFS interface.

What has to be done from the CLI to get the CIFS protocol up and running?
Is there anything special inside the vol configs to take care of?

Thanks and Regards
Heiko


This message was sent using IMP, the Internet Messaging Program.


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Automatic Failover

2011-03-07 Thread Daniel Müller
If you use an http server, samba, etc. (e.g. apache), you need ucarp. There is a
little blog about it:
http://www.misdivision.com/blog/setting-up-a-highly-available-storage-cluster-using-glusterfs-and-ucarp

Ucarp gives you a virtual IP for your httpd. If one server is down, the second
one with the same IP is still alive and serving. Glusterfs serves
the data in HA mode. Try it, it works like a charm.

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Ursprüngliche Nachricht-
Von: Mohit Anchlia [mailto:mohitanch...@gmail.com] 
Gesendet: Montag, 7. März 2011 18:47
An: muel...@tropenklinik.de; gluster-users@gluster.org
Betreff: Re: [Gluster-users] Automatic Failover

Thanks! Can you please give me some scenarios where applications will
need ucarp, are you referring to nfs clients?

Does glusterfs support http also? can't find any documentation.

On Mon, Mar 7, 2011 at 12:34 AM, Daniel Müller 
wrote:
> Yes the normal way the clients will follow the failover but some
> applications and services running on top
> Of glusterfs will not and will need this.
>
> -------
> EDV Daniel Müller
>
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen
>
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: muel...@tropenklinik.de
> Internet: www.tropenklinik.de
> ---
>
> -Ursprüngliche Nachricht-
> Von: Mohit Anchlia [mailto:mohitanch...@gmail.com]
> Gesendet: Freitag, 4. März 2011 18:36
> An: muel...@tropenklinik.de; gluster-users@gluster.org
> Betreff: Re: [Gluster-users] Automatic Failover
>
> Thanks! I thought for native clients failover is inbuilt. Isn't that true?
>
> On Thu, Mar 3, 2011 at 11:46 PM, Daniel Müller 
> wrote:
>> I use ucarp to make a real failover for my gluster-vols
>> For the clients:
>> Ex:In  my ucarp config, vip-001.conf on both nodes (or nnn... nodes):
>>
>> vip-001.conf on node1
>> #ID
>> ID=001
>> #Network Interface
>> BIND_INTERFACE=eth0
>> #Real IP
>> SOURCE_ADDRESS=192.168.132.56
>> #Virtual IP used by ucarp
>> VIP_ADDRESS=192.168.132.58
>> #Ucarp Password
>> PASSWORD=Password
>>
>> On node2
>>
>> #ID
>> ID=002
>> #Network Interface
>> BIND_INTERFACE=eth0
>> #Real IP
>> SOURCE_ADDRESS=192.168.132.57
>> #Virtual IP used by ucarp
>> VIP_ADDRESS=192.168.132.58
>> #Ucarp Password
>> PASSWORD=Password
>>
>> Then
>> mount -t glusterfs 192.168.132.58:/samba-vol /mnt/glusterfs
>> Set enries in fstab:
>>
>> 192.168.132.58:/samba-vol /mnt/glusterfs glusterfs defaults 0 0
>>
>> Now one Server fails there is still the same service on for the clients.
>>
>> ---
>> EDV Daniel Müller
>>
>> Leitung EDV
>> Tropenklinik Paul-Lechler-Krankenhaus
>> Paul-Lechler-Str. 24
>> 72076 Tübingen
>>
>> Tel.: 07071/206-463, Fax: 07071/206-499
>> eMail: muel...@tropenklinik.de
>> Internet: www.tropenklinik.de
>> ---
>> -Ursprüngliche Nachricht-
>> Von: gluster-users-boun...@gluster.org
>> [mailto:gluster-users-boun...@gluster.org] Im Auftrag von Mohit Anchlia
>> Gesendet: Freitag, 4. März 2011 03:46
>> An: gluster-users@gluster.org
>> Betreff: [Gluster-users] Automatic Failover
>>
>> I am actively trying to find out how automatic failover works but not
>> able to find how? Is it in the mount option or where do we give if
>> node 1 is down then write to node 2 or something like that. Can
>> someone please explain how it work?
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
>>
>
>

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Automatic Failover

2011-03-03 Thread Daniel Müller
I use ucarp to make a real failover of my gluster vols
for the clients.
Ex: In my ucarp config, vip-001.conf on both nodes (or on nnn... nodes):

vip-001.conf on node1
#ID
ID=001
#Network Interface
BIND_INTERFACE=eth0
#Real IP
SOURCE_ADDRESS=192.168.132.56
#Virtual IP used by ucarp
VIP_ADDRESS=192.168.132.58
#Ucarp Password
PASSWORD=Password

On node2

#ID
ID=002
#Network Interface
BIND_INTERFACE=eth0
#Real IP
SOURCE_ADDRESS=192.168.132.57
#Virtual IP used by ucarp
VIP_ADDRESS=192.168.132.58
#Ucarp Password
PASSWORD=Password

Then 
mount -t glusterfs 192.168.132.58:/samba-vol /mnt/glusterfs
Set entries in fstab:

192.168.132.58:/samba-vol /mnt/glusterfs glusterfs defaults 0 0

Now if one server fails, there is still the same service available for the clients.

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Ursprüngliche Nachricht-
Von: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] Im Auftrag von Mohit Anchlia
Gesendet: Freitag, 4. März 2011 03:46
An: gluster-users@gluster.org
Betreff: [Gluster-users] Automatic Failover

I am actively trying to find out how automatic failover works but am not
able to find out how. Is it in the mount options, or where do we specify that
if node 1 is down then write to node 2, or something like that? Can
someone please explain how it works?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] WG: CentOs 5.5 Glusterfs 3.1.0 Samba 3.5 MSOffice Files

2011-02-07 Thread Daniel Müller
Hello,

It seems that what you mentioned is the most exact and accurate, but in my
case I run a PDC and BDC with Samba, and CTDB is not meant for that use.
Only a member server or file server can be clustered with CTDB. If glusterfs
is just not working or not doing the job for that, I will fall back to DRBD
with GFS2 or OCFS2.

Greetings
Daniel


On Mon, 7 Feb 2011 11:32:10 -0500, "Burnash, James" 
wrote:
> Hello.
> 
> I hope the Gluster developers feel free to correct me in this but ... a
> lot of people seem to misunderstand the purpose of the Gluster File
system:
> 
> - It is NOT a general purpose file system.
> - It is not really a replacement for NFS - rather, it enables NFS access
> (as a client) to allow use of Glusterfs by servers that can't (or don't
> want to) install the native GlusterFS client.
> - It is not designed for use with files under 1MB in size. That is not
to
> say it won't work, after a fashion - but it calls to mind the saying ...
> "When all you have is a hammer, everything looks like a nail!". Use the
> right tool for the right task.
> 
> GlusterFS IS optimized for large files being (mostly) read by many
clients
> for the purposes of analysis, simulation, or (in the case of large image
> files) display and / or rendering.
> - It supports parallel access to duplicate copies of files
> - It supports high availability if mirroring is configured, allowing
> single nodes to fail or be replaced / serviced /upgraded without
bringing
> down the file system - though at the expense of reduced I/O due to that
> node no longer handling file requests.
> - It can scale to very high levels of I/O with higher numbers of storage
> nodes / pools since each new storage node increases the bandwidth
available
> to access files stored in a given pool.
> - It distributes files semi / sort of evenly across multiple nodes to
> decrease the possibility of hot spots in file access.
> 
> Given the above attributes, there are some things that it is not going
to
> do as well as alternate solutions:
> - Home directory replacement - small files constantly accessed by a
single
> user
> - Software build storage - hundreds / thousands of small files being
read
> and written simultaneously
> - Samba server backend storage - see the two points above.
> 
> Samba has its own clustering software that is specifically designed to
> complement and enhance Samba - CTDB http://ctdb.samba.org/
> It does rely on an underlying cluster file system itself - but that file
> system should support small files and many clients simultaneously
accessing
> said files. NFS will work better than GlusterFS at this task - and there
> are others.
> 
> 
> I'll give the standard disclaimer, here.
> 
> I don't believe myself to be a GlusterFS expert - I've only been working
> with it for one year so far, and believe me, that is not enough time, in
my
> opinion, to make me one.
> 
> I have implemented a 6 node storage pool serving hundreds of clients.
> Originally this was implemented in version 3.0.4 - I've migrated this to
> 3.1.1 smoothly (though with a LOT of work up front), and all that
remains
> is to integrate the legacy 3.0.4 servers in to the new configuration.
> 
> This is just my opinion, and an earnest attempt to clarify at least my
> view of this amazing and useful tool. Other opinions and views are
heartily
> welcomed.
> 
> Thanks,
> 
> 
> James Burnash, Unix Engineering
> 
> -Original Message-
> From: gluster-users-boun...@gluster.org
> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Daniel Müller
> Sent: Monday, February 07, 2011 10:48 AM
> To: 'Luis Cerezo'
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] WG: CentOs 5.5 Glusterfs 3.1.0 Samba 3.5
> MSOffice Files
> 
> I did both settings. And it did not help.
> Do you have a working Gluster with samba and office files?
> 
> Here are my samba-settings:
> [test]
> path=/mnt/glusterfs/windows/test
> readonly=no
> profile acls = YES
> oplocks=NO
> level2 oplocks=NO
> # disable oplocks on the share for the following file types
> veto oplock files =
> /*.mdb/*.MDB/*.dbf/*.DBF/*.doc/*.docx/*.xls/*.xlsx/*.tmp/*.TMP/?~$*/~$*/*.exe/*.com
> write list="Domain Users" "Domain Admins"
> create mask = 2770
> force create mode=2770
> 
> 
> EDV Daniel Müller
> 
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen 
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: muel...@tropenklinik.de
> Internet: www.tropenklinik.de 
> 
> From: Luis Cerezo [mailto:l...@l

Re: [Gluster-users] WG: CentOs 5.5 Glusterfs 3.1.0 Samba 3.5 MSOffice Files

2011-02-07 Thread Daniel Müller
I did both settings. And it did not help.
Do you have a working Gluster with samba and office files?

Here are my samba-settings:
[test]
path=/mnt/glusterfs/windows/test
readonly=no
profile acls = YES
oplocks=NO
level2 oplocks=NO
# disable oplocks on the share for the following file types
veto oplock files =
/*.mdb/*.MDB/*.dbf/*.DBF/*.doc/*.docx/*.xls/*.xlsx/*.tmp/*.TMP/?~$*/~$*/*.exe/*.com
write list="Domain Users" "Domain Admins"
create mask = 2770
force create mode=2770
    

EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de 

From: Luis Cerezo [mailto:l...@luiscerezo.org] 
Sent: Monday, February 7, 2011 15:21
To: muel...@tropenklinik.de
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] WG: CentOs 5.5 Glusterfs 3.1.0 Samba 3.5
MSOffice Files

I don't think this is a gluster issue. 

This could be an XP issue: http://support.microsoft.com/?id=812937

or a samba issue. 
Try setting this on the share in question

        oplocks = No
        level2 oplocks = No

I hope this helps..


On Feb 7, 2011, at 2:11 AM, Daniel Müller wrote:


Dear all,

no solutions!? So MS Office files cannot be worked on from a gluster share?!
Word 2007 docs can be created and written to the share once; when opened again they
are read-only (another user); no change is possible.
Excel files behave the same.
When I do the same actions on a non-GlusterFS volume, everything works as it
should!!!
Unless I can get a hint to work around this, I will have to abandon glusterd.

Daniel



-----------
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Original Message-
From: Daniel Müller [mailto:muel...@tropenklinik.de] 
Sent: Thursday, February 3, 2011 13:25
To: 'gluster-users@gluster.org'
Subject: CentOs 5.5 Glusterfs 3.1.0 Samba 3.5 MSOffice Files

Dear all,

I succeeded in setting up a two-node gluster share (replication) for Samba.

“[root@ctdb1 test]# gluster peer status
Number of Peers: 1

Hostname: 192.168.132.57
Uuid: 87159003-b9ec-4eba-9761-74178d406609
State: Peer in Cluster (Connected)”

“[root@ctdb1 test]# gluster volume info

Volume Name: samba-vol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.132.56:/glusterfs/export
Brick2: 192.168.132.57:/glusterfs/export
Options Reconfigured:
network.ping-timeout: 5”

Everything is working fine except for writing MS Office Word and Excel files to the
gluster volume.
Every Word doc is read-only after it is created, and the same goes for Excel files!!??

Besides the gluster volume I tested another Samba share without GlusterFS, and
there it works
as it should.

So how can I get it to work on my gluster volume?

-------
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Luis E. Cerezo

http://www.luiscerezo.org
http://twitter.com/luiscerezo
http://flickr.com/photos/luiscerezo
Voice: 412 223 7396


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] WG: CentOs 5.5 Glusterfs 3.1.0 Samba 3.5 MSOffice Files

2011-02-07 Thread Daniel Müller
Dear all,

no solutions!? So MS Office files cannot be worked on from a gluster share?!
Word 2007 docs can be created and written to the share once; when opened again they
are read-only (another user); no change is possible.
Excel files behave the same.
When I do the same actions on a non-GlusterFS volume, everything works as it
should!!!
Unless I can get a hint to work around this, I will have to abandon glusterd.

Daniel



---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Original Message-
From: Daniel Müller [mailto:muel...@tropenklinik.de] 
Sent: Thursday, February 3, 2011 13:25
To: 'gluster-users@gluster.org'
Subject: CentOs 5.5 Glusterfs 3.1.0 Samba 3.5 MSOffice Files

Dear all,

I succeeded in setting up a two-node gluster share (replication) for Samba.

“[root@ctdb1 test]# gluster peer status
Number of Peers: 1

Hostname: 192.168.132.57
Uuid: 87159003-b9ec-4eba-9761-74178d406609
State: Peer in Cluster (Connected)”

“[root@ctdb1 test]# gluster volume info

Volume Name: samba-vol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.132.56:/glusterfs/export
Brick2: 192.168.132.57:/glusterfs/export
Options Reconfigured:
network.ping-timeout: 5”

Everything is working fine except for writing MS Office Word and Excel files to the
gluster volume.
Every Word doc is read-only after it is created, and the same goes for Excel files!!??

Besides the gluster volume I tested another Samba share without GlusterFS, and
there it works
as it should.

So how can I get it to work on my gluster volume?
 
---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] CentOs 5.5 Glusterfs 3.1.0 Samba 3.5 MSOffice Files

2011-02-03 Thread Daniel Müller
Dear all,

I succeeded in setting up a two-node gluster share (replication) for Samba.

“[root@ctdb1 test]# gluster peer status
Number of Peers: 1

Hostname: 192.168.132.57
Uuid: 87159003-b9ec-4eba-9761-74178d406609
State: Peer in Cluster (Connected)”

“[root@ctdb1 test]# gluster volume info

Volume Name: samba-vol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.132.56:/glusterfs/export
Brick2: 192.168.132.57:/glusterfs/export
Options Reconfigured:
network.ping-timeout: 5”

Everything is working fine except for writing MS Office Word and Excel files to the
gluster volume.
Every Word doc is read-only after it is created, and the same goes for Excel files!!??

Besides the gluster volume I tested another Samba share without GlusterFS, and
there it works
as it should.

So how can I get it to work on my gluster volume?
 
---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster 3.1 newbe question

2010-12-22 Thread Daniel Müller
Thanks for the hint,
I changed network.ping-timeout to "5", but it only seems slightly different. I
would expect the same behavior from gluster
as I get with DRBD??!!

[r...@ctdb1 ~]# gluster volume info all

Volume Name: samba-vol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.132.56:/glusterfs/export
Brick2: 192.168.132.57:/glusterfs/export
Options Reconfigured:
network.ping-timeout: 5


What about network.frame-timeout - can I adjust this parameter to react quickly if
a node
is down???
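
For what it is worth, both options are ordinary volume options and can be set the
same way (the values below are only examples, and frame-timeout may not be exposed
on every build):

    # how long clients wait before declaring an unresponsive server dead (seconds)
    gluster volume set samba-vol network.ping-timeout 5

    # how long an already-sent operation may wait for a reply before it times out (seconds)
    gluster volume set samba-vol network.frame-timeout 60

ping-timeout is the one that governs the pause seen when a peer goes down;
frame-timeout only caps operations that were already in flight, so lowering it is
unlikely to make failover feel faster.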

[r...@ctdb1 ~]# gluster peer status
Number of Peers: 1

Hostname: 192.168.132.57
Uuid: 9c52b89f-a232-4f20-8ff8-9bbc6351ab79
State: Peer in Cluster (Connected)
[r...@ctdb1 ~]# gluster peer status
Number of Peers: 1

Hostname: 192.168.132.57
Uuid: 9c52b89f-a232-4f20-8ff8-9bbc6351ab79
State: Peer in Cluster (Connected)

Or is it in my /etc/glusterfs/glusterd.vol:
[r...@ctdb1 glusterfs]# cat glusterd.vol
volume management
type mgmt/glusterd
option working-directory /etc/glusterd
option transport-type tcp,socket,rdma
option transport.socket.keepalive-time 10
option transport.socket.keepalive-interval 2
end-volume



---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Original Message-
From: Jacob Shucart [mailto:ja...@gluster.com] 
Sent: Tuesday, December 21, 2010 18:40
To: muel...@tropenklinik.de; 'Daniel Maher'; gluster-users@gluster.org
Subject: RE: [Gluster-users] Gluster 3.1 newbe question

Hello,

Please don't write directly to /glusterfs/export, as this is not compatible with
Gluster.  There is a ping timeout which controls how long Gluster will wait
for a node that went down before resuming writes.  By default this value is very
high, so please run:

gluster volume set samba-vol network.ping-timeout 15

Then mount your Gluster volume somewhere and try writing to it.  You will 
see that it will pause for a while and then resume writing.

-Jacob
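
In practice that means clients (including the Samba server itself) should always go
through a glusterfs mount rather than the brick directory; a minimal sketch, with
the mount point chosen only as an example:

    # native FUSE client mount of the replicated volume
    mount -t glusterfs 192.168.132.56:/samba-vol /mnt/glusterfs

    # reads and writes go through the mount, so the client handles replication
    cp /etc/hosts /mnt/glusterfs/testfile

Anything written straight into /glusterfs/export on one brick bypasses the
replication logic, which is why such files never show up on the other node.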

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Daniel Müller
Sent: Tuesday, December 21, 2010 7:29 AM
To: muel...@tropenklinik.de; 'Daniel Maher'; gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster 3.1 newbe question

Hm, this time I did not use the mount point of the volumes.
I wrote directly into /glusterfs/export, and gluster did not hang while the
other peer restarted.
But now the files I wrote in the meantime are not replicated.
What about this?
Is there a command to get them replicated to the other node?

-------
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Daniel Müller
Sent: Tuesday, December 21, 2010 16:07
To: 'Daniel Maher'; gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster 3.1 newbe question

Even with the volume started it is the same; perhaps I missed something:
[r...@ctdb1 ~]# gluster volume info

Volume Name: samba-vol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.132.56:/glusterfs/export
Brick2: 192.168.132.57:/glusterfs/export

I created the volumes like:
gluster volume create samba-vol  replica 2 transport tcp 
192.168.132.56:/glusterfs/export 192.168.132.57:/glusterfs/export

Both are mounted:

# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
glusterfs#192.168.132.56:/samba-vol on /mnt/glusterfs type fuse 
(rw,allow_other,default_permissions,max_read=131072)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)



-----------
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Daniel Maher
Sent: Tuesday, December 21, 2010 15:21
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster 3.1 newbe question

On 12/21/2010 02:54 PM, Daniel Müller wrote:

> I have build up a

Re: [Gluster-users] Gluster 3.1 newbe question

2010-12-21 Thread Daniel Müller
Hm, this time I did not use the mount point of the volumes.
I wrote directly into /glusterfs/export, and gluster did not hang while the other
peer restarted.
But now the files I wrote in the meantime are not replicated.
What about this?
Is there a command to get them replicated to the other node?
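
A rough workaround on this release (assuming the volume is mounted at
/mnt/glusterfs): there is no dedicated heal command in 3.1 yet, and replicate
self-heal is usually triggered by walking the volume through a client mount so
that every file gets looked up:

    find /mnt/glusterfs -noleaf -print0 | xargs --null stat > /dev/null

Whether that recovers files written directly into the brick directory is another
matter; writes are supposed to go through the mount point in the first place.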

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Daniel Müller
Sent: Tuesday, December 21, 2010 16:07
To: 'Daniel Maher'; gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster 3.1 newbe question

Even with the volume started it is the same; perhaps I missed something:
[r...@ctdb1 ~]# gluster volume info

Volume Name: samba-vol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.132.56:/glusterfs/export
Brick2: 192.168.132.57:/glusterfs/export

I created the volumes like:
gluster volume create samba-vol  replica 2 transport tcp 
192.168.132.56:/glusterfs/export 192.168.132.57:/glusterfs/export

Both are mounted:

# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
glusterfs#192.168.132.56:/samba-vol on /mnt/glusterfs type fuse 
(rw,allow_other,default_permissions,max_read=131072)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)



---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Daniel Maher
Sent: Tuesday, December 21, 2010 15:21
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster 3.1 newbe question

On 12/21/2010 02:54 PM, Daniel Müller wrote:

> I have built up a two-peer gluster on CentOS 5.5 x64
> My Version:
> glusterfs --version
> glusterfs 3.1.0 built on Oct 13 2010 10:06:10
> Repository revision: v3.1.0
> Copyright (c) 2006-2010 Gluster Inc.<http://www.gluster.com>
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> You may redistribute copies of GlusterFS under the terms of the GNU Affero
> General Public License.
>
> I set up my bricks and vols with success:
> [r...@ctdb1 peers]# gluster volume info
>
> Volume Name: samba-vol
> Type: Replicate
> Status: Created
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.132.56:/glusterfs/export
> Brick2: 192.168.132.57:/glusterfs/export
>
> And mounted them; everything great.
> But when testing: writing on one peer server while the other is restarted or
> down, gluster hangs until the second
> is online again.
> How do I get around this? Users should work without interruption
> or waiting, and the files should be replicated
> again after the peers are online again!??

Hello,

I have exactly the same setup and am (literally) testing it as I type
this, and I can bring one of the nodes up and down as much as I like
without causing an interruption on the other.

I notice that your output says "Status: Created" instead of "Status: 
Started".  I don't know if that has anything to do with it, but it is 
notable.

-- 
Daniel Maher 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster 3.1 newbe question

2010-12-21 Thread Daniel Müller
Even with the volume started it is the same; perhaps I missed something:
[r...@ctdb1 ~]# gluster volume info

Volume Name: samba-vol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.132.56:/glusterfs/export
Brick2: 192.168.132.57:/glusterfs/export

I created the volumes like:
gluster volume create samba-vol  replica 2 transport tcp 
192.168.132.56:/glusterfs/export 192.168.132.57:/glusterfs/export

Both are mounted:

# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
glusterfs#192.168.132.56:/samba-vol on /mnt/glusterfs type fuse 
(rw,allow_other,default_permissions,max_read=131072)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)



---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Daniel Maher
Sent: Tuesday, December 21, 2010 15:21
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster 3.1 newbe question

On 12/21/2010 02:54 PM, Daniel Müller wrote:

> I have built up a two-peer gluster on CentOS 5.5 x64
> My Version:
> glusterfs --version
> glusterfs 3.1.0 built on Oct 13 2010 10:06:10
> Repository revision: v3.1.0
> Copyright (c) 2006-2010 Gluster Inc.<http://www.gluster.com>
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> You may redistribute copies of GlusterFS under the terms of the GNU Affero
> General Public License.
>
> I set up my bricks and vols with success:
> [r...@ctdb1 peers]# gluster volume info
>
> Volume Name: samba-vol
> Type: Replicate
> Status: Created
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.132.56:/glusterfs/export
> Brick2: 192.168.132.57:/glusterfs/export
>
> And mounted them; everything great.
> But when testing: writing on one peer server while the other is restarted or
> down, gluster hangs until the second
> is online again.
> How do I get around this? Users should work without interruption
> or waiting, and the files should be replicated
> again after the peers are online again!??

Hello,

I have exactly the same setup and am (literally) testing it as I type
this, and I can bring one of the nodes up and down as much as I like
without causing an interruption on the other.

I notice that your output says "Status: Created" instead of "Status: 
Started".  I don't know if that has anything to do with it, but it is 
notable.

-- 
Daniel Maher 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Gluster 3.1 newbe question

2010-12-21 Thread Daniel Müller
First of all hello to all,

I have built up a two-peer gluster on CentOS 5.5 x64
My Version:
glusterfs --version
glusterfs 3.1.0 built on Oct 13 2010 10:06:10
Repository revision: v3.1.0
Copyright (c) 2006-2010 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU Affero
General Public License.

I set up my bricks and vols with success:
[r...@ctdb1 peers]# gluster volume info

Volume Name: samba-vol
Type: Replicate
Status: Created
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.132.56:/glusterfs/export
Brick2: 192.168.132.57:/glusterfs/export

And mounted them; everything great.
But when testing: writing on one peer server while the other is restarted or
down, gluster hangs until the second
is online again.
How do I get around this? Users should work without interruption
or waiting, and the files should be replicated
again after the peers are online again!??

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users