(2011/10/14 18:17), Tomoaki Sato wrote:
Krishna,
Thank you for your comments.
I've changed the script so that it does not repeat the gluster volume set command with the same arguments.
Do you have any plans to make gluster restart-free, like 'exportfs -f' of NFS?
tomo sato
(2011/10/14 18:03), Krishna Srinivas
Hi Tomo Sato,
Using the gluster volume set command will restart the NFS server, so
you should change the script so that it does not restart the NFS
server too often.
You can consult the person who installed the script, as it is not
part of the scripts GlusterFS installs.
Krishna
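One way to avoid the redundant restarts is to make the wrapper idempotent. A minimal sketch, assuming the script can read the current option value back somehow (e.g. by parsing `gluster volume info`); the function and variable names here are hypothetical:

```shell
#!/bin/sh
# Sketch only: skip "gluster volume set" when the option already has
# the desired value, since every set triggers an NFS server restart.
set_option_if_changed() {
    volume=$1 option=$2 desired=$3 current=$4
    if [ "$current" = "$desired" ]; then
        echo "skip"   # value unchanged: no set, no NFS restart
    else
        echo "set"    # caller would run: gluster volume set "$volume" "$option" "$desired"
    fi
}
```

For example, `set_option_if_changed vol nfs.rpc-auth-allow '10.0.0.*' '10.0.0.*'` prints `skip` and the script moves on without touching the volume.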
On Wed, Oct
08:40 /mnt/1GB
[root@vhead-010 ~]# rm /mnt/1GB
rm: remove regular file `/mnt/1GB'? y
[root@vhead-010 ~]# df /mnt
Filesystem 1K-blocks Used Available Use% Mounted on
four-2-private:/four 412849280 270469760 142379520 66% /mnt
Best,
tomo
(2011/08/24 16:23), Krishna Srinivas
Tomo,
Are you using DNS round robin for small?
Thanks
Krishna
On Mon, Aug 22, 2011 at 12:10 PM, Tomoaki Sato ts...@valinux.co.jp wrote:
Shejar,
Where can I see updates on this issue ?
Bugzilla ?
Thanks,
tomo
(2011/08/17 15:06), Shehjar Tikoo wrote:
Thanks for providing the exact
Troy,
Yes, it would work: you can set up 4 servers in a distributed-replicated
configuration acting as an NFS datastore for VMware. If any one of the
storage nodes goes down, the failure will not be seen by the ESX hosts.
Many users use this type of config for storage HA for NFS datastores.
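For reference, a sketch of creating such a 4-node distributed-replicated volume with the gluster CLI (hostnames and brick paths are hypothetical):

```
gluster volume create datastore replica 2 transport tcp \
    server1:/export/brick server2:/export/brick \
    server3:/export/brick server4:/export/brick
gluster volume start datastore
```

With replica 2 and four bricks this gives two replica pairs with files distributed across them, and the ESX hosts can mount the volume's NFS export from any of the servers.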
You can use SATA or SAS.
Dan,
volume perseus
type protocol/client
option transport-type tcp
option remote-host perseus
option remote-port 6996
option remote-subvolume brick1
end-volume
volume romulus
type protocol/client
option transport-type tcp
option remote-host romulus
option remote-port
On Tue, Jun 8, 2010 at 9:27 AM, Krishna Srinivas kris...@gluster.com wrote:
Gordan,
Congratulations and Thanks!! :-)
Krishna
On Mon, Dec 21, 2009 at 4:47 PM, Anand Babu Periasamy a...@gluster.com wrote:
Sorry, small typo: It is Gordan Bobic.
Anand Babu Periasamy wrote:
2009 Gluster Hacker - Award goes to Gordon Bobic for consistently
contributing to the Gluster
On Thu, Sep 10, 2009 at 5:37 PM, Stephan von Krawczynski sk...@ithnet.com wrote:
Only if backed up. Has the trace been shown to the Linux developers?
What do they think?
Maybe we should just ask questions about the source before bothering others...
From 2.0.6 /transport/socket/src/socket.c
Hi Frederico,
You guessed it right: distribute is the new name for DHT, as the name
distribute is more intuitive.
Here is the volfile for the client with distribute+replicate without
any performance translator:
---
volume client1
type protocol/client
option transport-type tcp
option
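Since the quoted volfile is cut off, a complete distribute-over-replicate client volfile might look like the following sketch (hostnames and subvolume names are hypothetical; performance translators omitted, as above):

```
volume client1
type protocol/client
option transport-type tcp
option remote-host server1
option remote-subvolume brick1
end-volume

volume client2
type protocol/client
option transport-type tcp
option remote-host server2
option remote-subvolume brick1
end-volume

volume client3
type protocol/client
option transport-type tcp
option remote-host server3
option remote-subvolume brick1
end-volume

volume client4
type protocol/client
option transport-type tcp
option remote-host server4
option remote-subvolume brick1
end-volume

volume replicate0
type cluster/replicate
subvolumes client1 client2
end-volume

volume replicate1
type cluster/replicate
subvolumes client3 client4
end-volume

volume distribute0
type cluster/distribute
subvolumes replicate0 replicate1
end-volume
```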
mount point as an iSCSI target, not using an
iSCSI drive as a gluster mount point.
Do you have a solution like that?
2009-03-24
eagleeyes
From: Krishna Srinivas
Sent: 2009-03-23 22:53:37
To: eagleeyes
Cc: gluster-users
Subject:
using a stock kernel without any modifications. Nor did I make any changes
to the filesystem for extended attributes (using ext3). Would FUSE work without
any problems?
Thanks!
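A quick way to check whether the backing ext3 filesystem accepts the extended attributes GlusterFS needs is something like the following (the path is hypothetical; requires the attr tools and, for the trusted namespace, root):

```
touch /data/export/.xattr-test
setfattr -n trusted.test -v working /data/export/.xattr-test
getfattr -n trusted.test /data/export/.xattr-test
rm -f /data/export/.xattr-test
```

If setfattr fails, the backend cannot store the attributes GlusterFS relies on.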
2009/3/12 Krishna Srinivas kris...@zresearch.com
server.vol :
-
volume home1
type storage/posix # POSIX
transport-type tcp
subvolumes posix-locks-home1
option auth.addr.posix-locks-home1.allow * # Allow access to home1 volume
end-volume
Regards.
2009/3/15 Krishna Srinivas kris...@zresearch.com
Can you paste your client vol file? and the command you used to mount
the glusterfs?
Krishna
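For reference, a matching client volfile and mount command might look like this sketch (the server address and paths are hypothetical; the remote-subvolume matches the posix-locks-home1 volume the server exports):

```
volume home1
type protocol/client
option transport-type tcp
option remote-host 192.168.253.41
option remote-subvolume posix-locks-home1
end-volume

# mounted with:
#   glusterfs -f /etc/glusterfs/client.vol /mnt/home
```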
On Sun
home.
volume server
type protocol/server
option transport-type tcp
subvolumes posix-locks-home1
option auth.addr.posix-locks-home1.allow 192.168.253.41,127.0.0.1 # Allow access to home1 volume
end-volume
2009/3/9 Krishna Srinivas kris...@zresearch.com
Stats,
I think there was nothing
On Tue, Mar 10, 2009 at 2:23 AM, Keith Freedman freed...@freeformit.com wrote:
At 05:34 AM 3/9/2009, Krishna Srinivas wrote:
Do not use a single process as both server and client, as we saw issues
related to locking. Can you see if using different processes for
server and client works fine w.r.t
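In other words, run the server and the client as two separate processes, e.g. (paths hypothetical, glusterfs 2.x style):

```
glusterfsd -f /etc/glusterfs/server.vol
glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs
```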
Stas,
Was it working for you previously? Any other error logs on the machine
with afr? What version are you using? If it was working previously,
what changed in your setup recently? Can you paste your vol files
(just to be sure)?
Krishna
On Sun, Mar 8, 2009 at 2:28 PM, Stas Oskin
replies inline:
On Mon, Feb 2, 2009 at 3:02 AM, Łukasz Mierzwa l.mier...@gmail.com wrote:
Hi,
I've tested replicate a little today and I've got these problems:
1. client-side replicate with 2 nodes, both acting as server and client
2. I start copying big file to gluster mount point
3.