Re: [Gluster-users] V3.0 and rsync crash

2009-12-14 Thread Harshavardhana
Here it goes: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=469
Add yourself to the CC list.

Regards
--
Harshavardhana
Gluster - http://www.gluster.com


On Tue, Dec 15, 2009 at 6:09 AM, Harshavardhana wrote:

> [snip: the quoted message appears in full as the next archive entry below]

Re: [Gluster-users] V3.0 and rsync crash

2009-12-14 Thread Harshavardhana
Hi Larry,

Your configuration is currently not supported, so you may encounter issues
like the ones in your mail below: the namespace and scheduler options belong
to the old cluster/unify translator and are not valid for cluster/distribute.

I would suggest the following configuration for your setup:

volume vol1

 type storage/posix  # POSIX FS translator

 option directory /mnt/glusterfs/vol1   # Export this directory

end-volume


volume vol2

 type storage/posix

 option directory /mnt/glusterfs/vol2

end-volume


## Add network serving capability to above unified bricks

volume server

 type protocol/server

 option transport-type tcp   # For TCP/IP transport

 subvolumes vol1 vol2

 option auth.addr.vol1.allow 10.0.0.*  # access to volume

 option auth.addr.vol2.allow 10.0.0.*

end-volume



client config file:



volume brick1

 type protocol/client

 option transport-type tcp   # for TCP/IP transport

 option remote-host gfs001   # IP address of the remote volume

 option remote-subvolume vol1   # name of the remote volume

end-volume
volume brick2

 type protocol/client

 option transport-type tcp   # for TCP/IP transport

 option remote-host gfs001   # IP address of the remote volume

 option remote-subvolume vol2   # name of the remote volume

end-volume


volume bricks

 type cluster/distribute

 subvolumes brick1 brick2

end-volume
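
To bring this up, something along these lines should work (the volfile paths
and mount point below are only examples, adjust to where you keep yours):

# on the server (gfs001), start glusterfsd with the server volfile above
glusterfsd -f /etc/glusterfs/glusterfsd.vol

# on the client, create a mount point and mount using the client volfile above
mkdir -p /mnt/gluster
glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/gluster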


Let us know how this works. I will also file a bug to track it.

Regards
--
Harshavardhana
Gluster - http://www.gluster.com


On Tue, Dec 15, 2009 at 5:49 AM, Larry Bates wrote:

>  Sure. When Ctrl-C is pressed on the client (to terminate rsync), the logs
> show:
>
>
>
> Client log tail:
>
>
>
> [2009-12-14 08:45:40] N [fuse-bridge.c:2931:fuse_init] glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.8
> [2009-12-14 18:15:45] E [saved-frames.c:165:saved_frames_unwind] storage: forced unwinding frame type(1) op(UNLINK)
> [2009-12-14 18:15:45] W [fuse-bridge.c:1207:fuse_unlink_cbk] glusterfs-fuse: 8046528: UNLINK() /storage/blobdata/20/40/.49f4020ad397eb689df4e83770a9.8Zirsl => -1 (Transport endpoint is not connected)
> [2009-12-14 18:15:45] W [fuse-bridge.c:1167:fuse_err_cbk] glusterfs-fuse: 8046529: FLUSH() ERR => -1 (Transport endpoint is not connected)
> [2009-12-14 18:15:45] N [client-protocol.c:6972:notify] storage: disconnected
> [2009-12-14 18:15:45] E [socket.c:760:socket_connect_finish] storage: connection to 10.0.0.91:6996 failed (Connection refused)
> [2009-12-14 18:15:45] E [socket.c:760:socket_connect_finish] storage: connection to 10.0.0.91:6996 failed (Connection refused)
>
>
>
> Server log tail:
>
>
>
> [2009-12-14 08:52:57] N [server-protocol.c:5809:mop_setvolume] server: accepted client from 10.0.0.71:1021
> [2009-12-14 08:52:57] N [server-protocol.c:5809:mop_setvolume] server: accepted client from 10.0.0.71:1020
> [2009-12-14 09:51:45] W [posix.c:246:posix_lstat_with_gen] brick1: Access to /mnt/glusterfs/vol1//.. (on dev 2304) is crossing device (2052)
> [2009-12-14 09:51:45] W [posix.c:246:posix_lstat_with_gen] brick2: Access to /mnt/glusterfs/vol2//.. (on dev 2304) is crossing device (2068)
>
> pending frames:
> frame : type(1) op(UNLINK)
>
> patchset: 2.0.1-886-g8379edd
> signal received: 11
> time of crash: 2009-12-14 18:23:08
> configuration details:
> argp 1
> backtrace 1
> dlfcn 1
> fdatasync 1
> libpthread 1
> llistxattr 1
> setfsid 1
> spinlock 1
> epoll.h 1
> xattr.h 1
> st_atim.tv_nsec 1
> package-string: glusterfs 3.0.0
>
> /lib64/libc.so.6[0x35702302d0]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so[0x2ad7fd3820d3]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_unlink_cbk+0x265)[0x2ad7fd383d75]
> /usr/lib64/glusterfs/3.0.0/xlator/cluster/distribute.so(dht_unlink_cbk+0x1d5)[0x2ad7fd1619fa]
> /usr/lib64/glusterfs/3.0.0/xlator/storage/posix.so(posix_unlink+0x6cc)[0x2ad7fcf39b9f]
> /usr/lib64/glusterfs/3.0.0/xlator/cluster/distribute.so(dht_unlink+0x530)[0x2ad7fd16a54f]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_unlink_resume+0x17e)[0x2ad7fd389a81]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_resolve_done+0x59)[0x2ad7fd395970]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_resolve_all+0xea)[0x2ad7fd395a61]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_resolve+0xce)[0x2ad7fd395910]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_resolve_all+0xc5)[0x2ad7fd395a3c]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_resolve_entry+0xb1)[0x2ad7fd395559]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_resolve+0x7d)[0x2ad7fd3958bf]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_resolve_all+0x76)[0x2ad7fd3959ed]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/serve

Re: [Gluster-users] V3.0 and rsync crash

2009-12-14 Thread Harshavardhana
Hi Larry,

  Can you give us more info, along with the log files?

Regards
--
Harshavardhana
Gluster - http://www.gluster.com


On Mon, Dec 14, 2009 at 8:25 PM, Larry Bates wrote:

> [snip: the original message appears in full as a separate archive entry below]
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Native GlusterFS Client with the new Gluster Storage Platform

2009-12-14 Thread Andrew Mitry
Thanks; y4m4 helped me out in the chat. I needed to create the mount point
first.
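
For the archive, the sequence that worked for me was roughly:

mkdir -p /mnt/glusterfs
mount -t glusterfs 192.168.123.10:volume1-tcp /mnt/glusterfs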

On Mon, Dec 14, 2009 at 4:49 PM, Amar Tumballi  wrote:

> > r...@uec1:~# mount -t glusterfs 192.168.123.10:volume1-tcp /mnt/glusterfs
> > Usage: mount.glusterfs : -o
> >  
> > Options:
> > man 8 mount.glusterfs
> >
>
> Can you check the error message at the end of the file
> '/usr/local/var/log/glusterfs/mnt-glusterfs.log' or
> '/var/log/glusterfs/mnt-glusterfs.log'?
>
> Regards,
> Amar
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Native GlusterFS Client with the new Gluster Storage Platform

2009-12-14 Thread Amar Tumballi
> r...@uec1:~# mount -t glusterfs 192.168.123.10:volume1-tcp /mnt/glusterfs
> Usage: mount.glusterfs : -o
>  
> Options:
> man 8 mount.glusterfs
> 

Can you check the error message at the end of the file
'/usr/local/var/log/glusterfs/mnt-glusterfs.log' or
'/var/log/glusterfs/mnt-glusterfs.log'?

Regards,
Amar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Native GlusterFS Client with the new Gluster Storage Platform

2009-12-14 Thread Andrew Mitry
When I try that command I get the following:

r...@uec1:~# mount -t glusterfs 192.168.123.10:volume1-tcp /mnt/glusterfs
Usage:  mount.glusterfs : -o 

Options:
man 8 mount.glusterfs

To display the version number of the mount helper:
mount.glusterfs --version


I am using Ubuntu 9.10 if that makes a difference...

Thanks,
Andrew

On Mon, Dec 14, 2009 at 3:06 PM, Amar Tumballi  wrote:

>
> Hi Andrew,
>
> You just need to do:
>
> mount -t glusterfs 192.168.123.10:<volumename>-tcp /mnt/glusterfs
>
> It should work fine.
>
> Regards,
> Amar
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Native GlusterFS Client with the new Gluster Storage Platform

2009-12-14 Thread Amar Tumballi

Hi Andrew,

You just need to do:

mount -t glusterfs 192.168.123.10:<volumename>-tcp /mnt/glusterfs

It should work fine.

Regards,
Amar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Native GlusterFS Client with the new Gluster Storage Platform

2009-12-14 Thread Andrew Mitry
Hello,

I have successfully installed the Gluster Storage Platform with a volume
(volume1) and also set up a client machine (Ubuntu) with the GlusterFS 3.0 FUSE
client. I've tried creating a volume file locally on the client with the
following settings:

volume client

type protocol/client
option transport-type tcp
option remote-host 192.168.123.10
option remote-subvolume volume1

end-volume

The file system doesn't mount and I get this error in the logs:

 glusterfs: error while getting volume file from server 192.168.123.10

Also, what is the username/password to access the Gluster Storage Platform
via SSH?

Thanks,
Andrew
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] V3.0 and rsync crash

2009-12-14 Thread Larry Bates
I'm a newbie and am setting up GlusterFS for the first time.  Right now I have a
single-server, single-client setup that seemed to be working properly on v2.0.9.

Just upgraded from 2.0.9 to 3.0 and am noticing the following problem:

Server and client setup is working and GlusterFS is mounting on the client
properly. Starting an rsync job to synchronize files between local storage and
the GlusterFS volume works, but interrupting the rsync job with Ctrl-C crashes
the server. Restarting both the server and the client is then required.
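
Roughly what I am running (the local source path here is just an example;
/storage is the GlusterFS mount from the logs):

# sync a local tree onto the GlusterFS mount
rsync -av /local/blobdata/ /storage/blobdata/
# pressing Ctrl-C mid-transfer is what brings the server down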

 

server config file:

 

volume brick1

  type storage/posix  # POSIX FS translator

  option directory /mnt/glusterfs/vol1   # Export this directory

end-volume

 

volume brick2

  type storage/posix

  option directory /mnt/glusterfs/vol2

end-volume

 

volume ns

  type storage/posix

  option directory /mnt/glusterfs-ns

end-volume

 

volume bricks

  type cluster/distribute

  option namespace ns

  option scheduler alu  # adaptive least usage scheduler

  subvolumes brick1 brick2

end-volume

 

## Add network serving capability to above unified bricks

volume server

  type protocol/server

  option transport-type tcp   # For TCP/IP transport

  #option transport.socket.listen-port 6996   # Default is 6996

  #option client-volume-filename /etc/glusterfs/glusterfs-client.vol

  subvolumes bricks

  option auth.addr.bricks.allow 10.0.0.*  # access to volume

end-volume

 

client config file:

 

volume storage

  type protocol/client

  option transport-type tcp   # for TCP/IP transport

  option remote-host gfs001   # IP address of the remote volume

  option remote-subvolume bricks   # name of the remote volume

end-volume

 

 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Glusterfs Homes and mmap 'Bus Error' starting OpenOffice.org

2009-12-14 Thread Benjamin Long
The subject says it all. I upgraded from 2.0.8 to 3.0.0 and now when I try to
start OpenOffice.org I get a 'bus error'. My home directory is mounted as follows:

fstab:
/etc/glusterfs/homes-tcp.vol /home glusterfs defaults,direct-io-mode=disable,auto,_netdev 0 0

'mount' reports this as mounted:
/etc/glusterfs/homes-tcp.vol on /home type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)


$ cat /etc/glusterfs/homes-tcp.vol
## file auto generated by /usr/bin/glusterfs-volgen (mount.vol)
# Cmd line:
# $ /usr/bin/glusterfs-volgen --name homes --raid 1 10.10.2.3:/mnt/home/home 10.10.2.4:/mnt/home/home

# RAID 1
# TRANSPORT-TYPE tcp
volume 10.10.2.4-1  
type protocol/client
option transport-type tcp
option remote-host 10.10.2.4
option transport.socket.nodelay on
option transport.remote-port 6997 
option remote-subvolume home-ds-locks-io-filter
end-volume 

volume 10.10.2.3-1
type protocol/client
option transport-type tcp
option remote-host 10.10.2.3
option transport.socket.nodelay on
option transport.remote-port 6997
option remote-subvolume home-ds-locks-io-filter
end-volume

volume mirror-0
type cluster/replicate
subvolumes 10.10.2.3-1 10.10.2.4-1
end-volume

volume writebehind
type performance/write-behind
option cache-size 4MB
subvolumes mirror-0
end-volume

volume readahead
type performance/read-ahead
option page-count 4
subvolumes writebehind
end-volume

volume iocache
type performance/io-cache
option cache-size 1GB
option cache-timeout 1
subvolumes readahead
end-volume

volume quickread
type performance/quick-read
option cache-timeout 1
option max-file-size 64kB
subvolumes iocache
end-volume

volume statprefetch
type performance/stat-prefetch
subvolumes quickread
end-volume

# cat /etc/glusterfs/glusterfsd.vol 
  
# HOME DATASTORE    

volume home-ds  

  type storage/posix   # POSIX FS translator

  option directory /mnt/home/home   # Export this directory

end-volume  


volume home-ds-locks
  type features/locks
  subvolumes home-ds 
end-volume   

volume home-ds-locks-io
  type performance/io-threads
  option thread-count 8  
  subvolumes home-ds-locks   
end-volume   

volume home-ds-locks-io-filter
  type testing/features/filter
#  option root-squashing enable
  subvolumes home-ds-locks-io
end-volume

# END HOME DATASTORE 


# OFFICE DATASTORE ==
volume office-ds
  type storage/posix
  option directory /mnt/home/Office_Share
end-volume

volume office-ds-locks
  type features/locks
  subvolumes office-ds
end-volume

volume office-ds-locks-io
  type performance/io-threads
  option thread-count 8
  subvolumes office-ds-locks
end-volume

volume office-ds-locks-io-filter
  type testing/features/filter
 # option root-squashing enable
  option fixed-gid 1500
  subvolumes office-ds-locks-io
end-volume

# END OFFICE DATASTORE ==




volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.home-ds-locks-io-filter.allow 10.10.*,127.*   # Allow access to volume
  option auth.addr.office-ds-locks-io-filter.allow 10.10.*,127.*   # Allow access to volume
  option transport.socket.listen-port 6997
  option transport.socket.nodelay on
  subvolumes home-ds-locks-io-filter office-ds-locks-io-filter
end-volume
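
For what it's worth, mmap reads can be exercised without OpenOffice.org: GNU
grep's --mmap switch forces memory-mapped input, so something like this (the
file path is just an example) reads a file on the glusterfs-mounted /home via
mmap:

grep --mmap foo /home/someuser/.profile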


-- 
Benjamin Long
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] VMware ESX

2009-12-14 Thread Tejas N. Bhise
Hello Richard,

Congratulations on your successful setup. Your mail text seems to have been cut
off after "We have previously setup" ...
Can you please resend it so we can understand your question?

Regards,
Tejas.

- Original Message -
From: "Richard Charnley" 
To: gluster-users@gluster.org
Sent: Monday, December 14, 2009 3:04:19 AM GMT +05:30 Chennai, Kolkata, Mumbai, 
New Delhi
Subject: [Gluster-users] VMware ESX


Hi,
 
We run ESX servers in a datacentre and I have managed to successfully set up a
mirrored volume which works really well (I took one node down and the files were
still available). My question is: how can I run an ESX guest on Gluster? We have
previously set up
  
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster Storage Platform on ubuntu 9.10

2009-12-14 Thread Harshavardhana
Hi Mario,

Can you get us a screenshot? Have you provided enough RAM for the VM? And can
you paste the command-line parameters you passed to kvm?
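
For reference, an invocation along these lines normally boots the image under
kvm (the memory size and image name here are only placeholders):

kvm -m 1024 -hda gluster-storage-platform.img -vnc :0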

Thanks
--
Harshavardhana
Gluster - http://www.gluster.com


On Mon, Dec 14, 2009 at 2:30 AM, Mario Giammarco wrote:

> Hello,
> I downloaded the Gluster 3.0 image and I am trying to use it on Ubuntu 9.10.
> I have tried kvm and virt-install, but either way the virtual machine hangs
> at boot.
>
> Can you tell me if you have had any luck with Ubuntu?
>
> Thanks in advance for any help.
>
> Mario
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users