[Gluster-users] locally mountable cloudstore service?

2009-07-07 Thread Daniel Frith
Sorry for going slightly off-topic here, but this is the only place I
know with people knowledgeable in this area. I've come to the
conclusion that setting up and administering something like GlusterFS
is probably beyond my current skills. So I was wondering if anyone
knew of a service offering infinitely scalable cloud storage (like
Amazon S3) that can also be mounted locally as a file system (so it can
easily be used with existing software).

There are plenty of cloud storage providers springing up, but most of
them use peculiar APIs or are only designed for small-scale,
non-intensive backup purposes.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] locally mountable cloudstore service?

2009-07-07 Thread Ate Poorthuis
Hi Daniel,

You could look at s3fs to mount S3 as a local file system:
http://code.google.com/p/s3fs/wiki/FuseOverAmazon
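
For what it's worth, a typical invocation looks roughly like the
following (bucket name, mount point and credentials are placeholders,
and option names differ between s3fs releases, so please check the wiki
page above for your version):

  # put ACCESSKEYID:SECRETACCESSKEY into /etc/passwd-s3fs, or export the
  # AWSACCESSKEYID / AWSSECRETACCESSKEY environment variables, then:
  s3fs mybucket /mnt/s3

  # unmount like any other FUSE filesystem
  fusermount -u /mnt/s3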

Ate

On Tue, Jul 7, 2009 at 2:30 PM, Daniel Frith danfr...@gmail.com wrote:

 Sorry for going slightly off-topic here, but this is the only place I
 know with people knowledgeable in this area. I've come to the
 conclusion that setting up and administering something like GlusterFS
 is probably beyond my current skills. So I was wondering if anyone
 knew of a service offering infinitely scalable cloud storage (like
 Amazon S3) that can also be mounted locally as a file system (so it can
 easily be used with existing software).

 There are plenty of cloud storage providers springing up, but most of
 them use peculiar APIs or are only designed for small-scale,
 non-intensive backup purposes.

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] locally mountable cloudstore service?

2009-07-07 Thread Barry Jaspan

You could look at s3fs to mount S3 as a local file system: 
http://code.google.com/p/s3fs/wiki/FuseOverAmazon



FYI, I am in the process of moving to GlusterFS because s3fs proved  
too fragile and slow for my needs.  s3fs is not terrible, but be sure  
to try out all of your use cases before committing to it.


Barry

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Design Questions

2009-07-07 Thread Moritz Krinke

Hello,

While doing some research on how to build a system that lets us easily
rsync backups from other systems onto it, I came across GlusterFS, and
after several hours of reading I'm quite impressed and looking forward
to implementing it ;)


Basically I would like you to comment on the design I put together, as
there are lots of different ways to do things and some might be
preferred over others.


We need:
15 TB of storage, kept at least in a RAID1-like fashion; RAID5/RAID6
would be preferable, but I think that is not possible with GlusterFS?!
While reading the docs I realised I could probably also use this system
for hosting our images via HTTP, because of features like

- easy expansion with new storage / servers
- io-cache
- the lighttpd plugin for direct FS access

This way we would gain not just backup storage for our pictures, which
are currently served by a mogilefs/varnish/lighttpd cluster, but also a
backup cluster that could serve files directly to our users
(a community site with lots of pictures; file sizes vary, but most
files are 50 to 300 kilobytes, and we plan to store ~10 MB files too).


Great :-)

We've planned to use the following hardware:

5 servers, each with:
 - quad-core CPU
 - 16 GB RAM
 - 4 x 1.5 TB HDD, no RAID
 - dedicated GBit switched Ethernet network

GlusterFS setup:

 The same config on all nodes. For each of the 4 drives / mountpoints,
 a chain of volumes: storage/posix -> features/posix-locks ->
 io-threads -> write-behind -> io-cache, with 14 GB of io-cache per
 server (so 2 GB is left for the system).

 Then config entries for all 20 bricks, using tcp as the transport type.

 Then cluster/replicate volumes, each pairing 2 disks on different
 servers, and a cluster/nufa volume with the 10 replicate volumes as
 subvolumes (a rough volfile sketch follows below).
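
 To make that concrete, a rough sketch of the per-disk chain and the
 aggregation (volume names, paths and option values are placeholders I
 made up, not a tested configuration -- please check the docs for the
 exact option names, especially for cluster/nufa):

 # on each server, repeated for disk1..disk4:
 volume posix1
   type storage/posix
   option directory /data/disk1
 end-volume

 volume locks1
   type features/posix-locks
   subvolumes posix1
 end-volume

 volume iothreads1
   type performance/io-threads
   option thread-count 8
   subvolumes locks1
 end-volume

 volume wb1
   type performance/write-behind
   subvolumes iothreads1
 end-volume

 volume brick1
   type performance/io-cache
   option cache-size 3GB        # share of the per-server io-cache budget
   subvolumes wb1
 end-volume

 # ... brick2..brick4 defined the same way, all exported through one
 # protocol/server volume ...

 # on the client side, each remote brick is a protocol/client volume
 # (e.g. srv1-brick1), paired up across servers:
 volume repl1
   type cluster/replicate
   subvolumes srv1-brick1 srv2-brick1   # the two copies live on different servers
 end-volume

 # ... repl2..repl10 analogous ...

 volume nufa
   type cluster/nufa
   option local-volume-name repl1   # prefer the replica with a local member
   subvolumes repl1 repl2 repl3 repl4 repl5 repl6 repl7 repl8 repl9 repl10
 end-volume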



 As I understand it, this should provide me with the following:

 - Data redundancy: if one disk fails, I can replace it and GlusterFS
 automatically replicates all the lost data back onto the new disk;
 the same applies if a whole server is lost/broken.
 - Distributed access: a read of a specific file will always go to the
 same server/drive, regardless of which server requests it, and will
 therefore be cached by the io-cache layer on the node that has the
 file on disk. OK, that means a little network overhead, but it's
 better than putting the cache on top of the distribute volume, which
 would result in the same cached content on all servers.
 - A global cache of 70 GB with no duplicates (5 servers times 14 GB of
 io-cache RAM per server).
 - How exactly does the io-cache work? Can I specify a TTL per file
 pattern, or specify which files should not be cached at all? I can't
 find any specific info on this (see the sketch after this list).
 - I can put apache/lighttpd on all the servers, which then have direct
 access to the storage, so there is no need for extra webservers to
 serve static, cacheable content.
 - Remote access: I can mount the fs from another location (another DC),
 securely through some kind of VPN if I wish, and use it there for
 backup purposes.
 - Expandable: I can just put 2 new servers online, each with 2/4/6/8
 drives.
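
 Regarding the io-cache question above, the tunables I know of look
 roughly like this (option names from memory, so please verify against
 your version's docs; I am not aware of a per-pattern "do not cache"
 switch, but there are priorities and a revalidation timeout):

 volume cache
   type performance/io-cache
   option cache-size 14GB                 # total memory used for caching
   option cache-timeout 1                 # seconds before cached data is revalidated
   option priority *.jpg:3,*.png:3,*:1    # pattern-based priorities; higher is kept longer
   subvolumes some-subvolume
 end-volume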


 If you have read and understood ( :-) ) all of this, I would highly
appreciate it if you could answer my questions and/or comment on
the design.


Thanks a lot,
Moritz


___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster (2.0.1 - git) with fuse 2.8 crashes NFS

2009-07-07 Thread Justice London
The 2.0.3 release of gluster appears, so far, to have fixed the crash issue I
was experiencing. I was wondering which specific patch fixed it?

 

Great job either way! It appears that with fuse 2.8 and newer kernels
gluster absolutely flies. In a replication environment between two crummy
testbed machines it's probably about twice as fast as fuse 2.7.4!

 

Justice London
jlon...@lawinfo.com




From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Justice London
Sent: Thursday, July 02, 2009 12:33 PM
To: 'Raghavendra G'
Cc: 'gluster-users'; 'Harshavardhana'
Subject: Re: [Gluster-users] Gluster (2.0.1 - git) with fuse 2.8 crashes
NFS

 

Sure:

 

Server:

 

### Export volume brick with the contents of /home/export directory.

volume posix

  type storage/posix   # POSIX FS translator

  option directory /home/gluster/vmglustore   # Export this directory

  option background-unlink yes

end-volume

 

volume locks

  type features/posix-locks

  subvolumes posix

end-volume

 

volume brick

  type performance/io-threads

  option thread-count 32

#  option autoscaling yes

#  option min-threads 8

#  option max-threads 200

  subvolumes locks

end-volume

 

### Add network serving capability to above brick.

volume brick-server

  type protocol/server

  option transport-type tcp

# option transport-type unix

# option transport-type ib-sdp

# option transport.socket.bind-address 192.168.1.10  # Default is to listen on all interfaces

# option transport.socket.listen-port 6996  # Default is 6996

 

# option transport-type ib-verbs

# option transport.ib-verbs.bind-address 192.168.1.10  # Default is to listen on all interfaces

# option transport.ib-verbs.listen-port 6996  # Default is 6996

# option transport.ib-verbs.work-request-send-size  131072

# option transport.ib-verbs.work-request-send-count 64

# option transport.ib-verbs.work-request-recv-size  131072

# option transport.ib-verbs.work-request-recv-count 64

 

  option client-volume-filename /etc/glusterfs/glusterfs.vol

  subvolumes brick

# NOTE: Access to any volume through protocol/server is denied by
# default. You need to explicitly grant access through the auth
# option.

  option auth.addr.brick.allow * # Allow access to brick volume

end-volume

 

 

Client:

 

### Add client feature and attach to remote subvolume

volume remotebrick1

  type protocol/client

  option transport-type tcp

# option transport-type unix

# option transport-type ib-sdp

  option remote-host 192.168.1.35 # IP address of the remote brick

# option transport.socket.remote-port 6996  # default server port is 6996

 

# option transport-type ib-verbs

# option transport.ib-verbs.remote-port 6996  # default server port is 6996

# option transport.ib-verbs.work-request-send-size  1048576

# option transport.ib-verbs.work-request-send-count 16

# option transport.ib-verbs.work-request-recv-size  1048576

# option transport.ib-verbs.work-request-recv-count 16

 

# option transport-timeout 30  # seconds to wait for a reply from server for each request

  option remote-subvolume brick# name of the remote volume

end-volume

 

volume remotebrick2

  type protocol/client

  option transport-type tcp

  option remote-host 192.168.1.36

  option remote-subvolume brick

end-volume

 

volume brick-replicate

  type cluster/replicate

  subvolumes remotebrick1 remotebrick2

end-volume

 

 

volume threads

  type performance/io-threads

  option thread-count 8

#  option autoscaling yes

#  option min-threads 8

#  option max-threads 200

  subvolumes brick-replicate

end-volume

 

### Add readahead feature

volume readahead

  type performance/read-ahead

  option page-count 4   # cache per file  = (page-count x page-size)

  option force-atime-update off

  subvolumes threads

end-volume

 

### Add IO-Cache feature

#volume iocache

#  type performance/io-cache

#  option page-size 1MB

#  option cache-size 64MB

#  subvolumes readahead

#end-volume

 

### Add writeback feature

volume writeback

  type performance/write-behind

  option cache-size 8MB

  option flush-behind on

  subvolumes readahead

end-volume
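
(For reference, and not something taken from this thread: with volfiles
laid out like the above, they are typically brought up roughly like

  glusterfsd -f /etc/glusterfs/glusterfsd.vol -p /var/run/glusterfsd.pid    # on each server
  glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs                  # on the client

where the file names and the mountpoint are placeholders.)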

 

 

Justice London
jlon...@lawinfo.com


From: Raghavendra G [mailto:raghavendra...@gmail.com] 
Sent: Thursday, July 02, 2009 10:17 AM
To: Justice London
Cc: Harshavardhana; gluster-users
Subject: Re: [Gluster-users] Gluster (2.0.1 - git) with fuse 2.8 crashes
NFS

 

Hi,

Can you send across the volume specification files you are using?

regards,
Raghavendra.

2009/6/24 Justice London jlon...@lawinfo.com

Here you go. Let me know if you need anything else:

Core was generated by `/usr/local/sbin/glusterfsd
-p /var/run/glusterfsd.pid -f /etc/glusterfs/gluster'.
Program terminated with signal 11, Segmentation fault.

Re: [Gluster-users] Gluster (2.0.1 - git) with fuse 2.8 crashes NFS

2009-07-07 Thread Amar Tumballi
Hi Justice,
Thanks for letting us know. The crashing behavior with fuse-2.8 should
be fixed by Harsha's patch: http://patches.gluster.com/patch/664/

I think the 'bigwrite' support, together with two minor bug fixes that
went into write-behind, should have given this performance benefit.

Regards,
Amar

On Tue, Jul 7, 2009 at 3:43 PM, Justice London jlon...@lawinfo.com wrote:

  The 2.0.3 release of gluster appears, so far, to have fixed the crash issue
 I was experiencing. I was wondering which specific patch fixed it?



 Great job either way! It appears that with fuse 2.8 and newer kernels
 gluster absolutely flies. In a replication environment between two crummy
 testbed machines it's probably about twice as fast as fuse 2.7.4!



 Justice London
 jlon...@lawinfo.com

 [remainder of quoted message trimmed -- it repeated the full server and
 client volume specification already shown above]

Re: [Gluster-users] Gluster (2.0.1 - git) with fuse 2.8 crashes NFS

2009-07-07 Thread Anand Avati
 The 2.0.3 release of gluster appears, so far, to have fixed the crash issue I
 was experiencing. I was wondering which specific patch fixed it?

It was http://patches.gluster.com/patch/664/. A less ugly fix is lined
up for 2.1.


 Great job either way! It appears that with fuse 2.8 and newer kernels
 gluster absolutely flies. In a replication environment between two crummy
 testbed machines it's probably about twice as fast as fuse 2.7.4!

Just curious, are the observed performance improvements in terms of IO
throughput or metadata latency?

Avati

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] When will glusterfs 2.0.3 be released?

2009-07-07 Thread Vahriç Muhtaryan
Any info ?

 

Regards

Vahric

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster (2.0.1 - git) with fuse 2.8 crashes NFS

2009-07-07 Thread Anand Avati
On Wed, Jul 8, 2009 at 5:32 AM, Justice London jlon...@lawinfo.com wrote:
 Actually, I spoke too soon. NFS still crashes, even if the mountpoint
 doesn't.

Justice, 2.0.3 fixes the issues with fuse-2.8.0-pre2; fuse-2.8.0-pre3 needs
one more fix (http://patches.gluster.com/patch/693/), which is lined up for
the next release. Just curious, what do you mean by NFS still crashing
even if the mountpoint doesn't? Are you running an unfs3 server on top
of the fuse mountpoint, and it is the unfs3 server that crashes?
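
(For context, by "unfs3 on top of the fuse mountpoint" I mean a setup
roughly like the following -- the paths and the exports line are only
illustrative, not taken from your mail:

  # exports file for unfs3, same general syntax as /etc/exports
  # e.g. /etc/exports.unfs3:
  /mnt/glusterfs 192.168.1.0/24(rw,no_root_squash)

  # run the user-space NFS server against that exports file
  unfsd -e /etc/exports.unfs3

with NFS clients then mounting the re-exported glusterfs mountpoint.)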

Avati

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Question of 2.0.3 with fuse 2.8.x

2009-07-07 Thread eagleeyes
Hi:
   Does gluster 2.0.3 with fuse 2.8.x require the fuse API version 7.6, or
kernel 2.6.26?


2009-07-08 



eagleeyes 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users