Hi Todd,
Could you provide us with your volume configuration files and log files
for both the server and the client? Also, can you please file a bug report
with these at http://bugs.gluster.com/
Thanks
--
Harshavardhana
Gluster - http://www.gluster.com
On Wed, Jul 22, 2009 at 12:00 AM, Todd Daugh wrote:
Hi,
I'm setting up gluster to share /usr/local among 24 compute nodes. The
basic goal is to be able to change files in /usr/local in one place, and
have it replicate out to all the other nodes.
What I'd like to avoid is having a single point of failure where one (or
several) nodes go down a
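For reference, replication in this release is configured through the
cluster/replicate (AFR) translator in the client volfile. A minimal
two-server sketch, assuming hypothetical hostnames node1/node2 and a
server-side volume named "brick" (substitute your own names):

```
# protocol/client volumes, one per server (hostnames are placeholders)
volume node1
  type protocol/client
  option transport-type tcp
  option remote-host node1
  option remote-subvolume brick
end-volume

volume node2
  type protocol/client
  option transport-type tcp
  option remote-host node2
  option remote-subvolume brick
end-volume

# replicate writes to every subvolume listed here
volume replicate0
  type cluster/replicate
  subvolumes node1 node2
end-volume
```

With /usr/local mounted from such a volume, a change made through the
mount point is written to every listed subvolume, so no single server
is a point of failure for reads.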
I have a 4-node cluster in test production, and this is quite a problem.
Client: Linux Fedora 10/11, Fuse 2.74, Gluster 2.0.3
Server: Gentoo, kernel 2.6.27-gentoo-r8, Gluster 2.0.3
When mounted natively, the filesystem does not complete the writing of
Point Cloud files. When mounted via CIFS (glusterfs exported
I have a question about this paragraph from the "Understanding DHT
Translator":
"Currently hash works based on directory level distribution. i.e, a
given file's parent directory will have information of how the hash
numbers are mapped to subvolumes. So, adding new node doesn't disturb
any
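The quoted paragraph can be illustrated with a toy model (this is not
GlusterFS's actual hash function or on-disk layout format, just a
simplified sketch): each directory owns a table mapping hash ranges to
subvolumes, and a file is placed by hashing its name and consulting its
parent directory's table. A directory created before a new node is added
keeps its old table, so its files' locations are undisturbed:

```python
# Toy model of directory-level hash distribution. Hypothetical names
# (brick1..brick4); not GlusterFS's real hash or layout representation.
import hashlib

def file_hash(name: str) -> int:
    # Stand-in for the translator's 32-bit hash of the file name.
    return int(hashlib.md5(name.encode()).hexdigest()[:8], 16)

def make_layout(subvolumes):
    # Split the 32-bit hash space into equal ranges, one per subvolume.
    span = 2**32 // len(subvolumes)
    return [(i * span, (i + 1) * span - 1, sv)
            for i, sv in enumerate(subvolumes)]

def subvol_for(layout, name):
    # Look the file's hash up in its parent directory's layout table.
    h = file_hash(name)
    for lo, hi, sv in layout:
        if lo <= h <= hi:
            return sv
    return layout[-1][2]  # rounding remainder falls into the last range

# A directory created with 3 bricks keeps its 3-brick table even after
# a 4th brick is added; only directories laid out afterwards use it.
old_dir = make_layout(["brick1", "brick2", "brick3"])
new_dir = make_layout(["brick1", "brick2", "brick3", "brick4"])
```

Because the table lives with the parent directory, adding a subvolume
only affects placement in directories whose layout is created (or
rebalanced) after the change.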
Sudipto Mukhopadhyay wrote:
This could be tricky, as you don't want to look up too many
alternatives! But since you are doing LD_PRELOAD, can you not ask the
application to specify the paths? (I know it's going to be a little
error-prone, depending on what the application supplies.)
For example:
/mnt/glusterfs
If the application run di
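The actual shim here would be a C library interposing libc calls via
LD_PRELOAD; as a language-neutral sketch of the rule being discussed
(the application declares which roots to redirect, rather than the shim
probing many alternative locations on each lookup), here is the path
rewrite in Python. The /mnt/glusterfs prefix and the /usr/local root
are taken from the messages above; the function name is made up:

```python
# Sketch of the path-rewrite rule an LD_PRELOAD shim would apply.
# GLUSTER_PREFIX and the default root come from this thread; the
# function itself is hypothetical illustration, not real shim code.
GLUSTER_PREFIX = "/mnt/glusterfs"

def rewrite_path(path: str, app_roots=("/usr/local",)) -> str:
    # Redirect only paths under roots the application has declared.
    for root in app_roots:
        if path == root or path.startswith(root + "/"):
            return GLUSTER_PREFIX + path
    return path  # everything else passes through untouched
```

Keeping the root list application-supplied avoids the combinatorial
lookup problem, at the cost of trusting what the application passes in.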