XFS is a valid alternative to ZFS on Linux. If I remember correctly, any
operation that modifies a lot of xattrs can be slower than on ext*;
have you noticed anything like that? You might see slower rebalances or
self-heals.
Craig
Sent from a mobile device, please excuse my tpyos.
On
Brandon -
SQLite uses POSIX locking to implement some of its ACID-compliant behavior
and requires the filesystem to fully implement POSIX advisory locks. Most
network filesystems (including Gluster native and NFS) don't support everything
that SQLite needs and so using SQLite on a networked
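One quick way to check the locking behavior described above is to take and release an fcntl() record lock on a file on the mount in question. This is a rough sketch, not from the original mail: the probe_posix_locks helper and the /tmp target are illustrative, and it assumes python3 is available (flock(1) is not a substitute here, since it exercises flock(2) rather than the POSIX fcntl locks SQLite needs).

```shell
# Try to take and release a POSIX (fcntl) record lock on a scratch file.
# SQLite needs these to work; we go through python3 to reach fcntl().
probe_posix_locks() {
    scratch="$1/.lockprobe.$$"
    python3 - "$scratch" <<'EOF'
import fcntl, sys
with open(sys.argv[1], "w") as f:
    fcntl.lockf(f, fcntl.LOCK_EX)  # POSIX record lock
    fcntl.lockf(f, fcntl.LOCK_UN)
EOF
    status=$?
    rm -f "$scratch"
    return $status
}

probe_posix_locks /tmp && echo "POSIX advisory locks OK on /tmp"
```

Point it at your Gluster or NFS mount instead of /tmp; a failure (nonzero exit) is the symptom SQLite would hit.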
The only thing I might worry about is using ZFS on Linux. I still think it
might be a little early to trust it with truly critical data, and there
doesn't seem to be a big ZFS+Linux+Gluster install base to help you if
problems come up.
I would use mdadm + LVM2 to create your RAID arrays on eac
Phil -
Gcollect is agnostic as to the data collector; it is designed to
support adding collectors on the fly. Using the collectd exec plugin
and PUTVAL it should be fairly easy to add collectd support. Here is
(roughly) what you need to do -
copy ganglia.d/* to collectd.d/*
edit the bottom of each
Brian -
Most of the time we just use RRDNS. Just create multiple A records with
the same name, one for each Gluster server or VIP.
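As an illustration, a round-robin A record set might look like this in a BIND zone file (the hostname and addresses are made up for the example):

```
; three Gluster servers behind one name; resolvers hand back
; the addresses in rotating order
gluster    IN  A    192.0.2.11
gluster    IN  A    192.0.2.12
gluster    IN  A    192.0.2.13
```

Each client that mounts gluster:/volname then lands on a different server in turn.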
Craig
Gluster
On 06/27/2011 04:17 PM, Brian Laughton wrote:
I have read the following page and was wondering if anyone had any
practical experience with it
http
another alert as soon as both releases are GA.
--
Thanks,
Craig Carl
Senior Systems Engineer | Gluster
408-829-9953 | San Francisco, CA
http://www.gluster.com/gluster-for-aws/
----
Craig Carl <mailto:cr...@gluster.com&
een the ecpagents on the hypervisors results in the VMs failing
to start. No data is lost or damaged.
Again, this issue is specific to Enomaly. Enomaly users should
immediately upgrade to Gluster 3.1.5 or later.
http://download.gluster.com/pub/gluster/glusterfs/
Thanks,
Craig Carl
d, i'm i right ?
>
> 2011/6/6 Craig Carl :
>> Matus -
>> If you are using the Gluster native client (mount -t glusterfs ...)
>> then ucarp/CTDB is NOT required and you should not install it. Always
>> use the real IPs when you are mounting with 'mount -t glust
over
> 10 minutes for volume
> to "wake up" but it never start to work - it never switch to another
> node, even when UCARP was already pointing there, there was lot
> of "recovery" messages on log but no attemt to connect to second node.
>
> thanks
>
ation/index.php/Gluster_3.2:_Setting_Volume_Options
Thanks,
Craig
--
Craig Carl, Senior Systems Engineer | Gluster
408.829.9953(PST) | http://gluster.com
http://www.gluster.com/gluster-for-aws/
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE5666F925A557DD8
On 6/6/11 12:09 AM, bxma...@gmail
Michael -
iSCSI on Gluster works well in theory and not so well in practice. When you
create a new iSCSI device you have no way of knowing on which node of the
Gluster cluster the file will actually be created. In a replica environment
the iSCSI layer makes it difficult to ensure synchronou
Anything smaller than 128KB is 'small'.
Craig
On 12/09/2010 11:23 PM, Christian Fischer wrote:
On Friday 10 December 2010 07:12:47 Craig Carl wrote:
Christian -
For large files the Gluster native client will perform better than
NFS, but they are both good options.
Than
Christian -
For large files the Gluster native client will perform better than
NFS, but they are both good options.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 12/07/2010 11:39 PM, Christian Fischer wrote:
Morning Folks,
should I prefer NFS with UCARP or nat
Will -
While we are working on it, quotas are not supported in Gluster 3.1.
We will let you know as soon as we have added that feature.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 12/08/2010 08:23 AM, Will Daniels wrote:
Hi,
I was hoping to use gluster to creat
Lana -
That is a documentation bug; I have fixed it. There isn't a way to
change the transport type on an existing volume, but if you delete the
volume and then recreate it exactly the same way with a different transport
type it will work fine.
Thanks,
Craig
-->
Craig Carl
Senior
these issues but I'll ask the engineer primarily responsible for
gNFS to take a look at this thread and respond directly.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 12/08/2010 07:55 AM, William L. Sebok wrote:
I'm sorry, but right now I don't qui
I agree, and have filed an enhancement request.
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2204
Thanks for your feedback.
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 12/08/2010 02:45 PM, Jeff Anderson-Lee wrote:
On Wed, 2010-12-08 at 16:09 -0500, Joshua Ba
Joshua -
'gluster volume info '
or
'gluster volume info all'
Only options that are not at their default value are displayed.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 12/08/2010 11:00 AM, Joshua Baker-LePain wrote:
On Tue, 7 Dec 201
# gluster volume start localdata
# mount -t nfs -o vers=3 localhost:/localdata
Then copy|mv the existing data into . This eliminates
the need to run a second NFS server.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 12/07/2010 10:16 PM, Craig Carl wrote:
Bill -
We
[2].
[1]
http://www.gluster.com/community/documentation/index.php/Gluster_3.1_Native_Client_Guide
[2]
http://www.gluster.com/community/documentation/index.php/Gluster_3.1_NFS_Guide
Please let me know if you have any other questions.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engin
s not as just number of nodes but total
sub-volumes for "distribute" translator. "M" is number of additional
sub-volumes added before starting rebalance and scaling.
So for multiple exports from a single server we need to calculate the
total value moved from the server by
Bill -
As a temporary solution you can kill the process that is exporting
the volume via NFS. The Gluster NFS process can be identified by 'ps -ef
| grep nfs-server.vol'.
The process will restart when glusterd does, or when any changes to the
volume are made.
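A sketch of that lookup, with a bracketed first character in the grep pattern so grep does not match its own command line (the kill step is left commented out; run it only when you actually mean to stop the NFS export):

```shell
# Find the PID of the Gluster NFS server, identified by its volfile name.
# The [n] trick keeps grep from matching the grep process itself.
pid=$(ps -ef | grep '[n]fs-server.vol' | awk '{print $2}')
echo "gluster NFS pid: ${pid:-none found}"
# kill "$pid"   # uncomment to stop the export; glusterd restarts it later
```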
Thanks,
Craig
Bill -
We are working on a solution to this issue right now, please give
me a couple of days to get back to you with an update.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 12/07/2010 10:30 AM, William L. Sebok wrote:
Many of the computers in our cluster are diskl
needs to be changed, please also
remember that the documentation wiki is publicly editable, helping with
documentation is a great way for community members to support Gluster.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Glus
ster
user_8_1 user_8_2 user_8_3
ls fs7:/tempmount
user_8_1 user_8_2 user_8_3
ls fs8:/tempmount
user_8_1 user_8_2 user_8_3
Unmounting and remounting has no effect.
Servers are both Ubuntu Server 10.4, client is CentOS 5, 64bits all
around.
Thanks and regards,
Daniel
On 12/03/2010 10:
Samuel -
I was able to recreate the failure and have updated the bug you filed.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 12/03/2010 01:24 AM, Samuel Hassine wrote:
Craig,
I am using Debian Lenny (Proxmox 1.7)
r...@on-003:/# uname -a
Linux on-003 2.6.32-3-pve
Christian -
We're working on it every day but we don't have a release date yet.
As soon as we have a date we will send a note to the list. Thanks for
your interest in Gluster and please let me know if you have any other
questions.
Thanks again,
Craig
-->
Craig Carl
S
ld not have worked,
it is missing a volume name - gluster volume create transport
tcp fs7:/storage/7, fs8:/storage/8, typo maybe?
Please let us know how it goes, and please let me know if you have any
other questions.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer; Gluster, Inc.
Samuel -
I can't reproduce this issue locally, can you send me operating
system and hardware details for both the Gluster servers and the client?
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 12/02/2010 05:59 AM, Samuel Hassine wrote:
Hi all,
GlusterFS p
0-December/006001.html
Please let me know if you have any other questions.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer; Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Office - (408) 770-1884
Gtalk - craig.c...@gmail.com
Twitter - @gluster
http://rackerhacker.com/2010/08/11/o
different parts of the striped file, or lots of different
files in a distribute cluster you would see your performance increase
significantly.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 11/26/2010 07:57 AM, Gotwalt, P. wrote:
Hi All,
I am doing some tests with glus
PK -
With the gNFS server performance for VM's is significantly faster,
we have lots of users using Gluster and NFS to host Xen, KVM, and VMWare
images.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 11/26/2010 01:22 AM, raveenpl wrote:
Hi,
I want build sto
Jens -
We are working on AD integration, unfortunately it isn't available
in the product yet. We will send a notice to this list as soon as it is.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 11/24/2010 07:16 AM, Simmoleit, Jens wrote:
Hi list,
I hope you can
. Move the LUN.
4. gluster add-brick server_new:/dev/sdc1
Your data will be immediately available.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 11/22/2010 09:38 PM, Patrick Irvine wrote:
Hi list,
I am using gluster 3.1 on gentoo. I have previously been able (with
3.
Dan -
Gluster 3.1.1 is out; can you recreate the issue using that version?
Please let us know.
http://www.gluster.com/community/documentation/index.php/Gluster_3.1_Filesystem_Installation_and_Configuration_Guide
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 11/21/2
,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 11/30/2010 04:04 PM, Max Ivanov wrote:
I am trying to deploy glusterfs on Xen VM running on debian squeeze.
I've downloaded 3.1 deb file as mentioned in doc, added peers but
"gluster peers status" either hangs or crashes VM (y
Craig -
I've asked our documentation team to update the page, I'll let you
know as soon they are done. Was there a specific option you were
wondering about?
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 11/30/2010 10:42 AM, Craig Miller wrote:
Quick Q
John -
Can you send the volume create command you used and the commands you
used to add the brick? Also please send the output of `uname -a`,
`gluster peer status`, `gluster volume info all`.
Thanks,
Craig
On 12/02/2010 06:28 PM, John Lao wrote:
When I mount via fuse I can see the data.
Jeremy -
What version of OFED are you running? Would you mind installing version
1.5.2 from source? We have seen this resolve several issues of this type.
http://www.openfabrics.org/downloads/OFED/ofed-1.5.2/
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 12/02/2010 10
Ben -
The ping-timeout value is set by default to 42 seconds. I recommend
reducing that to at least 25 seconds. Details on the volume options are
here -
http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Setting_Volume_Options
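Per that documentation, the timeout can be set per volume from the gluster CLI. A sketch, with myvolume standing in for your volume name (the option key is as I recall it for 3.1, so please verify against the page above):

```
gluster volume set myvolume network.ping-timeout 25
```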
Thanks,
Craig
-->
Craig Carl
Senior Syst
James -
We will track the CentOS 6 timeline, so not for a while yet. We
use Fedora 11 and 12 quite a bit so I would be very surprised if there
was an issue, please let us know how your testing goes.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 12/01/2010 11:59
a minor performance hit to running bricks of different sizes in the
same volume, small LUNs make that easier.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 12/01/2010 08:29 AM, Burnash, James wrote:
Hello.
So, here's my problem.
I have 4 storage servers that wil
umentation/index.php/Gluster_3.1:_Setting_Volume_Options.
Please let me know if you have any other questions.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer; Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Office - (408) 770-1884
Gtalk - craig.c...@gmail.com
Twitter -
Hugo -
How did you disable the quick-read translator?
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 11/22/2010 06:48 AM, Hugo Cisneiros (Eitch) wrote:
Hi :)
In another thread, I had problems with the quick-read translator that
was fixed on 3.1.1. Since I'm usi
Fixed, thanks Marcus.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 11/21/2010 06:28 AM, Marcus Bointon wrote:
This link to the patched fuse source:
ftp://ftp.zresearch.com/pub/gluster/glusterfs/fuse/ on this page:
http://www.gluster.com/community/documentation/index.
/2010 6:55 PM, Craig Carl wrote:
On 11/18/2010 04:33 PM, Jeremy Enos wrote:
Post is almost a year old... ever any response here? Is it
possible to export tmpfs locations w/ gluster?
thx-
Jeremy
On 12/1/2009 8:14 AM, Alexander Beregalov wrote:
Hi
Is it possible to start server on tmpfs ?
export of the directory,
and the unzip completed in 4.6 seconds which is around
the performance level I need to get.
Any ideas?
Thanks,
Rafiq
On Fri, Nov 19, 2010 at 5:30 AM, Craig Carl <mailto:cr...@gluster.com>> wrote:
Rafiq -
Gluster 3.1.1 will ship shortly, in our test
CentOS documentation -
http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-nfs-client-config-autofs.html
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 11/19/2010 12:25 PM, Burnash, James wrote:
Hi Jacob.
The link you gave me was the page I was referring to in my p
Pavel -
I have other performance testing running in EC2 this weekend, I'll
add a kernel extract to the test suite, see what I get with various
instance types, EBS design, etc. I'll get the results out to the list on
Tuesday.
Thanks,
Craig
-->
Craig Carl
Senior Systems Eng
disk bound applications.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 11/18/2010 09:14 PM, Rafiq Maniar wrote:
Hi,
I'm using Glusterfs3.1 on Ubuntu 10.04 in a dual replication setup, on
Amazon EC2.
It takes 40-50 seconds to unzip an 8MB zip file full of small f
Dennis -
Not at this point. HSM is a long-term goal; it won't happen soon. If
the 100Mbit machines have less available disk space they will do less
I/O than the bigger machines, maybe that helps?
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 11/16/2010
O cards are very fast and work well with Gluster, so
does the solution from RNA Networks. (http://www.rnanetworks.com/)
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
___
Gluster-users mailing list
Gluster-users@gluster.org
http:
this will heal only
the files on that device.
If you know when the failure you want to recover from happened this is even
faster -
#cd
#find ./ -type f -mmin -x -exec stat '{}' \;
This will heal only the files on that device changed in the last x
minutes.
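A self-contained version of that trigger (the mount point and the 60-minute window are placeholders; the stat forces a lookup on each file, and the lookup is what prompts self-heal):

```shell
mnt=${1:-.}                 # your Gluster mount point ("." for the demo)
cd "$mnt" || exit 1
# Re-examine only files modified in the last 60 minutes; widen -mmin
# to cover your outage window.  stat output is discarded - the
# lookup itself is what matters.
find . -type f -mmin -60 -exec stat {} \; > /dev/null
```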
Thanks,
Craig
-->
-
http://download.gluster.com/pub/gluster/glusterfs/qa-releases/glusterfs-3.1.1qa8.tar.gz.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
/wiki/Stub_file
Please let me know if you have any other questions.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
without interrupting access to it ...
On 11/14/10 03:59, Craig Carl wrote:
Anselm -
You can remove a brick online, you can't change the type of an existing volume,
if you could explain what you what to do with a 'merge' and a 'split' I could
give you a better answer, you ca
improve,
please, please post your ideas here. Most of the product ideas posted to
this mailing list in the last 6 months are being actively developed or
investigated.
Thanks again for your feedback.
[1] Chris Mason is doing incredible work along with the entire btrfs
community, nothing bu
ly
fewer create/delete operations, you might find that the performance XFS
delivers is acceptable. We have many successful deployments that use XFS
in just this way."
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
lways keep the index files local, that
makes a big difference.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
.deb
is build?
BR
Uwe
Am 15.11.2010 um 11:02 schrieb Craig Carl:
Uwe -
There is a name resolution bug in version 3.1, that is the issue you are
seeing. If you would like to continue testing the most recent QA version,
3.1.1qa6 resolves the issue, you can get it here -
http
.
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.c...@gmail.com
From: "Uwe Kastens"
To: gluster-users@gluster.org
Sent: Monday, November 15, 2010 1:19:32 AM
Subject: [Gluster-users] setup trouble without DNS
Hello,
, eliminating that requirement.
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.c...@gmail.com
From: mki-gluste...@mozone.net
To: gluster-users@gluster.org
Sent: Monday, November 15, 2010 1:45:19 AM
Subject: Re: [Gluster-us
Mohan -
All the client and server volume files must be in sync; having different client
vol files on different clients will result in these types of errors. It is also
the primary cause of split-brain, so please be cautious when making these kinds
of changes.
Thanks,
Craig
-->
Cr
.
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.c...@gmail.com
From: "Adam Lindsay"
To: gluster-users@gluster.org
Sent: Sunday, November 14, 2010 10:03:21 AM
Subject: [Gluster-users] Small Tests in EC2 failing.
to date on our progress.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
From: "Anselm Strauss"
To: gluster-users@gluster.org
Sent: Saturday, November 13, 2010 1:56:03 AM
Subject: [Gluster-users] Online operations
Hi,
I have done some testing with gluste
m/community/documentation/index.php/Gluster_3.0_to_3.1_Upgrade_Guide
Please let me know if you have any other questions.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer; Gluster, Inc.
Cell - ( 408) 829-9953 (California, USA)
Office - ( 408) 770-1884
Gtalk - craig.c...@gmail.com
Mike -
We will have several people on-site, if I remember right we are sharing a booth
with Fusion-IO, come by anytime.
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.c...@gmail.com
From: "Mike Hanby"
To:
Rick -
We don't currently have a public road map but one is in the works. I'll post it
as soon as it is ready to go.
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.c...@gmail.com
From: "Rick King"
To: gl
to you, if you could give me some feedback I would
really appreciate it.
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.c...@gmail.com
From: "Jens Mueller"
To: gluster-users@gluster.org
Sent: Friday, November
r and failover will be very quick and you will not get a connection
reset, no errors to your applications. If you have multiple replica pairs use
RRDNS to load balance across the entire cluster.
http://ctdb.samba.org/configuring.html
http://www.ucarp.org/project/ucarp
Thanks,
Craig
-->
Cr
to try and reproduce the problem here on 3.1 and 3.1.1qa5.
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.c...@gmail.com
From: "Matt Hodson"
To: "Craig Carl"
Cc: "Jeff Kozlowski" , gluster-user
sub-volumes added before
starting rebalance and scaling.
So for multiple exports from a single server we need to calculate the total
value moved from the server by multiplying with such number of exports.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
From: &qu
Matt -
A couple of questions -
What is your volume config? (`gluster volume info all`)
What is the hardware config for each storage server?
What command did you run to create the test data?
What process is still writing to the file?
Thanks,
Craig
-->
Craig Carl
Gluster,
Udo -
We are tracking these sorts of client-side configuration issues as bug #2014;
however, in this particular case the hosts file/DNS workaround appears to be
working well at several sites. Is there any reason that process won't work for
you?
Thanks,
Craig
-->
Craig Carl
me you dump the stats they all reset to 0,
you might for example setup a cron job to dump the stats nightly.
The stats are pretty self-explanatory; please let us know if you have any
specific questions.
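Since the counters reset on every dump, a nightly cron entry is a natural fit. A hypothetical crontab line (the mount point and dump target are examples; the xattr name follows the setfattr form quoted elsewhere in this archive):

```
# m  h  dom mon dow  command
55  23   *   *   *   setfattr -n glusterfs.io.stats.dump -v /tmp/ /mnt/client2
```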
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (Calif
ed or detailed please let me know and I will make sure they get added
to the wiki or the Intro to Gluster document.
Thanks again,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.c...@gmail.com
From: "Brent Clark"
To:
Peter -
You can set most options on the volume using `gluster volume set`, there is a
bug open to support adding translators dynamically using the gluster command,
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2014 .
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
C
glusterfs.io.stats.dump -v /tmp/ /mnt/client2
The error message reported by setfattr can be ignored. Latency and fd-stats
data would be present in the respective log files of the glusterfs processes.
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gt
5.1.
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.c...@gmail.com
From: "Bernard Li"
To: "Shehjar Tikoo"
Cc: "Gluster General Discussion List"
Sent: Thursday, November 4, 2010 1:17:04 PM
Subject:
thing Gluster.
Please let me know if this helps.
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.c...@gmail.com
From: "Jonathan B. Horen"
To: gluster-users@gluster.org
Sent: Thursday, November 4, 2010 3:44:09 PM
Subje
Rick -
You would need to create an account, then you can add yourself to the CC list.
I went ahead and added you to the CC list in case you don't want to create an
account.
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk -
Mike -
" nice and graceful like :-)"
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.c...@gmail.com
From: "Mike Hanby"
To: gluster-users@gluster.org
Sent: Wednesday, November 3, 2010 2:09:17 PM
Subject:
Bug #1203 - http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=1203
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.c...@gmail.com
From: "Rick King"
To: "Craig Carl"
Cc: gluster-users@gluster.or
Samuele -
That happens automatically now. We do this so you can dynamically add and
remove storage from your Gluster cluster, and users no longer have to manage or
edit config files.
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - crai
There is a bug filed; Gluster should throw a warning when you start the volume.
Please keep us updated as you test, and let me know if you have any other questions.
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.c...@gmail.com
F
Matt -
I talked to engineering; is it possible another NFS service is running on the
servers? Can you send us a ps -ef?
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.c...@gmail.com
From: "Matt Hodson"
To:
Samuele -
You don't need to create a client vol file with 3.1. Please delete it from the
clients and follow these instructions to mount -
http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Manually_Mounting_Volumes
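With 3.1 the mount needs only the server and volume name. A hedged /etc/fstab sketch (server, volume, and mount point are examples, not from the original mail):

```
# native client mount; _netdev delays mounting until networking is up
server1:/myvolume   /mnt/gluster   glusterfs   defaults,_netdev   0 0
```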
Thanks,
Craig
-->
Craig Carl
Glus
the logs instead of hard-coding.
2. Use 'which gluster' instead of hard-coding.
logrotate support should be included in Gluster, I have filed
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2053 to get it added.
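Until that lands, a stand-alone logrotate fragment is a common stopgap. A sketch only (the log path is an assumption; copytruncate avoids having to signal the daemons):

```
/var/log/glusterfs/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}
```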
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (
Phil -
Please check this page -
http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Rotating_Logs
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.c...@gmail.com
From: "Phil Packer"
To: &qu
/OFED/ofed-1.5.1/, even if
your distro has OFED 1.5.1 pre-packaged.
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.c...@gmail.com
From: "Michael Galloway"
To: "Lana Deere"
Cc: gluster-users@gluster.org
Sen
German -
A good place to start would be the Introduction to Gluster guide,
http://download.gluster.com/pub/gluster/systems-engineering/Introduction_to_Gluster.pdf
.
Please let me know if you have any questions.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer; Gluster,
.1 FS download -
http://download.gluster.com/pub/gluster/glusterfs/3.1/LATEST/
and the documentation -
http://www.gluster.com/community/documentation/index.php/Main_Page
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.c...@g
not been tested on CentOS releases earlier than 5.1, use at
your own risk.
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.c...@gmail.com
From: "Bernard Li"
To: "Gluster General Discussion List"
Sent: Tues
Horacio -
Gluster isn't supported on 32-bit hardware. I would suggest you use NFS to
attach your 32-bit clients.
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.c...@gmail.com
From: "Horacio Sanson"
To: gluster
documentation for 3.1 please find the wiki
here - http://www.gluster.com/community/documentation/index.php/Main_Page . If
you think something is missing please let me know and I'll get it added ASAP.
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA
est file. Once
your Gluster volumes are created Gluster will hide all of the bricks under a
single namespace.
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.c...@gmail.com
From: "Mike Hanby"
To: gluster-users@gluster
ss than an hour.
Thanks,
Craig
-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.c...@gmail.com
From: "Mathieu Masseboeuf - Cadran Finance"
To: "Craig Carl"
Cc: gluster-users@gluster.org
Sent: Wednesday, October 20, 2010 1
yping to setup a volume
the first time there is no downside.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
From: "Daniel Mons"
To: gluster-users@gluster.org
Sent: Saturday, October 23, 2010 5:38:12 AM
Subject: Re: [Gluster-users] Question about Volume Type
sdffs33:/users
This is our new 3.1 syntax that we now support. This is probably a
slightly cleaner approach since it follows traditional NFS syntax but
for Gluster native client mounts.
From: "Craig Carl"
To: "Luis E. Cerezo"
Cc: gluster-users@gluster.org
Sent: W
Brent -
Those OSes are 13 and 14 years old, respectively. I'm all for stable but we
certainly don't test on them :) I've sent a note to engineering, I'll let you
know what they say about UDP support.
Craig
From: "Brent A Nelson"
To: "Craig Car