I recently moved from the FUSE client to NFS. Now I'm seeing a bunch of
this in syslog. Is this something to be concerned about, or is it
'normal' NFS behavior?
NFS: server localhost error: fileid changed
fsid 0:15: expected fileid 0xd88ba88a97875981, got 0x40e476ef5fdfbe9f
I also see a lot
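For anyone hitting this: the fileid that Gluster's NFS server reports is
derived from the file's gfid, so a mismatch like the one above often means
the replicas disagree about a file's identity. A quick check, assuming
/export/brick1 stands in for your actual brick path, is to compare the gfid
xattr for the same file on every brick:

  # run on each brick server; the hex values should match
  getfattr -n trusted.gfid -e hex /export/brick1/path/to/file

If the values differ across replicas, that points at a replica-consistency
problem rather than ordinary NFS noise.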
Dear All,
we are facing a problem in our computer room. We have 6 servers that act as
bricks for GlusterFS; the servers are configured in the following way:
OS: Centos 6.2 x86_64
Kernel: 2.6.32-220.4.2.el6.x86_64
Gluster RPM packages:
glusterfs-core-3.2.5-2.el6.x86_64
This might be a question that has been asked before, but I just couldn't
effectively search the whole archives,
so please bear with me and kindly advise.
We have a 3rd-party application (a video-streaming kind of thing) which uses
direct I/O (O_DIRECT).
On the server side, in the posix translator, we added the option
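The message is truncated at this point, but for context: a brick volfile
stanza that enables O_DIRECT in the posix translator would look roughly like
the sketch below. The option name and values are our assumption based on the
3.x volfile conventions, not a quote from the post, and the directory path
is only a placeholder:

  volume brick-posix
    type storage/posix
    option directory /export/brick1   # placeholder brick path
    option o-direct enable            # open backend files with O_DIRECT
  end-volume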
I saw a quick demo of HotLava's multiport ethernet cards at CeBIT last week.
The guy there had a low-spec server with two multi-port 10GbE cards in it,
sitting there sustaining 200 Gbit/sec. I was quite impressed, and thought it
might be of interest to Gluster users. They're very pretty when they're all
On Fri, Mar 09, 2012 at 08:51:21PM -0800, Joe wrote:
I simply copied the glusterfsd.vol.sample ...
This shouldn't be necessary - just use the gluster CLI and you don't need to
touch any config files. It worked just fine for me.
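For example, a two-brick replicated volume can be set up end to end from
the CLI; hostnames, the volume name, and brick paths below are placeholders:

  gluster peer probe server2
  gluster volume create testvol replica 2 \
      server1:/export/brick1 server2:/export/brick1
  gluster volume start testvol
  mount -t glusterfs server1:/testvol /mnt/testvol

The CLI generates and distributes the volfiles itself, which is why editing
them by hand is no longer necessary.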
We are having the same problem (see my post earlier this week). We have
four server nodes in a replicated cluster, and each is providing
localhost server facilities. The only comment I have gotten is not to
use Gluster for a cache drive, but that doesn't really address the
problem of why this
Are the Debian/Ubuntu build/config directories for public releases
available anywhere? We'd like to be able to build packages similar to
the RedHat/Gluster releases (so cribbing off the Debian-released
packages is less than ideal).
These don't seem to be included in the release tarballs or the
On 03/14/2012 07:29 AM, Bill Bao wrote:
This might be a question that has been asked before, but I just couldn't
effectively search the whole archives,
so please bear with me and kindly advise.
We have a 3rd-party application (a video-streaming kind of thing) which uses
direct I/O (O_DIRECT).
On the server side, in
Hi Brian,
Thank you for responding. So you aren't using the vol files in
/etc/glusterfs to control anything, such as afr or unify? I am just
asking because after building my own RPMs and installing them, I was able
to build like I did before and I didn't see the high CPU usage. Now the
weird
Greetings,
There are 2 imminent releases coming soon to a download server near you:
1. GlusterFS 3.2.6 - a maintenance release that fixes some bugs.
2. GlusterFS 3.3 beta 3 - the next iteration of the exciting new hotness that
will be 3.3
You can find both of these in the QA builds server:
Is there an estimate of when Gluster 3.3 will be out of beta?
Tim Bell
CERN
What is the most current 3.2.6 release? It looks like 3.2.6p3?
On 03/14/2012 03:21 PM, John Mark Walker wrote:
Greetings,
There are 2 imminent releases coming soon to a download server near you:
1. GlusterFS 3.2.6 - a maintenance release that fixes some bugs.
2. GlusterFS 3.3 beta 3 - the
- Original Message -
What is the most current 3.2.6 release? It looks like 3.2.6p3?
That is correct.
-JM
We have a four-node, replicated cluster. When using the native gluster
client, we use the local server as the mount point (ie., mount
localhost:/glustervolume /dir). For NFS, can we do the same, or should
all of the clients mount from the same server to ensure proper locking?
I've tried using
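For reference, the two mount styles look like this; Gluster's built-in NFS
server speaks NFSv3 over TCP, so those options are usually needed. The
volume and mount point are placeholders:

  # native client, each node mounting from itself
  mount -t glusterfs localhost:/glustervolume /dir

  # NFS against the same local server; Gluster NFS is v3 over TCP
  mount -t nfs -o vers=3,tcp localhost:/glustervolume /dir

Whether locking stays correct when different clients mount different
servers is exactly the open question here, so treat the NFS line as a
sketch rather than a recommendation.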
2012/3/14 Tim Bell tim.b...@cern.ch:
Is there an estimate of when Gluster 3.3 would be out of beta ?
Before Enlightenment 17, right guys? Right?
All,
For our project, we bought 8 new Supermicro servers. Each server has a
quad-core Intel CPU in a 2U chassis supporting 8 x 7200 RPM SATA drives.
To start out, we only populated 2 x 2TB enterprise drives in each
server and added all 8 peers with their total of 16 drives as bricks to
our
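The post is cut off here, but a 16-brick layout like that would typically
be created along the following lines. The replica count, hostnames, and
brick paths are assumptions for illustration, not the poster's actual
command; with replica 2 the CLI pairs consecutive bricks, so alternating
servers keeps each replica pair on different machines:

  gluster volume create ourvol replica 2 \
      server1:/export/disk1 server2:/export/disk1 \
      server3:/export/disk1 server4:/export/disk1 \
      server5:/export/disk1 server6:/export/disk1 \
      server7:/export/disk1 server8:/export/disk1 \
      server1:/export/disk2 server2:/export/disk2 \
      server3:/export/disk2 server4:/export/disk2 \
      server5:/export/disk2 server6:/export/disk2 \
      server7:/export/disk2 server8:/export/disk2
  gluster volume start ourvol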