On Tue, Dec 20, 2016 at 10:32:17AM -0500, Jeff Darcy wrote:
> > Is there some known formula for getting performance out of this stack, or is
> > Samba with Glusterfs with encryption-at-rest just not that workable a
> > proposition for now?
>
> I think it's very likely that the combination you desc
Hi,
We have Glusterfs 3.8.7 with the crypt translator enabled on Ubuntu 16.04
with Samba 4.3.11. With or without the VFS module enabled, files dropped
through an SMB mount from a Windows workstation are painfully slow -- a
minute for a file that transfers in several seconds through a
similarly-con
On the one hand:
# gluster volume heal foretee info healed
Gathering list of healed entries on volume foretee has been unsuccessful on
bricks that are down. Please check if all brick processes are running.
root@bu-4t-a:/mnt/gluster# gluster volume status foretee
Status of volume: foretee
Hi,
I've got a smallish Gluster 3.1.5 instance across two systems with most of the
service being by NFS mounts. Yeah, that's old. But it's generally stable and
there are other priorities ahead of upgrading it.
Recently it started to lag on directory listings on the box providing
Gluster's NFS mo
On Thu, Dec 05, 2013 at 08:19:04PM +0000, Nux! wrote:
> I don't understand what Rackspace has to do in all this. AFAIK Cinder does
> support Gluster.
> http://www.gluster.org/category/glusterfs-openstack-cinder/
Rackspace Private Cloud is where they rent the dedicated hardware to the
customer wit
Hi,
A firm I work with is seriously considering Rackspace's Private Cloud, which
is OpenStack, but Rackspace does not support Gluster, despite reports
elsewhere that Gluster, with IBM's and Red Hat's recent contributions, does
integrate well with OpenStack.
The VMs in this case don't require fast
> We will be deploying 2 gluster nodes (relica-mode) across 2 of our sites.
> The sites are connected over 8Gig dark fibre. Our storage disks will be
> assigned from EMC VNX storage (SAS/NL-SAS disks).
>
> This gluster volume will be mounted on 2 application servers, 1 on each
> site. The applicat
Bobby,
I'm not the one who can say. But those who can will need to know much more
than you've specified. What kind of drives are you using, with what
controllers? What are the network components between your two nodes? What
is the range of file sizes, in what distribution over that range? How ma
On Tue, Apr 09, 2013 at 09:44:26AM -0400, Whit Blauvelt wrote:
> Had some data loss with an older - 3.1.4 - Gluster share last night. Now
> trying to see what the best lessons are to learn from it. Obviously it's too
> old a version for a bug report to matter. Wondering if anyone rec
Had some data loss with an older - 3.1.4 - Gluster share last night. Now
trying to see what the best lessons are to learn from it. Obviously it's too
old a version for a bug report to matter. Wondering if anyone recognizes
this particular sort of error condition though.
It's a 300 G replicated sh
As I read this:
> https://bugzilla.redhat.com/show_bug.cgi?id=838784
the bug is, from Gluster's POV, between ext4 and the kernel. Is this
correct, that Gluster can't safely use ext4 on recent kernels until ext4's
relationship with the kernel is fixed? If Gluster can't be simply patche
A small question: We know that one or two members of the dev team read these
emails. One said just yesterday he's more likely to see emails than bug
reports. Now, sometimes the response to an email showing an obvious bug is
"File a bug report please." But for something like this - yes timestamps ar
On Mon, Feb 11, 2013 at 12:18:38PM -0500, John Mark Walker wrote:
> Here's a write-up on the newly-released GlusterFS 3.4 alpha:
>
> http://www.gluster.org/2013/02/new-release-glusterfs-3-4alpha/
If we want to test the QEMU stuff, is there a more thorough/current writeup
than Rao's blog post at
h
On Fri, Feb 01, 2013 at 12:53:24PM -0800, Michael Colonno wrote:
> Forgot to mention: on a client system (not a brick) the glusterfs
> process is consuming ~ 68% CPU continuously. This is a much less powerful
> desktop system so the CPU load can’t be compared 1:1 with the systems
> comp
On Tue, Jan 22, 2013 at 08:37:03AM -0500, F. Ozbek wrote:
> I am surprised at Jeff's responses. What happened to freedom of speech?
> I can't say we tested 3 products and 2 failed and one passed our tests?
In fairness, you didn't say you'd been testing for 3 months. And you still
haven't said wha
On Sun, Jan 20, 2013 at 11:42:44AM -0500, John Mark Walker wrote:
> On the other hand, do you see people recommending Fedora on the Ubuntu
> lists?
Raises the question: Where do we have meta-discussions and have it be
appropriate? This isn't religion, and shouldn't be. If someone on an Ubuntu
lis
On Tue, Jan 08, 2013 at 04:49:30PM +0100, Stephan von Krawczynski wrote:
> > Pointing out that a complex system can go wrong doesn't invalidate complex
> > systems as a class. It's well established in ecological science that more
> > complex natural systems are far more resilient than simple ones.
On Tue, Jan 08, 2013 at 02:42:49PM +0100, Stephan von Krawczynski wrote:
> What a dead-end argument. _Nothing_ will save you in case of a split-brain.
So then, to your mind, there's _nothing_ Gluster can do to heal after a
split brain? Some non-trivial portion of the error scenarios discussed in
On Tue, Jan 08, 2013 at 01:11:24PM +0100, Stephan von Krawczynski wrote:
> Nobody besides you is talking about timestamps. I would simply choose an
> increasing stamp, increased by every write-touch of the file.
> In a trivial comparison this assures you choose the latest copy of the file.
> There
There's a strong trend against documentation of software, and not just in
open source. I'm old enough to remember when anything modestly complex came
with hundreds of pages of manuals, often over several volumes. Now, I can
understand why commercial software with constrained GUIs wants to pretend
t
On Sun, Dec 30, 2012 at 05:12:04PM +0100, Stephan von Krawczynski wrote:
> If I delete
> something on a disk that is far from being full it is just plain dumb to
> really erase this data from the disk. It won't help anyone. It will only hurt
> you if you deleted it accidently. Read my lips: free d
On Tue, Dec 18, 2012 at 03:26:57PM -0700, Shawn Heisey wrote:
> I have an idea I'd like to run past everyone. Every gluster peer
> would have two NICs - one "public" and the other "private" with
> different IP subnets. The idea that I am proposing would be to have
> every gluster peer have all pr
On Wed, Dec 12, 2012 at 09:28:24AM +0100, Gunnar wrote:
> The problem was that the open file limit was too low, after raising it
> with: ulimit -n 65536 (which is probably too high) I had no further crash.
That makes sense.
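To make the higher limit persist across reboots, one option is a pair of entries in /etc/security/limits.conf (the values here are assumptions, not recommendations):

```
# /etc/security/limits.conf -- raise the open-file limit persistently
# "*" applies to all users; scope it to the gluster user if preferred
*    soft    nofile    16384
*    hard    nofile    65536
```

A fresh login session is needed before `ulimit -n` reflects the change.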
Whit
Have you used top or htop to see if it's CPU use, by what?
Whit
On Tue, Dec 11, 2012 at 04:13:15PM +, Tom Hall wrote:
> I have 2 gluster servers in replicated mode on EC2 with ~4G RAM
> CPU and RAM look fine but over time the system becomes sluggish, particularly
> networking.
> I notice whe
On Tue, Dec 11, 2012 at 10:10:32AM +0100, Gunnar wrote:
> after testing for a while (after copying several 10 files) it
> seems that either glusterfs or glusternfs is crashing under load.
> Then the average load on the machine goes up to 8 or 9, before it was
> max around 1, but there is no acc
Gunnar,
> Second fastest is #1, nfs mount shared by Samba 4000 files in around 6 min
> Slowest is #2 where I need more than 12 min for 4000 files.
Thanks for running that test. That's a significant difference.
I wonder in the Samba > Gluster client > Gluster server scenario whether the
slowne
Gunnar,
I claim nothing special in terms of Samba knowledge. Not even that this is
optimal in any dimension. All I can say is that none of my users have
complained about performance, in a situation where speed's not critical as
long as the overall system is dependable. But my current Samba conf, f
Avanti,
For those of us willing compile kernels when there's a distinct advantage,
has this patch made it into a kernel release? If so, which?
Thanks,
Whit
On Tue, Dec 04, 2012 at 04:35:39PM -0800, Anand Avati wrote:
> Support for READDIRPLUS in FUSE improves directory listing performance
> sign
Hi,
I'm about to set up Gluster 3.3.1 in a cloud environment. The clients are
going to be nfs and cifs as well as Gluster. The Gluster docs suggest
setting up cifs as a share of a local Gluster client mount. My setup in
another, cloudless setup (w/ Gluster 3.1.5) has been of Gluster mounted on a
s
Gerald,
How's the VM's on Gluster thing working? Stable? Fast enough where speed's
not essential?
Thanks,
Whit
On Tue, Nov 27, 2012 at 09:36:24AM -0600, Gerald Brandt wrote:
> Hi,
>
> I have speed and stability increases with 3.3.0/3.3.1. If you're running
> VM's on gluster, it's a no brainer
On Sun, Nov 18, 2012 at 07:56:36PM -0500, David Coulson wrote:
> On 11/18/12 7:53 PM, Whit Blauvelt wrote:
> >Red Hat does not support upgrades between major versions. Thus CentOS and
> >Scientific don't either.
> I work in an Enterprise environment, and in general onc
On Sun, Nov 18, 2012 at 09:58:58PM +, Brian Candler wrote:
> On Sun, Nov 18, 2012 at 09:27:41AM -0700, Shawn Heisey wrote:
> > Do you happen to know if it will be possible
> > to upgrade from CentOS 6 to CentOS 7? The lack of an upgrade path
> > from 5 to 6 has been a major headache.
>
> Sor
On Mon, Oct 22, 2012 at 09:49:21AM -0400, John Mark Walker wrote:
> Dan - please file a bug re: the NFS issue.
Glad to hear this will be treated as a bug. If NFS is to be supported at
all, being able to use a virtual-IP setup (whether mediated by CARP or
otherwise) is essential. And considering t
On Fri, Sep 14, 2012 at 09:41:42AM -0700, harry mangalam wrote:
> > > What I mean:
> > > - mounting a gluster fs via the native client,
> > > - then NFS-exporting the gluster fs to the client itself
> > > - then mounting that gluster fs via NFS3 to take advantage of the
> > > client-side caching.
On Thu, Sep 13, 2012 at 05:02:48PM -0700, Adam Brenner wrote:
> > Yeah. But then, if it could do an unattended boot, then someone who steals
> > the system isn't hindered by the encryption either.
>
> In this hypothetical situation, is securing your machine room a
> consideration? I suspect someon
On Thu, Sep 13, 2012 at 02:42:13PM -0700, Joshua Baker-LePain wrote:
> I haven't, but given that Gluster runs on top of a standard FS, I
> don't see any reason why this wouldn't work. Rather than just
> Gluster on top of ext3/4/XFS, it would be Gluster on top of
> ext3/4/XFS on top of an LUKS enc
Hi,
This may be crazy, but has anyone used filesystem encryption (e.g. LUKS)
under Gluster? Or integrated encryption with Gluster in some other way?
There's a certain demand to encrypt some of our storage, in case the
hypothetical bad guy breaks into the server room and walks out with the
servers
Hi,
I don't know about Gluster support, but I use inotify via incrontab to watch
mounted Gluster filesystems and it works well. Most of my use of it is just
triggering scripts when new files arrive.
See: http://inotify.aiken.cz/?section=incron&page=doc&lang=en
There are limitations. It has to ha
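As a sketch, an incrontab entry watching a Gluster mount might look like this (the path and script name are hypothetical):

```
# incrontab -e  -- run a script whenever a file finishes writing
# $@ expands to the watched directory, $# to the file name
/mnt/gluster/incoming  IN_CLOSE_WRITE  /usr/local/bin/on-new-file.sh $@/$#
```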
On Mon, Sep 10, 2012 at 11:13:11AM +0200, Stephan von Krawczynski wrote:
> If you have small files you are busted, if you have workload on the clients
> you are busted and if you have lots of concurrent FS action on the client you
> are busted. Which leaves you with test cases nowhere near real li
On Fri, Sep 07, 2012 at 07:32:14AM +0100, Brian Candler wrote:
> On Thu, Sep 06, 2012 at 03:19:53PM -0400, Whit Blauvelt wrote:
> > Here's the unexpected behavior: Gluster restored the nfs export based on the
> > backing store. But without that backing store really mounted,
Hi,
This may just be unexpected by me. But I'm wondering if there are safeguards
against it, either that are in 3.1.4 that I haven't set up, or in
subsequent versions. What happened:
Despite dual power supplies and UPSes, two Gluster hosts managed to get
suddenly shut down at once. On coming back
On Thu, Aug 30, 2012 at 06:12:46PM +0100, Brian Candler wrote:
> On Thu, Aug 30, 2012 at 09:44:30AM -0400, Whit Blauvelt wrote:
> > Given that my XFS Gluster NFS mount does not have inode64 set (although it's
> > only 500G, not the 2T barrier beyond which that's eviden
On Thu, Aug 30, 2012 at 10:42:01AM -0500, John Jolet wrote:
> i had to turn off posix locking in order to get windows machines to
> be able to write to the shares at all.
Haven't seen that problem.
Looking for background I found this presentation on SMB2 - the next version
of Samba basically, in
Whether or not this is related to my particular catastrophe yesterday,
there's inconsistent advice on whether Samba should have posix locking on or
off when Gluster is involved. On the one hand, on is advised:
http://gluster.org/community/documentation/index.php/Gluster_3.1:_Adding_the_CIFS_Protoco
Just a note:
I see that "posix locking = on" (the default) in Samba was at least at one
point a real problem for GFS2, as mentioned here:
https://fedoraproject.org/wiki/Features/GFS2ClusteredSamba
Not sure if that was ever resolved. I bring that up here because I'm
wondering if it's similarly a
> Hoping someone has a clearer view of this. Are there respects in which
> Gluster's NFS is 32-bit rather than 64? Or that xfs on a 64-bit system is?
I'm seeing that XFS is said to require an inode64 mount option to not run
into various problems, and that this is often considered an example of XFS
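For reference, an fstab line mounting an XFS brick with inode64 might look like this (device and mount point are assumptions):

```
# /etc/fstab -- XFS brick filesystem mounted with 64-bit inodes
/dev/sdb1  /export/brick1  xfs  inode64,noatime  0  0
```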
Taking the message from Samba to be:
> No locks available error. This can happen when using 64 bit lock offsets
> on 32 bit NFS mounted file systems.
Is Gluster 3.1.4's NFS itself 32-bit or 64-bit? From here:
http://europe.gluster.org/community/documentation/index.php/Gluster_3.1:_NFS_Frequent
Hi,
I have a couple of Gluster 3.1.4 shares on the LAN NFS mounted at
192.168.1.242 to a couple of other systems. One of those systems in turn has
access via Samba that includes those shares. This has been a stable system
for a year. Today it went crazy, where the most immediate bad effect was an
On Thu, Jul 19, 2012 at 05:16:24AM -0400, Krishna Srinivas wrote:
> It was pretty confusing to read this thread. Hope I can clarify the
> questions here.
Thanks. I was confused.
> The other discussion in this thread was related to NLM which has been
> implemented in 3.3.0. This is to support lock
On Wed, Jul 18, 2012 at 01:56:04AM -0400, Rajesh Amaravathi wrote:
> As to whether we can disable parts of kernel NFS (I'm assuming e.g NLM), I
> think
> its not really necessary since we can mount other exports with nolock option.
> If we take out NLM or disable NLM at the kernel level, then eve
Say, is it possible to compile a kernel without whatever part of its NFS
support competes with Gluster's locking?
Whit
On Fri, Jul 13, 2012 at 08:14:27AM -0400, Rajesh Amaravathi wrote:
> I hope you do realize that two NLM implementations of the same version
> cannot operate simultaneously in th
On Wed, Jul 11, 2012 at 05:56:00PM +, Barlow, Jim D wrote:
> As far as fencing goes, I have done nothing. I'm manually as
> carefully as I can manage using virt-manager. I've already accidentally
> started the same VM on two bricks. Watch your autostart settings on the
> VMs :-)
There's
Hi,
We've several systems running 3.1.5 since it was new, happily without
incident. The time has come to upgrade to 3.3.0, so I want to make sure I
have the instructions right to minimize downtime. From here:
http://gluster.org/community/documentation//index.php/Main_Page
I'm sent to here:
Don't know if "proto" is a synonym. But in my working auto.fs mounts I have
"mountproto=tcp" not proto.
Whit
On Sun, Jun 10, 2012 at 04:33:50AM -0500, Chris LeBlanc wrote:
> Hi friends,
>
> I'd like to use autofs but I had mixed luck. It seems as thought it mounts
> itself instead of the gluste
> I'm sorry if the release didn't address the specific features you need,
> and I'm sorry if we gave the impression that it would. Our additional
> features for 3.3 were always pretty clear, or so I thought. If you can
> find any statements from the past year that were misleading, I would be
> happ
On Tue, Jun 05, 2012 at 08:13:37AM -0700, Anand Avati wrote:
> Whit,
> There has been no drop in goal. There are many parts to "supporting a VM
> workload". The goals listed for 3.3 are definitely met - to make VMs work good
> enough (fix self-heal locking issues, some FUSE upstream work for supp
On Tue, Jun 05, 2012 at 11:09:39AM +0530, Amar Tumballi wrote:
> >I tried it to host Virtual Machines images and it didn't work at all. Was
> > hoping to be able to spread the IOPS more through the cluster. That's
> > part of what I was trying to say on the email I sent earlier today.
>
> I saw
Hi,
My experience with Gluster has been entirely with local hardware. And the
discussion here has been entirely about such use. But I the Gluster docs
hint of Gluster use within cloud environments - or at least in Amazon's.
Note this is a different issue than using Gluster as backing storage for a
I'm told it doesn't, but may in the future.
Whit
On Fri, Feb 17, 2012 at 01:32:25PM -0500, Matty wrote:
> I'm pretty new to Gluster, though I really like what I've seen and
> played with so far!! Does anyone happen to know if Gluster checksums
> data that is written to disk? I've used ZFS quite a
Yes, it's correct behavior. Only writes and edits through the gluster mount
are handled immediately by gluster.
Whit
On Wed, Mar 28, 2012 at 10:30:03AM +0800, nhadie ramos wrote:
> Hi All,
>
> I am just starting to do some tests on gluster, i have 2 servers which
> i setup as replicate:
>
>
On Wed, Feb 15, 2012 at 08:22:18PM -0500, Miles Fidelman wrote:
> i. Is it now reasonable to consider running Gluster and Xen on the same
> boxes, without hitting too much of a performance penalty?
What's in the hardware? What kind of loads are you expecting it to handle?
For a bunch of lightly-
On Tue, Feb 14, 2012 at 08:39:31AM +, Brian Candler wrote:
> On Mon, Feb 13, 2012 at 11:45:10PM -0500, Whit Blauvelt wrote:
> > If Gluster gets the geo-rep
> > thing working right, it'll be the low-cost solution there too.
>
> Do I hear implied from that that geo-re
On Mon, Feb 13, 2012 at 09:18:34PM -0600, Nathan Stratton wrote:
>
> On Mon, 13 Feb 2012, Whit Blauvelt wrote:
>
> >You don't have to leave all your redundancy to Gluster. You can put Gluster
> >on two (or more) systems which are each running RAID5, for instance. Then
You don't have to leave all your redundancy to Gluster. You can put Gluster
on two (or more) systems which are each running RAID5, for instance. Then it
would take a minimum of 4 drives failing (2 on each array) before Gluster
should lose any data. Each system would require N+1 drives, so double yo
On Mon, Feb 06, 2012 at 08:28:55PM +0530, Pranith Kumar K wrote:
> Modifying the data directly on the brick in a replica
> pair(Brick A, Brick B) is a bad thing because it will not be
> possible to decide in which direction to heal the contents of the
> file B -> A or A -> B. If the file is m
On Sun, Feb 05, 2012 at 10:46:53PM +0100, Ove K. Pettersen wrote:
> >>* Append to file on one of the bricks: hostname>>data.txt
> >Again, through a gluster/nfs mount, or a local mount?
> This was done directly to a brick (local mount) to try to simulate
> some disk-problems.
> Appending to the fil
Not sure if you're asking your questions precisely enough. The clues may be
in your inclusions, but I'm not going to read all that to figure it out so
I'll ask directly:
> Short description of my test
>
> * 4 replicas on single machine
> * glusterfs mounted locally
> *
On Sun, Feb 05, 2012 at 07:36:55PM +, Brian Candler wrote:
> On Sun, Feb 05, 2012 at 08:02:08PM +0100, Stefan Becker wrote:
> >After the debian upgrade I can
> >still mount my volumes. Reading is fine as well but it hangs on writes.
>
> Could it be that on the post-upgrade machines one
On Tue, Jan 24, 2012 at 12:50:58PM -0500, John Mark Walker wrote:
> True. Also, note that XFS is the recommended disk FS, although Ext3/4 are
> certainly still supported and will continue to be so.
Are the reasons listed somewhere? It used to be the opposite. From
http://www.gluster.org/community
ow):
/var/lib/sitedata -fstype=glusterfs 127.0.0.1:/shared
There are other options, but that should be enough. Autofs is packaged for
every Linux distro AFAIK.
Whit
On Tue, Jan 17, 2012 at 06:58:05PM +0100, Marcus Bointon wrote:
> On 16 Jan 2012, at 18:59, Whit Blauvelt wrote:
>
>
On Mon, Jan 16, 2012 at 06:32:00PM +0100, Marcus Bointon wrote:
> You can see it's failing to connect to the local node, and then recording
> the gluster server starting a short time later. I'm pointing at localhost
> to get the share point rather than either of the external IPs. Is there
> some wa
Just a general question to the group: What's the current feeling about the
most stable releases so far? I've got several different instances of 3.1.4
gluster clusters running in undemanding circumstances for many months
without incident (aside from the known group-ownership issues). Since my
priori
On point 2, I've been running an older version of Gluster for some months on
several pairs of systems which are also serving other functions. In one case
they're running multiple VMs (replicated by individual DRBD mirrorings)
while also providing replicated Gluster storage used (via NFS) by other
s
On Tue, Nov 01, 2011 at 01:01:09PM -0700, Brian Pontz wrote:
> I'm wondering why the 32 bit binary has no trouble on a standard nfs mount but
> it does have trouble on the gluster mount with the gluster client.
Isn't NFS 32 bit throughout?
Whit
What's the technical level required to build translators? What languages are
appropriate? Is this a capability that is likely to be useful to an advanced
sysadmin, or most likely only of real interest to those with more serious
software engineering resources to devote to it?
Whit
On Thu, Oct 20,
On Mon, Oct 10, 2011 at 10:56:11AM -0400, Miles Fidelman wrote:
> GlusterFS stands out as the package that seems most capable of
> supporting this (if we were using KVM, I'd probably look at Sheepdog
> as well).
The Gluster folks have said it's best to wait for 3.3's improved VM support.
My under
tten by the same application (apache) I see no way to split that.
> What is more weird is that I managed to get it to read just one side for a
> moment, after restarting gluster because of some adjustments, it started
> reading both nodes again, even restoring config backups had no effec
On Tue, Sep 27, 2011 at 12:15:50PM +0100, Lone Wolf wrote:
>So I am assuming gluster is distributing the reads, being created with the
>CLI I tried to edit the configurations manually and added "option
>read-subvolume volume_name" to the cluster/replicate volume, being the
>read-su
> I have two replicated bricks. I mounted brick1 as NFS, and started
> writing/creating files.
Did you mount brick1 (the backing store) through some other NFS daemon, or
through the GlusterFS as NFS? If you do the first, stuff won't sync. If you
do the second, the Gluster daemons handle the sync
On Tue, Sep 13, 2011 at 11:14:06AM -0500, Gerald Brandt wrote:
> Hi,
>
> I hope I can explain this properly.
>
> 1. I have a two brick system replicating each other. (10.1.4.181 and
> 10.1.40.2)
> 2. I have a third system that mounts the gluster fileshares (192.168.30.111)
> 3. I export the sh
On Thu, Sep 08, 2011 at 01:02:41PM +, anthony garnier wrote:
> I got a client mounted on the VIP, when the Master fall, the client switch
> automaticaly on the Slave with almost no delay, it works like a charm. But
> when
> the Master come back up, the mount point on the client freeze.
> I've
>In my understanding of autofs is that it will mount as you need the
>filesystem but will it remount the FS when there is a server crash ?
Anthony,
I haven't specifically tested it in this precise context, but in general
that's exactly what it does. We went to using autofs because we had
Hi Anthony,
If you need the client to remain mounted, you need _some_ way of doing IP
takeover. You could write your own script for that.
As for remounting though, if your client is a *nix (Linux, OSX, whatever)
you can use autofs to establish the mount, and that will also handle
remounting autom
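A minimal autofs setup for a Gluster mount might look like this (paths and volume name are assumptions; a direct map is used so the mount point can be an absolute path):

```
# /etc/auto.master -- register a direct map with a 60s idle timeout
/-   /etc/auto.gluster   --timeout=60

# /etc/auto.gluster -- mount the volume on demand, remount after failures
/mnt/shared  -fstype=glusterfs  server1:/myvol
```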
> ... the new hotness of GlusterFS 3.3, the 2nd beta of
> which we released today:
>
> http://www.gluster.com/community/documentation/index.php/3.3beta
If that beta is installed over a 3.1.6 system (not too critical to
production of course) will it work, or is there an upgrade path that simply
ha
Hi Deyan,
This may not be a useful suggestion, but why not just partition your storage
so that your bricks remain uniform? When you get more disk capacity because
you've got 3TB drives instead of 2TB in RAID5, so that you have 15TB rather
than 10TB per system, why not split the storage into 10TB a
On Sat, Aug 13, 2011 at 01:59:20PM +0200, Dipeit wrote:
> We noticed this bug too using the gluster client. I'm surprised that not
> more people noticed this lack of posix compliance. This makes gluster
> really unusable in multiuser environments. Is that because gluster is
> mostly used in large
On Wed, Aug 10, 2011 at 03:30:22PM +0200, David Pusch wrote:
> The objective is to create a redundant system. Shouldn't gluster be writing on
> all 6 nodes simultaneously rather than sequentially? Else it would seem like a
> rather poor choice for highly redundant systems.
I get the redundant part
On Wed, Aug 10, 2011 at 02:22:55PM +0200, David Pusch wrote:
> we now did another test where we mounted the Volume on the client and shut
> down all Servers but one. We then transferred a 1 GB test file to the Volume.
> The transfer took around 10 seconds. We then brought up another server from
On Sat, Jul 30, 2011 at 04:33:00PM +0530, Anand Avati wrote:
> Looks like your VM image is being attempted to open with O_DIRECT flag which
> is
> not supported in FUSE at the moment. You may try to configure your
> libvirt/qemu
> settings to open the VM without O_DIRECT. If not you may find htt
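For what it's worth, in libvirt the disk cache mode is what controls O_DIRECT: cache='none' opens the image with O_DIRECT, so choosing another mode avoids it. A sketch of a disk stanza (file path and target device are assumptions):

```
<disk type='file' device='disk'>
  <!-- cache='writethrough' avoids O_DIRECT; cache='none' would require it -->
  <driver name='qemu' type='raw' cache='writethrough'/>
  <source file='/mnt/gluster/vm1.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
```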
On Fri, Jul 29, 2011 at 03:22:23PM -0700, Eric wrote:
> i would prefer to use the glusterFS client for the automatic fail-over that it
> offers. is this possible?
I've built KVM VMs with virt-install writing to img files via the Gluster
client. My invocation of virt-install differed from yours in
> seeing as how the archives are, as is the norm for many lists,
> non-searchable
You can often search such archives with Google, specifying for instance
site:http://gluster/org/pipermail/gluster-users/ along with the search terms
(or fill in the site on the Advanced Search menu).
Best,
Whit
On Fri, Jul 22, 2011 at 10:26:13AM +0530, Anand Avati wrote:
> Can you post gluster client logs and check if there are any core dumps?
Okay, they are at http://transpect.com/gluster/ - may be a bit confused
because of system reboots and gluster being shut down and restarted as I was
working on it,
Okay ...
Finally got that one replicated partition back in line. A few of the
recommended
find /mnt/point -print0 | xargs --null stat
from each side seems to have done some good. Then while I'm away a second
replicated partition on the same two systems ends up with a
Transport endpoint is
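For anyone else needing to rerun that walk, it can be wrapped so the mount point is a parameter; a minimal sketch (the default path here is a placeholder, not a real Gluster mount):

```shell
# Walk the mount and stat() every entry, which triggers Gluster's
# self-heal check on each file. Substitute your real mount point.
MOUNT="${MOUNT:-/tmp/heal-demo}"
mkdir -p "$MOUNT"   # only so the sketch runs standalone
find "$MOUNT" -print0 | xargs --null stat > /dev/null && echo "self-heal walk done"
```

Running it from each replica side, as above, is what finally brought the partition back in line.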
> The client on the other system in the pair continues through this to have
> normal access. The system with the Gluster client problem shows no other
> symptoms.
Update: Now the second system's having the same symptoms.
Maybe I need to just go back to 3.1.3?
Whit
Hi,
This setup is two system which have replicated Gluster storage, and also are
the clients for it. In both cases the Gluster mount is handled by autofs.
The setup was quite stable with Gluster 3.1.3 for some weeks. Then a few
days back I upgraded to 3.1.5 - wanted to try it on a less essential s
On Thu, Jul 21, 2011 at 05:17:05PM -0400, Joe Landman wrote:
> Depending upon distro, you would need libattr, attr-dev, etc.
> Sadly, there is little/no consistency in naming these.
On Ubuntu 10.04 attr-dev is an alias for libattr1-dev, which seems to
install okay but there's still no ExtAttr.pm
On Thu, Jul 14, 2011 at 12:15:00AM -0400, Joe Landman wrote:
> Tool #2: scan_gluster.pl
Joe,
Thanks for this contribution.
scan_gluster.pl seems to depend on File::ExtAttr, which when I try to
install it from cpan to Perl v5.10.1 fails at the "make" step for reasons
cpan leaves unclear. Is the
> Is there any way to rotate client side’s log
>
> It’s getting bigger every day.
You can also configure logrotate - which is on most Linux systems - to
handle this. "man logrotate". It'll handle compression and deletion of older
files, too.
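A stanza along these lines would do it (paths assumed; copytruncate avoids having to signal the glusterfs process on rotation):

```
# /etc/logrotate.d/glusterfs-client
/var/log/glusterfs/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}
```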
Whit