And why don't you just do the right thing, drop this semi-closed-source
stuff, and use XMPP, about the only free/GPLed messenger service.
It has:
- no central provider
- no central servers
- server software free for anyone to install
- lots of free xmpp services around
- encrypted or unencrypted
moving to an in-kernel driver would give significantly more performance.
>
> On September 17, 2020 3:21:01 AM PDT, Alexander Iliev
> wrote:
> >On 9/17/20 3:37 AM, Stephan von Krawczynski wrote:
> >> Nevertheless you will break performance anyway by deploying
>
On Thu, 17 Sep 2020 12:21:01 +0200
Alexander Iliev wrote:
> On 9/17/20 3:37 AM, Stephan von Krawczynski wrote:
> > Nevertheless you will break performance anyway by deploying user-space
> > crawling-slow glusterfs... outcome of 10 wasted years of development in the
>
On Wed, 17 Jun 2020 00:06:33 +0300
Mahdi Adnan wrote:
> [gluster going down ]
I am following this project for quite some years now, probably longer than
most of the people nowadays on the list. The project started with the
brilliant idea of making a fs on top of classical fs's distributed over
On Thu, 18 Jun 2020 13:27:19 -0400
Alvin Starr wrote:
> > [me]
> This is an amazingly unreasonable comment.
> First off ALL distributed file systems are slower than non-distributed
> file systems.
Obviously you fail to understand my point: the design of glusterfs implies
that it can be as
On Thu, 18 Jun 2020 07:40:36 -0700
Joe Julian wrote:
> You're still here and still hurt about that? It was never intended to be in
> kernel. It was always intended to run in userspace. After all these years I
> thought you'd be over that by now.
Top Poster ;-)
And in fact, it's not true. The
On Thu, 18 Jun 2020 13:06:51 +0400
Dmitry Melekhov wrote:
> 18.06.2020 12:54, Stephan von Krawczynski wrote:
> >
> > _FS IN USERSPACE IS SH*T_ - understand that.
> >
>
> we use qemu and it uses gfapi... :-)
And exactly this kind of "insight" is bas
On Thu, 18 Feb 2016 10:14:59 +1000
Dan Mons wrote:
> Without knowing the details, I'm putting my money on cache.
>
> Choosing how to mount Gluster is workload dependent. If you're doing
> a lot of small files with single threaded writes, I suggest NFS. Your
>
On Fri, 14 Jun 2013 14:35:26 -0700
Bryan Whitehead dri...@megahappy.net wrote:
GigE is slower. Here is ping from same boxes but using the 1GigE cards:
[root@node0.cloud ~]# ping -c 10 10.100.0.11
PING 10.100.0.11 (10.100.0.11) 56(84) bytes of data.
64 bytes from 10.100.0.11: icmp_seq=1
On Fri, 14 Jun 2013 12:13:53 -0700
Bryan Whitehead dri...@megahappy.net wrote:
I'm using 40G Infiniband with IPoIB for gluster. Here are some ping
times (from host 172.16.1.10):
[root@node0.cloud ~]# ping -c 10 172.16.1.11
PING 172.16.1.11 (172.16.1.11) 56(84) bytes of data.
64 bytes from
On Wed, 12 Jun 2013 09:04:30 -0400
Jeff Darcy jda...@redhat.com wrote:
[...]
that need to be moved, it shouldn't be too hard to combine this little
bag of tricks into a solution that meets your needs. Just let me know
if you'd like me to assist.
The true question is indeed: why does he need
On Wed, 12 Jun 2013 09:57:15 -0400
Jeff Darcy jda...@redhat.com wrote:
On 06/12/2013 09:46 AM, Stephan von Krawczynski wrote:
The true question is indeed: why does he need tricks at all to come to
something obvious for humans: a way of distributing files over the glusterfs
so that full
On Sat, 13 Apr 2013 23:47:21 +0530
Vijay Bellur vbel...@redhat.com wrote:
On 04/13/2013 08:11 PM, Stephan von Krawczynski wrote:
On Sat, 13 Apr 2013 10:37:23 -0400 (EDT)
John Walker jowal...@redhat.com wrote:
Try the new qa build for 3.3.2. We're hopeful that this will solve some
On Tue, 09 Apr 2013 03:13:10 -0700
Robert Hajime Lanning lann...@lanning.cc wrote:
On 04/09/13 01:17, Eyal Marantenboim wrote:
Hi Bryan,
We have 1G nics on all our servers.
Do you think that changing our design to distribute-replicate will
improve the performance?
Anything in the
I really do wonder if this bug in _glusterfs_ is not fixed. It really makes no
sense to do an implementation that breaks on the most used fs on linux.
And just as you said: don't wait on btrfs, it will never be production-ready.
And xfs is no solution, it is just a bad work-around.
On Fri, 8 Mar
On Wed, 30 Jan 2013 20:44:52 -0800
harry mangalam harry.manga...@uci.edu wrote:
On Thursday, January 31, 2013 11:28:04 AM glusterzhxue wrote:
Hi all,
As is known to us all, gluster provides NFS mount. However, if the mount
point fails, clients will lose connection to Gluster. While if we
On Thu, 31 Jan 2013 12:47:30 +
Brian Candler b.cand...@pobox.com wrote:
On Thu, Jan 31, 2013 at 09:18:26AM +0100, Stephan von Krawczynski wrote:
The client will still fail (in most cases) since host1 (if I follow you) is
part of the gluster groupset. Certainly if it's
On Thu, 31 Jan 2013 09:07:50 -0800
Joe Julian j...@julianfamily.org wrote:
On 01/31/2013 08:38 AM, Stephan von Krawczynski wrote:
On Thu, 31 Jan 2013 12:47:30 +
Brian Candler b.cand...@pobox.com wrote:
On Thu, Jan 31, 2013 at 09:18:26AM +0100, Stephan von Krawczynski wrote
On Thu, 31 Jan 2013 14:17:32 -0500
Jeff Darcy jda...@redhat.com wrote:
There is *always* at least one situation, however unlikely, where you're
busted. Designing reliable systems is always about probabilities. If
none of the solutions mentioned so far suffice for you, there are still
On Thu, 31 Jan 2013 16:00:38 -0500
Jeff Darcy jda...@redhat.com wrote:
Most common network errors are not a matter of design, but of dead
iron.
It's usually both - a design that is insufficiently tolerant of
component failure, plus a combination of component failures that exceeds
that
Hi Patric,
your paper shows clearly you are infected by the fs-programmer-virus :-)
No one else would give you tags/gfids/inode nums of a file inside a logfile
instead of the full true filename, simply because looking at the logfile
days/months/years later you know exactly nothing about the files
On Tue, 22 Jan 2013 09:05:56 -0500
Whit Blauvelt whit.glus...@transpect.com wrote:
On Tue, Jan 22, 2013 at 08:37:03AM -0500, F. Ozbek wrote:
[...]
We've not only got freedom of speech. We've got freedom of guns. Still,
walking into the meeting with your gun drawn will get you viewed as rude
On Tue, 08 Jan 2013 07:04:48 -0500
Jeff Darcy jda...@redhat.com wrote:
Timestamps are totally unreliable as a conflict resolution mechanism. Even if
one were to accept the dependency on time synchronization, there's still the
possibility of drift as yet uncorrected by the synchronization
On Mon, 07 Jan 2013 20:21:25 -0800
Joe Julian j...@julianfamily.org wrote:
I don't know the answer. I know that they want this problem to be
solved, but right now the best solution is hardware. The lower the
latency, the less of a problem you'll have.
The only solution is correct
On Tue, 08 Jan 2013 07:54:05 -0500
Jeff Darcy jda...@redhat.com wrote:
On 1/8/13 7:11 AM, Stephan von Krawczynski wrote:
Nobody besides you is talking about timestamps. I would simply choose an
increasing stamp, increased by every write-touch of the file.
In a trivial comparison
On Tue, 8 Jan 2013 08:01:16 -0500
Whit Blauvelt whit.glus...@transpect.com wrote:
On Tue, Jan 08, 2013 at 01:11:24PM +0100, Stephan von Krawczynski wrote:
Nobody besides you is talking about timestamps. I would simply choose an
increasing stamp, increased by every write-touch of the file
On Tue, 08 Jan 2013 07:55:41 -0500
Jeff Darcy jda...@redhat.com wrote:
On 1/8/13 7:35 AM, Stephan von Krawczynski wrote:
In fact I already saw just about every possibility you can think of when
accessing files, be it a simple ls or writing or reading a file.
Would you mind citing the bug
On Tue, 8 Jan 2013 09:25:05 -0500
Whit Blauvelt whit.glus...@transpect.com wrote:
On Tue, Jan 08, 2013 at 02:42:49PM +0100, Stephan von Krawczynski wrote:
What a dead-end argument. _Nothing_ will save you in case of a split-brain.
So then, to your mind, there's _nothing_ Gluster can do
On Tue, 8 Jan 2013 11:44:15 -0500
Whit Blauvelt whit.glus...@transpect.com wrote:
On Tue, Jan 08, 2013 at 04:49:30PM +0100, Stephan von Krawczynski wrote:
Pointing out that a complex system can go wrong doesn't invalidate complex
systems as a class. It's well established in ecological
On Mon, 07 Jan 2013 13:19:49 -0800
Joe Julian j...@julianfamily.org wrote:
You have a replicated filesystem, brick1 and brick2.
Brick 2 goes down and you edit a 4k file, appending data to it.
That change, and the fact that there is a pending change, is stored on
brick1.
Brick2 returns to
On Sun, 30 Dec 2012 10:13:52 -0500
Jeff Darcy jda...@redhat.com wrote:
On 12/27/12 3:36 PM, Stephan von Krawczynski wrote:
And the same goes for glusterfs. It _could_ be the greatest fs on earth, but
only if you accept:
1) Throw away all non-linux code. Because this war is over since
On Sun, 30 Dec 2012 12:29:53 -0800
Joe Julian j...@julianfamily.org wrote:
Here's where you're getting labeled as a Troll. You have a tendency to do
this on just about every mailing list except LKML (not sure why they get
your love over others, but to each their own).
There is one basic
On Wed, 26 Dec 2012 22:04:09 -0800
Joe Julian j...@julianfamily.org wrote:
It would probably be better to ask this with end-goal questions instead
of with an unspecified critical feature list and performance problems.
6 months ago, for myself and quite an extensive (and often impressive)
to fix it.
-JM
Stephan von Krawczynski sk...@ithnet.com wrote:
On Wed, 26 Dec 2012 22:04:09 -0800
Joe Julian j...@julianfamily.org wrote:
It would probably be better to ask this with end-goal questions instead
of with an unspecified critical feature list and performance problems
[mailto:gluster-users-boun...@gluster.org] On Behalf Of John Mark Walker
Sent: Thursday, December 27, 2012 12:39 PM
To: Stephan von Krawczynski
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] how well will this work
Stephan,
I'm going to make this as simple as possible
regarding
ideal architectures. Even better if you can round up developers to implement
said architecture.
-JM
Stephan von Krawczynski sk...@ithnet.com wrote:
Dear JM,
unfortunately one has to say openly that the whole concept being tried here
is simply wrong. The problem
Sorry, JM forgive my ignorance, but it simply does not match up what you say.
First you say:
In general, I don't recommend any distributed filesystems for VM images, but
I can also see that this is the wave of the future.
Which means you do not believe at all in one major goal of this fs. Hu?
On Sat, 13 Oct 2012 15:52:56 +0100
Brian Candler b.cand...@pobox.com wrote:
In a distributed volume (glusterfs 3.3), files within a directory are
assigned to a brick by a hash of their filename, correct?
So what happens if you do mv foo bar? Does the file get copied to another
brick? Is
On Mon, 10 Sep 2012 08:48:03 +0100
Brian Candler b.cand...@pobox.com wrote:
On Sun, Sep 09, 2012 at 09:28:47PM +0100, Andrei Mikhailovsky wrote:
While trying to figure out the cause of the bottleneck I've realised
that the bottleneck is coming from the client side as running
On Mon, 10 Sep 2012 09:39:18 +0100
Brian Candler b.cand...@pobox.com wrote:
On Mon, Sep 10, 2012 at 09:29:25AM +0800, Jack Wang wrote:
below patch should fix your bug.
Thank you Jack - that was a very quick response! I'm building a new kernel
with this patch now and will report back.
On Mon, 10 Sep 2012 09:44:26 +0100
Brian Candler b.cand...@pobox.com wrote:
On Mon, Sep 10, 2012 at 10:03:14AM +0200, Stephan von Krawczynski wrote:
Yes - so in workloads where you have many concurrent clients, this isn't a
problem. It's only a problem if you have a single client doing
On Mon, 10 Sep 2012 08:06:51 -0400
Whit Blauvelt whit.glus...@transpect.com wrote:
On Mon, Sep 10, 2012 at 11:13:11AM +0200, Stephan von Krawczynski wrote:
[...]
If you're lucky you reach something like 1/3 of the NFS
performance.
[Gluster NFS Client]
Whit
There is a reason why one
Ok, now you can see why I am talking about dropping the long-gone unix
versions (BSD/Solaris/name-one) and concentrate on doing a linux-kernel module
for glusterfs without fuse overhead. It is the _only_ way to make this project
a really successful one. Everything happening now is just a project
On Mon, 27 Aug 2012 18:43:27 +0100
Brian Candler b.cand...@pobox.com wrote:
On Mon, Aug 27, 2012 at 03:08:21PM +0200, Stephan von Krawczynski wrote:
The gluster version is 2.X and cannot be changed.
Ah, that's the important bit. If you have a way to replicate the problem
with current code
On Tue, 28 Aug 2012 09:21:57 +0100
Brian Candler b.cand...@pobox.com wrote:
On Tue, Aug 28, 2012 at 10:01:16AM +0200, Stephan von Krawczynski wrote:
Again, let me note two things:
- the current code has a lot more (other) problems than the 2.X tree, that is
why we won't use
Top posting and kidding is a bit exaggerated for one posting ...
You are not seriously talking about 80 char terminals for an output that is
commonly used by scripts and stuff like nagios, are you?
On Tue, 28 Aug 2012 08:46:22 -0400 (EDT)
Pranith Kumar Karampuri pkara...@redhat.com wrote:
hi
but as you start
adding more fields, formatting them becomes difficult IMO.
Pranith
- Original Message -
From: Stephan von Krawczynski sk...@ithnet.com
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: gluster-users gluster-users@gluster.org, Gluster Devel
gluster-de...@nongnu.org
On Sun, 26 Aug 2012 20:01:20 +0100
Brian Candler b.cand...@pobox.com wrote:
On Sun, Aug 26, 2012 at 03:50:16PM +0200, Stephan von Krawczynski wrote:
I'd like to point you to [Gluster-devel] Specific bug question dated few
days ago, where I describe a trivial situation when owner changes
On Sun, 26 Aug 2012 08:53:33 +0100
Brian Candler b.cand...@pobox.com wrote:
On Fri, Aug 24, 2012 at 07:45:35PM -0600, Joe Topjian wrote:
This removed mdadm and LVM out of the equation and the problem went
away. I then tried with just LVM and still did not see this problem.
On Wed, 10 Aug 2011 12:08:39 -0700
Mohit Anchlia mohitanch...@gmail.com wrote:
Did you run dd tests on all your servers? Could it be one of the disk is
slower?
On Wed, Aug 10, 2011 at 10:51 AM, Joey McDonald j...@scare.org wrote:
Hi Joe, thanks for your response!
An order of
On Thu, 11 Aug 2011 09:13:53 -0400
Joe Landman land...@scalableinformatics.com wrote:
On 08/11/2011 09:11 AM, Burnash, James wrote:
Cogently put and helpful, Joe. Thanks. I'm filing this under good
answers to frequently asked technical questions. You have a number
of spots in that archive
On Sat, 21 May 2011 13:27:38 +0200
Tomasz Chmielewski man...@wpkg.org wrote:
If you found a bug, and even more, it's repeatable for you, please file
a bug report and describe the way to reproduce it.
Ha, very sorry that the project is not an easy-go for a dev. Creating
reproducible setups for
On Fri, 20 May 2011 17:01:22 +0200
Tomasz Chmielewski man...@wpkg.org wrote:
On 20.05.2011 15:51, Stephan von Krawczynski wrote:
most of them are just an outcome of not being able to find a working i.e.
best
solution for a problem. cache-timeout? thread-count? quick-read?
stat
On Wed, 18 May 2011 13:16:59 -0700
Anand Babu Periasamy a...@gluster.com wrote:
GlusterFS is completely free. Same versions released to the community are
used for commercial deployments too. Their issues get higher priority
though. Code related to other proprietary software such as VMWare,
On Fri, 20 May 2011 08:35:35 -0400
Jeff Darcy jda...@redhat.com wrote:
On 05/20/2011 05:15 AM, Stephan von Krawczynski wrote:
Sorry, this clearly shows the problem: understanding. It really does
not help you a lot to hire a big number of people, you do not fail in
terms of business
On Wed, 18 May 2011 14:45:19 +0200
Udo Waechter udo.waech...@uni-osnabrueck.de wrote:
Hi there,
after reporting some trouble with group access permissions,
http://gluster.org/pipermail/gluster-users/2011-May/007619.html (which
still persist, btw.)
things get worse and worse with each
How about the _basics_ of such a fs? Create an answer to the still unresolved
question: What files are currently not in-sync?
From the very first day of glusterfs there is no answer to this fundamental
question for the user. No way to monitor the real state of a replicating fs up
to the current
On Sun, 16 Jan 2011 02:45:50 +0530
Anand Avati anand.av...@gmail.com wrote:
In any case comparing local disk performance to network disk performance
is never right and is always misleading.
Avati
This statement is fundamentally broken.
--
Regards,
Stephan
On Sun, 2 Jan 2011 12:18:08 +0100
nurdin david duchn...@free.fr wrote:
Hello,
When I launch the GlusterFS server on a ReiserFS partition I get this error:
[2011-01-02 12:17:20.269951] C [posix.c:4313:init] posix: Extended attribute
not supported, exiting.
[2011-01-02 12:17:20.269973] E
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
--
MfG,
Stephan von Krawczynski
--
ith Kommunikationstechnik GmbH
Lieferanschrift
On Tue, 16 Nov 2010 16:54:07 -0800
Craig Carl cr...@gluster.com wrote:
Stephan -
Based on your feedback, and from other members of the community we have
opened discussions internally around adding support for a 32-bit client.
We have not made a decision at this point, and I can't make
On Tue, 16 Nov 2010 08:51:17 -0800
Jeff Anderson-Lee jo...@eecs.berkeley.edu wrote:
On 11/16/2010 05:36 AM, Stefano Baronio wrote:
Hi MArtin,
the XenServer Dom0 is 32bit whilst the hypervisor is 64 bit.
You need to know it when you install third part sw on the host.
Hi all,
I just read this one on the dovecot web:
---
FUSE / GlusterFS
FUSE caches dentries and file attributes internally. If you're using multiple
GlusterFS clients to access the same mailboxes, you're going to have problems.
Worst of these
On Mon, 15 Nov 2010 06:25:23 -0800
Craig Carl cr...@gluster.com wrote:
On 11/15/2010 04:57 AM, Stephan von Krawczynski wrote:
Hi all,
I just read this one on the dovecot web:
---
FUSE / GlusterFS
FUSE caches dentries and file
On Mon, 15 Nov 2010 10:18:28 -0500
Joe Landman land...@scalableinformatics.com wrote:
On 11/15/2010 09:47 AM, Stephan von Krawczynski wrote:
Stephan -
Dovecot has been a challenge in the past. We don't specifically test
with it here, if you are interested in using it with Gluster I
On Mon, 15 Nov 2010 12:17:48 -0800
Craig Carl cr...@gluster.com wrote:
Please don't think we are
not working hard to meet your expectations.
Really, Craig, I am not expecting _anything_ for _me_ from glusterfs.
I only feel very sorry for an interesting project that gave a great vision but
On Fri, 12 Nov 2010 18:26:11 -0800
Liam Slusser lslus...@gmail.com wrote:
Hey Gluster Users,
Been a while since I've posted here. I'm looking to upgrade our 150tb
10 brick cluster from 2.0.9 to 3.1. Are there any gotchas I
should be aware of? Has anybody run into any problems? Any
I can tell you that 3.1 does not compile under 32 bit on my box - I tried
lately.
Honestly I find it a bit strange not to support 32-bit clients as there are
lots of them - and 2.0.9 did work on 32 bit. Which means you cannot upgrade such
setups.
Regards,
Stephan
On Sat, 13 Nov 2010 01:17:05 +1030
On Fri, 22 Oct 2010 04:46:44 -0500 (CDT)
Craig Carl cr...@gluster.com wrote:
[Resending due to an incomplete response]
Brent,
Thanks for your feedback. To mount with a Solaris client use -
` mount -o proto=tcp,vers=3 nfs://SERVER-ADDR:38467/EXPORT MNT-POINT`
As to UDP access we want to
On Fri, 22 Oct 2010 15:18:09 +0200
Beat Rubischon b...@0x1b.ch wrote:
Hi Stephan!
Quoting sk...@ithnet.com (22.10.10 15:05):
We never experienced any performance problem with NFS over UDP.
Be careful when using NFSoUDP on recent networking hardware. It's simply too
fast for the
Can you check how things look when using ext3 instead of xfs?
On Fri, 26 Mar 2010 18:04:07 +0100
Ramiro Magallanes lis...@sabueso.org wrote:
Hello there!
I'm working on a 6-node cluster with new SuperMicro hardware.
The cluster has to store millions of JPGs, about
...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Stephan von
Krawczynski
Sent: Wednesday, March 24, 2010 4:37 PM
To: Ian Rogers
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Setup for production - which one would you
choose?
Ok, guys, honestly
##END##
On 3/23/2010 6:02 AM, Stephan von Krawczynski wrote:
On Tue, 23 Mar 2010 02:59:35 -0600 (CST)
Tejas N. Bhisete...@gluster.com wrote:
Out of curiosity, if you want to do stuff only on one machine,
why do you want to use a distributed, multi node
On Tue, 23 Mar 2010 02:59:35 -0600 (CST)
Tejas N. Bhise te...@gluster.com wrote:
Out of curiosity, if you want to do stuff only on one machine,
why do you want to use a distributed, multi node, clustered,
file system ?
Because what he does is a very good way to show the overhead produced
On Tue, 16 Feb 2010 17:31:00 +0530
Vikas Gorur vi...@gluster.com wrote:
Olivier Le Cam wrote:
Thanks Vikas. BTW, might it be possible to have the same volume
exported both by regular-NFS and GlusterFS at the same time in order
to migrate my clients smoothly? Is there any risks to get
On Tue, 05 Jan 2010 21:39:58 +0530
Vikas Gorur vi...@gluster.com wrote:
Adrian Revill wrote:
That sounds OK
So if I have a client on server A and I write a file on server A,
would the file be copied to server B, C and D all at the same time, or
will the file first be copied to server
to wait 120 sec. for timeout and release fs.
Locked client IO for 120 sec. is not acceptable.
regards,
Stephan von Krawczynski wrote:
Try setting your ping-timeout way higher, since we use 120 we have almost no
issues in regular use. Nevertheless we do believe every problem will come
On Tue, 28 Jul 2009 16:31:44 -0500
Brian Koloszyc br...@creativemerch.com wrote:
Hi,
I am in the process of building out a sandbox glusterFS environment in
Amazon's EC2 cloud. I have successfully configured the NFS clone, but I'm
looking to transition over to gluster in order to get away
-cut here--
understand what you're saying, should I have locally copied all
the files over not using gluster before attempting an rsync?
-Original Message-
From: Stephan von Krawczynski [mailto:sk...@ithnet.com]
Sent: 05 October 2009 14:13
To: Hiren Joshi
Cc: Pavan Vilas Sondur; gluster-users
It would be nice to remember my thread about _not_ copying data initially to
gluster via the mountpoint. And one major reason for _local_ feed was: speed.
Obviously a lot of cases are merely impossible because of the pure waiting
time. If you had a live setup people would have already shot you...
On Fri, 18 Sep 2009 10:35:22 +0200
Peter Gervai grin...@gmail.com wrote:
Funny thread we have.
Just a sidenote on the last week part about userspace cannot lock up
the system: blocking resource waits / I/O waits can stall _all_ disk
access, and try to imagine what you can do with a system
On Mon, 14 Sep 2009 15:03:05 +0530
Shehjar Tikoo shehj...@gluster.com wrote:
Stephan von Krawczynski wrote:
On Mon, 14 Sep 2009 11:40:03 +0530 Shehjar Tikoo
shehj...@gluster.com wrote:
We only tried to run some bash scripts with preloaded
booster...
Do you mean the scripts
2. Running bash wasn't a very useful scenario when the LD_PRELOAD
variable can be added for the bash environment as a whole. For eg.
if you just do export LD_PRELOAD=blah on the command line, you can
actually have every program started from that shell use booster.
-Shehjar
There is a
On Wed, 09 Sep 2009 19:43:15 -0400
Mark Mielke m...@mark.mielke.cc wrote:
On Wed, 9 Sep 2009 23:17:07 +0530
Anand Avatiav...@gluster.com wrote:
Please reply back to this thread only after you have a response from
the appropriate kernel developer indicating that the cause of
Only if backed up. Has the trace been shown to the linux developers?
What do they think?
Maybe we should just ask questions about the source before bothering others...
From 2.0.6 /transport/socket/src/socket.c line 867 ff:
new_trans = CALLOC (1, sizeof
On Thu, 10 Sep 2009 21:20:04 +0530
Krishna Srinivas kris...@gluster.com wrote:
Now, failing to check for a NULL pointer here is a bug which we will fix
in future releases (blame it on our laziness for not doing the check
already!) Thanks for pointing it out.
Really, this was only _one_ quick
On Tue, 8 Sep 2009 10:13:17 +1000 (EST)
Jeff Evans je...@tricab.com wrote:
- server was ping'able
- glusterfsd was disconnected by the client because of missing
ping-pong - no login possible
- no fs action (no lights on the hd-stack)
- no screen (was blank, stayed blank)
This is very
On Tue, 8 Sep 2009 03:23:37 -0700
Anand Avati anand.av...@gmail.com wrote:
I doubt that this can be a real solution. My guess is that glusterfsd runs
into some race condition where it locks itself up completely.
It is not funny to debug something the like on a production setup. Best
On Tue, 8 Sep 2009 05:37:09 -0700
Anand Avati av...@gluster.com wrote:
I doubt that this can be a real solution. My guess is that glusterfsd
runs
into some race condition where it locks itself up completely.
It is not funny to debug something the like on a production setup. Best
Hello all,
last week we saw our first try to enable something like a real-world
environment on glusterfs fail.
Nevertheless we managed to get a working combination of _one_ server and _one_
client (using a replicate setup with a missing second server).
This setup worked for about 4 days, so
On Tue, 01 Sep 2009 11:33:38 +0530
Shehjar Tikoo shehj...@gluster.com wrote:
Stephan von Krawczynski wrote:
On Mon, 31 Aug 2009 19:48:46 +0530 Shehjar Tikoo
shehj...@gluster.com wrote:
Stephan von Krawczynski wrote:
Hello all,
after playing around for some weeks we decided
Hello all,
after playing around for some weeks we decided to make some real world tests
with glusterfs. Therefore we took a nfs-client and mounted the very same data
with glusterfs. The client does some logfile processing every 5 minutes and
needs around 3,5 mins runtime in a nfs setup.
We found
On Mon, 31 Aug 2009 19:48:46 +0530
Shehjar Tikoo shehj...@gluster.com wrote:
Stephan von Krawczynski wrote:
Hello all,
after playing around for some weeks we decided to make some real world tests
with glusterfs. Therefore we took a nfs-client and mounted the very same
data
Hello all,
as told earlier we tried to replace a nfs-server/client combination in
semi-production environment with a trivial one-server gluster setup. We
thought at first that this pretty simple setup would allow some more testing.
Unfortunately we have to stop those tests because it turns out
On Sat, 29 Aug 2009 03:46:04 +0200
supp...@citytoo.com supp...@citytoo.com wrote:
Hello,
Known Issues: Replicate will only self-heal if the files exist on the first
subvolume. Server A->B works, Server B->A does not work.
When will this problem be fixed? It's very important.
[...]
Glusterfs log only shows lines like this ones:
[2009-08-28 09:19:28] E [client-protocol.c:292:call_bail] data2: bailing
out frame LOOKUP(32) frame sent = 2009-08-28 08:49:18. frame-timeout = 1800
[2009-08-28 09:23:38] E [client-protocol.c:292:call_bail] data2: bailing
out frame
Hello Avati,
back to our original problem of all-hanging glusterfs servers and clients.
Today we got another hang with same look and feel, but this time we got
something in the logs, please read and tell us how to further proceed.
Configuration is as before. I send the whole log since boot, crash