Hah, sorry about that. The link in the header suggests that should have
worked.
On 2/3/23 3:38 PM, Joe Julian wrote:
On 2/3/23 3:08 PM, Gluster-jenkins wrote:
Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20221209.34ce46c-0.0
Intermediate Gluster version: No intermediate baseline
Test type:
>It's all _there_ _now_. If you don't use it you simply _decided_
>against it.
>Make that transparent to yourself.
>--
>Regards,
>Stephan
>
>
>
>
>On Mon, 7 Jun 2021 14:34:51 -0700
>Joe Julian wrote:
>
Most people already use slack for work and most of the projects with
high engagement use it, too. While I agree that I wish the open source
community would adopt open source tools, the reality is we still use
GitHub, Slack, and Zoom for the majority of them. Using the tools that
your community
3.13.0, the last 3.x.0 release, was released on 2017-12-01.
Ubuntu supports LTS releases for 7 years; Red Hat, 10 years.
I would assume that storage is one of those things that should be
expected to work over the life of an enterprise software release cycle
which would imply 7-10 years or
You can also see diffs between force pushes now.
On August 26, 2019 8:06:30 AM PDT, Aravinda Vishwanathapura Krishna Murthy
wrote:
>On Mon, Aug 26, 2019 at 7:49 PM Joe Julian
>wrote:
>
> Comparing the changes between revisions is something
that GitHub does not support...
It does support that, actually.
Community Meeting Calendar:
APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017
On 8/24/18 8:24 AM, Michael Adam wrote:
On 2018-08-23 at 13:54 -0700, Joe Julian wrote:
Personally, I'd like to see the glusterd service replaced by a k8s native controller
(named "kluster").
If you are exclusively interested in gluster for kubernetes
storage, this might seem
Personally, I'd like to see the glusterd service replaced by a k8s native
controller (named "kluster").
I'm hoping to use this vacation I'm currently on to write up a design doc.
On August 23, 2018 12:58:03 PM PDT, Michael Adam wrote:
>On 2018-07-25 at 06:38 -0700, Vijay Bellur wrote:
>> Hi
On 03/14/2018 02:25 PM, Vijay Bellur wrote:
On Tue, Mar 13, 2018 at 4:25 AM, Kaleb S. KEITHLEY
> wrote:
On 03/12/2018 02:32 PM, Shyam Ranganathan wrote:
> On 03/12/2018 10:34 AM, Atin Mukherjee wrote:
The point is, I believe, that one shouldn't have to go digging through
external resources to find out why a commit exists. Please ensure the
commit message has adequate, accurate information.
On 01/07/2018 07:11 PM, Atin Mukherjee wrote:
Also please refer
Nothing should ever be auto-started. Ubuntu has it wrong. If you're
going to enable any access to a machine, it should be by design, not by
default.
On 10/05/17 07:43, Niels de Vos wrote:
Following the Fedora Packaging guidelines, services should not be
started by default, or require an
On 05/30/2017 03:52 PM, Ric Wheeler wrote:
On 05/30/2017 06:37 PM, Joe Julian wrote:
On 05/30/2017 03:24 PM, Ric Wheeler wrote:
On 05/27/2017 03:02 AM, Joe Julian wrote:
On 05/26/2017 11:38 PM, Pranith Kumar Karampuri wrote:
On Wed, May 24, 2017 at 9:10 PM, Joe Julian <j...@julianfamily.org
<mailto:j...@julianfamily.org>> wrote:
Forwarded for posterity and follow-up.
Forwarded Message
Subject: Re: GlusterFS removal from Openstack Cinder
Date: Fri, 05 May 2017 21:07:27 +
From: Amye Scavarda <a...@redhat.com>
To: Eric Harney <ehar...@redhat.com>, Joe Julian <m...@joejulia
On the other hand, tracking that stat between versions with a known test
sequence may be valuable for watching for performance issues or improvements.
On May 17, 2017 10:03:28 PM PDT, Ravishankar N wrote:
>On 05/17/2017 11:07 PM, Pranith Kumar Karampuri wrote:
>> +
On 04/30/2017 01:13 AM, lemonni...@ulrar.net wrote:
So I was a little bit lucky. If I had all the hardware parts, probably I
would be fired after causing data loss by using software marked as stable
Yes, we lost our data last year to this bug, and it wasn't a test cluster.
We still hear from
To get a list of all the build requirements:
grep BuildRequires glusterfs.spec
or just rpmbuild -bb
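A runnable sketch of that pipeline (the spec content below is a stand-in so the commands work anywhere; on a real source checkout you would grep the actual glusterfs.spec):

```shell
# Extract the declared build dependencies from an RPM spec file.
# Minimal stand-in spec; substitute glusterfs.spec from a source checkout.
cat > example.spec <<'EOF'
Name: glusterfs
BuildRequires: bison
BuildRequires: flex
Requires: rsyslog
EOF
# Only BuildRequires lines, package names only:
grep '^BuildRequires' example.spec | awk '{print $2}'
```

Alternatively, running rpmbuild -bb against the spec will refuse to build and report any build dependencies missing from the system.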
On 03/27/2017 06:54 AM, Manohar Mikkili wrote:
Hi,
I am trying to build glusterfs v3.10.0 on a RHEL 6.6 (x86_64 GNU/Linux)
$./autoConfig # works fine
$ ./configure --enable-fusermount
On March 16, 2017 4:17:04 AM PDT, Ashish Pandey wrote:
>
>
>- Original Message -
>
>From: "Atin Mukherjee"
>To: "Raghavendra Talur" , gluster-devel@gluster.org,
>gluster-us...@gluster.org
>Sent: Thursday, March 16,
The only concern I have is that according to the README syntax (I
haven't looked any deeper but I suspect it's a non issue) there's a lack
of rdma support.
Otherwise I'm all for anything that simplifies bug fixing and improves
stability.
On 03/03/2017 12:50 PM, Niels de Vos wrote:
At the
On 02/17/17 09:33, Shyam wrote:
On 02/17/2017 12:01 PM, Niels de Vos wrote:
I've been preparing the GlusterFS 3.10 release for the CentOS Storage
SIG, and that means I need to create a new centos-release-gluster310
package. This is the package that users install when they want to enable
the
Yes, the earlier a fault is detected the better.
On January 24, 2017 9:21:27 PM PST, Jeff Darcy wrote:
>> If there are no responses to be received and no requests being
>> sent to a brick, why would be a client be interested in the health of
>> server/brick?
>
>The client
On 12/12/2016 10:44 AM, Shyam wrote:
On 12/12/2016 01:29 PM, Joe Julian wrote:
On 12/08/2016 09:22 AM, Samikshan Bairagya wrote:
Hi,
Currently there is no way to know the maximum op-version that is
supported in a heterogeneous cluster. If this is made possible, it
would prove helpful to users wrt knowing the maximum op-version to
which the cluster could be bumped up
IMHO, if a command will result in data loss, fail it. Period.
It should never be ok for a filesystem to lose data. If someone wanted to do
that with ext or xfs they would have to format.
On November 14, 2016 8:15:16 AM PST, Ravishankar N
wrote:
>On 11/14/2016 05:57
Does this mean race conditions are in master and are just being retried until
they're not hit?
On November 13, 2016 9:33:51 PM PST, Nithya Balachandran
wrote:
>Hi,
>
>Our smoke tests have been failing quite frequently of late. While
>re-triggering smoke several times in
Feature requests go in Bugzilla anyway.
Create your volume with the populated brick as brick one. Start it and "heal
full".
On November 11, 2016 7:12:03 AM PST, Sander Eikelenboom
wrote:
>
>Friday, November 11, 2016, 3:47:26 PM, you wrote:
>
>> Reposting to
Reposting to gluster-users as this is not development related.
On November 11, 2016 6:32:49 AM PST, Pranith Kumar Karampuri
wrote:
>On Fri, Nov 11, 2016 at 8:01 PM, Pranith Kumar Karampuri <
>pkara...@redhat.com> wrote:
>
>>
>>
>> On Fri, Nov 11, 2016 at 6:24 PM,
On 10/18/2016 09:52 AM, Shane StClair wrote:
Yes, I'm well aware that packages are built by volunteers. Right now
Gluster's Debian repos are breaking apt updates for anyone using these
repos, all of which previously worked:
https://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/
You forgot to count yourself: 6.
But still ... 6 when there was 60(?) in Berlin seems light. I know the 4am time
doesn't work for the west coast of America, but that only eliminates a very
small percentage of those.
What's up, everyone else? Can something change to make participation possible
On October 4, 2016 3:40:16 PM GMT+02:00, ABHISHEK PALIWAL
wrote:
>Hi,
>
>
>Again I am getting the duplicate peer entries in the peer status
>command
>while restarting system which causing sync failure in the gluster
>volume
>between both boards.
This is where you show
If you get credit for +1, shouldn't you also get credit for -1? It seems to me
that catching a fault is at least as valuable if not more so.
On October 3, 2016 3:58:32 AM GMT+02:00, Pranith Kumar Karampuri
wrote:
>On Mon, Oct 3, 2016 at 7:23 AM, Ravishankar N
Does this compare to ViPR?
On September 20, 2016 9:52:54 AM PDT, Ric Wheeler wrote:
>On 09/20/2016 10:23 AM, Gerard Braad wrote:
>> Hi Mrugesh,
>>
>> On Tue, Sep 20, 2016 at 3:10 PM, Mrugesh Karnik
>wrote:
>>> I'd like to introduce the Tendrl project.
On 08/27/2016 12:15 PM, Niels de Vos wrote:
On Sat, Aug 27, 2016 at 08:52:11PM +0530, Aravinda wrote:
Hi,
As part of Client events support, glustereventsd needs to be configured to
use a port. Following ports are already used by Gluster components.
24007 For glusterd
24008
On 08/23/2016 12:27 PM, Justin Clift wrote:
On 11 Aug 2016, at 21:23, Amye Scavarda wrote:
The Red Hat Gluster Storage documentation team and I had a conversation
about how we can make our upstream documentation more consistent and improved
for our users, and they're willing to work with us to
That log message shows, "port=0", instead of 24007. Not sure if that's *the*
problem but it's certainly worth looking into.
On August 20, 2016 4:54:48 AM PDT, Stephen Howell
wrote:
>I would like to follow up on a previous thread. I have here 3 machines
>running
I'd like to plead with the community to continue to support 3.6 as a
"lts" release. It's the last release version that can be used on Ubuntu
14.04 (Trusty Tahr) LTS which many users may be stuck using for quite
some time (eol of April 2019).
On 07/07/2016 08:58 PM, Pranith Kumar Karampuri wrote:
On Fri, Jul 8, 2016 at 8:40 AM, Jeff Darcy > wrote:
> What gets measured gets managed.
Exactly. Reviewing is part of everyone's job, but reviews aren't
tracked
in any way
cron isn't installed by default on Arch; rather, scheduling is done by
systemd timers. We might want to consider using systemd.timer for
systemd distros and crontab for legacy distros.
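A minimal sketch of what the systemd side could look like (the unit name and schedule are invented for illustration; systemd activates a service of the same name when the timer fires):

```ini
# gluster-snap.timer -- hypothetical timer unit
[Unit]
Description=Periodic Gluster snapshot trigger

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Paired with a matching gluster-snap.service that runs the actual snapshot command, and enabled with "systemctl enable --now gluster-snap.timer".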
On 07/08/2016 03:01 AM, Avra Sengupta wrote:
Hi,
Snapshots in gluster have a scheduler, which relies heavily
On 05/03/2016 05:43 AM, Nigel Babu wrote:
Hello folks,
I've just started this week at Red Hat. Over the next year or so, I'll be
helping with cleaning up the existing CI pipeline and improving it so that we
have much better confidence with releases.
Amy has been helping me get an overview
On 04/20/2016 07:23 PM, Jiffin Tony Thottan wrote:
On 21/04/16 04:43, Rick Macklem wrote:
Hi,
Just to let you know, I did find the email responses to my
queries some months ago helpful and I now have a pNFS server
for FreeBSD using the GlusterFS port at the alpha test stage.
So far I have
- Original Message -
From: "Joe Julian" <j...@julianfamily.org>
To: gluster-devel@gluster.org
Sent: Thursday, February 25, 2016 11:01:10 AM
Subject: Re: [Gluster-devel] Documentation @ readthedocs.org - search broken
Perhaps we should just set up the proper redirects when we remove a page?
http://docs.readthedocs.org/en/latest/user-defined-redirects.html
On 02/24/2016 08:53 PM, Prashanth Pai wrote:
@Humble
I know this came up recently at the developer gathering in Brno. Any update
from Shaun on this ?
I know of at least one user that created a volume then added bricks afterward
before starting it as part of his scripted deployment method. Not sure if he
was changing replica count. They did that because of command line length
limitations. I'm not sure why they couldn't use stdin.
On February
I have multiple bricks crashing in production. Any help would be greatly
appreciated.
The crash log is in this bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1307146
Looks like it's crashing in pl_inodelk_client_cleanup
Could this be a regression from http://review.gluster.org/7981 ?
Forwarded Message
Subject:[Gluster-devel] 3.6.8 crashing a lot in production
Date: Fri, 12 Feb 2016 16:20:59 -0800
From: Joe Julian <j...@julianfamily.org>
To: gluster-us...@gluster.org, g
I've also got several glusterfsd processes that have stopped responding.
A backtrace from a live core, strace, and state dump follow:
Thread 10 (LWP 31587):
#0 0x7f81d384289c in __lll_lock_wait () from
/lib/x86_64-linux-gnu/libpthread.so.0
#1 0x7f81d383e065 in _L_lock_858 () from
https://github.com/blog/1986-announcing-git-large-file-storage-lfs
On 02/10/2016 03:10 AM, Michael Scherer wrote:
Le mercredi 10 février 2016 à 12:11 +0530, Atin Mukherjee a écrit :
It'd be better if you can send a PR to glusterdocs with the odp.
*grmbl* top post *grmlb*
I am not sure if
btw... he was also having another crash in changelog_rollover:
https://gist.githubusercontent.com/CyrilPeponnet/11954cbca725d4b8da7a/raw/2168169f7b208d8ee6193c4a444639505efb634b/gistfile1.txt
It would be a pretty huge coincidence if these were all unique causes,
wouldn't it?
On 02/09/2016
On 02/08/2016 12:18 AM, Raghavendra Gowdappa wrote:
- Original Message -
From: "Joe Julian" <j...@julianfamily.org>
To: gluster-devel@gluster.org
Sent: Monday, February 8, 2016 12:20:27 PM
Subject: Re: [Gluster-devel] Rebalance data migration and corruption
Is this in current release versions?
On 02/07/2016 07:43 PM, Shyam wrote:
On 02/06/2016 06:36 PM, Raghavendra Gowdappa wrote:
- Original Message -
From: "Raghavendra Gowdappa"
To: "Sakshi Bansal" , "Susant Palai"
Cc:
Interesting. I just encountered a hanging flush problem, too. Probably
unrelated, but if you want to give this a try, a temporary workaround I found
was to drop caches, "echo 3 > /proc/sys/vm/drop_caches", on all the servers
prior to the flush operation.
On February 4, 2016 10:06:45 PM PST,
WTF?
if (!xattrs_list) {
        ret = -EINVAL;
        gf_msg (this->name, GF_LOG_ERROR, -ret, AFR_MSG_NO_CHANGELOG,
                "Unable to fetch afr pending changelogs. Is op-version"
                " >= 30707?");
        goto out;
}
If the time is set on a file by the client, this increases the critical
complexity to include the clients whereas before it was only critical to
have the servers time synced, now the clients should be as well.
Just spitballing here, but what if the time was converted at the posix
layer as a
On 01/25/16 18:24, Ravishankar N wrote:
On 01/26/2016 01:22 AM, Shreyas Siravara wrote:
Just out of curiosity, what benefits do we think this throttling
xlator would provide over the "enable-least-priority" option (where
we put all the fops from SHD, etc into a least pri queue)?
For
On 01/25/16 20:36, Pranith Kumar Karampuri wrote:
On 01/26/2016 08:41 AM, Richard Wareing wrote:
If there is one bucket per client and one thread per bucket, it
would be
difficult to scale as the number of clients increase. How can we do
this
better?
On this note... consider that 10's of
The two favorite current marketing buzzwords seem to be "Hyperconverged"
and "Technology", so if we could work those in somewhere it might make
it seem more hip. Maybe "Hyperconverged Replication with Leader Technology".
On 01/20/16 20:38, Pranith Kumar Karampuri wrote:
On 01/19/2016 08:00
Does the code take advantage of multiple cpu cores?
If I assigned a single core to gluster, would it have an effect on
performance?
If yes, explain so I can determine a sane number of cores to allocate
per server.
On 12/14/2015 03:27 AM, Raghavendra Gowdappa wrote:
- Original Message -
From: "Joe Julian" <j...@julianfamily.org>
To: "Gluster Devel" <gluster-devel@gluster.org>
Sent: Monday, December 14, 2015 2:40:14 PM
Subject: [Gluster-devel] Is there any advan
On 09/07/2015 09:25 AM, Anand Nekkunti wrote:
On 09/07/2015 05:08 PM, Joe Julian wrote:
On 09/06/2015 09:01 PM, Kaushal M wrote:
After more investigation and further discussions (sorry these happened
internally), what I've found is that there is no way currently to
dynamically change
As an upstream admin, one of the things I abhor about debian/ubuntu is how
services are enabled upon installation. I sure hope Fedora/EL doesn't follow
their broken example.
Can we enable the static firewall rule in glusterd.service?
On September 4, 2015 6:37:15 AM PDT, Christopher Blum
On 08/15/2015 09:37 AM, Emmanuel Dreyfus wrote:
Niels de Vos nde...@redhat.com wrote:
I think the Jenkins client exports a GERRIT_BRANCH environment variable.
If that is not the case, you can probably use git describe to find the
last tag in the branch and compare that to v3.6* or similar.
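A sketch of that fallback (the repository and tag below are invented for the demo; in Jenkins you would run the describe against the checked-out source):

```shell
# Fallback when GERRIT_BRANCH is not exported: ask git for the most
# recent tag reachable from HEAD and match it against a release prefix.
# Demonstrated in a throwaway repository with a made-up tag.
git init -q describe-demo
git -C describe-demo -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
git -C describe-demo tag v3.6.1
last_tag=$(git -C describe-demo describe --tags --abbrev=0)
case "$last_tag" in
    v3.6*) echo "release-3.6 branch: $last_tag" ;;
    *)     echo "not a 3.6 branch: $last_tag" ;;
esac
```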
3.2.7 is far beyond its supported lifetime. Please upgrade. The official
PPAs can be found at https://launchpad.net/~gluster
On 08/04/2015 09:55 AM, Hafeez Bana wrote:
All,
We've been evaluating glusterfs 3.2.7 on ubuntu 14.04 LTS. All tests
were run with event-thread matching cpu-cores and
On 08/04/2015 05:53 PM, Shyam wrote:
On 08/04/2015 12:55 PM, Hafeez Bana wrote:
All,
We've been evaluating glusterfs 3.2.7 on ubuntu 14.04 LTS. All tests
were run with event-thread matching cpu-cores and lookup-unhashed
turned off
I think you are referring to lookup-optimize rather than
I would prefer python.
On 06/14/2015 11:18 AM, Niels de Vos wrote:
On Sat, Jun 13, 2015 at 06:45:45PM +0530, M S Vishwanath Bhat wrote:
On 12 June 2015 at 23:59, chris holcombe
chris.holco...@canonical.com
wrote:
Yeah I have this repo but it's basically empty:
https
On 06/14/2015 11:43 AM, Raghavendra Talur wrote:
On Sun, Jun 14, 2015 at 11:02 PM, chris holcombe
chris.holco...@canonical.com mailto:chris.holco...@canonical.com
wrote:
Welcome to the party Matthew! Nice to see you're still keeping an
eye on the list. I'm excited to see
On 05/07/2015 11:15 AM, Jeff Darcy wrote:
Last week, those of us who were together in Bangalore had a meeting to
discuss the GlusterFS 4.0 plan. Once we'd covered what's already in
the plan[1] we had a very productive brainstorming session on what else
we might want to consider adding. Here
I agree completely. This is the one that speaks volumes, all in three words.
On 05/04/2015 09:08 AM, Josh Boon wrote:
Gluster: Software {re}defined storage
is one I really like. I wouldn't want to eliminate Gluster completely
as newcomers would then wonder about the binaries, package names
No Raspberry Pi servers any more?
On April 28, 2015 5:07:06 AM PDT, Justin Clift jus...@gluster.org wrote:
Does this mean we're officially no longer supporting 32 bit
architectures?
(or is that just on x86?)
+ Justin
On 28 Apr 2015, at 12:45, Kaushal M kshlms...@gmail.com wrote:
Found the
I suggested it. Some other people in North America besides just myself
expressed an interest in being involved, but could not make early (or
very early) morning meetings. Since the globe has this cool spherical
feature I thought it might be a good idea to try to get involvement from
the dark
I've used clear-locks but to be fair, it's been a while and IIRC it was to
recover from some other bug.
On April 8, 2015 6:13:44 AM PDT, Pranith Kumar Karampuri pkara...@redhat.com
wrote:
On 04/08/2015 06:20 PM, Justin Clift wrote:
Hi Pranith,
Hagarth mentioned in the weekly IRC meeting
On 02/17/2015 01:40 PM, Justin Clift wrote:
On 17 Feb 2015, at 19:28, Tom Callaway tcall...@redhat.com wrote:
snip
Where: I know we have a lot of international Gluster contributors who
are not in the United States, so I'm open to suggestion on this point. A
quick internet search seems to imply
3.4, not 2.4... Need more coffee!!!
On 02/03/2015 11:12 AM, Joe Julian wrote:
Odd, I was using sssd with home directories on gluster from 2.0
through 2.4 and never had a problem (I'm no longer at that company,
but they still have home directories on Gluster). Might be worth
another look
Odd, I was using sssd with home directories on gluster from 2.0 through
2.4 and never had a problem (I'm no longer at that company, but they
still have home directories on Gluster). Might be worth another look.
On 02/03/2015 10:45 AM, David F. Robinson wrote:
Cancel this issue. I found the
Seems logical to me.
On January 31, 2015 4:43:39 AM PST, Justin Clift jus...@gluster.org wrote:
Hi all,
One of the things which has been a fair drag for the developer part of
the Gluster Community, is maintaining our own Jenkins infrastructure.
When chatting with the CentOS guys (pre-FOSDEM)
Is rpcbind running?
On January 26, 2015 6:57:44 AM PST, David F. Robinson
david.robin...@corvidtec.com wrote:
Tried shutting down glusterd and glusterfsd and restarting.
[2015-01-26 14:52:53.548330] I
[rpc-clnt.c:969:rpc_clnt_connection_init]
0-management: setting frame-timeout to 600
Paschalis (PeterA in #gluster) has reported these bugs and we've tried
to find the source of the problem to no avail. Worse yet, there's no way
to just reset the quotas to match what's actually there, as far as I can
tell.
What should we look for to isolate the source of this problem since
On 01/21/2015 09:32 PM, Raghavendra Gowdappa wrote:
- Original Message -
From: Joe Julian j...@julianfamily.org
To: Gluster Devel gluster-devel@gluster.org
Cc: Paschalis Korosoglou pk...@grid.auth.gr
Sent: Thursday, January 22, 2015 12:54:44 AM
Subject: [Gluster-devel] Quota problems
On 01/12/2015 12:19 PM, Jan Holtzhausen wrote:
Can we get a filename only hash?
That's what we've always had. Shyam's proposal is to change that for the
reasons stated in his document.
On 2015/01/12, 6:45 PM, Shyam srang...@redhat.com wrote:
Hi,
There have been some discussions about
James, why not just compute the operating version? After 3.5.0 it's
always XYYZZ based on the version.
Something along the lines of
$version_array = split("${gluster_version}", '[.]')
if $version_array[0] < 3 {
  fail('Unsupported GlusterFS Version')
}
$operating_version = $version_array[2] ? {
On November 27, 2014 8:13:01 AM PST, Jeff Darcy jda...@redhat.com wrote:
To be sure, maintaining external daemons such as etcd or consul
creates its own problems. I think the ideal might be to embed a
consensus protocol implementation (Paxos, Raft, or Viewstamped
Replication) directly
Heal-failed can be for any reason that's not defined as split brain. The only
place I've been able to find clues is in the log files. Look at the timestamp
on the heal-failed output and match it to log entries in glustershd logs.
On November 7, 2014 6:49:36 PM PST, Peter Auyeung
Which means, of course, no redundancy until that self heal is completed.
Furthermore, replace-brick start stopped working entirely some versions ago, so
the removal of start and stop may as well just happen.
On October 30, 2014 12:41:17 AM PDT, Kaushal M kshlms...@gmail.com wrote:
'replace-brick
Mine is caused with qcow2 images used by kvm on a fuse mount. About a
half dozen very busy images causes the leak pretty consistently.
Apparently, though I never got a chance to check it myself or collect
any details, we had jira building and tearing down VM images, also on a
fuse mount,
On 10/14/2014 08:23 AM, Jeff Darcy wrote:
We should try comparing performance of multi-thread-epoll to
own-thread, shouldn't be hard to hack own-thread into non-SSL-socket
case.
Own-thread has always been available on non-SSL sockets, from the day it
was first implemented as part of HekaFS.
Not taking sides, though if I were I would support the kernel style
because I, personally, find it easier to read. Just to clarify the point:
$ find -name '*.c' | xargs grep '} else {' | wc -l
1284
$ find -name '*.c' | xargs grep else | grep -v '}' | wc -l
1646
On 10/13/2014 01:46 PM, Shyam
To the author: You're cross-posting user questions to the devel mailing
list. You're not asking development questions. Please don't do that.
To Pranith et al:
On 10/8/2014 1:45 AM, justgluste...@gmail.com wrote:
then I config:
cluster.self-heal-window-size is 1024 (max value)
Personally, I like the third option provided that doesn't cause memory issues.
In fact, read the whole thing, transfer it to the client and let the client
handle the posix syntax.
Optionally add a path cache timeout client side that stores the directory
listing for a period of time to mitigate
I'm going to reiterate to make sure I understand correctly.
You created a replica 2 volume. Mounted the new volume on a client.
Copied a directory to the client mountpoint using cp -a (I assume).
Then, on the two bricks, you checked a du -sh for that directory.
If all that is correct, then
I think we're basically talking about ODX.
http://www.google.com/patents/US20120102561
On 8/22/2014 7:29 AM, Giacomo Fazio wrote:
Hello Niels,
Thanks for your explanation. I'm happy you consider my proposal doable
and that many ideas came to your mind. I would like to contribute to
it, but I
Some people. Depends on use case. Dan's is pretty specific.
On August 14, 2014 10:58:33 AM PDT, Harshavardhana har...@harshavardhana.net
wrote:
Not sure. We can figure this out by traversing up the softlinks for
directories. But for files there is no way to find the parent at the
moment.
, 2014 at 11:12 AM, Joe Julian j...@julianfamily.org wrote:
Some people. Depends on use case. Dan's is pretty specific.
Those are the majority of the users/customers.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org
On 08/10/2014 11:42 PM, Ravishankar N wrote:
On 08/09/2014 01:23 AM, Joe Julian wrote:
Thinking about it more, I'd still rather have this functionality
exposed at the client through xattrs. For 5 years I've thought about
this, and the more I encounter split-brain, the more I think
Isn't that what the discussion tab is for?
On 08/08/2014 10:20 AM, Krishnan Parthasarathi wrote:
Venky,
Could you share this document via Google docs? It would make it
convenient to provide feedback via comments.
~KP
- Original Message -
Hi folks,
Continuing the discussion on
On 08/07/2014 03:08 AM, Niels de Vos wrote:
On Thu, Aug 07, 2014 at 03:17:11PM +0530, Ravishankar N wrote:
On 08/07/2014 03:06 PM, Niels de Vos wrote:
On Thu, Aug 07, 2014 at 02:05:34PM +0530, Ravishankar N wrote:
Manual resolution of split-brains [1] has been a tedious task
involving
Thinking about it more, I'd still rather have this functionality exposed
at the client through xattrs. For 5 years I've thought about this, and
the more I encounter split-brain, the more I think this is the needed
approach.
getfattr -n trusted.glusterfs.stat returns
I would want tests of all the posix operations. Need a difference not
just in throughput, but in max iops for the various ops.
On 07/27/2014 08:27 AM, Vipul Nayyar wrote:
Hi
As guided by you, I performed the experiment regarding measurement of
the effect of always enabled profiling. I
On 07/26/2014 12:02 AM, Pranith Kumar Karampuri wrote:
On 07/26/2014 11:06 AM, Pranith Kumar Karampuri wrote:
On 07/26/2014 03:06 AM, Joe Julian wrote:
How can it come about? Is this from replacing a brick days ago? Can
I prevent it from happening?
[2014-07-25 07:00:29.287680] W [fuse