Any updates on this feature?
It was planned for v4 but seems to have been postponed...
I think it would be useful to add a cumulative cluster health output, like
mdstat for mdadm, so that with a single command it would be possible to see
(a rough sketch follows the list):
1) how many nodes are UP and DOWN (and which nodes are DOWN)
2) any background operation running (like healing or scrubbing) with its
progress
3) any sp
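
Something like this could even be prototyped today as a wrapper around
existing commands; a minimal sketch (the script itself is hypothetical,
only the gluster subcommands are real):

    #!/bin/sh
    # Hypothetical mdstat-like summary built from existing gluster commands.
    echo "== Peers (UP/DOWN) =="
    gluster pool list                  # each peer shown as Connected/Disconnected
    echo "== Background operations =="
    for vol in $(gluster volume list); do
        echo "--- $vol"
        gluster volume heal "$vol" statistics heal-count      # pending heal backlog
        gluster volume rebalance "$vol" status 2>/dev/null    # rebalance progress, if any
    done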
Could you please share the fio command line used for this test?
Additionally, can you tell me the time needed to extract the kernel source?
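
For anyone wanting to reproduce the small-file part: untarring the kernel
source on a mounted volume is the usual way to measure this, e.g. (paths
and version are just examples):

    # On a client with the volume FUSE-mounted:
    cd /mnt/gv0
    time tar xf /tmp/linux-4.13.tar.xz   # thousands of small files, metadata-heavy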
On 2 Nov 2017 11:24 PM, "Ramon Selga" wrote:
> Hi,
>
> Just for your reference we got some similar values in a customer setup
> with three nodes single Xeo
On 5 Jul 2017 11:31 AM, "Kaushal M" wrote:
- Preliminary support for volume expansion has been added. (Note that
rebalancing is not available yet)
What do you mean by this?
Are there any differences in volume expansion compared to the current
architecture?
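
In the current architecture, expansion means adding bricks and then
explicitly rebalancing, e.g. (volume and brick names are placeholders;
bricks must be added in multiples of the replica count):

    gluster volume add-brick myvol server5:/bricks/b1 server6:/bricks/b1
    gluster volume rebalance myvol start     # spread existing data onto new bricks
    gluster volume rebalance myvol status    # check progress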
2017-05-03 14:22 GMT+02:00 Atin Mukherjee :
> Fix is up @ https://review.gluster.org/#/c/17160/ . The only thing which
> we'd need to decide (and are debating on) is that should we bypass this
> validation with rebalance start force or not. What do others think?
This is a good way to manage bugs.
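
For readers following along, these are the two invocations being debated
(volume name is a placeholder):

    gluster volume rebalance myvol start          # would be subject to the new validation
    gluster volume rebalance myvol start force    # the debated bypass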
2017-03-27 18:59 GMT+02:00 Shyam :
> 1) Are there any pending *blocker* bugs that need to be tracked for 3.10.1?
> If so mark them against the provided tracker [2] as blockers for the
> release, or at the very least post them as a response to this mail
I think that file corruption when sharding is
Bump
On 11 Mar 2017 10:42 AM, "Gandalf Corvotempesta" <gandalf.corvotempe...@gmail.com> wrote:
Hi all,
let's assume this volume info output:
Volume Name: r2
Type: Distributed-Replicate
Volume ID: 24a0437a-daa0-4044-8acf-7aa82efd76fd
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: Server1:/home/gfs/r2_0
Brick2: Server2:/home/gfs/r2_1
Brick3: Server1:/home/
2017-03-08 13:09 GMT+01:00 Saravanakumar Arumugam :
> We are working on a custom solution which will avoid gluster-swift
> altogether.
> We will update here once it is ready. Stay tuned.
Any ETA?
I'm really interested in this.
Let me know if I understood properly: is it now possible to access a
Gluster volume as object storage via the S3 API?
Is gluster-swift (and, with that, the rings, auth, and so on coming from
OpenStack) still needed?
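
If plain S3 access works, I would expect any generic S3 client to be
usable, e.g. (endpoint and bucket are hypothetical):

    aws s3 ls s3://mybucket --endpoint-url http://gluster-host:8080
    aws s3 cp ./file.txt s3://mybucket/ --endpoint-url http://gluster-host:8080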
2017-03-08 9:53 GMT+01:00 Saravanakumar Arumugam :
> Hi,
>
by partially filled shards? Shards whose sizes are not
> equal to $SHARD_BLOCK_SIZE.
>
> In the above, $FILE_SIZE can be gotten from the
> 'trusted.glusterfs.shard.file-size' extended attribute on the base file
> (the 0th block).
>
> -Krutika
>
> On Mon, Fe
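
For reference, reading that xattr directly from a brick looks like this
(path is an example):

    # On a brick, against the base file (the 0th block):
    getfattr -n trusted.glusterfs.shard.file-size -e hex /bricks/b1/vm/disk.img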
of a single file together
> (although with a few caveats).
>
> -Krutika
>
> On Sun, Feb 26, 2017 at 8:52 PM, Gandalf Corvotempesta <
> gandalf.corvotempe...@gmail.com> wrote:
>
>> Would it be possible to add a command to use in case of disaster recovery
>> (wher
Would it be possible to add a command to use in case of disaster recovery
(where everything is broken) to recreate files from their shards?
For example, let's assume a totally down cluster: no trusted pools and
so on, but the sysadmin knows which HDD is part of each distributed replica:
hdd1 + hdd2 + hdd3 are
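
Even without an official command, a manual reassembly is conceivable given
direct brick access; a very rough, hypothetical sketch (assumes the base
file's GFID is known, all shards sit on one reachable brick, and ignores
sparse/partially filled shards; in a distributed volume each shard would
first have to be located on its brick):

    BASE=/bricks/b1/images/disk.img               # base file = block 0
    GFID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee     # placeholder GFID of the base file
    cp "$BASE" /restore/disk.img
    i=1
    while [ -f "/bricks/b1/.shard/$GFID.$i" ]; do  # shards are named <GFID>.<N>
        cat "/bricks/b1/.shard/$GFID.$i" >> /restore/disk.img
        i=$((i+1))
    done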
On 14 Nov 2016 7:28 PM, "Joe Julian" wrote:
>
> IMHO, if a command will result in data loss, fail it. Period.
>
> It should never be ok for a filesystem to lose data. If someone wanted to
do that with ext or xfs they would have to format.
>
Exactly. I've written something similar in some mail
2016-11-11 16:09 GMT+01:00 Sander Eikelenboom :
> I think that could also be useful
> when trying to recover from total disaster (where glusterfs bricks are
> broken down
> and you end up with loose bricks. At least you would be able to keep the
> filesystem data, remove the .glusterfs metadata dir.
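
For reference, the usual manual way to strip Gluster's metadata from a
brick so that its file tree can be reused as plain data (destructive to
the Gluster side; path is an example):

    setfattr -x trusted.glusterfs.volume-id /bricks/b1   # forget volume membership
    setfattr -x trusted.gfid /bricks/b1                  # drop the brick root's GFID
    rm -rf /bricks/b1/.glusterfs                         # remove the metadata dir
    # The file data under /bricks/b1 remains as an ordinary directory tree.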
On 10 Nov 2016 08:22, "Raghavendra" wrote:
>
> Kyle,
>
> Thanks for your response :). This really helps. From 13s to 0.23s
> seems like a huge improvement.
From 13 minutes to 23 seconds, not from 13 seconds :)
2016-10-31 12:40 GMT+01:00 Lindsay Mathieson :
> But you can broadcast with UDP - one packet of data through one nic to all
> nodes, so in theory you could broadcast 1GB *per nic* or 3GB via three nics.
> Minus overhead for acks, nacks and ordering :)
>
> But I'm not sure it would work at all in pr
On 28 Oct 2016 2:50 PM, "Lindsay Mathieson" wrote:
>
> I'd like to experiment with broadcast UDP to see if it's feasible in local
networks. It would be amazing if we could write at 1GB speeds
simultaneously to all nodes.
>
If you have replica 3 and set up a 3-NIC bonded interface with balance-alb
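
For reference, a balance-alb bond can be set up with iproute2 roughly like
this (interface names are assumptions; note that a single TCP connection
still maxes out one NIC, so the gain only shows up across multiple peers):

    modprobe bonding
    ip link add bond0 type bond mode balance-alb miimon 100
    for nic in eth0 eth1 eth2; do      # the three NICs, names assumed
        ip link set "$nic" down
        ip link set "$nic" master bond0
    done
    ip link set bond0 up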
Would it be possible to add support for ":" in brick paths?
I'm trying to use the following command:
# gluster volume replace-brick gv0 1.2.3.4:/export/brick1/brick
1.2.3.4:/export/pci-:01:00.0-scsi-0:0:2:0/brick commit force
wrong brick type: 1.2.3.4:/export/pci-:01:00.0-scsi-0:0:2:0/brick,
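
A possible workaround until then: mount the disk at a colon-free
mountpoint and use that as the brick path; a sketch (device path
abbreviated as in the error above):

    mkdir -p /export/disk2
    mount /dev/disk/by-path/pci-...-scsi-0:0:2:0 /export/disk2
    gluster volume replace-brick gv0 1.2.3.4:/export/brick1/brick \
        1.2.3.4:/export/disk2/brick commit force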
2016-10-28 12:32 GMT+02:00 Pranith Kumar Karampuri :
> No it is not completely valid. We will update it and announce the release
> sometime soon.
Thank you.
Could you also update the other roadmaps with the confirmed features and
what is being worked on?
There is a bit of confusion in this area of Gluster.
On 25 Oct 2016 12:42, "Aravinda" wrote:
>
> Hi,
>
> Since the automated test framework for Gluster is in progress, we need help
from maintainers and developers to test the features and bug fixes in order
to release Gluster 3.9.
>
Is the following roadmap still valid, or have any changes been made for
this release?
Any progress on the major issue with Gluster: small-file performance?
Is anyone working on this?
I would really like to use Gluster as storage for maildirs or web hosting,
but with the current performance this wouldn't be possible without adding
additional layers (like exporting huge files w
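
In the meantime, the usual partial mitigation is tuning md-cache; a sketch
assuming a volume named myvol (option availability depends on the Gluster
version, and the values are illustrative):

    gluster volume set myvol features.cache-invalidation on
    gluster volume set myvol features.cache-invalidation-timeout 600
    gluster volume set myvol performance.stat-prefetch on
    gluster volume set myvol performance.md-cache-timeout 600
    gluster volume set myvol network.inode-lru-limit 50000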
On 20 Jun 2016 8:08 AM, "B.K.Raghuram" wrote:
>
> We had hosted some changes to an old version of glusterfs (3.6.1) in
order to incorporate ZFS snapshot support for gluster snapshot commands.
Sorry for this OT, but can someone explain to me the purpose of these
patches?
Are you trying to
2016-09-22 18:34 GMT+02:00 Amye Scavarda :
> Nope! RHGS is the supported version and gluster.org is the open source
> version. We'd like to keep the documentation reflecting that, but good
> catch.
OK, but the features should be the same, right?
2016-09-22 17:25 GMT+02:00 Rajesh Joseph :
>
> I merged our upstream documentation with the Red Hat documentation and
> removed a few Red Hat-specific contents.
>
Which content is RH-specific? Aren't Gluster and RHGS the same?
On 15 Aug 2016 18:32, "Amye Scavarda" wrote:
>
> I'm not sure what you're proposing here?
I'm proposing not to move the current docs from Markdown to AsciiDoc (which
would mean rewriting everything), but just to replace Read the Docs with
something else.
If Read the Docs is the main issue, just change
On 13 Aug 2016 1:18 AM, "Amye Scavarda" wrote:
> Pushing this one higher again to see if anyone has objections to looking
more into ASCIIdocs.
> Our RTD search issue is a known issue and not likely to be resolved in
the RTD platform.
> - amye
If the main issue is the broken RTD search, why rewrite
On 22 Jul 2016 07:54, "Frank Rothenstein" wrote:
>
> So 3.7.11 is the last usable version when using ZFS on bricks, afaik.
Is this issue still present even with 3.8?