Any updates on this feature?
It was planned for v4 but seems to be postponed...
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel
I think it would be useful to add a cumulative cluster-health output, like
mdstat for mdadm, so that with a single command it would be possible to see:
1) how many nodes are UP and DOWN (and which nodes are DOWN)
2) any background operation running (like healing, scrubbing) with
their progress
3) any
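Until such a command exists, a rough approximation of point 1 can be scripted from the current CLI. This is only a sketch: the peer-status text below is a made-up sample baked in so the script runs standalone; on a real cluster you would capture `gluster peer status` output instead, and the exact wording of the `State:` line should be verified against your gluster version.

```shell
#!/bin/sh
# Hypothetical sample of `gluster peer status` output; on a real cluster
# replace the literal with:  status=$(gluster peer status)
status='Number of Peers: 2

Hostname: node2
Uuid: 3b4c0000-0000-0000-0000-000000000002
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: 9f1a0000-0000-0000-0000-000000000003
State: Peer in Cluster (Disconnected)'

# Count peers by connection state (case-sensitive, so "(Disconnected)"
# does not also match the "(Connected)" pattern).
up=$(printf '%s\n' "$status" | grep -c '(Connected)')
down=$(printf '%s\n' "$status" | grep -c '(Disconnected)')
echo "peers up: $up, down: $down"

# List which peers are down: remember the last Hostname seen, print it
# whenever a Disconnected state line follows.
printf '%s\n' "$status" | awk '/Hostname:/ {h=$2} /Disconnected/ {print "DOWN: " h}'
```

Background operations (point 2) would need the same treatment applied to `gluster volume heal <vol> info` and the scrub status commands.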
Could you please share fio command line used for this test?
Additionally, can you tell me the time needed to extract the kernel source?
On 2 Nov 2017 11:24 PM, "Ramon Selga" wrote:
> Hi,
>
> Just for your reference we got some similar values in a customer setup
>
On 5 Jul 2017 11:31 AM, "Kaushal M" wrote:
- Preliminary support for volume expansion has been added. (Note that
rebalancing is not available yet)
What do you mean by this?
Are there any differences in volume expansion from the current architecture?
2017-05-03 14:22 GMT+02:00 Atin Mukherjee :
> Fix is up @ https://review.gluster.org/#/c/17160/ . The only thing which
> we'd need to decide (and are debating on) is that should we bypass this
> validation with rebalance start force or not. What do others think?
This is a
Would it be possible to add a command, for use in disaster recovery
(where everything is broken), to recreate files from shards?
For example, let's assume a totally down cluster: no trusted pools and
so on, but the sysadmin knows which HDD is part of each distributed replica:
hdd1 + hdd2 + hdd3
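For what it's worth, when the shard layout on the brick is known, reassembly can be scripted by hand. The sketch below is only an illustration under assumed naming conventions (first shard at the file's original path on the brick, later shards under `.shard/<GFID>.<n>`); on a real brick the GFID would be read from the file's `trusted.gfid` xattr, whereas here it is a made-up value and the brick is a fake directory so the script runs standalone:

```shell
#!/bin/sh
set -e
brick=/tmp/fake-brick                        # hypothetical brick mount
gfid=0f9f8a11-0000-0000-0000-000000000001    # normally read via getfattr

# --- build a fake brick so the sketch is self-contained ---
mkdir -p "$brick/.shard"
printf 'AAAA' > "$brick/bigfile"             # shard 0 = base file
printf 'BBBB' > "$brick/.shard/$gfid.1"      # shard 1
printf 'CC'   > "$brick/.shard/$gfid.2"      # shard 2

# --- reassembly: base file first, then numbered shards in order ---
cp "$brick/bigfile" /tmp/recovered
for s in $(ls "$brick/.shard/" | grep "^$gfid\." | sort -t. -k2 -n); do
    cat "$brick/.shard/$s" >> /tmp/recovered
done
cat /tmp/recovered   # -> AAAABBBBCC
```

On a distributed volume each brick would hold only a subset of the shards, so this would have to be repeated per brick and merged, which is exactly why a supported recovery command would be valuable.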
On 14 Nov 2016 7:28 PM, "Joe Julian" wrote:
>
> IMHO, if a command will result in data loss, fail it. Period.
>
> It should never be ok for a filesystem to lose data. If someone wanted to
do that with ext or xfs they would have to format.
>
Exactly. I wrote
2016-11-11 16:09 GMT+01:00 Sander Eikelenboom :
> I think that could also be useful
> when trying to recover from a total disaster (where glusterfs bricks are
> broken down
> and you end up with loose bricks. At least you would be able to keep the
> filesystem data, remove the
On 10 Nov 2016 08:22, "Raghavendra" wrote:
>
> Kyle,
>
> Thanks for your response :). This really helps. From 13s to 0.23s
seems like a huge improvement.
From 13 minutes to 23 seconds, not from 13 seconds :)
2016-10-31 12:40 GMT+01:00 Lindsay Mathieson :
> But you can broadcast with UDP - one packet of data through one nic to all
> nodes, so in theory you could broadcast 1GB *per nic* or 3GB via three nics.
> Minus overhead for acks, nacks and ordering :)
>
> But I'm not
On 28 Oct 2016 2:50 PM, "Lindsay Mathieson" wrote:
>
> I'd like to experiment with broadcast UDP to see if it's feasible on local
networks. It would be amazing if we could write at 1GB speeds
simultaneously to all nodes.
>
If you have replica 3 and set a 3 nic
Would it be possible to add support for ":" in the brick path?
I'm trying to use the following command:
# gluster volume replace-brick gv0 1.2.3.4:/export/brick1/brick
1.2.3.4:/export/pci-:01:00.0-scsi-0:0:2:0/brick commit force
wrong brick type: 1.2.3.4:/export/pci-:01:00.0-scsi-0:0:2:0/brick,
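The error comes from the CLI's validation of the brick argument: the brick is given as host:path, and gluster rejects any further ":" inside the path portion even though, for IPv4 or hostname bricks, splitting on the first colon would be unambiguous. A tiny sketch of that split, plus the workaround I would try (hedged: the symlink name is hypothetical and untested here), pointing a colon-free alias at the real directory:

```shell
#!/bin/sh
# The brick argument is host:path; for IPv4/hostname bricks, splitting
# on the FIRST colon recovers both parts, yet the CLI rejects any
# additional ":" in the path.
brick='1.2.3.4:/export/pci-:01:00.0-scsi-0:0:2:0/brick'
host=${brick%%:*}   # strip longest suffix from the first colon -> host part
path=${brick#*:}    # strip shortest prefix up to a colon -> path part
echo "host=$host"   # host=1.2.3.4
echo "path=$path"

# Assumed workaround (not executed here): give the directory a
# colon-free alias and hand that to gluster instead:
#   ln -s '/export/pci-:01:00.0-scsi-0:0:2:0' /export/disk2
#   gluster volume replace-brick gv0 1.2.3.4:/export/brick1/brick \
#       1.2.3.4:/export/disk2/brick commit force
```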
2016-10-28 12:32 GMT+02:00 Pranith Kumar Karampuri :
> No it is not completely valid. We will update it and announce the release
> sometime soon.
Thank you.
Could you also fix the other roadmaps with certain features and what
is being worked on?
There is a little bit
On 25 Oct 2016 12:42, "Aravinda" wrote:
>
> Hi,
>
> Since Automated test framework for Gluster is in progress, we need help
from Maintainers and developers to test the features and bug fixes to
release Gluster 3.9.
>
Is the following roadmap still valid or any changes
Any progress on the major issue with gluster: small-file
performance?
Anyone working on this?
I would really like to use gluster as storage for maildirs or web hosting,
but with the current performance this wouldn't be possible without adding
additional layers (like exporting huge files
On 20 Jun 2016 8:08 AM, "B.K.Raghuram" wrote:
>
> We had hosted some changes to an old version of glusterfs (3.6.1) in
order to incorporate ZFS snapshot support for gluster snapshot commands.
Sorry for this OT, but can someone explain to me the meaning of these
2016-09-22 18:34 GMT+02:00 Amye Scavarda :
> Nope! RHGS is the supported version and gluster.org is the open source
> version. We'd like to keep the documentation reflecting that, but good
> catch.
Ok but features should be the same, right?
2016-09-22 17:25 GMT+02:00 Rajesh Joseph :
>
> I merged our upstream documentation with the Red Hat documentation and
> removed
> a few Red Hat-specific contents.
>
> Which content is RH-specific? Aren't gluster and RHGS the same?
On 15 Aug 2016 18:32, "Amye Scavarda" wrote:
>
> I'm not sure what you're proposing here?
>>
I'm proposing not to move the current docs from Markdown to AsciiDoc (that
would mean rewriting everything), but just to replace Read the Docs with
something else.
If Read the Docs is the main
On 13 Aug 2016 1:18 AM, "Amye Scavarda" wrote:
> Pushing this one higher again to see if anyone has objections to looking
more into ASCIIdocs.
> Our RTD search issue is a known issue and not likely to be resolved in
the RTD platform.
> - amye
If the main issue is the RTD broken
On 22 Jul 2016 07:54, "Frank Rothenstein" wrote:
>
> So 3.7.11 is the last usable version when using ZFS on bricks, afaik.
Is this issue still present even with 3.8?