On Wed, 9 May 2018 at 21:57, Jim Kinney wrote:
> It all depends on how you are set up on the distribute. Think RAID 10
with 4 drives - each pair stripes (distribute) and the pair of pairs
replicates.
Exactly, so I have to add bricks matching the replica count.
In a
On Wed, 9 May 2018 at 21:31, Jim Kinney wrote:
> correct. A new server will NOT add space in this manner. But the original
Q was about rebalancing after adding a 4th server. If you are using
distributed/replication, then yes, a new server will be adding a
On Wed, 9 May 2018 at 21:22, Jim Kinney wrote:
> You can change the replica count. Add a fourth server, add its brick to
existing volume with gluster volume add-brick vol0 replica 4
newhost:/path/to/brick
This doesn't add space, but only a new replica,
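For reference, a rough sketch of what raising the replica count looks like (the volume name, host and brick path below are placeholders, not taken from the thread):

  gluster peer probe newhost
  gluster volume add-brick vol0 replica 4 newhost:/path/to/brick
  gluster volume heal vol0 full      # copy existing data onto the new replica
  gluster volume info vol0           # should now report 1 x 4 = 4 bricks

So you gain redundancy, not capacity.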
OK, some more questions as I'm still planning our SDS (but I'm leaning
towards LizardFS; gluster is too inflexible).
Let's assume a replica 3:
1) Currently, it is not possible to add a single server and rebalance like any
other SDS (Ceph, Lizard, Moose, DRBD, ...), right? With replica 3, I have
to add 3
On Mon, 7 May 2018 at 13:22, Dave Sherohman wrote:
> I'm pretty sure that you can only have one arbiter per subvolume, and
> I'm not even sure what the point of multiple arbiters over the same data
> would be.
Multiple arbiters add availability. I can safely
Is it possible to add an arbiter node on the client?
Let's assume a gluster storage made with 2 storage servers. This is prone to
split-brain.
An arbiter node can be added, but can I put the arbiter on one of the
clients?
Can I use multiple arbiters for the same volume? For example, one arbiter on
On Fri, 4 May 2018 at 14:06, Jim Kinney wrote:
> It stopped being an outstanding issue at 3.12.7. I think it's now fixed.
So, it is not possible to extend and rebalance a working cluster with sharded
data?
Can someone confirm this? Maybe the ones that hit the
Hi to all
Is the "famous" corruption bug with sharding enabled fixed, or is it still a work
in progress?
Any updates about this feature?
It was planned for v4 but seems to be postponed...
2018-04-22 15:10 GMT+02:00 Jim Kinney :
> So a stock ovirt with gluster install that uses sharding
> A. Can't safely have sharding turned off once files are in use
> B. Can't be expanded with additional bricks
If the expansion bug is still unresolved, yes :-)
2018-04-23 9:34 GMT+02:00 Alessandro Briosi :
> Is it really so?
Yes, I've opened a bug asking developers to block removal of sharding
when a volume has data on it, or to write a huge warning message
saying that data loss will happen.
> I thought that sharding was an extended
On Sun, 22 Apr 2018 at 10:46, Alessandro Briosi wrote:
> Imho the easiest path would be to turn off sharding on the volume and
> simply do a copy of the files (to a different directory, or rename and
> then copy i.e.)
>
> This should simply store the files without sharding.
>
a bad result.
> In my case I had a system that was just poorly written and it was
> using 300-1000 iops for constant operations and was choking on
> cleanup.
>
>
> On Thu, Oct 12, 2017 at 6:23 PM, Gandalf Corvotempesta
> <gandalf.corvotempe...@gmail.com> wrote:
> >
How can I show the current state of a gluster cluster, like status,
replicas down, what is going on and so on?
Something like /proc/mdstat for RAID, where I can see which disks are
down, if the RAID is rebuilding, checking, ...
Anything similar in gluster?
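There is no single /proc/mdstat equivalent as far as I know, but a rough sketch of commands that give a similar overview (the volume name is a placeholder):

  gluster peer status                               # which nodes are connected
  gluster volume status myvol detail                # bricks online/offline, ports, free space
  gluster volume heal myvol info                    # entries still pending heal
  gluster volume heal myvol statistics heal-count   # per-brick heal counters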
I'm trying to check gluster performance with dbench.
I'm using a replica 3 with a bonded dual gigabit (balance-alb) on all
servers and shard (64M) enabled.
I'm unable to get over 3 MB/s (three) from *inside* the VM, so I don't think
there is any small-file issue, as from inside the VM there isn't any
metadata
fine?
>
>
> That is correct.
>
>>
>> -bill
>>
>>
>>
>> On 10/3/2017 10:30 PM, Krutika Dhananjay wrote:
>>
>>
>>
>> On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran <nbala...@redhat.com>
>> wrote:
>>>
I'm testing iozone inside a VM booted from a gluster volume.
By looking at network traffic on the host (the one connected to the
gluster storage) I can
see that a simple
iozone -w -c -e -i 0 -+n -C -r 64k -s 1g -t 1 -F /tmp/gluster.ioz
will make about 1200 Mbit/s on a bonded dual-gigabit NIC
FS hot tiers?
>
> Regards,
> Bartosz
>
>
> On 10.10.2017 19:59, Gandalf Corvotempesta wrote:
>
>> 2017-10-10 18:27 GMT+02:00 Jeff Darcy <j...@pl.atyp.us>:
>>
>>> Probably not. If there is, it would probably favor XFS. The developers
>>> at
icks.
>
>
>
> On Tue, Oct 10, 2017 at 12:27 PM, Jeff Darcy <j...@pl.atyp.us> wrote:
>
>> On Tue, Oct 10, 2017, at 11:19 AM, Gandalf Corvotempesta wrote:
>> > Anyone made some performance comparison between XFS and ZFS with ZIL
>> > on SSD, in gluster en
2017-10-10 18:27 GMT+02:00 Jeff Darcy :
> Probably not. If there is, it would probably favor XFS. The developers
> at Red Hat use XFS almost exclusively. We at Facebook have a mix, but
> XFS is (I think) the most common. Whatever the developers use tends to
> become "the way
Has anyone made a performance comparison between XFS and ZFS with ZIL
on SSD in a gluster environment?
I've tried to compare both on another SDS (LizardFS) and I haven't
seen any tangible performance improvement.
Is gluster different?
2017-10-10 8:25 GMT+02:00 Karan Sandha :
> Hi Gandalf,
>
> We have multiple tunings to do for small files which decrease the time for
> negative lookups, meta-data caching, parallel readdir. Bumping the server
> and client event threads will help you out in increasing the
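As a hedged sketch, the tunables mentioned above would be applied roughly like this (the volume name and values are illustrative, not from the thread):

  gluster volume set myvol cluster.lookup-optimize on        # cheaper negative lookups
  gluster volume set myvol performance.md-cache-timeout 600  # metadata caching
  gluster volume set myvol performance.parallel-readdir on   # parallel readdir
  gluster volume set myvol server.event-threads 4            # bump server event threads
  gluster volume set myvol client.event-threads 4            # bump client event threads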
Any update about this?
I've seen some work on optimizing performance for small files; is
gluster now "usable" for storing, for example, Maildirs or git sources?
At least in 3.7 (or 3.8, I don't remember exactly), extracting the kernel
sources took about 4-5 minutes.
Any update about the multiple bugs regarding data corruption with
sharding enabled?
Is 3.12.1 ready to be used in production?
I'm testing GlusterFS and Lizard.
I've set both SDS in replica 3.
All servers are configured with bonding mode "balance-rr" with 2x1Gbps NICs.
With iperf I'm able to saturate both links with a single connection.
With Lizard I'm able to saturate both links with a single "dd" write.
With gluster I'm
timal configuration
>
> http://docs.gluster.org/Administrator%20Guide/Setting%20Up%20Volumes/
>
> On Sat, Sep 23, 2017 at 10:01 Gandalf Corvotempesta <
> gandalf.corvotempe...@gmail.com> wrote:
>
>> Is possible to create a dispersed volume 1+2 ? (Almost the same as
>>
Is it possible to create a dispersed volume 1+2? (Almost the same as replica
3, the same as RAID-6)
If yes, how many servers do I have to add in the future to expand the storage?
1 or 3?
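For reference (this doesn't answer the 1+2 question itself), the general create/expand syntax for a dispersed volume looks roughly like this; the example is a 2+1 layout with placeholder hosts and paths:

  gluster volume create disp-vol disperse 3 redundancy 1 \
      srv1:/bricks/b1 srv2:/bricks/b1 srv3:/bricks/b1
  # later expansion adds bricks in multiples of the disperse count (3 here):
  gluster volume add-brick disp-vol \
      srv4:/bricks/b1 srv5:/bricks/b1 srv6:/bricks/b1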
2017-09-08 14:11 GMT+02:00 Pavel Szalbot :
> Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after few
> minutes. SIGTERM on the other hand causes crash, but this time it is
> not read-only remount, but around 10 IOPS tops and 2 IOPS on average.
> -ps
So, seems
2017-09-08 13:44 GMT+02:00 Pavel Szalbot :
> I did not test SIGKILL because I suppose if graceful exit is bad, SIGKILL
> will be as well. This assumption might be wrong. So I will test it. It would
> be interesting to see client to work in case of crash (SIGKILL) and not
2017-09-08 13:21 GMT+02:00 Pavel Szalbot :
> Gandalf, isn't possible server hard-crash too much? I mean if reboot
> reliably kills the VM, there is no doubt network crash or poweroff
> will as well.
IIUP, the only way to keep I/O running is to gracefully exit
2017-09-08 13:07 GMT+02:00 Pavel Szalbot :
> OK, so killall seems to be ok after several attempts i.e. iops do not stop
> on VM. Reboot caused I/O errors after maybe 20 seconds since issuing the
> command. I will check the servers console during reboot to see if the VM
>
On 5 Jul 2017 at 11:31 AM, "Kaushal M" wrote:
- Preliminary support for volume expansion has been added. (Note that
rebalancing is not available yet)
What do you mean with this?
Any differences in volume expansion from the current architecture?
On 30 Jun 2017 at 3:51 PM, wrote:
Note: I also noticed that you said "order". Do you mean when we create via
volume set we have to make an order for bricks? I thought gluster handles
(and does the math) itself.
Yes, you have to specify the exact order
Gluster is not
https://github.com/gluster/glusterfs/blob/master/extras/
> stop-all-gluster-processes.sh which automatically checks for pending
> heals etc before killing the gluster processes.
>
> -Ravi
>
>
>
>
> *From:* Gandalf Corvotempesta [mailto:gandalf.corvotempe...@gmail.com
> <ganda
Doesn't the init.d/systemd script kill gluster automatically on
reboot/shutdown?
On 29 Jun 2017 at 5:16 PM, "Ravishankar N" wrote:
> On 06/29/2017 08:31 PM, Renaud Fortier wrote:
>
> Hi,
>
> Everytime I shutdown a node, I lost access (from clients) to the volumes
> for 42
On 11 Jun 2017 at 1:00 PM, "Atin Mukherjee" wrote:
Yes. And please ensure you do this after bringing down all the glusterd
instances and then once the peer file is removed from all the nodes restart
glusterd on all the nodes one after another.
If you have to bring down
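A minimal sketch of that sequence as I understand it (the peer file name is a placeholder; double-check the UUID before removing anything):

  # on every node, stop glusterd first:
  systemctl stop glusterd
  rm /var/lib/glusterd/peers/<UUID-of-the-stale-peer>
  # then restart glusterd on the nodes one after another:
  systemctl start glusterd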
e can confirm so we can be sure it's 100% resolved.
>
>
>
> --
>
> Respectfully
> *Mahdi A. Mahdi*
>
> --
> *From:* Krutika Dhananjay <kdhan...@redhat.com>
> *Sent:* Tuesday, June 6, 2017 9:17:40 AM
> *To:* Mahdi Adnan
> *Cc:* glus
Great, thanks!
On 5 Jun 2017 at 6:49 AM, "Krutika Dhananjay" <kdhan...@redhat.com> wrote:
> The fixes are already available in 3.10.2, 3.8.12 and 3.11.0
>
> -Krutika
>
> On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta <
> gandalf.corvotempe
>>
>> Although the process went smooth, i will run another extensive test
>> tomorrow just to be sure.
>>
>> --
>>
>> Respectfully
>> *Mahdi A. Mahdi*
>>
>> --
>> *From:* Krutika Dhananjay <kdhan...@re
Currently, which are the best small-file optimizations that we can enable
on a gluster storage?
I'm planning to move a couple of dovecot servers, with thousands of mail files
(from a couple of KB to less than 10-20 MB).
Are these optimizations compatible with a VM workload, like sharding?
As gluster
2017-05-03 14:22 GMT+02:00 Atin Mukherjee :
> Fix is up @ https://review.gluster.org/#/c/17160/ . The only thing which
> we'd need to decide (and are debating on) is that should we bypass this
> validation with rebalance start force or not. What do others think?
This is a
2017-05-01 21:08 GMT+02:00 Vijay Bellur :
> We might also want to start thinking about spare bricks that can be brought
> into a volume based on some policy. For example, if the posix health
> checker determines that underlying storage stack has problems, we can bring
> a
2017-05-01 21:00 GMT+02:00 Shyam :
> So, Gandalf, it will be part of the roadmap, just when we maybe able to pick
> and deliver this is not clear yet (as Pranith puts it as well).
It doesn't matter when. Knowing that adding a single brick will be made
possible is enough (at
2017-05-01 20:55 GMT+02:00 Pranith Kumar Karampuri :
> Replace-brick as a command is implemented with the goal of replacing a disk
> that went bad. So the availability was already less. In 2013-2014 I proposed
> that we do it by adding brick to just the replica set and
2017-05-01 20:46 GMT+02:00 Shyam :
> Fair point. If Gandalf concurs, we will add this to our "+1 scaling" feature
> effort (not yet on github as an issue).
Everything is OK for me as long as:
a) the operation is automated (this is what I asked for initially
[1]), maybe
2017-05-01 20:43 GMT+02:00 Shyam :
> I do agree that for the duration a brick is replaced its replication count
> is down by 1, is that your concern? In which case I do note that without (a)
> above, availability is at risk during the operation. Which needs other
>
2017-05-01 20:42 GMT+02:00 Joe Julian :
> Because it's done by humans.
Exactly. I forgot to mention this.
2017-05-01 20:36 GMT+02:00 Pranith Kumar Karampuri :
> Why?
Because you have to manually replace the brick with the newer one, format
the older one and add it back.
What happens if, by mistake, we replace the older brick with another
brick on the same disk?
Currently you have
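For context, the manual replacement being discussed is roughly this (the volume, hosts and paths are placeholders):

  gluster volume replace-brick myvol \
      oldhost:/bricks/old newhost:/bricks/new commit force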
2017-05-01 20:30 GMT+02:00 Shyam :
> Yes, as a matter of fact, you can do this today using the CLI and creating
> nx2 instead of 1x2. 'n' is best decided by you, depending on the growth
> potential of your cluster, as at some point 'n' wont be enough if you grow
> by some
2017-05-01 20:22 GMT+02:00 Shyam :
> Brick splitting (I think was first proposed by Jeff Darcy) is to create more
> bricks out of given storage backends. IOW, instead of using a given brick as
> is, create sub-dirs and use them as bricks.
>
> Hence, given 2 local FS end points
2017-05-01 20:08 GMT+02:00 Pranith Kumar Karampuri :
> Filename can be renamed and then we lost the link because hash will be
> different. Anyways all these kinds of problems are already solved in
> distribute layer.
Filename can be renamed even with the current architecture.
2017-05-01 20:00 GMT+02:00 Pranith Kumar Karampuri :
> Let's say we have 1 disk, we format it with say XFS and that becomes a brick
> at the moment. Just curious, what will be the relationship between brick to
> disk in this case(If we leave out LVM for this example)?
No
2017-05-01 19:50 GMT+02:00 Shyam :
> Splitting the bricks need not be a post factum decision, we can start with
> larger brick counts, on a given node/disk count, and hence spread these
> bricks to newer nodes/bricks as they are added.
>
> If I understand the ceph PG count, it
2017-05-01 19:36 GMT+02:00 Pranith Kumar Karampuri :
> To know GFID of file1 you must know where the file resides so that you can
> do getxattr trusted.gfid on the file. So storing server/brick location on
> gfid is not getting us much more information that what we already
2017-05-01 18:57 GMT+02:00 Pranith Kumar Karampuri :
> Yes this is precisely what all the other SDS with metadata servers kind of
> do. They kind of keep a map of on what all servers a particular file/blob is
> stored in a metadata server.
Not exactly. Other SDSs have some
2017-05-01 18:30 GMT+02:00 Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com>:
> Maybe a simple DB (just as an idea: sqlite, berkeleydb, ...) stored in
> a fixed location on gluster itself, being replicated across nodes.
Even better, embedding RocksDB with its data dire
2017-05-01 18:23 GMT+02:00 Pranith Kumar Karampuri :
> IMHO It is difficult to implement what you are asking for without metadata
> server which stores where each replica is stored.
Can't you distribute a sort of file mapping to each node?
AFAIK, gluster already has some
On 29 Apr 2017 at 4:12 PM, "Gandalf Corvotempesta" <gandalf.corvotempe...@gmail.com> wrote:
Anyway, the proposed workaround:
https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
won't work with just a single volume made up of 2 replicated b
2017-04-30 10:13 GMT+02:00 :
> I was (I believe) the first one to run into the bug, it happens and I knew it
> was a risk when installing gluster.
I know.
> But since then I didn't see any warnings anywhere except here. I agree
> with you that it should be mentioned in
u don't like what the developers focus on, you are free to
> try and offer a bounty to motivate someone to look at what you want,
> or even better : go and buy a license for one of gluster's commercial
> alternatives.
>
>
> On Sat, Apr 29, 2017 at 11:43:54PM +0200, Gandalf Corvotempe
ant it to do".
>
> I'm done. You can continue to feel entitled here on the mailing list. I'll
> just set my filters to bitbucket anything from you.
>
> On 04/29/2017 01:00 PM, Gandalf Corvotempesta wrote:
>
> I repeat: I've just proposed a feature
> I'm not a C developer
gt; On April 29, 2017 11:08:45 AM PDT, Gandalf Corvotempesta <
> gandalf.corvotempe...@gmail.com> wrote:
>>
>> Mine was a suggestion.
>> Feel free to ignore what gluster users have to say and still keep going
>> your own way.
>>
>> Usually, open source proje
e community project, not a company product,
> feature requests like these are welcome, but would be more welcome with
> either code or at least a well described method. Broad asks like these are
> of little value, imho.
>
>
> On 04/29/2017 07:12 AM, Gandalf Corvotempesta wrote:
I would like to heavily test a small gluster installation.
Has anyone done this previously?
I think that running bonnie++ for 2 or more days and trying to remove
nodes/bricks
would be enough to test everything, but how can I ensure that, after
some days, all
files stored are exactly how bonnie++ has
have any bricks to "replace"
This is something I would like to see implemented in gluster.
2017-04-29 16:08 GMT+02:00 Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com>:
> 2017-04-24 10:21 GMT+02:00 Pranith Kumar Karampuri <pkara...@redhat.com>:
>> Are you sugge
2017-04-24 10:21 GMT+02:00 Pranith Kumar Karampuri :
> Are you suggesting this process to be easier through commands, rather than
> for administrators to figure out how to place the data?
>
> [1] http://lists.gluster.org/pipermail/gluster-users/2016-July/027431.html
Admin
2017-04-27 14:03 GMT+02:00 Pranith Kumar Karampuri :
> The bugs are not in sharding. Sharding + VM workload is exposing bugs are in
> DHT/rebalance. These bugs existed for years. They are coming to the fore
> only now. It proves to be very difficult to recreate these bugs in
2017-04-27 13:31 GMT+02:00 Pranith Kumar Karampuri :
> But even after that fix, it is still leading to pause. And these are the two
> updates on what the developers are doing as per my understanding. So that
> workflow is not stable yet IMO.
So, even after that fix, two more
2017-04-27 13:21 GMT+02:00 Serkan Çoban :
> I think this is the fix Gandalf is asking for:
> https://github.com/gluster/glusterfs/commit/6e3054b42f9aef1e35b493fbb002ec47e1ba27ce
Yes, i'm talking about this.
some work needs to be done for dht_[f]xattrop. I
> believe this is the next step that is underway.
>
>
> On Thu, Apr 27, 2017 at 12:13 PM, Gandalf Corvotempesta <
> gandalf.corvotempe...@gmail.com> wrote:
>
>> Updates on this critical bug ?
>>
>> Il 18 ap
Updates on this critical bug ?
On 18 Apr 2017 at 8:24 PM, "Gandalf Corvotempesta" <gandalf.corvotempe...@gmail.com> wrote:
> Any update ?
> In addition, if this is a different bug but the "workflow" is the same
> as the previous one, how is it possible that
Sorry for the stupid subject and for questions that probably should be
placed in the FAQ page, but
let's assume a replica 3 cluster made with 3 servers (1 brick per server):
1) Can I add a fourth server, with one brick, increasing the total
available space? If yes, how?
2) Can I increase replica
2017-04-24 10:21 GMT+02:00 Pranith Kumar Karampuri :
> At least in case of EC it is with good reason. If you want to change
> volume's configuration from 6+2->7+2 you have to compute the encoding again
> and place different data on the resulting 9 bricks. Which has to be done
On 24 Apr 2017 at 9:40 AM, "Ashish Pandey" wrote:
There is difference between server and bricks which we should understand.
When we say m+n = 6+2, then we are talking about the bricks.
Total number of bricks are m+n = 8.
Now, these bricks could be anywhere on any
I'm still trying to figure out if adding a single server to an
existing gluster cluster is possible or not, based on EC or standard
replica.
I don't think so, because with replica 3, when each server is already
full (no more slots for disks), I need to add 3 servers at once.
Is this the same even
.com>:
> Nope. This is a different bug.
>
> -Krutika
>
> On Mon, Apr 3, 2017 at 5:03 PM, Gandalf Corvotempesta
> <gandalf.corvotempe...@gmail.com> wrote:
>>
>> This is a good news
>> Is this related to the previously fixed bug?
>>
>> Il 3 apr 2017 10:22 A
2017-04-18 9:36 GMT+02:00 Serkan Çoban :
> Nope, healing speed is 10MB/sec/brick, each brick heals with this
> speed, so one brick or one server each will heal in one week...
Is this by design? Is it tunable? 10 MB/s per brick is too low for us.
We will use 10Gb Ethernet,
2017-04-18 9:17 GMT+02:00 Serkan Çoban :
> In my case I see 6TB data was healed within 7-8 days with above command
> running.
But is this normal? Gluster needs about 7-8 days to heal 6 TB?
In case of a server failure, you need some weeks to heal?
Let's assume a replica 3 cluster with 3 bricks used at 95%.
If I add 3 more bricks, a rebalance (in addition to the corruption :-) )
will move some shards to the newly added bricks so that the old bricks' usage
will go down from 95% to (maybe) 50%?
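Assuming the corruption bug were out of the way, the expand-and-rebalance sequence would look roughly like this (the volume, hosts and paths are placeholders):

  gluster volume add-brick myvol \
      srv4:/bricks/b1 srv5:/bricks/b1 srv6:/bricks/b1   # 3 bricks = one new replica-3 set
  gluster volume rebalance myvol start
  gluster volume rebalance myvol status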
Just a question: is the rebalance bug that corrupts data also present in
RHGS?
If yes, why is there nothing written on the Red Hat site to warn users not to
rebalance a sharded volume?
You have to specify the correct order of bricks forming the same replica set.
For example:
host1:brick1 host2:brick2 host3:brick3 host1:brick4 host2:brick5
host3:brick6
What you did is form a replica set with all bricks on the same host,
so a host failure will bring your cluster down.
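In other words, a distributed-replicated create with the correct ordering would look roughly like this (the brick paths are placeholders):

  gluster volume create myvol replica 3 \
      host1:/bricks/brick1 host2:/bricks/brick2 host3:/bricks/brick3 \
      host1:/bricks/brick4 host2:/bricks/brick5 host3:/bricks/brick6

Each consecutive group of 3 bricks forms one replica set, so every set spans all three hosts.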
On 4
This is good news.
Is this related to the previously fixed bug?
On 3 Apr 2017 at 10:22 AM, "Krutika Dhananjay" wrote:
> So Raghavendra has an RCA for this issue.
>
> Copy-pasting his comment here:
>
>
>
> Following is a rough algorithm of shard_writev:
>
> 1. Based on
How can I ensure that each parity brick is stored on a different server?
On 30 Mar 2017 at 6:50 AM, "Ashish Pandey" wrote:
> Hi Terry,
>
> There is no constraint on the number of nodes for erasure coded volumes.
> However, there are some suggestions to keep in mind.
>
> If
Are rebalance and fix-layout needed when adding new bricks?
Any workaround for extending a cluster without losing data?
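For reference, the two variants being asked about are roughly (the volume name is a placeholder):

  gluster volume rebalance myvol fix-layout start   # only fix the directory layout
  gluster volume rebalance myvol start              # fix layout and migrate data
  gluster volume rebalance myvol status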
On 28 Mar 2017 at 8:19 PM, "Pranith Kumar Karampuri" wrote:
>
>
> On Mon, Mar 27, 2017 at 11:29 PM, Mahdi Adnan
> wrote:
>
>>
uld have a
> clone of the B snapshot.
>
> You would not have to read the whole volume image but just the changed
> blocks dramatically improving the speed of the backup.
>
> At this point you can delete the A snapshot and promote the B snapshot to
> be the A snapshot for
n" <j...@julianfamily.org> wrote:
> The rsync protocol only passes blocks that have actually changed. Raw
> changes fewer bits. You're right, though, that it still has to check the
> entire file for those changes.
>
> On 03/23/17 12:47, Gandalf Corvotempesta wrote:
>
> Raw
would also be good.
>
> On 03/23/17 12:36, Gandalf Corvotempesta wrote:
>
> Georep exposes another problem:
> When using gluster as storage for VM, the VM file is saved as qcow.
> Changes are inside the qcow, thus rsync has to sync the whole file every
> time
>
> A lit
Maybe exposing the volume as iSCSI and then using ZFS over iSCSI on each
hypervisor?
In this case I'd be able to use ZFS snapshots and send them to the backup
server.
On 23 Mar 2017 at 8:36 PM, "Gandalf Corvotempesta" <gandalf.corvotempe...@gmail.com> wrote:
> Georep expose
data up into smaller more manageable
> volumes where you only keep a smaller set of critical data and just back
> that up. Perhaps an object store (swift?) might handle fault tolerance
> distribution better for some workloads.
>
> There's no one right answer.
>
> On 03/23/17 1
backup
> tools like borg,attic,restic , etc...
>
> On Thu, Mar 23, 2017 at 7:48 PM, Gandalf Corvotempesta
> <gandalf.corvotempe...@gmail.com> wrote:
> > Let's assume a 1PB storage full of VMs images with each brick over ZFS,
> > replica 3, sharding enabled
> >
Let's assume a 1 PB storage full of VM images with each brick on ZFS,
replica 3, sharding enabled.
How do you back up/restore that amount of data?
Backing up daily is impossible; you'll never finish the backup before the
following one starts (in other words, you need more than 24 hours).
project (as written on gluster's homepage)
2017-03-18 14:21 GMT+01:00 Krutika Dhananjay <kdhan...@redhat.com>:
>
>
> On Sat, Mar 18, 2017 at 3:18 PM, Gandalf Corvotempesta
> <gandalf.corvotempe...@gmail.com> wrote:
>>
>> 2017-03-18 2:09 GMT+01:00 Lindsay Mathieson
2017-03-18 2:09 GMT+01:00 Lindsay Mathieson :
> Concerning, this was supposed to be fixed in 3.8.10
Exactly. https://bugzilla.redhat.com/show_bug.cgi?id=1387878
Now let's see how much time they require to fix another CRITICAL bug.
I'm really curious.
Workload: VM hosting with sharding enabled, replica 3 (with or without
distribution, see below)
Which configuration will perform better:
a) 1 ZFS disk per brick, 1 brick per server. 1 disk for each server.
b) 1 ZFS mirror per brick, 1 brick per server. 1 disk for each server.
c) 1 ZFS disk per
I can confirm this
Any solution?
On 14 Mar 2017 at 8:11 PM, "Sergei Gerasenko" wrote:
> Hi everybody,
>
> Easy question: the output of *gluster peer status* on some of the hosts
> in the cluster has the hostname for all but one member of the cluster,
> which is listed by
2017-03-10 11:39 GMT+01:00 Cedric Lemarchand :
> I am still asking myself how such bug could happen on a clustered storage
> software, where adding bricks is a base feature for scalable solution, like
> Gluster. Or maybe is it that STM releases are really under tested
2017-03-08 13:09 GMT+01:00 Saravanakumar Arumugam :
> We are working on a custom solution which will avoid gluster-swift
> altogether.
> We will update here once it is ready. Stay tuned.
Any ETA ?
I'm really interested in this.
Let me know if I understood properly: is it now possible to access a
Gluster volume as object storage via the S3 API?
Is Gluster-swift (and with that, the rings, auth and so on coming from
OpenStack) still needed?
2017-03-08 9:53 GMT+01:00 Saravanakumar Arumugam
2017-03-08 11:48 GMT+01:00 Karan Sandha :
> Hi Deepak,
>
> Are you reading a small file data-set or large files data-set and secondly,
> volume is mounted using which protocol?
>
> for small files data-set :-
>
> gluster volume set vol-name cluster.lookup-optimize on
Hardware RAID with ZFS should be avoided.
ZFS needs direct access to the disks, and with hardware RAID you have a
controller in the middle.
If you need ZFS, skip the hardware RAID and use ZFS RAID.
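A minimal sketch of that setup, assuming the controller is set to JBOD/passthrough (pool name, devices and dataset are placeholders):

  zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
  zfs create tank/brick1
  # the dataset mountpoint (/tank/brick1) is then used as the gluster brick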
On 6 Mar 2017 at 9:23 PM, "Dung Le" wrote:
> Hi,
>
> Since I am new with Gluster, need