Though remove-brick is not a usual operation we would do on a Gluster volume,
this has consistently failed, ending in a corrupted Gluster volume after
sharding has been turned on. For bug 1387878, it's very similar to what I
had encountered in the ESXi world. Add-brick would run successfully, but
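For context, the staged remove-brick flow migrates data off the brick before
it is dropped, unlike the single-shot "force" variant. A rough sketch with
made-up volume and brick names; exact syntax depends on the volume layout
(e.g. a "replica N" argument may be required):

    # staged removal: migrate data off the brick, then commit
    gluster volume remove-brick vmvol server3:/bricks/brick1/vmvol start
    gluster volume remove-brick vmvol server3:/bricks/brick1/vmvol status
    gluster volume remove-brick vmvol server3:/bricks/brick1/vmvol commit

    # single-shot removal, no data migration
    gluster volume remove-brick vmvol server3:/bricks/brick1/vmvol force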
Features and stability are not mutually exclusive.
Sometimes instability is cured by adding a feature.
Fixing a bug is not something that's solved better by having more developers
work on it.
Sometimes fixing one bug exposes a problem elsewhere.
Using free open source community projects
2016-11-14 17:01 GMT+01:00 Vijay Bellur :
> Accessing sharded data after disabling sharding is something that we
> did not visualize as a valid use case at any point in time. Also, you
> could access the contents by enabling sharding again. Given these
> factors I think this
2016-11-14 16:55 GMT+01:00 Krutika Dhananjay :
> The only way to fix it is to have sharding be part of the graph *even* if
> disabled, except that in this case, its job should be confined to
> aggregating the already sharded files during reads but NOT shard new files that
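For readers following along, the toggle being discussed is a plain volume
option; a minimal sketch, assuming a hypothetical volume named "vmvol":

    gluster volume set vmvol features.shard on    # new writes get split into shards
    gluster volume set vmvol features.shard off   # shard translator is dropped from the
                                                  # client graph; already-sharded files
                                                  # appear truncated to their base shard
                                                  # until sharding is enabled again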
On Mon, Nov 14, 2016 at 8:54 AM, Niels de Vos wrote:
> On Mon, Nov 14, 2016 at 04:50:44PM +0530, Pranith Kumar Karampuri wrote:
> > On Mon, Nov 14, 2016 at 4:38 PM, Gandalf Corvotempesta <
> > gandalf.corvotempe...@gmail.com> wrote:
> >
> > > 2016-11-14 11:50 GMT+01:00 Pranith
On Mon, Nov 14, 2016 at 10:38 AM, Gandalf Corvotempesta
wrote:
> 2016-11-14 15:54 GMT+01:00 Niels de Vos :
>> Obviously this is unacceptable for versions that have sharding as a
>> functional (not experimental) feature. All supported features
Yes. I apologise for the delay.
Disabling sharding would knock the translator itself off the client stack, and
given that sharding is the actual (and the only) translator that has the
knowledge of how to interpret sharded files and how to aggregate them,
removing the translator from the stack
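As a rough illustration of what "removing the translator from the stack"
means: when sharding is enabled, the client volfile carries a block along the
lines of the one below (names invented, surrounding translators omitted); with
sharding disabled this block simply isn't generated:

    volume vmvol-shard
        type features/shard
        option shard-block-size 64MB
        subvolumes vmvol-dht
    end-volume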
On Mon, Nov 14, 2016 at 8:24 PM, Niels de Vos wrote:
> On Mon, Nov 14, 2016 at 04:50:44PM +0530, Pranith Kumar Karampuri wrote:
> > On Mon, Nov 14, 2016 at 4:38 PM, Gandalf Corvotempesta <
> > gandalf.corvotempe...@gmail.com> wrote:
> >
> > > 2016-11-14 11:50 GMT+01:00 Pranith
2016-11-14 15:54 GMT+01:00 Niels de Vos :
> Obviously this is unacceptable for versions that have sharding as a
> functional (not experimental) feature. All supported features are
> expected to function without major problems (like corruption) for all
> standard Gluster
On Mon, Nov 14, 2016 at 04:50:44PM +0530, Pranith Kumar Karampuri wrote:
> On Mon, Nov 14, 2016 at 4:38 PM, Gandalf Corvotempesta <
> gandalf.corvotempe...@gmail.com> wrote:
>
> > 2016-11-14 11:50 GMT+01:00 Pranith Kumar Karampuri :
> > > To make gluster stable for VM images
On Mon, Nov 14, 2016 at 4:38 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> 2016-11-14 11:50 GMT+01:00 Pranith Kumar Karampuri :
> > To make gluster stable for VM images we had to add all these new features
> > and then fix all the bugs Lindsay/Kevin
2016-11-14 11:50 GMT+01:00 Pranith Kumar Karampuri :
> To make gluster stable for VM images we had to add all these new features
> and then fix all the bugs Lindsay/Kevin reported. We just fixed a corruption
> issue that can happen with replace-brick which will be available in
Which data corruption issue is this? Could you point me to the bug report
on bugzilla?
-Krutika
On Sat, Nov 12, 2016 at 4:28 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> On 12 Nov 2016 10:21, "Kevin Lemonnier" wrote:
> > We've had a lot of
On Sat, Nov 12, 2016 at 4:28 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> On 12 Nov 2016 10:21, "Kevin Lemonnier" wrote:
> > We've had a lot of problems in the past, but at least for us 3.7.12 (and
> 3.7.15)
> > seems to be working pretty well
On Sat, Nov 12, 2016 at 2:11 PM, Kevin Lemonnier
wrote:
> >
> > On the other hand at home, I tried to use GlusterFS for VM images in a
> > simple replica 2 setup with Pacemaker for HA. VMs were constantly
> > failing en masse even without making any changes. Very often the
On 12 Nov 2016 9:04 PM, "Alex Crow" wrote:
IMHO GlusterFS would be a great
> product if it tried to:
>
> a) Add fewer features per release, and/or slow down the release cycle.
> Maybe have a "Feature"
> those that need to try new, well, features.
> b) Concentrate on
>
> On the other hand at home, I tried to use GlusterFS for VM images in a
> simple replica 2 setup with Pacemaker for HA. VMs were constantly
> failing en masse even without making any changes. Very often the images
> got corrupted and had to be restored from backups. This was over a year
> ago
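For reference, the kind of setup described there would be created roughly as
below (hostnames and brick paths are made up). Plain replica 2 has no
tie-breaker, which is part of why such setups are fragile; replica 3 or an
arbiter brick is generally recommended for VM images:

    gluster volume create vmvol replica 2 \
        node1:/bricks/brick1/vmvol node2:/bricks/brick1/vmvol
    gluster volume start vmvol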
> Sure, but thinking about it later we realised that it might be for the better.
> I believe when sharding is enabled the shards will be dispersed across all the
> replica sets, meaning that losing a replica set will kill all your VMs.
>
> Imagine a 16x3 volume for example, losing 2 bricks
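What that dispersion looks like on disk, as a sketch (the GFID below is
invented): each shard is a separate file under the hidden .shard directory on
whichever replica set DHT hashes it to, so one VM image ends up spread over
many bricks:

    ls /bricks/brick1/.shard/
    fc3e45f6-7a1b-4c2d-9e0f-1234567890ab.1
    fc3e45f6-7a1b-4c2d-9e0f-1234567890ab.7
    fc3e45f6-7a1b-4c2d-9e0f-1234567890ab.12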
On 12 Nov 2016 19:29, "Kevin Lemonnier" wrote:
> I don't understand the issue. Let's say I can fit 30 VMs on a 3-node
cluster,
> whenever I need to create VM 31 I just order 3 nodes and replicate the
> exact same cluster. I get the exact same performance as on the
On 12 Nov 2016 16:13, "David Gossage" wrote:
>
> also maybe a code monkey to sit at my keyboard and screech at me whenever
I type sudo so I pay attention to what I am about to do.
>
Obviously yes, but for destructive operations a confirmation should always be asked.
On Sat, Nov 12, 2016 at 7:42 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> On 12 Nov 2016 14:27, "Lindsay Mathieson" wrote:
> >
> > gluster volume reset *finger twitch*
> >
> >
> > And boom! volume gone.
> >
>
> There are too many
On 12 Nov 2016 14:27, "Lindsay Mathieson" wrote:
>
> gluster volume reset *finger twitch*
>
>
> And boom! volume gone.
>
There are too many destructive operations in gluster :)
More security on stored data please!
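The command being joked about is a one-liner with no confirmation; assuming a
hypothetical volume "vmvol":

    gluster volume reset vmvol

It resets all reconfigured options back to their defaults, including
features.shard, which, per the discussion above, is enough to make sharded VM
images unreadable even though no data is actually deleted.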
On 12/11/2016 9:58 PM, Gandalf Corvotempesta wrote:
Exactly. I've proposed a warning in the CLI when changing the shard
size, but this is still unfixed and it scares me.
It's a critical bug, IMHO, and should be addressed ASAP, or any user
could destroy the whole cluster with a simple command.
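The option in question, for a hypothetical volume "vmvol":

    gluster volume set vmvol features.shard-block-size 512MB

As far as I understand it, this is accepted silently today; existing files
keep the shard size they were written with and only newly created files pick
up the new value, so mixing sizes on a live volume is exactly the foot-gun the
proposed CLI warning is meant to catch.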
On 12 Nov 2016 12:53, "Kevin Lemonnier" wrote:
> Sure, but thinking about it later we realised that it might be for the
better.
> I believe when sharding is enabled the shards will be dispersed across all
> the replica sets, meaning that losing a replica set will kill
>
>Having to create multiple clusters is not a solution and is much more
>expensive.
>And if you corrupt data from a single cluster you still have issues
>
Sure, but thinking about it later we realised that it might be for the better.
I believe when sharding is enabled the shards
On 12 Nov 2016 10:21, "Kevin Lemonnier" wrote:
> We've had a lot of problems in the past, but at least for us 3.7.12 (and
3.7.15)
> seems to be working pretty well as long as you don't add bricks. We
started doing
> multiple little clusters and abandoned the idea of
>Don't get me wrong but I'm seeing too many "critical" issues like file
>corruptions, crashes or similar recently
>Is gluster ready for production?
>I'm scared about placing our production VMs (more or less 80) on gluster,
>in case of corruption I'll lose everything
We've
On 12 Nov 2016 03:29, "Krutika Dhananjay" wrote:
>
> Hi,
>
> Yes, this has been reported before by Lindsay Mathieson and Kevin
Lemonnier on this list.
> We just found one issue with replace-brick that we recently fixed.
>
Don't get me wrong but I'm seeing too many
Hi,
Yes, this has been reported before by Lindsay Mathieson and Kevin Lemonnier
on this list.
We just found one issue with replace-brick that we recently fixed.
In your case, are you doing add-brick and changing the replica count (say
from 2 -> 3) or are you adding
"replica-count" number of
Has anyone encountered this behavior?
Running 3.7.16 from centos-gluster37, on CentOS 7.2 with NFS-Ganesha 2.3.0.
VMs are running fine without problems and with sharding on. However, when I
either do an "add-brick" or a "remove-brick start force", VM files will then
be corrupted, and the VM will not
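Roughly the sequence reported above, with invented volume and brick names:

    gluster volume info vmvol      # features.shard: on, VMs running on top
    gluster volume add-brick vmvol server4:/bricks/brick1/vmvol \
        server5:/bricks/brick1/vmvol
    # ... or the remove-brick "start force" invocation quoted in the report
    # -> shortly afterwards the VM images on the volume are corrupted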