On Fri, Sep 20, 2019 at 09:19:24AM -0400, Kaleb Keithley wrote:
> On Fri, Sep 20, 2019 at 8:39 AM Rinku Kothiya wrote:
>
> > Hi,
> >
> > Release-7 RC1 packages are built. We are planning to have a test day on
> > 26-Sep-2019, and we request your participation. Do post on the lists any
> > testing done and feedback for the same.
> >
> > Packages for Fedora 29, Fedora 30, RHEL 8, CentOS at
>>> I think I can reduce data on the "full" bricks, solving the problem
>>> temporarily.
>>>
>>> The thing is that the behavior changed from 3.12 to 6.5: 3.12 didn't
>>> have problems with almost-full bricks, so I thought everything was fine.
>>> Then, after the upgrade, I ran into this
On Thu, 19 Sep 2019 at 15:40, Milewski Daniel wrote:
> I've observed an interesting behavior in Gluster 5.6. I had a file
> which was placed on an incorrect subvolume (apparently by the
> rebalancing process). I could stat and read the file just fine over
> the FUSE mount point, with this entry
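For context on reports like this: after a DHT rebalance, a file can end up on a subvolume other than its hash target, with a zero-byte "linkto" pointer left on the hash-target brick, so FUSE access still works. A quick way to check what is actually on each brick is to inspect the backend directly (a sketch only; the brick mount point /bricks/brick1 and the file path data/myfile are hypothetical, adjust to your layout):

```shell
# On each server, look at the file on the brick itself, not the mount.
# A DHT link file is zero-length, has the sticky bit set (---------T),
# and carries the trusted.glusterfs.dht.linkto xattr naming the
# subvolume that holds the real data.
ls -l /bricks/brick1/data/myfile
getfattr -n trusted.glusterfs.dht.linkto -e text /bricks/brick1/data/myfile
```

If getfattr reports "No such attribute" and the file has real size, that brick holds the actual data copy.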