On Wed, Aug 3, 2016 at 7:57 AM, Lindsay Mathieson <lindsay.mathie...@gmail.com> wrote:
> On 3/08/2016 10:45 PM, Lindsay Mathieson wrote:
>
> On 3/08/2016 2:26 PM, Krutika Dhananjay wrote:
>
> Once I deleted old content from test volume it mounted to oVirt via
> storage add when previously it would error out. I am now creating a
> test VM with default disk caching settings (pretty sure oVirt is
> defaulting to none rather than ...
Glad the fixes worked for you. Thanks for that update!
-Krutika
On Tue, Aug 2, 2016 at 7:31 PM, David Gossage wrote:
> So far both dd commands that failed previously worked fine on 3.7.14.
>
> Once I deleted old content from test volume it mounted to oVirt via storage
> add when previously it would error out. I am now creating a test VM with
> default disk caching settings (pretty sure oVirt is defaulting to none
> rather than ...
On Tue, Jul 26, 2016 at 9:38 PM, Krutika Dhananjay wrote:
> Yes please, could you file a bug against glusterfs for this issue?
>
> -Krutika

https://bugzilla.redhat.com/show_bug.cgi?id=1360785
On Wed, Jul 27, 2016 at 1:39 AM, David Gossage wrote:
> Has a bug report been filed for this issue or should I create one with
> the logs and results provided so far?
>
> *David Gossage*
> *Carousel Checks Inc. | System Administrator*
> *Office* 708.613.2284
On Fri, Jul 22, 2016 at 12:53 PM, David Gossage wrote:
The point is that even if all other backend storage filesystems behave
correctly, until 3.7.11 there was no error on ZFS. Something happened in the
release of 3.7.12, which nobody has ever been able to explain, that makes the
FUSE mount fail _in ovirt_ (it partly uses dd with iflag=direct; using
iflag=direct yourself gives ...
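Frank's aside about dd can be tried by hand. A minimal sketch of that kind of direct-I/O read probe — the mountpoint, filename, and block size are my assumptions, not taken from oVirt's actual code:

```shell
# Probe a mounted volume with an O_DIRECT read, roughly what a
# dd iflag=direct check does. Point MNT at a FUSE mount of the Gluster
# volume; the default path here is made up for illustration.
MNT=${MNT:-/mnt/glustervol}
probe="$MNT/__direct_io_probe__"

dd if=/dev/zero of="$probe" bs=4096 count=1 2>/dev/null || true   # seed a test file
if dd if="$probe" of=/dev/null bs=4096 count=1 iflag=direct 2>/dev/null; then
    result=OK
else
    result=FAILED    # the branch the thread reports on ZFS bricks after 3.7.12
fi
echo "O_DIRECT read probe: $result"
rm -f "$probe"
```

On a healthy mount the probe prints OK; the reports in this thread correspond to the FAILED branch.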
On Fri, Jul 22, 2016 at 9:37 AM, Vijay Bellur wrote:

On Fri, Jul 22, 2016 at 10:03 AM, Samuli Heinonen wrote:
Here is a quick way how to test this:
GlusterFS 3.7.13 volume with default settings with brick on ZFS dataset.
gluster-test1 is server and gluster-test2 is client mounting with FUSE.
Writing file with oflag=direct is not ok:
[root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
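The dd command above is cut off in the archive. A comparable write-side test, as a sketch — the block size and count are my guesses (O_DIRECT needs block-aligned I/O, hence bs=4096):

```shell
# Run inside the FUSE mount (e.g. on gluster-test2) to reproduce; in any
# other directory it merely sanity-checks the dd invocation itself.
if dd if=/dev/zero of=file oflag=direct bs=4096 count=16 2>/tmp/dd.err; then
    wresult=OK
else
    wresult=FAILED   # the thread reports this failure on ZFS-backed bricks
    cat /tmp/dd.err
fi
echo "oflag=direct write: $wresult"
rm -f file /tmp/dd.err
```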
On Fri, Jul 22, 2016 at 8:12 AM, Vijay Bellur wrote:
> 2016-07-22 1:54 GMT-04:00 Frank Rothenstein <f.rothenst...@bodden-kliniken.de>:
> > The point is that even if all other backend storage filesystems do
> > correctly until 3.7.11 there was no error on ZFS. Something ...
On Fri, Jul 22, 2016 at 8:23 AM, Samuli Heinonen wrote:
> On 21 Jul 2016, at 20:48, David Gossage wrote:
> >
> > Wonder if this may be related at all
> >
> > * #1347553: O_DIRECT support for sharding
> > https://bugzilla.redhat.com/show_bug.cgi?id=1347553
> >
> > Is it possible to downgrade from 3.8 back to 3.7.x
> >
> > Building test ...
2016-07-22 2:32 GMT-05:00 Frank Rothenstein <f.rothenst...@bodden-kliniken.de>:
> I can't tell myself, I'm using the ovirt-4.0-centos-gluster37 repo
> (from ovirt-release40). I have a second gluster-cluster as storage, I
> didn't dare to upgrade, as it simply works...not as an ovirt/vm storage.
On 22 Jul 2016 at 07:54, "Frank Rothenstein" wrote:
>
> So 3.7.11 is the last usable version when using ZFS on bricks, afaik.

Is this issue present even with 3.8?
___
Gluster-devel mailing list
On 22/07/2016 6:14 AM, David Gossage wrote:
> https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.6.4
> * New asynchronous I/O (AIO) support.

Only for ZVOLs I think, not datasets.

--
Lindsay Mathieson
On 22/07/2016 4:00 AM, David Gossage wrote:
> May be anecdotal with small sample size, but the few people who have
> had issues all seemed to have ZFS-backed gluster volumes.

Good point - all my volumes are backed by ZFS, and when using it
directly for virt storage I have to enable caching due to ...
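For context on "enable caching": the disk cache modes under discussion map to QEMU's `-drive cache=` option, where cache=none is the O_DIRECT mode. A hedged sketch — the image path and VM settings below are made up:

```shell
# cache=none opens the image with O_DIRECT -- the mode that trips this bug.
# cache=writeback (or writethrough) goes through the page cache instead,
# which is the workaround described above for ZFS-backed storage.
qemu-system-x86_64 \
    -m 2048 \
    -drive file=/gluster/vms/test.qcow2,format=qcow2,if=virtio,cache=writeback
```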
On Thu, Jul 21, 2016 at 2:48 PM, Kaleb KEITHLEY wrote:
On 07/21/2016 02:38 PM, Samuli Heinonen wrote:
> Hi all,
>
> I’m running oVirt 3.6 and Gluster 3.7 with ZFS backend.
> ...
> Afaik ZFS on Linux doesn’t support aio. Has there been some changes to
> GlusterFS regarding aio?
>
Boy, if that isn't a smoking gun, I don't know what is.
--
Kaleb
Hi all,

I’m running oVirt 3.6 and Gluster 3.7 with a ZFS backend. All hypervisor and
storage nodes run CentOS 7. I was planning to upgrade to 3.7.13 during the
weekend, but I’ll probably wait for more information on this issue.

Afaik ZFS on Linux doesn’t support AIO. Have there been some changes to ...
On Thu, Jul 21, 2016 at 12:48 PM, David Gossage wrote:
> On Thu, Jul 21, 2016 at 9:58 AM, David Gossage <dgoss...@carouselchecks.com> wrote:
> > On Thu, Jul 21, 2016 at 9:52 AM, Niels de Vos wrote:
> > > On Sun, Jul 10, 2016 at 10:49:52AM +1000, Lindsay Mathieson wrote:
> > > > Did a quick test this morning - 3.7.13 is now working with libgfapi -
> > > > yay!
> > > >
> > > > However I do have to enable write-back or write-through ...
On Thu, Jul 21, 2016 at 9:33 AM, Kaleb KEITHLEY wrote:
> On 07/21/2016 10:19 AM, David Gossage wrote:
> > Have there been any release notes or bug reports about the removal of AIO
> > support being intentional?
>
> Build logs of 3.7.13 on Fedora and Ubuntu PPA (Launchpad) show that when
> `configure` ran during the build it reported that Linux AIO was enabled.
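One way to double-check this against one's own build or install; the paths and the libaio-linkage heuristic below are guesses on my part, not something taken from Kaleb's build logs:

```shell
# In a glusterfs source tree after ./configure: config.log records every
# feature test, including the Linux AIO check.
aio_lines=$(grep -ci 'aio' config.log 2>/dev/null) || aio_lines=0
echo "config.log mentions aio on $aio_lines line(s)"

# On an installed system: see whether the storage/posix translator was
# linked against libaio (library path varies by distro -- adjust the glob).
for so in /usr/lib*/glusterfs/*/xlator/storage/posix.so; do
    [ -e "$so" ] || continue
    if ldd "$so" | grep -q libaio; then
        echo "$so: linked with libaio"
    else
        echo "$so: built without libaio"
    fi
done
```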
Have there been any release notes or bug reports about the removal of AIO
support being intentional? In the case of Proxmox it seems to be an easy
workaround to resolve, more or less. In the case of oVirt, however, I can
change the cache method per VM with a custom property key, but the dd
process that ...
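For completeness, two GlusterFS volume options from the 3.7 era govern how client O_DIRECT is handled. Whether either helps on ZFS bricks is an assumption on my part, not something this thread confirms; VOLNAME is a placeholder:

```shell
# Ignore O_DIRECT on the brick side, so the (ZFS) backend never sees the flag:
gluster volume set VOLNAME network.remote-dio enable

# Don't force O_DIRECT down the client-side translator stack:
gluster volume set VOLNAME performance.strict-o-direct off
```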