On Wed, Aug 3, 2016 at 7:57 AM, Lindsay Mathieson <lindsay.mathie...@gmail.com> wrote:
> On 3/08/2016 10:45 PM, Lindsay Mathieson wrote:
> On 3/08/2016 2:26 PM, Krutika Dhananjay wrote:
> Once I deleted old content from test volume it mounted to oVirt via
> storage add when previously it would error out. I am now creating a
> test VM with default disk caching settings (pretty sure oVirt is
> defaulting to none rather than writeback/throu…
Glad the fixes worked for you. Thanks for that update!
-Krutika
On Tue, Aug 2, 2016 at 7:31 PM, David Gossage wrote:
> So far both dd commands that failed previously worked fine on 3.7.14
So far both dd commands that failed previously worked fine on 3.7.14.
Once I deleted old content from the test volume it mounted to oVirt via
storage add, when previously it would error out. I am now creating a test
VM with default disk caching settings (pretty sure oVirt is defaulting to
none rather than writeback/throu…)
On Tue, Jul 26, 2016 at 9:38 PM, Krutika Dhananjay wrote:
> Yes please, could you file a bug against glusterfs for this issue?
https://bugzilla.redhat.com/show_bug.cgi?id=1360785
> -Krutika
On Wed, Jul 27, 2016 at 1:39 AM, David Gossage wrote:
Has a bug report been filed for this issue or should I create one with
the logs and results provided so far?
*David Gossage*
*Carousel Checks Inc. | System Administrator*
*Office* 708.613.2284
FYI, there's been some progress on this issue and the same has been updated
on ovirt-users ML:
http://lists.ovirt.org/pipermail/users/2016-July/041413.html
-Krutika
On Fri, Jul 22, 2016 at 10:03 AM, Samuli Heinonen wrote:
Here is a quick way to test this:
GlusterFS 3.7.13 volume with default settings, with the brick on a ZFS dataset.
gluster-test1 is the server and gluster-test2 is the client mounting with FUSE.
Writing a file with oflag=direct is not OK:
[root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct count…
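Samuli's recipe, fleshed out as a runnable sketch. The directory, block size, and count here are assumptions (his exact dd arguments are cut off above); the point is only the contrast between a buffered and an O_DIRECT write. On a 3.7.13 FUSE mount backed by ZFS bricks, the oflag=direct write is the one reported to fail with "Invalid argument":

```shell
#!/bin/sh
# TESTDIR stands in for the FUSE mount of the Gluster volume
# (e.g. the mount point on gluster-test2); path and sizes are assumed.
TESTDIR=${TESTDIR:-/tmp/gluster-direct-test}
mkdir -p "$TESTDIR"

# Buffered write: goes through the page cache, so it works regardless of
# O_DIRECT support in the underlying brick filesystem.
dd if=/dev/zero of="$TESTDIR/file-buffered" bs=64k count=16 2>/dev/null

# Direct write: opens the file with O_DIRECT (block-aligned bs is required).
# This is the call that fails on the affected Gluster/ZFS combination.
if dd if=/dev/zero of="$TESTDIR/file-direct" bs=64k count=16 oflag=direct 2>/dev/null; then
    echo "direct write: ok"
else
    echo "direct write: failed"
fi
```

Run against a healthy local filesystem both writes succeed; run against an affected FUSE mount only the buffered one does.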
> On 21 Jul 2016, at 20:48, David Gossage wrote:
>
> Wonder if this may be related at all
>
> * #1347553: O_DIRECT support for sharding
> https://bugzilla.redhat.com/show_bug.cgi?id=1347553
>
> Is it possible to downgrade from 3.8 back to 3.7.x
>
> Building test box right now anyway but wond…
I can't tell myself; I'm using the ovirt-4.0-centos-gluster37 repo
(from ovirt-release40). I have a second gluster-cluster as storage. I
didn't dare to upgrade, as it simply works... not as an oVirt/VM
storage.
On Friday, 22.07.2016 at 08:28 +0200, Gandalf Corvotempesta wrote:
On 22 Jul 2016 07:54, "Frank Rothenstein" wrote:
>
> So 3.7.11 is the last usable version when using ZFS on bricks, afaik.
Is this issue present even with 3.8?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/
2016-07-22 1:54 GMT-04:00 Frank Rothenstein:
The point is that even if all other backend storage filesystems behave
correctly, until 3.7.11 there was no error on ZFS. Something happened,
which nobody ever could explain, in the release of 3.7.12 that makes the
FUSE mount _in ovirt_ (it partly uses dd with iflag=direct; using
iflag=direct yourself gives als…
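The read side Frank describes can be sketched like this. The probe path is invented for illustration (oVirt's actual storage-domain checks read their own metadata files); the sketch just shows a buffered write followed by the O_DIRECT read that trips the bug:

```shell
#!/bin/sh
# Create a small file, then read it back with iflag=direct, roughly the
# kind of direct-I/O read oVirt performs. Path is a made-up example.
F=/tmp/direct-read-check/probe
mkdir -p "$(dirname "$F")"
dd if=/dev/zero of="$F" bs=4096 count=1 2>/dev/null

# O_DIRECT read: succeeds on most local filesystems, but fails with
# EINVAL on filesystems or FUSE mounts that refuse O_DIRECT opens.
if dd if="$F" of=/dev/null bs=4096 count=1 iflag=direct 2>/dev/null; then
    echo "direct read: ok"
else
    echo "direct read: failed"
fi
```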
On 22/07/2016 4:00 AM, David Gossage wrote:
> May be anecdotal with a small sample size, but the few people who have
> had the issue all seemed to have ZFS-backed gluster volumes.
Good point - all my volumes are backed by ZFS, and when using it
directly for virt storage I have to enable caching due to…
On 22/07/2016 6:14 AM, David Gossage wrote:
> https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.6.4
> * New asynchronous I/O (AIO) support.
Only for ZVOLs I think, not datasets.
--
Lindsay Mathieson
On 07/21/2016 02:38 PM, Samuli Heinonen wrote:
> Hi all,
>
> I’m running oVirt 3.6 and Gluster 3.7 with ZFS backend.
> ...
> Afaik ZFS on Linux doesn’t support aio. Have there been any changes to
> GlusterFS regarding aio?
>
Boy, if that isn't a smoking gun, I don't know what is.
--
Kaleb
Hi all,
I’m running oVirt 3.6 and Gluster 3.7 with a ZFS backend. All hypervisor and
storage nodes run CentOS 7. I was planning to upgrade to 3.7.13 during the
weekend, but I’ll probably wait for more information on this issue.
Afaik ZFS on Linux doesn’t support aio. Have there been any changes to
GlusterFS regarding aio?
On Sun, Jul 10, 2016 at 10:49:52AM +1000, Lindsay Mathieson wrote:
> Did a quick test this morning - 3.7.13 is now working with libgfapi - yay!
>
> However I do have to enable write-back or write-through caching in qemu
> before the VMs will start; I believe this is to do with aio support. Not…
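Lindsay's workaround amounts to changing the qemu disk cache mode: cache=none opens the image with O_DIRECT (the failing path on this Gluster/ZFS combination), while writeback or writethrough keep the host page cache and avoid it. A sketch, printed rather than executed, with the server, volume, and image names made up for the example:

```shell
#!/bin/sh
# Build and print (do not run) a qemu command line using the libgfapi
# gluster:// backend with writeback caching. Names are placeholders.
CACHE=writeback   # 'none' would imply O_DIRECT and hit the reported failure
DRIVE="file=gluster://gluster-test1/testvol/vm.qcow2,format=qcow2,cache=$CACHE"
echo "qemu-system-x86_64 -m 2048 -drive $DRIVE" | tee /tmp/qemu-cmdline.txt
```

In oVirt/libvirt terms the same knob is the disk `cache` attribute in the domain XML.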
On 07/21/2016 10:19 AM, David Gossage wrote:
> Have there been any release notes or bug reports about the removal of aio
> support being intentional?
Build logs of 3.7.13 on Fedora and the Ubuntu PPA (Launchpad) show that when
`configure` ran during the build it reported that Linux AIO was enabled.
On Sunday 10 July 2016, Kevin Lemonnier wrote:
> On Sun, Jul 10, 2016 at 10:49:52AM +1000, Lindsay Mathieson wrote:
> > Did a quick test this morning - 3.7.13 is now working with libgfapi - yay!
> >
> Is that an update on the gluster or proxmox side? Would be interested to try
> that out too,