On Fri, Feb 28, 2020 at 12:10 PM Kevin Wolf wrote:
>
> This sounds almost like two other bugs we got fixed recently (in the
> QEMU file-posix driver and in the XFS kernel driver) where two writes
> extending the file size were in flight in parallel, but if the shorter
> one completed last, instead [...]
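A minimal model of the class of bug Kevin describes (a sketch, not the
actual QEMU file-posix or XFS code): if the completion path blindly
records the end offset of whichever extending write finished last, a
shorter write completing after a longer one moves the cached file size
backwards.

/* eof_race.c - sketch of the "shorter extending write completes last"
 * bug class; hypothetical model, not the actual QEMU or XFS code. */
#include <stdint.h>
#include <stdio.h>

static uint64_t file_size;  /* cached end-of-file offset */

/* Buggy completion path: stores the end offset of whichever write
 * finished most recently. */
static void complete_buggy(uint64_t off, uint64_t len)
{
    file_size = off + len;
}

/* Fixed completion path: an extending write can only grow the file,
 * so only ever move the cached size forward. */
static void complete_fixed(uint64_t off, uint64_t len)
{
    if (off + len > file_size)
        file_size = off + len;
}

int main(void)
{
    /* Two extending writes in flight: A ends at 256 KiB, B at 4 KiB.
     * A completes first; the shorter B completes last. */
    file_size = 0;
    complete_buggy(0, 256 * 1024);
    complete_buggy(0, 4 * 1024);
    printf("buggy cached size: %llu\n", (unsigned long long)file_size);

    file_size = 0;
    complete_fixed(0, 256 * 1024);
    complete_fixed(0, 4 * 1024);
    printf("fixed cached size: %llu\n", (unsigned long long)file_size);
    return 0;
}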
On 27.02.2020 at 23:25, Stefan Ring wrote:
> On Thu, Feb 27, 2020 at 10:12 PM Stefan Ring wrote:
> > Victory! I have a reproducer in the form of a plain C libgfapi client.
> >
> > However, I have not been able to trigger corruption by just executing
> > the simple pattern in an artificial way. Currently, I need to feed my
> > reproducer 2 GB of data t[...]
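Stefan's actual reproducer is not shown in these excerpts. For
orientation, here is a minimal sketch of a plain C libgfapi client that
keeps two extending writes in flight at once (the hole-then-fill
pattern from the Feb 25 mail below). The volume name, host, file name
and sizes are placeholders, and the async/callback signatures follow
the glusterfs-api 6.x headers (earlier releases omit the glfs_stat
arguments).

/* repro_sketch.c - NOT the actual reproducer; a libgfapi skeleton for
 * issuing two parallel extending writes.
 * Build (assumption): cc repro_sketch.c -lgfapi */
#include <glusterfs/api/glfs.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

enum { HOLE = 8 * 512, TAIL = 64 * 512 };  /* illustrative sizes */

/* Completion callback (glusterfs-api 6.x glfs_io_cbk signature). */
static void done(glfs_fd_t *fd, ssize_t ret,
                 struct glfs_stat *prestat, struct glfs_stat *poststat,
                 void *data)
{
    (void)fd; (void)prestat; (void)poststat;
    fprintf(stderr, "write %s completed: %zd\n", (const char *)data, ret);
}

int main(void)
{
    static char buf[TAIL];
    memset(buf, 0xaa, sizeof(buf));

    glfs_t *fs = glfs_new("testvol");  /* placeholder volume name */
    if (!fs || glfs_set_volfile_server(fs, "tcp", "gluster-host", 24007) ||
        glfs_init(fs)) {
        fprintf(stderr, "gfapi setup failed\n");
        return 1;
    }
    glfs_fd_t *fd = glfs_creat(fs, "repro.img", O_RDWR | O_TRUNC, 0644);
    if (!fd)
        return 1;

    off_t p = 0;  /* current end of file (fresh file) */
    /* Request n: starts HOLE bytes past EOF, leaving a hole behind it. */
    glfs_pwrite_async(fd, buf, TAIL, p + HOLE, 0, done, "n");
    /* Request n+1: exactly fills the hole [p, p + HOLE), in flight
     * concurrently with request n. */
    glfs_pwrite_async(fd, buf, HOLE, p, 0, done, "n+1");

    sleep(1);  /* crude: wait for both callbacks */
    glfs_close(fd);
    glfs_fini(fs);
    return 0;
}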
On Tue, Feb 25, 2020 at 3:12 PM Stefan Ring wrote:
>
> I find many instances with the following pattern:
>
> current file length (= max position + size written): p
> write request n writes from (p + hole_size), thus leaving a hole
> request n+1 writes exactly hole_size, starting from p, thus compl[...]
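Restated as code (a hypothetical helper, not from the thread), the
pattern is easy to scan for in a write trace:

/* Hypothetical helper for scanning a write trace for the pattern above. */
#include <stdbool.h>
#include <stddef.h>
#include <sys/types.h>

struct wr { off_t off; size_t len; };  /* one write request */

/* True if, given current file length p, request n leaves a hole that
 * request n1 (issued later) exactly fills. */
static bool hole_then_fill(off_t p, struct wr n, struct wr n1)
{
    return n.off > p                        /* n starts past EOF: hole */
        && n1.off == p                      /* n1 starts at old EOF    */
        && n1.len == (size_t)(n.off - p);   /* and exactly fills it    */
}

For example, with p = 4096 and hole_size = 512: request n writing from
offset 4608 and request n+1 writing exactly 512 bytes at offset 4096
match the pattern, leaving the file contiguous once both complete.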
On Mon, Feb 24, 2020 at 1:35 PM Stefan Ring wrote:
>
> What I plan to do next is look at the block ranges being written in
> the hope of finding overlaps there.
Status update:
I still have not found out what is actually causing this. I have not
found concurrent writes to overlapping file areas.
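The overlap test implied here is the usual half-open interval check; a
sketch of the predicate one would run over pairs of in-flight requests:

#include <stdbool.h>
#include <stddef.h>
#include <sys/types.h>

/* Two writes [a_off, a_off+a_len) and [b_off, b_off+b_len) touch
 * overlapping file areas iff each starts before the other ends. */
static bool ranges_overlap(off_t a_off, size_t a_len,
                           off_t b_off, size_t b_len)
{
    return a_off < (off_t)(b_off + b_len)
        && b_off < (off_t)(a_off + a_len);
}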
On Mon, Feb 24, 2020 at 2:27 PM Kevin Wolf wrote:
> > > There are quite a few machines running on this host, and we have not
> > > experienced other problems so far. So right now, only ZFS is able to
> > > trigger this for some reason. The guest has 8 virtual cores. I also
> > > tried writing dire[...]
On Mon, Feb 24, 2020 at 1:35 PM Stefan Ring wrote:
>
> [...]. As already stated in
> the original post, the problem only occurs with multiple parallel
> write requests happening.
Actually I did not state that. Anyway, the corruption does not happen
when I restrict the ZFS io scheduler to only 1 r[...]
On Thu, Feb 20, 2020 at 10:17:00AM +0100, Stefan Ring wrote:
> This list seems to be used for patches only. I will re-post to qemu-discuss.
Do include integrat...@gluster.org as well. There should be people on
that list that can help with debugging from the Gluster side.
Niels
Hi,
I have a very curious problem on an oVirt-like virtualization host
whose storage lives on gluster (as qcow2).
The problem is that of the writes done by ZFS, whose sizes according
to blktrace are a mixture of 8, 16, 24, ... 256 (512 byte) blocks,
sometimes the first 4KB or more, but at least t[...]
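The excerpts don't show how the corruption was detected. A common
self-checking approach (a generic sketch, not something from this
thread) is to stamp each 512-byte sector with its own file offset on
the write path and verify the stamps after reading back:

/* verify_stamp.c - generic corruption-detection sketch (not from the
 * thread): stamp each 512-byte sector with its file offset before
 * writing, then check the stamps after reading the file back. */
#include <stdint.h>
#include <string.h>

#define SECTOR 512

/* Write each sector's own file offset into its first 8 bytes. */
static void stamp(uint8_t *buf, size_t len, uint64_t file_off)
{
    for (size_t i = 0; i + SECTOR <= len; i += SECTOR) {
        uint64_t off = file_off + i;
        memcpy(buf + i, &off, sizeof(off));
    }
}

/* Return the offset of the first mis-stamped sector, or -1 if clean. */
static int64_t verify(const uint8_t *buf, size_t len, uint64_t file_off)
{
    for (size_t i = 0; i + SECTOR <= len; i += SECTOR) {
        uint64_t off;
        memcpy(&off, buf + i, sizeof(off));
        if (off != file_off + i)
            return (int64_t)(file_off + i);
    }
    return -1;
}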