On 11/5/19 9:51 PM, Michael S. Tsirkin wrote:
> On Tue, Nov 05, 2019 at 07:11:03PM +0300, Denis Plotnikov wrote:
>> seg_max is restricted to be less than or equal to the virtqueue size
>> according to the Virtio 1.0 specification.
>>
>> Although seg_max can't be set directly, it's worth expressing this
>> de
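
[Editorial note: a minimal, self-contained sketch of the spec restriction quoted above; the names below are illustrative and not QEMU's actual code. The check says seg_max must not exceed the virtqueue size, and in practice a virtio-blk request also needs two extra descriptors (request header and status byte), hence the "- 2".]

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: seg_max must fit within the virtqueue, leaving
 * two descriptors for the request header and the status byte. */
static bool seg_max_fits_queue(uint32_t seg_max, uint16_t queue_size)
{
    return queue_size >= 2 && seg_max <= (uint32_t)queue_size - 2;
}

int main(void)
{
    printf("%d\n", seg_max_fits_queue(126, 128));  /* 1: allowed           */
    printf("%d\n", seg_max_fits_queue(254, 128));  /* 0: violates the spec */
    return 0;
}
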
On 11/1/19 4:09 PM, Vladimir Sementsov-Ogievskiy wrote:
> 01.11.2019 15:34, Max Reitz wrote:
>> On 01.11.19 12:20, Max Reitz wrote:
>>> On 01.11.19 12:16, Vladimir Sementsov-Ogievskiy wrote:
01.11.2019 14:12, Max Reitz wrote:
> On 01.11.19 11:28, Vladimir Sementsov-Ogievskiy wrote:
>>
On 10/24/19 6:35 PM, Vladimir Sementsov-Ogievskiy wrote:
> 24.10.2019 17:26, Kevin Wolf wrote:
>> Some functions require that the caller holds a certain CoMutex for them
>> to operate correctly. Add a function so that they can assert the lock is
>> really held.
>>
>> Cc: qemu-sta...@nongnu.org
>> S
On 10/24/19 1:54 PM, Kevin Wolf wrote:
> Am 24.10.2019 um 11:59 hat Denis Lunev geschrieben:
>> On 10/23/19 6:26 PM, Kevin Wolf wrote:
>>> Some functions require that the caller holds a certain CoMutex for them
>>> to operate correctly. Add a function so that they can as
On 10/24/19 1:57 PM, Kevin Wolf wrote:
> Am 24.10.2019 um 12:01 hat Denis Lunev geschrieben:
>> On 10/23/19 6:26 PM, Kevin Wolf wrote:
>>> qcow2_cache_do_get() requires that s->lock is locked because it can
>>> yield between picking a cache entry and actually taking
On 10/23/19 6:26 PM, Kevin Wolf wrote:
> Some functions require that the caller holds a certain CoMutex for them
> to operate correctly. Add a function so that they can assert the lock is
> really held.
>
> Cc: qemu-sta...@nongnu.org
> Signed-off-by: Kevin Wolf
> ---
> include/qemu/coroutine.h |
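
[Editorial note: for readers without the patch at hand, here is a small self-contained sketch of the idea. The real helper lives in include/qemu/coroutine.h; the types and names below are simplified stand-ins, not QEMU's actual API.]

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for QEMU's coroutine and CoMutex types. */
typedef struct Coroutine { int id; } Coroutine;

typedef struct CoMutex {
    bool locked;
    Coroutine *holder;   /* coroutine that currently owns the mutex */
} CoMutex;

static Coroutine current = { .id = 1 };

/* Stand-in for qemu_coroutine_self(). */
static Coroutine *coroutine_self(void) { return &current; }

/* The kind of helper the patch adds: a function that requires its caller
 * to hold the mutex can assert it is locked *and* owned by this coroutine. */
static void co_mutex_assert_locked(CoMutex *m)
{
    assert(m->locked && m->holder == coroutine_self());
}

int main(void)
{
    CoMutex m = { .locked = true, .holder = &current };
    co_mutex_assert_locked(&m);   /* passes: this coroutine holds the lock */
    printf("lock ownership verified\n");
    return 0;
}
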
On 10/23/19 6:26 PM, Kevin Wolf wrote:
> qcow2_cache_do_get() requires that s->lock is locked because it can
> yield between picking a cache entry and actually taking ownership of it
> by setting offset and increasing the reference count.
>
> Add an assertion to make sure the caller really holds th
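
[Editorial note: a rough sketch of why the lock matters there, using heavily simplified stand-in types rather than the actual qcow2 code. The function may yield between choosing a free cache entry and claiming it by setting its offset and bumping its refcount, so without the caller holding the lock another coroutine could pick the same slot.]

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct CoMutex { bool locked; } CoMutex;

typedef struct CacheEntry {
    uint64_t offset;    /* 0 means "free" */
    int      ref;
} CacheEntry;

static void assert_locked(CoMutex *m) { assert(m->locked); }

static CacheEntry *cache_do_get(CoMutex *lock, CacheEntry *entries, int n,
                                uint64_t offset)
{
    assert_locked(lock);                 /* the assertion being added */

    for (int i = 0; i < n; i++) {
        if (entries[i].offset == 0) {
            /* In the real code a coroutine yield (disk read) can happen
             * here; the held lock keeps the chosen slot from being stolen. */
            entries[i].offset = offset;  /* take ownership ...        */
            entries[i].ref++;            /* ... and pin the entry     */
            return &entries[i];
        }
    }
    return NULL;
}

int main(void)
{
    CoMutex lock = { .locked = true };
    CacheEntry cache[4] = { 0 };
    return cache_do_get(&lock, cache, 4, 0x10000) ? 0 : 1;
}
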
On 6/28/19 6:02 PM, Alberto Garcia wrote:
> On Fri 28 Jun 2019 04:57:08 PM CEST, Kevin Wolf wrote:
>> Am 28.06.2019 um 16:43 hat Alberto Garcia geschrieben:
>>> On Thu 27 Jun 2019 06:05:55 PM CEST, Denis Lunev wrote:
>>>> Please note, I am not talking now abou
On 6/28/19 5:43 PM, Alberto Garcia wrote:
> On Thu 27 Jun 2019 06:05:55 PM CEST, Denis Lunev wrote:
>>>> Thus, with respect to these patterns, subclusters could give us the
>>>> benefits of fast random I/O and a good reclaim rate.
>>> Exactly, but that fast random I/O would
[snip]
>> ===
>>
>> And I think that's all. As you can see I didn't want to go much into
>> the open technical questions (I think the on-disk format would be the
>> main one); the first goal should be to decide whether this is still an
>> interesting feature or not.
>>
>> So
On 6/27/19 6:38 PM, Alberto Garcia wrote:
> On Thu 27 Jun 2019 04:19:25 PM CEST, Denis Lunev wrote:
>
>> Right now QCOW2 is not very efficient with the default cluster size (64k)
>> for fast performance with big disks. Nowadays people use really BIG
>> images and 1-2-3-8 TB
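
[Editorial note: to put rough numbers on that, a back-of-the-envelope calculation of my own, not from the quoted mail. Each qcow2 L2 entry is 8 bytes and maps one cluster, so with 64 KiB clusters the L2 metadata for an 8 TiB image is about:]

    L2 metadata ~= disk_size / cluster_size * 8 bytes
                 = 8 TiB / 64 KiB * 8 B
                 = 2^27 entries * 8 B
                 = 1 GiB

[One 64 KiB L2 table (8192 entries) maps only 512 MiB of guest data, so random I/O spread across a multi-TB image keeps evicting and re-reading L2 tables unless the L2 cache is made very large.]
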
On 6/27/19 4:59 PM, Alberto Garcia wrote:
> Hi all,
>
> a couple of years ago I came to the mailing list with a proposal to
> extend the qcow2 format to add subcluster allocation.
>
> You can read the original message (and the discussion thread that came
> afterwards) here:
>
> https://lists.gnu
On 2/27/19 4:00 PM, Max Reitz wrote:
> On 18.02.19 16:36, Vladimir Sementsov-Ogievskiy wrote:
>> 12.02.2019 15:35, Andrey Shinkevich wrote:
>>> Clean the QCOW2 image of the obsolete bitmap directory when a new one
>>> is allocated and stored. This slows down image growth a little bit.
>>> The flag QCOW
On 12/13/18 9:18 PM, John Snow wrote:
>
> On 12/13/18 6:07 AM, Vladimir Sementsov-Ogievskiy wrote:
>> 12.12.2018 23:41, John Snow wrote:
>>>
>>> On 12/12/18 4:27 AM, Vladimir Sementsov-Ogievskiy wrote:
ping. No dependencies, apply to master.
>>> Sure thing.
>>>
>>> Staged to jsnow/bitmaps