Now that all drivers have a byte-based .bdrv_co_pdiscard(), we
no longer need to worry about the sector-based version. We can
also relax our minimum alignment to 1 for drivers that support it.
Signed-off-by: Eric Blake
Reviewed-by: Stefan Hajnoczi
---
Another step towards killing off sector-based block APIs.
Signed-off-by: Eric Blake
Reviewed-by: Stefan Hajnoczi
---
block/qcow2.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/block/qcow2.c b/block/qcow2.c
index
Another step towards killing off sector-based block APIs.
Signed-off-by: Eric Blake
Reviewed-by: Stefan Hajnoczi
---
block/raw_bsd.c | 9 -
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/block/raw_bsd.c b/block/raw_bsd.c
index
Another step towards killing off sector-based block APIs.
Signed-off-by: Eric Blake
Reviewed-by: Stefan Hajnoczi
---
block/sheepdog.c | 17 ++---
1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/block/sheepdog.c
The NBD protocol doesn't have any notion of sectors, so it is
a fairly easy conversion to use byte-based read and write.
Signed-off-by: Eric Blake
Acked-by: Paolo Bonzini
---
v2: fix typo in commit message
---
block/nbd-client.h | 8
Another step towards killing off sector-based block APIs.
While at it, call directly into nbd-client.c instead of having
a pointless trivial wrapper in nbd.c.
Signed-off-by: Eric Blake
Reviewed-by: Stefan Hajnoczi
---
block/nbd-client.h | 3 +--
Another step towards byte-based interfaces everywhere. Replace
the sector-based bdrv_co_discard() with a new byte-based
bdrv_co_pdiscard(), which silently ignores any unaligned head
or tail. Driver callbacks will be converted in followup patches.
By calculating the alignment outside of the
Since the raw format driver is just passing things through, we can
do byte-based read and write if the underlying protocol does
likewise.
There's one tricky part - if we probed the image format, we document
that we restrict operations on the initial sector. It's easiest to
keep this guarantee by
Another step towards killing off sector-based block APIs.
Unlike write_zeroes, where we can be handed unaligned requests
and must fail gracefully with -ENOTSUP for a fallback, we are
guaranteed that discard requests are always aligned because the
block layer already ignored unaligned head/tail.
Another step towards killing off sector-based block APIs.
Signed-off-by: Eric Blake
Reviewed-by: Stefan Hajnoczi
---
block/blkreplay.c | 9 -
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/block/blkreplay.c b/block/blkreplay.c
Another step towards byte-based interfaces everywhere. Replace
the sector-based driver callback .bdrv_aio_discard() with a new
byte-based .bdrv_aio_pdiscard(). Only raw-posix and RBD drivers
are affected, so it was not worth splitting into multiple patches.
Signed-off-by: Eric Blake
Change sector-based blk_discard(), blk_co_discard(), and
blk_aio_discard() to instead be byte-based blk_pdiscard(),
blk_co_pdiscard(), and blk_aio_pdiscard(). NBD gets a lot
simpler now that ignoring the unaligned portion of a
byte-based discard request is handled under the hood by
the block
There's enough drivers with a sector-based callback that it will
be easier to switch one at a time. This patch adds a byte-based
callback, and then after all drivers are swapped, we'll drop the
sector-based callback.
[checkpatch doesn't like the space after coroutine_fn in
block_int.h, but it's
BlockRequest is the internal struct used by bdrv_aio_*. At the
moment, all such calls were sector-based, but we will eventually
convert to byte-based; start by changing the internal variables
to be byte-based. No change to behavior, although the read and
write code can now go byte-based through
The only remaining uses of paio_submit() were flush (with no
offset or count) and discard (which we are switching to byte-based);
furthermore, the similarly named paio_submit_co() is already
byte-based.
Signed-off-by: Eric Blake
Reviewed-by: Stefan Hajnoczi
Another step towards byte-based interfaces everywhere. Replace
the sector-based bdrv_aio_discard() with a new byte-based
bdrv_aio_pdiscard(), which silently ignores any unaligned head
or tail. Driver callbacks will be converted in followup patches.
Signed-off-by: Eric Blake
Another step towards byte-based interfaces everywhere. Replace
the sector-based bdrv_discard() with a new byte-based
bdrv_pdiscard(), which silently ignores any unaligned head
or tail.
Signed-off-by: Eric Blake
Reviewed-by: Stefan Hajnoczi
---
v2:
The internal function converts to byte-based before calling into
RBD code; hoist the conversion to the callers so that callers
can then be switched to byte-based themselves.
Signed-off-by: Eric Blake
Reviewed-by: Stefan Hajnoczi
---
block/rbd.c | 18
Allow NBD to pass a byte-aligned discard request over the wire.
Prerequisite: Kevin's block branch merged with current qemu.git master,
plus my work on auto-fragmenting (v3 at the moment):
https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg03550.html
Also available as a tag at:
git fetch
Another step towards killing off sector-based block APIs.
Signed-off-by: Eric Blake
Reviewed-by: Stefan Hajnoczi
---
block/gluster.c | 14 ++
1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/block/gluster.c b/block/gluster.c
Public bug reported:
Hello,
REPRODUCE
$ qemu-system-x86_64 -s -S -nographic
QEMU: Terminated via GDBStub
$ gdb
(gdb) target remote :1234
(gdb) load /bin/ls
(gdb) target exec
A program is being debugged already. Kill it? (y or no) y
No executable file now.
EXPECTED
Enable program to be
** Changed in: qemu (Ubuntu Xenial)
Status: New => In Progress
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1490611
Title:
Using qemu >=2.2.1 to convert raw->VHD (fixed) adds extra padding
On Thu, 14 Jul 2016 21:59:45 +1000
David Gibson wrote:
> On Thu, Jul 14, 2016 at 03:50:56PM +0530, Bharata B Rao wrote:
> > On Thu, Jul 14, 2016 at 3:24 PM, Peter Maydell
> > wrote:
> > > On 14 July 2016 at 08:57, David Gibson
On 06/22/2016 09:50 AM, Eric Blake wrote:
> Another step towards byte-based interfaces everywhere. Replace
> the sector-based bdrv_co_discard() with a new byte-based
> bdrv_co_pdiscard(), which silently ignores any unaligned head
> or tail. Driver callbacks will be converted in followup patches.
On Fri, Jul 15, 2016 at 08:38:35PM +0200, Igor Mammedov wrote:
> On Fri, 15 Jul 2016 14:43:53 -0300
> Eduardo Habkost wrote:
> > On Fri, Jul 15, 2016 at 06:30:41PM +0200, Andreas Färber wrote:
> > > Am 15.07.2016 um 18:10 schrieb Eduardo Habkost:
> > > > On Fri, Jul 15, 2016
Am 15.07.2016 um 21:31 schrieb Sergey Fedorov:
> From: Sergey Fedorov
>
> This will fix a compiler warning with -Wclobbered:
>
> http://lists.nongnu.org/archive/html/qemu-devel/2016-07/msg03347.html
>
> Reported-by: Stefan Weil
> Signed-off-by: Sergey
From: Fam Zheng
Acked-by: John Snow
Signed-off-by: Fam Zheng
Signed-off-by: John Snow
---
tests/test-hbitmap.c | 139 +++
1 file changed, 139 insertions(+)
diff --git
From: Fam Zheng
Callers can create an iterator of meta bitmap with
bdrv_dirty_meta_iter_new(), then use the bdrv_dirty_iter_* operations on
it. Meta iterators are also counted by bitmap->active_iterators.
Also add a couple of functions to retrieve granularity and count.
From: Vladimir Sementsov-Ogievskiy
Several functions to provide necessary access to BdrvDirtyBitmap for
block-migration.c
Signed-off-by: Vladimir Sementsov-Ogievskiy
[Add the "finish" parameters. - Fam]
Signed-off-by: Fam Zheng
From: Fam Zheng
HBitmap is an implementation detail of block dirty bitmap that should be hidden
from users. Introduce a BdrvDirtyBitmapIter to encapsulate the underlying
HBitmapIter.
A small difference in the interface is, before, an HBitmapIter is initialized
in place, now the
From: Fam Zheng
For dirty bitmap users to get the size and the name of a
BdrvDirtyBitmap.
Signed-off-by: Fam Zheng
Reviewed-by: John Snow
Reviewed-by: Max Reitz
Signed-off-by: John Snow
---
From: Fam Zheng
Signed-off-by: Fam Zheng
Reviewed-by: John Snow
Reviewed-by: Max Reitz
Signed-off-by: John Snow
---
tests/test-hbitmap.c | 116 +++
1 file
From: Vladimir Sementsov-Ogievskiy
Functions to serialize / deserialize (restore) an HBitmap. The HBitmap should be
saved to a linear sequence of bits independently of endianness and of the bitmap
array element (unsigned long) size. Therefore Little Endian is chosen.
These functions
From: Fam Zheng
We use a loop over bs->dirty_bitmaps to make sure the caller is
only releasing a bitmap owned by bs. Let's also assert that in this case
the caller is releasing a bitmap that does exist.
Signed-off-by: Fam Zheng
Reviewed-by: Max Reitz
v6: Rebase.
02: Added documentation changes as suggested by Max.
v5: Rebase: first 5 patches from last revision are already merged.
Addressed Max's comments:
01: - "block.c" -> "block/dirty-bitmap.c" in commit message.
- "an BdrvDirtyBitmapIter" -> "a BdrvDirtyBitmapIter"
From: Fam Zheng
Upon each bit toggle, the corresponding bit in the meta bitmap will be
set.
Signed-off-by: Fam Zheng
[Amended text inline. --js]
Signed-off-by: John Snow
---
include/qemu/hbitmap.h | 21 +++
util/hbitmap.c
From: Fam Zheng
The added group of operations enables tracking of the changed bits in
the dirty bitmap.
Signed-off-by: Fam Zheng
Reviewed-by: Max Reitz
Signed-off-by: John Snow
---
block/dirty-bitmap.c | 52
Hi all,
Just noticed this patch and wanted to leave a quick comment. The original
issue wasn't with cross-page writes - it was with cross-TB writes.
Cross-page writes become an issue once you reverse the order of the loop, so
that part of the patch is necessary. But someone might want to leave
On 15/07/16 09:45, Stefan Weil wrote:
> Hi,
>
> Am 11.05.2016 um 12:21 schrieb Sergey Fedorov:
> [...]
>> int cpu_exec(CPUState *cpu)
>> @@ -516,8 +576,6 @@ int cpu_exec(CPUState *cpu)
>> CPUArchState *env = _cpu->env;
>> #endif
>> int ret;
>> -TranslationBlock *tb, *last_tb;
>> -
From: Sergey Fedorov
This will fix a compiler warning with -Wclobbered:
http://lists.nongnu.org/archive/html/qemu-devel/2016-07/msg03347.html
Reported-by: Stefan Weil
Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey Fedorov
On 07/15/2016 11:08 AM, Lluís Vilanova wrote:
> Signed-off-by: Lluís Vilanova
> ---
> bsd-user/main.c | 16
> 1 file changed, 16 insertions(+)
>
> @@ -754,6 +760,8 @@ int main(int argc, char **argv)
>
> cpu_model = NULL;
>
> +
From: Sergey Fedorov
This will be useful to enable CPU work on user mode emulation.
Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey Fedorov
Reviewed-by: Alex Bennée
---
cpus.c | 7 ++-
1
From: Sergey Fedorov
Use async_safe_run_on_cpu() to make tb_flush() thread safe.
It can happen that multiple threads schedule a safe work to flush the
translation buffer. To keep statistics and debugging output sane, always
check if the translation buffer has already been
From: Sergey Fedorov
A single variable 'pending_cpus' was used for both counting currently
running CPUs and for signalling the pending exclusive operation request.
To prepare for supporting operations which require a quiescent state,
like translation buffer flush, it is
From: Sergey Fedorov
Make CPU work core functions common between system and user-mode
emulation. User-mode does not have BQL, so process_queued_cpu_work() is
protected by 'exclusive_lock'.
Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey Fedorov
From: Sergey Fedorov
Convert pthread_mutex_t and pthread_cond_t to QemuMutex and QemuCond.
This will allow making some locks and condition variables common
between user and system mode emulation.
Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey
From: Sergey Fedorov
Move the code common between run_on_cpu() and async_run_on_cpu() into a
new function queue_work_on_cpu().
Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey Fedorov
Reviewed-by: Alex Bennée
From: Sergey Fedorov
Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey Fedorov
Reviewed-by: Alex Bennée
---
linux-user/main.c | 10 ++
1 file changed, 10 insertions(+)
diff --git
From: Sergey Fedorov
It is a minimalistic support because bsd-user claims to be _not_
threadsafe.
Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey Fedorov
---
bsd-user/main.c | 15 +++
1 file changed, 15
From: Sergey Fedorov
Hi,
This is a v4 for the series [1]. There's only a small change to keep
tb_flush() statistic and debugging output sane. I also picked up
"Reviewed-by" tags.
This series is available at a public git repository:
From: Sergey Fedorov
To avoid possible confusion, rename flush_queued_work() to
process_queued_cpu_work().
Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey Fedorov
Reviewed-by: Alex Bennée
---
From: Alex Bennée
CPUState is a fairly common pointer to pass to these helpers. This means
if you need other arguments for the async_run_on_cpu case you end up
having to do a g_malloc to stuff additional data into the routine. For
the current users this isn't a massive
From: Sergey Fedorov
This patch is based on the ideas found in work of KONRAD Frederic [1],
Alex Bennée [2], and Alvise Rigo [3].
This mechanism allows performing an operation safely in a quiescent
state. Quiescent state means: (1) no vCPU is running and (2) BQL in
From: Alex Bennée
Useful for counting down.
Signed-off-by: Alex Bennée
Signed-off-by: Sergey Fedorov
---
include/qemu/atomic.h | 4
1 file changed, 4 insertions(+)
diff --git a/include/qemu/atomic.h
On Fri, 15 Jul 2016 14:43:53 -0300
Eduardo Habkost wrote:
> On Fri, Jul 15, 2016 at 06:30:41PM +0200, Andreas Färber wrote:
> > Am 15.07.2016 um 18:10 schrieb Eduardo Habkost:
> > > On Fri, Jul 15, 2016 at 11:11:38AM +0200, Igor Mammedov wrote:
> > >> On Fri, 15 Jul 2016
Now that the block layer will honor max_transfer, we can simplify
our code to rely on that guarantee.
The readv code can call directly into nbd-client, just as the
writev code has done since commit 52a4650.
Interestingly enough, while qemu-io 'w 0 40m' splits into a 32M
and 8M transaction, 'w -z
The raw format layer supports all flags via passthrough - but
it only makes sense to pass through flags that the lower layer
actually supports.
The next patch gives stronger reasoning for why this is correct.
At the moment, the raw format layer ignores the max_transfer
limit of its protocol
Now that the block layer honors max_request, we don't need to
bother with an EINVAL on overlarge requests, but can instead
assert that requests are well-behaved.
Signed-off-by: Eric Blake
Reviewed-by: Fam Zheng
Reviewed-by: Stefan Hajnoczi
Drivers should be able to rely on the block layer honoring the
max transfer length, rather than needing to return -EINVAL
(iscsi) or manually fragment things (nbd). We already fragment
write zeroes at the block layer; this patch adds the fragmentation
for normal writes, after requests have been
Drivers should be able to rely on the block layer honoring the
max transfer length, rather than needing to return -EINVAL
(iscsi) or manually fragment things (nbd). This patch adds
the fragmentation in the block layer, after requests have been
aligned (fragmenting before alignment would lead to
Now that NBD relies on the block layer to fragment things, we no
longer need to track an offset argument for which fragment of
a request we are actually servicing.
While at it, use true and false instead of 0 and 1 for a bool
parameter.
Signed-off-by: Eric Blake
Reviewed-by:
We have max_transfer documented in BlockLimits, but while we
honor it during pwrite_zeroes, we were blindly ignoring it
during pwritev and preadv, leading to multiple drivers having
to implement fragmentation themselves. This series moves
fragmentation to the block layer, then fixes the NBD and
"Dr. David Alan Gilbert" wrote on 07/15/2016
07:29:24 AM:
>
> * Matthew Garrett (mj...@coreos.com) wrote:
>
> Hi Matthew,
> (Ccing in Stefan who has been trying to get vTPM in for years and
>Paolo for any x86ism and especially the ACPIisms, and Daniel for
> crypto
From: Paolo Bonzini
This has better performance because it executes fewer system calls
and does not use a bottom half per disk.
Originally proposed by Ming Lei.
Acked-by: Stefan Hajnoczi
Signed-off-by: Paolo Bonzini
Message-id:
From: Cao jin
replace tab with spaces
Signed-off-by: Cao jin
Message-id: 1468501843-14927-1-git-send-email-caoj.f...@cn.fujitsu.com
Signed-off-by: Stefan Hajnoczi
---
async.c | 2 +-
1 file changed, 1 insertion(+), 1
On 07/15/2016 08:04 AM, Max Reitz wrote:
> On 14.07.2016 22:00, John Snow wrote:
>> On 06/22/2016 11:53 AM, Max Reitz wrote:
>>> On 03.06.2016 06:32, Fam Zheng wrote:
The added group of operations enables tracking of the changed bits in
the dirty bitmap.
Signed-off-by: Fam
From: Vladimir Sementsov-Ogievskiy
We have only one flag for now - the Empty Image flag. The patch fixes the
unused bits specification and marks bit 1 as unused.
Signed-off-by: Vladimir Sementsov-Ogievskiy
Signed-off-by: Denis V. Lunev
On 15.07.2016 20:23, Eric Blake wrote:
On 07/15/2016 02:08 AM, Evgeny Yakovlev wrote:
+ * Write sector 0 with random data to make AHCI storage dirty
If we ever have a case where we open a disk without specifying -raw, the
random data _might_ resemble some other format and cause probe to
From: Roman Pen
Invoking io_setup(MAX_EVENTS), we ask the kernel to create a ring buffer for us
with the specified number of events. But the kernel's ring buffer allocation
logic is a bit tricky (the ring buffer is page-size aligned, plus some per-CPU
allocations are required), so
From: Sergey Fedorov
Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey Fedorov
Reviewed-by: Alex Bennée
---
cpu-exec.c | 17 -
1 file changed, 12 insertions(+), 5 deletions(-)
On 15 July 2016 at 18:43, Peter Maydell wrote:
> In some configurations we implement sys_utimensat() via a wrapper
> that calls either futimens() or utimensat(), depending on the
> arguments (to handle a case where the Linux syscall API diverges
> from the glibc API).
From: Alex Bennée
This ensures that if we find the TB on the slow path that tb->page_addr
is correctly set before being tested.
Signed-off-by: Alex Bennée
Reviewed-by: Sergey Fedorov
Signed-off-by: Sergey Fedorov
From: Sergey Fedorov
These functions are not too big and can be merged together. This makes
the locking scheme clearer and easier to follow.
Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey Fedorov
Reviewed-by: Alex
The following changes since commit 14c7d99333e4a474c65bdae6f99aa8837e8078e6:
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20160714'
into staging (2016-07-14 17:32:53 +0100)
are available in the git repository at:
git://github.com/stefanha/qemu.git
From: Sergey Fedorov
Ensure atomicity and ordering of CPU's 'tb_flushed' access for future
translation block lookup out of 'tb_lock'.
This field can only be touched from another thread by tb_flush() in user
mode emulation. So the only access to be sequential atomic is:
*
From: Sergey Fedorov
These functions will be used to make translation block invalidation safe
with concurrent lockless lookup in the global hash table.
Most targets don't use 'cs_base'; so marking TB as invalid is as simple
as assigning -1 to 'cs_base'. SPARC target stores
From: Sergey Fedorov
In fact, this function does not exactly perform a lookup by physical
address as it is descibed for comment on get_page_addr_code(). Thus
it may be a bit confusing to have "physical" in it's name. So rename it
to tb_htable_lookup() to better reflect its
From: Sergey Fedorov
'HF_SOFTMMU_MASK' is only set when 'CONFIG_SOFTMMU' is defined. So
there's no need for this flag: test 'CONFIG_SOFTMMU' instead.
Suggested-by: Paolo Bonzini
Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey
From: Alex Bennée
Lock contention in the hot path of moving between existing patched
TranslationBlocks is the main drag in multithreaded performance. This
patch pushes the tb_lock() usage down to the two places that really need
it:
- code generation (tb_gen_code)
-
From: Paolo Bonzini
It is naturally expected that some memory ordering should be provided
around qht_insert() and qht_lookup(). Document these assumptions in the
header file and put some comments in the source to denote how that
memory ordering requirements are fulfilled.
From: Sergey Fedorov
When invalidating a translation block, set an invalid CPU state into the
TranslationBlock structure first.
As soon as the TB is marked with an invalid CPU state, there is no need
to remove it from CPU's 'tb_jmp_cache'. However it will be necessary to
From: Sergey Fedorov
This is a small clean up. tb_find_fast() is a final consumer of this
variable so no need to pass it by reference. 'last_tb' is always updated
by subsequent cpu_loop_exec_tb() in cpu_exec().
This change also simplifies calling cpu_exec_nocache() in
From: Sergey Fedorov
Hi,
This is a respin of this series [1].
Here I used a modified version of Paolo's patch to docuement memory
ordering assumptions for certain QHT operations.
The last patch is a suggestion for renaming tb_find_physicall().
This series can be fetch
From: Sergey Fedorov
Ensure atomicity of CPU's 'tb_jmp_cache' access for future translation
block lookup out of 'tb_lock'.
Note that this patch does *not* make CPU's TLB invalidation safe if it
is done from some other thread while the CPU is in its execution loop.
On 07/15/2016 03:46 AM, Stefan Hajnoczi wrote:
> Renames look like this with git-diff(1) when diff.renames = true is set:
>
> diff --git a/a b/b
> similarity index 100%
> rename from a
> rename to b
>
> This raises the "Does not appear to be a unified-diff format patch"
> error because
On 15 July 2016 at 18:48, Lluís Vilanova wrote:
> Peter Maydell writes:
>
>> On 15 July 2016 at 18:08, Lluís Vilanova wrote:
>>> Adds three commandline arguments to the main *-user programs, following
>>> what's
>>> already available in softmmu:
>>>
>>>
Peter Maydell writes:
> On 15 July 2016 at 18:08, Lluís Vilanova wrote:
>> Adds three commandline arguments to the main *-user programs, following
>> what's
>> already available in softmmu:
>>
>> * -trace-enable
>> * -trace-events
>> * -trace-file
> So when would you want
Implement the FS_IOC_GETFLAGS and FS_IOC_SETFLAGS ioctls, as used
by chattr.
Note that the type information encoded in these ioctl numbers
is at odds with the actual type the kernel accesses, as discussed
in http://thread.gmane.org/gmane.linux.file-systems/80164.
Signed-off-by: Peter Maydell
On Fri, Jul 15, 2016 at 06:30:41PM +0200, Andreas Färber wrote:
> Am 15.07.2016 um 18:10 schrieb Eduardo Habkost:
> > On Fri, Jul 15, 2016 at 11:11:38AM +0200, Igor Mammedov wrote:
> >> On Fri, 15 Jul 2016 08:35:30 +0200
> >> Andrew Jones wrote:
> >>> On Thu, Jul 14, 2016 at
In some configurations we implement sys_utimensat() via a wrapper
that calls either futimens() or utimensat(), depending on the
arguments (to handle a case where the Linux syscall API diverges
from the glibc API). Fix a corner case in this handling:
if the syscall is passed a NULL pathname and
QEMU supports ARI on downstream ports and assigned devices may support
ARI in their extended capabilities. The endpoint ARI capability
specifies the next function, such that the OS doesn't need to walk
each possible function, however this next function is relative to the
host, not the guest.
On 07/15/2016 02:08 AM, Evgeny Yakovlev wrote:
>>> + * Write sector 0 with random data to make AHCI storage dirty
>> If we ever have a case where we open a disk without specifying -raw, the
>> random data _might_ resemble some other format and cause probe to
>> misbehave; as such, we also have
On 15 July 2016 at 18:08, Lluís Vilanova wrote:
> Adds three commandline arguments to the main *-user programs, following what's
> already available in softmmu:
>
> * -trace-enable
> * -trace-events
> * -trace-file
So when would you want to use these rather than the existing
On 07/15/2016 12:56 AM, Xiao Guangrong wrote:
>>> Note that you don't have to call visit_next_list() in a virtual visit.
>>> For an example, see prop_get_fdt(). Good enough already?
>>
>> Yes, definitely! I'm queueing Guangrong's patch because it fixes a
>> crash and the leak existed before,
Signed-off-by: Lluís Vilanova
---
bsd-user/main.c | 16
1 file changed, 16 insertions(+)
diff --git a/bsd-user/main.c b/bsd-user/main.c
index 4819b9e..3bef796 100644
--- a/bsd-user/main.c
+++ b/bsd-user/main.c
@@ -21,6 +21,7 @@
#include "qapi/error.h"
Signed-off-by: Lluís Vilanova
---
linux-user/main.c | 19 +++
1 file changed, 19 insertions(+)
diff --git a/linux-user/main.c b/linux-user/main.c
index 617a179..53be5dd 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -24,6 +24,7 @@
#include
Adds three commandline arguments to the main *-user programs, following what's
already available in softmmu:
* -trace-enable
* -trace-events
* -trace-file
Changes in v2
=============
* Tell user to use 'help' instead of '?' [Eric Blake].
* Remove newlines on argument docs for bsd-user [Eric
Stefan Hajnoczi writes:
> On Wed, Jun 22, 2016 at 12:04:30PM +0200, Lluís Vilanova wrote:
>> Adds three commandline arguments to the main *-user programs, following
>> what's
>> already available in softmmu:
>>
>> * -trace-enable
>> * -trace-events
>> * -trace-file
>>
>>
>> Changes in v2
>>
From: "Dr. David Alan Gilbert"
If a migration fails/is cancelled during the postcopy stage we currently
end up with the runstate as finish-migrate, where it should be post-migrate.
There's a small window in precopy where I think the same thing can
happen, but I've never seen
megasas_enqueue_frame always returns with non-NULL cmd->frame.
Remove the "else" part as it is dead code.
Signed-off-by: Paolo Bonzini
---
hw/scsi/megasas.c | 6 +-
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/hw/scsi/megasas.c b/hw/scsi/megasas.c
index