Re: [Qemu-devel] qemu 2.9.0 qcow2 file failed to open after hard server reset

2017-12-27 Thread Vasiliy Tolstov
2017-12-22 1:58 GMT+03:00 John Snow <js...@redhat.com>:
>
>
> On 12/21/2017 05:13 PM, Vasiliy Tolstov wrote:
>> Hi! Today my server had a forced reboot and one of my VMs can't start,
>> with the message:
>> qcow2: Marking image as corrupt: L2 table offset 0x3f786d6c207600
>> unaligned (L1 index: 0); further corruption events will be suppressed
>>
>> I'm using Debian jessie with a hand-built qemu 2.9.0. I tried
>> qemu-img check, but it doesn't help. How can I recover the data inside the
>> qcow2 file? (I don't use compression or encryption in it.)
>>
>
> Not looking good if you're missing the very first L2 table in its entirety.
>
> You might be able to go through this thing by hand and work out for
> yourself where the L2 table is (it will be a 64KiB region, aligned to a
> 64KiB boundary, whose entries are all 64-bit, 64KiB-aligned pointers that
> are less than the size of the file. The offset of this missing region is
> not likely to be referenced elsewhere in your file.)
>
> and then, once you've found it, you can update the pointer that's wrong.
> However, where there's smoke there's often fire, so...
>
> best of luck.
>
> --js
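(Editorial sketch, not part of the original thread: assuming the default 64KiB clusters, a brute-force scan like the one John describes could look roughly like this. The file name, the helper and the heuristic are all made up; it simply reports clusters whose non-zero 8-byte entries all look like 64KiB-aligned offsets inside the file, i.e. candidate L2 tables.)

/* scan-l2.c - hypothetical qcow2 L2-table hunting sketch (64KiB clusters assumed) */
#include <stdio.h>
#include <stdint.h>

#define CLUSTER_SIZE 65536
#define ENTRIES (CLUSTER_SIZE / 8)

static uint64_t be64(const unsigned char *p)
{
    uint64_t v = 0;
    for (int i = 0; i < 8; i++) {
        v = (v << 8) | p[i];
    }
    return v;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        return 1;
    }
    fseek(f, 0, SEEK_END);
    long file_size = ftell(f);
    unsigned char buf[CLUSTER_SIZE];

    for (long off = 0; off + CLUSTER_SIZE <= file_size; off += CLUSTER_SIZE) {
        fseek(f, off, SEEK_SET);
        if (fread(buf, 1, CLUSTER_SIZE, f) != CLUSTER_SIZE) {
            break;
        }
        int nonzero = 0, plausible = 1;
        for (int i = 0; i < ENTRIES; i++) {
            uint64_t entry = be64(buf + i * 8);
            uint64_t host = entry & 0x00fffffffffffe00ULL; /* strip flag bits */
            if (!entry) {
                continue;
            }
            nonzero++;
            /* uncompressed data clusters must be 64KiB-aligned and inside the file */
            if (!host || host % CLUSTER_SIZE || host >= (uint64_t)file_size) {
                plausible = 0;
                break;
            }
        }
        if (plausible && nonzero) {
            printf("candidate L2 table at 0x%lx (%d non-zero entries)\n", off, nonzero);
        }
    }
    fclose(f);
    return 0;
}

(Once a candidate is found, the bad L1 entry would have to be rewritten by hand, e.g. with a hex editor, as a big-endian value with the flag bits preserved; back up the image first.)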

If I use a raw image, am I right that this kind of corruption can't happen with raw?

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] qemu 2.9.0 qcow2 file failed to open after hard server reset

2017-12-27 Thread Vasiliy Tolstov
2017-12-22 12:23 GMT+03:00 Daniel P. Berrange <berra...@redhat.com>:
> On Thu, Dec 21, 2017 at 05:58:47PM -0500, John Snow wrote:
>>
>>
>> On 12/21/2017 05:13 PM, Vasiliy Tolstov wrote:
>> > Hi! Today my server had a forced reboot and one of my VMs can't start,
>> > with the message:
>> > qcow2: Marking image as corrupt: L2 table offset 0x3f786d6c207600
>> > unaligned (L1 index: 0); further corruption events will be suppressed
>> >
>> > I'm using Debian jessie with a hand-built qemu 2.9.0. I tried
>> > qemu-img check, but it doesn't help. How can I recover the data inside the
>> > qcow2 file? (I don't use compression or encryption in it.)
>> >
>>
>> Not looking good if you're missing the very first L2 table in its entirety.
>>
>> You might be able to go through this thing by hand and work out for
>> yourself where the L2 table is (it will be a 64KiB region, aligned to a
>> 64KiB boundary, whose entries are all 64-bit, 64KiB-aligned pointers that
>> are less than the size of the file. The offset of this missing region is
>> not likely to be referenced elsewhere in your file.)

Too hard, the client restored a backup because he didn't want to wait for me =))

>
> Fun. That rather makes you wish that every single distinct type of table
> in QCow2 files had a unique UUID value stored in it, to make forensics
> like this easier :-)
>

=)

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



[Qemu-devel] qemu 2.9.0 qcow2 file failed to open after hard server reset

2017-12-21 Thread Vasiliy Tolstov
Hi! Today my server had a forced reboot and one of my VMs can't start,
with the message:
qcow2: Marking image as corrupt: L2 table offset 0x3f786d6c207600
unaligned (L1 index: 0); further corruption events will be suppressed

I'm using Debian jessie with a hand-built qemu 2.9.0. I tried
qemu-img check, but it doesn't help. How can I recover the data inside the
qcow2 file? (I don't use compression or encryption in it.)

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] sheepdog block driver and read write error policy

2017-11-16 Thread Vasiliy Tolstov
2017-11-16 11:27 GMT+03:00 Fam Zheng <f...@redhat.com>:
> On Thu, 11/16 11:11, Vasiliy Tolstov wrote:
>> Hi. I'm trying to write my own sheepdog-compatible daemon and test it with qemu.
>> Some time ago a read/write error policy was added to qemu, to allow stopping
>> the domain, continuing, and so on. As I see it, in the case of sheepdog
>> this policy is ignored and qemu tries to reconnect to the sheepdog daemon.
>> If nobody objects I can try to fix this, so that when no policy is specified
>> it keeps working as it does now.
>> Where should I start to understand how this works in the case of a raw file?
>
> The driver callbacks (sd_co_readv/sd_co_writev) should simply return error
> instead of retrying.
>
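(A rough editorial sketch of what that suggestion means, not the actual patch; the signature is abbreviated from memory and do_sheepdog_io() is a made-up placeholder. The idea is that the coroutine callback propagates a negative errno instead of retrying internally, so the generic block layer can apply whatever rerror/werror policy is configured on the device.)

static coroutine_fn int sd_co_readv(BlockDriverState *bs, int64_t sector_num,
                                    int nb_sectors, QEMUIOVector *qiov)
{
    /* hypothetical helper doing the actual network I/O */
    int ret = do_sheepdog_io(bs, sector_num, nb_sectors, qiov);
    if (ret < 0) {
        return ret; /* e.g. -EIO: lets rerror=stop/report/ignore take effect */
    }
    return 0;
}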


Thanks. How can I pass options to the block driver? (As I understand it, the
read/write error policy is tied to the concrete device - scsi/ide - and needs to be
passed down to the lower level.)

Also, about the sheepdog driver: I re-read the discussion at
https://patchwork.ozlabs.org/patch/501533/ and have a question - if
all of the logic for overlapping requests and oid calculation moved from qemu
into the sheepdog daemon,
would that slow down iops or not? (I think that if the qemu
driver only read/wrote/discarded block data it could be much simpler
and have fewer bugs, and the sheepdog daemon could handle all the
overlapping and inode updates internally.)
I understand this might mean a completely new sheepdog qemu driver,
but I think that if the sheepdog driver were more like nbd/rbd it could be
much simpler...


-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



[Qemu-devel] sheepdog block driver and read write error policy

2017-11-16 Thread Vasiliy Tolstov
Hi. I'm trying to write my own sheepdog-compatible daemon and test it with qemu.
Some time ago a read/write error policy was added to qemu, to allow stopping
the domain, continuing, and so on. As I see it, in the case of sheepdog
this policy is ignored and qemu tries to reconnect to the sheepdog daemon.
If nobody objects I can try to fix this, so that when no policy is specified
it keeps working as it does now.
Where should I start to understand how this works in the case of a raw file?

Also, I'm a Fedora user with the virt-preview repo and I see that if the sheepdog
daemon is unavailable for 2 or more minutes and is started again after that, qemu
crashes. Do I need to retest with the latest qemu version?
b2587932582333197c88bf663785b19f441989d7
f1af3251f885f3b8adf73ba078500f2eeefbedae
5eceb01adfbe513c0309528293b0b86e32a6e27d


qemu-system-x86_64 -machine q35 -accel kvm -m 512M -vnc 0.0.0.0:1
-device virtio-scsi-pci,id=scsi0 -drive
aio=native,rerror=stop,werror=stop,if=none,format=raw,id=drive-scsi-disk0,cache=none,file=sheepdog:test,discard=unmap,detect-zeroes=off
 -device scsi-hd,bus=scsi0.0,drive=drive-scsi-disk0,id=device-scsi-disk0
qemu-system-x86_64: failed to get the header, Invalid argument
qemu-system-x86_64: Failed to connect socket: Connection refused
qemu-system-x86_64: Failed to connect socket: Connection refused
qemu-system-x86_64: Failed to connect socket: Connection refused
qemu-system-x86_64: failed to send a req, Bad file descriptor
qemu-system-x86_64: Failed to connect socket: Connection refused
qemu-system-x86_64: Failed to connect socket: Connection refused
qemu-system-x86_64: Failed to connect socket: Connection refused
qemu-system-x86_64: Failed to connect socket: Connection refused
qemu-system-x86_64: Failed to connect socket: Connection refused
qemu-system-x86_64: Failed to connect socket: Connection refused
qemu-system-x86_64: Failed to connect socket: Connection refused
qemu-system-x86_64: Failed to connect socket: Connection refused



qemu-system-x86_64: /builddir/build/BUILD/qemu-2.9.0/util/iov.c:167:
iov_send_recv: Assertion `niov < iov_cnt' failed.
Makefile:5: recipe for target 'test' failed
make: *** [test] Aborted (core dumped)

And if someone from the Fedora/Red Hat team is present - when will a new qemu
version be available in the virt-preview repo for Fedora 25, or is it EOL?

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



[Qemu-devel] qemu and own disk driver writing questions

2017-04-13 Thread Vasiliy Tolstov
Hi! I want to develop some sort of qemu network block driver for my own
storage. I can either write a qemu driver, or write a tcmu/scst userspace driver
and attach it to qemu as a block device (I'm on Linux).
So, in theory, which path has minimal overhead in terms of memory copies
and performance?
Where can I find some info on how qemu handles the data that needs to be
written to or read from a block device passed to a VM?
Thanks.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [RFC][PATCH 0/6] "bootonceindex" property

2017-03-15 Thread Vasiliy Tolstov
On 14 Mar 2017 at 19:58, "Gerd Hoffmann" wrote:

  Hi,

>   - Does this approach make sense? Any better ideas?

What is the use case?

Sometimes I'm wondering why we actually need the "once" thing.  Looks
like people use it for installs, but I fail to see why.  I usually
configure my guests with hard disk as bootindex=1 and install media
(cdrom or net) as bootindex=2.  In case the hard disk is blank it
automatically falls back to the second and boots the installer, and when
the installer is finished the hard disk isn't blank any more and the
just installed system gets booted ...

cheers,
  Gerd
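
(Editorial note, not from Gerd's mail: a hedged example of that configuration; the device types, file names and bus choices are made up, but bootindex is the standard device property being described.)

qemu-system-x86_64 -m 1024 \
  -drive file=disk.qcow2,if=none,id=hd0 \
  -device virtio-blk-pci,drive=hd0,bootindex=1 \
  -drive file=installer.iso,if=none,id=cd0,media=cdrom \
  -device ide-cd,drive=cd0,bootindex=2

With a blank disk.qcow2 the firmware falls through to the CD; once the installer has written a bootable system, the disk wins.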

This is useful for reinstalls. In that case the hard disk already has boot
data on it and would be picked as the boot device.


[Qemu-devel] questions and help with tight png encoding in qemu

2017-01-20 Thread Vasiliy Tolstov
Hi! I'm trying to add support for TightPng to a node.js package (not noVNC)
and am stuck on some issues. I found an rfb spec description, but maybe
it is not accurate in the case of TightPng...
https://github.com/rfbproto/rfbproto/blob/master/rfbproto.rst#tight-encoding

1) Is a rectangle with TightPng encoding sent with encoding Tight or TightPng?
As far as I can see, qemu sends encoding TightPng (-260).

2) I'm trying to read the length of the PNG data by reading a byte (b) from the
stream; if (b & 0x80) then len += (b & 0x7f) << 7 and I read the next byte; if (b
& 0x80) then len += (b & 0xff) << 14; but after that I need to skip 8 more
bytes from the stream to get to the PNG data... Can somebody correct my length
calculation?
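
(Editorial note: the following is just my reading of the compact length in the Tight/TightPng spec, not an authoritative answer. Each byte carries 7 value bits, least-significant group first, and the top bit means "another byte follows"; the first byte already contributes the low 7 bits unshifted, so adding it as (b & 0x7f) << 7 gives a wrong length, which may be where the extra skipped bytes come from. A minimal sketch in C:)

#include <stdio.h>

/* Read the 1-3 byte compact length that precedes the zlib/PNG data. */
int read_compact_len(FILE *f)
{
    int b = fgetc(f);
    int len = b & 0x7f;

    if (b & 0x80) {
        b = fgetc(f);
        len |= (b & 0x7f) << 7;
        if (b & 0x80) {
            b = fgetc(f);
            len |= (b & 0xff) << 14;
        }
    }
    return len; /* the next 'len' bytes are the PNG image */
}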

Thanks for any help!

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [PATCH] block/mirror: enable detect zeroes when driving mirror

2016-11-20 Thread Vasiliy Tolstov
On 21 Nov 2016 at 4:27, "Yang Wei" wrote:
>
> In order to preserve sparse disk image, detect_zeroes
> should also be enabled when bdrv_get_block_status_above()
> returns BDRV_BLOCK_DATA
>
> Signed-off-by: Yang Wei 
> ---

Hi, does this patch fix the bug in
https://bugzilla.redhat.com/show_bug.cgi?id=1219541, or is it unrelated to
that issue?

>  block/mirror.c | 7 +++
>  1 file changed, 7 insertions(+)
>
> diff --git a/block/mirror.c b/block/mirror.c
> index b2c1fb8..8b20b7a 100644
> --- a/block/mirror.c
> +++ b/block/mirror.c
> @@ -76,6 +76,7 @@ typedef struct MirrorOp {
>  QEMUIOVector qiov;
>  int64_t sector_num;
>  int nb_sectors;
> +BlockdevDetectZeroesOptions backup_detect_zeroes;
>  } MirrorOp;
>
>  static BlockErrorAction mirror_error_action(MirrorBlockJob *s, bool read,
> @@ -132,6 +133,8 @@ static void mirror_write_complete(void *opaque, int ret)
>  {
>  MirrorOp *op = opaque;
>  MirrorBlockJob *s = op->s;
> +BlockDriverState *target = s->target;
> +target->detect_zeroes = op->backup_detect_zeroes;
>  if (ret < 0) {
>  BlockErrorAction action;
>
> @@ -148,6 +151,7 @@ static void mirror_read_complete(void *opaque, int ret)
>  {
>  MirrorOp *op = opaque;
>  MirrorBlockJob *s = op->s;
> +BlockDriverState *target = s->target;
>  if (ret < 0) {
>  BlockErrorAction action;
>
> @@ -160,6 +164,9 @@ static void mirror_read_complete(void *opaque, int ret)
>  mirror_iteration_done(op, ret);
>  return;
>  }
> +op->backup_detect_zeroes = target->detect_zeroes;
> +target->detect_zeroes = s->unmap ? BLOCKDEV_DETECT_ZEROES_OPTIONS_UNMAP :
> +BLOCKDEV_DETECT_ZEROES_OPTIONS_ON;
>  blk_aio_pwritev(s->target, op->sector_num * BDRV_SECTOR_SIZE, &op->qiov,
>  0, mirror_write_complete, op);
>  }
> --
> 2.10.2
>
>


Re: [Qemu-devel] Automated testing of block/gluster.c with upstream Gluster

2016-06-28 Thread Vasiliy Tolstov
2016-06-28 13:27 GMT+03:00 Niels de Vos <nde...@redhat.com>:
> I'm not familiar with packer, but it seems very similar to virt-builder.
> It does not look to be available in standard CentOS repositories.
> Because the tests will run in the CentOS CI, I'd prefer to use as few
> external tools as possible.


packer is a static binary, so you can download it if your CI allows outgoing
connections...
If you settle on some tools, please include info about your test environment
and the tools in this email thread.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@yoctocloud.net



Re: [Qemu-devel] Automated testing of block/gluster.c with upstream Gluster

2016-06-28 Thread Vasiliy Tolstov
I recommend using packer for this; it is able to run a qemu VM, run scripts
and output artifacts.
On 28 Jun 2016 at 12:10, "Niels de Vos" wrote:

> Hi,
>
> it seems we broke the block/gluster.c functionality with a recent patch
> in upstream Gluster. In order to prevent this from happening in the
> future, I would like to set up a Jenkins job that installs a plain CentOS
> with its version of QEMU, and nightly builds of upstream Gluster.
> Getting a notification about breakage the day after a patch got merged
> seems like a reasonable approach.
>
> The test should at least boot the generic CentOS cloud image (slightly
> modified with libguestfs) and return a success/fail. I am wondering if
> there are automated tests like this already, and if I could (re)use some
> of the scripts for it. At the moment, I am thinking to do it like this:
>  - download the image [1]
>  - set kernel parameters to output on the serial console
>  - add an auto-login user/script
>  - have the script write "bootup complete" or something
>  - have the script poweroff the VM
>  - script that started the VM checks for the "bootup complete" message
>  - return success/fail
>
> Ideas and suggestions for running more heavy I/O in the VM are welcome
> too.
>
> Thanks,
> Niels
>
>
> 1. http://cloud.centos.org/centos/7/images/
>
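(Editorial sketch, not from Niels' mail: one very rough way the boot check above could be scripted, assuming the cloud image and the "bootup complete" marker described in the list; the flags and file names are illustrative only.)

timeout 300 qemu-system-x86_64 -enable-kvm -m 1024 -display none \
  -drive file=CentOS-7-x86_64-GenericCloud.qcow2,if=virtio,format=qcow2 \
  -serial file:console.log
grep -q "bootup complete" console.log && echo PASS || echo FAIL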


Re: [Qemu-devel] Why is SeaBIOS used with -kernel?

2016-04-01 Thread Vasiliy Tolstov
2016-04-01 11:47 GMT+03:00 Paolo Bonzini <pbonz...@redhat.com>:
> That's an interesting idea.  We can look at it for 2.7.


That's fine =) I'm waiting too.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [PULL 0/5] ipv4-only and ipv6-only support

2016-03-31 Thread Vasiliy Tolstov
2016-03-31 15:10 GMT+03:00 Samuel Thibault <samuel.thiba...@gnu.org>:
> Which prefix do you get inside the guest?  Which version of Debian the
> guest is?  I'm really surprised that fec0:: gets prefered over an ipv4
> address.


I forget, but maybe I can try it again in the near future.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [PULL 0/5] ipv4-only and ipv6-only support

2016-03-31 Thread Vasiliy Tolstov
2016-03-31 15:14 GMT+03:00 Samuel Thibault <samuel.thiba...@gnu.org>:
> But your host system does not have a default ipv6 route, right?  In
> that case, qemu gets an error, and with the latest version of the patch
> reports it to the guest via icmpv6, which then promptly fallbacks to
> ipv4.


That's fine; I'll try to update the older patch and check.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [PULL 0/5] ipv4-only and ipv6-only support

2016-03-31 Thread Vasiliy Tolstov
2016-03-31 15:03 GMT+03:00 Vasiliy Tolstov <v.tols...@selfip.ru>:
>
> This is Debian. I'm using an old patch on qemu-2.1.0.


Maybe my problem is that my host system has an ipv6 address but I'm not
able to connect to remote servers via ipv6.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [PULL 0/5] ipv4-only and ipv6-only support

2016-03-31 Thread Vasiliy Tolstov
2016-03-31 14:57 GMT+03:00 Samuel Thibault <samuel.thiba...@gnu.org>:
> I'm surprised that the vm tries to access the network via ipv6 by
> default.  Which OS is this?  With the default fec0:: prefix, ipv4
> should be preferred.  Latest versions of the patch (which was committed)
> also make qemu send a net unreachable icmpv6 response when ipv6 is not
> available on the host, so that the guest knows without delay that ipv6
> is not actually available.


This is Debian. I'm using an old patch on qemu-2.1.0.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [PULL 0/5] ipv4-only and ipv6-only support

2016-03-31 Thread Vasiliy Tolstov
2016-03-31 14:44 GMT+03:00 Samuel Thibault <samuel.thiba...@gnu.org>:
> The first commit, at least, would very probably be welcome: now that
> ipv6 will be enabled by default, this may disturb some people for
> whatever reason, so they may want to have a way to disable it.
>
> Also, some people may want to start testing ipv6-only setups. The 4
> subsequent commits are useful in such setups, to automatically setup the
> DNS server address. It is not strictly needed, since the user can set by
> hand inside the guest an external DNS server IPv6 address to be used.


Yes, I'm using an older version of this patch, and without working ipv6 on the
host my VM tries to access the network via ipv6 (preferred) and is not able to
connect. I need to flush the ipv6 address inside the VM.
So disabling ipv6 is a very much needed feature.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



[Qemu-devel] -qmp tcp:localhost:xxx, server, nowait inside console and in program

2016-02-16 Thread Vasiliy Tolstov
Hi! I have a strange thing: I'm trying to run qemu with the -qmp flag. When I
run qemu with -qmp tcp:localhost:,server,nowait inside a Linux
console, everything works fine and telnet gets the qemu banner, but when I run it
from a golang program, telnet connects but does not get the banner.
The qemu args are identical.
The difference is that qemu inside the program does not have stdin (it is
/dev/null) and stdout/stderr go to additional writers (a log writer).

Why could this be happening?
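
(Editorial note, describing generic QMP behaviour rather than anything specific to this report: the "banner" is the QMP greeting that the server writes to every client as soon as the connection is accepted, and it does not depend on qemu's stdin/stdout; after the greeting the client has to send qmp_capabilities before any other command. The exchange looks roughly like:)

S: {"QMP": {"version": {...}, "capabilities": []}}
C: {"execute": "qmp_capabilities"}
S: {"return": {}}
C: {"execute": "query-status"}
S: {"return": {"status": "running", "running": true, "singlestep": false}}

(So if the greeting never shows up when qemu is spawned from the program, the usual suspects are connecting before the listening socket exists, or the spawned process not actually getting the same -qmp arguments.)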

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [Qemu-block] [PATCH 00/10] qcow2: Implement image locking

2015-12-23 Thread Vasiliy Tolstov
2015-12-23 18:08 GMT+03:00 Denis V. Lunev <den-li...@parallels.com>:
> you should do this by asking the running QEMU, not via
> qemu-img, which is badly wrong.
>
> Den


OK, if this is possible via qemu's qmp/hmp, no problem.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [PATCH 00/10] qcow2: Implement image locking

2015-12-23 Thread Vasiliy Tolstov
2015-12-22 19:46 GMT+03:00 Kevin Wolf <kw...@redhat.com>:
> Enough innocent images have died because users called 'qemu-img snapshot' 
> while
> the VM was still running. Educating the users doesn't seem to be a working
> strategy, so this series adds locking to qcow2 that refuses to access the 
> image
> read-write from two processes.
>
> Eric, this will require a libvirt update to deal with qemu crashes which leave
> locked images behind. The simplest thinkable way would be to unconditionally
> override the lock in libvirt whenever the option is present. In that case,
> libvirt VMs would be protected against concurrent non-libvirt accesses, but 
> not
> the other way round. If you want more than that, libvirt would have to check
> somehow if it was its own VM that used the image and left the lock behind. I
> imagine that can't be too hard either.


This breaks the ability to create a disk-only snapshot while the VM is running.
Or am I missing something?

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] -loadvm and disk only snapshot

2015-12-16 Thread Vasiliy Tolstov
2015-12-16 19:19 GMT+03:00 Eric Blake <ebl...@redhat.com>:
> Won't work (qemu is not able to load disk snapshots without memory).
> What libvirt does instead is to use qemu-img snapshot -c to change the
> snapshot back to the active layer, then boot qemu fresh on the correct
> contents.
>

qemu-img snapshot -a? As far as I can see, -c creates a new snapshot.

> Of course, patches to change behavior aren't out of the question, but
> there's already a lot of cruft there to be aware of, and making sure we
> don't regress libvirt behavior.

My use case is to create a multilayered qcow2 image. Base layer: a
clean, fresh (for example Debian) system; next layer: LAMP; next layer:
RAILS (based on top of the clean Debian system); and so on.
I want to create the images from packer and want to write a packer plugin
for this case (an installer that is able to read from the qcow2 file I create
later).

Is this possible with qemu-img snapshots for my use case? I don't
want to snapshot memory because I don't need it. To get consistent
snapshots I can sync the disk or freeze the fs via ioctl.
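
(Editorial note: if the layering itself is the goal, an alternative to internal snapshots is a qcow2 backing chain; a hedged sketch with made-up file names, and note that newer qemu-img versions also want -F qcow2 to record the backing format explicitly.)

qemu-img create -f qcow2 debian-base.qcow2 10G
# install the clean Debian system into debian-base.qcow2 ...
qemu-img create -f qcow2 -b debian-base.qcow2 lamp.qcow2
# boot lamp.qcow2 and install LAMP; debian-base.qcow2 stays untouched
qemu-img create -f qcow2 -b lamp.qcow2 rails.qcow2

Each layer only records the blocks that differ from the layer below it, which maps fairly naturally onto a packer builder that takes the previous layer as input.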




-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



[Qemu-devel] -loadvm and disk only snapshot

2015-12-16 Thread Vasiliy Tolstov
Hi. I'm trying to find some info on how to run a qemu VM from a snapshot, but
all the pages contain info about running a VM from a full VM snapshot with
memory state.
What happens when I run qemu with -loadvm from a disk-only snapshot
(created by blockdev-snapshot-internal-sync)?

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



[Qemu-devel] blockcopy qemu fail and libvirt

2015-12-03 Thread Vasiliy Tolstov
Hi, I'm using qemu 2.4.0 and libvirt 1.2.16 and trying blockcopy to migrate a VM disk:

virsh -c qemu+ssh://root@xxx/system blockcopy domain sda /dev/nbd2
--wait --pivot

libvirt says:
Successfully pivoted
2015-12-01 14:37:18.188+: 18288: info : libvirt version: 1.2.16
2015-12-01 14:37:18.188+: 18288: warning :
virEventPollUpdateTimeout:268 : Ignoring invalid update timer -1

but the virsh command returns a zero exit code.

The qemu blockcopy failed and the target device contains zeroes.

I see some qemu git commits about block/migration and block that could
fix this issue; can the qemu devs confirm that this bug is fixed?

Also, libvirt devs - do you know about this libvirt issue (where virsh
does a blockcopy and the qemu process does not successfully complete the
blockcopy)?

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [PATCH RFC] allow to delete sheepdog snapshot

2015-12-01 Thread Vasiliy Tolstov
2015-12-02 8:23 GMT+03:00 Hitoshi Mitake <mitake.hito...@gmail.com>:
> Seems that your patch violates the coding style of qemu. You can check the
> style with scripts/checkpatch.pl.
>
>
> Commented-out code isn't good. You should remove it (in addition, it
> wouldn't be required).


Thanks!

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



[Qemu-devel] [PATCH RFC] allow to delete sheepdog snapshot

2015-12-01 Thread Vasiliy Tolstov
Signed-off-by: Vasiliy Tolstov <v.tols...@selfip.ru>
---
 block/sheepdog.c | 59 ++--
 1 file changed, 57 insertions(+), 2 deletions(-)

diff --git a/block/sheepdog.c b/block/sheepdog.c
index d80e4ed..c3fae50 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -2484,8 +2484,63 @@ static int sd_snapshot_delete(BlockDriverState *bs,
  const char *snapshot_id,
  const char *name,
   Error **errp)
 {
-/* FIXME: Delete specified snapshot id.  */
-return 0;
+uint32_t snap_id = 0;
+uint32_t vdi = 0;
+char snap_tag[SD_MAX_VDI_TAG_LEN];
+Error *local_err = NULL;
+int fd, ret;
+char buf[SD_MAX_VDI_LEN + SD_MAX_VDI_TAG_LEN];
+BDRVSheepdogState *s = bs->opaque;
+unsigned int wlen = SD_MAX_VDI_LEN + SD_MAX_VDI_TAG_LEN, rlen = 0;
+
+memset(buf, 0, sizeof(buf));
+memset(snap_tag, 0, sizeof(snap_tag));
+pstrcpy(buf, SD_MAX_VDI_LEN, s->name);
+snap_id = strtoul(snapshot_id, NULL, 10);
+if (!snap_id) {
+pstrcpy(snap_tag, sizeof(snap_tag), snapshot_id);
+pstrcpy(buf + SD_MAX_VDI_LEN, SD_MAX_VDI_TAG_LEN, snap_tag);
+}
+
+ret = find_vdi_name(s, s->name, snap_id, snap_tag, &vdi, true, &local_err);
+if (ret) {
+return ret;
+}
+
+SheepdogVdiReq hdr = {
+.opcode = SD_OP_DEL_VDI,
+.data_length = wlen,
+.flags = SD_FLAG_CMD_WRITE,
+.snapid = snap_id,
+};
+SheepdogVdiRsp *rsp = (SheepdogVdiRsp *)&hdr;
+
+fd = connect_to_sdog(s, &local_err);
+if (fd < 0) {
+error_report_err(local_err);
+return -1;
+}
+
+ret = do_req(fd, s->aio_context, (SheepdogReq *)&hdr, buf, &wlen, &rlen);
+closesocket(fd);
+if (ret) {
+return ret;
+}
+
+switch (rsp->result) {
+case SD_RES_NO_VDI:
+error_report("%s was already deleted", s->name);
+case SD_RES_SUCCESS:
+break;
+default:
+error_report("%s, %s", sd_strerror(rsp->result), s->name);
+return -1;
+}
+
+//ret = reload_inode(s, snap_id, snap_tag);
+
+return ret;
 }

 static int sd_snapshot_list(BlockDriverState *bs, QEMUSnapshotInfo **psn_tab)
--
2.5.0



[Qemu-devel] [PATCH RFC] allow to delete sheepdog snapshot

2015-12-01 Thread Vasiliy Tolstov
This is a first try at getting the ability to remove a snapshot from sheepdog.
I'm stuck at a simple thing: sheepdog does not allow removing a
snapshot that still has objects, and fails with the message:
ERROR [deletion] delete_vdi_work(1811) vid: X still has objects

I checked the dog/vdi.c code and see that when deleting a vdi, sheepdog writes
to all of its objects. Do I really need to do the same thing?
Can somebody help me?
Also, as far as I can see, block/sheepdog.c has delete-vdi code and it is very
simple compared to dog/vdi.c...
So, please help me =)

Vasiliy Tolstov (1):
  allow to delete sheepdog snapshot

 block/sheepdog.c | 59 ++--
 1 file changed, 57 insertions(+), 2 deletions(-)

--
2.5.0



Re: [Qemu-devel] poor virtio-scsi performance (fio testing)

2015-11-25 Thread Vasiliy Tolstov
2015-11-25 12:35 GMT+03:00 Stefan Hajnoczi <stefa...@gmail.com>:
> You can get better aio=native performance with qemu.git/master.  Please
> see commit fc73548e444ae3239f6cef44a5200b5d2c3e85d1 ("virtio-blk: use
> blk_io_plug/unplug for Linux AIO batching").


Thanks Stefan! Is this patch only for virtio-blk, or can it also increase
iops for virtio-scsi?

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] poor virtio-scsi performance (fio testing)

2015-11-25 Thread Vasiliy Tolstov
2015-11-25 13:08 GMT+03:00 Alexandre DERUMIER <aderum...@odiso.com>:
> Maybe you could try to create 2 disks in your VM, each with 1 dedicated
> iothread,
>
> then try to run fio on both disks at the same time, and see if performance
> improves.
>

That's fine, but by default I have only one disk inside the VM, so I prefer
to increase single-disk speed.

>
> But maybe there is some write overhead with lvmthin (because of copy on
> write) and sheepdog.
>
> Do you have tried with classic lvm or raw file ?

I tried with classic lvm - sometimes I get more iops, but the stable
results are the same =)


-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] poor virtio-scsi performance (fio testing)

2015-11-25 Thread Vasiliy Tolstov
2015-11-25 13:27 GMT+03:00 Alexandre DERUMIER <aderum...@odiso.com>:
> I have tested with a raw file, qemu 2.4 + virtio-scsi (without iothread), I'm 
> around 25k iops
> with an intel ssd 3500. (host cpu are xeon v3 3,1ghz)


What scheduler do you have on the host system? Maybe my default cfq is slowing things down?

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



[Qemu-devel] poor virtio-scsi performance (fio testing)

2015-11-19 Thread Vasiliy Tolstov
I'm testing virtio-scsi on various kernels (with and without scsi-mq)
with the deadline io scheduler (best performance). I'm testing with an lvm thin
volume and with sheepdog storage. Data goes to an ssd that, on the host
system, does about 30K iops.
When I test via fio with this job:
[randrw]
blocksize=4k
filename=/dev/sdb
rw=randrw
direct=1
buffered=0
ioengine=libaio
iodepth=32
group_reporting
numjobs=10
runtime=600


I'm always stuck at 11K-12K iops with sheepdog or with lvm.
When I switch to virtio-blk and enable data-plane I get around 16K iops.
I tried to enable virtio-scsi data-plane but maybe I'm missing something
(I get around 13K iops).
I'm using libvirt 1.2.16 and qemu 2.4.1.

What can I do to get near 20K-25K iops?

(The qemu test drive has cache=none io=native.)
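
(Editorial note, not from the original mail: a sketch of the plain-command-line way to give the virtio-scsi controller a dedicated iothread (dataplane); the syntax is from memory, so double-check it against your qemu version, and the paths are made up.)

qemu-system-x86_64 -enable-kvm -m 1024 \
  -object iothread,id=iothread0 \
  -device virtio-scsi-pci,id=scsi0,iothread=iothread0 \
  -drive file=/dev/vg/test-lv,if=none,id=drive0,format=raw,cache=none,aio=native \
  -device scsi-hd,bus=scsi0.0,drive=drive0

Sufficiently new libvirt versions expose the same thing via an <iothreads> element plus a per-device/controller driver attribute.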

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] qemu-guest-agent question

2015-10-23 Thread Vasiliy Tolstov
2015-10-21 18:28 GMT+03:00 Michael Roth <mdr...@linux.vnet.ibm.com>:
>
> I assumed you were referring to 'commands' via the recent
> guest-exec command that was added, but in case that's not what you were
> asking about:
>
> The guest agent commands themselves are synchronous, and qga will
> process and respond to requests as it receives them, one at a time,
> from start to finish.


Thanks! This is very helpful. Would it be possible to add this info to the qga docs?

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



[Qemu-devel] qemu-guest-agent question

2015-10-19 Thread Vasiliy Tolstov
I'm trying to understand the qga sources and have a question: does the agent
execute commands synchronously, or, if I first send a long-running
command and after that send a short-lived command, can the short-lived command's
response be sent before the first command's result?
Thanks!

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [PATCH v3 0/6] qemu: guest agent: implement guest-exec command

2015-10-02 Thread Vasiliy Tolstov
2015-10-02 0:42 GMT+03:00 Michael Roth <mdr...@linux.vnet.ibm.com>:
> Not sure how regularly that archive is updated, but it's here now if
> you haven't gotten it yet:
>
>   http://thread.gmane.org/gmane.comp.emulators.qemu/366140


Thanks!

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [PATCH v3 0/6] qemu: guest agent: implement guest-exec command

2015-10-01 Thread Vasiliy Tolstov
2015-10-01 11:00 GMT+03:00 Denis V. Lunev <den-li...@parallels.com>:
> Subject: [PATCH 0/5] simplified QEMU guest exec
> Date: Thu, 1 Oct 2015 10:37:58 +0300
> Message-ID: <1443685083-6242-1-git-send-email-...@openvz.org>


Hm... I don't see it, and Google and
http://lists.nongnu.org/archive/html/qemu-devel/2015-10/threads.html
say nothing =(

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [PATCH v3 0/6] qemu: guest agent: implement guest-exec command

2015-10-01 Thread Vasiliy Tolstov
2015-10-01 10:43 GMT+03:00 Denis V. Lunev <den-li...@parallels.com>:
> new simplified version posted. Can you pls look & review?


Sorry, I can't find it on the qemu list.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [PATCH v3 0/6] qemu: guest agent: implement guest-exec command

2015-10-01 Thread Vasiliy Tolstov
2015-10-01 10:43 GMT+03:00 Denis V. Lunev <den-li...@parallels.com>:
> new simplified version posted. Can you pls look & review?


Thanks, I'll check it.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] internal snapshots with sheepdog

2015-09-18 Thread Vasiliy Tolstov
2015-09-18 12:02 GMT+03:00 Kevin Wolf <kw...@redhat.com>:
> Doesn't sheepdog already support storing snapshots in the same image?
> I thought it would just work; at least, there's some code there for it.


Yes, qemu and sheepdog have the ability to create an internal snapshot, I
missed that, but when I try to create a live snapshot from libvirt, libvirt
says that this is not possible with a raw image.


-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] help with understanding qcow2 file format

2015-09-17 Thread Vasiliy Tolstov
2015-09-16 16:46 GMT+03:00 Eric Blake <ebl...@redhat.com>:

> qemu-img map file.qcow2
>
>
Offset  Length  Mapped to   File
qemu-img: File contains external, encrypted or compressed clusters.


> is a great way to learn which physical host offsets hold the data at
> which guest offsets.
>
> As for coding interactions with qcow2, see the source under block/qcow2.c.
>
> You may also be interested in the visual representation of qcow2 in my
> KVM Forum slides, part 1:
>
>
> http://events.linuxfoundation.org/sites/events/files/slides/2015-qcow2-expanded.pdf
>


Thanks for the slides, they are very useful.


-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru


Re: [Qemu-devel] help with understanding qcow2 file format

2015-09-16 Thread Vasiliy Tolstov
2015-09-16 14:04 GMT+03:00 Laszlo Ersek <ler...@redhat.com>:

> All I can say is, "docs/specs/qcow2.txt".
>

Thanks! Can you give me the ordered steps I need to follow to get at the file
contents?
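
(Editorial summary, not from the thread: per docs/specs/qcow2.txt, for an uncompressed, unencrypted image the lookup for a guest offset G is roughly:)

1. Read the header: cluster_bits (cluster_size = 1 << cluster_bits), l1_table_offset, l1_size.
2. Compute the indexes:
   l2_entries = cluster_size / 8
   l2_index   = (G / cluster_size) % l2_entries
   l1_index   = (G / cluster_size) / l2_entries
3. Read the big-endian 64-bit L1 entry at l1_table_offset + 8 * l1_index and mask off the flag bits (bits 9..55 hold the L2 table offset); zero means the whole range is unallocated.
4. Read the 64-bit L2 entry at l2_table_offset + 8 * l2_index and mask it the same way; zero means unallocated, and bit 62 set means a compressed cluster with a different layout.
5. The data for G is at host offset cluster_offset + (G % cluster_size); a read that spans more than one cluster repeats the lookup per cluster.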


-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru


[Qemu-devel] help with understanding qcow2 file format

2015-09-16 Thread Vasiliy Tolstov
Hi, I need help understanding the qcow2 file format. Can somebody explain it to
me, for example if I need to read 1K from offset 512?

As I understand it, I need to calculate an offset in the qcow2 file using some
things from the header; can somebody explain to me how I can do that?

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru


Re: [Qemu-devel] [PATCH v3 0/6] qemu: guest agent: implement guest-exec command

2015-09-15 Thread Vasiliy Tolstov
2015-06-15 10:07 GMT+03:00 Denis V. Lunev <d...@odin.com>:
> _PING_


Any news about this feature? I need to run some scripts inside the guest
and want non-blocking exec, so this feature would be very useful for me.
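
(Editorial note: the interface that was eventually merged is non-blocking from the caller's side; the command names and arguments below are from the qga schema as I remember it, so treat them as a sketch. guest-exec returns a pid immediately and guest-exec-status is polled for the result:)

{"execute": "guest-exec",
 "arguments": {"path": "/bin/sh", "arg": ["-c", "sleep 30; echo done"],
               "capture-output": true}}
{"return": {"pid": 1234}}

{"execute": "guest-exec-status", "arguments": {"pid": 1234}}
{"return": {"exited": false}}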

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [PATCH v3 0/6] qemu: guest agent: implement guest-exec command

2015-09-15 Thread Vasiliy Tolstov
2015-09-15 11:15 GMT+03:00 Denis V. Lunev <d...@odin.com>:
> we have discussed new approach on KVM forum
> with a bit reduced set (guest-pipe-open/close
> code should not be in the first version) and in
> progress with a rework. I think that I'll post new version
> at the end of this week or next week.


OK, I'll wait.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



[Qemu-devel] internal snapshots with sheepdog

2015-09-11 Thread Vasiliy Tolstov
Hi! qcow2 has the ability to store the VM state inside the qcow2 file, in space
that is not allocated to the guest, but if I want to store the VM state in
sheepdog storage with a raw image, what is the preferred place to store the
memory data?
I could simply create an additional vdi equal to the memory size of the guest
and save the memory to that...

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [PATCH RFC 0/5] disk deadlines

2015-09-08 Thread Vasiliy Tolstov
2015-09-08 11:00 GMT+03:00 Denis V. Lunev <d...@openvz.org>:
>
> VM remains stopped until all requests from the disk which caused VM's stopping
> are completed. Furthermore, if there is another disks with 'disk-deadlines=on'
> whose requests are waiting to be completed, do not start VM : wait completion
> of all "late" requests from all disks.
>
> Furthermore, all requests which caused VM stopping (or those that just were 
> not
> completed in time) could be printed using "info disk-deadlines" qemu monitor
> option as follows:


A nice feature for networked filesystems and block storage. Thanks for
this nice patch!

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [sheepdog] [PATCH 2/2] sheepdog: refine discard support

2015-09-04 Thread Vasiliy Tolstov
On 4 Sep 2015 at 12:57, "Hitoshi Mitake" <mitake.hito...@gmail.com> wrote:
>
> On Fri, Sep 4, 2015 at 5:51 PM, Hitoshi Mitake <mitake.hito...@gmail.com>
wrote:
> > On Wed, Sep 2, 2015 at 9:36 PM, Vasiliy Tolstov <v.tols...@selfip.ru>
wrote:
> >> 2015-09-01 6:03 GMT+03:00 Hitoshi Mitake <mitake.hito...@lab.ntt.co.jp
>:
> >>> This patch refines discard support of the sheepdog driver. The
> >>> existing discard mechanism was implemented on SD_OP_DISCARD_OBJ, which
> >>> was introduced before fine grained reference counting on newer
> >>> sheepdog. It doesn't care about relations of snapshots and clones and
> >>> discards objects unconditionally.
> >>>
> >>> With this patch, the driver just updates an inode object for updating
> >>> reference. Removing the object is done in sheep process side.
> >>>
> >>> Cc: Teruaki Ishizaki <ishizaki.teru...@lab.ntt.co.jp>
> >>> Cc: Vasiliy Tolstov <v.tols...@selfip.ru>
> >>> Cc: Jeff Cody <jc...@redhat.com>
> >>> Signed-off-by: Hitoshi Mitake <mitake.hito...@lab.ntt.co.jp>
> >>
> >>
> >> I'm test this patch and now discard working properly and no errors in
> >> sheepdog log file.
> >>
> >> Tested-by: Vasiliy Tolstov <v.tols...@selfip.ru>
> >
> > On the second thought, this patch has a problem of handling snapshot.
> > Please drop this one (1st patch is ok to apply).
> >
> > I'll solve the problem in sheepdog side.
>
> On the third thought, this patch can work well ;) Please pick this patch,
Jeff.
>
> I considered about a case of interleaving of snapshotting and
> discarding requests like below:
> 1. user invokes dog vdi snapshot, a command for making current VDI
> snapshot (updating inode objects)
> 2. discard request from VM before the request of 1 completes
> 3. discard completes
> 4. request of 1 completes
>
> In this case, some data_vdi_id of original inode can be overwritten
> before completion of snapshotting. However, this behavior is valid
> because dog vdi snapshot doesn't return ack to the user.
>

This is normal for snapshots; to get consistent data you need to cooperate
with the guest system and freeze its fs before the snapshot.
qemu-ga already supports this, by sending a freeze to the fs before and a thaw
after the snapshot.
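
(For completeness, an editorial aside: the guest-agent commands involved are the standard freeze/thaw pair, roughly:)

{"execute": "guest-fsfreeze-freeze"}   -> {"return": 2}   (number of frozen filesystems)
... take the snapshot ...
{"execute": "guest-fsfreeze-thaw"}     -> {"return": 2}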

> Thanks,
> Hitoshi
>
> >
> > Thanks,
> > Hitoshi
> >
> >>
> >> --
> >> Vasiliy Tolstov,
> >> e-mail: v.tols...@selfip.ru
> >> --
> >> sheepdog mailing list
> >> sheep...@lists.wpkg.org
> >> https://lists.wpkg.org/mailman/listinfo/sheepdog


Re: [Qemu-devel] [PATCH 2/2] sheepdog: refine discard support

2015-09-02 Thread Vasiliy Tolstov
2015-09-01 6:03 GMT+03:00 Hitoshi Mitake <mitake.hito...@lab.ntt.co.jp>:
> This patch refines discard support of the sheepdog driver. The
> existing discard mechanism was implemented on SD_OP_DISCARD_OBJ, which
> was introduced before fine grained reference counting on newer
> sheepdog. It doesn't care about relations of snapshots and clones and
> discards objects unconditionally.
>
> With this patch, the driver just updates an inode object for updating
> reference. Removing the object is done in sheep process side.
>
> Cc: Teruaki Ishizaki <ishizaki.teru...@lab.ntt.co.jp>
> Cc: Vasiliy Tolstov <v.tols...@selfip.ru>
> Cc: Jeff Cody <jc...@redhat.com>
> Signed-off-by: Hitoshi Mitake <mitake.hito...@lab.ntt.co.jp>


I tested this patch and now discard works properly, with no errors in the
sheepdog log file.

Tested-by: Vasiliy Tolstov <v.tols...@selfip.ru>

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [PATCH 1/2] sheepdog: use per AIOCB dirty indexes for non overlapping requests

2015-09-02 Thread Vasiliy Tolstov
2015-09-01 6:03 GMT+03:00 Hitoshi Mitake <mitake.hito...@lab.ntt.co.jp>:
> In commit 96b14ff85acf, requests for overlapping areas are
> serialized. However, it cannot handle a case of non overlapping
> requests. In such a case, min_dirty_data_idx and max_dirty_data_idx
> can be overwritten by the requests and invalid inode update can
> happen e.g. a case like create(1, 2) and create(3, 4) are issued in
> parallel.
>
> This patch lets SheepdogAIOCB have dirty data indexes instead of
> BDRVSheepdogState for avoiding the above situation.
>
> This patch also does trivial renaming for better description:
> overwrapping -> overlapping
>
> Cc: Teruaki Ishizaki <ishizaki.teru...@lab.ntt.co.jp>
> Cc: Vasiliy Tolstov <v.tols...@selfip.ru>
> Cc: Jeff Cody <jc...@redhat.com>
> Signed-off-by: Hitoshi Mitake <mitake.hito...@lab.ntt.co.jp>

I tested this patch and now discard works properly, with no errors in the
sheepdog log file.

Tested-by: Vasiliy Tolstov <v.tols...@selfip.ru>

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



[Qemu-devel] speedup qemu booting guest kernel

2015-08-25 Thread Vasiliy Tolstov
Hi! Some time ago I saw the clearlinux release, and after that some qemu
discussion on how to speed up kernel booting (mainly the seabios part).
But I can't google this info =(. Can somebody give me links to the
discussion about the seabios speedup and tell me what the progress is in this area?

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [sheepdog] [PATCH] sheepdog: fix overlapping metadata update

2015-08-09 Thread Vasiliy Tolstov
2015-08-05 21:58 GMT+03:00 Jeff Cody jc...@redhat.com:
 Hi Vasiliy,

 If you run configure with  --disable-strip, it will not strip the
 debugging symbols from the binary after the build.  Then, you can run
 gdb on qemu, and do a backtrace after you hit the segfault ('bt').
 That may shed some light, and is a good place to start.


I tried to debug (disable-strip), but I start the vps from libvirt (I add the
-s flag to qemu, to use gdb), and by the time I attach to the remote
session, qemu has already died, or else everything works fine.
Is it possible to tell from this dmesg line what happened in a qemu
binary with debug symbols?
qemu-system-x86[34046]: segfault at 401364 ip 7f33f52a1ff8 sp
7f3401ecad30 error 4 in qemu-system-x86_64[7f33f4efd000+518000]
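
(Editorial suggestion, a generic technique rather than anything qemu-specific: the dmesg line gives both the faulting instruction pointer and the mapping base of the binary, so with the same unstripped build the offending source line can often be recovered with addr2line; the path is made up and the numbers are taken from the dmesg line above, assuming a PIE binary where file offset = ip - base.)

# ip 0x7f33f52a1ff8, mapped at 0x7f33f4efd000 -> offset 0x3a4ff8
addr2line -f -e /usr/bin/qemu-system-x86_64 0x3a4ff8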

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [sheepdog] [PATCH] sheepdog: fix overlapping metadata update

2015-08-04 Thread Vasiliy Tolstov
2015-08-03 3:41 GMT+03:00 Liu Yuan namei.u...@gmail.com:
 What did you mean by 'not able to completely install'? Is it an installation
 problem or something else?


I have a simple test that works like this:
1) boot a 3.18.x kernel and an initrd with a statically compiled golang binary
2) the binary brings up the network via dhcp
3) fetch a gzipped raw qemu image over http
4) in parallel, decompress the gzipped image and write it to disk

this use case triggered the previous error in qemu (the one that started this discussion)

 My QEMU or my patch? I guess you can test the same QEMU with my patch and with
 Hitoshi's patch separately. You know, different QEMU base might cover the
 difference.


I'm testing your patch and Hitoshi's with qemu 2.4-rc0; I compiled two
independent Debian packages, one with your patch and one with Hitoshi's,
installed qemu with your patch on one node and with Hitoshi's on another,
and run sheep with the local cluster driver.
I don't know how to track down the qemu segfault in my case; is it
possible to build qemu with debugging via configure flags and get a more
detailed description?

 Thanks,
 Yuan




-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [sheepdog] [PATCH] sheepdog: fix overlapping metadata update

2015-08-02 Thread Vasiliy Tolstov
To compare, fio results from the host system with the file placed on the ssd
(ext4, nodiscard, trimmed before the run):
randrw: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=32
2.0.8
Starting 10 processes
Jobs: 10 (f=10): [mm] [100.0% done] [50892K/49944K /s]
[12.8K/12.5K iops] [eta 00m:00s]
randrw: (groupid=0, jobs=10): err= 0: pid=7439
  read : io=739852KB, bw=18491KB/s, iops=4622 , runt= 40012msec
slat (usec): min=2 , max=12573 , avg=308.09, stdev=955.59
clat (usec): min=137 , max=107321 , avg=12750.90, stdev=7865.72
 lat (usec): min=167 , max=107364 , avg=13059.31, stdev=7932.79
clat percentiles (usec):
 |  1.00th=[ 2512],  5.00th=[ 4192], 10.00th=[ 5280], 20.00th=[ 6816],
 | 30.00th=[ 8096], 40.00th=[ 9408], 50.00th=[10816], 60.00th=[12480],
 | 70.00th=[14528], 80.00th=[17536], 90.00th=[22656], 95.00th=[28032],
 | 99.00th=[40192], 99.50th=[45824], 99.90th=[59648], 99.95th=[68096],
 | 99.99th=[94720]
bw (KB/s)  : min=0, max= 6679, per=27.96%, avg=5170.37, stdev=665.70
  write: io=737492KB, bw=18432KB/s, iops=4607 , runt= 40012msec
slat (usec): min=4 , max=12670 , avg=305.68, stdev=945.24
clat (usec): min=31 , max=107147 , avg=11268.07, stdev=7542.08
 lat (usec): min=72 , max=107181 , avg=11574.10, stdev=7615.99
clat percentiles (usec):
 |  1.00th=[ 1624],  5.00th=[ 3216], 10.00th=[ 4256], 20.00th=[ 5664],
 | 30.00th=[ 6816], 40.00th=[ 8032], 50.00th=[ 9408], 60.00th=[10944],
 | 70.00th=[12864], 80.00th=[15552], 90.00th=[20608], 95.00th=[26240],
 | 99.00th=[38144], 99.50th=[43264], 99.90th=[2], 99.95th=[63744],
 | 99.99th=[89600]
bw (KB/s)  : min=0, max= 6767, per=27.39%, avg=5047.81, stdev=992.62
lat (usec) : 50=0.01%, 100=0.01%, 250=0.06%, 500=0.12%, 750=0.06%
lat (usec) : 1000=0.06%
lat (msec) : 2=0.74%, 4=5.47%, 10=42.71%, 20=38.26%, 50=12.26%
lat (msec) : 100=0.25%, 250=0.01%
  cpu  : usr=0.27%, sys=67.69%, ctx=339142, majf=0, minf=70
  IO depths: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=99.9%, >=64=0.0%
 submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
 issued: total=r=184963/w=184373/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=739852KB, aggrb=18490KB/s, minb=18490KB/s, maxb=18490KB/s,
mint=40012msec, maxt=40012msec
  WRITE: io=737492KB, aggrb=18431KB/s, minb=18431KB/s, maxb=18431KB/s,
mint=40012msec, maxt=40012msec

Disk stats (read/write):
dm-5: ios=183885/183290, merge=0/0, ticks=1070388/977428,
in_queue=2053044, util=35.35%, aggrios=184963/184383, aggrmerge=0/3,
aggrticks=1072368/978452, aggrin_queue=2052424, aggrutil=35.39%
  sdd: ios=184963/184383, merge=0/3, ticks=1072368/978452,
in_queue=2052424, util=35.39%

2015-08-02 14:52 GMT+03:00 Vasiliy Tolstov v.tols...@selfip.ru:
 2015-07-31 15:08 GMT+03:00 Vasiliy Tolstov v.tols...@selfip.ru:
 Please wait for the performance comparison. As far as I can see, Liu's patch may be
 slower than Hitoshi's.


 I switched to the local cluster driver to test only the local ssd and not the
 network overhead.

 But now, with Liu's qemu, I am not able to completely install the vps:
 sheep runs as:
 /usr/sbin/sheep --log level=debug format=server dir=/var/log/
 dst=default --bindaddr 0.0.0.0 --cluster local --directio --upgrade
 --myaddr 192.168.240.134 --pidfile /var/run/sheepdog.pid
 /var/lib/sheepdog/sd_meta
 /var/lib/sheepdog/vg0-sd_data,/var/lib/sheepdog/vg1-sd_data,/var/lib/sheepdog/vg2-sd_data,/var/lib/sheepdog/vg3-sd_data,/var/lib/sheepdog/vg4-sd_data,/var/lib/sheepdog/vg5-sd_data

 qemu runs as:
 qemu-system-x86_64 -enable-kvm -name 30681 -S -machine
 pc-i440fx-1.7,accel=kvm,usb=off -m 1024 -realtime
 mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid
 49771611-5e45-70c9-1a78-0043bf91 -no-user-config -nodefaults
 -chardev 
 socket,id=charmonitor,path=/var/lib/libvirt/qemu/30681.monitor,server,nowait
 -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc
 -no-shutdown -boot strict=on -device
 piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
 virtio-scsi-pci,id=scsi0,num_queues=1,bus=pci.0,addr=0x4 -device
 virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive
 file=sheepdog:30681,if=none,id=drive-scsi0-0-0-0,format=raw,cache=none,discard=ignore,aio=native
 -device 
 scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1
 -drive if=none,id=drive-scsi0-0-1-0,readonly=on,format=raw -device
 scsi-cd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,drive=drive-scsi0-0-1-0,id=scsi0-0-1-0
 -netdev tap,fd=37,id=hostnet0,vhost=on,vhostfd=39 -device
 virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:00:42:8d,bus=pci.0,addr=0x3,rombar=0
 -chardev pty,id=charserial0 -device
 isa-serial,chardev=charserial0,id=serial0 -chardev
 socket,id=charchannel0,path=/var/lib/libvirt/qemu/30681.agent,server,nowait
 -device 
 virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id

Re: [Qemu-devel] [sheepdog] [PATCH] sheepdog: fix overlapping metadata update

2015-08-02 Thread Vasiliy Tolstov
2015-07-31 15:08 GMT+03:00 Vasiliy Tolstov v.tols...@selfip.ru:
 Please wait for the performance comparison. As far as I can see, Liu's patch may be
 slower than Hitoshi's.


I switched to the local cluster driver to test only the local ssd and not the
network overhead.

But now, with Liu's qemu, I am not able to completely install the vps:
sheep runs as:
/usr/sbin/sheep --log level=debug format=server dir=/var/log/
dst=default --bindaddr 0.0.0.0 --cluster local --directio --upgrade
--myaddr 192.168.240.134 --pidfile /var/run/sheepdog.pid
/var/lib/sheepdog/sd_meta
/var/lib/sheepdog/vg0-sd_data,/var/lib/sheepdog/vg1-sd_data,/var/lib/sheepdog/vg2-sd_data,/var/lib/sheepdog/vg3-sd_data,/var/lib/sheepdog/vg4-sd_data,/var/lib/sheepdog/vg5-sd_data

qemu runs as:
qemu-system-x86_64 -enable-kvm -name 30681 -S -machine
pc-i440fx-1.7,accel=kvm,usb=off -m 1024 -realtime
mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid
49771611-5e45-70c9-1a78-0043bf91 -no-user-config -nodefaults
-chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/30681.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc
-no-shutdown -boot strict=on -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,num_queues=1,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive
file=sheepdog:30681,if=none,id=drive-scsi0-0-0-0,format=raw,cache=none,discard=ignore,aio=native
-device 
scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1
-drive if=none,id=drive-scsi0-0-1-0,readonly=on,format=raw -device
scsi-cd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,drive=drive-scsi0-0-1-0,id=scsi0-0-1-0
-netdev tap,fd=37,id=hostnet0,vhost=on,vhostfd=39 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:00:42:8d,bus=pci.0,addr=0x3,rombar=0
-chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/30681.agent,server,nowait
-device 
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0
-device usb-mouse,id=input0 -device usb-kbd,id=input1 -vnc
[::]:0,password -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -object
rng-random,id=objrng0,filename=/dev/random -device
virtio-rng-pci,rng=objrng0,id=rng0,max-bytes=1024,period=2000,bus=pci.0,addr=0x7
-msg timestamp=on

sheep.log:
https://gist.github.com/raw/31201454b0afe1b86d00

dmesg:
[Sun Aug  2 14:48:45 2015] qemu-system-x86[6894]: segfault at 401364
ip 7f212f060ff8 sp 7f213efe5f40 error 4 in
qemu-system-x86_64[7f212ecbc000+518000]

fio job file:
[randrw]
blocksize=4k
filename=/dev/sdb
rw=randrw
direct=1
buffered=0
ioengine=libaio
iodepth=32
group_reporting
numjobs=10
runtime=40

For a clean test I create a new vdi and attach it to the running VM: qemu
cache=none, io=native, discard=ignore.
Hitoshi's patch:
randrw: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
...
fio-2.1.11
Starting 10 processes
Jobs: 10 (f=10): [m(10)] [100.0% done] [13414KB/12291KB/0KB /s]
[3353/3072/0 iops] [eta 00m:00s]
randrw: (groupid=0, jobs=10): err= 0: pid=1318: Sun Aug  2 11:51:25 2015
  read : io=368004KB, bw=9191.9KB/s, iops=2297, runt= 40036msec
slat (usec): min=1, max=408908, avg=1316.35, stdev=10057.52
clat (usec): min=229, max=972302, avg=67203.98, stdev=65126.08
 lat (usec): min=294, max=972306, avg=68520.61, stdev=66741.02
clat percentiles (msec):
 |  1.00th=[4],  5.00th=[   16], 10.00th=[   30], 20.00th=[   36],
 | 30.00th=[   40], 40.00th=[   44], 50.00th=[   48], 60.00th=[   52],
 | 70.00th=[   59], 80.00th=[   85], 90.00th=[  128], 95.00th=[  186],
 | 99.00th=[  367], 99.50th=[  441], 99.90th=[  570], 99.95th=[  619],
 | 99.99th=[  799]
bw (KB  /s): min=6, max= 1824, per=10.19%, avg=936.59, stdev=434.25
  write: io=366964KB, bw=9165.9KB/s, iops=2291, runt= 40036msec
slat (usec): min=1, max=338238, avg=1320.61, stdev=9586.81
clat (usec): min=912, max=972334, avg=69411.58, stdev=64715.54
 lat (usec): min=920, max=972338, avg=70732.51, stdev=66193.96
clat percentiles (msec):
 |  1.00th=[8],  5.00th=[   30], 10.00th=[   33], 20.00th=[   38],
 | 30.00th=[   42], 40.00th=[   45], 50.00th=[   49], 60.00th=[   53],
 | 70.00th=[   60], 80.00th=[   86], 90.00th=[  128], 95.00th=[  184],
 | 99.00th=[  379], 99.50th=[  441], 99.90th=[  570], 99.95th=[  635],
 | 99.99th=[  824]
bw (KB  /s): min=7, max= 1771, per=10.21%, avg=935.65, stdev=439.55
lat (usec) : 250=0.01%, 500=0.02%, 750=0.05%, 1000=0.04%
lat (msec) : 2=0.20%, 4=0.57%, 10=1.53%, 20=1.78%, 50=50.50%
lat (msec) : 100=28.04%, 250=14.56%, 500=2.47%, 750=0.23%, 1000=0.03%
  cpu  : usr=0.17%, sys=0.26%, ctx=16905, majf=0, minf=78
  IO depths: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=99.8%, >=64=0.0%
 submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32

Re: [Qemu-devel] [sheepdog] [PATCH] sheepdog: fix overlapping metadata update

2015-07-31 Thread Vasiliy Tolstov
2015-07-30 16:27 GMT+03:00 Jeff Cody jc...@redhat.com:
 I'd rather see some sort of consensus amongst Liu, Hitoshi, yourself, or
 others more intimately familiar with sheepdog.

 Right now, we have Hitoshi's patch in the main git repo, slated for
 2.4 release (which is Monday).  It sounds, from Liu's email, as this
 may not fix the root cause.

 Vasiliy said he would test Liu's patch; if he can confirm this new
 patch fix, then I would be inclined to use Liu's patch, based on the
 detailed analysis of the issue in the commit message.

Liu's patch also works for me. But, just like with Hitoshi's patch, it breaks
when discards are used in qemu =(.



-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [sheepdog] [PATCH] sheepdog: fix overlapping metadata update

2015-07-31 Thread Vasiliy Tolstov
2015-07-31 14:55 GMT+03:00 Vasiliy Tolstov v.tols...@selfip.ru:
 Liu's patch also works for me. But, just like with Hitoshi's patch, it breaks
 when discards are used in qemu =(.


Please wait for the performance comparison. As far as I can see, Liu's patch may be
slower than Hitoshi's.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [sheepdog] [PATCH] sheepdog: fix overlapping metadata update

2015-07-30 Thread Vasiliy Tolstov
2015-07-29 12:31 GMT+03:00 Liu Yuan namei.u...@gmail.com:
 Technically, it won't affect the performance because index updates are not 
 range
 but concrete in terms of underlying 4M block size. Only 2 or 3 indexes in a
 range will be updated and 90+% updates will be only 1. So if 2 updates stride 
 a
 large range, it will actually worsen the performance of sheepdog because many
 additional object unrefs will be executed by sheep internally.

 It is not a performance problem but more the right fix. Even with your patch,
 updates of inode can overlap. You just don't allow overlapped requests go to
 sheepdog, which is a overkill approach. IMHO, we should only adjust to avoid
 the overlapped inode updates, which can be done easily and incrementally on 
 top
 of old code, rather than take on a complete new untested overkill mechanism. 
 So
 what we get from your patch? Covering the problem and lock every requests?

 Your patch actually fixes nothing but just covers the problem by slowing down the
 requests, and even with your patch, the problem still exists because inode
 updates
 can overlap. Your commit log doesn't explain what is the real problem and why
 your fix works. This is not your toy project that can commit whatever you 
 want.

 BTW, sheepdog project was already forked, why don't you fork the block
 driver, too?

 What makes you think you own the block driver?

 We forked the sheepdog project because it is low quality of code partly and 
 mostly
 some company tries to make it a private project. It is not as open source 
 friendly
 as before and that is the main reason Kazutaka and I chose to fork the 
 sheepdog
 project. But this doesn't mean we need to fork the QEMU project, it is an
 open source project and not your home toy.

 Kazutaka and I are the biggest contributers of both sheepdog and QEMU sheepdog
 block driver for years, so I think I am eligible to review the patch and
 responsible to suggest the right fix. If you are pissed off when someone else
 have other opinions, you can just fork the code and play with it at home or 
 you
 follow the rule of open source project.


Jeff Cody, please be the judge: the patch from Hitoshi solved the problem
I emailed about to the sheepdog list (I have a test environment with 8 hosts,
each with 6 SSD disks, and infiniband interconnect between the hosts). Before
the Hitoshi patch, massive writing to sheepdog storage broke the file system
and corrupted it.
After the patch I don't see the issues.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [sheepdog] [PATCH] sheepdog: fix overlapping metadata update

2015-07-30 Thread Vasiliy Tolstov
2015-07-30 12:13 GMT+03:00 Liu Yuan namei.u...@gmail.com:
 Hi Vasiliy,

 Did you test my patch? I think both patches can make the problem go away. The
 tricky part is which one is the right fix. This is quite technical, because
 sometimes the problem is gone not because it was fixed but because it was covered up.

 If you have problem with applying the patch, feel free to mail me and I'll
 package a patched QEMU tree for you.


OK, I'll check your patch too. Please wait.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [PATCH] sheepdog: serialize requests to overwrapping area

2015-07-27 Thread Vasiliy Tolstov
2015-07-27 18:23 GMT+03:00 Jeff Cody jc...@redhat.com:
 Thanks, applied to my block branch:
 https://github.com/codyprime/qemu-kvm-jtc/tree/block


Thanks! Waiting for it to be added to the qemu rc =)

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [ANNOUNCE] QEMU 2.4.0-rc2 is now available

2015-07-23 Thread Vasiliy Tolstov
2015-07-23 1:33 GMT+03:00 Michael Roth mdr...@linux.vnet.ibm.com:
 On behalf of the QEMU Team, I'd like to announce the availability of the
 third release candidate for the QEMU 2.4 release.  This release is meant
 for testing purposes and should not be used in a production environment.

 http://wiki.qemu.org/download/qemu-2.4.0-rc2.tar.bz2

 You can help improve the quality of the QEMU 2.4 release by testing this
 release and reporting bugs on Launchpad:

 https://bugs.launchpad.net/qemu/

 The release plan for the 2.4 release is available at:

 http://wiki.qemu.org/Planning/2.4

 Please add entries to the ChangeLog for the 2.4 release below:

 http://wiki.qemu.org/ChangeLog/2.4

 Changes since 2.4.0-rc1:


I don't see the sheepdog patch sent on 20 July; would it be possible to
include it in the 2.4 release?

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [POC] colo-proxy in qemu

2015-07-20 Thread Vasiliy Tolstov
2015-07-20 14:55 GMT+03:00 zhanghailiang zhang.zhanghaili...@huawei.com:
 Agreed, besides, it seems that slirp does not support ipv6; we also
 have to supplement it.


A patch for IPv6 slirp support was sent to the qemu list some time ago,
but I don't know why it was not accepted.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [PATCH] sheepdog: serialize requests to overwrapping area

2015-07-17 Thread Vasiliy Tolstov
2015-07-17 19:44 GMT+03:00 Hitoshi Mitake mitake.hito...@lab.ntt.co.jp:
 Current sheepdog driver only serializes create requests in oid
 unit. This mechanism isn't enough for handling requests to
 overwrapping area spanning multiple oids, so it can result bugs like
 below:
 https://bugs.launchpad.net/sheepdog-project/+bug/1456421

 This patch adds a new serialization mechanism for the problem. The
 difference from the old one is:
 1. serialize entire aiocb if their targetting areas overwrap
 2. serialize all requests (read, write, and discard), not only creates

 This patch also removes the old mechanism because the new one can be
 an alternative.

 Cc: Kevin Wolf kw...@redhat.com
 Cc: Stefan Hajnoczi stefa...@redhat.com
 Cc: Teruaki Ishizaki ishizaki.teru...@lab.ntt.co.jp
 Cc: Vasiliy Tolstov v.tols...@selfip.ru
 Signed-off-by: Hitoshi Mitake mitake.hito...@lab.ntt.co.jp


Tested-by: Vasiliy Tolstov v.tols...@selfip.ru

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



[Qemu-devel] unable to auth to vnc with qemu 2.4.0-rc0

2015-07-14 Thread Vasiliy Tolstov
vncviewer can't authenticate to qemu VNC with plain auth:
Client sock=23 ws=0 auth=2 subauth=0
New client on socket 23
vnc_set_share_mode/23: undefined - connecting
Write Plain: Pending output 0x7fc8eba5dbe0 size 1036 offset 12. Wait SSF 0
Wrote wire 0x7fc8eba5dbe0 12 - 12
Read plain (nil) size 0 offset 0
Read wire 0x7fc8eaec3890 4096 - 12
Client request protocol version 3.8
Telling client we support auth 2
Write Plain: Pending output 0x7fc8eba5dbe0 size 1036 offset 2. Wait SSF 0
Wrote wire 0x7fc8eba5dbe0 2 - 2
Read plain 0x7fc8eaec3890 size 5120 offset 0
Read wire 0x7fc8eaec3890 4096 - 1
Client requested auth 2
Start VNC auth
Write Plain: Pending output 0x7fc8eba5dbe0 size 1036 offset 16. Wait SSF 0
Wrote wire 0x7fc8eba5dbe0 16 - 16
Read plain 0x7fc8eaec3890 size 5120 offset 0
Read wire 0x7fc8eaec3890 4096 - 16
Client challenge response did not match
Write Plain: Pending output 0x7fc8eba5dbe0 size 1036 offset 30. Wait SSF 0
Wrote wire 0x7fc8eba5dbe0 30 - 30
Closing down client sock: protocol error
vnc_set_share_mode/23: connecting - disconnected


-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] unable to auth to vnc with qemu 2.4.0-rc0

2015-07-14 Thread Vasiliy Tolstov
2015-07-14 15:01 GMT+03:00 Paolo Bonzini pbonz...@redhat.com:
 Can you send the config.log and config-host.mak file from the build
 directory?


I can only send the Jenkins build log =(, maybe it helps:

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] unable to auth to vnc with qemu 2.4.0-rc0

2015-07-14 Thread Vasiliy Tolstov
2015-07-14 17:05 GMT+03:00 Paolo Bonzini pbonz...@redhat.com:
 Which patch is it?


From: Wolfgang Bumiller w.bumil...@proxmox.com

Commit 800567a61 updated the code to the generic crypto API
and mixed up encrypt and decrypt functions in
procotol_client_auth_vnc.
(Used to be: deskey(key, EN0) which encrypts, and was
changed to qcrypto_cipher_decrypt in 800567a61.)
Changed it to qcrypto_cipher_encrypt now.

Signed-off-by: Wolfgang Bumiller w.bumil...@proxmox.com
Signed-off-by: Gerd Hoffmann kra...@redhat.com

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] unable to auth to vnc with qemu 2.4.0-rc0

2015-07-14 Thread Vasiliy Tolstov
2015-07-14 15:01 GMT+03:00 Paolo Bonzini pbonz...@redhat.com:
 Can you send the config.log and config-host.mak file from the build
 directory?


I see on the list that a patch has already been created. Thanks!

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] [PULL for-2.4 0/1] vnc: fix vnc client authentication

2015-07-14 Thread Vasiliy Tolstov
2015-07-14 16:39 GMT+03:00 Gerd Hoffmann kra...@redhat.com:

 Here comes the vnc queue with a single one-line fix.

 please pull,
   Gerd

 The following changes since commit 6169b60285fe1ff730d840a49527e721bfb30899:

   Update version for v2.4.0-rc0 release (2015-07-09 17:56:56 +0100)


Thanks! This VNC patch fixes my issue.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



[Qemu-devel] [Bug 1329956] Re: multi-core FreeBSD guest hangs after warm reboot

2015-06-22 Thread Vasiliy Tolstov
In my testing, disabling apicv is not needed, but I do need the latest
stable SeaBIOS from the SeaBIOS site.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1329956

Title:
  multi-core FreeBSD guest hangs after warm reboot

Status in QEMU:
  Incomplete

Bug description:
  On some Linux KVM hosts in our environment, FreeBSD guests fail to
  reboot properly if they have more than one CPU (socket, core, and/or
  thread). They will boot fine the first time, but after issuing a
  reboot command via the OS the guest starts to boot but hangs during
  SMP initialization. Fully shutting down and restarting the guest works
  in all cases.

  The only meaningful difference between hosts with the problem and those 
without is the CPU. Hosts with Xeon E5-26xx v2 processors have the problem, 
including at least the Intel(R) Xeon(R) CPU E5-2667 v2 and the Intel(R) 
Xeon(R) CPU E5-2650 v2.
  Hosts with any other CPU, including Intel(R) Xeon(R) CPU E5-2650 0, 
Intel(R) Xeon(R) CPU E5-2620 0, or AMD Opteron(TM) Processor 6274 do not 
have the problem. Note the v2 in the names of the problematic CPUs.

  On hosts with a v2 Xeon, I can reproduce the problem under Linux
  kernel 3.10 or 3.12 and Qemu 1.7.0 or 2.0.0.

  The problem occurs with all currently-supported versions of FreeBSD,
  including 8.4, 9.2, 10.0 and 11-CURRENT.

  On a Linux KVM host with a v2 Xeon, this command line is adequate to
  reproduce the problem:

  /usr/bin/qemu-system-x86_64 -machine accel=kvm -name bsdtest -m 512
  -smp 2,sockets=1,cores=1,threads=2 -drive
  file=./20140613_FreeBSD_9.2-RELEASE_ufs.qcow2,if=none,id=drive0,format=qcow2
  -device virtio-blk-pci,scsi=off,drive=drive0 -vnc 0.0.0.0:0 -net none

  I have tried many variations including different models of -machine
  and -cpu for the guest with no visible difference.

  A native FreeBSD installation on a host with a v2 Xeon does not have
  the problem, nor do a paravirtualized FreeBSD guests under bhyve (the
  BSD legacy-free hypervisor) using the same FreeBSD disk images as on
  the Linux hosts. So it seems unlikely the cause is on the FreeBSD side
  of things.

  I would greatly appreciate any feedback or developer attention to
  this. I am happy to provide additional details, test patches, etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1329956/+subscriptions



Re: [Qemu-devel] incorrect memory size inside vm

2015-06-19 Thread Vasiliy Tolstov
2015-06-19 15:01 GMT+03:00 Andrey Korolyov and...@xdel.ru:

 Please don`t top-post in technical mailing lists. Do you have a
 crashkernel-reserved area there in the boot arguments? What
 distro/guest kernel are running in this guest and what is dimm
 configuration?


Disabling crashkernel in my case saves about 130-150 MB.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] incorrect memory size inside vm

2015-06-17 Thread Vasiliy Tolstov
2015-06-18 1:52 GMT+03:00 Andrey Korolyov and...@xdel.ru:
 Whoosh... technically it is possible but it would be an incompatible
 fork for the upstreams for both SeaBIOS and Qemu, because the generic
 way of plugging DIMMs in is available down to at least generic 2.6.32.
 Except may be Centos where broken kabi would bring great consequences,
 it may be better to just provide a backport repository with newer
 kernels, but it doesn`t sound very optimistic. For the history
 records, the initial hotplug support proposal provided by Vasilis
 Liaskovitis a couple of years ago worked in an exact way you are
 suggesting to, but its resurrection would mean emulator and rom code
 alteration, as I said above.


OK, I'll try to build the latest libvirt and check all OSes for memory
hotplug support =).

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] incorrect memory size inside vm

2015-06-17 Thread Vasiliy Tolstov
2015-06-18 1:40 GMT+03:00 Andrey Korolyov and...@xdel.ru:

 Yes, but I`m afraid that I don`t fully understand why do you need this
 when pure hotplug mechanism is available, aside may be nice memory
 stats from balloon and easy-to-use deflation. Just populate a couple
 of static dimms with small enough 'base' e820 memory and use  balloon
 on this setup, you`ll get the reserved memory footprint as small as it
 would be in setup with equal overall amount of memory populated via
 BIOS. For example, you may use -m 128 ... {handful amount of memory
 placed in memory slots} setup to achieve the thing you want.


I have Debian wheezy guests with 3.4 (or 3.2...) kernels and many others
like CentOS 6 (2.6.32 kernel), openSUSE, Ubuntu, and so on.
Does memory hotplug work with these distros (kernels)?

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] incorrect memory size inside vm

2015-06-17 Thread Vasiliy Tolstov
2015-06-17 19:26 GMT+03:00 Vasiliy Tolstov v.tols...@selfip.ru:
 This is bad news =( I have Debian wheezy, which has an old kernel...


Is it possible to get proper results with the balloon? For example by
patching qemu or something like that?


-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



[Qemu-devel] incorrect memory size inside vm

2015-06-17 Thread Vasiliy Tolstov
Hi. I have an issue with incorrect memory size inside a VM. I'm trying to
use the memory balloon (not memory hotplug, because I (maybe) have guests
without memory hotplug support).

When the domain is started with static memory everything works fine, but when
I specify in libvirt
memory = 16384, maxMemory = 16384 and currentMemory = 1024, the guest's
/proc/meminfo says it has only 603608 kB of memory. When I then set the memory
via virsh setmem to 2 GB, the guest sees only 1652184 kB of memory.

software versions
libvirt: 1.2.10
qemu: 2.3.0
Guest OS: centos 6.

qemu.log:
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
HOME=/root USER=root LOGNAME=root QEMU_AUDIO_DRV=none /usr/bin/kvm
-name 26543 -S -machine pc-i440fx-1.7,accel=kvm,usb=off -m 1024
-realtime mlock=off -smp 1,maxcpus=4,sockets=4,cores=1,threads=1 -uuid
4521fb01-c2ca-4269-d2d6-035fd910 -no-user-config -nodefaults
-chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/26543.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc
-no-shutdown -boot strict=on -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,num_queues=1,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive
file=/dev/vg4/26543,if=none,id=drive-scsi0-0-0-0,format=raw,cache=none,discard=unmap,aio=native,iops=5000
-device 
scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1
-drive if=none,id=drive-scsi0-0-1-0,readonly=on,format=raw -device
scsi-cd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,drive=drive-scsi0-0-1-0,id=scsi0-0-1-0
-netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=52 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:00:34:f7,bus=pci.0,addr=0x3,rombar=0
-chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/26543.agent,server,nowait
-device 
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0
-device usb-mouse,id=input0 -device usb-kbd,id=input1 -vnc
[::]:8,password -device VGA,id=video0,bus=pci.0,addr=0x2 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -object
rng-random,id=rng0,filename=/dev/random -device
virtio-rng-pci,rng=rng0,max-bytes=1024,period=2000,bus=pci.0,addr=0x7
-msg timestamp=on

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] incorrect memory size inside vm

2015-06-17 Thread Vasiliy Tolstov
2015-06-17 17:09 GMT+03:00 Andrey Korolyov and...@xdel.ru:
 The rest of visible memory is eaten by reserved kernel areas, for us
 this was a main reason to switch to a hotplug a couple of years ago.
 You would not be able to scale a VM by an order of magnitude with
 regular balloon mechanism without mentioned impact, unfortunately.
 Igor Mammedov posted hotplug-related patches for 2.6.32 a while ago,
 though RHEL6 never adopted them by some reason.


Hmm... Thanks for the info. Since which kernel version does memory hotplug work?

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] incorrect memory size inside vm

2015-06-17 Thread Vasiliy Tolstov
2015-06-17 18:38 GMT+03:00 Andrey Korolyov and...@xdel.ru:
 Currently QEMU memory hotplug should work with 3.8 and onwards.
 Mentioned patches are an adaptation for an older frankenkernel of 3.8`
 functionality.


This is bad news =( I have Debian wheezy, which has an old kernel...

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



[Qemu-devel] poor virtio-scsi performance

2015-06-08 Thread Vasiliy Tolstov
Hi all!

I am seeing poor performance from the virtio-scsi driver.
I did a few tests:
   Host machine: linux 3.19.1, QEMU emulator version 2.3.0
   Guest machine: linux 4.0.4

   part of domain xml:
<emulator>/usr/bin/kvm</emulator>
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
  <source dev='/dev/ram0'/>
  <backingStore/>
  <target dev='sda' bus='scsi'/>
  <alias name='scsi0-0-0-1'/>
  <address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>

I created /dev/ram0 by running `modprobe brd rd_size=$((5*1024*1024))` on the
host machine.

fio conf:
  [readtest]
  blocksize=4k
  filename=/dev/sdb (/dev/ram0 whe test from host machine)
  rw=randread
  direct=1
  buffered=0
  ioengine=libaio
  iodepth=32


results:
  from host:
bw=1594.6MB/s, iops=408196, clat=76usec
  from guest:
bw=398MB/s, iops=99720, clat=316usec

Both host and guest system I boot with `scsi_mod.use_blk_mq=Y`.

Why is there a 4x difference?!

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] poor virtio-scsi performance

2015-06-08 Thread Vasiliy Tolstov
2015-06-08 16:10 GMT+03:00 Andrey Korolyov and...@xdel.ru:
 You are probably hitting the (more or less fundamental) ceiling which
 was the reason for introducing dataplane backend recently, in other
 words, it does not matter how fast your backend is, the operation
 number will be limited by 50..100kIOPS without dataplane. As far as I
 remember the recent development made possible using it with
 virtio-scsi as well as with virtio-blk, so you`d possibly want to try
 it.


I can't find dataplane for virtio-scsi, only for virtio-blk.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] poor virtio-scsi performance

2015-06-08 Thread Vasiliy Tolstov
2015-06-08 16:46 GMT+03:00 Paolo Bonzini pbonz...@redhat.com:
 virtio-scsi dataplane is not yet 100% thread-safe, but in practice it
 should work in 2.3+.  It's definitely good enough for benchmarking.


Is the minimal qemu version 2.3, or do I need something from git?
Also, how can I enable dataplane?
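
For reference, here is a rough sketch of how I understand virtio-scsi
dataplane is wired up on the qemu command line; the iothread object and the
iothread property on virtio-scsi-pci are my assumption from the docs, not
something confirmed in this thread, and drive0/iothread0 are just example ids:

  # hypothetical minimal invocation enabling dataplane for virtio-scsi
  qemu-system-x86_64 ... \
      -object iothread,id=iothread0 \
      -device virtio-scsi-pci,id=scsi0,iothread=iothread0 \
      -device scsi-hd,bus=scsi0.0,drive=drive0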

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] Announcing qboot, a minimal x86 firmware for QEMU

2015-05-25 Thread Vasiliy Tolstov
2015-05-23 6:55 GMT+03:00 Kevin O'Connor ke...@koconnor.net:
 Out of curiosity, I ran some additional timing tests.  With SeaBIOS
 fully stripped down (via Kconfig), it takes ~20ms to get to the boot
 phase on my old AMD system.  Of that 20ms, ~7ms is to enable shadow
 ram, 2ms is to calibrate the cpu timestamp counter, 4ms is for pci
 init, and ~6ms is to make the shadow ram area read-only.  The time in
 the remaining parts of the SeaBIOS code is so small that it's hard to
 measure.


Can you share the config for SeaBIOS? As I understand it, I can safely remove
keyboard, PS/2, USB and ATA/AHCI support and leave only virtio (in the case of
using qemu to boot Linux/FreeBSD/Windows systems)?

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [Qemu-devel] can't boot from scsi http cdrom

2015-04-13 Thread Vasiliy Tolstov
2015-04-10 13:53 GMT+03:00 Paolo Bonzini pbonz...@redhat.com:
 Try using target='1' unit='0'.  SeaBIOS only boots from LUN0.


Thanks! This works fine. Last question: is it possible to create an
empty cdrom with type='network'?
I tried this, but libvirt complains with an error:
  <disk type='network' device='cdrom'>
    <driver name='qemu' type='raw'/>
    <target dev='sdb' bus='scsi' tray='open'/>
    <address type='drive' controller='0' target='1' bus='0' unit='1'/>
    <readonly/>
  </disk>



-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru



[Qemu-devel] can't boot from scsi http cdrom

2015-04-06 Thread Vasiliy Tolstov
Hi. I'm trying to boot from an http cdrom and can't do it in the case of scsi:
<disk type='network' device='cdrom'>
  <driver name='qemu' type='raw' cache='unsafe'/>
  <source protocol='http' name='/vps/rescue-4.5.2'>
    <host name='xx.xx' port='80'/>
  </source>
  <backingStore/>
  <target dev='sdb' bus='scsi'/>
  <readonly/>
  <alias name='scsi0-0-0-1'/>
  <address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>

but in case of ide:
<disk type='network' device='cdrom'>
  <driver name='qemu' type='raw' cache='unsafe'/>
  <source protocol='http' name='/vps/rescue-4.5.2'>
    <host name='xx.xx' port='80'/>
  </source>
  <backingStore/>
  <target dev='hda' bus='ide'/>
  <readonly/>
  <alias name='ide0-0-0'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>

all works fine.

libvirt - 1.2.10
qemu - 2.0.0
seabios - 1.7.5

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru



[Qemu-devel] qemu rdma live migration

2014-12-03 Thread Vasiliy Tolstov
Hello. I can't find info about how qemu does live migration via RDMA.
My specific question: does it need an ethernet connection for some info,
or can it use native RDMA and resolve nodes via GUIDs?

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru



Re: [Qemu-devel] qemu rdma live migration

2014-12-03 Thread Vasiliy Tolstov
2014-12-03 16:38 GMT+03:00 Dr. David Alan Gilbert dgilb...@redhat.com:
 The entire migration stream is passed over the RDMA; however
 I'm not clear how the address lookup works, what you pass to the
 migration command is the IP associated with the interface.

 Dave


Yes, I see this. So as I understand it, a working ethernet connection between
the nodes needs to be established before live migration. Thanks

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru



Re: [Qemu-devel] qemu rdma live migration

2014-12-03 Thread Vasiliy Tolstov
2014-12-03 16:55 GMT+03:00 Dr. David Alan Gilbert dgilb...@redhat.com:
 I don't think it needs ethernet; it needs an IP address but that
 can be IPoIB.


IPoIB is not very good =( and it is not needed for anything else.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru



[Qemu-devel] strange behavior when using iotune

2014-11-24 Thread Vasiliy Tolstov
Hi. I'm trying to throttle a disk via total_iops_sec in libvirt.
libvirt 1.2.10
qemu 2.0.0

First, when I run the VM with a predefined
<total_iops_sec>5000</total_iops_sec> I get around 11000 iops (dd
if=/dev/sda bs=512K of=/dev/null).
After that I try to set --total_iops_sec 10 via virsh, wanting to
reduce the io, but nothing changes.
After that I reboot the VM with <total_iops_sec>10</total_iops_sec> and
get very slow io, but that is expected. However, libvirt says that I get
around 600 iops.

My questions are: why can't I change total_iops_sec at run time, and
why don't the entered values match the values reported by libvirt?

Thanks for any suggestions and any help.
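
For reference, the run-time change above was attempted roughly like this
(the domain name and target device are just examples from my setup, and
whether blkdeviotune behaves as expected here is exactly what I'm asking):

  # hypothetical run-time throttle change via libvirt
  virsh blkdeviotune 11151 sda --total_iops_sec 10 --live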

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru



Re: [Qemu-devel] strange behavior when using iotune

2014-11-24 Thread Vasiliy Tolstov
2014-11-24 16:57 GMT+03:00 Andrey Korolyov and...@xdel.ru:
 Hello Vasiliy,

 can you please check actual values via qemu-monitor-command domid '{
 "execute": "query-block" }', just to be sure to pin the potential
 problem to the emulator itself?

virsh qemu-monitor-command 11151 '{ "execute": "query-block" }' | jq '.'
{
  "return": [
    {
      "io-status": "ok",
      "device": "drive-scsi0-0-0-0",
      "locked": false,
      "removable": false,
      "inserted": {
        "iops_rd": 0,
        "image": {
          "virtual-size": 21474836480,
          "filename": "/dev/vg3/11151",
          "format": "raw",
          "actual-size": 0,
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": false,
        "backing_file_depth": 0,
        "drv": "raw",
        "iops": 5000,
        "bps_wr": 0,
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "iops_max": 500,
        "file": "/dev/vg3/11151",
        "encryption_key_missing": false
      },
      "type": "unknown"
    }
  ],
  "id": "libvirt-22"
}

I used this site:
http://www.ssdfreaks.com/content/599/how-to-convert-mbps-to-iops-or-calculate-iops-from-mbs
root@11151:~# dd if=/dev/sda bs=4K of=/dev/null
5242880+0 records in
5242880+0 records out
21474836480 bytes (21 GB) copied, 45.2557 s, 475 MB/s

so in the case of 5000 iops I should get only 19-20 MB/s
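
A quick back-of-the-envelope check of that number (assuming 4 KiB requests,
as in the dd command above):

  # 5000 IOPS * 4096 bytes per request, converted to MiB/s
  echo $((5000 * 4096 / 1024 / 1024))   # prints 19, i.e. the expected 19-20 MB/s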


-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru



Re: [Qemu-devel] strange behavior when using iotune

2014-11-24 Thread Vasiliy Tolstov
2014-11-24 17:18 GMT+03:00 Andrey Korolyov and...@xdel.ru:
 I am not sure for friendliness of possible dd interpretations for new
 leaky bucket mechanism, as its results can be a little confusing even
 for fio (all operations which are above the limit for long-running
 test will have 250ms latency, putting down score numbers in most
 popular tests like UnixBench), also w/o sync options these results are
 almost meaningless. May be fio with direct=1|fsync=1 (for fs) will
 give a more appropriate numbers in your case.


My mistake. I forgot to add iflag=direct to dd. Now everything is fine: I get
around 20 MB/s, which corresponds to 5000 iops.
Thanks.
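
For completeness, the corrected test command looks roughly like this (same
device and block size as before; only the direct-I/O flag is new):

  # bypass the guest page cache so the throttle is actually what gets measured
  dd if=/dev/sda bs=4K of=/dev/null iflag=direct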

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru



Re: [Qemu-devel] ipv6 slirp network

2014-07-22 Thread Vasiliy Tolstov
2014-07-22 0:49 GMT+04:00 Samuel Thibault samuel.thiba...@gnu.org:
 Alternatively, you can track the parameters branch, which I don't
 regenerate.


Thanks. Now everything works fine.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru



[Qemu-devel] ipv6 slirp network

2014-07-21 Thread Vasiliy Tolstov
Hello. I currently have a slirp qemu network with an IPv4 address, but for
tests I also need IPv6 addresses in the slirp network. How can I provide
these args to qemu, via libvirt or (preferably) via qemu args?
I found some commits about IPv6 slirp networking but can't find the args
needed to enable it.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru



Re: [Qemu-devel] ipv6 slirp network

2014-07-21 Thread Vasiliy Tolstov
2014-07-21 15:41 GMT+04:00 Samuel Thibault samuel.thiba...@gnu.org:
 ? The support was submitted to the list, but not commited.
 You can find the patches on
 http://lists.gnu.org/archive/html/qemu-devel/2014-03/msg05731.html
 and later.
 Unfortunately, nobody has yet found the time to review them.
 Documentation is added by patch 18, you can choose the prefix being
 exposed to userland just like with ipv4.


In the case of IPv4 I have some default settings like the network, DNS and so
on. In the case of IPv6 do I have some defaults?

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru



Re: [Qemu-devel] ipv6 slirp network

2014-07-21 Thread Vasiliy Tolstov
2014-07-21 15:49 GMT+04:00 Samuel Thibault samuel.thiba...@gnu.org:
 Yes, see the added documentation:

 +@item ip6-net=@var{addr}[/@var{int}]
 +Set IPv6 network address the guest will see. Optionally specify the prefix
 +size, as number of valid top-most bits. Default is fec0::/64.
 +
 +@item ip6-host=@var{addr}
 +Specify the guest-visible IPv6 address of the host. Default is the 2nd IPv6 
 in
 +the guest network, i.e. ::2.

 +@item ip6-dns=@var{addr}
 +Specify the guest-visible address of the IPv6 virtual nameserver. The address
 +must be different from the host address. Default is the 3rd IP in the guest
 +network, i.e. ::3.

 Most probably you'll want to set ip6-net to something else than
 fec0::/64 since OSes usually prefer IPv4 over fec0::/64.


Big thanks! I'll try to apply the patches and check.
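
Based on the documentation quoted above, my first attempt will look roughly
like this (the ip6-* options exist only with the patches applied, and the
prefix is just an example picked to avoid the fec0::/64 preference issue
mentioned above):

  # hypothetical slirp invocation with an explicit IPv6 prefix
  qemu-system-x86_64 ... \
      -netdev user,id=user.0,ip6-net=fd00::/64 \
      -device virtio-net,netdev=user.0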

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru



Re: [Qemu-devel] ipv6 slirp network

2014-07-21 Thread Vasiliy Tolstov
Is it possible to get these patches via a git branch or in some other way?
As I see it, I can't get them via email because of the HTML contents =(

2014-07-21 15:55 GMT+04:00 Vasiliy Tolstov v.tols...@selfip.ru:
 2014-07-21 15:49 GMT+04:00 Samuel Thibault samuel.thiba...@gnu.org:
 Yes, see the added documentation:

 +@item ip6-net=@var{addr}[/@var{int}]
 +Set IPv6 network address the guest will see. Optionally specify the prefix
 +size, as number of valid top-most bits. Default is fec0::/64.
 +
 +@item ip6-host=@var{addr}
 +Specify the guest-visible IPv6 address of the host. Default is the 2nd IPv6 
 in
 +the guest network, i.e. ::2.

 +@item ip6-dns=@var{addr}
 +Specify the guest-visible address of the IPv6 virtual nameserver. The 
 address
 +must be different from the host address. Default is the 3rd IP in the guest
 +network, i.e. ::3.

 Most probably you'll want to set ip6-net to something else than
 fec0::/64 since OSes usually prefer IPv4 over fec0::/64.


 Big thanks! I'm try apply patches and check.

 --
 Vasiliy Tolstov,
 e-mail: v.tols...@selfip.ru
 jabber: v...@selfip.ru



-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru



Re: [Qemu-devel] ipv6 slirp network

2014-07-21 Thread Vasiliy Tolstov
2014-07-21 16:06 GMT+04:00 Samuel Thibault samuel.thiba...@gnu.org:
 I have pushed it to http://dept-info.labri.fr/~thibault/qemu-ipv6 , in
 the tosubmit branch.


Thanks!

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru



Re: [Qemu-devel] ipv6 slirp network

2014-07-21 Thread Vasiliy Tolstov
2014-07-21 16:06 GMT+04:00 Samuel Thibault samuel.thiba...@gnu.org:
 I have pushed it to http://dept-info.labri.fr/~thibault/qemu-ipv6 , in
 the tosubmit branch.


Inside packer (I'm trying to build some CentOS 7 boxes) I get the error:
Qemu stderr: qemu-system-x86_64: slirp/tcp_input.c:1543: tcp_mss:
Assertion `0' failed.
I disabled debug when patching. Do I need to re-enable it and send some
debug output?

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru



Re: [Qemu-devel] ipv6 slirp network

2014-07-21 Thread Vasiliy Tolstov
2014-07-21 18:29 GMT+04:00 Samuel Thibault samuel.thiba...@gnu.org:
 Uh?!  Does the switch statement properly have both AF_INET and AF_INET6
 cases?

 A gdb backtrace would also be useful.


I compiled 2.1.0-rc2 with
./configure --cc=x86_64-pc-linux-gnu-gcc --prefix=/usr
--libdir=/usr/lib64 --localstatedir=/var --python=/usr/bin/python2
--sysconfdir=/etc --enable-cap-ng --enable-curl --enable-curses
--enable-fdt --enable-guest-agent --enable-guest-base --enable-kvm
--enable-linux-user --enable-modules --enable-seccomp
--enable-stack-protector --enable-system --enable-tcg-interpreter
--enable-tpm --enable-user --enable-uuid --enable-vhdx
--enable-vhost-net --enable-vhost-scsi --disable-brlapi
--disable-glusterfs --disable-rdma --disable-sparse --disable-strip
--disable-werror --disable-xen --disable-xen-pci-passthrough
--disable-xfsctl --with-system-pixman --without-vss-sdk
--without-win-sdk --iasl=/usr/bin/iasl --enable-lzo --enable-snappy
--enable-linux-aio --enable-virtio-blk-data-plane --disable-bluez
--enable-vnc-tls --enable-vnc-ws --disable-libiscsi --enable-vnc-jpeg
--disable-netmap --disable-libnfs --disable-glx --enable-vnc-png
--disable-quorum --disable-rbd --disable-vnc-sasl --disable-sdl
--disable-smartcard-nss --enable-spice --disable-libssh2
--disable-libusb --enable-usb-redir --disable-vde --enable-virtfs
--enable-attr --disable-debug-info --disable-gtk --audio-drv-list=
--target-list=x86_64-linux-user,x86_64-softmmu,i386-linux-user,i386-softmmu

Yes, the tcp_mss function has both:
    switch (so->so_ffamily) {
    case AF_INET:
        mss = min(IF_MTU, IF_MRU) - sizeof(struct tcphdr)
              + sizeof(struct ip);
        break;
    case AF_INET6:
        mss = min(IF_MTU, IF_MRU) - sizeof(struct tcphdr)
              + sizeof(struct ip6);
        break;
    default:
        assert(0);
        break;
    }


-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru



Re: [Qemu-devel] ipv6 slirp network

2014-07-21 Thread Vasiliy Tolstov
2014-07-21 18:29 GMT+04:00 Samuel Thibault samuel.thiba...@gnu.org:
 A gdb backtrace would also be useful.


Can you give me some info on how I can get it from the running qemu? As I
understand it, I need to load the debugging symbols in gdb (I have stripped
the binary), and after that attach to the PID...?
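
To be concrete, my plan is roughly the following (the paths and the PID are
placeholders, and keeping an unstripped copy of the binary around is my
assumption, since the installed one is stripped):

  # attach gdb to the running qemu and grab a backtrace when the assertion fires
  gdb -p <qemu-pid>
  (gdb) symbol-file /path/to/unstripped/qemu-system-x86_64   # load debug symbols
  (gdb) continue
  (gdb) thread apply all bt   # once the assert triggers and qemu stops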

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru



Re: [Qemu-devel] ipv6 slirp network

2014-07-21 Thread Vasiliy Tolstov
2014-07-21 18:52 GMT+04:00 Samuel Thibault samuel.thiba...@gnu.org:
 For instance, yes, but you probably want to try my patch before.


Yes, the patch fixes the issue, and now everything works fine. Last question:
is it possible to enable/disable the IPv4/IPv6 slirp stacks separately? This
would be a very useful feature for testing dual-stack and single-stack apps.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru



Re: [Qemu-devel] ipv6 slirp network

2014-07-21 Thread Vasiliy Tolstov
2014-07-21 23:49 GMT+04:00 Samuel Thibault samuel.thiba...@gnu.org:
 There's no current support for this, but that could be useful to add
 indeed.


I hit another issue, but it may not be related to the patch. In CentOS 7
(testing other boxes) I run qemu with:
/usr/bin/qemu-system-x86_64 -device virtio-scsi-pci,id=scsi0 -device
scsi-hd,bus=scsi0.0,drive=drive0 -device virtio-net,netdev=user.0
-drive 
if=none,cache=unsafe,id=drive0,discard=unmap,file=output/centos-7-x86_64-qemu/centos-7-x86_64.raw
-display none -netdev user,id=user.0 -boot once=d -redir tcp:3213::22
-m size=1024Mib -name centos-7-x86_64 -machine type=pc-1.0,accel=kvm
-cdrom 
/home/vtolstov/devel/vtolstov/packer-builder/templates/centos/packer_cache/cffdc67bdc73fef02507fdcaff2d9fb7cbc8780ac3c17e862a5451ddbf8bf39d.iso
-vnc 0.0.0.0:47

and try to ssh to localhost -p 3213, but I get a timeout. In tcpdump on the
host I see packets being sent but none received. Inside the qemu VM, port 22
is listening on both IPv4 and IPv6, and in tcpdump inside the VM I see
incoming packets but no outgoing ones.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru



Re: [Qemu-devel] ipv6 slirp network

2014-07-21 Thread Vasiliy Tolstov
2014-07-22 0:13 GMT+04:00 Samuel Thibault samuel.thiba...@gnu.org:
 It is related to the patch.  I have fixed it, and pushed a newer
 tosubmit branch.


There is nothing new in the tosubmit branch.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru



Re: [Qemu-devel] Super nested KVM

2014-07-04 Thread Vasiliy Tolstov
2014-07-04 1:00 GMT+04:00 Richard W.M. Jones rjo...@redhat.com:
 Well I got bored with the TCG test after about 4 hours.  It appears to
 hang launching the L3 guest, reproducible on two different hosts both
 running qemu 2.0.0.


In my tests qemu TCG sometimes fails when running the first VM (kernel 3.12).

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru



[Qemu-devel] strange kernel panic

2014-03-11 Thread Vasiliy Tolstov
] Stack:
[   14.640174]  88001faefd70 88001e025280 0021
1000
[   14.640174]  88001f9d5e00 ea70ebc0 88001faefe58
8110a1ca
[   14.640174]  88001f9d5e10 88001faefe90 88001faefea0

[   14.640174] Call Trace:
[   14.640174]  [8110a1ca] generic_file_aio_read+0x2ca/0x700
[   14.640174]  [811a11f6] blkdev_aio_read+0x46/0x70
[   14.640174]  [8116c355] do_sync_read+0x55/0x90
[   14.640174]  [8116c9e0] vfs_read+0x90/0x160
[   14.640174]  [8116d4e4] SyS_read+0x44/0xa0
[   14.640174]  [816ab07d] system_call_fastpath+0x1a/0x1f
[   14.640174] Code: 38 31 c0 66 66 90 c6 01 00 66 66 90 85 c0 0f 85
bf 00 00 00 49 63 fc 48 8d 7c 39 ff 48 31 f9 48 f7 c1 00 f0 ff ff 74
11 66 66 90 c6 07 00 66 66 90 85 c0 0f 85 c6 00 00 00 65 ff 04 25 e0
c7 00
[   14.640174] Kernel panic - not syncing: Machine halted.
[   14.644148] CPU: 1 PID: 126 Comm: systemd-udevd Not tainted 3.13.3 #2
[   14.644148] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[   14.644148] task: 88001fa42f80 ti: 88001fa84000 task.ti:
88001fa84000
[   14.644148] RIP: 0010:[81109df8]  [81109df8]
file_read_actor+0x58/0x160
[   14.644148] RSP: 0018:88001fa85d60  EFLAGS: 00010206
[   14.644148] RAX:  RBX: 88001fa85e10 RCX: 100f
[   14.644148] RDX:  RSI: ea708340 RDI: 7f94d60cb037
[   14.644148] RBP: 88001fa85d90 R08: 0002 R09: ea70835c
[   14.644148] R10: 003b R11: 0001 R12: 1000
[   14.644148] R13: 7000 R14: 88001f9d3900 R15: 1000
[   14.640174] Shutting down cpus with NMI
[   14.644148] FS:  7f94d60dc7c0() GS:88001f50()
knlGS:
[   14.644148] CS:  0010 DS:  ES:  CR0: 80050033
[   14.644148] CR2:  CR3: 1fa67000 CR4: 06e0
[   14.644148] Stack:
[   14.644148]  88001fa85d70 88001e2522c0 0039
1000
[   14.644148]  88001f9d3900 ea708340 88001fa85e58
8110a1ca
[   14.644148]  88001f9d3910 88001fa85e90 88001fa85ea0

[   14.644148] Call Trace:
[   14.644148]  [8110a1ca] generic_file_aio_read+0x2ca/0x700
[   14.644148]  [811a11f6] blkdev_aio_read+0x46/0x70
[   14.644148]  [8116c355] do_sync_read+0x55/0x90
[   14.644148]  [8116c9e0] vfs_read+0x90/0x160
[   14.644148]  [8116d4e4] SyS_read+0x44/0xa0
[   14.644148]  [816ab07d] system_call_fastpath+0x1a/0x1f
[   14.644148] Code: 38 31 c0 66 66 90 c6 01 00 66 66 90 85 c0 0f 85
bf 00 00 00 49 63 fc 48 8d 7c 39 ff 48 31 f9 48 f7 c1 00 f0 ff ff 74
11 66 66 90 c6 07 00 66 66 90 85 c0 0f 85 c6 00 00 00 65 ff 04 25 e0
c7 00

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru



Re: [Qemu-devel] examples or tutorial/docs for writing block drivers for qemu

2013-12-25 Thread Vasiliy Tolstov
2013/12/24 Hani Benhabiles kroo...@gmail.com:

 I haven't taken the time to look at it yet, but there is a talk from this 
 year's
 KVM Forum (Implementing New Block Drivers: A QEMU Developer Primer - Jeff 
 Cody)
 that could help you.


Thank you!

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru


