[Qemu-devel] [Bug 1228285] [NEW] e1000 nic TCP performances

2013-09-20 Thread Vincent Autefage
Public bug reported:

Hi,

Here is the context :

$ qemu -name A -m 1024 -net nic,vlan=0,model=e1000 -net socket,vlan=0,listen=127.0.0.1:7000
$ qemu -name B -m 1024 -net nic,vlan=0,model=e1000 -net socket,vlan=0,connect=127.0.0.1:7000

The TCP bandwidth is really low:

. Iperf3 reports about 30 Mbit/s
. NetPerf reports about 50 Mbit/s


With UDP, there is no problem at all:

. Iperf3 reports about 1 Gbit/s
. NetPerf reports about 950 Mbit/s
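
For reference, a typical way to take such measurements between the two guests is sketched below; the guest IP address and the exact iperf3 options are assumptions, not the reporter's actual commands.

B$ iperf3 -s
A$ iperf3 -c 10.0.0.2            (TCP test, ~30 Mbit/s in this setup)
A$ iperf3 -c 10.0.0.2 -u -b 1G   (UDP test, ~1 Gbit/s in this setup)

(Here 10.0.0.2 is an assumed address for guest B.)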


I've noticed this behaviour only with the e1000 NIC, not with other
models (rtl8139, virtio, etc.).
I'm using the current git master of QEMU.


Thanks in advance.

See you,
Vince

** Affects: qemu
 Importance: Undecided
 Status: New




[Qemu-devel] [Bug 1073585] [NEW] Deleting UDP socket in monitor mode

2012-10-31 Thread Vincent Autefage
Public bug reported:

Hi,

Here is the problem.
I start an empty QEMU in monitor mode :

QEMU 1.1.2 monitor - type 'help' for more information
Warning: vlan 0 is not connected to host network
(qemu) info network
VLAN 0 devices:
  e1000.0: type=nic,model=e1000,macaddr=a2:00:00:00:00:01
Devices not on any VLAN:
(qemu) host_net_add socket 
vlan=0,name=socket.0,udp=127.0.0.1:7000,localaddr=127.0.0.1:7001
(qemu) info network 
VLAN 0 devices:
  e1000.0: type=nic,model=e1000,macaddr=a2:00:00:00:00:01
  socket.0: type=socket,socket: udp=127.0.0.1:7000
Devices not on any VLAN:
(qemu) host_net_remove 0 socket.0
invalid host network device socket.0
(qemu) info network 
VLAN 0 devices:
  e1000.0: type=nic,model=e1000,macaddr=a2:00:00:00:00:01
  socket.0: type=socket,socket: udp=127.0.0.1:7000


I cannot delete the socket (removal does work with TCP sockets, however).
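
For comparison, the equivalent sequence with a TCP socket backend is expected to succeed (a sketch based on the statement above, not a captured transcript):

(qemu) host_net_add socket vlan=0,name=socket.1,listen=127.0.0.1:7000
(qemu) host_net_remove 0 socket.1
(qemu) info network            (socket.1 is gone; only e1000.0 remains)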

Thanks!

Vince

** Affects: qemu
 Importance: Undecided
 Status: New




[Qemu-devel] [Bug 1003054] Re: Socket not closed when a connection ends

2012-06-07 Thread Vincent Autefage
** Changed in: qemu
       Status: New => Fix Released




[Qemu-devel] [Bug 1003054] Re: Socket not closed when a connection ends

2012-05-29 Thread Vincent Autefage
This causes a serious packet-duplication problem when one of the guests reboots:

A [192.168.0.1] -- B [192.168.0.2]


A# ping 192.168.0.1
PING 192.168.0.1 56(84) bytes of data
64 bytes from 192.168.0.1: icmp_req=1 ttl=64 time=3.82 ms
64 bytes from 192.168.0.1: icmp_req=2 ttl=64 time=0.344 ms
64 bytes from 192.168.0.1: icmp_req=3 ttl=64 time=0.325 ms

B# reboot

A# ping 192.168.0.1 
PING 192.168.0.1 56(84) bytes of data
64 bytes from 192.168.0.1: icmp_req=1 ttl=64 time=3.82 ms
64 bytes from 192.168.0.1: icmp_req=1 ttl=64 time=3.82 ms (DUP!)
64 bytes from 192.168.0.1: icmp_req=2 ttl=64 time=0.344 ms
64 bytes from 192.168.0.1: icmp_req=2 ttl=64 time=0.344 ms (DUP!)
64 bytes from 192.168.0.1: icmp_req=3 ttl=64 time=0.325 ms
64 bytes from 192.168.0.1: icmp_req=3 ttl=64 time=0.325 ms (DUP!)

Vince




[Qemu-devel] [Bug 1003054] [NEW] Socket not closed when a connection ends

2012-05-22 Thread Vincent Autefage
Public bug reported:

Hi,

I've noticed in the QEMU monitor that when a TCP connection between two
QEMU virtual machines is closed on one side, the other side is not
closed. As a consequence, the network behaviour is completely broken
after a reconnection.

For instance, consider two virtual machines:

$ qemu -name A -net nic,vlan=0 -net socket,vlan=0,listen=127.0.0.1:7000
$ qemu -name B -net nic,vlan=0 -net socket,vlan=0,connect=127.0.0.1:7000

If the socket of B is closed (on error, or because the machine goes down),
the socket on A is not closed:

B % host_net_remove 0 socket.0

A % info network
  e1000.0: ...
  socket.0: ... (The removed connection)

B % host_net_add socket vlan=0,connect=127.0.0.1:7000

A % info network
  e1000.0: ...
  socket.0: ...  (The removed connection)
  socket.1: ...  (The new connection)

Because no close is ever performed on A's socket, the new communication
between A and B is corrupted (duplicated packets, invalid transmissions,
etc.).

Conversely, when the close is performed by A, B should detect a problem
on the socket and retry the connection; unfortunately, this is not the
case either.


These two problems break the ability to reconfigure a QEMU network
topology dynamically, which is a serious issue for the development of
network tools based on QEMU.
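
A possible manual workaround for the first problem (an untested sketch, not a confirmed fix) is to drop the stale connection on A from its monitor before B reconnects:

A % host_net_remove 0 socket.0
A % info network            (after B reconnects, only the new connection should be listed)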


Thanks a lot.
Vince

** Affects: qemu
 Importance: Undecided
 Status: New




[Qemu-devel] [Bug 938937] [NEW] Slirp -- Abort when operate dhclient

2012-02-22 Thread Vincent Autefage
Public bug reported:

Hi,

Let's consider the following line:

$ qemu -enable-kvm -name opeth -hda debian.img -k fr -localtime -m 512
-net user,vlan=0,net=192.160.0.0/24 -net
nic,vlan=0,model=$model,macaddr=a2:00:00:00:00:10

In the guest virtual machine, I then query the built-in Slirp DHCP
server:

guest@debian$ dhclient eth0


The QEMU process crashes and reports this:

qemu-system-x86_64: slirp/arp_table.c:41: arp_table_add: Assertion
`(ip_addr & (__extension__ ({ register unsigned int __v, __x = (~(0xf <<
28)); if (__builtin_constant_p (__x)) __v = ((((__x) & 0xff000000) >>
24) | (((__x) & 0x00ff0000) >> 8) | (((__x) & 0x0000ff00) << 8) |
(((__x) & 0x000000ff) << 24)); else __asm__ ("bswap %0" : "=r" (__v) :
"0" (__x)); __v; }))) != 0' failed.

This is a new bug; I had never seen it before version 1.0 (also reproduced
with the latest git version).
Tested on both 64-bit and 32-bit systems.
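
Until this is fixed, one hedged workaround is to avoid dhclient and configure the guest interface statically; the addresses below follow slirp's usual conventions for net=192.160.0.0/24 and are assumptions, not values taken from this report:

guest@debian$ ifconfig eth0 192.160.0.15 netmask 255.255.255.0
guest@debian$ route add default gw 192.160.0.2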


See you,
Vince

** Affects: qemu
 Importance: Undecided
 Status: New




[Qemu-devel] [Bug 938945] [NEW] Slirp cannot be forward and makes segmentation faults

2012-02-22 Thread Vincent Autefage
Public bug reported:

Hi,

Let's consider the following lines:

$ qemu -enable-kvm -name opeth -hda debian1.img -k fr -localtime -m 512
-net user,vlan=0 -net nic,vlan=0,model=$model,macaddr=a2:00:00:00:00:10
-net socket,vlan=1,listen=127.0.0.1:5900 -net
nic,vlan=1,model=$model,macaddr=a2:00:00:00:00:04

$ qemu -enable-kvm -name nightwish -hda debian2.img -k fr -localtime -m
512 -net socket,vlan=0,connect=127.0.0.1:5900 -net
nic,vlan=0,model=$model,macaddr=a2:00:00:00:00:02


My configuration is straightforward and should allow packets to be forwarded
between the Slirp network and the guest nightwish.
But when I try the following on nightwish:

$ wget www.qemu.org

The opeth QEMU process crashes with a segmentation fault: 11586 Segmentation Fault

This does not happen every time... When the segfault does not occur,
nightwish still cannot establish any connection to the Internet :(
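
For completeness, reaching the Internet from nightwish through opeth also requires opeth to forward and NAT between its two interfaces; a minimal sketch inside opeth (interface names are assumptions) would be:

opeth# sysctl -w net.ipv4.ip_forward=1
opeth# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

This is independent of the segfault itself, but it rules out a plain routing problem when the crash does not occur.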


Thanks
Vince

** Affects: qemu
 Importance: Undecided
 Status: New




Re: [Qemu-devel] [Bug 899140] Re: Problem with Linux Kernel Traffic Control

2012-02-10 Thread Vincent Autefage
Hi,

No, I haven't tried that yet, but I will :)
The problem is not present with other NICs, so I am using a different one
for the moment.

Vincent


On 09/02/2012 20:05, Henrique Rodrigues wrote:
 Vincent,

 Have you tried to change the mtu of the tbf qdisc? The traffic control should 
 work well if you set it to 65kb.
 I believe that this is happening due to the napi gro functionality. I'm still 
 not sure, but the problem seems to be related to that.

 Henrique
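
If the NAPI GRO hypothesis mentioned above is right, a quick check would be to disable GRO on the guest interface and re-run the measurement (an untested sketch, not something reported in this thread):

guest# ethtool -K eth0 gro off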





Re: [Qemu-devel] [Bug 899140] Re: Problem with Linux Kernel Traffic Control

2012-01-30 Thread Vincent Autefage
Hi,

The problem seems to come from the emulation of the Intel e1000
network card (which is the default model used by QEMU).
If you use another model, the problem does not appear ;)

Vince

On 29/01/2012 05:49, Henrique Rodrigues wrote:
 Hi guys,

 I'm having the same problem with a ubuntu 11.04 (natty) host. I tried to
 set the rate controllers using tc both at the host and inside the guest
 i.e.:

 tc qdisc add vnic0 root tbf rate 20mbit burst 20480 latency 50ms (host - to 
 control the traffic going to the guest vm) and
 tc qdisc add eth0 root tbf rate 20mbit burst 20480 latency 50ms (guest)

 And the results are the same reported initially: ~140kbit/sec. I also
 tried to use policing filters at the host but I got the same results.

 However, if I use htb I can get reasonable throughputs (~20mbit). I used
 these commands (both for host and guest):

 tc qdisc add dev <DEV> root handle 1: htb default 255
 tc class add dev <DEV> parent 1: classid 1:1 htb rate 20mbit burst 20480
 tc filter add dev <DEV> parent 1: prio 255 proto ip u32 match ip src
 0.0.0.0/0 flowid 1:1

 It seems that the problem is related with the root qdisc only. Have you
 guys found an answer for this?





Re: [Qemu-devel] [Bug 899140] Re: Problem with Linux Kernel Traffic Control

2011-12-15 Thread Vincent Autefage
Ok,

So e1000.c and e1000_hw.h are absolutely identical between
the original and the Ubuntu version...
The only difference which refers to the *Intel e1000* in the whole
source tree is this one:


diff -ru qemu//hw/pc_piix.c qemu-kvm-0.14.0+noroms//hw/pc_piix.c
--- qemu//hw/pc_piix.c  2011-12-15 15:37:28.53929 +0100
+++ qemu-kvm-0.14.0+noroms//hw/pc_piix.c    2011-02-22 14:34:38.0 +0100

@@ -123,7 +141,7 @@
         if (!pci_enabled || (nd->model && strcmp(nd->model, "ne2k_isa") == 0))
             pc_init_ne2k_isa(nd);
         else
-            pci_nic_init_nofail(nd, "e1000", NULL);
+            pci_nic_init_nofail(nd, "rtl8139", NULL);
     }

     if (drive_get_max_bus(IF_IDE) >= MAX_IDE_BUS) {


Vincent


On 15/12/2011 09:07, Stefan Hajnoczi wrote:
 On Wed, Dec 14, 2011 at 02:42:12PM -, Vincent Autefage wrote:
 Ok so the *Intel e1000* seems the only card which is impacted by the
 bug.
 Let me recap with a summary of your debugging:

 QEMU 0.14.0, 0.15.0, and 1.0 built from source all have poor network
 performance below a 20 Mbit/s limit set with tc inside the guest.

 Ubuntu's 0.14.0 QEMU package does not have poor network performance.

 This problem only occurs with the emulated e1000 device.  All other
 emulated NICs operate correctly.

 Now you could diff the e1000 emulation code to get the code changes
 between vanilla and Ubuntu:

   $ diff -u qemu-0.14.0-vanilla/hw/e1000.c qemu-0.14.0-ubuntu/hw/e1000.c

 (It's possible that there are no significant changes and this bug is
 caused by something outside e1000.c, but this is the place to check first.)

 Or you could even try copying Ubuntu's e1000.c into the vanilla QEMU
 source tree and retesting to see if the behavior changes.

 Stefan





Re: [Qemu-devel] [Bug 899140] Re: Problem with Linux Kernel Traffic Control

2011-12-15 Thread Vincent Autefage
Here is the problem!

The Ubuntu version only works because it does not use an *Intel e1000* by
default but an *rtl8139*.
Therefore, the e1000 problem is present in *all* versions (both the
original and the Ubuntu ones).

Thus, the file *e1000.c* must contain some code which causes the bad
TC behaviour.

Vincent

On 15/12/2011 17:09, Stefan Hajnoczi wrote:
 On Thu, Dec 15, 2011 at 3:03 PM, Vincent Autefage
 899...@bugs.launchpad.net  wrote:
 Ok,

 So the e1000.c and the e1000_hw.h have absolutely no difference between
 the original and the ubuntu version...
 The only differences witch refers to the *Intel e1000* in the wall
 sources is this one :


 diff -ru qemu//hw/pc_piix.c qemu-kvm-0.14.0+noroms//hw/pc_piix.c
 --- qemu//hw/pc_piix.c  2011-12-15 15:37:28.53929 +0100
 +++ qemu-kvm-0.14.0+noroms//hw/pc_piix.c    2011-02-22 14:34:38.0 +0100

 @@ -123,7 +141,7 @@
          if (!pci_enabled || (nd->model && strcmp(nd->model, "ne2k_isa") == 0))
              pc_init_ne2k_isa(nd);
          else
 -            pci_nic_init_nofail(nd, "e1000", NULL);
 +            pci_nic_init_nofail(nd, "rtl8139", NULL);
      }

      if (drive_get_max_bus(IF_IDE) >= MAX_IDE_BUS) {
 That looks like it is only changing the default NIC from e1000 to
 rtl8139.

 Stefan





Re: [Qemu-devel] [Bug 899140] Re: Problem with Linux Kernel Traffic Control

2011-12-14 Thread Vincent Autefage
Well,

I have checked the differences between the git repository (v0.14.0) and the
Ubuntu version (v0.14.0) and generated a diff file.
The patch contains about 5000 lines...

What is the next step?

Vincent


On 05/12/2011 12:11, Stefan Hajnoczi wrote:
 On Mon, Dec 5, 2011 at 10:45 AM, Vincent Autefage
 899...@bugs.launchpad.net  wrote:
 So we have another problem...
 The thing is that the 0.14.0 (and all 0.14.0 rc) built from GIT has the
 same problem.
 However, the package 0.14.0 from Ubuntu does not has this bug...
 Okay, that's actually a good thing because the issue is now isolated
 to two similar builds: 0.14.0 from source and 0.14.0 from Ubuntu.

 Either there is an environmental difference in the build configuration
 or Ubuntu has applied patches on top of vanilla 0.14.0.

 I think the next step is to grab the Ubuntu 0.14.0 source package and
 rebuild it to confirm that it does *not* have the bug.

 Then it's just a matter of figuring out what the difference is by a
 (manual) bisection.

 Are you using qemu-kvm?  I found Ubuntu's 0.14.0-based package here:
 http://packages.ubuntu.com/natty/qemu-kvm

 Stefan





Re: [Qemu-devel] [Bug 899140] Re: Problem with Linux Kernel Traffic Control

2011-12-14 Thread Vincent Autefage
I've just checked the problem with a *ne2k_pci* instead of the default
e1000, and the problem does not occur with the *ne2k_pci*... (version
0.14-1 of QEMU)

I'm going to check other cards right now
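
For reference, the NIC model under test is selected with the legacy -net nic model= option; illustrative command lines (not the exact ones used here) look like:

$ qemu -hda debian.img -m 512 -net nic,vlan=0,model=ne2k_pci -net user,vlan=0
$ qemu -hda debian.img -m 512 -net nic,vlan=0,model=rtl8139 -net user,vlan=0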

Vincent


On 05/12/2011 12:11, Stefan Hajnoczi wrote:
 On Mon, Dec 5, 2011 at 10:45 AM, Vincent Autefage
 899...@bugs.launchpad.net  wrote:
 So we have another problem...
 The thing is that the 0.14.0 (and all 0.14.0 rc) built from GIT has the
 same problem.
 However, the package 0.14.0 from Ubuntu does not has this bug...
 Okay, that's actually a good thing because the issue is now isolated
 to two similar builds: 0.14.0 from source and 0.14.0 from Ubuntu.

 Either there is an environmental difference in the build configuration
 or Ubuntu has applied patches on top of vanilla 0.14.0.

 I think the next step is to grab the Ubuntu 0.14.0 source package and
 rebuild it to confirm that it does *not* have the bug.

 Then it's just a matter of figuring out what the difference is by a
 (manual) bisection.

 Are you using qemu-kvm?  I found Ubuntu's 0.14.0-based package here:
 http://packages.ubuntu.com/natty/qemu-kvm

 Stefan





Re: [Qemu-devel] [Bug 899140] Re: Problem with Linux Kernel Traffic Control

2011-12-14 Thread Vincent Autefage
OK, so the *Intel e1000* seems to be the only card affected by the
bug.

Vincent


On 05/12/2011 12:11, Stefan Hajnoczi wrote:
 On Mon, Dec 5, 2011 at 10:45 AM, Vincent Autefage
 899...@bugs.launchpad.net  wrote:
 So we have another problem...
 The thing is that the 0.14.0 (and all 0.14.0 rc) built from GIT has the
 same problem.
 However, the package 0.14.0 from Ubuntu does not has this bug...
 Okay, that's actually a good thing because the issue is now isolated
 to two similar builds: 0.14.0 from source and 0.14.0 from Ubuntu.

 Either there is an environmental difference in the build configuration
 or Ubuntu has applied patches on top of vanilla 0.14.0.

 I think the next step is to grab the Ubuntu 0.14.0 source package and
 rebuild it to confirm that it does *not* have the bug.

 Then it's just a matter of figuring out what the difference is by a
 (manual) bisection.

 Are you using qemu-kvm?  I found Ubuntu's 0.14.0-based package here:
 http://packages.ubuntu.com/natty/qemu-kvm

 Stefan





Re: [Qemu-devel] [Bug 899140] Re: Problem with Linux Kernel Traffic Control

2011-12-07 Thread Vincent Autefage
Well,

I've compiled the Ubuntu package.
When I launched QEMU, I got this:

$ qemu-system-x86_64 -hda debian.img -m 512
qemu: could not load PC BIOS 'bios.bin'

I've checked the content of the pc-bios directory: no BIOS images are
built there, but I found strangely named files like:
*.bin
*.dtb
openbios-*

I think that configure treats the '*' as a literal character...
Therefore, I copied the content of the pc-bios directory of 0.15.1 into
the pc-bios directory of 0.14.0.

Finally, the rate bug has disappeared!!
Iperf gave me a rate of 19 Mbit/s, which is the desired rate.
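
As an aside, another common way to deal with a missing bios.bin when running a freshly built tree is to point QEMU at a ROM directory with the -L option (a generic sketch, unrelated to the Ubuntu packaging issue itself):

$ qemu-system-x86_64 -L /path/to/qemu/pc-bios -hda debian.img -m 512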

Vincent


On 05/12/2011 12:11, Stefan Hajnoczi wrote:
 On Mon, Dec 5, 2011 at 10:45 AM, Vincent Autefage
 899...@bugs.launchpad.net  wrote:
 So we have another problem...
 The thing is that the 0.14.0 (and all 0.14.0 rc) built from GIT has the
 same problem.
 However, the package 0.14.0 from Ubuntu does not has this bug...
 Okay, that's actually a good thing because the issue is now isolated
 to two similar builds: 0.14.0 from source and 0.14.0 from Ubuntu.

 Either there is an environmental difference in the build configuration
 or Ubuntu has applied patches on top of vanilla 0.14.0.

 I think the next step is to grab the Ubuntu 0.14.0 source package and
 rebuild it to confirm that it does *not* have the bug.

 Then it's just a matter of figuring out what the difference is by a
 (manual) bisection.

 Are you using qemu-kvm?  I found Ubuntu's 0.14.0-based package here:
 http://packages.ubuntu.com/natty/qemu-kvm

 Stefan





Re: [Qemu-devel] [Bug 899140] Re: Problem with Linux Kernel Traffic Control

2011-12-05 Thread Vincent Autefage
Hi,

So we have another problem...
The thing is that 0.14.0 (and all the 0.14.0 release candidates) built from
git has the same problem.
However, the 0.14.0 package from Ubuntu does not have this bug...


On 05/12/2011 09:26, Stefan Hajnoczi wrote:
 On Sun, Dec 04, 2011 at 03:54:12PM -, Vincent Autefage wrote:
 The result without TC is about 120 Mbit/s.
 I check the bandwidth with lot of programs (not only with Iperf) and the
 result is also the same

 However, if I use the same raw image and the same TC configuration with
 the version 0.14.0 of QEMU or with some real physical hosts, the result
 with TC is about 19.2 Mbit/s what is the desired result...
 Thanks for checking if tc is involved in this bug.

 Git bisect can identify which commit introduced the bug between QEMU
 0.14.0 and 0.14.1.  The following steps show how to do this:

 Clone the QEMU git repository:
 $ git clone git://git.qemu.org/qemu.git
 $ cd qemu

 Double-check that 0.14.1 has the bug:
 $ git checkout v0.14.1
 $ make distclean
 $ ./configure --target-list=x86_64-softmmu
 $ make
 $ # test x86_64-softmmu/qemu-system-x86_64 binary

 Double-check that 0.14.0 does *not* have the bug:
 $ git checkout v0.14.0
 $ make distclean
 $ ./configure --target-list=x86_64-softmmu
 $ make
 $ # test x86_64-softmmu/qemu-system-x86_64 binary

 Now you can be confident that 0.14.0 and 0.14.1 do indeed behave
 differently when built from source.  It's time to perform the bisect,
 you can read more about what this does in the git-bisect(1) man page.

 Find the commit that introduced the bug:
 $ git bisect start v0.14.1 0.14.0
 $ make distclean
 $ ./configure --target-list=x86_64-softmmu
 $ make
 $ # test x86_64-softmmu/qemu-system-x86_64 binary

 If tc achieves ~20 Mbit/s:
 $ git bisect good

 Otherwise:
 $ git bisect bad

 Git bisect will keep splitting the commit history in half until it
 reaches the point where QEMU's behavior changes from good to bad.  So
 you typically need to build and test a couple of times until the guilty
 commit has been identified.

 Stefan





Re: [Qemu-devel] [Bug 899140] Re: Problem with Linux Kernel Traffic Control

2011-12-05 Thread Vincent Autefage
Yes, this is the package that does not seem to include the bug.
I'm going to check the sources of this package.

Vincent Autefage


On 05/12/2011 12:11, Stefan Hajnoczi wrote:
 On Mon, Dec 5, 2011 at 10:45 AM, Vincent Autefage
 899...@bugs.launchpad.net  wrote:
 So we have another problem...
 The thing is that the 0.14.0 (and all 0.14.0 rc) built from GIT has the
 same problem.
 However, the package 0.14.0 from Ubuntu does not has this bug...
 Okay, that's actually a good thing because the issue is now isolated
 to two similar builds: 0.14.0 from source and 0.14.0 from Ubuntu.

 Either there is an environmental difference in the build configuration
 or Ubuntu has applied patches on top of vanilla 0.14.0.

 I think the next step is to grab the Ubuntu 0.14.0 source package and
 rebuild it to confirm that it does *not* have the bug.

 Then it's just a matter of figuring out what the difference is by a
 (manual) bisection.

 Are you using qemu-kvm?  I found Ubuntu's 0.14.0-based package here:
 http://packages.ubuntu.com/natty/qemu-kvm

 Stefan





Re: [Qemu-devel] [Bug 899143] [NEW] Raw img not recognized by Windows

2011-12-04 Thread Vincent Autefage
Ok thanks a lot :)

Vincent Autefage
On 03/12/2011 19:45, Stefan Hajnoczi wrote:
 On Fri, Dec 2, 2011 at 2:45 PM, Vincent Autefage
 899...@bugs.launchpad.net  wrote:
 $ qemu-img create -f raw root.img 100GB
 $ mkntfs -F root.img
 $ qemu -name W -sdl -m 2048 -enable-kvm -localtime -k fr -hda root.img
 -cdrom windows7.iso -boot d -net nic,macaddr=a0:00:00:00:00:01 -net
 user,vlan=0
 QEMU does recognize the raw image.  You can check this by running
 'info block' at the QEMU monitor (Ctrl-Alt-2) and you'll see ide-hd0
 is the raw image file you specified.  Press Ctrl-Alt-1 to get back to
 the VM display.

 The problem is that the Windows installer does not like the disk image
 you have prepared.  A normal harddisk has a master boot record but you
 created a raw image without a master boot record.  The Windows
 installer is being picky/careful and not displaying this non-standard
 disk you created.

 Skip the mkntfs step and the installer works fine.  There's no need to
 create the file system because the installer will do it for you.

 Stefan
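
In other words, the working procedure is simply the following (a restatement of the advice above; the options mirror the reporter's original command line):

$ qemu-img create -f raw root.img 100G
$ qemu -name W -sdl -m 2048 -enable-kvm -localtime -k fr -hda root.img -cdrom windows7.iso -boot d -net nic,macaddr=a0:00:00:00:00:01 -net user,vlan=0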





Re: [Qemu-devel] [Bug 899140] Re: Problem with Linux Kernel Traffic Control

2011-12-04 Thread Vincent Autefage
The result without TC is about 120 Mbit/s.
I checked the bandwidth with several programs (not only Iperf) and the
result is always the same.

However, if I use the same raw image and the same TC configuration with
QEMU 0.14.0, or with real physical hosts, the result with TC is about
19.2 Mbit/s, which is the desired result...

Vincent


On 03/12/2011 19:48, Stefan Hajnoczi wrote:
 On Fri, Dec 2, 2011 at 2:42 PM, Vincent Autefage
 899...@bugs.launchpad.net  wrote:
 root@A# tc qdisc add dev eth0 root tbf rate 20mbit burst 20480 latency 50ms

 root@B# ifconfig eth0 192.168.0.2

 Then if we check with Iperf, the real rate will be about 100 kbit/s:
 What is the iperf result without tc?  It's worth checking what rate
 the unlimited interface saturates at before applying tc.  Perhaps this
 setup is just performing very poorly and it has nothing to do with tc.

 Stefan





[Qemu-devel] [Bug 899140] [NEW] Problem with Linux Kernel Traffic Control

2011-12-02 Thread Vincent Autefage
Public bug reported:

Hi,

The last two major versions of QEMU (0.15 and 1.0) have an important problem
when running a Linux distribution that is itself running a Traffic Control
(TC) instance.
Indeed, when TC is configured with a Token Bucket Filter (TBF) at a
particular rate, the effective rate is much slower than the desired one.

For instance, let's consider the following configuration:

# tc qdisc add dev eth0 root tbf rate 20mbit burst 20k latency 50ms

The effective rate will be about 100 kbit/s! (verified with iperf)
I've encountered this problem on versions 0.15 and 1.0, but not with 0.14...
With 0.14, we get a rate of 19.2 Mbit/s, which is quite normal.

I've run the experiment on several hosts:

- Debian 32-bit, Core i7, 4 GB RAM
- Debian 64-bit, Core i7, 8 GB RAM
- 3 different high-performance servers: Ubuntu 64-bit, 48 AMD Opteron cores, 128 GB of RAM

The problem is always the same... The problem is also seen with Class
Based Queueing (CBQ) in TC.
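
For reference, a comparable CBQ configuration (an illustrative sketch, not the reporter's exact commands) might look like:

# tc qdisc add dev eth0 root handle 1: cbq bandwidth 100mbit avpkt 1000
# tc class add dev eth0 parent 1: classid 1:1 cbq bandwidth 100mbit rate 20mbit allot 1514 prio 5 avpkt 1000 bounded
# tc filter add dev eth0 parent 1: protocol ip prio 16 u32 match ip dst 0.0.0.0/0 flowid 1:1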

Thanks

** Affects: qemu
 Importance: Undecided
 Status: New




[Qemu-devel] [Bug 899143] [NEW] Raw img not recognized by Windows

2011-12-02 Thread Vincent Autefage
Public bug reported:

Hi,

The installation process of Windows (XP/Vista/7) doesn’t seem to recognize a
raw image generated by qemu-img.
The installer does not see any hard drive...

The problem exists only with a raw image, not with a vmdk, for instance.

Thanks

** Affects: qemu
 Importance: Undecided
 Status: New

** Description changed:

  Hi,
  
  The installation process of Windows (XP/Vista/7) doesn’t seem to recognize a 
raw img generated by qemu-img.
- The installer do not see any hard drive...
+ The installer does not see any hard drive...
  
  The problem exists only with a raw img but not with a vmdk for instance.
  
  Thanks




[Qemu-devel] [Bug 899140] Re: Problem with Linux Kernel Traffic Control

2011-12-02 Thread Vincent Autefage
** Description changed:

  Hi,
  
- The two last main versions of QEMU (0.15 and 1.0) have an important problem 
when running on a Linux distribution which running itself a Traffic Control 
(TC) instance.
+ The last main versions of QEMU (0.14.1, 0.15 and 1.0) have an important 
problem when running on a Linux distribution which running itself a Traffic 
Control (TC) instance.
  Indeed, when TC is configured with a Token Bucket Filter (TBF) with a 
particular rate, the effective rate is very slower than the desired one.
  
  For instance, lets consider the following configuration :
  
  # tc qdisc add dev eth0 root tbf rate 20mbit burst 20k latency 50ms
  
  The effective rate will be about 100kbit/s ! (verified with iperf)
- I've encountered this problem on versions 0.15 and 1.0 but not with the 
0.14...
- In the 0.14, we have a rate of 19.2 mbit/s which is quiet normal.
+ I've encountered this problem on versions 0.14.1, 0.15 and 1.0 but not with 
the 0.14.0...
+ In the 0.14.0, we have a rate of 19.2 mbit/s which is quiet normal.
  
  I've done the experimentation on several hosts :
-  
+ 
  - Debian 32bit core i7, 4GB RAM
  - Debian 64bit core i7, 8GB RAM
  - 3 different high performance servers : Ubuntu 64 bits, 48 AMD Opteron, 
128GB of RAM
  
  The problem is always the same... The problem is also seen with a Class
  Based Queuing (CBQ) in TC.
  
  Thanks




Re: [Qemu-devel] [Bug 899140] Re: Problem with Linux Kernel Traffic Control

2011-12-02 Thread Vincent Autefage
Hi,

So, the host command lines are:

$ qemu -name A -sdl -m 512 -enable-kvm -localtime -k fr -hda debian1.img -net nic,macaddr=a0:00:00:00:00:01 -net socket,mcast=230.0.0.1:7000

The second is:

$ qemu -name B -sdl -m 512 -enable-kvm -localtime -k fr -hda debian2.img -net nic,macaddr=a0:00:00:00:00:02 -net socket,mcast=230.0.0.1:7000

On the virtual machines:

root@A# ifconfig eth0 192.168.0.1
root@A# tc qdisc add dev eth0 root tbf rate 20mbit burst 20480 latency 50ms

root@B# ifconfig eth0 192.168.0.2

Then if we check with Iperf, the real rate will be about 100 kbit/s:

root@B# iperf -s

root@A# iperf -c 192.168.0.1

Vincent


On 02/12/2011 14:34, Stefan Hajnoczi wrote:
 Hi Vincent,
 Please give steps to reproduce the problem including the QEMU
 command-lines you used and what commands need to be run inside the
 guest and on the host.

 Stefan





Re: [Qemu-devel] [Bug 899143] [NEW] Raw img not recognized by Windows

2011-12-02 Thread Vincent Autefage
Hi,

$ qemu-img create -f raw root.img 100GB
$ mkntfs -F root.img
$ qemu -name W -sdl -m 2048 -enable-kvm -localtime -k fr -hda root.img -cdrom windows7.iso -boot d -net nic,macaddr=a0:00:00:00:00:01 -net user,vlan=0

Vincent Autefage


On 02/12/2011 14:35, Stefan Hajnoczi wrote:
 On Fri, Dec 2, 2011 at 1:00 PM, Vincent Autefage
 899...@bugs.launchpad.net  wrote:
 The installation process of Windows (XP/Vista/7) doesn’t seem to recognize a 
 raw img generated by qemu-img.
 The installer does not see any hard drive...

 The problem exists only with a raw img but not with a vmdk for instance.
 Please post your QEMU command-line so it is possible to reproduce this
 and see which emulated devices you have configured.

 Stefan

