flight 32624 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/32624/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-qemut-winxpsp3 7 windows-install fail REGR. vs. 26303
Regressions which are
flight 32623 linux-next real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/32623/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-qemut-winxpsp3 7 windows-install fail REGR. vs. 32564
Regressions which are
Hi,
This is a test report on Xen 4.5 RC4 from the Intel OTC VMM Team.
Platform: Grantley-EP, Ivytown-EP
We found the issues below. We hope the corresponding patches can be committed
in the Xen 4.5 release.
Issue 1 -- detaching a VT-d assigned device from a guest and then reattaching
it to the guest will fail.
http://list
flight 32616 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/32616/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-qemuu-win7-amd64 7 windows-install fail REGR. vs. 32564
Regressions which are
On Wed, Dec 24, 2014 at 1:30 PM, David Matlack wrote:
> On Mon, Dec 22, 2014 at 4:39 PM, Andy Lutomirski wrote:
>> The pvclock vdso code was too abstracted to understand easily and
>> excessively paranoid. Simplify it for a huge speedup.
>>
>> This opens the door for additional simplifications,
flight 32617 libvirt real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/32617/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-amd64-pvops 5 kernel-build fail REGR. vs. 32596
Tests which did not succe
flight 32612 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/32612/
Failures :-/ but no regressions.
Tests which are failing intermittently (not blocking):
test-amd64-i386-xl-qemut-debianhvm-amd64 13 guest-localmigrate/x10 fail pass
in 32594
test-amd64-i386-xl-qemuu-o
On 19-12-2014 22:23, Herbert Xu wrote:
David Vrabel wrote:
After d75b1ade567ffab085e8adbbdacf0092d10cd09c (net: less interrupt
masking in NAPI) the napi instance is removed from the per-cpu list
prior to calling the n->poll(), and is only requeued if all of the
budget was used. This inadverten
Starting a VM with more than 2T of memory resulted in an overflow error.
Setting memory = 2097152, defined as a number of megabytes, returns the error
"OverflowError: signed integer is greater than maximum".
The error is the result of the Python extension argument translator
defining max_memkb as a signed int instead
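The arithmetic behind the report can be checked directly. This is a minimal sketch, not libxl code; the INT32_MAX bound and the megabytes-to-kilobytes conversion are assumptions based on the description above:

```python
# Minimal sketch of the overflow described in the report above.
# Assumption: the Python bindings convert the "memory" value
# (megabytes) to kilobytes and store it in a signed 32-bit field.
INT32_MAX = 2**31 - 1        # 2147483647, largest signed 32-bit value

memory_mb = 2097152          # 2T of guest memory, expressed in megabytes
memory_kb = memory_mb * 1024 # the max_memkb value handed to the bindings

print(memory_kb)             # 2147483648
print(memory_kb > INT32_MAX) # True: one past the signed 32-bit maximum
```

So 2T in kilobytes lands exactly one past the signed 32-bit maximum, which matches the "signed integer is greater than maximum" message.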
On 12/22/14 02:04, Singhal, Upanshu wrote:
Hello Don,
xen_emul_unplug=unnecessary does the trick; I am able to see the
vmxnet3 driver using lspci and ethtool -i eth0. Thanks a lot for your
help, much appreciated.
Performance between vmxnet3 running on XEN is about 1/3 of vmxnet3
running o
flight 32611 qemu-mainline real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/32611/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-armhf-libvirt 5 libvirt-build fail REGR. vs. 32598
test-amd64-i386-qem
On 12/23/14 05:51, Singhal, Upanshu wrote:
Hello Don,
I am now trying to configure a VMW PVSCSI type of device but am not able to
do so, though PVSCSI is available on the distribution I am using. Any
inputs on how to configure a PVSCSI type disk device?
device_model_args_hvm = [
"-device",
"pvs
flight 32607 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/32607/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-amd64-libvirt 5 libvirt-build fail REGR. vs. 26303
test-amd64-amd64-xl-qe
Use the 'xl pci-attach $DomU $BDF' command to attach more than
one PCI device to the guest, then detach the devices with
'xl pci-detach $DomU $BDF'. After that, re-attach these PCI
devices again; an error message like the following will be reported:
libxl: error: libxl_qmp.c:287:qmp_handle_error_resp
On Tue, Dec 23, 2014 at 03:47:35PM +, Andrew Cooper wrote:
> On 23/12/2014 08:54, Chao Peng wrote:
> >Intel Memory Bandwidth Monitoring (MBM) is a new hardware feature
> >which builds on the CMT infrastructure to allow monitoring of system
> >memory bandwidth. Event codes are provided to monitor
On Tue, Dec 23, 2014 at 03:46:41PM +, Andrew Cooper wrote:
>
> On 23/12/2014 08:54, Chao Peng wrote:
> >This is the xc side wrapper for XEN_SYSCTL_PSR_CMT_get_l3_event_mask
> >of XEN_SYSCTL_psr_cmt_op. Additional check for event id against value
> >got from this routine is also added.
> >
> >S