On Thu, Sep 15, 2016 at 2:15 AM, Denis V. Lunev <d...@openvz.org> wrote:
> On 09/13/2016 11:59 PM, Konrad Rzeszutek Wilk wrote:
> > On Thu, Sep 01, 2016 at 10:57:48AM -0700, Ed Swierk wrote:
> >> Windows 8, 10 and Server 2012 guests hang intermittently while booting
>
It's not clear to me why it works, or if it's just papering
over a bug elsewhere, or if there are any possible side effects.
Suggested-by: Andrew Jones <drjo...@redhat.com>
Signed-off-by: Ed Swierk <eswi...@skyportsystems.com>
diff --git a/hw/net/e1000.c b/hw/net/e1000.c
index 6eac66d..c891b67 100644
--- a/h
On Tue, Aug 16, 2016 at 3:07 AM, Juergen Gross wrote:
> On 15/08/16 17:02, Jan Beulich wrote:
>> This should really only be done for XS_TRANSACTION_END messages, or
>> else at least some of the xenstore-* tools don't work anymore.
>>
>> Fixes: 0beef634b8 ("xenbus: don't BUG() on
I'm seeing the xenwatch kernel thread hang intermittently when
destroying a domU on recent stable xen 4.5, with Linux 4.4.11 + grsec
dom0.
The domU is created with a virtual network interface connected to a
physical interface (ixgbevf) via an openvswitch virtual switch.
Everything works fine
On Wed, May 25, 2016 at 9:58 AM, David Vrabel wrote:
> This occurs in dom0? Or the guest that's being destroyed?
The lockdep warning comes from dom0 when the HVM guest is being destroyed.
> It's a bug but...
>
>> ==
...along with
https://git.kernel.org/cgit/linux/kernel/git/xen/tip.git/commit/?id=702f9260
which should also go into v4.4, IMO.
On Wed, May 25, 2016 at 4:17 AM, Ed Swierk <eswi...@skyportsystems.com> wrote:
> On Tue, May 24, 2016 at 11:29 PM, Ingo Molnar <mi...@kernel.org> wrote:
The following lockdep dump occurs whenever I destroy an HVM domain, on
Linux 4.4 Dom0 with CONFIG_XEN_BALLOON=n on recent stable Xen 4.5.
Any clues whether this is a real potential deadlock, or how to silence
it if not?
==
[ INFO:
On Tue, May 24, 2016 at 11:29 PM, Ingo Molnar <mi...@kernel.org> wrote:
> Do they apply, build and boot cleanly in that order on top of v4.4, v4.5 and
> v4.6?
> If yes then:
>
> Acked-by: Ingo Molnar <mi...@kernel.org>
I confirm that they do so on top of v4.4.
Yes, we're just now moving to 4.4 stable, and will be there for a
while, so backporting would be very helpful.
--Ed
On Tue, May 24, 2016 at 7:53 AM, Kani, Toshimitsu <toshi.k...@hpe.com> wrote:
> On Mon, 2016-05-23 at 15:52 -0700, Ed Swierk wrote:
>> Good question. I ran
/JoJKbCOxV0U/PM0I9d1v60kJ
--Ed
On Mon, May 23, 2016 at 1:13 PM, Boris Ostrovsky
<boris.ostrov...@oracle.com> wrote:
> On 05/23/2016 10:15 AM, Konrad Rzeszutek Wilk wrote:
>> On Fri, May 20, 2016 at 04:58:09PM -0700, Ed Swierk wrote:
>>> (XEN) traps.c:459:d0v0 Unhandled invalid opcode
I've encountered two problems booting a Linux 4.4 dom0 on recent
stable xen 4.5 on VMware ESXi 5.5.0.
One has the same "ata_piix: probe of 0000:00:07.1 failed with error
-22" symptom discussed some time ago, and prevents the kernel from
seeing any of the virtual IDE drives exposed by VMware.
Nice implementation. I tested it and it fixes the problem on the affected
system.
Just a minor typo in a comment: "it's duty" should be "its duty".
--Ed
On Wed, May 18, 2016 at 4:44 AM, Juergen Gross <jgr...@suse.com> wrote:
> On 17/05/16 22:50, Ed Swierk
I added some more instrumentation and discovered that the result of
xen_count_remap_pages() (0x85dea) is one less than the actual number
of pages remapped by xen_set_identity_and_remap() (0x85deb).
The two functions differ in their handling of a xen_e820_map entry
whose size is not a multiple of
Here is the instrumented output with dom0_mem=18432M,max:18432M.
...
[    0.000000] xen_count_remap_pages(max_pfn=0x48) == 0x85dea
[    0.000000] max_pages 0x505dea
[    0.000000] xen_add_extra_mem(48, 85dea)
[    0.000000] memblock_reserve(0x48000, 0x85dea000)
[    0.000000] Released
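The off-by-one between the counting and remapping passes can be sketched with a toy model (this is an illustration of the rounding discrepancy only, not the actual xen_count_remap_pages()/xen_set_identity_and_remap() code; the boundary values are made up): if one pass truncates a non-page-aligned e820 entry down to whole pages while the other rounds its end up, the totals differ by exactly one page.

```python
PAGE_SHIFT = 12
PAGE_SIZE = 1 << PAGE_SHIFT

# Hypothetical e820 entry whose size is NOT a multiple of PAGE_SIZE:
# 5 full pages plus a 2 KiB tail.
entry_start = 0x100000
entry_end = entry_start + 5 * PAGE_SIZE + 0x800

# Counting pass: truncates both boundaries down, so only whole pages count.
counted = (entry_end >> PAGE_SHIFT) - (entry_start >> PAGE_SHIFT)

# Remapping pass: rounds the end boundary up, so the partial page is
# remapped as well.
remapped = ((entry_end + PAGE_SIZE - 1) >> PAGE_SHIFT) - (entry_start >> PAGE_SHIFT)

print(counted, remapped)  # the two passes disagree by one page
```

Under this model, `counted` is 5 while `remapped` is 6, matching the 0x85dea vs. 0x85deb discrepancy in the instrumented output.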
I'm trying to figure out a crash when booting a Linux 4.4 dom0 on
a recent stable xen 4.5. I'm capping the dom0 memory by setting
dom0_mem=18432M,max:18432M on the xen command line, and the kernel
config has CONFIG_XEN_BALLOON unset.
The crash seems dependent on the contents of the e820 table;
I tested on VMware Fusion with 3, 4 and 8 CPUs, and it works in all cases.
(XEN) Xen version 4.6.1-pre (Debian 4.6.1~pre-1skyport1)
(eswi...@skyportsystems.com) (gcc (Debian 5.2.1-19.1skyport1) 5.2.1
20150930) debug=n Wed Dec 2 07:22:20 PST 2015
(XEN) Bootloader: SYSLINUX 4.05 20140113
(XEN) Command
A few more data points: I also tested Xen 4.6 on VMware ESXi 5.5, and
it yields similar results. Not surprising, since Fusion uses basically
the same virtualization engine.
However, ESXi offers many more choices of number of processors, number
of cores, hyperthreading, etc. The weird processor ID
RFC. Boot tested on VMware Fusion, and on a 2-socket Xeon server.
diff --git a/xen/include/asm-x86/smp.h b/xen/include/asm-x86/smp.h
index ea07888..a41ce2d 100644
--- a/xen/include/asm-x86/smp.h
+++ b/xen/include/asm-x86/smp.h
@@ -67,7 +67,7 @@ extern unsigned int nr_sockets;
void
On Tue, Nov 24, 2015 at 2:34 AM, Jan Beulich wrote:
> Indeed, and I think I had said so. The algorithm does, however, tell
> us that with the above output CPU 3 (APIC ID 6) is on socket 6 (both
> shifts being zero), which for the whole system results in sockets 1,
> 3, and 5
I instrumented detect_extended_topology() and ran again with 4 CPUs.
Loading xen-4.6-amd64.gz... ok
Loading vmlinuz-3.14.51-grsec-dock... ok
Loading initrd.img-3.14.51-grsec-dock... ok
(XEN) Xen version 4.6.1-pre (Debian 4.6.1~pre-1skyport1) (
eswi...@skyportsystems.com) (gcc (Debian 4.9.3-4)
I instrumented set_nr_sockets() and smp_store_cpu_info(), and re-ran with
varying numbers of CPUs.
With 4 CPUs, nr_sockets=4, so smp_store_cpu_info() exceeds the bounds of
the socket_cpumask array when socket=4 or 6.
Loading xen-4.6-amd64.gz... ok
Loading vmlinuz-3.14.51-grsec-dock... ok
Loading
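The out-of-bounds indexing can be modeled in a few lines (a sketch using the values reported in this thread: 4 vCPUs with APIC IDs 0, 2, 4 and 6, and both topology shifts zero on this VMware configuration; this is not the actual set_nr_sockets()/smp_store_cpu_info() code):

```python
# Sparse APIC IDs as reported by the VMware guest.
apic_ids = [0, 2, 4, 6]

# Both the core and thread shifts from extended topology enumeration
# are zero here, so the socket ID is the APIC ID itself.
core_plus_thread_shift = 0
sockets = [apic >> core_plus_thread_shift for apic in apic_ids]

# socket_cpumask is sized from the CPU count, i.e. 4 entries (0..3).
nr_sockets = len(apic_ids)

# Any socket ID >= nr_sockets indexes past the end of the array.
out_of_bounds = [s for s in sockets if s >= nr_sockets]
print(sockets, out_of_bounds)
```

With these inputs the socket IDs come out as 0, 2, 4 and 6, and sockets 4 and 6 overrun a 4-entry array, matching the crash described above.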
Xen staging-4.6 crashes when booting on VMware Fusion 8.0.2 (with VT-x/EPT
enabled), with 4 virtual CPUs:
Loading xen-4.6-amd64.gz... ok
Loading vmlinuz-3.14.51-grsec-dock... ok
Loading initrd.img-3.14.51-grsec-dock... ok
(XEN) Xen version 4.6.1-pre (Debian 4.6.1~pre-1skyport1) (
On Tue, Sep 22, 2015 at 5:35 AM, Ed Swierk <eswi...@skyportsystems.com> wrote:
> So if the contract is that Dom0 tells Xen about mmcfgs before the
> devices they cover, then Linux ought to call pci_mmcfg_reserved from
> (or immediately after) both pci_mmcfg_early_init() and
> p
On Mon, Sep 21, 2015 at 10:21 PM, Jan Beulich wrote:
> I don't follow: Surely Dom0 first establishes MCFG areas to be used, and
> only then scans the buses for devices, resulting in them to be reported to
> the hypervisor?
That seems like a reasonable expectation, but while
mU. Both problems are fixed by this change.
>
> Signed-off-by: Ed Swierk <eswi...@skyportsystems.com>
> ---
> drivers/xen/pci.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/xen/pci.c b/drivers/xen/pci.c
> index 7494dbe..
82599 VFs, since vf_rlen has
not been initialized by pci_add_device(). And on Xen 4.5, Xen nukes the
DomU due to "Potentially insecure use of MSI-X" when the VF driver loads
in the DomU. Both problems are fixed by this change.
Signed-off-by: Ed Swierk <eswi...@skyportsystems.com>
On Tue, Dec 2, 2014 at 6:00 AM, Andrew Cooper <andrew.coop...@citrix.com> wrote:
The automatic regeneration doesn't actually work. Depending on the
relative timestamps resulting from an SCM checkout or a tarball
extraction, the build will attempt to regenerate the files.
These files are
- Use %lex-param instead of obsolete YYLEX_PARAM to override lex scanner
parameter
- Change deprecated %name-prefix= to %name-prefix
Tested against bison 2.4.1 and 3.0.2.
Signed-off-by: Ed Swierk <eswi...@skyportsystems.com>
---
tools/libxl/libxlu_cfg_y.y | 6 +++---
1 file changed, 3
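For reference, the two directive changes look roughly like this (a sketch with a hypothetical parser prefix and scanner parameter, not the actual libxlu_cfg_y.y grammar):

```yacc
/* Deprecated spellings, rejected or warned about by bison 3.x:
 *   %name-prefix="my_yy"          -- the '=' form is deprecated
 *   #define YYLEX_PARAM scanner   -- YYLEX_PARAM support was removed
 */

/* Modern equivalents, accepted by both bison 2.4.1 and 3.0.2: */
%name-prefix "my_yy"
%lex-param { void *scanner }
```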