On Tue, 28 Apr 2009 02:34:40 +0200 "Erwin van Maanen"
<open...@acmeweb.nl> wrote:

> I've tried to include the panic and trace with the screenshots I
> attached; I'm afraid I don't know another way to get the info across.
> I can appreciate the devs not being able to look at each
> virtualization issue; I was just hoping someone knew what was going
> on.
>
> Before reading on: the system seems to work fine with the bsd.mp
> kernel from the 4.5 snapshot of 26/4/2009, as Stuart Henderson
> suggested.
>
> Now, to be of some use at least:
>
> " tricked network card to flexible "
> Default the vmware esxi only makes the E1000 network card available
> to the "Other 64-bit" guest os. (which is also recommended by vmware)
> If you set it to linux 32-bit or something along those lines, you can
> add a "flexible" network card, which openbsd picks up on as a pcn/AMD
> PCnet-PCI device.
> After which, you can switch back to "Other 64-bit" and the network
> card will stay as flexible.
>
> With a bit of performance testing, I found this "network card" to
> perform much better than the E1000 over a virtual switch in VMware
> with no actual network card attached to it (this was unpatched
> OpenBSD 4.4). I'd be happy to test this with 4.5-current as well.
>

It's an interesting approach, but the flopping back and forth to
fool the VM and guest OS seems more than a bit iffy. The fact that
you're using a "virtual switch in VMware" tells me you're talking
between two or more guest operating system instances running
simultaneously. My problems are the exact opposite, namely talking to
other real systems in the real world.
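
For what it's worth, if I follow the guest-OS juggling correctly, the
net effect on the guest's .vmx would be roughly the sketch below. It's
from memory and not checked against your ESXi build, so treat the key
names and values (guestOS = "other-64", the ethernet0.* entries, the
"VM Network" port group name) as assumptions rather than gospel:

        guestOS = "other-64"

        ethernet0.present = "TRUE"
        ethernet0.networkName = "VM Network"
        ethernet0.addressType = "generated"
        # no ethernet0.virtualDev entry -> the "Flexible" adapter;
        # an explicit ethernet0.virtualDev = "e1000" would pin it
        # to the E1000 instead

In other words, the GUI dance only exists because the client refuses
to offer the Flexible adapter while the guest type is "Other 64-bit";
the configuration it leaves behind is nothing exotic.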

> The actual (relevant?) hardware in the server:
> proc: AMD Phenom 9350e quad-core processor, 4x 2 GHz
> mobo: Supermicro H8SMI-2 rev 2 (MCP55 Pro chipset, incl. dual LAN)
> mem: 8 GB ECC, bank interleaving enabled
> (still waiting on the RAID card and the IPMI device)
>
> That's not actually two physical sockets/processors on the board,
> but the hardware chosen is on the supported list on the VMware site.
> I will look into this a bit further, cheers!
>

There seems to be a large discrepancy between what users report to
work and what VMware Inc. says will work. This, combined with VMware
Inc.'s nonsense of constantly renaming their products, leads to a lot
of confusion.

http://www.vmware.com/pdf/vi3_35/esx_3/r35u2/vi3_35_25_u2_installation_guide.pdf

The link above might wrap, but see page 21 ("At least two
processors") and page 25:

        "There are specific hardware requirements for 64
        bit guest operating system support.  For AMD
        Opteron based systems, the processors must be
        Opteron Rev E and later. For Intel Xeon based
        systems, the processors must include support for
        Intel Virtualization Technology (VT). Many servers
        that include CPUs with VT support might ship with
        VT disabled by default, and VT must be enabled
        manually. If your CPUs support VT but you do not
        see this option in the BIOS, contact your vendor
        to request a BIOS version that lets you enable VT
        support."

According to the support engineer I spoke to, they really do mean that
you must have two physical sockets/processors to run 64-bit guest
operating systems.

Most folks use VMs for "consolidation" and similar buzzwords. In
contrast, my needs are fairly simple: a lab environment for testing
compatibility with a stack of operating systems. At present, I'm
still not convinced virtualization is a good way to run a test lab.

--
J.C. Roberts
