Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-27 Thread Robin Axelsson

On 2012-01-25 21:50, James Carlson wrote:

Robin Axelsson wrote:

I'm confused.  If VirtualBox is just going to talk to the physical
interface itself, why is plumbing IP necessary at all?  It shouldn't be
needed.

Maybe I'm the one being confused here. I just believed that the IP must
be visible to the host for VirtualBox to be able to find the interface
in first place but maybe that is not the case. When choosing an adapter
for bridged networking on my system, the drop-down menu will give me the
options e1000g1, e1000g2 and rge0. So I'm not sure how or what part of
the system that gives the physical interfaces those names. I mean if the
host can't see those interfaces how will VirtualBox be able to see them?
At least that was my reasoning behind it.

The names come from the datalink layer.  It has nothing whatsoever to do
with IP.  IP (like many other network layers) can open and use datalink
layer interfaces if desired.

They're quite distinct in terms of implementation, though the user
interfaces (and documentation :-) tend to blur the lines.  The datalink
layer object is managed by dladm.  It has a name that defaults to the
driver's name plus an instance number, but that can be configured by the
administrator if desired.  There are also virtual interfaces at this
level for various purposes.  You can think of it as being the Ethernet
port, assuming no VLANs are involved.

The IP layer object is managed by ifconfig.  It's used only by IP.
Other protocols don't (or at least _shouldn't_) use the IP objects.  In
general terms, these objects each contain an IP address, a subnet mask,
and a set of IFF_* flags.

By default, the first IP layer object created on a given datalink layer
object has the same name as that datalink layer object -- even though
they're distinct ideas.  The second and subsequent such objects created
get the somewhat-familiar :N addition to the name.  It's sometimes
helpful to think of that first object as being really named e1000g1:0
at the IP layer, in order to keep it distinct from the e1000g1
datalink layer item.


This sounds like an implementation of the OSI model which separates the 
datalink layer from the network layer.  When speaking of blurred lines, 
it seems that the line between the network layer, transport layer and 
session layer (as specified in the OSI model) is also quite blurry.



Since VirtualBox is providing datalink layer objects to the guest
operating system (through a simulated driver), it needs access to the
datalink layer on the host system.  That means e1000g1 or something
like that.

It doesn't -- and can't -- use the IP layer objects.  Those allow you to
send only IP datagrams.

If VirtualBox used the IP objects from the host operating system, what
would happen when the guest attempts to send an ARP message or (heaven
help us) IPX?  Those work at a layer below IP.


There are different ways to virtualize networking in VirtualBox. Apart 
from bridged networking, which is also used by default by hypervisors 
such as Xen (don't know about KVM), you can also use NAT.


In NAT mode the VirtualBox hypervisor acts as a virtual router that sits 
between the IP stack of the host and the VMs. It has its obvious 
limitations as you have pointed out which is why I don't use this mode. 
This mode also has (or at least had in prior versions of VirtualBox) 
stability/performance issues.





There's probably a way to do this with ipadm, but I'm too lazy to read
the man page for it.  I suggest it, though, as a worthwhile thing to do
on a lazy Sunday afternoon.

I'll look into it if all else fails. I see that the manual entry for
ipadm is missing in OI. I will also see if there is more up-to-date
documentation on the ipmp. I assume that when a ClientID value is
generated a MAC address also comes with it, at least when it negotiates
with the DHCP server.

Nope.  See section 9.14 of RFC 2132 for a description of the DHCP
Client-identifier option, and the numerous references to client
identifiers in RFC 2131.

Back in the bad old days of BOOTP, clients were in fact forced to use
hardware (MAC) addresses for identification.  That's still the default
for DHCP, just to make things easy, but a fancy client (such as the one
in OpenIndiana) can create client identifiers from all sorts of sources,
including thin air.


I guess that this ClientID feature is only used by more advanced 
routers, because all my machines (virtual and physical) are identified 
by a given MAC address in the router. Or maybe it is capable of ClientID 
and I'm not aware of it, the router documentation is mum about this 
feature though.



Try man ifconfig, man ipmpstat, man if_mpadm.  Those should be
reasonable starting points.

Thanks, these man pages exists. I saw in the ifconfig that there is some
info about ipmp although it is brief.

It's possible that some of meem's Clearview project documentation that
revamped large parts of IPMP are available on line as well.


I'll look into it, I see that it is on 

Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-27 Thread Al Slater

On 26/01/2012 08:32, Open Indiana wrote:

Please skip the whole ifconfig and plumb this or that discussion.

Virtualbox works on any interface that is plumbed since only then the
interface is visible in the menu.


That is untrue.  All interfaces are visible in VB regardless of whether 
they are plumbed or not.  Just verified that on OI and Solaris 10.


--
Al Slater



___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-27 Thread James Carlson
On 01/27/12 07:51, Robin Axelsson wrote:
 On 2012-01-25 21:50, James Carlson wrote:
 By default, the first IP layer object created on a given datalink layer
 object has the same name as that datalink layer object -- even though
 they're distinct ideas.  The second and subsequent such objects created
 get the somewhat-familiar :N addition to the name.  It's sometimes
 helpful to think of that first object as being really named e1000g1:0
 at the IP layer, in order to keep it distinct from the e1000g1
 datalink layer item.
 
 This sounds like an implementation of the OSI model which separates the
 datalink layer from the network layer.

Yes.  The design (since Solaris 2.0) has always been internally a bit
more deliberately layered, but during OpenSolaris development we put a
lot of work into making the distinctions more visible in the
administrative interfaces, because they're actually fairly useful when
it comes to virtualization and other new functions.

  When speaking of blurred lines,
 it seems that the line between the network layer, transport layer and
 session layer (as specified in the OSI model) is also quite blurry.

Network and transport are fairly clear (though ICMP is sort of a sticky
bit).  Session is basically absent for TCP/IP.

 In NAT mode the VirtualBox hypervisor acts as a virtual router that sits
 between the IP stack of the host and the VMs. It has its obvious
 limitations as you have pointed out which is why I don't use this mode.
 This mode also has (or at least had in prior versions of VirtualBox)
 stability/performance issues.

OK

 Back in the bad old days of BOOTP, clients were in fact forced to use
 hardware (MAC) addresses for identification.  That's still the default
 for DHCP, just to make things easy, but a fancy client (such as the one
 in OpenIndiana) can create client identifiers from all sorts of sources,
 including thin air.
 
 I guess that this ClientID feature is only used by more advanced
 routers, because all my machines (virtual and physical) are identified
 by a given MAC address in the router. Or maybe it is capable of ClientID
 and I'm not aware of it, the router documentation is mum about this
 feature though.

As I said, using the MAC address to construct a client identifier is
generally the default, so it's not at all surprising that this is what
you see.  The point is that it's not the only possibility.

 You have to tell ifconfig what you want to do.  If you want to modify
 the separate IPv6 interfaces, then specify inet6, as in:

 ifconfig e1000g0 inet6 unplumb
 
 Ok. When I plumb/unplumb a connection with ifconfig, will this
 (un)plumbing be permanent or only be in effect until next reboot or
 power cycle?

It's non-permanent.  It affects only the running system.

 I would expect that this plumbing configuration is in some
 /etc/config file somewhere...

Yes.  If you're running with the old-style start-up scripts, then the
existence of /etc/hostname.XXX causes interface XXX to be plumbed for
IPv4 and for the contents of that file to be used as the arguments and
options to ifconfig.  A similar thing is true for /etc/hostname6.XXX,
but for IPv6.  And if /etc/dhcp.XXX exists, the interface is plumbed and
DHCP is run.

Things changed quite a bit with both NWAM and ipadm.  The latter
provides both transient and permanent configuration features, and no
longer relies on the /etc/hostname.* files.

 In modern -- CIDR -- IPv4, you don't normally refer to a subnet mask as
 something like 255.255.255.128, but rather as a prefix length, like
 /25.  Besides being more compact, the prefix length notation avoids
 dumb (and pointless) mistakes that occur when someone accidentally
 specifies non-contiguous mask bits.
 
 Ok, so I assume that the /n means that the /n/ least significant
 consecutive bits are 0 in the network mask.

No.  It means that the n most significant bits are 1, and the 32-n
least significant bits are 0.  So, /24 is 255.255.255.0.  /8 is
255.0.0.0.  /28 is 255.255.255.240.

 So that means I'm no longer
 able to specify network masks such as 170.85.144.70 where for example
 144 in binary form is 1001, i.e. that the zeroes in the mask are not
 consecutive/contiguous as there are ones in between.

For IPv4 may still be able to use non-contiguous masks, but they were
never a rational choice at all -- anywhere.  And with certain routing
protocols, they simply didn't and could not work.  They're a definite
case of so, what exactly were you trying to do?

All that matters is the number of bits that are set, so having them be
contiguous is no loss of functionality.

 So I understand this restriction is enforced so as to prevent the
 accidental creation of subnets that not mutually exclusive, i.e that
 overlap each other.

That's part of it.  Certain protocols (e.g., BGP) rely on having the
bits be contiguous.  Aiding human understanding is certainly another part.

But, at least as far as I'm concerned, the crucial fact is that making
them 

Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-27 Thread Robin Axelsson

On 2012-01-26 09:32, Open Indiana wrote:

Please skip the whole ifconfig and plumb this or that discussion.

Virtualbox works on any interface that is plumbed since only then the
interface is visible in the menu. A working IP-adress is not necessary.
Please put your interfaces on manual IP-assignment and disable nwam.
Give an IP-address to 1 interface so you can manage your host-server.

A bridged interface in VirtualBox just means that VB allows all traffic to
go directly to your VM client.

So:
1. disconnect all ethernetcables from the hostserver
2. put one back in an interface
3. login to the server
4. disable nwam
5. give the online interface a working IP-configuration via ifconfig /
nsconfig / nslookup config files
6. make sure no configuration for the other interfaces is to be found inside
/etc
7. plumb up a free interface by ifconfig and put a cable in this interface
8. start VirtualBox Gui
9. configure client to use the interface of point 7
10. start the client
11. grab some coffee
12. enjoy



___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss




Are you really sure that only plumbed interfaces are visible in 
VirtualBox? All physical interfaces were still visible in VirtualBox 
even after I unplumbed them on my system. Perhaps their visibility 
depends on how you set up the virtual networking. In NAT mode, 
VirtualBox as I understand it communicates via the IP stack of the host, 
so it makes sense that in this mode, only the plumbed interfaces are 
visible to VirtualBox. However, in bridged mode as the documentation 
implies, the vboxflt driver goes past the IP stack so It would make 
sense that even the unplumbed interfaces should be visible in that mode. 
Remember that the NAT mode is the default networking mode in VirtualBox.


Normally I shouldn't really worry about this. I should be able to get 
away with only one network connection and the Bridged Network of the 
VirtualBox hypervisor should run seamlessly together with the host's IP 
stack on the datalink without hickups. But in reality this is not the 
case and nobody knows when these issues will be sorted out.


The vboxflt driver (which is the driver being used for bridged 
networking in VirtualBox ) is buggy. After every third boot or so, the 
VMs using this driver (i.e. bridged networking) fails to initialize. So 
I have to go superuser and rem_drv, add_drv the vboxflt driver to fix 
this issue and repeat this procedure every time the VMs fail.


So what I'm doing is isolating VirtualBox from the IP stack of the host 
and it seems to have worked for me. It isn't a complicated thing to do. 
All I had to do was to disable all NICs but one in the nwam properties 
window and then choose one of the disabled NICs for bridged networking 
in VirtualBox. The vboxflt driver still fails after every third boot but 
at least it doesn't interfere with the IP stack of the host.


But although this is not a complicated thing to do it is very useful to 
*know* what you are doing and understand the limitations of your system 
which I think James Carlson has provided good insights into.


One way to make the system user-friendly is to make nwam automatically 
configure IPMP when it detects two properly working ethernet connections 
within the same subnet. Perhaps it already does so. If not it should at 
least unplumb one connection to prevent the interference issues the 
James Carlson was talking about, or at least give warning messages about 
it. If we want to make it even more user friendly it could also have 
monitoring features (such as /sbin/route monitor) and offer some 
troubleshooting functionality or even warn about buggy drivers such as 
the rge driver.


I think it is a little strange that the Realtek RTL8111DL used to work 
well and now that I swapped to a Realtek RTL8111E everything went 
haywire. Maybe the drivers have been altered in some way in the 
development process of OpenIndiana, perhaps they have accidentally 
reverted to some old and buggy driver. I don't know about the 8111 chip 
but I believe that the D and E in the name/ID suggest that E is a later 
generation of the circuit and in the future we will expect to see an 
RTL8111F chip in production. The L merely denotes the type of 
encapsulation (PLCC) of the chip as it comes in different 
encapsulations.  When comparing generations it seems that the chip is 
essentially the same but is revised with some new features or fixes.


It is not the first time I've had this experience, I have experienced 
things that used to work well suddenly become buggy. In the past when I 
was running OpenSolaris b111 I had problems getting the ntfs-3g 
filesystem to work. But after compiling a recent version of fuse it 
worked fine. Then I updated OpenSolaris to b134 and once again it 
started to crash the system.


Robin.




Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-27 Thread Robin Axelsson

On 2012-01-26 14:05, James Carlson wrote:

Open Indiana wrote:

Please skip the whole ifconfig and plumb this or that discussion.

Virtualbox works on any interface that is plumbed since only then the
interface is visible in the menu.

Oh, yuck.  It should be using the libdlpi interfaces (see
dlpi_walk(3DLPI)), at least on OpenIndiana.

If portability to older Solaris systems is necessary, it should be
enumerating DLPI interfaces using libdevinfo.

Piggybacking on IP isn't right at all.

Note that VirtualBox offers several types of virtualized networking 
modes. In NAT mode where it communicates with the IP stack of the host 
maybe this makes sense whereas it doesn't in bridged mode.


If VirtualBox and OpenIndiana did what they promised without hickups I 
wouldn't even need two network interfaces to ensure the operation of the 
system. But when bugs occur we try to get around them until they are 
fixed, and that requires at least some knowledge about the system which 
I'm really happy to have acquired.


I can even see on the activity LEDs on the switch that the VMs' network 
is independent of the hosts network.




___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-27 Thread Open Indiana
I'm very sorry to have misinformed you. I didn't check my remarks on a
working system and I mixed up two things.

Virtualbox will discover even an unplumbed device.

It was the zones booting that I had in mind. On my solaris 10 server I need
to plumb all interfaces that are used inside a zone on the global zone
before the zone can use it.

Ooops, sorry


-Original Message-
From: Robin Axelsson [mailto:gu99r...@student.chalmers.se]
Sent: vrijdag 27 januari 2012 14:38
To: openindiana-discuss@openindiana.org
Subject: Re: [OpenIndiana-discuss] CIFS performance issues

On 2012-01-26 14:05, James Carlson wrote:
 Open Indiana wrote:
 Please skip the whole ifconfig and plumb this or that discussion.

 Virtualbox works on any interface that is plumbed since only then
 the interface is visible in the menu.
 Oh, yuck.  It should be using the libdlpi interfaces (see
 dlpi_walk(3DLPI)), at least on OpenIndiana.

 If portability to older Solaris systems is necessary, it should be
 enumerating DLPI interfaces using libdevinfo.

 Piggybacking on IP isn't right at all.

Note that VirtualBox offers several types of virtualized networking modes.
In NAT mode where it communicates with the IP stack of the host maybe this
makes sense whereas it doesn't in bridged mode.

If VirtualBox and OpenIndiana did what they promised without hickups I
wouldn't even need two network interfaces to ensure the operation of the
system. But when bugs occur we try to get around them until they are fixed,
and that requires at least some knowledge about the system which I'm really
happy to have acquired.

I can even see on the activity LEDs on the switch that the VMs' network is
independent of the hosts network.



___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss



___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-27 Thread James Carlson
On 01/27/12 08:37, Robin Axelsson wrote:
 If VirtualBox and OpenIndiana did what they promised without hickups I
 wouldn't even need two network interfaces to ensure the operation of the
 system. But when bugs occur we try to get around them until they are
 fixed, and that requires at least some knowledge about the system which
 I'm really happy to have acquired.

There's obviously more to this story.  I've used VirtualBox on
OpenIndiana with a single network interface without a whole lot of
fanfare, so it's a little surprising to hear that someone is going to a
lot of effort to make things more complex in order to avoid problems.

-- 
James Carlson 42.703N 71.076W carls...@workingcode.com

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-27 Thread James Carlson
On 01/27/12 08:28, Robin Axelsson wrote:
 One way to make the system user-friendly is to make nwam automatically
 configure IPMP when it detects two properly working ethernet connections
 within the same subnet.

My recollection is that automatic configuration of IPMP was on the list
of things to do, but that it never got done.  It just wasn't the focus,
because NWAM was initially designed to handle laptops and other simple
systems, not servers.

What NWAM is supposed to do is configure only one usable interface
(guided by user selection criteria) for the system.  The fact that you
got multiple interfaces configured is indeed an anomaly, and one I can't
explain.  I don't know how you got there in the first place.  It
shouldn't have happened.

Someone with a deeper understanding of the new NWAM would have to look
at your system to find out what went wrong.  Unfortunately, I only
remember details about the old one ...

 Perhaps it already does so. If not it should at
 least unplumb one connection to prevent the interference issues the
 James Carlson was talking about, or at least give warning messages about
 it. If we want to make it even more user friendly it could also have
 monitoring features (such as /sbin/route monitor) and offer some
 troubleshooting functionality or even warn about buggy drivers such as
 the rge driver.

That sounds backwards to me.  If a buggy driver exists, then the bugs
should be fixed, or the driver should be discarded.  There's no reason
on Earth to have some other bit of software warning users about
someone else's software design failures, whether real or otherwise.  At
best, that other software would just become a repository of uselessly
independent misjudgment -- as new, unknown buggy drivers are written and
old ones are repaired.

-- 
James Carlson 42.703N 71.076W carls...@workingcode.com

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-27 Thread Robin Axelsson

On 2012-01-27 15:32, James Carlson wrote:

On 01/27/12 08:28, Robin Axelsson wrote:

One way to make the system user-friendly is to make nwam automatically
configure IPMP when it detects two properly working ethernet connections
within the same subnet.

My recollection is that automatic configuration of IPMP was on the list
of things to do, but that it never got done.  It just wasn't the focus,
because NWAM was initially designed to handle laptops and other simple
systems, not servers.

What NWAM is supposed to do is configure only one usable interface
(guided by user selection criteria) for the system.  The fact that you
got multiple interfaces configured is indeed an anomaly, and one I can't
explain.  I don't know how you got there in the first place.  It
shouldn't have happened.


I don't agree with you on that. Many motherboards come with dual 
ethernet ports (i.e. dual NICs, I have counted the chips myself) and it 
is not uncommon with laptops with one wired ethernet interface and a 
wireless one. So as you said, there is the potential risk of 
interference between the two interfaces even though one may not even be 
connected.




Someone with a deeper understanding of the new NWAM would have to look
at your system to find out what went wrong.  Unfortunately, I only
remember details about the old one ...


Perhaps it already does so. If not it should at
least unplumb one connection to prevent the interference issues the
James Carlson was talking about, or at least give warning messages about
it. If we want to make it even more user friendly it could also have
monitoring features (such as /sbin/route monitor) and offer some
troubleshooting functionality or even warn about buggy drivers such as
the rge driver.

That sounds backwards to me.  If a buggy driver exists, then the bugs
should be fixed, or the driver should be discarded.  There's no reason
on Earth to have some other bit of software warning users about
someone else's software design failures, whether real or otherwise.  At
best, that other software would just become a repository of uselessly
independent misjudgment -- as new, unknown buggy drivers are written and
old ones are repaired.



True, but what do you do when *all* you've got is a buggy driver that 
_may_ work well on your system? Either you use the driver or you go get 
a network card that is proven to work well with Solaris/OpenIndana. The 
thing is that a substantial part of the development of OI depends on the 
charity of willing developers and their spare time, so you have to make 
the best out of what you have at your disposal.




___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-27 Thread James Carlson
Robin Axelsson wrote:
 On 2012-01-27 15:32, James Carlson wrote:
 What NWAM is supposed to do is configure only one usable interface
 (guided by user selection criteria) for the system.  The fact that you
 got multiple interfaces configured is indeed an anomaly, and one I can't
 explain.  I don't know how you got there in the first place.  It
 shouldn't have happened.
 
 I don't agree with you on that. Many motherboards come with dual
 ethernet ports (i.e. dual NICs, I have counted the chips myself) and it
 is not uncommon with laptops with one wired ethernet interface and a
 wireless one. So as you said, there is the potential risk of
 interference between the two interfaces even though one may not even be
 connected.

Don't agree how?

NWAM's original mission in life was to make sure that only one interface
was configured at a time, regardless of how many might be available.
I'm pretty darned sure that's true, because I was involved in that project.

That's sort of the whole point.  NWAM looks over the available
interfaces, figures out which ones are usable, then applies a set of
policies to determine which one of all of those will be used.  It then
disables the others and properly configures the rest of the system to
use that one chosen interface.  If conditions change, then it
reevaluates the situation.

The canonical example would be a laptop with wired and wireless
interfaces.  The default rule would be to use wired if it's connected
and working, and otherwise use the wireless interface.  Not both at any
one time.

Things changed a bit in the next incarnation of NWAM, and I wasn't as in
touch with that one.  However, the fundamental design goal of producing
a working configuration -- and avoiding known unworkable configurations
-- wasn't abandoned.  So, what you saw was either a bug or a
misconfiguration of some sort, and you'd need to find someone who knows
more about that second NWAM.

 That sounds backwards to me.  If a buggy driver exists, then the bugs
 should be fixed, or the driver should be discarded.  There's no reason
 on Earth to have some other bit of software warning users about
 someone else's software design failures, whether real or otherwise.  At
 best, that other software would just become a repository of uselessly
 independent misjudgment -- as new, unknown buggy drivers are written and
 old ones are repaired.

 
 True, but what do you do when *all* you've got is a buggy driver that
 _may_ work well on your system? Either you use the driver or you go get
 a network card that is proven to work well with Solaris/OpenIndana. The
 thing is that a substantial part of the development of OI depends on the
 charity of willing developers and their spare time, so you have to make
 the best out of what you have at your disposal.

I think we have differing viewpoints on system architecture.

To me, it would be a very poor design choice to embed detailed knowledge
of some driver writer's parental marital status into an independent part
of the system.

If all compromised drivers exposed a I'm potentially garbage flag,
then, fine, that independent part could read that flag and do whatever
it wants based on it.  But merely reading the letters rge and deciding
to impugn the connection based on some history or accusations strikes me
as untenable.

-- 
James Carlson 42.703N 71.076W carls...@workingcode.com

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-27 Thread Robin Axelsson

On 2012-01-27 16:45, James Carlson wrote:

Robin Axelsson wrote:

On 2012-01-27 15:32, James Carlson wrote:

What NWAM is supposed to do is configure only one usable interface
(guided by user selection criteria) for the system.  The fact that you
got multiple interfaces configured is indeed an anomaly, and one I can't
explain.  I don't know how you got there in the first place.  It
shouldn't have happened.

I don't agree with you on that. Many motherboards come with dual
ethernet ports (i.e. dual NICs, I have counted the chips myself) and it
is not uncommon with laptops with one wired ethernet interface and a
wireless one. So as you said, there is the potential risk of
interference between the two interfaces even though one may not even be
connected.

Don't agree how?


That systems with multiple interfaces are an anomaly, but maybe that's 
not what you meant.




NWAM's original mission in life was to make sure that only one interface
was configured at a time, regardless of how many might be available.
I'm pretty darned sure that's true, because I was involved in that project.

That's sort of the whole point.  NWAM looks over the available
interfaces, figures out which ones are usable, then applies a set of
policies to determine which one of all of those will be used.  It then
disables the others and properly configures the rest of the system to
use that one chosen interface.  If conditions change, then it
reevaluates the situation.

The canonical example would be a laptop with wired and wireless
interfaces.  The default rule would be to use wired if it's connected
and working, and otherwise use the wireless interface.  Not both at any
one time.

Things changed a bit in the next incarnation of NWAM, and I wasn't as in
touch with that one.  However, the fundamental design goal of producing
a working configuration -- and avoiding known unworkable configurations
-- wasn't abandoned.  So, what you saw was either a bug or a
misconfiguration of some sort, and you'd need to find someone who knows
more about that second NWAM.


That sounds backwards to me.  If a buggy driver exists, then the bugs
should be fixed, or the driver should be discarded.  There's no reason
on Earth to have some other bit of software warning users about
someone else's software design failures, whether real or otherwise.  At
best, that other software would just become a repository of uselessly
independent misjudgment -- as new, unknown buggy drivers are written and
old ones are repaired.


True, but what do you do when *all* you've got is a buggy driver that
_may_ work well on your system? Either you use the driver or you go get
a network card that is proven to work well with Solaris/OpenIndana. The
thing is that a substantial part of the development of OI depends on the
charity of willing developers and their spare time, so you have to make
the best out of what you have at your disposal.

I think we have differing viewpoints on system architecture.

To me, it would be a very poor design choice to embed detailed knowledge
of some driver writer's parental marital status into an independent part
of the system.

If all compromised drivers exposed a I'm potentially garbage flag,
then, fine, that independent part could read that flag and do whatever
it wants based on it.  But merely reading the letters rge and deciding
to impugn the connection based on some history or accusations strikes me
as untenable.



The only opinion that I have is that it should work and reliably so. The 
rge driver is apparently buggy and that's what people say about it in 
mailing lists. It is included with the OI distribution/repository. If I 
had the time and knowledge I would try and fix it myself but 
unfortunately I don't.


I have told myself to get to learn dtrace someday but I guess that I 
will have to get through the Solaris Internals to even be able to 
understand the output it generates.





___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-27 Thread James Carlson
Robin Axelsson wrote:
 On 2012-01-27 16:45, James Carlson wrote:
 Robin Axelsson wrote:
 On 2012-01-27 15:32, James Carlson wrote:
 What NWAM is supposed to do is configure only one usable interface
 (guided by user selection criteria) for the system.  The fact that you
 got multiple interfaces configured is indeed an anomaly, and one I
 can't
 explain.  I don't know how you got there in the first place.  It
 shouldn't have happened.
 I don't agree with you on that. Many motherboards come with dual
 ethernet ports (i.e. dual NICs, I have counted the chips myself) and it
 is not uncommon with laptops with one wired ethernet interface and a
 wireless one. So as you said, there is the potential risk of
 interference between the two interfaces even though one may not even be
 connected.
 Don't agree how?
 
 That systems with multiple interfaces are an anomaly, but maybe that's
 not what you meant.

Likely not, as multiple interfaces were exactly the reason that NWAM was
invented in the first place.  Without multiple interfaces, there's not a
whole lot of need for something like it.

The primary use-case for NWAM was a laptop with wired and wireless, and
being able to switch between them when advantageous without having to
get the user involved.

 If all compromised drivers exposed a I'm potentially garbage flag,
 then, fine, that independent part could read that flag and do whatever
 it wants based on it.  But merely reading the letters rge and deciding
 to impugn the connection based on some history or accusations strikes me
 as untenable.

 
 The only opinion that I have is that it should work and reliably so. The
 rge driver is apparently buggy and that's what people say about it in
 mailing lists. It is included with the OI distribution/repository. If I
 had the time and knowledge I would try and fix it myself but
 unfortunately I don't.

That's all well and good, but, as I was saying, I don't agree that
modifying NWAM (or any other part of the system for that matter) to
disparage particular drivers is a good thing at all.  At a minimum, the
energy spent in creating the illuminated manuscript of bad drivers
would be better spent debugging and fixing the darned things.

-- 
James Carlson 42.703N 71.076W carls...@workingcode.com

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-26 Thread Open Indiana
Please skip the whole ifconfig and plumb this or that discussion.

Virtualbox works on any interface that is plumbed since only then the
interface is visible in the menu. A working IP-adress is not necessary.
Please put your interfaces on manual IP-assignment and disable nwam.
Give an IP-address to 1 interface so you can manage your host-server.

A bridged interface in VirtualBox just means that VB allows all traffic to
go directly to your VM client.

So:
1. disconnect all ethernetcables from the hostserver
2. put one back in an interface
3. login to the server
4. disable nwam
5. give the online interface a working IP-configuration via ifconfig /
nsconfig / nslookup config files
6. make sure no configuration for the other interfaces is to be found inside
/etc
7. plumb up a free interface by ifconfig and put a cable in this interface
8. start VirtualBox Gui
9. configure client to use the interface of point 7
10. start the client
11. grab some coffee
12. enjoy



___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-26 Thread James Carlson
Open Indiana wrote:
 Please skip the whole ifconfig and plumb this or that discussion.
 
 Virtualbox works on any interface that is plumbed since only then the
 interface is visible in the menu.

Oh, yuck.  It should be using the libdlpi interfaces (see
dlpi_walk(3DLPI)), at least on OpenIndiana.

If portability to older Solaris systems is necessary, it should be
enumerating DLPI interfaces using libdevinfo.

Piggybacking on IP isn't right at all.

-- 
James Carlson 42.703N 71.076W carls...@workingcode.com

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-25 Thread Robin Axelsson

On 2012-01-24 21:59, James Carlson wrote:

Robin Axelsson wrote:

If you have two interfaces inside the same zone that have the same IP
prefix, then you have to have IPMP configured, or all bets are off.
Maybe it'll work.  But probably not.  And was never been supported that
way by Sun.

The idea I have with using two NICs is to create a separation between
the virtual machine(s) and the host system so that the network activity
of the virtual machine(s) won't interfere with the network activity of
the physical host machine.

Nice idea, but it unfortunately won't work.  When two interfaces are
plumbed up like that -- regardless of what VM or bridge or hub or
virtualness there might be -- the kernel sees two IP interfaces
configured with the same IP prefix (subnet), and it considers them to be
completely interchangeable.  It can (and will!) use either one at any
time.  You don't have control over where the packets go.

Well, unless you get into playing tricks with IP Filter.  And if you do
that, then you're in a much deeper world of hurt, at least in terms of
performance.

Here's what the virtualbox manul says about bridged networking:

*Bridged networking*:

This is for more advanced networking needs such as network simulations 
and running servers in a guest. When enabled, VirtualBox connects to one 
of your installed network cards and exchanges network packets directly, 
circumventing your host operating system's network stack.


With bridged networking, VirtualBox uses a device driver on your 
_*host*_ system that filters data from your physical network adapter. 
This driver is therefore called a net filter driver. This allows 
VirtualBox to intercept data from the physical network and inject data 
into it, effectively creating a new network interface in software. When 
a guest is using such a new software interface, it looks to the host 
system as though the guest were physically connected to the interface 
using a network cable: the host can send data to the guest through that 
interface and receive data from it. This means that you can set up 
routing or bridging between the guest and the rest of your network.


For this to work, VirtualBox needs a device driver on your host system. 
The way bridged networking works has been completely rewritten with 
VirtualBox 2.0 and 2.1, depending on the host operating system. From the 
user perspective, the main difference is that complex configuration is 
no longer necessary on any of the supported host operating systems.



The virtual hub that creates the bridge between the VM network ports and
the physical port tap into the network stack of the host machine and I
suspect that this configuration is not entirely seamless. I think that
the virtual bridge interferes with the network stack so letting the
virtual bridge have its own network port to play around with has turned
out to be a good idea, at least when I was running OSOL b134 - OI148a.

I think you're going about this the wrong way, at least with respect to
these two physical interfaces.

I suspect that the right answer is to plumb only *ONE* of them in the
zone, and then use the other by name inside the VM when creating the
virtual hub.  That second interface should not be plumbed or configured
to use IP inside the regular OpenIndiana environment.  That way, you'll
have two independent paths to the network.
Perhaps the way to do it is to create a dedicated jail/zone for 
VIrtualBox to run in and plumb the e1000g2 to that zone. I'm a little 
curious as to how this would affect the performance I'm not sure if you 
have to split up the CPU cores etc between zones or if that is taken 
care of as the zones pretty much share the same kernel (and its task 
scheduler).

I suppose I could try to configure the IPMP, I guess I will have to
throw away the DHCP configuration and go for fixed IP all the way as
DHCP only gives two IP addresses and I will need four of them. But then
we have the problem with the VMs and how to separate them from the
network stack of the host.

It's possible to have DHCP generate multiple addresses per interface.
And it's possible to use IPMP with just one IP address per interface (in
fact, you can use it with as little as one IP address per *group*).  And
it's possible to configure an IPMP group with some static addresses and
some DHCP.
In order to make DHCP generate more IP addresses I guess I have to 
generate a few (virtual) MAC addresses. Maybe ifconfig hadles this 
internally.


But read the documentation in the man pages.  IPMP may or may not be
what you really want here.  Based on the isolation demands mentioned,
I suspect it's not.  The only reason I mentioned it is that your current
IP configuration is invalid (unsupported, might not work, good luck with
that) without IPMP -- that doesn't mean you should use IPMP, but that
you should rethink the whole configuration.

One of the many interesting problems that happens with multiple
interfaces configured on the same network is 

Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-25 Thread James Carlson
Robin Axelsson wrote:
 On 2012-01-24 21:59, James Carlson wrote:
 Well, unless you get into playing tricks with IP Filter.  And if you do
 that, then you're in a much deeper world of hurt, at least in terms of
 performance.
 Here's what the virtualbox manul says about bridged networking:
 
 *Bridged networking*:
 
 This is for more advanced networking needs such as network simulations
 and running servers in a guest. When enabled, VirtualBox connects to one
 of your installed network cards and exchanges network packets directly,
 circumventing your host operating system's network stack.

Note what it says above.  It says nothing about plumbing that interface
for IP on the host operating system.

I'm suggesting that you should _not_ do that, because you (apparently)
want to have separate interfaces for both host and the VirtualBox guests.

If that's not what you want, then I think you should clarify.

Perhaps the right answer is to put the host and guests on different
subnets, so that you have two interfaces with different subnets
configured on the same physical network.  That can have some risks with
respect to multicast, but at least it works far better than duplicating
a subnet.

 I suspect that the right answer is to plumb only *ONE* of them in the
 zone, and then use the other by name inside the VM when creating the
 virtual hub.  That second interface should not be plumbed or configured
 to use IP inside the regular OpenIndiana environment.  That way, you'll
 have two independent paths to the network.
 Perhaps the way to do it is to create a dedicated jail/zone for
 VIrtualBox to run in and plumb the e1000g2 to that zone. I'm a little
 curious as to how this would affect the performance I'm not sure if you
 have to split up the CPU cores etc between zones or if that is taken
 care of as the zones pretty much share the same kernel (and its task
 scheduler).

I'm confused.  If VirtualBox is just going to talk to the physical
interface itself, why is plumbing IP necessary at all?  It shouldn't be
needed.

 It's possible to have DHCP generate multiple addresses per interface.
 And it's possible to use IPMP with just one IP address per interface (in
 fact, you can use it with as little as one IP address per *group*).  And
 it's possible to configure an IPMP group with some static addresses and
 some DHCP.
 In order to make DHCP generate more IP addresses I guess I have to
 generate a few (virtual) MAC addresses. Maybe ifconfig hadles this
 internally.

You don't have to work that hard.  You can configure individual IPv4
interfaces to use DHCP, and the system will automatically generate a
random DHCPv4 ClientID value for those interfaces.

For example, you can do this:

ifconfig e1000g0:1 plumb
ifconfig e1000g0:2 plumb
ifconfig e1000g0:1 dhcp
ifconfig e1000g0:2 dhcp

Using the old-style configuration interfaces, you can do touch
/etc/dhcp.e1000g0:1 to set the system to plumb up and run DHCP on
e1000g0:1.

There's probably a way to do this with ipadm, but I'm too lazy to read
the man page for it.  I suggest it, though, as a worthwhile thing to do
on a lazy Sunday afternoon.

 But those are just two small ways in which multiple interfaces
 configured in this manner are a Bad Thing.  A more fundamental issue is
 that it was just never designed to be used that way, and if you do so,
 you're a test pilot.
 This was very interesting and insightful. I've always wondered how
 Windows tell the difference between two network connections in a
 machine, now I see that it doesn't. Sometimes this can get corrupted in
 Windows and sever the internet connection completely. If I understand
 correctly, the TCP stack in Windows is borrowed from Sun. I guess this
 is a little OT, it's just a reflection.

No, I don't think they're related in any significant way.  The TCP/IP
stack that Sun acquired long, long ago came from Mentat, and greatly
modified since then.  I suspect that Windows derives from of the BSD
code, but I don't have access to the Windows internals to make sure.

In any event, they all come from the basic constraints of the protocol
design itself, particularly RFC 791, and the weak ES model.

 I will follow these instructions if I choose to configure IPMP:
 http://www.sunsolarisadmin.com/networking/configure-ipmp-load-balancing-resilience-in-sun-solaris/

 Wow, that's old.  You might want to dig up something a little more
 modern.  Before OpenIndiana branched off of OpenSolaris (or before
 Oracle slammed the door shut), a lot of work went into IPMP to make it
 much more flexible.
 I'll see if there is something more up-to-date. There are no man entries
 for 'ipmp' in OI and 'apropos' doesn't work for me.

Try man ifconfig, man ipmpstat, man if_mpadm.  Those should be
reasonable starting points.

 In terms of getting the kernel's IRE entries correct, it doesn't matter
 so much where the physical wires go.  It matters a whole lot what you do
 with ifconfig.
 Ok, but when it is not 

Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-25 Thread Robin Axelsson

On 2012-01-25 19:03, James Carlson wrote:

Robin Axelsson wrote:

On 2012-01-24 21:59, James Carlson wrote:

Well, unless you get into playing tricks with IP Filter.  And if you do
that, then you're in a much deeper world of hurt, at least in terms of
performance.

Here's what the virtualbox manul says about bridged networking:

*Bridged networking*:

This is for more advanced networking needs such as network simulations
and running servers in a guest. When enabled, VirtualBox connects to one
of your installed network cards and exchanges network packets directly,
circumventing your host operating system's network stack.

Note what it says above.  It says nothing about plumbing that interface
for IP on the host operating system.

I'm suggesting that you should _not_ do that, because you (apparently)
want to have separate interfaces for both host and the VirtualBox guests.

If that's not what you want, then I think you should clarify.

Perhaps the right answer is to put the host and guests on different
subnets, so that you have two interfaces with different subnets
configured on the same physical network.  That can have some risks with
respect to multicast, but at least it works far better than duplicating
a subnet.


I suspect that the right answer is to plumb only *ONE* of them in the
zone, and then use the other by name inside the VM when creating the
virtual hub.  That second interface should not be plumbed or configured
to use IP inside the regular OpenIndiana environment.  That way, you'll
have two independent paths to the network.

Perhaps the way to do it is to create a dedicated jail/zone for
VIrtualBox to run in and plumb the e1000g2 to that zone. I'm a little
curious as to how this would affect the performance I'm not sure if you
have to split up the CPU cores etc between zones or if that is taken
care of as the zones pretty much share the same kernel (and its task
scheduler).

I'm confused.  If VirtualBox is just going to talk to the physical
interface itself, why is plumbing IP necessary at all?  It shouldn't be
needed.


Maybe I'm the one being confused here. I just believed that the IP must 
be visible to the host for VirtualBox to be able to find the interface 
in first place but maybe that is not the case. When choosing an adapter 
for bridged networking on my system, the drop-down menu will give me the 
options e1000g1, e1000g2 and rge0. So I'm not sure how or what part of 
the system that gives the physical interfaces those names. I mean if the 
host can't see those interfaces how will VirtualBox be able to see them? 
At least that was my reasoning behind it.



It's possible to have DHCP generate multiple addresses per interface.
And it's possible to use IPMP with just one IP address per interface (in
fact, you can use it with as little as one IP address per *group*).  And
it's possible to configure an IPMP group with some static addresses and
some DHCP.

In order to make DHCP generate more IP addresses I guess I have to
generate a few (virtual) MAC addresses. Maybe ifconfig hadles this
internally.

You don't have to work that hard.  You can configure individual IPv4
interfaces to use DHCP, and the system will automatically generate a
random DHCPv4 ClientID value for those interfaces.

For example, you can do this:

ifconfig e1000g0:1 plumb
ifconfig e1000g0:2 plumb
ifconfig e1000g0:1 dhcp
ifconfig e1000g0:2 dhcp

Using the old-style configuration interfaces, you can do touch
/etc/dhcp.e1000g0:1 to set the system to plumb up and run DHCP on
e1000g0:1.

There's probably a way to do this with ipadm, but I'm too lazy to read
the man page for it.  I suggest it, though, as a worthwhile thing to do
on a lazy Sunday afternoon.


I'll look into it if all else fails. I see that the manual entry for 
ipadm is missing in OI. I will also see if there is more up-to-date 
documentation on the ipmp. I assume that when a ClientID value is 
generated a MAC address also comes with it, at least when it negotiates 
with the DHCP server.





But those are just two small ways in which multiple interfaces
configured in this manner are a Bad Thing.  A more fundamental issue is
that it was just never designed to be used that way, and if you do so,
you're a test pilot.

This was very interesting and insightful. I've always wondered how
Windows tell the difference between two network connections in a
machine, now I see that it doesn't. Sometimes this can get corrupted in
Windows and sever the internet connection completely. If I understand
correctly, the TCP stack in Windows is borrowed from Sun. I guess this
is a little OT, it's just a reflection.

No, I don't think they're related in any significant way.  The TCP/IP
stack that Sun acquired long, long ago came from Mentat, and greatly
modified since then.  I suspect that Windows derives from of the BSD
code, but I don't have access to the Windows internals to make sure.

In any event, they all come from the basic constraints of 

Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-25 Thread James Carlson
Robin Axelsson wrote:
 I'm confused.  If VirtualBox is just going to talk to the physical
 interface itself, why is plumbing IP necessary at all?  It shouldn't be
 needed.
 
 Maybe I'm the one being confused here. I just believed that the IP must
 be visible to the host for VirtualBox to be able to find the interface
 in first place but maybe that is not the case. When choosing an adapter
 for bridged networking on my system, the drop-down menu will give me the
 options e1000g1, e1000g2 and rge0. So I'm not sure how or what part of
 the system that gives the physical interfaces those names. I mean if the
 host can't see those interfaces how will VirtualBox be able to see them?
 At least that was my reasoning behind it.

The names come from the datalink layer.  It has nothing whatsoever to do
with IP.  IP (like many other network layers) can open and use datalink
layer interfaces if desired.

They're quite distinct in terms of implementation, though the user
interfaces (and documentation :-) tend to blur the lines.  The datalink
layer object is managed by dladm.  It has a name that defaults to the
driver's name plus an instance number, but that can be configured by the
administrator if desired.  There are also virtual interfaces at this
level for various purposes.  You can think of it as being the Ethernet
port, assuming no VLANs are involved.

The IP layer object is managed by ifconfig.  It's used only by IP.
Other protocols don't (or at least _shouldn't_) use the IP objects.  In
general terms, these objects each contain an IP address, a subnet mask,
and a set of IFF_* flags.

By default, the first IP layer object created on a given datalink layer
object has the same name as that datalink layer object -- even though
they're distinct ideas.  The second and subsequent such objects created
get the somewhat-familiar :N addition to the name.  It's sometimes
helpful to think of that first object as being really named e1000g1:0
at the IP layer, in order to keep it distinct from the e1000g1
datalink layer item.

Since VirtualBox is providing datalink layer objects to the guest
operating system (through a simulated driver), it needs access to the
datalink layer on the host system.  That means e1000g1 or something
like that.

It doesn't -- and can't -- use the IP layer objects.  Those allow you to
send only IP datagrams.

If VirtualBox used the IP objects from the host operating system, what
would happen when the guest attempts to send an ARP message or (heaven
help us) IPX?  Those work at a layer below IP.

 There's probably a way to do this with ipadm, but I'm too lazy to read
 the man page for it.  I suggest it, though, as a worthwhile thing to do
 on a lazy Sunday afternoon.
 
 I'll look into it if all else fails. I see that the manual entry for
 ipadm is missing in OI. I will also see if there is more up-to-date
 documentation on the ipmp. I assume that when a ClientID value is
 generated a MAC address also comes with it, at least when it negotiates
 with the DHCP server.

Nope.  See section 9.14 of RFC 2132 for a description of the DHCP
Client-identifier option, and the numerous references to client
identifiers in RFC 2131.

Back in the bad old days of BOOTP, clients were in fact forced to use
hardware (MAC) addresses for identification.  That's still the default
for DHCP, just to make things easy, but a fancy client (such as the one
in OpenIndiana) can create client identifiers from all sorts of sources,
including thin air.

 Try man ifconfig, man ipmpstat, man if_mpadm.  Those should be
 reasonable starting points.
 
 Thanks, these man pages exists. I saw in the ifconfig that there is some
 info about ipmp although it is brief.

It's possible that some of meem's Clearview project documentation that
revamped large parts of IPMP are available on line as well.

 Or perhaps you're running NWAM, and that daemon is undoing your work
 behind your back.  You probably don't want to use NWAM with a reasonably
 complex system configuration like this.
 
 I think it is a bit strange that the changes only apply to the IPv4
 settings but maybe it doesn't matter as the router only uses IPv4 (I
 think).

You have to tell ifconfig what you want to do.  If you want to modify
the separate IPv6 interfaces, then specify inet6, as in:

ifconfig e1000g0 inet6 unplumb

 Hmm, I'm starting to wonder how netmasks and subnets work in
 IPv6 as none appears to be specified in ifconfig -a I'm starting to
 realize that you don't need nwam for dhcp.

The prefix certainly should be there as /n.  It's basically the same
as in modern IPv4, except with 128 bits instead of 32.

In modern -- CIDR -- IPv4, you don't normally refer to a subnet mask as
something like 255.255.255.128, but rather as a prefix length, like
/25.  Besides being more compact, the prefix length notation avoids
dumb (and pointless) mistakes that occur when someone accidentally
specifies non-contiguous mask bits.
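As a quick worked example, the prefix length just counts the leading one
bits of the mask:

  /25  ->  11111111 11111111 11111111 10000000  ->  255.255.255.128
  /24  ->  11111111 11111111 11111111 00000000  ->  255.255.255.0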

 dladm you say. I trust that VirtualBox does what 

Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-24 Thread Robin Axelsson
The system I'm using is not that beefy. It's a 4-core Phenom II using 
a server grade hard drive as system drive and 8 consumer grade drives 
for the storage pool that are behind an LSI SAS 1068e controller. I have 
4GB RAM in it.


I have experienced freeze-ups due to failing hard drives in the storage 
pool in the past. When they happened, they affected the CIFS connection 
(of course) but not the SSH connection. Moreover, I could see errors 
with iostat -En. I don't know if you have iostat in Linux but I'm 
afraid you don't.


I experienced a series of shorter freeze-ups today (3-5 seconds long) 
while monitoring the system using System Monitor through the 
'vncserver' and 'top' over SSH. Those freeze-ups affected the CIFS 
connection, SSH, and the VNC connection (but did not sever them). The 
freeze-ups were not long enough for me to get around to checking the RDP 
connection to the VM.


When those freeze-ups occurred, the system monitor gracefully showed 
this as a dip in the real-time network history chart, so these freeze-ups 
don't seem to stall the network monitor itself. The CPU 
utilization was around 10-15% and the memory usage was around 13.5% 
(540MB) the whole time, so I don't think capping the ARC would do much good.


I looked into /var/adm/messages and found entries like

nwamd[99]: [ID 234669 daemon.error] 3: nwamd_door_switch: need 
solaris.network.autoconf.read for request type 1


during that time. I'll look more carefully next time and see if 
the time-stamps of these entries match the times at which I experience 
those freeze-ups. I suspect that they do. No errors are found with 
iostat -E. I'll also look into iowait to see if it gives any 
clues, though I'm not sure how to keep a history of iowait the way 
System Monitor keeps a history of CPU utilization, memory usage and 
network activity.
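One way to keep such a history (just a sketch; both tools are standard on
OI) is to log the stats to a file and correlate the timestamps later:

  iostat -xnT d 5 >> /var/tmp/iostat.log &   # per-device busy/wait columns, timestamped
  vmstat 5 >> /var/tmp/vmstat.log &          # overall CPU and memory figures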


It has also been suggested that I try out the prestable version of OI and 
see whether these freeze-ups occur when using a static IP (i.e. not nwam).


Robin.


On 2012-01-24 06:39, Robbie Crash wrote:

I had problems that sound nearly identical to what you're describing when
running ZFS Native under Ubuntu, but without the VM aspect. They seemed to
happen when the server would begin to flush memory after large reads or
writes to the ZFS pool. How much RAM does your machine have? Have you
considered evil tuning your ARC cache for testing?  SSH would disconnect
and fileshares would become unavailable.
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache


What is the rest of the system reporting? CPU? Memory in use? IO Wait? Are
you using consumer grade hard drives? These could be doing their lovely 2
minute read recovery thing and causing headaches with the pool access. Does
the host have any CIFS shares that you can attempt to access while the
guest is frozen?

I found that forcing ZFS to stay 2.5GB under max, rather than the
default(?) 1GB, improved stability vastly.

I haven't had the same issues after moving to OI, but I've also quadrupled
the amount of RAM in my box. Sorry if any of this is horribly off the mark,
most of my ZFS/CIFS/SMB problems happened while running ZFS on Ubuntu, and
I'm pretty new to OI.

On Mon, Jan 23, 2012 at 16:17, Open Indiana <openindi...@out-side.nl> wrote:


What happens if you disable nwam and use the basic/manual ifconfig setup?


-Original Message-
From: Robin Axelsson [mailto:gu99r...@student.chalmers.se]
Sent: maandag 23 januari 2012 15:10
To: openindiana-discuss@openindiana.org
Subject: Re: [OpenIndiana-discuss] CIFS performance issues

No, I'm not doing anything in particular in the virtual machine. The media
file is played on another computer in the (physical) network over CIFS.
Over
the network I also access the server using Remote Desktop/Terminal Services
to communicate to the virtual machine (using the VirtualBox RDP interface,
i.e. not the guest OS RDP), VNC (to access OI using vncserver) and SSH (to
OI).

I wouldn't say that the entire server stops responding, only the connection
to CIFS and SSH. I wasn't running VNC when it happened yesterday so I don't
know about it, but the RDP connection and the Virtual Machine inside this
server was unaffected while CIFS and SSH was frozen.

I tried today to start the virtual machine but it failed because it could
not find the connection (e1000g2):

Error: failed to start machine. Error message: Failed to open/create the
internal network 'HostInterfaceNetworking-e1000
g2 - Intel PRO/1000 Gigabit Ethernet' (VERR_SUPDRV_COMPONENT_NOT_FOUND).
Failed to attach the network LUN (VERR_SUPDRV_COMPONENT_NOT_FOUND).
Unknown error creating VM (VERR_SUPDRV_COMPONENT_NOT_FOUND)

ifconfig -a returns:
...
e1000g1: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu
1500 index 2
 inet 10.40.137.185 netmask ff00 broadcast 10.40.137.255
e1000g2: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu
1500 index 3
 inet 10.40.137.196 netmask ff00

Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-24 Thread Gary Mills
On Tue, Jan 24, 2012 at 04:39:42PM +0100, Robin Axelsson wrote:
 ifconfig -a returns:
 ...
 e1000g1: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu
 1500 index 2
  inet 10.40.137.185 netmask ff00 broadcast 10.40.137.255
 e1000g2: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu
 1500 index 3
  inet 10.40.137.196 netmask ff00 broadcast 10.40.137.255
 rge0: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 1500
 index
 4

Do you really have two ethernet ports on the same network?  You can't
do that without some sort of link aggregation on both ends of the
connection.
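For reference, that sort of aggregation would be set up roughly like this
with dladm, assuming the switch end is configured to match; the aggregation
name is a placeholder:

  dladm create-aggr -l e1000g1 -l e1000g2 aggr0
  ifconfig aggr0 plumb
  ifconfig aggr0 dhcp start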

 I experienced a series of shorter freeze-ups today (3-5 seconds
 long) while monitoring the system using System Monitor through the
 'vncserver' and 'top' over SSH. Those freeze-ups affected the CIFS
 connection, SSH, and VNC connection (but did not sever them). The
 freeze-ups were not long enough so that I could get to check the RDP
 connection to the VM.

-- 
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-



Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-24 Thread Robin Axelsson

On 2012-01-24 16:52, Gary Mills wrote:

On Tue, Jan 24, 2012 at 04:39:42PM +0100, Robin Axelsson wrote:

ifconfig -a returns:
...
e1000g1: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu
1500 index 2
 inet 10.40.137.185 netmask ff00 broadcast 10.40.137.255
e1000g2: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu
1500 index 3
 inet 10.40.137.196 netmask ff00 broadcast 10.40.137.255
rge0: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 1500
index
4

Do you really have two ethernet ports on the same network?  You can't
do that without some sort of link aggregation on both ends of the
connection.
I don't see why not. I've done this before and it used to work just 
fine. These are two different controllers that work independently, and I 
do it so that the VM(s) can have their own NIC to work with, as I believe 
the virtual network bridge interferes with other network activity.


If we assume that both ports give rise to problems because they run 
without teaming/link aggregation (which I doubt), then there wouldn't 
be any issues if I only used one network port. I have tried with only 
one port and the issues are considerably worse in that configuration.



I experienced a series of shorter freeze-ups today (3-5 seconds
long) while monitoring the system using System Monitor through the
'vncserver' and 'top' over SSH. Those freeze-ups affected the CIFS
connection, SSH, and VNC connection (but did not sever them). The
freeze-ups were not long enough so that I could get to check the RDP
connection to the VM.






Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-24 Thread James Carlson
Robin Axelsson wrote:
 On 2012-01-24 16:52, Gary Mills wrote:
 On Tue, Jan 24, 2012 at 04:39:42PM +0100, Robin Axelsson wrote:
 ifconfig -a returns:
 ...
 e1000g1: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu
 1500 index 2
  inet 10.40.137.185 netmask ff00 broadcast 10.40.137.255
 e1000g2: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu
 1500 index 3
  inet 10.40.137.196 netmask ff00 broadcast 10.40.137.255
 rge0: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu
 1500
 index
 4
 Do you really have two ethernet ports on the same network?  You can't
 do that without some sort of link aggregation on both ends of the
 connection.
 I don't see why not. I've done this before and it used to work just
 fine. These are two different controllers that work independently and I
 do it so that the VM(s) could have its own NIC to work with as I believe
 the virtual network bridge interferes with other network activity.

It's never worked quite right (whatever right might mean here) on
Solaris.

If you have two interfaces inside the same zone that have the same IP
prefix, then you have to have IPMP configured, or all bets are off.
Maybe it'll work.  But probably not.  And it was never supported that
way by Sun.

 If we assume that both ports give rise to problems because they run
 without teaming/link aggregation (which I think not) then there wouldn't
 be any issues if I only used one network port. I have tried with only
 one port and the issues are considerably worse in that configuration.

That's an interesting observation.  When running with one port, do you
unplumb the other?  Or is one port just an application configuration
issue?

If you run /sbin/route monitor when the system is working fine and
leave it running until a problem happens, do you see any output produced?

If so, then this could fairly readily point the way to the problem.

-- 
James Carlson 42.703N 71.076W carls...@workingcode.com



Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-24 Thread Robin Axelsson

On 2012-01-24 19:14, James Carlson wrote:

Robin Axelsson wrote:

On 2012-01-24 16:52, Gary Mills wrote:

On Tue, Jan 24, 2012 at 04:39:42PM +0100, Robin Axelsson wrote:

ifconfig -a returns:
...
e1000g1: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu
1500 index 2
  inet 10.40.137.185 netmask ff00 broadcast 10.40.137.255
e1000g2: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu
1500 index 3
  inet 10.40.137.196 netmask ff00 broadcast 10.40.137.255
rge0: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu
1500
index
4

Do you really have two ethernet ports on the same network?  You can't
do that without some sort of link aggregation on both ends of the
connection.

I don't see why not. I've done this before and it used to work just
fine. These are two different controllers that work independently and I
do it so that the VM(s) could have its own NIC to work with as I believe
the virtual network bridge interferes with other network activity.

It's never worked quite right (whatever right might mean here) on
Solaris.

If you have two interfaces inside the same zone that have the same IP
prefix, then you have to have IPMP configured, or all bets are off.
Maybe it'll work.  But probably not.  And it was never supported that
way by Sun.
The idea I have with using two NICs is to create a separation between 
the virtual machine(s) and the host system so that the network activity 
of the virtual machine(s) won't interfere with the network activity of 
the physical host machine.


The virtual hub that creates the bridge between the VM network ports and 
the physical port taps into the network stack of the host machine, and I 
suspect that this configuration is not entirely seamless. I think that 
the virtual bridge interferes with the network stack, so letting the 
virtual bridge have its own network port to play around with has turned 
out to be a good idea, at least when I was running OSOL b134 - OI148a.


I suppose I could try to configure IPMP. I guess I will have to 
throw away the DHCP configuration and go for fixed IPs all the way, as 
DHCP only gives two IP addresses and I will need four of them. But then 
we still have the problem of the VMs and how to separate them from the 
network stack of the host.


I will follow these instructions if I choose to configure IPMP:
http://www.sunsolarisadmin.com/networking/configure-ipmp-load-balancing-resilience-in-sun-solaris/




If we assume that both ports give rise to problems because they run
without teaming/link aggregation (which I think not) then there wouldn't
be any issues if I only used one network port. I have tried with only
one port and the issues are considerably worse in that configuration.

That's an interesting observation.  When running with one port, do you
unplumb the other?  Or is one port just an application configuration
issue?

If you run /sbin/route monitor when the system is working fine and
leave it running until a problem happens, do you see any output produced?

If so, then this could fairly readily point the way to the problem.

With one port I mean that only one port is physically connected to the 
switch; all the other ports are disconnected. So I guess 'ifconfig 
port_id unplumb' would have no effect on such ports.


I managed to reproduce a few short freezes while /sbin/route monitor 
was running over SSH, but it didn't spit out any messages; perhaps I 
should run it on a local terminal instead. I looked at the time stamps 
of the entries in /var/adm/messages and they do not match the 
freeze-ups to the minute.






Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-24 Thread James Carlson
Robin Axelsson wrote:
 If you have two interfaces inside the same zone that have the same IP
 prefix, then you have to have IPMP configured, or all bets are off.
 Maybe it'll work.  But probably not.  And it was never supported that
 way by Sun.
 The idea I have with using two NICs is to create a separation between
 the virtual machine(s) and the host system so that the network activity
 of the virtual machine(s) won't interfere with the network activity of
 the physical host machine.

Nice idea, but it unfortunately won't work.  When two interfaces are
plumbed up like that -- regardless of what VM or bridge or hub or
virtualness there might be -- the kernel sees two IP interfaces
configured with the same IP prefix (subnet), and it considers them to be
completely interchangeable.  It can (and will!) use either one at any
time.  You don't have control over where the packets go.

Well, unless you get into playing tricks with IP Filter.  And if you do
that, then you're in a much deeper world of hurt, at least in terms of
performance.

 The virtual hub that creates the bridge between the VM network ports and
 the physical port tap into the network stack of the host machine and I
 suspect that this configuration is not entirely seamless. I think that
 the virtual bridge interferes with the network stack so letting the
 virtual bridge have its own network port to play around with has turned
 out to be a good idea, at least when I was running OSOL b134 - OI148a.

I think you're going about this the wrong way, at least with respect to
these two physical interfaces.

I suspect that the right answer is to plumb only *ONE* of them in the
zone, and then use the other by name inside the VM when creating the
virtual hub.  That second interface should not be plumbed or configured
to use IP inside the regular OpenIndiana environment.  That way, you'll
have two independent paths to the network.
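Concretely, that would be something like the following sketch (the
VBoxManage option names are from memory, so double-check them against your
VirtualBox version):

  ifconfig e1000g2 unplumb      # stop using it for IP in the host
  rm /etc/hostname.e1000g2      # if it exists, so it isn't plumbed at boot
  VBoxManage modifyvm "myvm" --nic1 bridged --bridgeadapter1 e1000g2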

 I suppose I could try to configure the IPMP, I guess I will have to
 throw away the DHCP configuration and go for fixed IP all the way as
 DHCP only gives two IP addresses and I will need four of them. But then
 we have the problem with the VMs and how to separate them from the
 network stack of the host.

It's possible to have DHCP generate multiple addresses per interface.
And it's possible to use IPMP with just one IP address per interface (in
fact, you can use it with as little as one IP address per *group*).  And
it's possible to configure an IPMP group with some static addresses and
some DHCP.
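The classic ifconfig-level version of that is roughly (a sketch; with the
post-Clearview bits these commands create the IPMP group implicitly):

  ifconfig e1000g1 group ipmp0
  ifconfig e1000g2 group ipmp0
  ipmpstat -g        # check the group and probe state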

But read the documentation in the man pages.  IPMP may or may not be
what you really want here.  Based on the isolation demands mentioned,
I suspect it's not.  The only reason I mentioned it is that your current
IP configuration is invalid (unsupported, might not work, good luck with
that) without IPMP -- that doesn't mean you should use IPMP, but that
you should rethink the whole configuration.

One of the many interesting problems that happens with multiple
interfaces configured on the same network is that you get multicast and
broadcast traffic multiplication: each single message will be received
and processed by each of the interfaces.  Besides the flood of traffic
this causes (and the seriously bad things that will happen if you do any
multicast forwarding), it can also expose timing problems in protocols
that are listening to those packets.  When using IPMP, one working
interface is automatically designated to receive all incoming broadcast
and multicast traffic, and the others are disabled and receive unicast
only.  Without IPMP, you don't have that protection.

Another interesting problem is source address usage.  When the system
sends a packet, it doesn't really care what source address is used, so
long as the address is valid on SOME interface on the system.  The
output interface is chosen only by the destination IP address on the
packet -- not the source -- so you'll see packets with source address
A going out interface with address B.  You might think you're
controlling interface usage by binding some local address, but you're
really not, because that's not how IP actually works.  With IPMP,
there's special logic engaged that picks source IP addresses to match
the output interface within the group, and then keeps the connection (to
the extent possible) on the same interface.

But those are just two small ways in which multiple interfaces
configured in this manner are a Bad Thing.  A more fundamental issue is
that it was just never designed to be used that way, and if you do so,
you're a test pilot.

 I will follow these instructions if I choose to configure IPMP:
 http://www.sunsolarisadmin.com/networking/configure-ipmp-load-balancing-resilience-in-sun-solaris/

Wow, that's old.  You might want to dig up something a little more
modern.  Before OpenIndiana branched off of OpenSolaris (or before
Oracle slammed the door shut), a lot of work went into IPMP to make it
much more flexible.

 If you run /sbin/route monitor when 

Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-23 Thread Robbie Crash
I had problems that sound nearly identical to what you're describing when
running ZFS Native under Ubuntu, but without the VM aspect. They seemed to
happen when the server would begin to flush memory after large reads or
writes to the ZFS pool. How much RAM does your machine have? Have you
considered evil tuning your ARC cache for testing?  SSH would disconnect
and fileshares would become unavailable.
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache


What is the rest of the system reporting? CPU? Memory in use? IO Wait? Are
you using consumer grade hard drives? These could be doing their lovely 2
minute read recovery thing and causing headaches with the pool access. Does
the host have any CIFS shares that you can attempt to access while the
guest is frozen?

I found that forcing ZFS to stay 2.5GB under max, rather than the
default(?) 1GB, improved stability vastly.
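For reference, the knob from that guide is a single line in /etc/system
(the value here is just an example; it takes effect after a reboot):

  * cap the ZFS ARC at 1.5 GB
  set zfs:zfs_arc_max = 0x60000000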

I haven't had the same issues after moving to OI, but I've also quadrupled
the amount of RAM in my box. Sorry if any of this is horribly off the mark,
most of my ZFS/CIFS/SMB problems happened while running ZFS on Ubuntu, and
I'm pretty new to OI.

On Mon, Jan 23, 2012 at 16:17, Open Indiana <openindi...@out-side.nl> wrote:

 What happens if you disable nwam and use the basic/manual ifconfig setup?


 -Original Message-
 From: Robin Axelsson [mailto:gu99r...@student.chalmers.se]
 Sent: maandag 23 januari 2012 15:10
 To: openindiana-discuss@openindiana.org
 Subject: Re: [OpenIndiana-discuss] CIFS performance issues

 No, I'm not doing anything in particular in the virtual machine. The media
 file is played on another computer in the (physical) network over CIFS.
 Over
 the network I also access the server using Remote Desktop/Terminal Services
 to communicate to the virtual machine (using the VirtualBox RDP interface,
 i.e. not the guest OS RDP), VNC (to access OI using vncserver) and SSH (to
 OI).

 I wouldn't say that the entire server stops responding, only the connection
 to CIFS and SSH. I wasn't running VNC when it happened yesterday so I don't
 know about it, but the RDP connection and the Virtual Machine inside this
 server was unaffected while CIFS and SSH was frozen.

 I tried today to start the virtual machine but it failed because it could
 not find the connection (e1000g2):

 Error: failed to start machine. Error message: Failed to open/create the
 internal network 'HostInterfaceNetworking-e1000
 g2 - Intel PRO/1000 Gigabit Ethernet' (VERR_SUPDRV_COMPONENT_NOT_FOUND).
 Failed to attach the network LUN (VERR_SUPDRV_COMPONENT_NOT_FOUND).
 Unknown error creating VM (VERR_SUPDRV_COMPONENT_NOT_FOUND)

 ifconfig -a returns:
 ...
 e1000g1: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu
 1500 index 2
 inet 10.40.137.185 netmask ff00 broadcast 10.40.137.255
 e1000g2: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu
 1500 index 3
 inet 10.40.137.196 netmask ff00 broadcast 10.40.137.255
 rge0: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 1500
 index
 4
 inet 0.0.0.0 netmask ff00
 ...

 i.e. e1000g1 and e1000g2 appear to be running just fine, wtf !?! I found
 the following entries in the /var/adm/messages:

 Jan 23 13:50:49 computername nwamd[95]: [ID 234669 daemon.error] 3:
 nwamd_door_switch: need solaris.network.autoconf.read for request type 1
 Jan
 23 13:56:59 computername last message repeated 75 times Jan 23 13:57:04
 computername nwamd[95]: [ID 234669 daemon.error] 3:
 nwamd_door_switch: need solaris.network.autoconf.read for request type 1
 Jan
 23 13:58:19 computername last message repeated 15 times Jan 23 13:58:22
 computername gnome-session[916]: [ID 702911 daemon.warning] WARNING:
 Unable to determine session: Unable to lookup session information for
 process '916'
 Jan 23 13:58:24 computername nwamd[95]: [ID 234669 daemon.error] 3:
 nwamd_door_switch: need solaris.network.autoconf.read for request type 1
 Jan
 23 14:03:24 computername last message repeated 60 times Jan 23 14:03:26
 computername gnome-session[916]: [ID 702911 daemon.warning] WARNING:
 Unable to determine session: Unable to lookup session information for
 process '916'
 Jan 23 14:03:29 computername nwamd[95]: [ID 234669 daemon.error] 3:
 nwamd_door_switch: need solaris.network.autoconf.read for request type 1
 Jan
 23 14:03:34 computername last message repeated 1 time Jan 23 14:03:39
 computername nwamd[95]: [ID 234669 daemon.error] 3:
 nwamd_door_switch: need solaris.network.autoconf.read for request type 1

 Some errors here... I looked into the log of the nwam service
 (/var/svc/log/network-physical\:nwam.log):

 [ Jan 23 13:03:15 Enabled. ]
 [ Jan 23 13:03:16 Executing start method (/lib/svc/method/net-nwam
 start).
 ]
 /lib/svc/method/net-nwam[548]: /sbin/ibd_upgrade: not found [No such file
 or
 directory] [ Jan 23 13:03:17 Method start exited with status 0. ] [ Jan
 23
 13:03:17 Rereading configuration. ] [ Jan 23 13:03:17 Executing refresh

Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-23 Thread Open Indiana
Ok,

So if I read it correctly, your virtual machine is playing an audio file and
then the server stops responding. That could mean that the hardware
virtualbox uses to play the soundfile is flooded, or that the drivers of the
soundcard in your server/PC are not working very well?
What soundcard are you using?


-Original Message-
From: Robin Axelsson [mailto:gu99r...@student.chalmers.se]
Sent: zondag 22 januari 2012 23:38
To: openindiana-discuss@openindiana.org
Subject: Re: [OpenIndiana-discuss] CIFS performance issues

I don't understand what you mean by PCI-x settings and where to check them
out. The hardware is not PCI-X, it is PCIe. The affected LSI HBA is a
discrete PCIe card that operates in IT-mode. As for system logs, I assume you
mean /var/adm/messages, and I could not find anything there.

If this was only a hard disk controller issue (I made sure that there are
enough lanes for it) then I wouldn't expect applications such as SSH to be
affected by it.

The settings of the Intel NIC are not in the BIOS, at least not that I
can see (i.e. there is no visible BIOS for the discrete NIC like there is for
the LSI SAS controller during POST). So, I'm not entirely sure what settings
for the NIC you are referring to.
Robin.


On 2012-01-22 20:28, Open Indiana wrote:
 A very stupid answer, but have you looked at the bios and inspected
 the settings of the network devices and /or PCIx ? How is your bios
 setup (AHCI or raid or ??) ?

 Do you see any error in the system logs?

 In my opinion your system drowns in the data transfers. Either on the
 NIC-motherboard side or at the motherboard - hard disk controller
side.
 Do your extra NIC's and the LSI share the same PCI-x settings? Do they
 both support all settings?

 B,

 Roelof
 -Original Message-
 From: Robin Axelsson [mailto:gu99r...@student.chalmers.se]
 Sent: zondag 22 januari 2012 19:38
 To: OpenIndiana-discuss@openindiana.org
 Subject: [OpenIndiana-discuss] CIFS performance issues

 In the past, I used OpenSolaris b134 which I then updated to
 OpenIndiana
 b148 and never did I experience performance issues related to the
 network connection (and that was when using two of the infamous
 RTL8111DL OnBoard ports). Now that I have swapped the motherboard and
 the hard drive and later added a 2-port Intel EXPI9402PT NIC (because
 of driver issues with the Realtek NIC that wasn't there before), I
 performed a fresh install of OpenIndiana.

 Since then I experience intermittent network freeze-ups that I cannot
 link to faults of the storage pool (iostat -E returns 0 errors). I
 have had this issue both with the dual port Intel controller as well
 as with a single port Intel controller (EXPI9400PT) and the Realtek
 8111E OnBoard NIC. The storage pool is behind an LSI MegaRAID 1068e
 based controller using no port extenders.

 In detail (9400PT+8111E):
 -
 I was running a Virtual Machine with VirtualBox 3.2.14 with (1) a
 bridged network connection and was accessed over the network using (2)
 VBox RDP connection and (3) a ZFS based CIFS share to be accessed from
 a Windows computer over the network. These applications were
 administrated both over
 (4) SSH (port 2244) and (5) VNC (using vncserver). A typical start of
 the VM was done with 'screen VBoxHeadless --startvm ...'

 I assigned the network ports the following way:

 e1000g: VBox RDP, VNC, SSH
 rge0: Virtual Machine Network Connection (Bridged)

 I tried various combinations but the connection froze intermittently
 for all applications. The bridged network connection was worst. When I
 SSHed over rge0, the connection was frequently severed, which it was not
over e1000.

 So I pulled the plug on the rge0 and let everything go through the
 e1000 connection. Freeze-ups became more frequent and it seemed like
 the Bridged connection was causing this issue because the connection
 didn't freeze like that when the VM wasn't running.

 Note that I didn't assign the CIFS share to any particular port, but
 calls to computername were assigned to the e1000 port in the
/etc/inet/hosts file.
 -

 In detail (9402PT):
 ---
 In this setup I run essentially the same applications but all through
 the 9402PT which has two ports (e1000g1 and e1000g2). So I assign the
 applications the following way:

 e1000g1: VBox RDP, SSH, computername (in /etc/inet/hosts)
 e1000g2: Bridged connection to the virtual machine

 So while running the virtual machine on the server, having an open SSH
 connection to it and a command prompt pointing (cd x:\) at the CIFS
 share (which is mapped as a network drive, say X:) I started a media
 player and played an audio file over the CIFS share which made the
connection freeze.

 The freezing affected the media player and the command prompt but the
 RDP connection worked and access to internet inside the VM was flawless.
 The SSH connection was frozen as well. After a few minutes it became
 responsive and iostat -E reported

Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-23 Thread Robin Axelsson
No, I'm not doing anything in particular in the virtual machine. The 
media file is played on another computer in the (physical) network over 
CIFS. Over the network I also access the server using Remote 
Desktop/Terminal Services to communicate to the virtual machine (using 
the VirtualBox RDP interface, i.e. not the guest OS RDP), VNC (to access 
OI using vncserver) and SSH (to OI).


I wouldn't say that the entire server stops responding, only the 
connection to CIFS and SSH. I wasn't running VNC when it happened 
yesterday so I don't know about it, but the RDP connection and the 
Virtual Machine inside this server was unaffected while CIFS and SSH was 
frozen.


I tried today to start the virtual machine but it failed because it 
could not find the connection (e1000g2):


Error: failed to start machine. Error message: Failed to open/create 
the internal network 'HostInterfaceNetworking-e1000

g2 - Intel PRO/1000 Gigabit Ethernet' (VERR_SUPDRV_COMPONENT_NOT_FOUND).
Failed to attach the network LUN (VERR_SUPDRV_COMPONENT_NOT_FOUND).
Unknown error creating VM (VERR_SUPDRV_COMPONENT_NOT_FOUND)

ifconfig -a returns:
...
e1000g1: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 
1500 index 2

inet 10.40.137.185 netmask ff00 broadcast 10.40.137.255
e1000g2: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 
1500 index 3

inet 10.40.137.196 netmask ff00 broadcast 10.40.137.255
rge0: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 1500 
index 4

inet 0.0.0.0 netmask ff00
...

i.e. e1000g1 and e1000g2 appear to be running just fine, wtf !?! I 
found the following entries in the /var/adm/messages:


Jan 23 13:50:49 computername nwamd[95]: [ID 234669 daemon.error] 3: 
nwamd_door_switch: need solaris.network.autoconf.read for request type 1

Jan 23 13:56:59 computername last message repeated 75 times
Jan 23 13:57:04 computername nwamd[95]: [ID 234669 daemon.error] 3: 
nwamd_door_switch: need solaris.network.autoconf.read for request type 1

Jan 23 13:58:19 computername last message repeated 15 times
Jan 23 13:58:22 computername gnome-session[916]: [ID 702911 
daemon.warning] WARNING: Unable to determine session: Unable to lookup 
session information for process '916'
Jan 23 13:58:24 computername nwamd[95]: [ID 234669 daemon.error] 3: 
nwamd_door_switch: need solaris.network.autoconf.read for request type 1

Jan 23 14:03:24 computername last message repeated 60 times
Jan 23 14:03:26 computername gnome-session[916]: [ID 702911 
daemon.warning] WARNING: Unable to determine session: Unable to lookup 
session information for process '916'
Jan 23 14:03:29 computername nwamd[95]: [ID 234669 daemon.error] 3: 
nwamd_door_switch: need solaris.network.autoconf.read for request type 1

Jan 23 14:03:34 computername last message repeated 1 time
Jan 23 14:03:39 computername nwamd[95]: [ID 234669 daemon.error] 3: 
nwamd_door_switch: need solaris.network.autoconf.read for request type 1


Some errors here... I looked into the log of the nwam service 
(/var/svc/log/network-physical\:nwam.log):


[ Jan 23 13:03:15 Enabled. ]
[ Jan 23 13:03:16 Executing start method (/lib/svc/method/net-nwam 
start). ]
/lib/svc/method/net-nwam[548]: /sbin/ibd_upgrade: not found [No such 
file or directory]

[ Jan 23 13:03:17 Method start exited with status 0. ]
[ Jan 23 13:03:17 Rereading configuration. ]
[ Jan 23 13:03:17 Executing refresh method (/lib/svc/method/net-nwam 
refresh). ]

[ Jan 23 13:03:17 Method refresh exited with status 0. ]

nothing remarkable here... I investigated the issue on VBox forums and 
this issue was resolved by the rem_drv/add_drv vboxflt commands. It's 
not the first time I've had this issue and one of the people at the 
forums claims that this issue occurs after every third 
powercycle/reboot. It was hinted that VBox doesn't like dynamic IP 
addresses so I have also given e1000g2 a fixed address in the router (I 
configured the DHCP server in the router to always give the same IP to 
the MAC address of the e1000g2 connection). I've done it on the e1000g1 
already, otherwise it would be impossible to ssh to the server from the 
outside world.


Robin.


On 2012-01-23 11:40, Open Indiana wrote:

Ok,

So if I read it correctly, your virtual machine is playing an audio file and
then the server stops responding. That could mean that the hardware
virtualbox uses to play the soundfile is flooded, or that the drivers of the
soundcard in your server/PC are not working very well?
What soundcard are you using?


-Original Message-
From: Robin Axelsson [mailto:gu99r...@student.chalmers.se]
Sent: zondag 22 januari 2012 23:38
To: openindiana-discuss@openindiana.org
Subject: Re: [OpenIndiana-discuss] CIFS performance issues

I don't understand what you mean with PCI-x settings and where to check them
out. The hardware is not PCI-X, it is PCIe. The affected LSI HBA is a
discrete PCIe card that operates in IT-mode. As in system logs I assume you
mean /var/adm/messages

Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-23 Thread Open Indiana
What happens if you disable nwam and use the basic/manual ifconfig setup?
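That is, something along these lines (a sketch; interface name and address
taken from earlier in the thread):

  svcadm disable svc:/network/physical:nwam
  echo "10.40.137.185 netmask 255.255.255.0 up" > /etc/hostname.e1000g1
  svcadm enable svc:/network/physical:default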


-Original Message-
From: Robin Axelsson [mailto:gu99r...@student.chalmers.se]
Sent: maandag 23 januari 2012 15:10
To: openindiana-discuss@openindiana.org
Subject: Re: [OpenIndiana-discuss] CIFS performance issues

No, I'm not doing anything in particular in the virtual machine. The media
file is played on another computer in the (physical) network over CIFS. Over
the network I also access the server using Remote Desktop/Terminal Services
to communicate to the virtual machine (using the VirtualBox RDP interface,
i.e. not the guest OS RDP), VNC (to access OI using vncserver) and SSH (to
OI).

I wouldn't say that the entire server stops responding, only the connection
to CIFS and SSH. I wasn't running VNC when it happened yesterday so I don't
know about it, but the RDP connection and the Virtual Machine inside this
server was unaffected while CIFS and SSH was frozen.

I tried today to start the virtual machine but it failed because it could
not find the connection (e1000g2):

Error: failed to start machine. Error message: Failed to open/create the
internal network 'HostInterfaceNetworking-e1000
g2 - Intel PRO/1000 Gigabit Ethernet' (VERR_SUPDRV_COMPONENT_NOT_FOUND).
Failed to attach the network LUN (VERR_SUPDRV_COMPONENT_NOT_FOUND).
Unknown error creating VM (VERR_SUPDRV_COMPONENT_NOT_FOUND)

ifconfig -a returns:
...
e1000g1: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu
1500 index 2
 inet 10.40.137.185 netmask ff00 broadcast 10.40.137.255
e1000g2: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu
1500 index 3
 inet 10.40.137.196 netmask ff00 broadcast 10.40.137.255
rge0: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 1500 index
4
 inet 0.0.0.0 netmask ff00
...

i.e. e1000g1 and e1000g2 appear to be running just fine, wtf !?! I found
the following entries in the /var/adm/messages:

Jan 23 13:50:49 computername nwamd[95]: [ID 234669 daemon.error] 3:
nwamd_door_switch: need solaris.network.autoconf.read for request type 1 Jan
23 13:56:59 computername last message repeated 75 times Jan 23 13:57:04
computername nwamd[95]: [ID 234669 daemon.error] 3:
nwamd_door_switch: need solaris.network.autoconf.read for request type 1 Jan
23 13:58:19 computername last message repeated 15 times Jan 23 13:58:22
computername gnome-session[916]: [ID 702911 daemon.warning] WARNING:
Unable to determine session: Unable to lookup session information for
process '916'
Jan 23 13:58:24 computername nwamd[95]: [ID 234669 daemon.error] 3:
nwamd_door_switch: need solaris.network.autoconf.read for request type 1 Jan
23 14:03:24 computername last message repeated 60 times Jan 23 14:03:26
computername gnome-session[916]: [ID 702911 daemon.warning] WARNING:
Unable to determine session: Unable to lookup session information for
process '916'
Jan 23 14:03:29 computername nwamd[95]: [ID 234669 daemon.error] 3:
nwamd_door_switch: need solaris.network.autoconf.read for request type 1 Jan
23 14:03:34 computername last message repeated 1 time Jan 23 14:03:39
computername nwamd[95]: [ID 234669 daemon.error] 3:
nwamd_door_switch: need solaris.network.autoconf.read for request type 1

Some errors here... I looked into the log of the nwam service
(/var/svc/log/network-physical\:nwam.log):

[ Jan 23 13:03:15 Enabled. ]
[ Jan 23 13:03:16 Executing start method (/lib/svc/method/net-nwam start).
]
/lib/svc/method/net-nwam[548]: /sbin/ibd_upgrade: not found [No such file or
directory] [ Jan 23 13:03:17 Method start exited with status 0. ] [ Jan 23
13:03:17 Rereading configuration. ] [ Jan 23 13:03:17 Executing refresh
method (/lib/svc/method/net-nwam refresh). ] [ Jan 23 13:03:17 Method
refresh exited with status 0. ]

nothing remarkable here... I investigated the issue on VBox forums and this
issue was resolved by the rem_drv/add_drv vboxflt commands. It's not the
first time I've had this issue and one of the people at the forums claims
that this issue occurs after every third powercycle/reboot. It was hinted
that VBox doesn't like dynamic IP addresses so I have also given e1000g2 a
fixed address in the router (I configured the DHCP server in the router to
always give the same IP to the MAC address of the e1000g2 connection). I've
done it on the e1000g1 already, otherwise it would be impossible to ssh to
the server from the outside world.

Robin.


On 2012-01-23 11:40, Open Indiana wrote:
 Ok,

 So if I read it correctly, your virtual machine is playing an audio file
 and then the server stops responding. That could mean that the hardware
 virtualbox uses to play the soundfile is flooded, or that the
 drivers of the soundcard in your server/PC are not working very well?
 What soundcard are you using?


 -Original Message-
 From: Robin Axelsson [mailto:gu99r...@student.chalmers.se]
 Sent: zondag 22 januari 2012 23:38
 To: openindiana-discuss@openindiana.org
 Subject: Re: [OpenIndiana-discuss] CIFS

[OpenIndiana-discuss] CIFS performance issues

2012-01-22 Thread Robin Axelsson
In the past, I used OpenSolaris b134 which I then updated to OpenIndiana 
b148 and never did I experience performance issues related to the 
network connection (and that was when using two of the infamous 
RTL8111DL OnBoard ports). Now that I have swapped the motherboard and 
the hard drive and later added a 2-port Intel EXPI9402PT NIC (because of 
driver issues with the Realtek NIC that wasn't there before), I 
performed a fresh install of OpenIndiana.


Since then I experience intermittent network freeze-ups that I cannot 
link to faults of the storage pool (iostat -E returns 0 errors). I have 
had this issue both with the dual port Intel controller as well as with 
a single port Intel controller (EXPI9400PT) and the Realtek 8111E 
OnBoard NIC. The storage pool is behind an LSI MegaRAID 1068e based 
controller using no port extenders.


In detail (9400PT+8111E):
-
I was running a Virtual Machine with VirtualBox 3.2.14 with (1) a 
bridged network connection and was accessed over the network using (2) 
VBox RDP connection and (3) a ZFS based CIFS share to be accessed from a 
Windows computer over the network. These applications were administrated 
both over (4) SSH (port 2244) and (5) VNC (using vncserver). A typical 
start of the VM was done with 'screen VBoxHeadless --startvm ...'


I assigned the network ports the following way:

e1000g: VBox RDP, VNC, SSH
rge0: Virtual Machine Network Connection (Bridged)

I tried various combinations but the connection froze intermittently for 
all applications. The bridged network connection was worst. When I SSHed 
over rge0, the connection was frequently severed, which it was not over 
e1000.


So I pulled the plug on the rge0 and let everything go through the e1000 
connection. Freeze-ups became more frequent and it seemed like the 
Bridged connection was causing this issue because the connection didn't 
freeze like that when the VM wasn't running.


Note that I didn't assign the CIFS share to any particular port but 
calls to computername were assigned to the e1000 port in the 
/etc/inet/hosts file.

-

In detail (9402PT):
---
In this setup I run essentially the same applications but all through 
the 9402PT which has two ports (e1000g1 and e1000g2). So I assign the 
applications the following way:


e1000g1: VBox RDP, SSH, computername (in /etc/inet/hosts)
e1000g2: Bridged connection to the virtual machine

So while running the virtual machine on the server, having an open SSH 
connection to it and a command prompt pointing (cd x:\) at the CIFS 
share (which is mapped as a network drive, say X:) I started a media 
player and played an audio file over the CIFS share which made the 
connection freeze.


The freezing affected the media player and the command prompt but the 
RDP connection worked and access to internet inside the VM was flawless. 
The SSH connection was frozen as well. After a few minutes it became 
responsive and iostat -E reported no errors. The command prompt and the 
media player were still frozen but 'ls <path to CIFS shared contents>' 
worked fine over the SSH connection. Shortly after that the CIFS 
connection came back and things seem to run ok.


So in conclusion the freeze-ups are still there but less frequent. I 
have tried VirtualBox 4.1.8 but the ethernet connection is worse with 
that version which is why I downgraded to 3.2.14 (which was published  
_after_ 4.1.8).

---

These issues occur on server grade hardware using drivers that are/were 
certified by Sun (as I understand it). Moreover, CIFS and ZFS are the 
core functionality of OpenIndiana so it is quite essential that the 
network works properly and is stable.


I'm sorely tempted to issue a bug report but I would want some advice on 
how to troubleshoot and provide relevant bug reports. There are no 
entries in the /var/adm/messages that are related to the latest  
freeze-up mentioned above and I couldn't find any when running the prior 
setups. These freeze-ups don't happen all the time so it isn't easy to 
consistently reproduce them.


Robin.






Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-22 Thread Open Indiana
A very stupid answer, but have you looked at the bios and inspected the
settings of the network devices and /or PCIx ? How is your bios setup (AHCI
or raid or ??) ?

Do you see any error in the system logs?

In my opinion your system drowns in the data transfers. Either on the
NIC-motherboard side or at the motherboard - hard disk controller side.
Do your extra NIC's and the LSI share the same PCI-x settings? Do they both
support all settings?

B,

Roelof
-Original Message-
From: Robin Axelsson [mailto:gu99r...@student.chalmers.se]
Sent: zondag 22 januari 2012 19:38
To: OpenIndiana-discuss@openindiana.org
Subject: [OpenIndiana-discuss] CIFS performance issues

In the past, I used OpenSolaris b134 which I then updated to OpenIndiana
b148 and never did I experience performance issues related to the network
connection (and that was when using two of the infamous
RTL8111DL OnBoard ports). Now that I have swapped the motherboard and the
hard drive and later added a 2-port Intel EXPI9402PT NIC (because of driver
issues with the Realtek NIC that wasn't there before), I performed a fresh
install of OpenIndiana.

Since then I experience intermittent network freeze-ups that I cannot link
to faults of the storage pool (iostat -E returns 0 errors). I have had this
issue both with the dual port Intel controller as well as with a single port
Intel controller (EXPI9400PT) and the Realtek 8111E OnBoard NIC. The storage
pool is behind an LSI MegaRAID 1068e based controller using no port
extenders.

In detail (9400PT+8111E):
-
I was running a Virtual Machine with VirtualBox 3.2.14 with (1) a bridged
network connection and was accessed over the network using (2) VBox RDP
connection and (3) a ZFS based CIFS share to be accessed from a Windows
computer over the network. These applications were administrated both over
(4) SSH (port 2244) and (5) VNC (using vncserver). A typical start of the VM
was done with 'screen VBoxHeadless --startvm ...'

I assigned the network ports the following way:

e1000g: VBox RDP, VNC, SSH
rge0: Virtual Machine Network Connection (Bridged)

I tried various combinations but the connection froze intermittently for all
applications. The bridged network connection was worst. When I SSHed over
rge0, the connection was frequently severed, which it was not over e1000.

So I pulled the plug on the rge0 and let everything go through the e1000
connection. Freeze-ups became more frequent and it seemed like the Bridged
connection was causing this issue because the connection didn't freeze like
that when the VM wasn't running.

Note that I didn't assign the CIFS share to any particular port but calls to
computername were assigned to the e1000 port in the /etc/inet/hosts file.
-

In detail (9402PT):
---
In this setup I run essentially the same applications but all through the
9402PT which has two ports (e1000g1 and e1000g2). So I assign the
applications the following way:

e1000g1: VBox RDP, SSH, computername (in /etc/inet/hosts)
e1000g2: Bridged connection to the virtual machine

So while running the virtual machine on the server, having an open SSH
connection to it and a command prompt pointing (cd x:\) at the CIFS share
(which is mapped as a network drive, say X:) I started a media player and
played an audio file over the CIFS share which made the connection freeze.

The freezing affected the media player and the command prompt but the RDP
connection worked and access to internet inside the VM was flawless.
The SSH connection was frozen as well. After a few minutes it became
responsive and iostat -E reported no errors. The command prompt and the
media player were still frozen but 'ls <path to CIFS shared contents>'
worked fine over the SSH connection. Shortly after that the CIFS connection
came back and things seem to run ok.

So in conclusion the freeze-ups are still there but less frequent. I have
tried VirtualBox 4.1.8 but the ethernet connection is worse with that
version which is why I downgraded to 3.2.14 (which was published _after_
4.1.8).
---

These issues occur on server grade hardware using drivers that are/were
certified by Sun (as I understand it). Moreover, CIFS and ZFS are the core
functionality of OpenIndiana so it is quite essential that the network works
properly and is stable.

I'm sorely tempted to issue a bug report but I would want some advice on how
to troubleshoot and provide relevant bug reports. There are no entries in
the /var/adm/messages that are related to the latest freeze-up mentioned
above and I couldn't find any when running the prior setups. These
freeze-ups don't happen all the time so it isn't easy to consistently
reproduce them.

Robin.








Re: [OpenIndiana-discuss] CIFS performance issues

2012-01-22 Thread Robin Axelsson
I don't understand what you mean by PCI-x settings and where to check 
them out. The hardware is not PCI-X, it is PCIe. The affected LSI HBA is 
a discrete PCIe card that operates in IT-mode. As for system logs, I 
assume you mean /var/adm/messages, and I could not find anything there.


If this was only a hard disk controller issue (I made sure that there 
are enough lanes for it) then I wouldn't expect applications such as SSH 
to be affected by it.


The settings of the Intel NIC are not in the BIOS, at least not that 
I can see (i.e. there is no visible BIOS for the discrete NIC like there is 
for the LSI SAS controller during POST). So, I'm not entirely sure what 
settings for the NIC you are referring to.

Robin.


On 2012-01-22 20:28, Open Indiana wrote:

A very stupid answer, but have you looked at the bios and inspected the
settings of the network devices and /or PCIx ? How is your bios setup (AHCI
or raid or ??) ?

Do you see any error in the system logs?

In my opinion your system drowns in the data transfers. Either on the
NIC-motherboard side or at the motherboard - hard disk controller side.
Do your extra NIC's and the LSI share the same PCI-x settings? Do they both
support all settings?

B,

Roelof
-Original Message-
From: Robin Axelsson [mailto:gu99r...@student.chalmers.se]
Sent: zondag 22 januari 2012 19:38
To: OpenIndiana-discuss@openindiana.org
Subject: [OpenIndiana-discuss] CIFS performance issues

In the past, I used OpenSolaris b134 which I then updated to OpenIndiana
b148 and never did I experience performance issues related to the network
connection (and that was when using two of the infamous
RTL8111DL OnBoard ports). Now that I have swapped the motherboard and the
hard drive and later added a 2-port Intel EXPI9402PT NIC (because of driver
issues with the Realtek NIC that wasn't there before), I performed a fresh
install of OpenIndiana.

Since then I experience intermittent network freeze-ups that I cannot link
to faults of the storage pool (iostat -E returns 0 errors). I have had this
issue both with the dual port Intel controller as well as with a single port
Intel controller (EXPI9400PT) and the Realtek 8111E OnBoard NIC. The storage
pool is behind an LSI MegaRAID 1068e based controller using no port
extenders.

In detail (9400PT+8111E):
-
I was running a Virtual Machine with VirtualBox 3.2.14 with (1) a bridged
network connection and was accessed over the network using (2) VBox RDP
connection and (3) a ZFS based CIFS share to be accessed from a Windows
computer over the network. These applications were administrated both over
(4) SSH (port 2244) and (5) VNC (using vncserver). A typical start of the VM
was done with 'screen VBoxHeadless --startvm ...'

I assigned the network ports the following way:

e1000g: VBox RDP, VNC, SSH
rge0: Virtual Machine Network Connection (Bridged)

I tried various combinations but the connection froze intermittently for all
applications. The bridged network connection was worst. When I SSHed over
rge0, the connection was frequently severed, which it was not over e1000.

So I pulled the plug on the rge0 and let everything go through the e1000
connection. Freeze-ups became more frequent and it seemed like the Bridged
connection was causing this issue because the connection didn't freeze like
that when the VM wasn't running.

Note that I didn't assign the CIFS share to any particular port, but calls to
computername were assigned to the e1000 port in the /etc/inet/hosts file.
-

In detail (9402PT):
---
In this setup I run essentially the same applications but all through the
9402PT which has two ports (e1000g1 and e1000g2). So I assign the
applications the following way:

e1000g1: VBox RDP, SSH, computername (in /etc/inet/hosts)
e1000g2: Bridged connection to the virtual machine

So while running the virtual machine on the server, having an open SSH
connection to it and a command prompt pointing (cd x:\) at the CIFS share
(which is mapped as a network drive, say X:) I started a media player and
played an audio file over the CIFS share which made the connection freeze.

The freezing affected the media player and the command prompt but the RDP
connection worked and access to internet inside the VM was flawless.
The SSH connection was frozen as well. After a few minutes it became
responsive and iostat -E reported no errors. The command prompt and the
media player were still frozen but 'ls <path to CIFS shared contents>'
worked fine over the SSH connection. Shortly after that the CIFS connection
came back and things seem to run ok.

So in conclusion the freeze-ups are still there but less frequent. I have
tried VirtualBox 4.1.8 but the ethernet connection is worse with that
version which is why I downgraded to 3.2.14 (which was published _after_
4.1.8).
---

These issues occur on server grade hardware using drivers that are/were
certified by Sun (as I understand it). Moreover