Dear Yao,
as I understand it, you want to bond two physical interfaces of the host hardware
and use the bond inside a container.
eth0--[phys]--eth0--+--bond0
eth1--[phys]--eth1--/
Because nothing else -- neither the host nor another container -- may use either of
the NICs anyway, I would suggest putting the virtual bonding interface on the host
and reaching through the bond into the container via a veth pair. To me that seems
to be a better separation of concerns.
eth0--+--bond0--[veth]--eth0
eth1--/
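
For reference, creating the bond on the host might look roughly like this. This is
only a sketch; the bonding mode, the miimon value and the interface names are
assumptions, so adjust them to your setup:

    # load the bonding driver (creates bond0 by default) and enslave both NICs
    modprobe bonding mode=active-backup miimon=100
    # slaves must be down before they can be enslaved via sysfs
    ip link set eth0 down
    ip link set eth1 down
    echo +eth0 > /sys/class/net/bond0/bonding/slaves
    echo +eth1 > /sys/class/net/bond0/bonding/slaves
    ip link set bond0 up

The veth into the container is then easiest to wire up through a bridge, as
sketched further below.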
Following this approach, you can also share the bond with more than one container by
putting a virtual bridge between the bonding interface and the containers' virtual
Ethernet adapters.
By the way, I don't see a clear reason why your current approach should fail. Could
you please post your configuration here?
Greetings
Guido
>-----Original Message-----
>From: wang yao [mailto:[email protected]]
>Sent: Friday, November 15, 2013 4:33 AM
>To: [email protected]
>Subject: [Lxc-users] Bonding inside LXC container
>
>Hi all,
>I tried to bond two NICs (eth0 and eth1) in the container, but when I finished
>the bonding configuration (I think my configuration is correct)
>and started bonding device inside container, this message came out:
>"Bringing up interface bond0: bonding device bond0 does not seem to be
>present, delaying initialization."
>So I want to know if LXC can't support the way of bonding configuration as I
>did, or I can do something to make this achieved.
>I am glad to talk about "Bonding and LXC" with someone who has interest in it.
>Regards,
>Yao