On Thursday 3 May 2007, kewlemer wrote:
> Here is the ifconfig information on my host -
> [host]# ifconfig
> eth0 Link encap:Ethernet HWaddr 00:16:41:E6:66:10
> inet addr:192.168.1.5 Bcast:192.168.1.255 Mask:255.255.255.0
> inet6 addr: fe80::216:41ff:fee6:6610/64 Scope
On Wed, May 02, 2007 at 11:00:55AM -0700, Maren Peasley wrote:
> I just completed getting a few UML guests online. I want to separate
> two guests by a switch (ideally) without giving the UML host any
> network connections to those machines.
>
> My setup:
>
> UML host (Ubuntu 6.10)
> UML guest #
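The setup described here (two guests on a private segment, with the host kept off it) is what uml_switch's unix-socket transport provides: the switch daemon owns a unix socket, each guest attaches its eth0 to that socket, and as long as no tap uplink is configured the host has no interface on the segment. A minimal sketch follows; the socket path and disk image names are illustrative assumptions, not from this thread, and the commands are only echoed so the recipe can be read without UML installed:

```shell
# Illustrative names: socket path and guest images are assumptions.
SOCK=/tmp/private_switch.sock

# The switch runs as a hub; deliberately no tap option, so no host uplink.
SWITCH_CMD="uml_switch -hub -unix $SOCK"

# Each guest's eth0 attaches to the switch via the daemon transport.
GUEST1_CMD="linux ubd0=guest1.img umid=guest1 eth0=daemon,,unix,$SOCK"
GUEST2_CMD="linux ubd0=guest2.img umid=guest2 eth0=daemon,,unix,$SOCK"

# Echo rather than exec, so the sketch is inspectable without UML binaries.
printf '%s &\n' "$SWITCH_CMD" "$GUEST1_CMD" "$GUEST2_CMD"
```

The two guests can then talk to each other through the switch, but the host sees only the unix socket, not an IP interface on that segment.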
Hello,
I am experimenting with UML in an HPC cluster. What I do is basically start
up 60 instances all at once, a bunch of instances on each hardware node,
using the resource manager TORQUE. Each instance gets a different umid.
The instances are configured to boot up, execute a job and halt afterwards.
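A per-node launch script for this kind of batch start might look as follows. This is a hedged sketch, not the script from the thread: $PBS_JOBID (TORQUE's job identifier), the COW/backing image names, the per-node instance count, and the memory size are all assumptions, and the command lines are only echoed rather than executed:

```shell
# Assumed: TORQUE exports PBS_JOBID; fall back to a local name otherwise.
JOBID=${PBS_JOBID:-localjob}
NPROC=4   # instances per hardware node (illustrative; 60 cluster-wide in the thread)

for i in $(seq 1 "$NPROC"); do
  # A unique umid per instance keeps the ~/.uml control directories apart.
  UMID="uml-$JOBID-$i"
  # Each instance gets its own COW file over a shared read-only backing image.
  CMD="linux umid=$UMID ubd0=cow-$UMID,base.img mem=128M con=null"
  echo "$CMD &"
done
```

The COW-over-shared-backing layout matters here: 60 instances can share one base image read-only while each writes only to its own small COW file.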