Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-03 Thread Vladimir Romanov
Well, what's the problem? When I do this, I just install Debian and a Xen
kernel, then create some config, download the Gentoo install CD, run it, and
follow the handbook.

2014-12-04 6:14 GMT+05:00 lee :

> Hi,
>
> I'd like to give Gentoo a try and want to install it in a xen VM.  The
> server is otherwise running Debian.  What would be the best way to do
> this?
>
>
> --
> Again we must be afraid of speaking of daemons for fear that daemons
> might swallow us.  Finally, this fear has become reasonable.
>
>


Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-03 Thread Tomas Mozes

On 2014-12-04 02:14, lee wrote:

Hi,

I'd like to give Gentoo a try and want to install it in a xen VM.  The
server is otherwise running Debian.  What would be the best way to do
this?


You can run a virtual machine using either paravirtualization (PV) or 
full virtualization (HVM).


If you want to use PV, then you create a partition for Gentoo, unpack 
stage3, chroot into it and prepare your system for booting (follow the 
handbook). Then you create a configuration for your xen domU (Gentoo), 
provide a kernel and start it. You don't need the install-cd in this 
situation, nor any bootloader.
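
For illustration, here is a rough sketch of that preparation from the dom0 
side, assuming the guest disk is an LVM volume; the device name and the 
stage3 file name are placeholders, and the authoritative steps are in the 
handbook:

# on the Debian dom0, with a stage3 tarball already downloaded
mkfs.ext4 /dev/vg_data/gentoo_root
mkdir -p /mnt/gentoo
mount /dev/vg_data/gentoo_root /mnt/gentoo
cd /mnt/gentoo
tar xvjpf /root/stage3-amd64-<date>.tar.bz2
cp -L /etc/resolv.conf etc/
mount -t proc proc proc
mount --rbind /dev dev
mount --rbind /sys sys
chroot . /bin/bash
source /etc/profile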


If you prefer HVM, then you create a partition and use the install-cd to 
boot. After your install cd boots up, you partition your disk provided 
by xen dom0 (Debian), unpack stage3, chroot and install the system along 
with the kernel and a bootloader. You can boot your Gentoo with pvgrub, 
which handles booting into grub, and grub then loads the kernel. This 
way, the Gentoo machine is like a black box for your Debian.
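
As an illustration only (not taken from the thread), an HVM domU config 
could look roughly like this; the names, the ISO path and the exact disk 
syntax are assumptions and may need adjusting for your Xen toolstack version:

name = "gentoobox-hvm"
builder = "hvm"
memory = 2048
vcpus = 2
vif = [ 'bridge=xenbr0' ]
disk = [ '/dev/vg_data/gentoo-hvm_root,raw,hda,rw',
         'file:/xen/iso/install-amd64-minimal.iso,hdc:cdrom,r' ]
boot = "dc"    # try the CD first, then the disk
vnc = 1        # watch the install-cd console over VNC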


I would recommend starting with HVM.



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-04 Thread lee
Tomas Mozes  writes:

> On 2014-12-04 02:14, lee wrote:
>> Hi,
>>
>> I'd like to give Gentoo a try and want to install it in a xen VM.  The
>> server is otherwise running Debian.  What would be the best way to do
>> this?
>
> Either you can run a virtual machine using paravirtualization (PV) or
> full virtualization (HVM).
>
> If you want to use PV, then you create a partition for Gentoo, chroot,
> unpack stage3 and prepare your system for booting (follow the
> handbook). Then you create a configuration for your xen domU (Gentoo),
> provide a kernel and start it. You don't need the install-cd in this
> situation, nor any bootloader.

That's like what I thought I should do :)

I'd like to use PV as it has some advantages.  How do I provide a
kernel?  Is it contained in the stage3 archive?

And no bootloader?  How do I make the VM bootable then?

All the guests are PV and use something called pygrub, and I don't
know where it comes from.

This installation process with xen is some sort of mystery to me.  With
Debian, I used a somehow specially prepared kernel which booted the
Debian installer.  From there, the installation was the same as
installing on bare metal.

> If you prefer HVM, then you create a partition and use the install-cd
> to boot. After your install cd boots up, you partition your disk
> provided by xen dom0 (Debian), chroot, unpack stage3 and install the
> system along with the kernel and a bootloader. You can boot your
> Gentoo with pvgrub that will handle the booting to grub and it will
> load the kernel. This way, the Gentoo machine is like a black box for
> your Debian.
>
> I would recommend starting with HVM.

Hm, I haven't used HVM yet.  Can I change over to PV after the
installation is done?  What's the advantage of starting with HVM?

The "disk" is an LVM volume and won't be partitioned.  I've found it
more reasonable to use a separate LVM volume for swap.

I never installed Gentoo.  I could start with my desktop since I want to
replace Fedora anyway.  That's a bit troublesome because I either have
to plug in some disks for it which I'd need to buy first (I might get
two small SSDs), or I'd have to repartition the existing ones.

Hmmm.  I think I'll try a VM with PV first.  If that doesn't work, no
harm is done and I can still ask when I'm stuck.


Oh I almost forgot: Does the VM need internet access during the
installation?  The network setup is awfully complicated in this case.


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-04 Thread lee
Vladimir Romanov  writes:

> Well, what's the problem? When i do this, then i just install debian, xen
> kernel, then create some config, download gentoo install cd, run it, and
> follow the handbook.

How do you run the installer CD in a PV VM?  It's not like you could
just boot it, or can you?


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-04 Thread Tomas Mozes

On 2014-12-04 11:08, lee wrote:

Tomas Mozes  writes:


On 2014-12-04 02:14, lee wrote:

Hi,

I'd like to give Gentoo a try and want to install it in a xen VM.  The
server is otherwise running Debian.  What would be the best way to do
this?


Either you can run a virtual machine using paravirtualization (PV) or
full virtualization (HVM).

If you want to use PV, then you create a partition for Gentoo, chroot,
unpack stage3 and prepare your system for booting (follow the
handbook). Then you create a configuration for your xen domU (Gentoo),
provide a kernel and start it. You don't need the install-cd in this
situation, nor any bootloader.


That's like what I thought I should do :)

I'd like to use PV as it has some advantages.  How do I provide a
kernel?  Is it contained in the stage3 archive?

And no bootloader?  How do I make the VM bootable then?

All the guests are PV and use something called phygrub of which I don't
know where it comes from.

This installation process with xen is some sort of mystery to me.  With
Debian, I used a somehow specially prepared kernel which booted the
Debian installer.  From there, the installation was the same as
installing on bare metal.


The kernel is not in stage3, you have to compile it yourself (or 
download it from somewhere). When you have the kernel image binary, the xen 
configuration for the host can be as simple as:

name = "gentoobox"
kernel = "/xen/_kernel/kernel-3.14.23-gentoo-xen"
extra = "root=/dev/xvda1 net.ifnames=0"
memory = 2500
vcpus = 4
vif = [ '' ]
disk = [ '/dev/vg_data/gentoo-t1_root,raw,xvda1,rw' ]

You can read about PV:
http://wiki.xenproject.org/wiki/Paravirtualization_%28PV%29
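
With a config like that saved on the dom0 (the path below is only an 
example), starting the guest and attaching to its console is typically:

xl create /etc/xen/gentoobox.cfg    # 'xm create' on older toolstacks
xl console gentoobox                # detach again with Ctrl-]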




If you prefer HVM, then you create a partition and use the install-cd
to boot. After your install cd boots up, you partition your disk
provided by xen dom0 (Debian), chroot, unpack stage3 and install the
system along with the kernel and a bootloader. You can boot your
Gentoo with pvgrub that will handle the booting to grub and it will
load the kernel. This way, the Gentoo machine is like a black box for
your Debian.

I would recommend starting with HVM.


Hm, I haven't used HVM yet.  Can I change over to PV after the
installation is done?  What's the advantage of starting with HVM?

The "disk" is an LVM volume and won't be partitioned.  I've found it
more reasonable to use a separate LVM volume for swap.

I never installed Gentoo.  I could start with my desktop since I want to
replace Fedora anyway.  That's a bit troublesome because I either have
to plug in some disks for it which I'd need to buy first (I might get
two small SSDs), or I'd have to repartition the existing ones.

Hmmm.  I think I'll try a VM with PV first.  If that doesn't work, no
harm is done and I can still ask when I'm stuck.


Oh I almost forgot: Does the VM need internet access during the
installation?  The network setup is awfully complicated in this case.


Well, you can copy the files to another place, but I have never done 
this transformation. HVM is like a black box; you start it like booting a 
normal machine. For production, I always use PV, but for starters, HVM 
is also fine.


Yes, you will need internet access because we compile everything as it 
goes, so you need to download the source files. Or, maybe you can 
download a livedvd, but I've never tried that.


Why is the networking complicated? Do you use bridging?



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-04 Thread Rich Freeman
On Thu, Dec 4, 2014 at 5:09 AM, lee  wrote:
> Vladimir Romanov  writes:
>
>> Well, what's the problem? When i do this, then i just install debian, xen
>> kernel, then create some config, download gentoo install cd, run it, and
>> follow the handbook.
>
> How do you run the installer CD in a PV VM?  It's not like you could
> just boot it, or can you?
>

Correct.  I don't believe our install CDs are xen-enabled.  You would
need to boot from another environment, or install the files from
outside of the xen environment.  When I'm installing Gentoo on ec2 now
I typically just boot any convenient EMI, mount an EBS image, extract
a stage3, then install/configure a xen-enabled kernel.

I haven't done an EC2 install in a while but I do have a .config that
used to work lying around.  I believe most of the instructions are on
my blog as well - probably one of the more visited pages as it is
probably helpful for getting any distro working on EC2/Xen.

--
Rich



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-04 Thread lee
Tomas Mozes  writes:

> On 2014-12-04 11:08, lee wrote:
>> Tomas Mozes  writes:
>>
>>> On 2014-12-04 02:14, lee wrote:
>>>> Hi,
>>>>
>>>> I'd like to give Gentoo a try and want to install it in a xen VM.  The
>>>> server is otherwise running Debian.  What would be the best way to do
>>>> this?
>>>
>>> Either you can run a virtual machine using paravirtualization (PV) or
>>> full virtualization (HVM).
>>>
>>> If you want to use PV, then you create a partition for Gentoo, chroot,
>>> unpack stage3 and prepare your system for booting (follow the
>>> handbook). Then you create a configuration for your xen domU (Gentoo),
>>> provide a kernel and start it. You don't need the install-cd in this
>>> situation, nor any bootloader.
>>
>> That's like what I thought I should do :)
>>
>> I'd like to use PV as it has some advantages.  How do I provide a
>> kernel?  Is it contained in the stage3 archive?
>>
>> And no bootloader?  How do I make the VM bootable then?
>>
>> All the guests are PV and use something called phygrub of which I don't
>> know where it comes from.
>>
>> This installation process with xen is some sort of mystery to me.  With
>> Debian, I used a somehow specially prepared kernel which booted the
>> Debian installer.  From there, the installation was the same as
>> installing on bare metal.
>
> The kernel is not in stage3, you have to compile it yourself (or
> download from somewhere). When you have the kernel image binary, the
> xen configuration for the host can be simple as:

Compile it with what?  Are the sources in stage3, or downloaded so that
I can compile a suitable Gentoo kernel within the chroot?

> name = "gentoobox"
> kernel = "/xen/_kernel/kernel-3.14.23-gentoo-xen"
> extra = "root=/dev/xvda1 net.ifnames=0"
> memory = 2500
> vcpus = 4
> vif = [ '' ]
> disk = [ '/dev/vg_data/gentoo-t1_root,raw,xvda1,rw' ]

'raw'?  I'll have to look up what that does.

> You can read about PV:
> http://wiki.xenproject.org/wiki/Paravirtualization_%28PV%29
>
>>
>>> If you prefer HVM, then you create a partition and use the install-cd
>>> to boot. After your install cd boots up, you partition your disk
>>> provided by xen dom0 (Debian), chroot, unpack stage3 and install the
>>> system along with the kernel and a bootloader. You can boot your
>>> Gentoo with pvgrub that will handle the booting to grub and it will
>>> load the kernel. This way, the Gentoo machine is like a black box for
>>> your Debian.
>>>
>>> I would recommend starting with HVM.
>>
>> Hm, I haven't used HVM yet.  Can I change over to PV after the
>> installation is done?  What's the advantage of starting with HVM?
>>
>> The "disk" is an LVM volume and won't be partitioned.  I've found it
>> more reasonable to use a separate LVM volume for swap.
>>
>> I never installed Gentoo.  I could start with my desktop since I
>> want to
>> replace Fedora anyway.  That's a bit troublesome because I either have
>> to plug in some disks for it which I'd need to buy first (I might get
>> two small SSDs), or I'd have to repartition the existing ones.
>>
>> Hmmm.  I think I'll try a VM with PV first.  If that doesn't work, no
>> harm is done and I can still ask when I'm stuck.
>>
>>
>> Oh I almost forgot: Does the VM need internet access during the
>> installation?  The network setup is awfully complicated in this case.
>
> Well, you can copy the files to another place, but I have never done
> this transformation. HVM is like a black box, you start like booting a
> normal machine. For production, I always use PV, but for starters, HVM
> is also fine.

So you cannot easily change from HVM to PV?

> Yes, you will need internet access because we compile everything as it
> goes, so you need to download the source files. Or, maybe you can
> download a livedvd, but I've never tried that.
>
> Why is the networking complicated? Do you use bridging?

Yes --- and it was terrible to begin with and still is very complicated.
One of the VMs has a network card passed through to do pppoe for the
internet connection, and it also does routing and firewalling.  The
Gentoo VM is supposed to have another network card passed through
because I want a separate network for miscellaneous devices like IP
phones and printers.  Asterisk is going to run on the Gentoo VM.

Besides devices, there's the usual net, dmz and loc zones.  To top it
off, sooner or later I want to pass another network card to the
firewall/router because I have an internet connection which is currently
not in use and should be employed as an automatic fallback.


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-04 Thread Rich Freeman
On Thu, Dec 4, 2014 at 1:11 PM, lee  wrote:
> Tomas Mozes  writes:
>>
>> The kernel is not in stage3, you have to compile it yourself (or
>> download from somewhere). When you have the kernel image binary, the
>> xen configuration for the host can be simple as:
>
> Compile it with what?  Are the sources in stage3, or downloaded so that
> I can compile a suitable Gentoo kernel within the chroot?

If you've never installed Gentoo anywhere I wouldn't suggest doing it
for the first time under Xen.

Gentoo stage3s include neither a binary kernel nor the sources.  See:
https://www.gentoo.org/doc/en/handbook/handbook-amd64.xml?part=1&chap=7

--
Rich



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-04 Thread lee
Rich Freeman  writes:

> On Thu, Dec 4, 2014 at 5:09 AM, lee  wrote:
>> Vladimir Romanov  writes:
>>
>>> Well, what's the problem? When i do this, then i just install debian, xen
>>> kernel, then create some config, download gentoo install cd, run it, and
>>> follow the handbook.
>>
>> How do you run the installer CD in a PV VM?  It's not like you could
>> just boot it, or can you?
>>
>
> Correct.  I don't believe our install CDs are xen-enabled.  You would
> need to boot from another environment, or install the files from
> outside of the xen environment.  When I'm installing Gentoo on ec2 now
> I typically just boot any convenient EMI, mount an EBS image, extract
> a stage3, then install/configure a xen-enabled kernel.

ec2? emi? ebs?

> I haven't done an EC2 install in a while but I do have a .config that
> used to work lying around.  I believe most of the instructions are on
> my blog as well - probably one of the more visited pages as it is
> probably helpful for getting any distro working on EC2/Xen.
>
> --
> Rich
>
>
>

-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-04 Thread lee
Rich Freeman  writes:

> On Thu, Dec 4, 2014 at 1:11 PM, lee  wrote:
>> Tomas Mozes  writes:
>>>
>>> The kernel is not in stage3, you have to compile it yourself (or
>>> download from somewhere). When you have the kernel image binary, the
>>> xen configuration for the host can be simple as:
>>
>> Compile it with what?  Are the sources in stage3, or downloaded so that
>> I can compile a suitable Gentoo kernel within the chroot?
>
> If you've never installed Gentoo anywhere I wouldn't suggest doing it
> for the first time under Xen.
>
> Gentoo stage3s include neither a binary kernel nor the sources.  See:
> https://www.gentoo.org/doc/en/handbook/handbook-amd64.xml?part=1&chap=7

That's confusing ...  I would think that I can create the file system on
the LV and extract the stage3 archive, then chroot into it.  From there,
I'd have to 'emerge gentoo-sources' and compile a kernel.

Isn't that easier than, or the same as, booting on bare metal into some live
system and doing these things from there?


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-04 Thread Rich Freeman
On Thu, Dec 4, 2014 at 3:39 PM, lee  wrote:
> Rich Freeman  writes:
>
>> On Thu, Dec 4, 2014 at 1:11 PM, lee  wrote:
>>> Tomas Mozes  writes:

>>>> The kernel is not in stage3, you have to compile it yourself (or
>>>> download from somewhere). When you have the kernel image binary, the
>>>> xen configuration for the host can be simple as:
>>>
>>> Compile it with what?  Are the sources in stage3, or downloaded so that
>>> I can compile a suitable Gentoo kernel within the chroot?
>>
>> If you've never installed Gentoo anywhere I wouldn't suggest doing it
>> for the first time under Xen.
>>
>> Gentoo stage3s include neither a binary kernel nor the sources.  See:
>> https://www.gentoo.org/doc/en/handbook/handbook-amd64.xml?part=1&chap=7
>
> That's confusing ...  I would think that I can create the file system on
> the LV and extract the stage3 archive, then chroot into it.  From there,
> I'd have to 'emerge gentoo-sources' and to compile a kernel.
>
> Isn't that easier or the same as booting on bare metal into some life
> system and doing these things from there?
>

When you boot a CD on bare metal all you're doing is creating the file
system, extracting the archive, and chrooting into it.  So the outcome
is the same either way.

If your xen guest is going to run on a regular LV you certainly can
just mount it on the host and chroot into it.  That is exactly how I'd
go about it.

Once you're in the chroot then you should install the kernel/etc per
the handbook.  Of course, you have to make sure that the config for
the kernel supports running as a guest under xen.
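
As a rough sketch (option names as of kernels from that era; double-check 
them against your kernel version), a PV domU kernel config needs at least 
the Xen frontends and console, plus the root filesystem built in:

CONFIG_HYPERVISOR_GUEST=y
CONFIG_PARAVIRT=y
CONFIG_XEN=y
CONFIG_XEN_BLKDEV_FRONTEND=y    # xvd* block devices
CONFIG_XEN_NETDEV_FRONTEND=y    # xen virtual NICs
CONFIG_HVC_XEN=y                # hvc0 console (boot with console=hvc0)
CONFIG_EXT4_FS=y                # or whatever filesystem the guest root uses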

--
Rich



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-04 Thread Rich Freeman
On Thu, Dec 4, 2014 at 3:30 PM, lee  wrote:
> Rich Freeman  writes:
>
>> On Thu, Dec 4, 2014 at 5:09 AM, lee  wrote:
>>> Vladimir Romanov  writes:
>>>
>>>> Well, what's the problem? When i do this, then i just install debian, xen
>>>> kernel, then create some config, download gentoo install cd, run it, and
>>>> follow the handbook.
>>>
>>> How do you run the installer CD in a PV VM?  It's not like you could
>>> just boot it, or can you?
>>>
>>
>> Correct.  I don't believe our install CDs are xen-enabled.  You would
>> need to boot from another environment, or install the files from
>> outside of the xen environment.  When I'm installing Gentoo on ec2 now
>> I typically just boot any convenient EMI, mount an EBS image, extract
>> a stage3, then install/configure a xen-enabled kernel.
>
> ec2? emi? ebs?
>

EC2 is the brand name for Amazon's hosted cloud servers.  I meant AMI
and not EMI which is an amazon machine image, and EBS is elastic block
storage, which is basically Amazon's equivalent of a logical volume.

I mentioned it only because Amazon EC2 is based on Xen, so the process
for getting EC2 working with a custom kernel is almost the same as
what you're trying to do, other than the amazon-specific stuff like
using their APIs to actually create/mount/snapshot volumes and all
that.

--
Rich



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-04 Thread lee
Rich Freeman  writes:

> On Thu, Dec 4, 2014 at 3:39 PM, lee  wrote:
>> Rich Freeman  writes:
>>
>>> On Thu, Dec 4, 2014 at 1:11 PM, lee  wrote:
>>>> Tomas Mozes  writes:
>>>>>
>>>>> The kernel is not in stage3, you have to compile it yourself (or
>>>>> download from somewhere). When you have the kernel image binary, the
>>>>> xen configuration for the host can be simple as:
>>>>
>>>> Compile it with what?  Are the sources in stage3, or downloaded so that
>>>> I can compile a suitable Gentoo kernel within the chroot?
>>>
>>> If you've never installed Gentoo anywhere I wouldn't suggest doing it
>>> for the first time under Xen.
>>>
>>> Gentoo stage3s include neither a binary kernel nor the sources.  See:
>>> https://www.gentoo.org/doc/en/handbook/handbook-amd64.xml?part=1&chap=7
>>
>> That's confusing ...  I would think that I can create the file system on
>> the LV and extract the stage3 archive, then chroot into it.  From there,
>> I'd have to 'emerge gentoo-sources' and to compile a kernel.
>>
>> Isn't that easier or the same as booting on bare metal into some life
>> system and doing these things from there?
>>
>
> When you boot a CD on bare metal all you're doing is creating the file
> system, extracting the archive, and chrooting into it.  So the outcome
> is the same either way.
>
> If your xen guest is going to run on a regular LV you certainly can
> just mount it on the host and chroot into it.  That is exactly how I'd
> go about it.

Yes, I've already created an LV for it (along with others I'm going to
need).  Then I got stuck because I wanted to create an xfs file system
and found that I hadn't installed a package required for that and
couldn't install it because there was some problem with downloading
package lists which they only fixed some time later ...

BTW, can I use xfs for the VM, or will it be difficult to get the VM
booted from xfs?

> Once you're in the chroot then you should install the kernel/etc per
> the handbook.

So there isn't really an advantage to using HVM ... it's even easier
because I can access the LV from dom0.

> Of course, you have to make sure that the config for
> the kernel supports running as a guest under xen.

It's been a few years since the last time I compiled a kernel.  I'm
looking forward to it :)


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-07 Thread J. Roeleveld
On Thursday, December 04, 2014 07:11:12 PM lee wrote:
> Tomas Mozes  writes:
> > On 2014-12-04 11:08, lee wrote:
> >> Tomas Mozes  writes:
> >>> On 2014-12-04 02:14, lee wrote:



> > name = "gentoobox"
> > kernel = "/xen/_kernel/kernel-3.14.23-gentoo-xen"
> > extra = "root=/dev/xvda1 net.ifnames=0"
> > memory = 2500
> > vcpus = 4
> > vif = [ '' ]
> > disk = [ '/dev/vg_data/gentoo-t1_root,raw,xvda1,rw' ]
> 
> 'raw'?  I'll have to look up what that does.
> 
> > You can read about PV:
> > http://wiki.xenproject.org/wiki/Paravirtualization_%28PV%29
> > 
> >>> If you prefer HVM, then you create a partition and use the install-cd
> >>> to boot. After your install cd boots up, you partition your disk
> >>> provided by xen dom0 (Debian), chroot, unpack stage3 and install the
> >>> system along with the kernel and a bootloader. You can boot your
> >>> Gentoo with pvgrub that will handle the booting to grub and it will
> >>> load the kernel. This way, the Gentoo machine is like a black box for
> >>> your Debian.
> >>> 
> >>> I would recommend starting with HVM.
> >> 
> >> Hm, I haven't used HVM yet.  Can I change over to PV after the
> >> installation is done?  What's the advantage of starting with HVM?
> >> 
> >> The "disk" is an LVM volume and won't be partitioned.  I've found it
> >> more reasonable to use a separate LVM volume for swap.
> >> 
> >> I never installed Gentoo.  I could start with my desktop since I
> >> want to
> >> replace Fedora anyway.  That's a bit troublesome because I either have
> >> to plug in some disks for it which I'd need to buy first (I might get
> >> two small SSDs), or I'd have to repartition the existing ones.
> >> 
> >> Hmmm.  I think I'll try a VM with PV first.  If that doesn't work, no
> >> harm is done and I can still ask when I'm stuck.
> >> 
> >> 
> >> Oh I almost forgot: Does the VM need internet access during the
> >> installation?  The network setup is awfully complicated in this case.
> > 
> > Well, you can copy the files to another place, but I have never done
> > this transformation. HVM is like a black box, you start like booting a
> > normal machine. For production, I always use PV, but for starters, HVM
> > is also fine.
> 
> So you cannot easily change from HVM to PV?

You can, but then you need to ensure the install has both the PV and the 
HVM drivers installed and can switch between them easily.

> > Yes, you will need internet access because we compile everything as it
> > goes, so you need to download the source files. Or, maybe you can
> > download a livedvd, but I've never tried that.
> > 
> > Why is the networking complicated? Do you use bridging?
> 
> Yes --- and it was terrible to begin with and still is very complicated.
> One of the VMs has a network card passed through to do pppoe for the
> internet connection, and it also does routing and firewalling.  The
> Gentoo VM is supposed to have another network card passed through
> because I want a separate network for miscellaneous devices like IP
> phones and printers.  Asterisk is going to run on the Gentoo VM.

This sounds convoluted. Why add to the complexity by putting multiple network 
cards into the machine and passing the physical cards through?

> Besides devices, there's the usual net, dmz and loc zones.  To top it
> off, sooner or later I want to pass another network card to the
> firewall/router because I have an internet connection which is currently
> not in use and should be employed as an automatic fallback.

How many cards are you planning on having in the machine?
Are all these connected to the same switch?

--
Joost



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-07 Thread J. Roeleveld
On Thursday, December 04, 2014 11:55:50 PM lee wrote:
> Rich Freeman  writes:
> > On Thu, Dec 4, 2014 at 3:39 PM, lee  wrote:
> >> Rich Freeman  writes:
> >>> On Thu, Dec 4, 2014 at 1:11 PM, lee  wrote:
>  Tomas Mozes  writes:
> > The kernel is not in stage3, you have to compile it yourself (or
> > download from somewhere). When you have the kernel image binary, the
>  
> > xen configuration for the host can be simple as:
>  Compile it with what?  Are the sources in stage3, or downloaded so that
>  I can compile a suitable Gentoo kernel within the chroot?
> >>> 
> >>> If you've never installed Gentoo anywhere I wouldn't suggest doing it
> >>> for the first time under Xen.
> >>> 
> >>> Gentoo stage3s include neither a binary kernel nor the sources.  See:
> >>> https://www.gentoo.org/doc/en/handbook/handbook-amd64.xml?part=1&chap=7
> >> 
> >> That's confusing ...  I would think that I can create the file system on
> >> the LV and extract the stage3 archive, then chroot into it.  From there,
> >> I'd have to 'emerge gentoo-sources' and to compile a kernel.
> >> 
> >> Isn't that easier or the same as booting on bare metal into some life
> >> system and doing these things from there?
> > 
> > When you boot a CD on bare metal all you're doing is creating the file
> > system, extracting the archive, and chrooting into it.  So the outcome
> > is the same either way.
> > 
> > If your xen guest is going to run on a regular LV you certainly can
> > just mount it on the host and chroot into it.  That is exactly how I'd
> > go about it.
> 
> Yes, I've already created a LV for it (along with others I'm going to
> need).  Then I got stuck because I wanted to create an xfs file system
> and found that I hadn't installed a package required for that and
> couldn't install it because there was some problem with downloading
> package lists which they only fixed some time later ...
> 
> BTW, can I use xfs for the VM, or will it be difficult to get the VM
> booted from xfs?

Using PV, not at all, as long as the kernel for the VM has XFS support built 
in. (This is valid for other filesystems as well.)
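
In config terms that is just (a sketch, assuming an XFS root on xvda1):

CONFIG_XFS_FS=y                     # built in, not a module
# and in the domU config:  extra = "root=/dev/xvda1"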

> > Once you're in the chroot then you should install the kernel/etc per
> > the handbook.
> 
> So there isn't really an advantage to use HVM ... it's even easier
> because I can access the LV from dom0.

Not really. For the PV, there isn't even a necessity to have a kernel in the 
VM at all as it is simpler to have the kernel on the filesystem belonging to 
the host and point the config to that.

--
Joost



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-07 Thread lee
"J. Roeleveld"  writes:

> On Thursday, December 04, 2014 07:11:12 PM lee wrote:
>> > Why is the networking complicated? Do you use bridging?
>> 
>> Yes --- and it was terrible to begin with and still is very complicated.
>> One of the VMs has a network card passed through to do pppoe for the
>> internet connection, and it also does routing and firewalling.  The
>> Gentoo VM is supposed to have another network card passed through
>> because I want a separate network for miscellaneous devices like IP
>> phones and printers.  Asterisk is going to run on the Gentoo VM.
>
> This sounds convoluted. Why add to the complexity by adding multiple network 
> cards into the machine and pass the physical cards?

How else do you do pppoe and keep the different networks physically
separated?

>> Besides devices, there's the usual net, dmz and loc zones.  To top it
>> off, sooner or later I want to pass another network card to the
>> firewall/router because I have an internet connection which is currently
>> not in use and should be employed as an automatic fallback.
>
> How many cards are you planning on having in the machine?
> Are all these connected to the same switch?

It currently has four network ports.  Only one of them is connected to
the switch.  Another one is connected to the pppoe line, and the other
two (on a dual card) aren't connected yet.

I plan to use one for the devices network and the other one for the
second internet connection.  None of them needs to/should be connected
to the switch.  The VM running asterisk will need a second interface
that connects to a bridge so it can reach the router/firewall.  The
interface for the second internet connection needs to be passed to the
router/firewall.

Can you think of an easier setup?


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-08 Thread J. Roeleveld
On Sunday, December 07, 2014 11:43:38 PM lee wrote:
> "J. Roeleveld"  writes:
> > On Thursday, December 04, 2014 07:11:12 PM lee wrote:
> >> > Why is the networking complicated? Do you use bridging?
> >> 
> >> Yes --- and it was terrible to begin with and still is very complicated.
> >> One of the VMs has a network card passed through to do pppoe for the
> >> internet connection, and it also does routing and firewalling.  The
> >> Gentoo VM is supposed to have another network card passed through
> >> because I want a separate network for miscellaneous devices like IP
> >> phones and printers.  Asterisk is going to run on the Gentoo VM.
> > 
> > This sounds convoluted. Why add to the complexity by adding multiple
> > network cards into the machine and pass the physical cards?
> 
> How else do you do pppoe and keep the different networks physically
> seperated?

Networks that need to be physically separated require either of:
1) separate NICs
2) VLANs

My comment about the complexity, however, was related to passing physical 
cards to the VMs instead of adding the cards to separate bridges inside the 
host and using virtual NICs.

> >> Besides devices, there's the usual net, dmz and loc zones.  To top it
> >> off, sooner or later I want to pass another network card to the
> >> firewall/router because I have an internet connection which is currently
> >> not in use and should be employed as an automatic fallback.
> > 
> > How many cards are you planning on having in the machine?
> > Are all these connected to the same switch?
> 
> It has currently four network ports.  Only one of them is connected to
> the switch.  Another one is connected to the pppoe line, and the other
> two (on a dual card) aren't connected yet.
> 
> I plan to use one for the devices network and the other one for the
> second internet connection.  None of them needs to/should be connected
> to the switch.  The VM running asterisk will need a second interface
> that connects to a bridge so it can reach the router/firewall.  The
> interface for the second internet connection needs to be passed to the
> router/firewall.
> 
> Can you think of an easier setup?

create 1 bridge per physical network port
add the physical ports to the respective bridges

pass virtual NICs to the VMs which are part of the bridges.
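
For example (a sketch only; interface names and addresses are made up, and 
on Debian this needs the bridge-utils package), the dom0 side could look 
like this, with the guest then getting virtual NICs on those bridges:

# /etc/network/interfaces on the Debian dom0
auto br0
iface br0 inet static
    bridge_ports eth0            # physical port for the LAN
    address 192.168.1.10
    netmask 255.255.255.0

auto br1
iface br1 inet manual
    bridge_ports eth1            # physical port for the devices network

# and in the domU config:
# vif = [ 'bridge=br0', 'bridge=br1' ]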

But it's your server, you decide on the complexity.

I stopped passing physical NICs when I was encountering issues with newer 
cards.
They are now resolved, but passing virtual interfaces is simpler and more 
reliable.

--
Joost




Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-08 Thread thegeezer
On 08/12/14 11:26, J. Roeleveld wrote:
> On Sunday, December 07, 2014 11:43:38 PM lee wrote:
>> "J. Roeleveld"  writes:
>>> On Thursday, December 04, 2014 07:11:12 PM lee wrote:
> Why is the networking complicated? Do you use bridging?
 Yes --- and it was terrible to begin with and still is very complicated.
 One of the VMs has a network card passed through to do pppoe for the
 internet connection, and it also does routing and firewalling.  The
 Gentoo VM is supposed to have another network card passed through
 because I want a separate network for miscellaneous devices like IP
 phones and printers.  Asterisk is going to run on the Gentoo VM.
>>> This sounds convoluted. Why add to the complexity by adding multiple
>>> network cards into the machine and pass the physical cards?
>> How else do you do pppoe and keep the different networks physically
>> seperated?
> Networks that need to be physically seperated, require either of:
> 1) seperate NICs
> 2) VLANs
>
> My comment about the complexity, however, was related to passing physical 
> cards to the VMs instead of adding the cards to seperate bridges inside the 
> host and using virtual NICs.
>
 Besides devices, there's the usual net, dmz and loc zones.  To top it
 off, sooner or later I want to pass another network card to the
 firewall/router because I have an internet connection which is currently
 not in use and should be employed as an automatic fallback.
>>> How many cards are you planning on having in the machine?
>>> Are all these connected to the same switch?
>> It has currently four network ports.  Only one of them is connected to
>> the switch.  Another one is connected to the pppoe line, and the other
>> two (on a dual card) aren't connected yet.
>>
>> I plan to use one for the devices network and the other one for the
>> second internet connection.  None of them needs to/should be connected
>> to the switch.  The VM running asterisk will need a second interface
>> that connects to a bridge so it can reach the router/firewall.  The
>> interface for the second internet connection needs to be passed to the
>> router/firewall.
>>
>> Can you think of an easier setup?
> create 1 bridge per physical network port
> add the physical ports to the respective bridges
>
> pass virtual NICs to the VMs which are part of the bridges.
>
> But it's your server, you decide on the complexity.
>
> I stopped passing physical NICs when I was encountering issues with newer 
> cards.
> They are now resolved, but passing virtual interfaces is simpler and more 
> reliable.

+1 for this.
I'm sure that one of the reasons software-defined networking is
suddenly the next big buzzword is because a) the commodity hardware is
now good enough to be comparable to custom ASIC switches and b) the
amazing flexibility you have.  Ignoring the security issues of VLANs,
for pure partitioning of the network it's very hard to beat Linux plus a
VLAN-capable switch, as you can have a virtual host with just a single
network card which itself has ten VLANs connected.  With a VLAN-capable
switch you can have those VLANs not just be lan/dmz/wan but can section
off departments too.  You can then very easily stand up a new server for
just that department, without having to be too concerned about downing
the whole server to fit a new NIC into it.

>
> --
> Joost
>
> --
> Joost
>




Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-08 Thread lee
"J. Roeleveld"  writes:

> create 1 bridge per physical network port
> add the physical ports to the respective bridges

That tends to make the ports disappear, i.e. become unusable, because
the bridge swallows them.

> pass virtual NICs to the VMs which are part of the bridges.

Doesn't that create more CPU load than passing the port?  And at some
point, you may saturate the bandwidth of the port.

> But it's your server, you decide on the complexity.
>
> I stopped passing physical NICs when I was encountering issues with newer 
> cards.
> They are now resolved, but passing virtual interfaces is simpler and more 
> reliable.

The only issue I have with passing the port is that the kernel module
must not be loaded from the initrd image.  So I don't see how fighting
with the bridges would make things easier.


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-09 Thread J. Roeleveld
On Monday, December 08, 2014 11:17:26 PM lee wrote:
> "J. Roeleveld"  writes:
> > create 1 bridge per physical network port
> > add the physical ports to the respective bridges
> 
> That tends to make the ports disappear, i. e. become unusable, because
> the bridge swallows them.

What do you mean by "unusable"?

> > pass virtual NICs to the VMs which are part of the bridges.
> 
> Doesn't that create more CPU load than passing the port?

Do you have an IOMMU on the host?
I don't notice any significant increase in CPU-usage caused by the network 
layer.

> And at some
> point, you may saturate the bandwidth of the port.

And how is this different from assigning the network interface directly?
My switch supports bonding, which means I have a total of 4Gbit/s between the 
server and switch for all networks. (using VLANs)

> > But it's your server, you decide on the complexity.
> > 
> > I stopped passing physical NICs when I was encountering issues with newer
> > cards.
> > They are now resolved, but passing virtual interfaces is simpler and more
> > reliable.
> 
> The only issue I have with passing the port is that the kernel module
> must not be loaded from the initrd image.  So I don't see how fighting
> with the bridges would make things easier.

Unless you are forced to use some really weird configuration utility for the 
network, configuring a bridge and assigning the bridge in the xen-domain config 
file is simpler than assigning physical network interfaces.

--
Joost



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-09 Thread thegeezer
On 08/12/14 22:17, lee wrote:
> "J. Roeleveld"  writes:
>
>> create 1 bridge per physical network port
>> add the physical ports to the respective bridges
> That tends to make the ports disappear, i. e. become unusable, because
> the bridge swallows them.
and if you pass the device then it becomes unusable to the host

>
>> pass virtual NICs to the VMs which are part of the bridges.
> Doesn't that create more CPU load than passing the port?  And at some
> point, you may saturate the bandwidth of the port.

Some forward planning is needed. Obviously, if you have two file servers
using the same bridge, that bridge only has one physical port, and the
SAN is not part of the host, then you might run into trouble. However,
you can use bonding in various ways to group connections -- and in this
way you can have a virtual NIC that actually has 2x 1Gb bonded devices,
or, if you choose to upgrade at a later stage, you can start putting in
10GbE cards and the virtual machine sees nothing different, access is
just faster.
On the flip side, you can have four or more relatively low-bandwidth
virtual machines running on the same host through the same single
physical port.
Think of the bridge as an "internal, virtual network switch"... you
wouldn't load up a switch with 47 high-bandwidth servers and then create
a single uplink to the SAN / other network without seriously considering
bonding or partitioning in some way to reduce the 47-into-1 bottleneck,
and the same is true of the virtual switch (bridge).

The difference is that you need to physically be there to repatch
connections or to add a new switch when you run out of ports. With the
virtual switch, these limitations are largely overcome.

>
>> But it's your server, you decide on the complexity.
>>
>> I stopped passing physical NICs when I was encountering issues with newer 
>> cards.
>> They are now resolved, but passing virtual interfaces is simpler and more 
>> reliable.
> The only issue I have with passing the port is that the kernel module
> must not be loaded from the initrd image.  So I don't see how fighting
> with the bridges would make things easier.
>
>

vif=[ 'mac=de:ad:be:ef:00:01,bridge=br0' ]

am i missing where the fight is ?

The only issue with bridges is that if eth0 is in the bridge and you try
to use eth0 directly, for example with an IP address, things go a bit
weird, so you have to use br0 instead.
So don't do that.
Perhaps you don't need a full bridge and you might just prefer to look up
macvlan instead.
This lets you create a new virtual device that behaves much more like a
secondary NIC,
e.g. in /etc/conf.d/net

macvlan_xenguest1="eth0"
mode_xenguest1="private"
mac_xenguest1="de:ad:be:ef:00:01"
config_xenguest1="192.168.1.12/24"
routes_xenguest1="default via 192.168.1.1"
modules_xenguest1="!ifplugd"

You can then run:
/etc/init.d/net.xenguest1 start

I'm not big into Xen, but I believe you should be able to pass this as a
"physical" device to Xen, and it then comes out on network interface eth0.
This way you get to keep your port without it being eaten by the bridge.
Do let me know if this works with Xen; I'll add it to my toolbox.



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-10 Thread J. Roeleveld
On Tuesday, December 09, 2014 02:26:24 PM thegeezer wrote:
> On 08/12/14 22:17, lee wrote:
> > "J. Roeleveld"  writes:
> >> create 1 bridge per physical network port
> >> add the physical ports to the respective bridges
> > 
> > That tends to make the ports disappear, i. e. become unusable, because
> > the bridge swallows them.
> 
> and if you pass the device then it becomes unusable to the host

Pass the device: it is unavailable to the host.
Add it to a bridge: you can still have an IP on that network.

> >> pass virtual NICs to the VMs which are part of the bridges.
> > 
> > Doesn't that create more CPU load than passing the port?  And at some
> > point, you may saturate the bandwidth of the port.
> 
> some forward planning is needed. obviously if you have two file servers
> using the same bridge and that bridge only has one physical port and the
> SAN is not part of the host then you might run into trouble.

The same is true when you have a single "dumb" switch with physical machines 
instead of VMs.

> however,
> you can use bonding in various ways to group connections -- and in this
> way you can have a virtual nic that actually has 2x 1GB bonded devices,
> or if you choose to upgrade at a later stage you can start putting in
> 10GbE cards and the virtual machine sees nothing different, just access
> is faster.

+1, it really is that simple.

> on the flip side you can have four or more relatively low bandwidth
> requirement virtual machines running on the same host through the same
> single physical port
> think of the bridge as an "internal, virtual, network switch"... you
> wouldn't load up a switch with 47 high bandwidth requirement servers and
> then create a single uplink to the SAN / other network without seriously
> considering bonding or partitioning in some way to reduce the 47into1
> bottleneck, and the same is true of the virtual-switch (bridge)

The virtual switch can handle more bandwidth than a physical one (unless you 
pay for high-speed hardware).

> the difference is that you need to physically be there to repatch
> connections or to add a new switch when you run out of ports. these
> limitations are largely overcome.
> 
> >> But it's your server, you decide on the complexity.
> >> 
> >> I stopped passing physical NICs when I was encountering issues with newer
> >> cards.
> >> They are now resolved, but passing virtual interfaces is simpler and more
> >> reliable.
> > 
> > The only issue I have with passing the port is that the kernel module
> > must not be loaded from the initrd image.  So I don't see how fighting
> > with the bridges would make things easier.
> 
> vif=[ 'mac=de:ad:be:ef:00:01,bridge=br0' ]
> 
> am i missing where the fight is ?

I'm wondering the same. :)

> the only issue with bridges is that if eth0 is in the bridge, if you try
> to use eth0 directly with for example an IP address things go a bit
> weird, so you have to use br0 instead

Give br0 an IP instead of eth0 and it will work.

> so don't do that.
> perhaps you don't need a full bridge and you might just prefer to lookup
> macvlan instead.
> this lets you create a new virtual device that behaves much more like a
> secondary nic
> e.g. in /etc/conf.d/net
> 
> macvlan_xenguest1="eth0"
> mode_xenguest1="private"
> mac_xenguest1="de:ad:be:ef:00:01"
> config_xenguest1="192.168.1.12/24"
> routes_xenguest1="default via 192.168.1.1"
> modules_xenguest1="!ifplugd"
> 
> you can then
> /etc/init.d/net.xenguest1 start
> 
> i'm not big into xen but i believe you should be able to pass this as a
> "physical" device to xen and it then comes out on network interface eth0
> this way you get to keep your port without it being eaten by the bridge
> do let me know if this works with xen i'll add it to my toolbox

I never heard of "macvlan" myself, but the networking I use is as follows:


# using the udev name randomizer names as the kernel-option is ignored
config_enp4s0="null"
config_enp5s0="null"
config_enp6s0="null"
config_enp7s0="null"

# Add all physical ports into a single bonded interface
slaves_bond0="enp4s0 enp5s0 enp6s0 enp7s0"
config_bond0="null"
rc_net_bond0_need="net.enp4s0 net.enp5s0 net.enp6s0 net.enp7s0"

# Split the bonded interface into seperate VLANs.
vlans_bond0="1 2 3 4"
vlan1_name="lan"
vlan2_name="gst"
vlan3_name="vm"
vlan4_name="net"
vlan_start_bond0="no"
rc_net_adm_need="net.bond0"
rc_net_lan_need="net.bond0"
rc_net_net_need="net.bond0"

# Configure the bridges
config_lan="null"
bridge_brlan="lan"
rc_net_brlan_need="net.lan"



   
config_gst="null"

Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-28 Thread lee
"J. Roeleveld"  writes:

> On Thursday, December 04, 2014 11:55:50 PM lee wrote:
>> BTW, can I use xfs for the VM, or will it be difficult to get the VM
>> booted from xfs?
>
> Using PV, not at all. As long as the kernel for the VM has XFS support built-
> in. (This is valid for other filesystems as well)

Oh that's good to know.  I've found that Gentoo comes with ZFS, and if I
can create a ZFS file system from some disks and boot from that without
further ado, that might be pretty cool.

I tried out the latest Gentoo live DVD ... and I was surprised that the
software it comes with is so old.  Libreoffice 3.x?  Seamonkey a couple
versions behind?

Is the software going to be more recent when I actually install?

>> > Once you're in the chroot then you should install the kernel/etc per
>> > the handbook.
>> 
>> So there isn't really an advantage to use HVM ... it's even easier
>> because I can access the LV from dom0.
>
> Not really. For the PV, there isn't even a necessity to have a kernel in the 
> VM at all as it is simpler to have the kernel on the filesystem belonging to 
> the host and point the config to that.

ATM, I'm looking at LXC instead of VMs.  I'm planning to make some
changes again, and containers seem very appealing.


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-28 Thread lee
"J. Roeleveld"  writes:

> On Monday, December 08, 2014 11:17:26 PM lee wrote:
>> "J. Roeleveld"  writes:
>> > create 1 bridge per physical network port
>> > add the physical ports to the respective bridges
>> 
>> That tends to make the ports disappear, i. e. become unusable, because
>> the bridge swallows them.
>
> What do you mean with "unusable"?

The bridge swallows the physical port, and the port becomes
unreachable.  IIRC, you can get around this by assigning an IP address
to the bridge rather than to the physical port ... In any case, I'm
finding bridges very confusing.

>> > pass virtual NICs to the VMs which are part of the bridges.
>> 
>> Doesn't that create more CPU load than passing the port?
>
> Do you have an IOMMU on the host?
> I don't notice any significant increase in CPU-usage caused by the network 
> layer.

Yes, and the kernel turns it off.  Apparently it's expected to be more
advantageous for some reason to use software emulation instead.

>> And at some
>> point, you may saturate the bandwidth of the port.
>
> And how is this different from assigning the network interface directly?

With more physical ports, you have more bandwidth available.

> My switch supports bonding, which means I have a total of 4Gbit/s between
> the server and switch for all networks. (using VLANs)

I don't know if mine does.

>> > But it's your server, you decide on the complexity.
>> > 
>> > I stopped passing physical NICs when I was encountering issues with newer
>> > cards.
>> > They are now resolved, but passing virtual interfaces is simpler and more
>> > reliable.
>> 
>> The only issue I have with passing the port is that the kernel module
>> must not be loaded from the initrd image.  So I don't see how fighting
>> with the bridges would make things easier.
>
> Unless you are forced to use some really weird configuration utility for the 
> network, configuring a bridge and assiging the bridge in the xen-domain 
> config 
> file is simpler then assigning physical network interfaces.

Hm, how is that simpler?  And how do you keep the traffic separated when
everything goes over the same bridge?  What about pppoe connections?


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-28 Thread Mick
On Monday 29 Dec 2014 02:25:04 lee wrote:
> "J. Roeleveld"  writes:
> > On Thursday, December 04, 2014 11:55:50 PM lee wrote:
> >> BTW, can I use xfs for the VM, or will it be difficult to get the VM
> >> booted from xfs?
> > 
> > Using PV, not at all. As long as the kernel for the VM has XFS support
> > built- in. (This is valid for other filesystems as well)
> 
> Oh that's good to know.  I've found that Gentoo comes with ZFS, and if I
> can create a ZFS file system from some disks and boot from that without
> further ado, that might be pretty cool.
> 
> I tried out the latest Gentoo live DVD ... and I was surprised that the
> software it comes with is so old.  Libreoffice 3.x?  Seamonkey a couple
> versions behind?
> 
> Is the software going to be more recent when I actually install?

During installation you will run 'emerge --sync', which will update portage 
with the latest and greatest ebuild for each package.  Then 'emerge --update 
--newuse --deep --ask world' will install the latest version of any package 
you have installed, or will install thereafter, in your system (like 
libreoffice, etc.).

-- 
Regards,
Mick




Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-29 Thread lee
Mick  writes:

> On Monday 29 Dec 2014 02:25:04 lee wrote:
>> I tried out the latest Gentoo live DVD ... and I was surprised that the
>> software it comes with is so old.  Libreoffice 3.x?  Seamonkey a couple
>> versions behind?
>> 
>> Is the software going to be more recent when I actually install?
>
> During installation you will run 'emerge --sync' which will update portage 
> with the latest and greatest for each package ebuild.  Then 'emerge --update 
> --newuse --deep --ask world' will install the latest version of any package 
> you may have installed, or install thereafter in your system (like 
> libreoffice, etc.)

That's what I thought ... I tried 'emerge --sync' or so with the live
system, and it seemed to receive lots of updates.

But why use old software for a recent live DVD?


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-29 Thread lee
thegeezer  writes:

> On 08/12/14 22:17, lee wrote:
>> "J. Roeleveld"  writes:
>>
>>> create 1 bridge per physical network port
>>> add the physical ports to the respective bridges
>> That tends to make the ports disappear, i. e. become unusable, because
>> the bridge swallows them.
> and if you pass the device then it becomes unusable to the host

The VM uses it instead, which is what I wanted :)

>>> pass virtual NICs to the VMs which are part of the bridges.
>> Doesn't that create more CPU load than passing the port?  And at some
>> point, you may saturate the bandwidth of the port.
>
> some forward planning is needed. obviously if you have two file servers
> using the same bridge and that bridge only has one physical port and the
> SAN is not part of the host then you might run into trouble. however,
> you can use bonding in various ways to group connections -- and in this
> way you can have a virtual nic that actually has 2x 1GB bonded devices,
> or if you choose to upgrade at a later stage you can start putting in
> 10GbE cards and the virtual machine sees nothing different, just access
> is faster.
> on the flip side you can have four or more relatively low bandwidth
> requirement virtual machines running on the same host through the same
> single physical port
> think of the bridge as an "internal, virtual, network switch"... you
> wouldn't load up a switch with 47 high bandwidth requirement servers and
> then create a single uplink to the SAN / other network without seriously
> considering bonding or partitioning in some way to reduce the 47into1
> bottleneck, and the same is true of the virtual-switch (bridge)
>
> the difference is that you need to physically be there to repatch
> connections or to add a new switch when you run out of ports. these
> limitations are largely overcome.

That all makes sense; my situation is different, though.  I plugged a
dual port card into the server and wanted to use one of the ports for
another internet connection and the other one for a separate network,
with firewalling and routing in between.  You can't keep the traffic
separate when it all goes over the same bridge, can you?

And the file server could get its own physical port --- not because
it's really needed but because it's possible.  I could plug in another
dual-port card for that and experiment with bonding.

However, I've changed plans and intend to use a workstation as a hybrid
system to reduce power consumption and noise, and such a setup has other
advantages, too.  I'll put Gentoo on it and probably use containers for
the VMs.  Then I can still use the server for experiments and/or run
distcc on it when I want to.

>> The only issue I have with passing the port is that the kernel module
>> must not be loaded from the initrd image.  So I don't see how fighting
>> with the bridges would make things easier.
>>
>>
>
> vif=[ 'mac=de:ad:be:ef:00:01,bridge=br0' ]
>
> am i missing where the fight is ?

setting up the bridges

no documentation about in which order a VM will see the devices

a handful of bridges and VMs

a firewall/router VM with its passed-through port for pppoe and three
bridges

the xen documentation being an awful mess

an awful lot of complexity required


Guess what, I still haven't found out how to actually back up and
restore a VM residing in an LVM volume.  I find it annoying that LVM
doesn't have any way of actually copying a LV.  It could be so easy if
you could just do something like 'lvcopy lv_source
other_host:/backups/lv_source_backup' and 'lvrestore
other_host:/backups/lv_source_backup vg_target/lv_source' --- or store
the copy of the LV in a local file somewhere.

Just why can't you?  ZFS apparently can do such things --- yet what's
the difference in performance of ZFS compared to hardware raid?
Software raid with MD makes for quite a slowdown.

> the only issue with bridges is that if eth0 is in the bridge, if you try
> to use eth0 directly with for example an IP address things go a bit
> weird, so you have to use br0 instead
> so don't do that.

Yes, it's very confusing.

> perhaps you don't need a full bridge and you might just prefer to lookup
> macvlan instead.
> this lets you create a new virtual device that behaves much more like a
> secondary nic
> e.g. in /etc/conf.d/net
>
> macvlan_xenguest1="eth0"
> mode_xenguest1="private"
> mac_xenguest1="de:ad:be:ef:00:01"
> config_xenguest1="192.168.1.12/24"
> routes_xenguest1="default via 192.168.1.1"
> modules_xenguest1="!ifplugd"
>
> you can then
> /etc/init.d/net.xenguest1 start
>
> i'm not big into xen but i believe you should be able to pass this as a
> "physical" device to xen and it then comes out on network interface eth0
> this way you get to keep your port without it being eaten by the bridge
> do let me know if this works with xen i'll add it to my toolbox

Hmm, and where's the other end of it?  Some physical port?  So it's like
plugging in several virtual cables into the same physical port, with a
built-in firew

Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-29 Thread Rich Freeman
On Mon, Dec 29, 2014 at 8:55 AM, lee  wrote:
>
> Just why can't you?  ZFS apparently can do such things --- yet what's
> the difference in performance of ZFS compared to hardware raid?
> Software raid with MD makes for quite a slowdown.
>

Well, there is certainly no reason that you couldn't serialize a
logical volume as far as design goes.  It just isn't implemented (as
far as I'm aware), though you certainly can just dd the contents of a
logical volume.

ZFS performs far better in such situations because you're usually just
snapshotting and not copying data at all (though ZFS DOES support
serialization which of course requires copying data, though it can be
done very efficiently if you're snapshotting since the filesystem can
detect changes without having to read everything).  Incidentally,
other than lacking maturity btrfs has the same capabilities.
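
For illustration, the serialized route looks roughly like this (an untested
sketch; "tank" and "backup" are made-up pool names):

# zfs snapshot tank/vm1@monday
# zfs send tank/vm1@monday | ssh other_host zfs receive backup/vm1

and later you can send only what changed since that snapshot:

# zfs send -i tank/vm1@monday tank/vm1@tuesday | ssh other_host zfs receive backup/vm1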

The reason ZFS (and btrfs) are able to perform better is that they
dictate the filesystem, volume management, and RAID layers.  md has to
support arbitrary data being stored on top of it - it is just a big
block device which is just a gigantic array.  ZFS actually knows what
is in all those blocks, and it doesn't need to copy data that it knows
hasn't changed, protect blocks when it knows they don't contain data,
and so on.  You could probably improve on mdadm by implementing
additional TRIM-like capabilities for it so that filesystems could
inform it better about the state of blocks, which of course would have
to be supported by the filesystem.  However, I doubt it will ever work
as well as something like ZFS where all this stuff is baked into every
level of the design.

--
Rich



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-29 Thread J. Roeleveld
On Monday, December 29, 2014 03:38:40 AM lee wrote:
> "J. Roeleveld"  writes:
> > What do you mean with "unusable"?
> 
> The bridge swallows the physical port, and the port becomes
> unreachable.  IIRC, you can get around this by assigning an IP address
> to the bridge rather than to the physical port ... In any case, I'm
> finding bridges very confusing.

This is by design and is documented that way all over the web.

> >> > pass virtual NICs to the VMs which are part of the bridges.
> >> 
> >> Doesn't that create more CPU load than passing the port?
> > 
> > Do you have an IOMMU on the host?
> > I don't notice any significant increase in CPU-usage caused by the network
> > layer.
> 
> Yes, and the kernel turns it off.  Apparently it's expected to be more
> advantageous for some reason to use software emulation instead.

Huh? That is usually because of a bug in the firmware on your server.

> >> And at some
> >> point, you may saturate the bandwidth of the port.
> > 
> > And how is this different from assigning the network interface directly?
> 
> With more physical ports, you have more bandwidth available.

See following:

> >> My switch supports bonding, which means I have a total of 4Gbit/s between
> >> the server and switch for all networks. (using VLANs)
> 
> I don't know if mine does.

If bandwidth is important to you, investing in a quality switch might be more 
useful.

> >> > But it's your server, you decide on the complexity.
> >> > 
> >> > I stopped passing physical NICs when I was encountering issues with
> >> > newer
> >> > cards.
> >> > They are now resolved, but passing virtual interfaces is simpler and
> >> > more
> >> > reliable.
> >> 
> >> The only issue I have with passing the port is that the kernel module
> >> must not be loaded from the initrd image.  So I don't see how fighting
> >> with the bridges would make things easier.
> > 
> > Unless you are forced to use some really weird configuration utility for
> > the network, configuring a bridge and assigning the bridge in the
> > xen-domain config file is simpler than assigning physical network
> > interfaces.
> 
> Hm, how is that simpler?  And how do you keep the traffic separated when
> everything goes over the same bridge?  What about pppoe connections?

Multiple bridges?

--
Joost



Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-30 Thread thegeezer
On 29/12/14 13:55, lee wrote:
> thegeezer  writes:
>
>> On 08/12/14 22:17, lee wrote:
>>> "J. Roeleveld"  writes:
>>>
 create 1 bridge per physical network port
 add the physical ports to the respective bridges
>>> That tends to make the ports disappear, i. e. become unusable, because
>>> the bridge swallows them.
>> and if you pass the device then it becomes unusable to the host
> The VM uses it instead, which is what I wanted :)
>
 pass virtual NICs to the VMs which are part of the bridges.
>>> Doesn't that create more CPU load than passing the port?  And at some
>>> point, you may saturate the bandwidth of the port.
>> some forward planning is needed. obviously if you have two file servers
>> using the same bridge and that bridge only has one physical port and the
>> SAN is not part of the host then you might run into trouble. however,
>> you can use bonding in various ways to group connections -- and in this
>> way you can have a virtual nic that actually has 2x 1GB bonded devices,
>> or if you choose to upgrade at a later stage you can start putting in
>> 10GbE cards and the virtual machine sees nothing different, just access
>> is faster.
>> on the flip side you can have four or more relatively low bandwidth
>> requirement virtual machines running on the same host through the same
>> single physical port
>> think of the bridge as an "internal, virtual, network switch"... you
>> wouldn't load up a switch with 47 high bandwidth requirement servers and
>> then create a single uplink to the SAN / other network without seriously
>> considering bonding or partitioning in some way to reduce the 47into1
>> bottleneck, and the same is true of the virtual-switch (bridge)
>>
>> the difference is that you need to physically be there to repatch
>> connections or to add a new switch when you run out of ports. these
>> limitations are largely overcome.
> That all makes sense; my situation is different, though.  I plugged a
> dual port card into the server and wanted to use one of the ports for
> another internet connection and the other one for a separate network,
> with firewalling and routing in between.  You can't keep the traffic
> separate when it all goes over the same bridge, can you?
>
> And the file server could get its own physical port --- not because
> it's really needed but because it's possible.  I could plug in another
> dual-port card for that and experiment with bonding.
>
> However, I've changed plans and intend to use a workstation as a hybrid
> system to reduce power consumption and noise, and such a setup has other
> advantages, too.  I'll put Gentoo on it and probably use containers for
> the VMs.  Then I can still use the server for experiments and/or run
> distcc on it when I want to.
>
>>> The only issue I have with passing the port is that the kernel module
>>> must not be loaded from the initrd image.  So I don't see how fighting
>>> with the bridges would make things easier.
>>>
>>>
>> vif=[ 'mac=de:ad:be:ef:00:01,bridge=br0' ]
>>
>> am i missing where the fight is ?
> setting up the bridges
>
> no documentation about in which order a VM will see the devices
>
> a handful of bridges and VMs
>
> a firewall/router VM with its passed-through port for pppoe and three
> bridges
>
> the xen documentation being an awful mess
>
> an awful lot of complexity required
>
>
> Guess what, I still haven't found out how to actually back up and
> restore a VM residing in an LVM volume.  I find it annoying that LVM
> doesn't have any way of actually copying a LV.  It could be so easy if
> you could just do something like 'lvcopy lv_source
> other_host:/backups/lv_source_backup' and 'lvrestore
> other_host:/backups/lv_source_backup vg_target/lv_source' --- or store
> the copy of the LV in a local file somewhere.

agreed. you have two choices, you can either use dd and clone the LV
like a normal partition.
alternatively you can use split mirrors and i do this to clone up
physical devices:

1. make a mirror of the lv you want to copy to /dev/usb1
2. # lvconvert --splitmirrors 2 --name copy vg/lv /dev/usb1

step 2 says: split the mirror into two parts, give the new version the
name 'copy', and leave it on the pv /dev/usb1

you then need to remove it from your volume group if you want
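
spelled out, the two steps might look something like this (just a sketch;
the names and /dev/usb1 are placeholders, and the image counts need to
match your own mirror layout):

# lvconvert -m 1 vg/lv /dev/usb1
(adds a mirror leg of vg/lv on the pv /dev/usb1; let it finish syncing)
# lvconvert --splitmirrors 1 --name copy vg/lv /dev/usb1
(detaches that leg as a standalone lv called vg/copy)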

> Just why can't you?  ZFS apparently can do such things --- yet what's
> the difference in performance of ZFS compared to hardware raid?
> Software raid with MD makes for quite a slowdown.
sorry but that's just not true. if you choose the correct raid level and
stripe size it can easily compete, and be more portable (you don't have to
find an identical raid card if the raid card goes bang); many raid cards,
i would argue, are even underpowered for their required iops
>
>> the only issue with bridges is that if eth0 is in the bridge, if you try
>> to use eth0 directly with for example an IP address things go a bit
>> weird, so you have to use br0 instead
>> so don't do that.
> Yes, it's very confusing.
>
>> perhaps 

Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-30 Thread J. Roeleveld
On Monday, December 29, 2014 02:55:49 PM lee wrote:
> thegeezer  writes:
> > On 08/12/14 22:17, lee wrote:
> >> "J. Roeleveld"  writes:
> >>> create 1 bridge per physical network port
> >>> add the physical ports to the respective bridges
> >> 
> >> That tends to make the ports disappear, i. e. become unusable, because
> >> the bridge swallows them.
> > 
> > and if you pass the device then it becomes unusable to the host
> 
> The VM uses it instead, which is what I wanted :)
> 
> >>> pass virtual NICs to the VMs which are part of the bridges.
> >> 
> >> Doesn't that create more CPU load than passing the port?  And at some
> >> point, you may saturate the bandwidth of the port.
> > 
> > some forward planning is needed. obviously if you have two file servers
> > using the same bridge and that bridge only has one physical port and the
> > SAN is not part of the host then you might run into trouble. however,
> > you can use bonding in various ways to group connections -- and in this
> > way you can have a virtual nic that actually has 2x 1GB bonded devices,
> > or if you choose to upgrade at a later stage you can start putting in
> > 10GbE cards and the virtual machine sees nothing different, just access
> > is faster.
> > on the flip side you can have four or more relatively low bandwidth
> > requirement virtual machines running on the same host through the same
> > single physical port
> > think of the bridge as an "internal, virtual, network switch"... you
> > wouldn't load up a switch with 47 high bandwidth requirement servers and
> > then create a single uplink to the SAN / other network without seriously
> > considering bonding or partitioning in some way to reduce the 47into1
> > bottleneck, and the same is true of the virtual-switch (bridge)
> > 
> > the difference is that you need to physically be there to repatch
> > connections or to add a new switch when you run out of ports. these
> > limitations are largely overcome.
> 
> That all makes sense; my situation is different, though.  I plugged a
> dual port card into the server and wanted to use one of the ports for
> another internet connection and the other one for a separate network,
> with firewalling and routing in between.  You can't keep the traffic
> separate when it all goes over the same bridge, can you?

Not if it goes over the same bridge. But as they are virtual, you can make as 
many as you need.

> And the file server could get its own physical port --- not because
> it's really needed but because it's possible.  I could plug in another
> dual-port card for that and experiment with bonding.

How many slots do you have for all those cards?
And don't forget there is a bandwidth limit on the PCI-bus.

> However, I've changed plans and intend to use a workstation as a hybrid
> system to reduce power consumption and noise, and such a setup has other
> advantages, too.  I'll put Gentoo on it and probably use containers for
> the VMs.  Then I can still use the server for experiments and/or run
> distcc on it when I want to.

Most people use a low-power machine as a server and use the fast machine as a 
workstation to keep power consumption and noise down.

> >> The only issue I have with passing the port is that the kernel module
> >> must not be loaded from the initrd image.  So I don't see how fighting
> >> with the bridges would make things easier.
> > 
> > vif=[ 'mac=de:ad:be:ef:00:01,bridge=br0' ]
> > 
> > am i missing where the fight is ?
> 
> setting up the bridges

Really simple; there are plenty of guides around, including how to configure it
using netifrc (which is installed by default on Gentoo).
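
As a rough sketch (interface names and addresses are placeholders, not my
actual config), /etc/conf.d/net would contain something like:
***
config_eth0="null"
bridge_br0="eth0"
config_br0="192.168.1.2/24"
routes_br0="default via 192.168.1.1"
rc_net_br0_need="net.eth0"
***
Then symlink and start the bridge:

# cd /etc/init.d && ln -s net.lo net.br0
# /etc/init.d/net.br0 start

and point the domU at it with bridge=br0 in its vif line.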

> no documentation about in which order a VM will see the devices

Same goes for physical devices. Use udev-rules to name the interfaces 
logically based on the MAC-address:
***
# cat 70-persistent-net.rules 
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", 
ATTR{address}=="00:16:3e:16:01:01", ATTR{dev_id}=="0x0", ATTR{type}=="1", 
KERNEL=="eth*", NAME="lan"

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", 
ATTR{address}=="00:16:3e:16:01:02", ATTR{dev_id}=="0x0", ATTR{type}=="1", 
KERNEL=="eth*", NAME="dmz"
***

> a handful of bridges and VMs

Only 1 bridge per network segment is needed.

> a firewall/router VM with its passed-through port for pppoe and three
> bridges

Not difficult, had that for years till I moved the router to a separate 
machine. 
(Needed something small to fit the room where it lives)

> the xen documentation being an awful mess

A lot of it is outdated. A big cleanup would be useful there.

> an awful lot of complexity required

There is a logic to it. If you use the basic xen install, you need to do every 
layer yourself.
You could also opt to go for a more ready-made product, like XCP, VMware ESX, ...
Those will do more for you, but also hide the interesting details to the point 
of being annoying.
A bit like using Ubuntu or Red Hat instead of Gentoo.

> Guess what, I still haven't found out how to actually back up and
> r

Re: [gentoo-user] installing Gentoo in a xen VM

2014-12-30 Thread Rich Freeman
On Tue, Dec 30, 2014 at 1:05 PM, J. Roeleveld  wrote:
>
> I could do with a hardware controller which can be used to off-load all the
> heavy lifting for the RAIDZ-calculations away from the CPU. And if the stuff
> for the deduplication could also be done that way?
>

The CPU is the least of the reasons why ZFS/btrfs will outperform
traditional RAID.  Most of the benefits come from avoiding disk
operations.  If you write 1 byte to the middle of a file ext4 will
overwrite one block in-place, and md will read a stripe and then
rewrite the stripe.  If you write 1 byte to the middle of a file on
btrfs (and likely zfs) it will just write 1 byte to the metadata and
bunch it up with a bunch of other writes, likely overwriting an entire
stripe at once so that there is no need to read the strip first.  If
you copy a file in btrfs it will just create a reflink and it is a
metadata-only change, etc.  If you scrub your array the filesystem
knows what blocks are in use and only those get checked, and if the
filesystem has checksums at the block level it can do the scrub
asynchronously which impacts reads less, etc.
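
The reflink copy mentioned above, for instance, is a single command on
btrfs and completes almost instantly regardless of file size, since only
metadata is written (sketch, made-up file names):

# cp --reflink=always big_disk_image.img big_disk_image.copy.img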

I'm sure that CPU optimizations count for something, but avoiding disk
IO is going to have a much larger impact, especially with spinning
disks.

-- 
Rich



Re: [gentoo-user] installing Gentoo in a xen VM

2015-01-09 Thread lee
"J. Roeleveld"  writes:

>> with firewalling and routing in between.  You can't keep the traffic
>> separate when it all goes over the same bridge, can you?
>
> Not if it goes over the same bridge. But as they are virtual, you can make as 
> many as you need.

I made as few as I needed.  What sense would it make to, necessarily,
use a physical port for a pppoe connection and then go to the lengths of
creating a bridge for it to somehow bring it over to the VM?  I found it
way easier to just pass the physical port to the VM.

>> > am i missing where the fight is ?
>> 
>> setting up the bridges
>
> Really simple; there are plenty of guides around, including how to configure
> it using netifrc (which is installed by default on Gentoo).

Creating a bridge isn't too difficult; getting it to work is.

>> no documentation about in which order a VM will see the devices
>
> Same goes for physical devices.

That doesn't make it any better.

> Use udev-rules to name the interfaces 
> logically based on the MAC-address:
> ***
> # cat 70-persistent-net.rules 
> SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", 
> ATTR{address}=="00:16:3e:16:01:01", ATTR{dev_id}=="0x0", ATTR{type}=="1", 
> KERNEL=="eth*", NAME="lan"
>
> SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", 
> ATTR{address}=="00:16:3e:16:01:02", ATTR{dev_id}=="0x0", ATTR{type}=="1", 
> KERNEL=="eth*", NAME="dmz"
> ***

Who understands udev?

>> a handful of bridges and VMs
>
> Only 1 bridge per network segment is needed.

Yes, and that makes for a handful.

>> a firewall/router VM with its passed-through port for pppoe and three
>> bridges
>
>> Not difficult, had that for years till I moved the router to a separate 
> machine. 
> (Needed something small to fit the room where it lives)

It's extremely confusing, difficult and complicated.

>> the xen documentation being an awful mess
>
> A lot of it is outdated. A big cleanup would be useful there.

Yes, it tells you lots of things that you find not to work and confuses
you even more until you don't know what to do anymore because nothing
works.

>> an awful lot of complexity required
>
> There is a logic to it.

If there is, it continues to escape me.

>> Just why can't you?  ZFS apparently can do such things --- yet what's
>> the difference in performance of ZFS compared to hardware raid?
>> Software raid with MD makes for quite a slowdown.
>
> What do you consider "hardware raid" in this comparison?

A decent hardware raid controller, like an HP Smart Array P800 or an IBM
ServeRaid 8k --- of course, they are outdated, yet they work well.

> Most so-called hardware raid cards depend heavily on the host CPU to do all 
> the calculations and the code used is extremely inefficient.
> The Linux build-in software raid layer ALWAYS outperforms those cards.

You mean the fake controllers?  I wouldn't use those.

> The true hardware raid cards have their own calculation chips to do the heavy 
> lifting. Those actually stand a chance to outperform the linux software raid 
> layer. It depends on the spec of the host CPU and what you use the system for.

With all CPUs, relatively slow and relatively fast ones, I do notice an
awful sluggishness with software raid which hardware raid simply doesn't
have.  This sluggishness might not be considered or even noticed by a
benchmark you might run, yet it is there.

> ZFS and BTRFS runs fully on the host CPU, but has some additional logic built-
> in which allows it to generally outperform hardware raid.

I can't tell for sure yet; so far, it seems that they do better than md
raid.  Btrfs needs some more development ...  ZFS with SSD cache is
probably hard to beat.

> I could do with a hardware controller which can be used to off-load all the 
> heavy lifting for the RAIDZ-calculations away from the CPU. And if the stuff 
> for the deduplication could also be done that way?

Yes, I've already been wondering why they don't make hardware ZFS
controllers.  There doesn't seem to be much point in making "classical"
hardware raid controllers while ZFS has so many advantages over them.

>> > the only issue with bridges is that if eth0 is in the bridge, if you try
>> > to use eth0 directly with for example an IP address things go a bit
>> > weird, so you have to use br0 instead
>> > so don't do that.
>> 
>> Yes, it's very confusing.
>
> It's just using a different name. Once it's configured, the network layer of 
> the 
> OS handles it for you.

I understand things by removing abstractions.  When you remove all
abstractions from a bridge, there isn't anything left.  A network card,
you can take into your hands and look at it; you can plug and unplug the
wire(s); these cards usually have fancy lights to show you whether
there's a connection or not, and the lights even blink when there's
network traffic.  A bridge doesn't exist, has nothing and shows you
nothing.  It's not understandable because it isn't anything, no matter
how it's called or handled.


-- 
Again we must be afraid of speaki

Re: [gentoo-user] installing Gentoo in a xen VM

2015-01-09 Thread lee
"J. Roeleveld"  writes:

> On Monday, December 29, 2014 03:38:40 AM lee wrote:
>> "J. Roeleveld"  writes:
>> > What do you mean with "unusable"?
>> 
>> The bridge swallows the physical port, and the port becomes
>> unreachable.  IIRC, you can get around this by assigning an IP address
>> to the bridge rather than to the physical port ... In any case, I'm
>> finding bridges very confusing.
>
> This is by design and is documented that way all over the web.

Nonetheless, I find them very confusing.

>> >> > pass virtual NICs to the VMs which are part of the bridges.
>> >> 
>> >> Doesn't that create more CPU load than passing the port?
>> > 
>> > Do you have an IOMMU on the host?
>> > I don't notice any significant increase in CPU-usage caused by the network
>> > layer.
>> 
>> Yes, and the kernel turns it off.  Apparently it's expected to be more
>> advantageous for some reason to use software emulation instead.
>
> Huh? That is usually because of a bug in the firmware on your server.

Dunno, the kernel turned it off, so I read up about it, and what I found
indicated that using a software emulation of NUMA is supposed to do
better --- whether that makes sense or not.

BTW, there's a kernel option to make the kernel adjust processes for
better performance on NUMA systems.  Does that work fine, or should I
rather use numad?

>> >> And at some
>> >> point, you may saturate the bandwidth of the port.
>> > 
>> > And how is this different from assigning the network interface directly?
>> 
>> With more physical ports, you have more bandwidth available.
>
> See following:
>
>> >> My switch supports bonding, which means I have a total of 4Gbit/s between
>> >> the server and switch for all networks. (using VLANs)
>> 
>> I don't know if mine does.
>
> If bandwidth is important to you, investing in a quality switch might be more 
> useful.

Unfortunately, they can be rather expensive.

>> > Unless you are forced to use some really weird configuration utility for
>> > the network, configuring a bridge and assigning the bridge in the
>> > xen-domain config file is simpler than assigning physical network
>> > interfaces.
>> 
>> Hm, how is that simpler?  And how do you keep the traffic separated when
>> everything goes over the same bridge?  What about pppoe connections?
>
> Multiple bridges?

And how is that simpler?  Isn't that somewhat unsafe since the bridge
reaches into the host?  Why would I set up a bridge, assign an interface
to it, use special firewall rules and whatever else might be required
instead of simply giving the physical port to the VM which does the
pppoe connection and the firewalling and routing?

More bridges are more confusing.

You're kinda suggesting that it's simpler to live on an island which has
50 bridges connecting it to some mainland where you have to go every day
for work than it is to work on the island.  That seems to me like taking
a long detour every day.


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] installing Gentoo in a xen VM

2015-01-09 Thread lee
Rich Freeman  writes:

> On Mon, Dec 29, 2014 at 8:55 AM, lee  wrote:
>>
>> Just why can't you?  ZFS apparently can do such things --- yet what's
>> the difference in performance of ZFS compared to hardware raid?
>> Software raid with MD makes for quite a slowdown.
>>
>
> Well, there is certainly no reason that you couldn't serialize a
> logical volume as far as design goes.  It just isn't implemented (as
> far as I'm aware), though you certainly can just dd the contents of a
> logical volume.

You can use dd to make a copy.  Then what do you do with this copy?  I
suppose you can't just use dd to write the copy into another volume
group and have it show up as desired.  You might destroy the volume
group instead ...

> ZFS performs far better in such situations because you're usually just
> snapshotting and not copying data at all (though ZFS DOES support
> serialization which of course requires copying data, though it can be
> done very efficiently if you're snapshotting since the filesystem can
> detect changes without having to read everything).

How's the performance of software raid vs. hardware raid vs. ZFS raid
(which is also software raid)?

> Incidentally, other than lacking maturity btrfs has the same
> capabilities.

IIRC, there are things that btrfs can't do and ZFS can, like sending a
FS over the network.

> The reason ZFS (and btrfs) are able to perform better is that they
> dictate the filesystem, volume management, and RAID layers.  md has to
> support arbitrary data being stored on top of it - it is just a big
> block device which is just a gigantic array.  ZFS actually knows what
> is in all those blocks, and it doesn't need to copy data that it knows
> hasn't changed, protect blocks when it knows they don't contain data,
> and so on.  You could probably improve on mdadm by implementing
> additional TRIM-like capabilities for it so that filesystems could
> inform it better about the state of blocks, which of course would have
> to be supported by the filesystem.  However, I doubt it will ever work
> as well as something like ZFS where all this stuff is baked into every
> level of the design.

Well, I'm planning to make some tests with ZFS.  Particularly, I want to
see how it performs when NFS clients write to an exported ZFS file
system.

How about ZFS as root file system?  I'd rather create a pool over all
the disks and create file systems within the pool than use something
like ext4 to get the system to boot.

And how do I convert a system installed on an ext4 FS (on a hardware
raid-1) to ZFS?  I can plug in another two disks, create a ZFS pool from
them, make file systems (like for /tmp, /var, /usr ...) and copy
everything over.  But how do I make it bootable?


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] installing Gentoo in a xen VM

2015-01-09 Thread lee
thegeezer  writes:

>> Guess what, I still haven't found out how to actually back up and
>> restore a VM residing in an LVM volume.  I find it annoying that LVM
>> doesn't have any way of actually copying a LV.  It could be so easy if
>> you could just do something like 'lvcopy lv_source
>> other_host:/backups/lv_source_backup' and 'lvrestore
>> other_host:/backups/lv_source_backup vg_target/lv_source' --- or store
>> the copy of the LV in a local file somewhere.
>
> agreed. you have two choices, you can either use dd and clone the LV
> like a normal partition.
> alternatively you can use split mirrors and i do this to clone up
> physical devices:
>
> 1. make a mirror of the lv you want to copy to /dev/usb1
> 2. # lvconvert --splitmirrors 2 --name copy vg/lv /dev/usb1
>
> step 2 says: split the mirror into two parts, give the new version the
> name 'copy', and leave it on the pv /dev/usb1
>
> you then need to remove it from your volume group if you want

And then you have a copy?

>> Just why can't you?  ZFS apparently can do such things --- yet what's
>> the difference in performance of ZFS compared to hardware raid?
>> Software raid with MD makes for quite a slowdown.
> sorry but that's just not true. if you choose the correct raid level and
> stripe size it can easily compete,

No, it cannot.  You can use the same raid level and stripe size with
the same disks, once with software raid (md), once with hardware raid.
When there's disk activity going on, you will notice an annoying
sluggishness with the software raid which hardware raid just doesn't
have.

> and be more portable (you don't have to find the identical raid card
> if the raid card goes bang);

That depends on what cards you use.  HP smart arrays are supposed to be
compatible throughout different models.

> many raid card i would argue are even underpowered for their required
> iops

When you require more iops than your hardware delivers, you need to
upgrade the hardware.  Do you have some number to show that md raid
performs better, without the sluggishness, than a decent hardware raid
controller with otherwise identical hardware?


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] installing Gentoo in a xen VM

2015-01-09 Thread Rich Freeman
On Fri, Jan 9, 2015 at 4:02 PM, lee  wrote:
> Rich Freeman  writes:
>
>> On Mon, Dec 29, 2014 at 8:55 AM, lee  wrote:
>>>
>>> Just why can't you?  ZFS apparently can do such things --- yet what's
>>> the difference in performance of ZFS compared to hardware raid?
>>> Software raid with MD makes for quite a slowdown.
>>>
>>
>> Well, there is certainly no reason that you couldn't serialize a
>> logical volume as far as design goes.  It just isn't implemented (as
>> far as I'm aware), though you certainly can just dd the contents of a
>> logical volume.
>
> You can use dd to make a copy.  Then what do you do with this copy?  I
> suppose you can't just use dd to write the copy into another volume
> group and have it show up as desired.  You might destroy the volume
> group instead ...

You can dd from a logical volume into a file, and from a file into a
logical volume.  You won't destroy the volume group unless you do
something dumb like trying to copy it directly onto a physical volume.
Logical volumes are just block devices as far as the kernel is
concerned.
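
Concretely, something like this (a sketch; the names are placeholders, and
the VM should be shut down or snapshotted first so the image is consistent):

# dd if=/dev/vg0/gentoo_disk of=/backups/gentoo_disk.img bs=4M
# dd if=/backups/gentoo_disk.img of=/dev/vg0/gentoo_disk bs=4M

or piped straight to another host:

# dd if=/dev/vg0/gentoo_disk bs=4M | ssh other_host 'cat > /backups/gentoo_disk.img'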

>
>> ZFS performs far better in such situations because you're usually just
>> snapshotting and not copying data at all (though ZFS DOES support
>> serialization which of course requires copying data, though it can be
>> done very efficiently if you're snapshotting since the filesystem can
>> detect changes without having to read everything).
>
> How's the performance of software raid vs. hardware raid vs. ZFS raid
> (which is also software raid)?

Well, depends on your hardware.  mdadm does pretty well though I'm
sure a very good quality hardware RAID will outperform it.  I would
think that ZFS would outperform both for some workloads, and
underperform it for others - it works very differently.  ZFS doesn't
have the write hole and all that, but I would think that large (many
stripes) internal writes to files would work worse since ZFS has to
juggle metadata and other filesystems will overwrite it in place.

>
>> Incidentally, other than lacking maturity btrfs has the same
>> capabilities.
>
> IIRC, there are things that btrfs can't do and ZFS can, like sending a
> FS over the network.

There are things that each filesystem can do that the other cannot.
That doesn't include sending a filesystem over the network.  btrfs
send can serialize snapshots or the differences between two snapshots.
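
A sketch of what that looks like (subvolume and host names are made up):

# btrfs subvolume snapshot -r /data /data/.snap-monday
# btrfs send /data/.snap-monday | ssh other_host btrfs receive /backups

and incrementally, sending only the difference between two snapshots:

# btrfs send -p /data/.snap-monday /data/.snap-tuesday | ssh other_host btrfs receive /backups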

>
> How about ZFS as root file system?  I'd rather create a pool over all
> the disks and create file systems within the pool than use something
> like ext4 to get the system to boot.

I doubt zfs is supported by grub and such, so you'd have to do the
usual in-betweens as you're alluding to.  However, I suspect it would
generally work.  I haven't really used zfs personally other than
tinkering around a bit in a VM.

>
> And how do I convert a system installed on an ext4 FS (on a hardware
> raid-1) to ZFS?  I can plug in another two disks, create a ZFS pool from
> them, make file systems (like for /tmp, /var, /usr ...) and copy
> everything over.  But how do I make it bootable?
>

I'm pretty sure you'd need an initramfs and a boot partition that is
readable by the bootloader.  You can skip that with btrfs, but not
with zfs.  GRUB is FSF so I doubt they'll be doing anything about zfs
anytime soon.  Otherwise, you'll have to copy everything over - btrfs
can do in-place ext4 conversion, but not zfs.

-- 
Rich



Re: [gentoo-user] installing Gentoo in a xen VM

2015-01-10 Thread lee
Rich Freeman  writes:

> On Fri, Jan 9, 2015 at 4:02 PM, lee  wrote:
>> Rich Freeman  writes:
>>
>>> On Mon, Dec 29, 2014 at 8:55 AM, lee  wrote:

 Just why can't you?  ZFS apparently can do such things --- yet what's
 the difference in performance of ZFS compared to hardware raid?
 Software raid with MD makes for quite a slowdown.

>>>
>>> Well, there is certainly no reason that you couldn't serialize a
>>> logical volume as far as design goes.  It just isn't implemented (as
>>> far as I'm aware), though you certainly can just dd the contents of a
>>> logical volume.
>>
>> You can use dd to make a copy.  Then what do you do with this copy?  I
>> suppose you can't just use dd to write the copy into another volume
>> group and have it show up as desired.  You might destroy the volume
>> group instead ...
>
> You can dd from a logical volume into a file, and from a file into a
> logical volume.  You won't destroy the volume group unless you do
> something dumb like trying to copy it directly onto a physical volume.
> Logical volumes are just block devices as far as the kernel is
> concerned.

You mean I need to create a LV (of the same size) and then use dd to
write the backup into it?  That doesn't seem like a safe method.

>> How about ZFS as root file system?  I'd rather create a pool over all
>> the disks and create file systems within the pool than use something
>> like ext4 to get the system to boot.
>
> I doubt zfs is supported by grub and such, so you'd have to do the
>> usual in-betweens as you're alluding to.  However, I suspect it would
> generally work.  I haven't really used zfs personally other than
> tinkering around a bit in a VM.

That would be a very big disadvantage.  When you use zfs, it doesn't
really make sense to have extra partitions or drives; you just want to
create a pool from all drives and use that.  Even if you accept a boot
partition, that partition must be on a raid volume, so you either have
to dedicate at least two disks to it, or you're employing software raid
for a very small partition and cannot use the whole device for ZFS as
recommended.  That just sucks.

>> And how do I convert a system installed on an ext4 FS (on a hardware
>> raid-1) to ZFS?  I can plug in another two disks, create a ZFS pool from
>> them, make file systems (like for /tmp, /var, /usr ...) and copy
>> everything over.  But how do I make it bootable?
>>
>
> I'm pretty sure you'd need an initramfs and a boot partition that is
> readable by the bootloader.  You can skip that with btrfs, but not
> with zfs.  GRUB is FSF so I doubt they'll be doing anything about zfs
> anytime soon.  Otherwise, you'll have to copy everything over - btrfs
> can do in-place ext4 conversion, but not zfs.

Well, I don't want to use btrfs (yet).  The raid capabilities of btrfs
are probably one of its most unstable features.  They are derived from
mdraid:  Can they compete with ZFS both in performance and, more
important, reliability?

With ZFS at hand, btrfs seems pretty obsolete.


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] installing Gentoo in a xen VM

2015-01-10 Thread Rich Freeman
On Sat, Jan 10, 2015 at 1:22 PM, lee  wrote:
> Rich Freeman  writes:
>>
>> You can dd from a logical volume into a file, and from a file into a
>> logical volume.  You won't destroy the volume group unless you do
>> something dumb like trying to copy it directly onto a physical volume.
>> Logical volumes are just block devices as far as the kernel is
>> concerned.
>
> You mean I need to create a LV (of the same size) and then use dd to
> write the backup into it?  That doesn't seem like a safe method.

Doing backups with dd isn't terribly practical, but it is completely
safe if done correctly.  The LV would need to be the same size or
larger, or else your filesystem will be truncated.

>
>>> How about ZFS as root file system?  I'd rather create a pool over all
>>> the disks and create file systems within the pool than use something
>>> like ext4 to get the system to boot.
>>
>> I doubt zfs is supported by grub and such, so you'd have to do the
>> usual in-betweens as you're alluding to.  However, I suspect it would
>> generally work.  I haven't really used zfs personally other than
>> tinkering around a bit in a VM.
>
> That would be a very big disadvantage.  When you use zfs, it doesn't
> really make sense to have extra partitions or drives; you just want to
> create a pool from all drives and use that.  Even if you accept a boot
> partition, that partition must be on a raid volume, so you either have
> to dedicate at least two disks to it, or you're employing software raid
> for a very small partition and cannot use the whole device for ZFS as
> recommended.  That just sucks.

Just create a small boot partition and give the rest to zfs.  A
partition is a block device, just like a disk.  ZFS doesn't care if it
is managing the entire disk or just a partition.  This sort of thing
was very common before grub2 started supporting more filesystems.
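
As a rough sketch (device names and sizes are placeholders, not a
recommendation):

# parted -s /dev/sda mklabel gpt
# parted -s /dev/sda mkpart boot ext2 1MiB 513MiB
# parted -s /dev/sda mkpart zfs 513MiB 100%
# mkfs.ext2 /dev/sda1
# zpool create rpool /dev/sda2

rpool then lives on /dev/sda2, with /dev/sda1 left over for /boot.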

>
>> Well, I don't want to use btrfs (yet).  The raid capabilities of btrfs
> are probably one of its most unstable features.  They are derived from
> mdraid:  Can they compete with ZFS both in performance and, more
> important, reliability?
>


Btrfs raid1 is about as stable as btrfs without raid.  I can't say
whether any code from mdraid was borrowed but btrfs raid works
completely differently and has about as much in common with mdraid as
zfs does.  I can't speak for zfs performance, but btrfs performance
isn't all that great right now - I don't think there is any
theoretical reason why it couldn't be as good as zfs one day, but it
isn't today.  Btrfs is certainly far less reliable than zfs on solaris
- zfs on linux has less long-term history of any kind but most seem to
think it works reasonably well.

> With ZFS at hand, btrfs seems pretty obsolete.

You do realize that btrfs was created when ZFS was already at hand,
right?  I don't think that ZFS will be likely to make btrfs obsolete
unless it adopts more dynamic desktop-oriented features (like being
able to modify a vdev), and is relicensed to something GPL-compatible.
Unless those happen, it is unlikely that btrfs is going to go away,
unless it is replaced by something different.


-- 
Rich



Re: [gentoo-user] installing Gentoo in a xen VM

2015-01-11 Thread lee
Rich Freeman  writes:

> On Sat, Jan 10, 2015 at 1:22 PM, lee  wrote:
>> Rich Freeman  writes:
>>>
>>> You can dd from a logical volume into a file, and from a file into a
>>> logical volume.  You won't destroy the volume group unless you do
>>> something dumb like trying to copy it directly onto a physical volume.
>>> Logical volumes are just block devices as far as the kernel is
>>> concerned.
>>
>> You mean I need to create a LV (of the same size) and then use dd to
>> write the backup into it?  That doesn't seem like a safe method.
>
> Doing backups with dd isn't terribly practical, but it is completely
> safe if done correctly.  The LV would need to be the same size or
> larger, or else your filesystem will be truncated.

Yes, my impression is that it isn't very practical or a good method, and
I find it strange that LVM is still lacking some major features.

 How about ZFS as root file system?  I'd rather create a pool over all
 the disks and create file systems within the pool than use something
 like ext4 to get the system to boot.
>>>
>>> I doubt zfs is supported by grub and such, so you'd have to do the
>>> usual in-betweens as you're alluding to.  However, I suspect it would
>>> generally work.  I haven't really used zfs personally other than
>>> tinkering around a bit in a VM.
>>
>> That would be a very big disadvantage.  When you use zfs, it doesn't
>> really make sense to have extra partitions or drives; you just want to
>> create a pool from all drives and use that.  Even if you accept a boot
>> partition, that partition must be on a raid volume, so you either have
>> to dedicate at least two disks to it, or you're employing software raid
>> for a very small partition and cannot use the whole device for ZFS as
>> recommended.  That just sucks.
>
> Just create a small boot partition and give the rest to zfs.  A
> partition is a block device, just like a disk.  ZFS doesn't care if it
> is managing the entire disk or just a partition.

ZFS does care: You cannot export ZFS pools residing on partitions, and
apparently ZFS cannot use the disk cache as efficiently when it uses
partitions.  Caching in memory is also less efficient because another
file system has its own cache.  On top of that, you have the overhead of
software raid for that small partition unless you can dedicate
hardware-raided disks for /boot.

> This sort of thing was very common before grub2 started supporting
> more filesystems.

That doesn't mean it's a good setup.  I'm finding it totally
undesirable.  Having a separate /boot partition has always been a
crutch.

>> Well, I don't want to use btrfs (yet).  The raid capabilities of btrfs
>> are probably one of its most unstable features.  They are derived from
>> mdraid:  Can they compete with ZFS both in performance and, more
>> important, reliability?
>>
>
>
> Btrfs raid1 is about as stable as btrfs without raid.  I can't say
> whether any code from mdraid was borrowed but btrfs raid works
> completely differently and has about as much in common with mdraid as
> zfs does.

Hm, I might have misunderstood an article I've read.

> I can't speak for zfs performance, but btrfs performance isn't all
> that great right now - I don't think there is any theoretical reason
> why it couldn't be as good as zfs one day, but it isn't today.

Give it another 10 years, and btrfs might be the default choice.

> Btrfs is certainly far less reliable than zfs on solaris - zfs on
> linux has less long-term history of any kind but most seem to think it
> works reasonably well.

It seems that ZFS does work (I can't say anything about its reliability
yet), and it provides a solution unlike any other FS.  Btrfs doesn't
fully work yet, see [1].


[1]: https://btrfs.wiki.kernel.org/index.php/RAID56

>> With ZFS at hand, btrfs seems pretty obsolete.
>
> You do realize that btrfs was created when ZFS was already at hand,
> right?  I don't think that ZFS will be likely to make btrfs obsolete
> unless it adopts more dynamic desktop-oriented features (like being
> able to modify a vdev), and is relicensed to something GPL-compatible.
> Unless those happen, it is unlikely that btrfs is going to go away,
> unless it is replaced by something different.

Let's say it seems /currently/ obsolete.  It's not fully working yet,
reliability is very questionable, and it's not as easy to handle as ZFS.
By the time btrfs has matured to the point where it isn't obsolete
anymore, chances are that there will be something else which replaces
it.

Solutions are needed /now/, not in about 10 years when btrfs might be
ready.


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] installing Gentoo in a xen VM

2015-01-11 Thread Rich Freeman
On Sun, Jan 11, 2015 at 8:14 AM, lee  wrote:
> Rich Freeman  writes:
>>
>> Doing backups with dd isn't terribly practical, but it is completely
>> safe if done correctly.  The LV would need to be the same size or
>> larger, or else your filesystem will be truncated.
>
> Yes, my impression is that it isn't very practical or a good method, and
> I find it strange that LVM is still lacking some major features.

Generally you do backup at the filesystem layer, not at the volume
management layer.  LVM just manages a big array of disk blocks.  It
has no concept of files.
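
That is, something along the lines of (a sketch; paths and host are
placeholders):

# rsync -aHAX --one-file-system /mnt/vm-root/ other_host:/backups/vm-root/

or

# tar -C /mnt/vm-root -cpf - . | ssh other_host 'cat > /backups/vm-root.tar'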

>>
>> Just create a small boot partition and give the rest to zfs.  A
>> partition is a block device, just like a disk.  ZFS doesn't care if it
>> is managing the entire disk or just a partition.
>
> ZFS does care: You cannot export ZFS pools residing on partitions, and
> apparently ZFS cannot use the disk cache as efficiently when it uses
> partitions.

Cite?  This seems unlikely.

> Caching in memory is also less efficient because another
> file system has its own cache.

There is no other filesystem.  ZFS is running on bare metal.  It is
just pointing to a partition on a drive (an array of blocks) instead
of the whole drive (an array of blocks).  The kernel does not cache
partitions differently from drives.

> On top of that, you have the overhead of
> software raid for that small partition unless you can dedicate
> hardware-raided disks for /boot.

Just how often are you reading/writing from your boot partition?  You
only read from it at boot time, and you only write to it when you
update your kernel/etc.  There is no requirement for it to be raided
in any case, though if you have multiple disks that wouldn't hurt.

>
>> This sort of thing was very common before grub2 started supporting
>> more filesystems.
>
> That doesn't mean it's a good setup.  I'm finding it totally
> undesirable.  Having a separate /boot partition has always been a
> crutch.

Better not buy an EFI motherboard.  :)

>
>>> With ZFS at hand, btrfs seems pretty obsolete.
>>
>> You do realize that btrfs was created when ZFS was already at hand,
>> right?  I don't think that ZFS will be likely to make btrfs obsolete
>> unless it adopts more dynamic desktop-oriented features (like being
>> able to modify a vdev), and is relicensed to something GPL-compatible.
>> Unless those happen, it is unlikely that btrfs is going to go away,
>> unless it is replaced by something different.
>
> Let's say it seems /currently/ obsolete.

You seem to have an interesting definition of "obsolete" - something
which holds potential promise for the future is better described as
"experimental."

>
> Solutions are needed /now/, not in about 10 years when btrfs might be
> ready.
>

Well, feel free to create one.  Nobody is stopping anybody from using
zfs, but unless it is either relicensed by Oracle or the
kernel/grub/etc is relicensed by everybody else you're unlikely to see
it become a mainstream solution.  That seems to be the biggest barrier
to adoption, though it would be nice for small installations if vdevs
were more dynamic.

By all means use it if that is your preference.  A license may seem
like a small thing, but entire desktop environments have been built as
a result of them.  When a mainstream linux distro can't put ZFS
support on their installation CD due to licensing compatibility it
makes it pretty impractical to use it for your default filesystem.

I'd love to see the bugs worked out of btrfs faster, but for what I've
paid for it so far I'd say I'm getting good value for my $0.  It is
FOSS - it gets done when those contributing to it (whether paid or
not) are done.  The ones who are paying for it get to decide for
themselves if it meets their needs, which could be quite different
from yours.

I'd actually be interested in a comparison of the underlying btrfs vs
zfs designs.  I'm not talking about implementation (bugs/etc), but the
fundamental designs.  What features are possible to add to one which
are impossible to add to the other, what performance limitations will
the one always suffer in comparison to the other, etc?  All the
comparisons I've seen just compare the implementations, which is
useful if you're trying to decide what to install /right now/ but less
so if you're trying to understand the likely future of either.

-- 
Rich



Re: [gentoo-user] installing Gentoo in a xen VM

2015-01-11 Thread lee
Rich Freeman  writes:

> On Sun, Jan 11, 2015 at 8:14 AM, lee  wrote:
>> Rich Freeman  writes:
>>>
>>> Doing backups with dd isn't terribly practical, but it is completely
>>> safe if done correctly.  The LV would need to be the same size or
>>> larger, or else your filesystem will be truncated.
>>
>> Yes, my impression is that it isn't very practical or a good method, and
>> I find it strange that LVM is still lacking some major features.
>
> Generally you do backup at the filesystem layer, not at the volume
> management layer.  LVM just manages a big array of disk blocks.  It
> has no concept of files.

That may require downtime, while the whole idea of taking snapshots and then
backing up the volume is to avoid that downtime.

>>> Just create a small boot partition and give the rest to zfs.  A
>>> partition is a block device, just like a disk.  ZFS doesn't care if it
>>> is managing the entire disk or just a partition.
>>
>> ZFS does care: You cannot export ZFS pools residing on partitions, and
>> apparently ZFS cannot use the disk cache as efficiently when it uses
>> partitions.
>
> Cite?  This seems unlikely.

,----[ man zpool ]
| For pools to be portable, you must give the zpool command
| whole disks, not just partitions, so that ZFS can label the
| disks with portable EFI labels.  Otherwise, disk drivers on
| platforms of different endianness will not recognize the
| disks.
`----

You may be able to export them, and then you don't really know what
happens when you try to import them.  I didn't keep a bookmark for the
article that mentioned the disk cache.

When you read about ZFS, you'll find that using the whole disk is
recommended while using partitions is not.

>> Caching in memory is also less efficient because another
>> file system has its own cache.
>
> There is no other filesystem.  ZFS is running on bare metal.  It is
> just pointing to a partition on a drive (an array of blocks) instead
> of the whole drive (an array of blocks).  The kernel does not cache
> partitions differently from drives.

How do you use a /boot partition that doesn't have a file system?

>> On top of that, you have the overhead of
>> software raid for that small partition unless you can dedicate
>> hardware-raided disks for /boot.
>
> Just how often are you reading/writing from your boot partition?  You
> only read from it at boot time, and you only write to it when you
> update your kernel/etc.  There is no requirement for it to be raided
> in any case, though if you have multiple disks that wouldn't hurt.

If you want to accept that the system goes down or has to be brought
down or is unable to boot because the disk you have your /boot partition
on has failed, you may be able to get away with a non-raided /boot
partition.

When you do that, what's the advantage other than saving the software
raid?  You still need to either dedicate a disk to it, or you have to
leave a part of all the other disks unused and cannot use them as a
whole for ZFS because otherwise they will be of different sizes.

>>> This sort of thing was very common before grub2 started supporting
>>> more filesystems.
>>
>> That doesn't mean it's a good setup.  I'm finding it totally
>> undesirable.  Having a separate /boot partition has always been a
>> crutch.
>
> Better not buy an EFI motherboard.  :)

Yes, they are a security hazard and a PITA.  Maybe I can sit it out
until they come up with something better.

 With ZFS at hand, btrfs seems pretty obsolete.
>>>
>>> You do realize that btrfs was created when ZFS was already at hand,
>>> right?  I don't think that ZFS will be likely to make btrfs obsolete
>>> unless it adopts more dynamic desktop-oriented features (like being
>>> able to modify a vdev), and is relicensed to something GPL-compatible.
>>> Unless those happen, it is unlikely that btrfs is going to go away,
>>> unless it is replaced by something different.
>>
>> Let's say it seems /currently/ obsolete.
>
> You seem to have an interesting definition of "obsolete" - something
> which holds potential promise for the future is better described as
> "experimental."

Can you build systems on potential promises for the future?

If the resources it takes to develop btrfs were put towards
improving ZFS, or the other way round, wouldn't that be more efficient?
We might even have a better solution available now.  Of course, it's not
a good idea to remove variety, so it's a dilemma.  But are the features
provided or intended to be provided and the problems both btrfs and ZFS
are trying to solve so different that each of them needs to
re-invent the wheel?

>> Solutions are needed /now/, not in about 10 years when btrfs might be
>> ready.
>>
>
> Well, feel free to create one.  Nobody is stopping anybody from using
> zfs, but unless it is either relicensed by Oracle or the
> kernel/grub/etc is relicensed by everybody else you're unlikely to see
> it become a mainstream 

Re: [gentoo-user] installing Gentoo in a xen VM

2015-01-11 Thread Rich Freeman
On Sun, Jan 11, 2015 at 1:42 PM, lee  wrote:
> Rich Freeman  writes:
>>
>> Generally you do backup at the filesystem layer, not at the volume
>> management layer.  LVM just manages a big array of disk blocks.  It
>> has no concept of files.
>
> That may require downtime while the idea of taking snapshots and then
> backing up the volume is to avoid the downtime.

Sure, which is why btrfs and zfs support snapshots at the filesystem
layer.  You can do an lvm snapshot but it requires downtime unless you
want to mount an unclean snapshot for backups.
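
For completeness, the snapshot route looks roughly like this (a sketch;
names and sizes are made up, and the result is only crash-consistent unless
the guest is quiesced first):

# lvcreate --size 5G --snapshot --name vm1_snap vg0/vm1
# dd if=/dev/vg0/vm1_snap bs=4M | ssh other_host 'cat > /backups/vm1.img'
# lvremove vg0/vm1_snap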

>
 Just create a small boot partition and give the rest to zfs.  A
 partition is a block device, just like a disk.  ZFS doesn't care if it
 is managing the entire disk or just a partition.
>>>
>>> ZFS does care: You cannot export ZFS pools residing on partitions, and
>>> apparently ZFS cannot use the disk cache as efficiently when it uses
>>> partitions.
>>
>> Cite?  This seems unlikely.
>
> ,----[ man zpool ]
> | For pools to be portable, you must give the zpool command
> | whole disks, not just partitions, so that ZFS can label the
> | disks with portable EFI labels.  Otherwise, disk drivers on
> | platforms of different endianness will not recognize the
> | disks.
> `----
>
> You may be able to export them, and then you don't really know what
> happens when you try to import them.  I didn't keep a bookmark for the
> article that mentioned the disk cache.
>
> When you read about ZFS, you'll find that using the whole disk is
> recommended while using partitions is not.

Ok, I get the EFI label issue if zfs works with multiple endianness
and only stores the setting in the EFI label (which seems like an odd
way to do things).  You didn't mention anything about disk cache and
it seems unlikely that using partitions vs whole drives is going to
matter here.

Honestly, I feel like there is a lot of cargo cult mentality with many
in the ZFS community.  Another one of those "must do" things is using
ECC RAM.  Sure, you're more likely to end up with data corruptions
without it than with it, but the same is true with ANY filesystem.
I've yet to hear any reasonable argument as to why ZFS is more
susceptible to memory corruption than ext4.

>
>>> Caching in memory is also less efficient because another
>>> file system has its own cache.
>>
>> There is no other filesystem.  ZFS is running on bare metal.  It is
>> just pointing to a partition on a drive (an array of blocks) instead
>> of the whole drive (an array of blocks).  The kernel does not cache
>> partitions differently from drives.
>
> How do you use a /boot partition that doesn't have a file system?

Oh, I thought you meant that the memory cache of zfs itself is less
efficient.  I'd be interested in a clear explanation as to why
10x100GB filesystems use cache differently than a 1x1TB filesystem if
file access is otherwise the same.  However, even if having a 1GB boot
partition mounted caused wasted cache space that problem is easily
solved by just not mounting it except when doing kernel updates.

>
>>> On top of that, you have the overhead of
>>> software raid for that small partition unless you can dedicate
>>> hardware-raided disks for /boot.
>>
>> Just how often are you reading/writing from your boot partition?  You
>> only read from it at boot time, and you only write to it when you
>> update your kernel/etc.  There is no requirement for it to be raided
>> in any case, though if you have multiple disks that wouldn't hurt.
>
> If you want to accept that the system goes down or has to be brought
> down or is unable to boot because the disk you have your /boot partition
> on has failed, you may be able to get away with a non-raided /boot
> partition.
>
> When you do that, what's the advantage other than saving the software
> raid?  You still need to either dedicate a disk to it, or you have to
> leave a part of all the other disks unused and cannot use them as a
> whole for ZFS because otherwise they will be of different sizes.

Sure, when I have multiple disks available and need a boot partition I
RAID it with software RAID.  So what?  Updating your kernel /might/ be
a bit slower when you do that twice a month or whatever.
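
E.g. (a sketch; device names are placeholders):

# mdadm --create /dev/md0 --metadata=1.0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# mkfs.ext2 /dev/md0

(--metadata=1.0 keeps the superblock at the end of the partition, so a
bootloader that doesn't know about md still sees a plain filesystem.)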

>> Better not buy an EFI motherboard.  :)
>
> Yes, they are a security hazard and a PITA.  Maybe I can sit it out
> until they come up with something better.

Security hazard?  How is being able to tell your motherboard to only
boot software of your own choosing a security hazard?  Or are you
referring to something other than UEFI?

I think the pain is really only there because most of the
utilities/etc haven't been updated to the new reality.

In any case, I don't see it going away anytime soon.

>
> With ZFS at hand, btrfs seems pretty obsolete.

 You do realize that btrfs was created when ZFS was already at hand,
 right?  I don't think that ZFS will be likely to make btrfs obsolete
 unless