Re: [lxc-users] LXD with ZFS and limiting block I/O resources

2016-05-25 Thread Stéphane Graber
On Wed, May 25, 2016 at 05:05:03PM +0200, tapczan wrote:
> On 25 May 2016 at 16:59, Stéphane Graber  wrote:
> > Well, now that's just weird... why would LXD think that /dev/zfs is the
> > backing block device for your zpool...
> >
> > Can you paste a "zpool status"?
> 
> # zpool status
>   pool: lxd
>  state: ONLINE
>   scan: none requested
> config:
> 
> NAME    STATE     READ WRITE CKSUM
> lxd     ONLINE       0     0     0
>   zfs   ONLINE       0     0     0


Right, that's the problem right there... ZFS doesn't give us a full path,
so we have to make a guess, and the logic right now is to first look
directly in /dev, then in /dev/mapper, ...

But because your LV is called "zfs", it matches /dev/zfs and LXD moves
on with that...

We may be able to make it slightly smarter by having it skip character
devices, which would probably be enough to fix your case.
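
A quick way to see the ambiguity on a box like this (just a sketch; the
paths come from the pool/LV names in this thread, the output is
illustrative and device numbers/permissions will differ):

# ls -lL /dev/zfs /dev/vg0/zfs
crw-------  1 root root   10,  55 May 25 12:17 /dev/zfs
brw-rw----  1 root disk  252,   0 May 25 12:17 /dev/vg0/zfs

The leading "c" vs "b" is the character/block distinction, so skipping
character devices while resolving the vdev name would avoid picking up
/dev/zfs here.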

> 
> errors: No known data errors
> 
> Hmm, the important thing is that this ZFS pool is built on top of an LVM
> volume (which is named "zfs"):
> 
>   --- Logical volume ---
>   LV Path                /dev/vg0/zfs
>   LV Name                zfs
>   VG Name                vg0

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



Re: [lxc-users] LXD with ZFS and limiting block I/O resources

2016-05-25 Thread Brian Candler

On 25/05/2016 16:05, tapczan wrote:

Hmm, the important thing is that this ZFS pool is built on top of an LVM
volume (which is named "zfs"):

   --- Logical volume ---
   LV Path                /dev/vg0/zfs
   LV Name                zfs
   VG Name                vg0

I would expect the logical volume to appear as either or both of these:

ls -l /dev/vg0/zfs
ls -l /dev/mapper/vg0-zfs

which in turn would be symlinks to /dev/dm-X

The /dev/zfs special file has nothing to do with a logical volume called 
zfs, as far as I know.
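
On a typical setup that would look roughly like this (names taken from the
lvdisplay output above; dates and dm numbers are illustrative):

ls -l /dev/vg0/zfs /dev/mapper/vg0-zfs
lrwxrwxrwx 1 root root 7 May 25 12:17 /dev/mapper/vg0-zfs -> ../dm-0
lrwxrwxrwx 1 root root 7 May 25 12:17 /dev/vg0/zfs -> ../dm-0

/dev/zfs itself is the control node the ZFS userspace tools use to talk to
the kernel module, which is why it exists regardless of any LV name.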


Re: [lxc-users] LXD with ZFS and limiting block I/O resources

2016-05-25 Thread tapczan
On 25 May 2016 at 16:59, Stéphane Graber  wrote:
> Well, now that's just weird... why would LXD think that /dev/zfs is the
> backing block device for your zpool...
>
> Can you paste a "zpool status"?

# zpool status
  pool: lxd
 state: ONLINE
  scan: none requested
config:

NAME    STATE     READ WRITE CKSUM
lxd     ONLINE       0     0     0
  zfs   ONLINE       0     0     0

errors: No known data errors

Hmm, the important thing is that this ZFS pool is built on top of an LVM
volume (which is named "zfs"):

  --- Logical volume ---
  LV Path                /dev/vg0/zfs
  LV Name                zfs
  VG Name                vg0

Re: [lxc-users] LXD with ZFS and limiting block I/O resources

2016-05-25 Thread Stéphane Graber
On Wed, May 25, 2016 at 04:56:10PM +0200, tapczan wrote:
> On 25 May 2016 at 16:50, Stéphane Graber  wrote:
> > Can you check through /dev for the device with:
> >  - type: block
> >  - major: 10
> >  - minor: 55
> 
> # ls -la /dev/
> crw---  1 root   root 10,  55 May 25 12:17 zfs

Well, now that's just weird... why would LXD think that /dev/zfs is the
backing block device for your zpool...

Can you paste a "zpool status"?

> 
> 
> > In general, the LXD code does figure out what the backing devices are
> > for your zpool and lets you set blkio limits on it. Whether zfs respects
> > them is a whole different problem though :)
> 
> Thanks for info.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



Re: [lxc-users] LXD with ZFS and limiting block I/O resources

2016-05-25 Thread tapczan
On 25 May 2016 at 16:50, Stéphane Graber  wrote:
> Can you check through /dev for the device with:
>  - type: block
>  - major: 10
>  - minor: 55

# ls -la /dev/
crw---  1 root   root 10,  55 May 25 12:17 zfs


> In general, the LXD code does figure out what the backing devices are
> for your zpool and lets you set blkio limits on it. Whether zfs respects
> them is a whole different problem though :)

Thanks for info.

Re: [lxc-users] LXD with ZFS and limiting block I/O resources

2016-05-25 Thread Stéphane Graber
On Wed, May 25, 2016 at 04:23:07PM +0200, tapczan wrote:
> Hello
> 
> As I understand it, ZFS is the recommended filesystem for LXD. However, is it
> possible that limiting block I/O resources on ZFS simply doesn't work?
> 
> # lxc info | grep zfs
>   storage: zfs
>   storage.zfs_pool_name: lxd
> 
> 
> # lxc config device set c1 root limits.read 10MB
> error: Block device doesn't support quotas: 10:55
> 
> I'm aware of the compatibility issues with ZFS I/O throttling via cgroups:
> https://github.com/zfsonlinux/zfs/issues/1952
> https://github.com/zfsonlinux/zfs/issues/4275
> 
> Am I missing something, or is limiting block I/O resources really
> impossible at the moment?
> 
> Thanks!

Can you check through /dev for the device with:
 - type: block
 - major: 10
 - minor: 55

Seems like LXD thinks that's what's backing your ZFS pool, and the kernel
doesn't think it's a real block device, so it can't apply I/O quotas to it.
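
For reference, one way to run that check from a shell (a sketch; stat's
%t:%T prints the major:minor pair in hex, so 10:55 shows up as a:37):

# find /dev \( -type b -o -type c \) -exec stat -c '%t:%T %F %n' {} + | grep '^a:37 '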


In general, the LXD code does figure out what the backing devices are
for your zpool and lets you set blkio limits on it. Whether zfs respects
them is a whole different problem though :)
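
For context, on a backend where the root disk does resolve to a real block
device, such a limit ends up as a blkio throttle entry in the container's
cgroup, along these lines (a sketch; the cgroup path and device numbers are
illustrative, 10MB being roughly 10485760 bytes/s):

# cat /sys/fs/cgroup/blkio/lxc/c1/blkio.throttle.read_bps_device
252:0 10485760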

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



[lxc-users] LXD with ZFS and limiting block I/O resources

2016-05-25 Thread tapczan
Hello

As I understand it, ZFS is the recommended filesystem for LXD. However, is it
possible that limiting block I/O resources on ZFS simply doesn't work?

# lxc info | grep zfs
  storage: zfs
  storage.zfs_pool_name: lxd


# lxc config device set c1 root limits.read 10MB
error: Block device doesn't support quotas: 10:55

I'm aware of the compatibility issues with ZFS I/O throttling via cgroups:
https://github.com/zfsonlinux/zfs/issues/1952
https://github.com/zfsonlinux/zfs/issues/4275

Am I missing something, or is limiting block I/O resources really
impossible at the moment?

Thanks!

[lxc-users] Overlayfs Snapshot Clones

2016-05-25 Thread jan.zeller
Hi all,

I am using 

# lxc-info --version
2.0.1

and try to create an LXC container like this:

# lxc-create --template download -n Alpine-3.3.x --bdev overlayfs -- --dist 
alpine --release 3.3 --arch amd64 

which works like a charm.

Then I create a snapshot clone:
# lxc-copy --name=Alpine-3.3.x --newname=test --snapshot

# lxc-ls -f
NAME         STATE   AUTOSTART GROUPS IPV4 IPV6
Alpine-3.3.x STOPPED 0         -      -    -
test         STOPPED 0         -      -    -


Starting the freshly created snapshot clone named 'test' fails:

# lxc-start --name test --logpriority=debug --logfile=/var/log/lxc/test.log

# cat /var/log/lxc/test.log
  lxc-start 20160525153840.984 INFO lxc_start_ui - lxc_start.c:main:264 
- using rcfile /var/lib/lxc/test/config
  lxc-start 20160525153840.984 WARN lxc_confile - 
confile.c:config_pivotdir:1879 - lxc.pivotdir is ignored.  It will soon become 
an error.
  lxc-start 20160525153840.986 INFO lxc_start - 
start.c:lxc_check_inherited:251 - closed inherited fd 4
  lxc-start 20160525153840.990 INFO lxc_container - 
lxccontainer.c:do_lxcapi_start:797 - Attempting to set proc title to [lxc 
monitor] /var/lib/lxc test
  lxc-start 20160525153840.991 INFO lxc_utils - 
utils.c:setproctitle:1460 - setting cmdline failed - Invalid argument
  lxc-start 20160525153840.991 INFO lxc_lsm - lsm/lsm.c:lsm_init:48 - 
LSM security driver nop
  lxc-start 20160525153840.991 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .reject_force_umount  # comment 
this to allow umount -f;  not recommended.
  lxc-start 20160525153840.991 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:446 - Adding native rule for reject_force_umount 
action 0
  lxc-start 20160525153840.991 INFO lxc_seccomp - 
seccomp.c:do_resolve_add_rule:216 - Setting seccomp rule to reject force umounts

  lxc-start 20160525153840.991 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:449 - Adding compat rule for reject_force_umount 
action 0
  lxc-start 20160525153840.991 INFO lxc_seccomp - 
seccomp.c:do_resolve_add_rule:216 - Setting seccomp rule to reject force umounts

  lxc-start 20160525153840.991 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .[all].
  lxc-start 20160525153840.991 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .kexec_load errno 1.
  lxc-start 20160525153840.991 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:446 - Adding native rule for kexec_load action 327681
  lxc-start 20160525153840.991 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:449 - Adding compat rule for kexec_load action 327681
  lxc-start 20160525153840.991 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .open_by_handle_at errno 1.
  lxc-start 20160525153840.991 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:446 - Adding native rule for open_by_handle_at action 
327681
  lxc-start 20160525153840.991 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:449 - Adding compat rule for open_by_handle_at action 
327681
  lxc-start 20160525153840.991 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .init_module errno 1.
  lxc-start 20160525153840.991 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:446 - Adding native rule for init_module action 327681
  lxc-start 20160525153840.991 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:449 - Adding compat rule for init_module action 327681
  lxc-start 20160525153840.991 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .finit_module errno 1.
  lxc-start 20160525153840.991 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:446 - Adding native rule for finit_module action 
327681
  lxc-start 20160525153840.991 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:449 - Adding compat rule for finit_module action 
327681
  lxc-start 20160525153840.991 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .delete_module errno 1.
  lxc-start 20160525153840.991 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:446 - Adding native rule for delete_module action 
327681
  lxc-start 20160525153840.991 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:449 - Adding compat rule for delete_module action 
327681
  lxc-start 20160525153840.991 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:456 - Merging in the compat seccomp ctx into the main 
one
  lxc-start 20160525153840.991 DEBUG    lxc_start - 
start.c:setup_signal_fd:289 - sigchild handler set
  lxc-start 20160525153840.991 INFO lxc_start - 
start.c:lxc_check_inherited:251 - closed inherited fd 4
  lxc-start 20160525153840.991 DEBUG    lxc_console - 
console.c:lxc_console_peer_default:469 - no console peer
  lxc-start 20160525153840.991 INFO lxc_start - start.c:lxc_init:488 - 
'test' is initialized
  lxc-start 20160525153840.992 DEBUG    lxc_start - 

Re: [lxc-users] zfs disk usage for published lxd images

2016-05-25 Thread Tomasz Chmielewski
I've been using btrfs quite a lot and it's a great technology. There are 
some shortcomings though:


1) compression only really works with the compress-force mount option

On a system which only stores text logs (receiving remote rsyslog logs), 
I was gaining around 10% with the compress=zlib mount option - not that 
great for text files/logs. With compress-force=zlib, I'm getting over an 
85% compression ratio (i.e. using just 165 GB of disk space to store 1.2 TB 
of data). Maybe that's a consequence of receiving log streams, I'm not sure 
(but compress-force did fix the poor compression ratio).
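
For reference, that option goes on the mount, e.g. (device and mount point
are made up):

# mount -o compress-force=zlib /dev/sdb1 /srv/logs

or the equivalent /etc/fstab entry:

/dev/sdb1  /srv/logs  btrfs  compress-force=zlib  0 0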



2) the first kernel where I'm not getting out-of-space issues is 4.6 
(which was released yesterday). If you're using a distribution kernel, 
you will probably see out-of-space issues. A quite likely scenario for 
hitting out-of-space with a kernel older than 4.6 is to run a database 
(postgresql, mongo, etc.) and snapshot the volume. Ubuntu users can 
download mainline kernel packages from 
http://kernel.ubuntu.com/~kernel-ppa/mainline/



3) I had some really bad experiences with btrfs quota stability in older 
kernels, and judging by the amount of change in this area on the 
linux-btrfs mailing list, I'd rather wait a few more stable kernels before 
using it again



4) if you use databases, you should chattr +C the database dir, otherwise 
performance will suffer. Please remember that chattr +C has no effect on 
existing files, so you might need to stop your database, copy the files 
out, chattr +C the database dir, and copy the files back in, roughly as 
sketched below.
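
Roughly like this (a sketch using postgresql paths as an example; adjust
paths, ownership and the service name to your setup):

# systemctl stop postgresql
# mv /var/lib/postgresql /var/lib/postgresql.orig
# mkdir /var/lib/postgresql
# chattr +C /var/lib/postgresql
# cp -a /var/lib/postgresql.orig/. /var/lib/postgresql/
# systemctl start postgresql

New files created inside the +C directory inherit the attribute, which is
why the copy-out/copy-in dance is needed.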



Other than that - works fine, snapshots are very useful.

It's hard for me to say which is "more stable" on Linux (btrfs or zfs); my 
bet would be on btrfs getting more attention in the coming year, as it's 
getting its remaining bugs fixed.



Tomasz Chmielewski
http://wpkg.org




On 2016-05-16 20:20, Ron Kelley wrote:

I tried ZFS on various Linux/FreeBSD builds in the past and the
performance was awful.  It simply required too much RAM to perform
properly.  This is why I went the BTRFS route.

Maybe I should look at ZFS again on Ubuntu 16.04...



On 5/16/2016 6:59 AM, Fajar A. Nugraha wrote:
On Mon, May 16, 2016 at 5:38 PM, Ron Kelley  
wrote:

For what's worth, I use BTRFS, and it works great.


Btrfs also works in nested lxd, so if that's your primary use case I highly
recommend btrfs.

Of course, you could also keep using zfs-backed containers, but manually
assign a zvol formatted as btrfs for the first-level container's
/var/lib/lxd (see the sketch below).
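
Something along these lines (a sketch; the pool/volume names and the zvol
size are made up):

# zfs create -V 50G lxd/nested-btrfs
# mkfs.btrfs /dev/zvol/lxd/nested-btrfs
# mount /dev/zvol/lxd/nested-btrfs /var/lib/lxd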

 Container copies are almost instant.  I can use compression with 
minimal overhead,


zfs and btrfs are almost identical in that aspect (snapshot/clone, and
lz4 vs lzop in compression time and ratio). However, lz4 (used in zfs)
is MUCH faster at decompression compared to lzop (used in btrfs),
while lzop uses less memory.


use quotas to limit container disk space,


zfs does that too

and can schedule a deduplication task via cron to save even more 
space.


That is, indeed, only available in btrfs
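
For the cron-driven dedup, one commonly used out-of-band tool (not named
above, so treat it as a suggestion) is duperemove, e.g. a weekly job in
/etc/cron.d:

0 3 * * 0  root  duperemove -dr /var/lib/lxd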

