RE: A few questions about HAMMER master/slave PFS

2018-08-28 Thread Laurent Vivier
Hello Michael,

 
Thank you kindly for your thorough reply; it helped me make sense of what
was going on with the snapshots. Indeed, once I reduced the daily snapshot
retention time to 7d, I was able to get rid of those old snapshots with
hammer cleanup!
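
For reference, the change boiled down to something like this (slave PFS path as
in the output quoted further down; the master side works the same way):

    hammer viconfig /HAMMER_SLAVE/pfs/hanma
    # change the line
    #   snapshots 1d 60d
    # to
    #   snapshots 1d 7d
    # then let cleanup expire the old daily snapshots
    hammer cleanup /HAMMER_SLAVE/pfs/hanma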

 
I also learned that the slave PFS can have its own daily snapshot retention
time, separate from the master, which is neat.

 
Apologies for the borked hammer info output. As it stands now, after running
prune-everything on all PFSs, I still have a noticeable size difference. I'll
investigate some more and maybe give hammer mirror-stream some time to even
things out.
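
In case it is useful to anyone following along, comparing the two sides and
forcing a one-shot catch-up pass could look roughly like this (the master mount
point /HAMMER1_2TB is an assumption, it is not shown above):

    hammer info /HAMMER1_2TB
    hammer info /HAMMER_SLAVE
    # one-shot incremental sync of the master root PFS into the slave PFS;
    # mirror-stream does the same thing continuously
    hammer mirror-copy /HAMMER1_2TB /HAMMER_SLAVE/pfs/hanma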

 
Cheers,

 
Laurent

-----Original message-----
From: Michael Neumann
Sent: Mon 08-27-2018 10:19 pm
Subject: Re: A few questions about HAMMER master/slave PFS
To: Laurent Vivier
CC: users@dragonflybsd.org
On Mon, Aug 27, 2018 at 06:08:10PM +0200, Laurent Vivier wrote:
> Hello DFlyers,
> 
> I am running DragonFly 5.2.2 as an NFS server with 2x 2TB LUKS-backed HDDs 
> with HAMMER1 v7 as FS in a PFS master/slave mirror-stream setup and it's been 
> working great so far :)
> 
> The setup looks like this :
> 
> Disk1 -> LUKS -> HAMMER1_2TB -> PFS# 0 (root) 
> Disk2 -> LUKS -> HAMMER_SLAVE -> PFS# 0 (root) + PFS 1 (slave to HAMMER1_2TB 
> / PFS# 0)
> 
> Now that I am using the system for a little while, I have a few questions 
> regarding its behavior :
> 
> 1) I realized that the HAMMER slave PFS has several snapshots (not created by
> me) that seem impossible to remove, e.g.:

HAMMER1 uses fine-grained snapshots, which means that it basically
automatically creates an "unnamed" snapshot whenever it flushes
something to disk (roughly every 30 seconds). Usually, you don't want to
keep all these fine-grained snapshots and instead keep one snapshot per
day (or one per week...). This is what "hammer cleanup" does. You can
configure its history retention policy by running "hammer viconfig".
From the man page of "hammer cleanup":

                   snapshots  1d 60d  # 0d 0d  for PFS /tmp, /var/tmp, /usr/obj
                   prune      1d 5m
                   rebalance  1d 5m
                   #dedup      1d 5m  # not enabled by default
                   reblock    1d 5m
                   recopy     30d 10m

This means, when you run "hammer cleanup", it takes one snapshot every
day, and retains the last 60 daily snapshots. hammer cleanup performs
other tasks, for instance pruning (1d = every day, for 5 minutes).
Pruning deletes all the intermediate fine-grained snapshots between the
"named" daily snapshots. It also rebalances the B-tree, dedups, reblocks
and recopies. These are all operations to optimize performance. Dedup is
to save space. 
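
For the record, most of these steps can also be run by hand on a single PFS if
you want to see what each one does (paths as in your mail; see hammer(8) for
the exact arguments and fill percentages):

    hammer rebalance /HAMMER_SLAVE/pfs/hanma   # rebalance the B-tree
    hammer reblock /HAMMER_SLAVE/pfs/hanma     # pack/defragment big-blocks
    hammer recopy /HAMMER_SLAVE/pfs/hanma      # like reblock, but rewrites everything
    hammer dedup /HAMMER_SLAVE/pfs/hanma       # deduplicate identical data blocks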

If you want to delete snapshots, just change "snapshots 1d 60d" to, for
instance, "snapshots 1d 7d" and run "hammer cleanup". If you want to
delete all historical data, you might use "hammer prune-everything", but
be careful and read the man page!!!

One nice feature of HAMMER1 is that the master PFS and slave PFS can
have different history retention policies in place.
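
A minimal sketch of that, assuming the master filesystem is mounted at
/HAMMER1_2TB (you didn't mention its mount point) and the slave PFS is
/HAMMER_SLAVE/pfs/hanma:

    hammer viconfig /HAMMER1_2TB            # master: e.g. snapshots 1d 60d
    hammer viconfig /HAMMER_SLAVE/pfs/hanma # slave:  e.g. snapshots 1d 7d
    hammer cleanup /HAMMER1_2TB /HAMMER_SLAVE/pfs/hanma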

> 
> Hikaeme# hammer info /HAMMER_SLAVE
> Volume identification
>     Label           hammer1_secure_slave
>     No. Volumes     1
>     HAMMER Volumes  /dev/mapper/knox2
>     Root Volume     /dev/mapper/knox2
>     FSID            0198767f-7139-11e8-9608-6d626d258b95
>     HAMMER Version  7
> Big-block information
>     Total      238335
>     Used       192009 (80.56%)
>     Reserved       32 (0.01%)
>     Free        46294 (19.42%)
> Space information
>     No. Inodes  35668
>     Total size   1.8T (1999298887680 bytes)
>     Used         1.5T (80.56%)
>     Reserved     256M (0.01%)
>     Free         362G (19.42%)
> PFS information
>       PFS#  Mode    Snaps
>          0  MASTER  0 (root PFS)
>          1  SLAVE   3
> Hikaeme# hammer snapls /HAMMER_SLAVE/pfs/hanma
> Snapshots on /HAMMER_SLAVE/pfs/hanma    PFS#1
> Transaction ID        Timestamp        Note
> 0x0001034045c0    2018-07-04 18:19:42 CEST    -
> 0x0001034406c0    2018-07-09 19:28:04 CEST    -
> 0x00010383bc30    2018-08-12 10:51:07 CEST    -
> Hikaeme# hammer snaprm 0x0001034045c0
> hammer: hammer snaprm 0x0001034045c0: Operation not supported

Have you tried "hammer snaprm /HAMMER_SLAVE/pfs/hanma@@0x0001034045c0"?
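
Something along these lines should show whether it worked (same path and
transaction ID as in your snapls output):

    hammer snapls /HAMMER_SLAVE/pfs/hanma
    hammer snaprm /HAMMER_SLAVE/pfs/hanma@@0x0001034045c0
    hammer snapls /HAMMER_SLAVE/pfs/hanma   # the 2018-07-04 entry should be gone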

> My question here is: should I worry about it, or is that intended behavior?

Re: A few questions about HAMMER master/slave PFS

2018-08-27 Thread Michael Neumann
On Mon, Aug 27, 2018 at 06:08:10PM +0200, Laurent Vivier wrote:
> Hello DFlyers,
> 
> I am running DragonFly 5.2.2 as an NFS server with 2x 2TB LUKS-backed HDDs 
> with HAMMER1 v7 as FS in a PFS master/slave mirror-stream setup and it's been 
> working great so far :)
> 
> The setup looks like this :
> 
> Disk1 -> LUKS -> HAMMER1_2TB -> PFS# 0 (root) 
> Disk2 -> LUKS -> HAMMER_SLAVE -> PFS# 0 (root) + PFS 1 (slave to HAMMER1_2TB 
> / PFS# 0)
> 
> Now that I am using the system for a little while, I have a few questions 
> regarding its behavior :
> 
> 1) I realized that the HAMMER slave PFS has several snapshots (not created by
> me) that seem impossible to remove, e.g.:

HAMMER1 uses fine-grained snapshots, which means that it basically
automatically creates an "unnamed" snapshot whenever it flushes
something to disk (roughly every 30 seconds). Usually, you don't want to
keep all these fine-grained snapshots and instead keep one snapshot per
day (or one per week...). This is what "hammer cleanup" does. You can
configure its history retention policy by running "hammer viconfig".
From the man page of "hammer cleanup":

   snapshots  1d 60d  # 0d 0d  for PFS /tmp, /var/tmp, /usr/obj
   prune      1d 5m
   rebalance  1d 5m
   #dedup     1d 5m   # not enabled by default
   reblock    1d 5m
   recopy     30d 10m

This means, when you run "hammer cleanup", it takes one snapshot every
day, and retains the last 60 daily snapshots. hammer cleanup performs
other tasks, for instance pruning (1d = every day, for 5 minutes).
Pruning deletes all the intermediate fine-grained snapshots between the
"named" daily snapshots. It also rebalances the B-tree, dedups, reblocks
and recopies. These are all operations to optimize performance. Dedup is
to save space. 

If you want to delete snapshots, just change "snapshots 1d 60d" to, for
instance, "snapshots 1d 7d" and run "hammer cleanup". If you want to
delete all historical data, you might use "hammer prune-everything", but
be careful and read the man page!!!

One nice feature of HAMMER1 is that the master PFS and slave PFS can
have different history retention policies in place.

> 
> Hikaeme# hammer info /HAMMER_SLAVE
> Volume identification
>     Label           hammer1_secure_slave
>     No. Volumes     1
>     HAMMER Volumes  /dev/mapper/knox2
>     Root Volume     /dev/mapper/knox2
>     FSID            0198767f-7139-11e8-9608-6d626d258b95
>     HAMMER Version  7
> Big-block information
>     Total      238335
>     Used       192009 (80.56%)
>     Reserved       32 (0.01%)
>     Free        46294 (19.42%)
> Space information
>     No. Inodes  35668
>     Total size   1.8T (1999298887680 bytes)
>     Used         1.5T (80.56%)
>     Reserved     256M (0.01%)
>     Free         362G (19.42%)
> PFS information
>       PFS#  Mode    Snaps
>          0  MASTER  0 (root PFS)
>          1  SLAVE   3
> Hikaeme# hammer snapls /HAMMER_SLAVE/pfs/hanma
> Snapshots on /HAMMER_SLAVE/pfs/hanma    PFS#1
> Transaction ID        Timestamp        Note
> 0x0001034045c0    2018-07-04 18:19:42 CEST    -
> 0x0001034406c0    2018-07-09 19:28:04 CEST    -
> 0x00010383bc30    2018-08-12 10:51:07 CEST    -
> Hikaeme# hammer snaprm 0x0001034045c0
> hammer: hammer snaprm 0x0001034045c0: Operation not supported

Have you tried "hammer snaprm /HAMMER_SLAVE/pfs/hanma@@0x0001034045c0"?

> My question here is: should I worry about it, or is that intended behavior?
> 
> 2) When executing hammer info and looking at the used space between master 
> and slave PFS, I have quite a big difference (22GB, even after running hammer 
> cleanup)
> 
> Hikaeme# hammer info
> Volume identification
>     Label           HAMMER1_2TB
>     No. Volumes     1
>     HAMMER Volumes  /dev/mapper/knox
>     Root Volume     /dev/mapper/knox
>     FSID            81e9d5eb-6be7-11e8-802d-6d626d258b95
>     HAMMER Version  7
> Big-block information
>     Total      238335
>     Used       194636 (81.66%)
>     Reserved       32 (0.01%)
>     Free        43667 (18.32%)
> Space information
>     No. Inodes  35665
>     Total size   1.8T (1999298887680 bytes)
>     Used         1.5T (81.66%)
>     Reserved     256M (0.01%)
>     Free         341G (18.32%)
> PFS information
>       PFS#  Mode    Snaps
>          0  MASTER  0 (root PFS

A few questions about HAMMER master/slave PFS

2018-08-27 Thread Laurent Vivier
Hello DFlyers,

I am running DragonFly 5.2.2 as an NFS server with 2x 2TB LUKS-backed HDDs 
with HAMMER1 v7 as FS in a PFS master/slave mirror-stream setup and it's been 
working great so far :)

The setup looks like this :

Disk1 -> LUKS -> HAMMER1_2TB -> PFS# 0 (root) 
Disk2 -> LUKS -> HAMMER_SLAVE -> PFS# 0 (root) + PFS 1 (slave to HAMMER1_2TB / 
PFS# 0)
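
For anyone who wants to reproduce a similar layout, the rough shape of the
setup is sketched below. The mapper names, labels and the slave PFS path come
from the hammer output further down; the raw disk device names and the master
mount point are assumptions:

    cryptsetup luksOpen /dev/ad1s1d knox     # raw device names assumed
    cryptsetup luksOpen /dev/ad2s1d knox2
    newfs_hammer -L HAMMER1_2TB /dev/mapper/knox
    newfs_hammer -L hammer1_secure_slave /dev/mapper/knox2
    mount_hammer /dev/mapper/knox /HAMMER1_2TB       # master mount point assumed
    mount_hammer /dev/mapper/knox2 /HAMMER_SLAVE
    hammer pfs-status /HAMMER1_2TB                   # note the master's shared-uuid
    hammer pfs-slave /HAMMER_SLAVE/pfs/hanma shared-uuid=<uuid-from-above>
    hammer mirror-stream /HAMMER1_2TB /HAMMER_SLAVE/pfs/hanma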

Now that I am using the system for a little while, I have a few questions 
regarding its behavior :

1) I realized that the HAMMER slave PFS has several snapshots (not created by
me) that seem impossible to remove, e.g.:

Hikaeme# hammer info /HAMMER_SLAVE
Volume identification
    Label   hammer1_secure_slave
    No. Volumes 1
    HAMMER Volumes  /dev/mapper/knox2
    Root Volume /dev/mapper/knox2
    FSID    0198767f-7139-11e8-9608-6d626d258b95
    HAMMER Version  7
Big-block information
    Total  238335
    Used   192009 (80.56%)
    Reserved   32 (0.01%)
    Free    46294 (19.42%)
Space information
    No. Inodes  35668
    Total size   1.8T (1999298887680 bytes)
    Used 1.5T (80.56%)
    Reserved 256M (0.01%)
    Free 362G (19.42%)
PFS information
      PFS#  Mode    Snaps
     0  MASTER  0 (root PFS)
     1  SLAVE   3
Hikaeme# hammer snapls /HAMMER_SLAVE/pfs/hanma
Snapshots on /HAMMER_SLAVE/pfs/hanma    PFS#1
Transaction ID        Timestamp        Note
0x0001034045c0    2018-07-04 18:19:42 CEST    -
0x0001034406c0    2018-07-09 19:28:04 CEST    -
0x00010383bc30    2018-08-12 10:51:07 CEST    -
Hikaeme# hammer snaprm 0x0001034045c0
hammer: hammer snaprm 0x0001034045c0: Operation not supported

My question here is: should I worry about it, or is that intended behavior?

2) When executing hammer info and looking at the used space between master and 
slave PFS, I have quite a big difference (22GB, even after running hammer 
cleanup)

Hikaeme# hammer info
Volume identification
    Label   HAMMER1_2TB
    No. Volumes 1
    HAMMER Volumes  /dev/mapper/knox
    Root Volume /dev/mapper/knox
    FSID    81e9d5eb-6be7-11e8-802d-6d626d258b95
    HAMMER Version  7
Big-block information
    Total  238335
    Used   194636 (81.66%)
    Reserved   32 (0.01%)
    Free    43667 (18.32%)
Space information
    No. Inodes  35665
    Total size   1.8T (1999298887680 bytes)
    Used 1.5T (81.66%)
    Reserved 256M (0.01%)
    Free 341G (18.32%)
PFS information
      PFS#  Mode    Snaps
     0  MASTER  0 (root PFS)

Volume identification
    Label   hammer1_secure_slave
    No. Volumes 1
    HAMMER Volumes  /dev/mapper/knox2
    Root Volume /dev/mapper/knox2
    FSID    0198767f-7139-11e8-9608-6d626d258b95
    HAMMER Version  7
Big-block information
    Total  238335
    Used   191903 (80.52%)
    Reserved   32 (0.01%)
    Free    46400 (19.47%)
Space information
    No. Inodes  35668
    Total size   1.8T (1999298887680 bytes)
    Used 1.5T (80.52%)
    Reserved 256M (0.01%)
    Free 363G (19.47%)
PFS information
      PFS#  Mode    Snaps
     0  MASTER  0 (root PFS)
     1  SLAVE   3

Is that something I should be worried about too? As far as I can tell, the
replication of new files from master to slave works great; I can see the new
files on the slave PFS fairly quickly.

Wishing you all a good day,

Laurent