Re: [OpenIndiana-discuss] send/receive .. I expected .zfs directories

2017-03-25 Thread Harry Putnam
Doug Hughes  writes:

> silly question: is the filesystem mounted on the receive side? if
> you just sent it, you'll want to mount it.

Yes, for a silly pilot.  You nailed it .. I was too dopey to notice it
was not mounted.


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] What to do when zfs destroy -f or -R or both doesn't destroy

2017-03-25 Thread Harry Putnam
I'm cleaning out a machine that I'm about to bring down so I can
reinstall hipster in place of the 151_9 that has been running there.

I found one stubborn snapshot that zfs destroy just will not remove.

I tried the -f, -r, and -R flags.

The zfs message says the snapshot is busy.

I could not find any evidence of it actually being busy, and finally
rebooted just in case something was running that I could not find.
On reboot I hit an important service failure that dropped me into a
console shell.

That error is not what I'm asking about in this post (it will be the
subject of a separate post), but paraphrased and shortened it was:

  svc:/system/filesystem/local:default failed with exit status 95

and the service was put into maintenance state.

Ok, now back to the troublesome snapshot: in the console shell, zfs
destroy again just plain will not remove this snapshot, and hence the
fs itself.  The zfs message continues to say the snapshot is busy.

Surely there is some way to get this thing destroyed, even if it
actually is busy... though I doubt that is the case.
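
For reference, these are the usual suspects I know to check; the
snapshot name pool/fs@snap and the path below are just stand-ins for
the real ones, so treat this as a sketch rather than a recipe:

  # any user holds keeping the snapshot alive?
  zfs holds pool/fs@snap
  zfs release <tag> pool/fs@snap        # release each tag reported

  # any clones descended from it (if this zfs supports the clones property)?
  zfs get -H -o value clones pool/fs@snap

  # is anything sitting in the snapshot's .zfs mount?
  fuser -c /path/to/fs/.zfs/snapshot/snap

  # last resort: mark it for deferred destroy so it disappears
  # once whatever is holding it lets go
  zfs destroy -d pool/fs@snap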


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] send/receive .. I expected .zfs directories

2017-03-25 Thread Doug Hughes
silly question: is the filesystem mounted on the receive side? if you just sent 
it, you'll want to mount it.
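
A quick way to check and fix that, assuming the dataset names from the
earlier post (p0/vb and its children), would be something along these
lines:

  # is it mounted, and where would it go?
  zfs get -r mounted,mountpoint,canmount p0/vb

  # mount that dataset, or everything that isn't mounted yet
  zfs mount p0/vb
  zfs mount -a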

Sent from my android device.

-Original Message-
From: Harry Putnam 
To: openindiana-discuss@openindiana.org
Sent: Sat, 25 Mar 2017 19:35
Subject: [OpenIndiana-discuss] send/receive .. I expected .zfs directories

I just completed my very first send/receive.  I was surprised to find
no .zfs directories once the recv completed.
Did I do something wrong or is that normal?
Am I supposed to create them?

Here is what I did in general.

On the send host I had two filesystems: p0/vb and p0/vb/vm.
I sent/recv'ed p0/vb and then did the same for p0/vb/vm.

On the recv host I first tried just creating p0, but that didn't seem
to work when attempting to send p0/vb.  Eventually I tried creating
p0/vb on the recv host... then an error message told me to use -F on
the receiving side.  So:
  `zfs send -v p0/vb | ssh recv-host zfs recv -vF p0/vb'

That worked.
Then without creating anything more on recv host:

 `zfs send -v p0/vb/vm | ssh recv-host zfs recv -v p0/vb/vm'

And that also worked.

I really don't know how experienced people do these things.
I do know that in general the zfs docs promote using fairly
fine-grained filesystems.  I assume that means something like the case
above.

Instead of one fs, p0/vb/vm, I have two as shown above.  When it comes
to sending and receiving those filesystems, it would seem that one must
send first p0/vb and then p0/vb/vm.  If one had filesystems several
layers deeper or with more endpoints, this could get quite tedious.

I suspect I may be missing the boat here somewhere, but let's say it
were even more fine-grained, like an fs for each vm and one for the
required ISOs:

  p0/vb/
  p0/vb/vm
  p0/vb/vm/vm1
  p0/vb/vm/vm2
  p0/vb/vm/vm3
  p0/vb/vm/vm4
  p0/vb/isos

Then it seems this would require a whole lot of sending and receiving
to move this data.

Seven separate send/receives by my count.

So I'm convinced that either fine-grainedness is not so wonderful, or
I'm really not understanding how this is done.

I wonder how folks with some experience in this stuff do this sort of
thing.



___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] send/receive .. I expected .zfs directories

2017-03-25 Thread Harry Putnam
I just completed my very first send/receive.  I was surprised to find
no .zfs directories once the recv completed.
Did I do something wrong or is that normal?
Am I supposed to create them?

Here is what I did in general.

On the send host I had two filesystems: p0/vb and p0/vb/vm.
I sent/recv'ed p0/vb and then did the same for p0/vb/vm.

On the recv host I first tried just creating p0, but that didn't seem
to work when attempting to send p0/vb.  Eventually I tried creating
p0/vb on the recv host... then an error message told me to use -F on
the receiving side.  So:
  `zfs send -v p0/vb | ssh recv-host zfs recv -vF p0/vb'

That worked.
Then without creating anything more on recv host:

 `zfs send -v p0/vb/vm | ssh recv-host zfs recv -v p0/vb/vm'

And that also worked.

I really don't know how experienced people do these things.
I do know that in general the zfs docs promote using fairly
fine-grained filesystems.  I assume that means something like the case
above.

Instead of one fs, p0/vb/vm, I have two as shown above.  When it comes
to sending and receiving those filesystems, it would seem that one must
send first p0/vb and then p0/vb/vm.  If one had filesystems several
layers deeper or with more endpoints, this could get quite tedious.

I suspect I may be missing the boat here somewhere, but let's say it
were even more fine-grained, like an fs for each vm and one for the
required ISOs:

  p0/vb/
  p0/vb/vm
  p0/vb/vm/vm1
  p0/vb/vm/vm2
  p0/vb/vm/vm3
  p0/vb/vm/vm4
  p0/vb/isos

Then it seems this would require a whole lot of sending and receiving
to move this data.

Seven separate send/receives by my count.

So I'm convinced that either fine-grainedness is not so wonderful, or
I'm really not understanding how this is done.

I wonder how folks with some experience in this stuff do this sort of
thing.
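
For what it's worth, a recursive snapshot plus a recursive send should
cover a whole tree like this in one pass; the snapshot name @move below
is made up, so treat this as a sketch rather than a tested recipe:

  # snapshot p0/vb and every descendant fs atomically
  zfs snapshot -r p0/vb@move

  # -R sends the dataset plus all descendants, their properties and
  # snapshots in a single stream; -u on the receive keeps the datasets
  # from being mounted until you're ready
  zfs send -R p0/vb@move | ssh recv-host zfs recv -Fu p0/vb

That would move p0/vb, p0/vb/vm, the per-vm filesystems and p0/vb/isos
in one send/receive instead of seven.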



___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] Wiki comments now disabled due to spam

2017-03-25 Thread Aurélien Larcher
Hi,
Due to very recent spam activity in the comment sections, I have
disabled commenting.

Kind regards

Aurélien

-- 
---
Praise the Caffeine embeddings
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] Wiki page for upgrading systems is out of date.

2017-03-25 Thread Aurélien Larcher
Hi,
I was pointed to a Wiki page by a newcomer who wanted to upgrade a
system.  He asked for confirmation that the steps described there are
correct:

https://wiki.openindiana.org/oi/Upgrading+OpenIndiana

The page was last updated in 2013 and proves again that keeping
obsolete pages on the Wiki is a mistake.

I quickly fixed the page to make it less misleading, but if nobody
steps up to update the pages we should definitely clean out the Wiki:
not in 2 or 4 years, but now.
Having such obsolete information on the official Wiki is harmful.

-- 
---
Praise the Caffeine embeddings
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] 800GB in 6 discs - 460GB in raidz

2017-03-25 Thread Harry Putnam
Timothy Coalson  writes:

> On Fri, Mar 24, 2017 at 5:43 PM, Harry Putnam  wrote:
>>
>> On the vm, I've created discs like so:
>>
>> 2 @  96G
>> 2 @ 116G
>> 2 @ 216G
>>
> These have just over 2x variation in size, and you put them all in a
> single raidz vdev; that is why your space is "missing" - it is just like
> making a mirror out of a 96G disk and a 216G disk, which would have only
> 96G available and "lose" over 100G of space.  raidz doesn't do anything
> magical for different-sized disks - it uses a slice from each device that
> is equal to the *smallest* device in the vdev.
>
> So, your raw space *before* parity or overhead (which is what "zpool list"
> says) should be 6*96G = 576G.  Remove one device's worth for raidz1 (5*96G =
> 480G) and a fudge factor for metadata overhead, and that lands you at the
> 460G that "zfs list" reports.

Wow, I really have missed the boat here.  I did not realize that raidz
would reduce everything to the lowest common denominator.

I won't be so `tricky' with disc sizing from here on.  The sad part is
that since it is vm stuff, I could just as well have used all 200 G
discs and had 1200 G raw space ... or even better, two 600 G discs in a
mirrored arrangement, and forgotten all about raidz.


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] 800GB in 6 discs - 460GB in raidz

2017-03-25 Thread Harry Putnam
jason matthews  writes:

> On 3/24/17 3:43 PM, Harry Putnam wrote:
>> I continue to have a problem understanding the output of zfs list.
>
> You may want zpool list, depending on what you're trying to get.  Let's
> see what you have done; please show us: zpool status p0

Ah, yes .. shows a major difference from zfs list

>> Ok, if that is correct, then it means that the 6 disks, which
>> individually total 800+ GB, have been reduced by nearly half to
>> accommodate raidz.

> Did you use raidz2?  In any case, I almost never deploy raidz(2).
> Mirrors offer faster writes with just a minimal trade-off in money and
> storage bays.

First, a note: I have since moved on to running zfs send/recv, but with
a new and bigger pool on the receiving end.

Regarding the arrangement in my OP:
I did not use raidz2.  I used raidz1, specifically as in
  zpool create p0 raidz1 disk disk disk disk disk disk

I've since changed things a bit by adding an additional 416 G disk, so
now there are 7 disks... and this time, with the additional disk in
place, I did use raidz2, for what I figured would give some additional
data protection.

So with the new setup:

  2 @  26 G (mirrored pair for rpool)
  ===================================

  zpool p0 discs:
  2 @  96 G ..  192 G
  2 @ 116 G ..  232 G
  2 @ 216 G ..  432 G
  1 @ 416 G ..  416 G
              =======
               1272 G

zfs list (under raidz2 now) showed 460+ G available.

But as you've pointed out `zpool list' shows a quite different
picture:

These stats are with 7 discs totalling 1272 G under raidz2

  zpool list p0
  NAME   SIZE  ALLOC   FREE  EXPANDSZ  FRAG   CAP  DEDUP  HEALTH  ALTROOT
  p0     668G   333G   335G         -    0%   49%  1.00x  ONLINE  -

  ===

  zfs list -r p0
  NAME          USED  AVAIL  REFER  MOUNTPOINT
  p0            237G   223G  42.7K  /z
  p0/testg     1.07M   223G   532K  /z/testg
  p0/testg/t1   532K   223G   532K  /z/testg/t1
  p0/vb         237G   223G  6.56G  /z/vb
  p0/vb/vm      230G   223G   230G  /z/vb/vm
  


>> This is an install on a vbox vm.  I created 6 more discs beyond 2 for
>> a mirrored rpool.

> The English-to-English translation is: you made two pools, one of
> which has six disks in raidz, and another pool that is a single set of
> mirrors.
> Am I following you?

Yes, but as explained above, the setup is different now: the raidz is
now raidz2 and there are 7 discs instead of 6.

> [snip some stuff i could not grok]
Sorry; apparently I did a poor job of making myself understood.

>> So, in round figures it loses 50 % of available space in raidz
>> config.
>
> No, it shouldn't.  If you used raidz2 then you would lose 1/3 of your
> 6-disk pool to parity.

`zpool list' agrees with your assessment above.

But it's a slightly different story now, in raidz2 with 7 discs:
668 G of a possible 1272 G, so close to 50 % [Note: see `zpool list'
above].
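
Assuming the smallest-device rule Timothy described applies here too
(every disk contributes only a 96 G slice, the size of the smallest),
the numbers work out roughly like this:

  raw size (zpool list):  7 disks x 96 G         = 672 G, ~668 G reported
  usable   (zfs list):    (7 - 2 parity) x 96 G  = 480 G, ~460 G after
                                                   metadata overhead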

>> I have no experience with raidz and have only ever mirrored paired
>> discs.

> Good man. If Jesus was a storage engineer, that is how he would do it.

My reasons weren't quite so inspired though... My pea brain saw
mirroring as simpler.

>> I put these discs in raidz1 in an effort to get a little more out of
>> the total space, but in fact got very very little more space.

> zfs set compression=lz4 p0

Thanks (I should have thought of that.)
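
Something I'll keep in mind: as far as I understand, compression only
applies to data written after the property is set, and the effect can
be checked afterwards along these lines:

  zfs set compression=lz4 p0
  zfs get -r compression,compressratio p0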

>> I guessed that I would be left with around 600 GB of space based
>> on a wag only.
>
> Wag?

  `Wild assed guess'

>> This is all assuming I haven't made some boneheaded mistake or am
>> suffering from a boneheaded non-understanding of what `zfs list' tells
>> us.

[...]

> we'll have to review the bonehead part after you send me zpool status

[NOTE: understand that the data below is NOT the setup my OP was about]

  zpool status p0
    pool: p0
   state: ONLINE
    scan: none requested
  config:

          NAME        STATE     READ WRITE CKSUM
          p0          ONLINE       0     0     0
            raidz2-0  ONLINE       0     0     0
              c3t3d0  ONLINE       0     0     0
              c3t4d0  ONLINE       0     0     0
              c3t5d0  ONLINE       0     0     0
              c3t6d0  ONLINE       0     0     0
              c3t7d0  ONLINE       0     0     0
              c3t8d0  ONLINE       0     0     0
              c3t9d0  ONLINE       0     0     0

  errors: No known data errors

As you see, the setup is now under raidz2, but when I posted the OP it
was 6 discs under raidz1.

[...] snipped good examples

Thanks for the good examples.

And thanks to all posters for showing great patience.

Viewing things with zpool list has made it clearer, and I'm now gaining
some experience with and understanding of raidz1 and raidz2.

Now, with your and others' help, I'm beginning to see that my old
method of paired disc mirrors is good for what I'm doing, and I will be
using that technique once I get the hardware HOST reinstalled.
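
In other words, something like this, reusing the device names from the
status output above purely for illustration (a sketch, not exactly what
I'll type):

  # pool of striped mirror pairs
  zpool create p0 mirror c3t3d0 c3t4d0 mirror c3t5d0 c3t6d0

  # more pairs can be tacked on later as space is needed
  zpool add p0 mirror c3t7d0 c3t8d0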


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org