[OmniOS-discuss] Restructuring ZPool required?

2018-06-18 Thread Oliver Weinmann

Hi,





we have an HGST 4U60 SATA JBOD with 24 x 10TB disks. I just noticed that back 
when we created the pool we only cared about disk space, so we created a 
raidz2 pool with all 24 disks in a single vdev. That is great for capacity but 
really bad for IO, since a raidz vdev delivers roughly the random IOPS of a 
single disk. We only use it for backups and cold CIFS data, but even a single 
VEEAM backup copy job seems to max out the IO; in our case the job both reads 
and writes its data on this storage. Now I wonder whether it makes sense to 
restructure the pool. I have to admit I don't have any other system with 
enough disk space, so I can't simply mirror the snapshots to another system 
and recreate the pool from scratch.





Would adding two ZIL SSDs improve performance?
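For context, a hedged sketch of both options (pool and device names below are invented; zilstat is a separate DTrace tool that may need installing):

```shell
# A SLOG only accelerates the ZIL, i.e. synchronous writes (NFS, databases).
# Most CIFS backup traffic is asynchronous and never touches the ZIL, so
# first check whether the pool actually sees sync writes:
zilstat 10 5

# If it does, attach a mirrored SLOG (device names are examples):
zpool add tank log mirror c5t0d0 c5t1d0

# For random IO, rebuilding as e.g. 4 x 6-disk raidz2 vdevs would give
# roughly 4x the IOPS of one 24-disk vdev, at the cost of capacity:
zpool create tank2 \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz2 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0
```

Note that a raidz vdev's streaming bandwidth scales with its data disks; it is random IOPS that are limited to roughly one disk per vdev.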





Any help is much appreciated.





Best Regards,


Oliver
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] VEEAM backups to CIFS fail only on OmniOS hyperconverged VM

2018-06-18 Thread Oliver Weinmann

Hi,





sorry for not being clear enough. Anyway, problem solved: I created a fresh 
OmniOS VM, configured CIFS with no AD connection and a local user, and the 
backups have now been working fine for 4 days. :)





Thanks and Best Regards,


Oliver





Am 14. Juni 2018 um 14:47 schrieb Dan McDonald :









On Jun 14, 2018, at 3:04 AM, Oliver Weinmann  wrote:

I would be more than happy to use a zone instead of a full-blown VM, but since 
there is no iSCSI or NFS server support in a zone I have to stick with the VM; 
we need NFS because the VM also serves as a datastore for a few VMs.

You rambled a bit here, so I'm not sure what exactly you're asking. I do know 
that:

- CIFS-server-in-a-zone should work

- NFS-server-in-a-zone and iSCSI-target-in-a-zone are both not available right 
now.

There is purported to be a prototype of NFS-server-in-a-zone kicking around 
*somewhere*, but it may have been tied up. I'd watch the distros, especially 
those working on file service, to see if it shows up at some point, from where 
it can be upstreamed to illumos-gate (and then back down to illumos-omnios).

Dan



[OmniOS-discuss] VEEAM backups to CIFS fail only on OmniOS hyperconverged VM

2018-06-14 Thread Oliver Weinmann



Dear All,



I've been struggling with this issue since day one and have not found a 
solution yet. We use VEEAM to back up our VMs, with an OmniOS VM as the CIFS 
target. We have one OmniOS VM for the internal network and one for the DMZ. 
VEEAM backups to the internal one work fine, no problems at all. Backups to 
the DMZ one fail every time, although I can access the CIFS share just fine 
from Windows. When the backup starts, two or three VMs are backed up and then 
it fails. I requested support from VEEAM, and it turns out the same job 
running against a Windows server CIFS share works just fine. I couldn't 
believe that OmniOS was the culprit, as the illumos CIFS implementation is 
very good. So I set up a new OmniOS bare-metal server, created a zone for the 
DMZ, set up a CIFS share and ran the same job: everything worked fine. I 
compared the settings of the VM and the zone and they are 100% identical; the 
only difference is that one is a VM and one is a zone. But since the VEEAM 
backup to the internal VM has no problems, I don't think virtualization is the 
issue here. Is there anywhere I can start investigating further? I would be 
more than happy to use a zone instead of a full-blown VM, but since there is 
no iSCSI or NFS server support in a zone I have to stick with the VM; we need 
NFS because the VM also serves as a datastore for a few VMs.



Any help is really appreciated.



Best Regards,

Oliver


Re: [OmniOS-discuss] zfs send | recv

2018-06-11 Thread Oliver Weinmann
I think I really have to start investigating third-party apps again. Nexenta 
doesn't let me change the zfs send command; I can only adjust settings for the 
autosync job.





Oliver Weinmann
Head of Corporate ICT

Telespazio VEGA Deutschland GmbH
Europaplatz 5 - 64293 Darmstadt - Germany
Ph: +49 (0)6151 8257 744 | Fax: +49 (0)6151 8257 799
oliver.weinm...@telespazio-vega.de<mailto:oliver.weinm...@telespazio-vega.de>
www.telespazio-vega.de

Registered office/Sitz: Darmstadt, Register court/Registergericht: Darmstadt, 
HRB 89231; Managing Director/Geschäftsführer: Sigmar Keller


From: OmniOS-discuss  On Behalf Of 
Guenther Alka
Sent: Montag, 11. Juni 2018 12:16
To: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] zfs send | recv

I suppose you can either keep the last snaps identical on source and target 
with a simple recursive zfs send, or you need a script that takes care of this 
and does a send per filesystem, to allow a different number of snaps on the 
target system.

This is not related to Nexenta; I saw the same on current OmniOS -> OmniOS, 
as both use the same Open-ZFS base from illumos.
Gea
@napp-it.org
Am 11.06.2018 um 10:58 schrieb Oliver Weinmann:
Yes, it is recursive. We have hundreds of child datasets, so single 
filesystems would be a real headache to maintain. :(






From: OmniOS-discuss <omnios-discuss-boun...@lists.omniti.com> On Behalf Of Guenther Alka
Sent: Montag, 11. Juni 2018 09:55
To: omnios-discuss@lists.omniti.com<mailto:omnios-discuss@lists.omniti.com>
Subject: Re: [OmniOS-discuss] zfs send | recv

Did you replicate recursively?
Keeping a different snap history should be possible when you send single 
filesystems.


gea
@napp-it.org
Am 11.06.2018 um 09:11 schrieb Oliver Weinmann:
Hi,


We are replicating snapshots from a Nexenta system to an OmniOS system. 
Nexenta calls this feature autosync. While they say it is only 100% supported 
between Nexenta systems, we managed to get it working with OmniOS too; it's 
not rocket science. But there is one big problem. In the autosync job on the 
Nexenta system you can specify how many snaps to keep locally on the Nexenta 
and how many to keep on the target system. Somehow we always end up with the 
same number of snaps on both systems: autosync always removes all snaps on the 
destination that don't exist on the source. I contacted Nexenta support and 
they told me that this is due to different versions of zfs send and zfs recv. 
There is supposed to be a -K flag that instructs the destination not to 
destroy snapshots that don't exist on the source. Is such a flag available in 
OmniOS? I assume the flag is set on the sending side, so the receiving side 
has to understand it.
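For what it's worth, stock illumos zfs send/recv has no -K flag; the destroy-extra-snapshots behaviour normally comes from the receiving side. A hedged illustration (pool and snapshot names are examples):

```shell
# With -F on a replication stream, zfs receive rolls the destination
# back and destroys snapshots (and, with -R, whole filesystems) that no
# longer exist on the source -- which matches what autosync is doing:
zfs send -R tank/data@snap42 | zfs receive -F backup/data

# Without -F, snapshots that exist only on the destination survive, but
# the receive fails if the destination changed since the last sync:
zfs send -i @snap41 tank/data@snap42 | zfs receive backup/data
```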



Best Regards,

Oliver











--
HfG
Hochschule für Gestaltung
university of design

Schwäbisch Gmünd
Rektor-Klaus-Str. 100
73525 Schwäbisch Gmünd

Guenther Alka, Dipl.-Ing. (FH)
Leiter des Rechenzentrums
head of computer center

Tel 07171 602 627
Fax 07171 69259
guenther.a...@hfg-gmuend.de
http://rz.hfg-gmuend.de


Re: [OmniOS-discuss] zfs send | recv

2018-06-11 Thread Oliver Weinmann
Yes, it is recursive. We have hundreds of child datasets, so single 
filesystems would be a real headache to maintain. :(






From: OmniOS-discuss  On Behalf Of 
Guenther Alka
Sent: Montag, 11. Juni 2018 09:55
To: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] zfs send | recv

Did you replicate recursively?
Keeping a different snap history should be possible when you send single 
filesystems.


gea
@napp-it.org
Am 11.06.2018 um 09:11 schrieb Oliver Weinmann:
Hi,


We are replicating snapshots from a Nexenta system to an OmniOS system. 
Nexenta calls this feature autosync. While they say it is only 100% supported 
between Nexenta systems, we managed to get it working with OmniOS too; it's 
not rocket science. But there is one big problem. In the autosync job on the 
Nexenta system you can specify how many snaps to keep locally on the Nexenta 
and how many to keep on the target system. Somehow we always end up with the 
same number of snaps on both systems: autosync always removes all snaps on the 
destination that don't exist on the source. I contacted Nexenta support and 
they told me that this is due to different versions of zfs send and zfs recv. 
There is supposed to be a -K flag that instructs the destination not to 
destroy snapshots that don't exist on the source. Is such a flag available in 
OmniOS? I assume the flag is set on the sending side, so the receiving side 
has to understand it.



Best Regards,

Oliver















Re: [OmniOS-discuss] zfs send | recv

2018-06-11 Thread Oliver Weinmann
Hi Priyadarshan,

Thanks a lot for the quick and comprehensive answer. I agree that using a 
third-party tool might be helpful. When we started using the two ZFS systems, 
I had a really hard time testing a few third-party tools. One of the biggest 
problems was that I wanted to be able to use the OmniOS system as a DR site, 
but flipping the mirror always caused the Nexenta system to crash. So until 
today there is no real solution for using the OmniOS system as a real DR site. 
This is due to different versions of zfs send and recv on the two systems and 
not related to the use of third-party tools. I have tested zrep, as it 
contains a DR mode, and looked at znapzend but had no time to test it. We were 
told that the new version of Nexenta no longer supports an ordinary way to 
sync snaps to a non-Nexenta system, as there is no shell access anymore; 
Nexenta 5.x provides an API for this. I need to find some time to test it.

Best Regards,
Oliver





-----Original Message-----
From: priyadarshan 
Sent: Montag, 11. Juni 2018 09:46
To: Oliver Weinmann 
Cc: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] zfs send | recv



> On 11 Jun 2018, at 09:11, Oliver Weinmann 
>  wrote:
>
> Hi,
>
> We are replicating snapshots from a Nexenta system to an OmniOS system. 
> Nexenta calls this feature autosync. While they say it is only 100% supported 
> between nexenta systems, we managed to get it working with OmniOS too. It’s 
> Not rocket science. But there is one big problem. In the autosync job on the 
> Nexenta system one can specify how many snaps to keep local on the nexenta 
> and how many to keep on the target system. Somehow we always have the same 
> amount of snaps on both systems. Autosync always cleans all snaps on the dest 
> that don’t exist on the source. I contacted nexenta support and they told me 
> that this is due to different versions of zfs send and zfs recv. There should 
> be a -K flag that instructs the destination not to destroy snapshots that 
> don't exist on the source. Is such a flag available in OmniOS? I assume the 
> flag is set on the sending side so that the receiving side has to understand 
> it.
>
> Best Regards,
> Oliver
>

Hello,

OmniOS devs, please correct me if I'm mistaken, but I believe OmniOS 
faithfully tracks ZFS from illumos-gate.

One can follow various upstream merges here:
https://github.com/omniosorg/illumos-omnios/pulls?q=is%3Apr+is%3Aclosed

Based on that, illumos man pages also apply to OmniOS: 
https://omnios.omniti.com/wiki.php/ManSections

Illumos zfs man page is here: https://illumos.org/man/1m/zfs

That page does not seem to offer a -K flag.

You may want to consider third party tools.

We have a use case very similar to the one you detailed, fulfilled by using 
zfsnap with reliable and consistent results.

git repository: https://github.com/zfsnap/zfsnap
site: http://www.zfsnap.org/
man page: http://www.zfsnap.org/zfsnap_manpage.html

With zfsnap we have been maintaining (almost) live replicas of mail servers, 
including snapshots, either automatically synchronised to the master or kept 
aside for special needs.

One just needs to tweak a shell script (or simply one or more cron jobs) to 
achieve what is desired.


Priyadarshan




[OmniOS-discuss] zfs send | recv

2018-06-11 Thread Oliver Weinmann
Hi,


We are replicating snapshots from a Nexenta system to an OmniOS system. 
Nexenta calls this feature autosync. While they say it is only 100% supported 
between Nexenta systems, we managed to get it working with OmniOS too; it's 
not rocket science. But there is one big problem. In the autosync job on the 
Nexenta system you can specify how many snaps to keep locally on the Nexenta 
and how many to keep on the target system. Somehow we always end up with the 
same number of snaps on both systems: autosync always removes all snaps on the 
destination that don't exist on the source. I contacted Nexenta support and 
they told me that this is due to different versions of zfs send and zfs recv. 
There is supposed to be a -K flag that instructs the destination not to 
destroy snapshots that don't exist on the source. Is such a flag available in 
OmniOS? I assume the flag is set on the sending side, so the receiving side 
has to understand it.



Best Regards,

Oliver








[OmniOS-discuss] ZFS Dedup

2018-05-03 Thread Oliver Weinmann
Hi,

I always hear people say "don't use dedup with ZFS". Is that still the case? 
We use OmniOS as a VEEAM backup target, and I would assume that dedup would 
save a lot of disk space.
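One way to get a data-driven answer before enabling it: zdb can simulate the dedup table against data already in a pool (pool and dataset names below are examples):

```shell
# Simulate dedup on existing data; prints a block histogram and an
# estimated dedup ratio without modifying the pool. Can take a long
# time and a lot of RAM on a large pool.
zdb -S tank

# Each unique block costs roughly 320 bytes of dedup table, which must
# stay in RAM/L2ARC for acceptable write speed. Only if the simulated
# ratio is well above 1.0 does this pay off:
zfs set dedup=on tank/veeam
```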

Best Regards,
Oliver







Re: [OmniOS-discuss] Constantly losing nfs shares smb shares?

2017-08-24 Thread Oliver Weinmann
Hi,

I have done some more investigation and found the cause of this problem: it 
always happens when running zfs send from a Nexenta system.





Oliver Weinmann
Senior Unix VMWare, Storage Engineer

Telespazio VEGA Deutschland GmbH
Europaplatz 5 - 64293 Darmstadt - Germany
Ph: + 49 (0)6151 8257 744 | Fax: +49 (0)6151 8257 799
oliver.weinm...@telespazio-vega.de<mailto:oliver.weinm...@telespazio-vega.de>
http://www.telespazio-vega.de

Registered office/Sitz: Darmstadt, Register court/Registergericht: Darmstadt, 
HRB 89231; Managing Director/Geschäftsführer: Sigmar Keller
From: OmniOS-discuss [mailto:omnios-discuss-boun...@lists.omniti.com] On Behalf 
Of Oliver Weinmann
Sent: Montag, 21. August 2017 10:00
To: omnios-discuss <omnios-discuss@lists.omniti.com>
Subject: [OmniOS-discuss] Constantly losing nfs shares smb shares?

Hi,

I have no clue why, but on our OmniOS box (151022k) we are constantly losing 
all our NFS and SMB shares. To fix it I have two shell scripts that just reset 
the sharenfs and sharesmb options, but this is not really a good fix, as it 
happens at random times. I don't know where to start investigating; there is 
nothing suspicious in /var/adm/messages.

Best Regards,
Oliver





[OmniOS-discuss] zfs recv causes system to crash / hang

2017-08-24 Thread Oliver Weinmann
Hi all,

every time we try a zfs send | recv between OmniOS 151022.x and Nexenta 
4.0.5.x, the Nexenta node crashes. This is very critical for us, as we want to 
either use the OmniOS system for DR or migrate some files between the two. I 
raised a ticket with Nexenta and they pointed me in the right direction. It 
seems this problem has already been reported on the illumos mailing list; so 
far no real fix has been provided.

https://www.mail-archive.com/discuss@lists.illumos.org/msg02699.html

The only way to fix this is to patch either the receiving side (Nexenta): they 
have not done this for 4.0.5.x yet, but I was told it will be done in 5.1 and 
that it is planned to upstream the fix to illumos.

6393 zfs receive a full send as a clone

or the sending side.

6536 zfs send: want a way to disable setting of DRR_FLAG_FREERECORDS

But the patch for the sending side (OmniOS) doesn't work. The last thing in 
the thread is advice to implement this fix on the sending side:

https://gist.github.com/pcd1193182/fcb9f8d43dcbcf32ba736ea7ef600658

It seems the problem does not only affect zfs send | recv between 
illumos-based and Nexenta systems. Nexenta told us they have a fix for NS 5.1, 
but upgrading to 5.x is currently not an option for us, as that version has 
some limitations and doesn't yet implement this very important fix:

https://www.illumos.org/issues/8543

Is anyone else affected by this bug?

Best Regards,
Oliver





Re: [OmniOS-discuss] ndmp backups what software are you using?

2017-08-21 Thread Oliver Weinmann
Hi,

Sorry, I didn't mean to ask for support on Bareos backup; I have already asked 
that question on their mailing list. I just wanted to know whether users here 
are doing NDMP backups and what software they are using.

<< Traditional archive programs like 'tar' and 'find+cpio' would produce a 
result similar to what you are seeing from bareos by default. >>

Thanks for this tip. I'm using dump; I will now try tar instead. :)



-----Original Message-----
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
Sent: Montag, 21. August 2017 15:30
To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>
Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
Subject: RE: [OmniOS-discuss] ndmp backups what software are you using?

On Mon, 21 Aug 2017, Oliver Weinmann wrote:

> Hi Bob,
>
> What I meant is e.g.
>
> root@omnios02:/tank/test# zfs list
> NAME            USED  AVAIL  REFER  MOUNTPOINT
> tank            187M   193G    23K  /tank
> tank/test       185M  30.0G    25K  /tank/test
> tank/test/sub1  184M  29.8G   184M  /tank/test/sub1
> tank/test/sub2   23K  30.0G    23K  /tank/test/sub2
> tank/test/sub3   23K  30.0G    23K  /tank/test/sub3
> tank/test/sub4   23K  30.0G    23K  /tank/test/sub4
>
> Taking a backup of tank I would like to have all of its (zfs)
> subfolders. I can get all the subfolders by specifying RECUSIVE=y as a
> meta parameter in bareos, but the folders are all empty. No files are
> backed up this way. I wonder if I'm just missing a setting?

It seems like you are asking us a question about Bareos and not OmniOS.  Does 
Bareos have a user support forum or mailing list where you can ask your 
question?

Each zfs filesystem mountpoint will behave the same as if a traditional 
filesystem was mounted to that path.

Traditional archive programs like 'tar' and 'find+cpio' would produce a result 
similar to what you are seeing from bareos by default.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] ndmp backups what software are you using?

2017-08-21 Thread Oliver Weinmann
Hi Bob,

What I meant is e.g.

root@omnios02:/tank/test# zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
tank            187M   193G    23K  /tank
tank/test       185M  30.0G    25K  /tank/test
tank/test/sub1  184M  29.8G   184M  /tank/test/sub1
tank/test/sub2   23K  30.0G    23K  /tank/test/sub2
tank/test/sub3   23K  30.0G    23K  /tank/test/sub3
tank/test/sub4   23K  30.0G    23K  /tank/test/sub4

Taking a backup of tank I would like to have all of its (zfs) subfolders. I 
can get all the subfolders by specifying RECUSIVE=y as a meta parameter in 
bareos, but the folders are all empty. No files are backed up this way. I 
wonder if I'm just missing a setting?



-----Original Message-----
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
Sent: Montag, 21. August 2017 15:11
To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>
Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
Subject: Re: [OmniOS-discuss] ndmp backups what software are you using?

On Mon, 21 Aug 2017, Oliver Weinmann wrote:

> Hi,
>
> is anyone doing NDMP backups to tape? And if yes what software are you using?
>
> I have semi successfully configured BAREOS to do NDMP backups to disk.
> The only problem is that the content of zfs subfolders is not backed
> up. They are all empty. :(

What is a "zfs subfolder"?  Are you refering to a distinct zfs filesystem?  If 
so, the problem could be that the software you are using only backs up 
filesystems and does not recurse into other filesystems.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


[OmniOS-discuss] ndmp backups what software are you using?

2017-08-21 Thread Oliver Weinmann
Hi,

is anyone doing NDMP backups to tape? And if yes what software are you using?

I have semi-successfully configured BAREOS to do NDMP backups to disk. The 
only problem is that the content of zfs subfolders is not backed up; they are 
all empty. :(





[OmniOS-discuss] Constantly losing nfs shares smb shares?

2017-08-21 Thread Oliver Weinmann
Hi,

I have no clue why, but on our OmniOS box (151022k) we are constantly losing 
all our NFS and SMB shares. To fix it I have two shell scripts that just reset 
the sharenfs and sharesmb options, but this is not really a good fix, as it 
happens at random times. I don't know where to start investigating; there is 
nothing suspicious in /var/adm/messages.
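As a stop-gap, the two reset scripts can be collapsed into one sketch that re-applies the share properties (dataset names and option strings below are placeholders, not the real config):

```shell
#!/bin/sh
# Re-setting sharenfs/sharesmb makes the share services republish the
# shares. Replace datasets and option values with the real ones.
for fs in tank/exports tank/projects; do
    zfs set sharenfs=rw "$fs"
done
zfs set sharesmb=name=data tank/cifs

# Compare what ZFS thinks is shared with what is actually exported:
sharemgr show -vp
```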

Best Regards,
Oliver





Re: [OmniOS-discuss] Ldap crash causing system to hang fixed in illumos...

2017-07-31 Thread Oliver Weinmann
Hi Andy,

This is great news. Thanks a lot. I'm happy to test. :)

But first I guess I have to upgrade my current OmniOS 151022 to the latest OmniOS CE.

Best Regards,
Oliver

-Original Message-
From: Andy Fiddaman [mailto:omn...@citrus-it.net] 
Sent: Montag, 31. Juli 2017 11:19
To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>
Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
Subject: Re: [OmniOS-discuss] Ldap crash causing system to hang fixed in 
illumos...


On Mon, 31 Jul 2017, Oliver Weinmann wrote:

; Hi Guys,
;
; I'm currently facing this bug under OmniOS 151022 and I just got informed 
that this has been fixed:
;
; https://www.illumos.org/issues/8543
;
; As this bug is causing a complete system hang only a reboot helps. Can this 
maybe be implemented?
;
; Best Regards,
; Oliver

Hi Oliver,

I've opened an issue for you at
https://github.com/omniosorg/illumos-omnios/issues/26
and you can track progress there.

We plan to include this fix in next Monday's release but we're building a test 
update today and somebody will be in touch directly if you want to test it 
early.

Regards,

Andy

--
Citrus IT Limited | +44 (0)333 0124 007 | enquir...@citrus-it.co.uk Rock House 
Farm | Green Moor | Wortley | Sheffield | S35 7DQ Registered in England and 
Wales | Company number 4899123



[OmniOS-discuss] Ldap crash causing system to hang fixed in illumos...

2017-07-31 Thread Oliver Weinmann
Hi Guys,

I'm currently facing this bug under OmniOS 151022 and I was just informed that 
it has been fixed:

https://www.illumos.org/issues/8543

As this bug causes a complete system hang and only a reboot helps, could the 
fix maybe be included?

Best Regards,
Oliver


[OmniOS-discuss] zfs receive -x parameter missing?

2017-07-01 Thread Oliver Weinmann

Hi,

We have a Nexenta system and it has a few additional parameters for zfs send 
and receive. These do not exist on OmniOS or OpenIndiana. I found a very old 
feature request for this:

https://www.illumos.org/issues/2745

The main reason for this is to ensure that the replicated ZFS folders are not 
shared, e.g.
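For what it's worth, later OpenZFS releases did grow exactly this knob on the receive side; illumos at the time did not, so check `man zfs` on your build before relying on it (pool and snapshot names are examples):

```shell
# -x excludes a property from the received stream and -o overrides one,
# so the replica is never published as a share and stays read-only:
zfs send -R tank/data@snap | \
    zfs receive -x sharenfs -x sharesmb -o readonly=on backup/data
```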


Best regards,

Oliver


Re: [OmniOS-discuss] Bug ?

2017-06-29 Thread Oliver Weinmann
Ohh, that is bad news.

I have a production system that somehow fails to join AD and I don't know what 
is causing this. We had a similar issue on our Nexenta system, and they 
provided us a patch that solved it.

Idmap:

additional info: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS 
failure.  Minor code may provide more information (Client not found in Kerberos 
database)
adutils: ldap_lookup_init failed







Oliver Weinmann
Senior Unix VMWare, Storage Engineer

Telespazio VEGA Deutschland GmbH
Europaplatz 5 - 64293 Darmstadt - Germany
Ph: + 49 (0)6151 8257 744 | Fax: +49 (0)6151 8257 799
oliver.weinm...@telespazio-vega.de<mailto:oliver.weinm...@telespazio-vega.de>
http://www.telespazio-vega.de

Registered office/Sitz: Darmstadt, Register court/Registergericht: Darmstadt, 
HRB 89231; Managing Director/Geschäftsführer: Sigmar Keller
From: Paul B. Henson [mailto:hen...@acm.org]
Sent: Donnerstag, 29. Juni 2017 08:51
To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>
Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
Subject: Re: [OmniOS-discuss] Bug ?

Unfortunately OmniTI no longer offers support contracts for OmniOS. We actually 
have a contract that's still good through I think November, but given their 
main support engineer is no longer with the company and the OS appears to be in 
limbo at the moment I'm not sure what good that does us ;).

If you think you've found a bug, your best bet at the moment is to just report 
it to this list, possibly to the upstream illumos developer list if you can 
detail it reasonably technically, and perhaps open an issue on the illumos 
issue tracker.

On Jun 28, 2017, at 11:29 PM, Oliver Weinmann 
<oliver.weinm...@telespazio-vega.de<mailto:oliver.weinm...@telespazio-vega.de>> 
wrote:
Hi,

What if I would like to report a possible bug? Do I need a valid support 
contract for this?

Best Regards,
Oliver







[OmniOS-discuss] Bug ?

2017-06-29 Thread Oliver Weinmann
Hi,

What if I would like to report a possible bug? Do I need a valid support 
contract for this?

Best Regards,
Oliver






Re: [OmniOS-discuss] CIFS access to a folder with traditional (owner:group:other) Unix permissions

2017-06-28 Thread Oliver Weinmann
Hi,

Thanks for pointing this out. Basically I would do the chmod, as root, on a 
Linux system where the NFS share is mounted.

Now that I have this working on my test system, I have lots of problems on my 
production system. I can join it to AD but I get lots of errors like this:

gedaspw02.a.space.corp: additional info: SASL(-1): generic failure: GSSAPI 
Error: Unspecified GSS failure.  Minor code may provide more information 
(Client not found in Kerberos database)

smbd.info: logon[A\someuser]: CANT_ACCESS_DOMAIN_INFO
smbd.info: logon[A\someuser]: LOGON_FAILURE

I checked all possible settings and compared them to my test system but can't 
find any difference. The only difference is that the production system was 
upgraded twice from 1510xx to 1510xx to 151022.

I even deleted the computer object in AD and rejoined the domain but still the 
same errors occur.



-Original Message-
From: Jim Klimov [mailto:jimkli...@cos.ru]
Sent: Mittwoch, 28. Juni 2017 13:00
To: omnios-discuss@lists.omniti.com; Jens Bauernfeind 
<bauernfe...@ipk-gatersleben.de>; Oliver Weinmann 
<oliver.weinm...@telespazio-vega.de>
Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
Subject: Re: [OmniOS-discuss] CIFS access to a folder with traditional 
(owner:group:other) Unix permissions

On June 28, 2017 8:08:40 AM GMT+02:00, Jens Bauernfeind 
<bauernfe...@ipk-gatersleben.de> wrote:
>Yeah, AD with IDMU
>
>According to this page (very old, but still the truth), you can't live
>without ACLs.
>https://mattwilson.org/blog/solaris/solaris-cifs-server-and-zfs-acls-the-problem/
>
>You have to inherit the ACLs to newly created files.
>At first I switched to the passthrough acl properties:
>zfs set aclmode=passthrough tank
>zfs set aclinherit=passthrough tank
>Then you have to define an initial ACL for your datasets
>
>For this example I just assume you have the pool tank and one dataset
>test
>- first set your sticky bit
>chmod g+s /tank/test
>- then set the ACLs
>chmod
>A=owner@:rwxp-DaARWcCos:df:allow,group@:rwxp-DaARWcCos:df:allow,everyone@::df:allow /tank/test
>so nearly full permission for the owner and the group, and nothing for
>others; all ACLs are inherited to new created files and directories
>[the "df"]
>8<---
>ls -Vd /tank/test
>drwxrws---+  5 root     IT       5 Jun 28 07:55 /tank/test
> owner@:rwxp-DaARWcCos:fd-:allow
> group@:rwxp-DaARWcCos:fd-:allow
>  everyone@:--------------:fd-:allow
>8<---
>(This inheritance doesn't apply to new datasets you create via zfs, btw)
>
>But beware: whenever you do a chmod operation or a chgrp on
>/tank/test (or any other dataset), the owner, group and everyone ACEs
>get overwritten (according to
>http://docs.oracle.com/cd/E36784_01/html/E36835/gbaaz.html)
>8<---
>chgrp 0 /tank/test
>ls -Vd /tank/test
>drwxrws---   5 root root   5 Jun 28 07:55 /tank/test
> owner@:rwxp-DaARWcCos:---:allow
> group@:rwxp-Da-R-c--s:---:allow
>  everyone@:--a-R-c--s:---:allow
>See the missing "+" and "fd"?
>8<---
>(This doesn't apply to folders or files)
>
>I hope this helps and I'm not telling lies here.
>But that is my experience with that.
>
>Jens
>
>> -Original Message-
>> From: Oliver Weinmann [mailto:oliver.weinm...@telespazio-vega.de]
>> Sent: Dienstag, 27. Juni 2017 15:21
>> To: Jens Bauernfeind <bauernfe...@ipk-gatersleben.de>
>> Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
>> Subject: RE: [OmniOS-discuss] CIFS access to a folder with
>traditional
>> (owner:group:other) Unix permissions
>>
>> Mine has ldap only for passwd and group.
>>
>> So on your system it really works with just having the traditional
>unix
>> permissions set. There are no ACLs in place?
>>
>> Do you have an Active Directory domain with IDMU?
>>
>> -Original Message-
>> From: Jens Bauernfeind [mailto:bauernfe...@ipk-gatersleben.de]
>> Sent: Dienstag, 27. Juni 2017 15:19
>> To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>
>> Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
>> Subject: RE: [OmniOS-discuss] CIFS access to a folder with
>traditional
>> (owner:group:other) Unix permissions
&

Re: [OmniOS-discuss] CIFS access to a folder with traditional (owner:group:other) Unix permissions

2017-06-28 Thread Oliver Weinmann
Hi again,

You're the man. This looks very promising. If I get this right, the ZFS ACEs
behave more like a (u)mask for files newly created via CIFS in a folder
with traditional Unix permissions, so no additional ACEs are really
required. This is perfect.

E.g. If I remove all ACEs on the subfolder Unix

root@omnios02:/tank/ReferenceSU/TEST/Software# chmod A- Unix/

It will leave just the default ones:

root@omnios02:/tank/ReferenceSU/TEST/Software# ls -V
total 1
drwxrws---   4 tuser    Up TEST de_dt Da Lg   6 Jun 28 11:42 Unix
 owner@:rwxp-DaARWcCos:---:allow
 group@:rwxp-Da-R-c--s:---:allow
  everyone@:--a-R-c--s:---:allow

Trying to access the folder Unix via CIFS works fine as user utest2, as he is
a member of the "Up TEST de_dt Da Lg" group and this group has rws Unix
permissions. Excellent. :)

root@omnios02:/tank/ReferenceSU/TEST/Software# groups utest2
1 Up TEST de_dt Da Lg

Now I can control the access fine using the traditional Unix
permissions. If I change the group to one that he is not a member of, his
access is denied. Excellent again :)

root@omnios02:/tank/ReferenceSU/TEST/Software# chgrp "Up BCSIM De_dt Da Lg"
Unix
root@omnios02:/tank/ReferenceSU/TEST/Software# ls -al
total 3
drwxr-xr-x+  3 root root   3 Jun 27 15:03 .
d---------+  4 root root   4 Jun 27 15:04 ..
drwxrws---   4 tuser    Up BCSIM De_Dt Da Lg   6 Jun 28 11:42 Unix

Switching back to the "Up test ..." group and creating a file "testcifs.txt"
via CIFS.

root@omnios02:/tank/ReferenceSU/TEST/Software# chgrp "Up TEST De_dt Da Lg"
Unix
root@omnios02:/tank/ReferenceSU/TEST/Software# ls -al
total 3
drwxr-xr-x+  3 root root   3 Jun 27 15:03 .
d---------+  4 root root   4 Jun 27 15:04 ..
drwxrws---   4 tuser    Up TEST de_dt Da Lg   6 Jun 28 11:42 Unix


The file gets the following traditional Unix permissions:

root@omnios02:/tank/ReferenceSU/TEST/Software/Unix# ls -al
total 4
drwxrws---   2 tuser    Up TEST de_dt Da Lg   4 Jun 28 12:00 .
drwxr-xr-x+  3 root root   3 Jun 27 15:03 ..
-rwx------+  1 utest2   Up TEST de_dt Da Lg  14 Jun 28 12:00 testcifs.txt

Only the owner has rwx. Not so good. But with your awesome chmod commands
applied to the Unix folder:

chmod A- Unix
chmod A=owner@:rwxp-DaARWcCos:df:allow,group@:rwxp-DaARWcCos:df:allow,everyone@::df:allow Unix

The permissions are just right when creating a file from CIFS:

root@omnios02:/tank/ReferenceSU/TEST/Software/Unix# ls -al
total 4
drwxrws---+  3 tuser    Up TEST de_dt Da Lg   4 Jun 28 12:20 .
drwxr-xr-x+  3 root root   3 Jun 27 15:03 ..
drwxrws---+  2 utest2   Up TEST de_dt Da Lg   2 Jun 28 12:20 New folder
-rwxrwx---+  1 utest2   Up TEST de_dt Da Lg   3 Jun 28 12:20 testcifs_aclset.txt
root@omnios02:/tank/ReferenceSU/TEST/Software/Unix# ls -V
total 2
drwxrws---+  2 utest2   Up TEST de_dt Da Lg   2 Jun 28 12:20 New folder
 owner@:rwxp-DaARWcCos:fdI:allow
 group@:rwxp-DaARWcCos:fdI:allow
  everyone@:--------------:fdI:allow
-rwxrwx---+  1 utest2   Up TEST de_dt Da Lg   3 Jun 28 12:20 testcifs_aclset.txt
 owner@:rwxp-DaARWcCos:--I:allow
 group@:rwxp-DaARWcCos:--I:allow
  everyone@:--------------:--I:allow
root@omnios02:/tank/ReferenceSU/TEST/Software/Unix#

This looks perfect. I will need to do some more testing, especially with
aclmode and aclinherit. But so far this could be the solution I was looking
for. :)
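The recipe worked out in this thread can be condensed into one dry-run script. This is only a sketch under the thread's assumptions (passthrough ACL properties, setgid folder, inheritable owner@/group@ ACEs); the pool name tank and the path /tank/Software/Unix are placeholders, and run() only echoes each command:

```shell
# Condensed dry-run of the ACL recipe above: passthrough ACL properties
# on the pool, setgid on the folder, then one inheritable ACL giving
# owner@ and group@ full access and everyone@ nothing.
# Names are placeholders; run() echoes instead of executing.
run() { echo "+ $*"; }

POOL=tank
DIR=/tank/Software/Unix

run "zfs set aclmode=passthrough ${POOL}"
run "zfs set aclinherit=passthrough ${POOL}"
run "chmod g+s ${DIR}"   # setgid: new files keep the folder's group
run "chmod A- ${DIR}"    # drop any existing non-trivial ACEs first
# inheritable (df) allow entries; everyone@ gets no permissions:
run "chmod A=owner@:rwxp-DaARWcCos:df:allow,group@:rwxp-DaARWcCos:df:allow,everyone@::df:allow ${DIR}"
```

Remember the caveat from later in the thread: a subsequent chmod or chgrp on the folder rewrites the owner@/group@/everyone@ ACEs, so the last chmod has to be re-applied afterwards.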



-Original Message-
From: Jens Bauernfeind [mailto:bauernfe...@ipk-gatersleben.de] 
Sent: Mittwoch, 28. Juni 2017 08:09
To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>
Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
Subject: RE: [OmniOS-discuss] CIFS access to a folder with traditional
(owner:group:other) Unix permissions

Yeah, AD with IDMU

According to this page (very old, but still the truth), you can't live
without ACLs.
https://mattwilson.org/blog/solaris/solaris-cifs-server-and-zfs-acls-the-problem/

You have to inherit the ACLs to newly created files.
At first I switched to the passthrough acl properties:
zfs set aclmode=passthrough tank
zfs set aclinherit=passthrough tank
Then you have to define an initial ACL for your datasets

For this example I just assume you have the pool tank and one dataset test
- first set your sticky bit
chmod g+s /tank/test
- then set the ACLs
chmod A=owner@:rwxp-DaARWcCos:df:allow,group@:rwxp-DaARWcCos:df:allow,everyone@::df:allow /tank/test
so nearly full permission for the owner and the group, and nothing for
others; all ACLs are inherited to newly created files and directories [the
"df"]
8<---
ls -Vd /tank/test
drwxrws---+  5 root     IT       5 Jun 28 07:55 /tank/test
 owner@:rwxp-DaARWcCos:fd-:allow

Re: [OmniOS-discuss] CIFS access to a folder with traditional (owner:group:other) Unix permissions

2017-06-28 Thread Oliver Weinmann
Hi Jens,

Thanks a lot for your support. I really appreciate it. :)

I will test this on my fresh install of omnios 151022 and report back.

It's really a pity that it only works if I touch the ZFS ACLs. :(

-Original Message-
From: Jens Bauernfeind [mailto:bauernfe...@ipk-gatersleben.de] 
Sent: Mittwoch, 28. Juni 2017 08:09
To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>
Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
Subject: RE: [OmniOS-discuss] CIFS access to a folder with traditional
(owner:group:other) Unix permissions

Yeah, AD with IDMU

According to this page (very old, but still the truth), you can't live
without ACLs.
https://mattwilson.org/blog/solaris/solaris-cifs-server-and-zfs-acls-the-problem/

You have to inherit the ACLs to newly created files.
At first I switched to the passthrough acl properties:
zfs set aclmode=passthrough tank
zfs set aclinherit=passthrough tank
Then you have to define an initial ACL for your datasets

For this example I just assume you have the pool tank and one dataset test
- first set your sticky bit
chmod g+s /tank/test
- then set the ACLs
chmod A=owner@:rwxp-DaARWcCos:df:allow,group@:rwxp-DaARWcCos:df:allow,everyone@::df:allow /tank/test
so nearly full permission for the owner and the group, and nothing for
others; all ACLs are inherited to new created files and directories [the
"df"]
8<---
ls -Vd /tank/test
drwxrws---+  5 root     IT       5 Jun 28 07:55 /tank/test
 owner@:rwxp-DaARWcCos:fd-:allow
 group@:rwxp-DaARWcCos:fd-:allow
  everyone@:--------------:fd-:allow
8<---
(This inheritance doesn't apply to new datasets you create via zfs, btw)

But beware: whenever you do a chmod operation or a chgrp on /tank/test (or
any other dataset), the owner, group and everyone ACEs get overwritten
(according to http://docs.oracle.com/cd/E36784_01/html/E36835/gbaaz.html)
8<---
chgrp 0 /tank/test
ls -Vd /tank/test
drwxrws---   5 root root   5 Jun 28 07:55 /tank/test
 owner@:rwxp-DaARWcCos:---:allow
 group@:rwxp-Da-R-c--s:---:allow
  everyone@:--a-R-c--s:---:allow
See the missing "+" and "fd"?
8<---
(This doesn't apply to folders or files)

I hope this helps and I'm not telling lies here.
But that is my experience with that.

Jens

> -Original Message-
> From: Oliver Weinmann [mailto:oliver.weinm...@telespazio-vega.de]
> Sent: Dienstag, 27. Juni 2017 15:21
> To: Jens Bauernfeind <bauernfe...@ipk-gatersleben.de>
> Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
> Subject: RE: [OmniOS-discuss] CIFS access to a folder with traditional
> (owner:group:other) Unix permissions
> 
> Mine has ldap only for passwd and group.
> 
> So on your system it really works with just having the traditional unix
> permissions set. There are no ACLs in place?
> 
> Do you have an Active Directory domain with IDMU?
> 
> -Original Message-
> From: Jens Bauernfeind [mailto:bauernfe...@ipk-gatersleben.de]
> Sent: Dienstag, 27. Juni 2017 15:19
> To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>
> Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
> Subject: RE: [OmniOS-discuss] CIFS access to a folder with traditional
> (owner:group:other) Unix permissions
> 
> also r151022
> 
> What is your /etc/nsswitch.conf saying?
> Mine has nearly everywhere "files ldap", except hosts and ipnodes.
> 
> > -Original Message-
> > From: Oliver Weinmann [mailto:oliver.weinm...@telespazio-vega.de]
> > Sent: Dienstag, 27. Juni 2017 14:49
> > To: Jens Bauernfeind <bauernfe...@ipk-gatersleben.de>
> > Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
> > Subject: RE: [OmniOS-discuss] CIFS access to a folder with traditional
> > (owner:group:other) Unix permissions
> >
> > What version of omnios are you using? I'm using R151022.
> >
> > -Original Message-
> > From: Jens Bauernfeind [mailto:bauernfe...@ipk-gatersleben.de]
> > Sent: Dienstag, 27. Juni 2017 14:47
> > To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>
> > Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
> > Subject: RE: [OmniOS-discuss] CIFS access to a folder with traditional
> > (owner:group:other) Unix permissions
> >
> > Hm,
> >
> > maybe I should share my ldap config.
> > ldapclient -v manual \
> > -a credentialLevel=proxy \
> > -a authenticationMethod=simple \
> > -a proxyDN="cn=XXX" \
> > -a proxyPassword=SECRET \
> > -a defaultSearchBase=dc=ipk=de \
> > -a domainName=DOMAINNAME \
> > -a defaultServerList= \
> > -a attributeMap=group:userpassword=userPas

Re: [OmniOS-discuss] CIFS access to a folder with traditional (owner:group:other) Unix permissions

2017-06-27 Thread Oliver Weinmann
Mine has ldap only for passwd and group.

So on your system it really works with just having the traditional unix
permissions set. There are no ACLs in place?

Do you have an Active Directory domain with IDMU?

-Original Message-
From: Jens Bauernfeind [mailto:bauernfe...@ipk-gatersleben.de] 
Sent: Dienstag, 27. Juni 2017 15:19
To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>
Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
Subject: RE: [OmniOS-discuss] CIFS access to a folder with traditional
(owner:group:other) Unix permissions

also r151022

What is your /etc/nsswitch.conf saying?
Mine has nearly everywhere "files ldap", except hosts and ipnodes.

> -Original Message-
> From: Oliver Weinmann [mailto:oliver.weinm...@telespazio-vega.de]
> Sent: Dienstag, 27. Juni 2017 14:49
> To: Jens Bauernfeind <bauernfe...@ipk-gatersleben.de>
> Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
> Subject: RE: [OmniOS-discuss] CIFS access to a folder with traditional
> (owner:group:other) Unix permissions
> 
> What version of omnios are you using? I'm using R151022.
> 
> -Original Message-
> From: Jens Bauernfeind [mailto:bauernfe...@ipk-gatersleben.de]
> Sent: Dienstag, 27. Juni 2017 14:47
> To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>
> Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
> Subject: RE: [OmniOS-discuss] CIFS access to a folder with traditional
> (owner:group:other) Unix permissions
> 
> Hm,
> 
> maybe I should share my ldap config.
> ldapclient -v manual \
> -a credentialLevel=proxy \
> -a authenticationMethod=simple \
> -a proxyDN="cn=XXX" \
> -a proxyPassword=SECRET \
> -a defaultSearchBase=dc=ipk=de \
> -a domainName=DOMAINNAME \
> -a defaultServerList= \
> -a attributeMap=group:userpassword=userPassword \
> -a attributeMap=group:uniqueMember=member \
> -a attributeMap=group:gidnumber=gidNumber \
> -a attributeMap=passwd:gecos=cn \
> -a attributeMap=passwd:gidnumber=gidNumber \
> -a attributeMap=passwd:uidnumber=uidNumber \
> -a attributeMap=passwd:uid=sAMAccountName \
> -a attributeMap=passwd:homedirectory=unixHomeDirectory \
> -a attributeMap=passwd:loginshell=loginShell \
> -a attributeMap=shadow:shadowflag=shadowFlag \
> -a attributeMap=shadow:userpassword=userPassword \
> -a objectClassMap=group:posixGroup=group \
> -a objectClassMap=passwd:posixAccount=user \
> -a objectClassMap=shadow:shadowAccount=user \
> -a serviceSearchDescriptor="passwd:" \
> -a serviceSearchDescriptor=group:  \
> -a followReferrals=true
> 
> Maybe also a restart of the smb service?
> 
> Jens
> 
> > -Original Message-
> > From: Oliver Weinmann [mailto:oliver.weinm...@telespazio-vega.de]
> > Sent: Dienstag, 27. Juni 2017 14:40
> > To: Jens Bauernfeind <bauernfe...@ipk-gatersleben.de>
> > Subject: RE: [OmniOS-discuss] CIFS access to a folder with traditional
> > (owner:group:other) Unix permissions
> >
> > Hi,
> >
> >
> >
> > Now I get can’t access domain info in the smb log and users are prompted
> to
> > enter a password when accessing the shares. :(
> >
> >
> >
> > From: Jens Bauernfeind [mailto:bauernfe...@ipk-gatersleben.de]
> > Sent: Dienstag, 27. Juni 2017 09:37
> > To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>
> > Subject: RE: [OmniOS-discuss] CIFS access to a folder with traditional
> > (owner:group:other) Unix permissions
> >
> >
> >
> > Hi,
> >
> >
> >
> > I fixed this problem after executing this:
> >
> > idmap add winname:"*@" unixuser:"*"
> >
> > idmap add wingroup:"*@ " unixgroup:"*"
> >
> > svcadm restart idmap
> >
> > All new created files has now the uid and gid from the IDMU
> >
> >
> >
> > Jens
> >
> >
> >
> > From: OmniOS-discuss [mailto:omnios-discuss-boun...@lists.omniti.com]
> > On Behalf Of Oliver Weinmann
> > Sent: Dienstag, 27. Juni 2017 08:25
> > To: omnios-discuss <omnios-discuss@lists.omniti.com <mailto:omnios-
> > disc...@lists.omniti.com> >
> > Subject: [OmniOS-discuss] CIFS access to a folder with traditional
> > (owner:group:other) Unix permissions
> >
> >
> >
> > Hi,
> >
> >
> >
> > we are currently migrating all our data from a NetAPP system to an
OmniOS
> > sytem.
> >
> >
> >
> > The OmniOS system is joined to AD and LDAP client is configured to pull
> LDAP
> > info from AD / IDMU. This works fine.
> >
> >
> >
> > However we c

Re: [OmniOS-discuss] CIFS access to a folder with traditional (owner:group:other) Unix permissions

2017-06-27 Thread Oliver Weinmann
What version of omnios are you using? I'm using R151022. 

-Original Message-
From: Jens Bauernfeind [mailto:bauernfe...@ipk-gatersleben.de] 
Sent: Dienstag, 27. Juni 2017 14:47
To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>
Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
Subject: RE: [OmniOS-discuss] CIFS access to a folder with traditional
(owner:group:other) Unix permissions

Hm,

maybe I should share my ldap config.
ldapclient -v manual \
-a credentialLevel=proxy \
-a authenticationMethod=simple \
-a proxyDN="cn=XXX" \
-a proxyPassword=SECRET \
-a defaultSearchBase=dc=ipk=de \
-a domainName=DOMAINNAME \
-a defaultServerList= \
-a attributeMap=group:userpassword=userPassword \
-a attributeMap=group:uniqueMember=member \
-a attributeMap=group:gidnumber=gidNumber \
-a attributeMap=passwd:gecos=cn \
-a attributeMap=passwd:gidnumber=gidNumber \
-a attributeMap=passwd:uidnumber=uidNumber \
-a attributeMap=passwd:uid=sAMAccountName \
-a attributeMap=passwd:homedirectory=unixHomeDirectory \
-a attributeMap=passwd:loginshell=loginShell \
-a attributeMap=shadow:shadowflag=shadowFlag \
-a attributeMap=shadow:userpassword=userPassword \
-a objectClassMap=group:posixGroup=group \
-a objectClassMap=passwd:posixAccount=user \
-a objectClassMap=shadow:shadowAccount=user \
-a serviceSearchDescriptor="passwd:" \
-a serviceSearchDescriptor=group:  \
-a followReferrals=true

Maybe also a restart of the smb service?

Jens

> -Original Message-
> From: Oliver Weinmann [mailto:oliver.weinm...@telespazio-vega.de]
> Sent: Dienstag, 27. Juni 2017 14:40
> To: Jens Bauernfeind <bauernfe...@ipk-gatersleben.de>
> Subject: RE: [OmniOS-discuss] CIFS access to a folder with traditional
> (owner:group:other) Unix permissions
> 
> Hi,
> 
> 
> 
> Now I get can’t access domain info in the smb log and users are prompted
to
> enter a password when accessing the shares. :(
> 
> 
> 
> From: Jens Bauernfeind [mailto:bauernfe...@ipk-gatersleben.de]
> Sent: Dienstag, 27. Juni 2017 09:37
> To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>
> Subject: RE: [OmniOS-discuss] CIFS access to a folder with traditional
> (owner:group:other) Unix permissions
> 
> 
> 
> Hi,
> 
> 
> 
> I fixed this problem after executing this:
> 
> idmap add winname:"*@" unixuser:"*"
> 
> idmap add wingroup:"*@ " unixgroup:"*"
> 
> svcadm restart idmap
> 
> All new created files has now the uid and gid from the IDMU
> 
> 
> 
> Jens
> 
> 
> 
> From: OmniOS-discuss [mailto:omnios-discuss-boun...@lists.omniti.com]
> On Behalf Of Oliver Weinmann
> Sent: Dienstag, 27. Juni 2017 08:25
> To: omnios-discuss <omnios-discuss@lists.omniti.com <mailto:omnios-
> disc...@lists.omniti.com> >
> Subject: [OmniOS-discuss] CIFS access to a folder with traditional
> (owner:group:other) Unix permissions
> 
> 
> 
> Hi,
> 
> 
> 
> we are currently migrating all our data from a NetAPP system to an OmniOS
> sytem.
> 
> 
> 
> The OmniOS system is joined to AD and LDAP client is configured to pull
LDAP
> info from AD / IDMU. This works fine.
> 
> 
> 
> However we can’t manage to have access on folders where we have Unix
> permissions from windows (CIFS).
> 
> 
> 
> e.g.
> 
> 
> 
> the user utest2 is member of the goup “Up BCSIM De_Dt Da Lg”:
> 
> 
> 
> root@omnios01:/hgst4u60/ReferenceAC/BCSIM/Software# groups utest2
> 
> 1 Up BCSIM De_Dt Da Lg
> 
> 
> 
> The folder Unix has the following permissions set:
> 
> 
> 
> root@omnios01:/hgst4u60/ReferenceAC/BCSIM/Software# ls -al
> 
> total 47
> 
> d---------+  4 root 2147483653   4 Apr 25 05:37 .
> 
> d---------+  4 root 2147483659   4 Apr 25 05:35 ..
> 
> drwxrws---   9 bcsim    Up BCSIM De_Dt Da Lg  11 Mar  9 10:40 Unix
> 
> d---------+  6 root 2147483653   6 Apr 25 05:37 Windows
> 
> 
> 
> so User bcsim and all members of group “Up BCSIM De_Dt Da Lg” can access
> the folder just fine via NFS.
> 
> 
> 
> If the user utest2 tries to access this folder from windows via CIFS he
gets
> access denied.
> 
> 
> 
> If I change the permissions so that other have r-x he can access the
folder
> but then I have no control on who can access the folder.
> 
> 
> 
> On our NetApp system this was working fine. I assume it has to do with the
> IDMAP daemon using ephemeral mappings instead of pulling the uidnumber
> and gidnumber from AD?
> 
> 
> 
> I don’t want to use extended ACLs on this folder.
> 
> 
> 
> Any ideas?
> 
> 
> 
> 
> 
> Oliver Weinmann
> Senior Unix VMWare, Storage Engineer
> 
&g

Re: [OmniOS-discuss] Losing NFS shares

2017-06-22 Thread Oliver Weinmann
Hi Dan,

Thanks for pointing this out. No, the service is not running:

svcs -a | grep cap



-Original Message-
From: Dan McDonald [mailto:dan...@kebe.com]
Sent: Donnerstag, 22. Juni 2017 14:10
To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>; Dan McDonald 
<dan...@kebe.com>
Cc: Tobias Oetiker <t...@oetiker.ch>; omnios-discuss 
<omnios-discuss@lists.omniti.com>
Subject: Re: [OmniOS-discuss] Loosing NFS shares


> On Jun 22, 2017, at 3:13 AM, Oliver Weinmann 
> <oliver.weinm...@telespazio-vega.de> wrote:
>
> Hi,
>
> Don’t think so:
>
> svcs -vx rcapd
>
> shows nothing.

You're not looking for the right thing.

neuromancer(~)[0]% pgrep rcapd
340
neuromancer(~)[0]% svcs -a | grep cap
online May_12   svc:/system/rcap:default
neuromancer(~)[0]% svcs -xv rcap
svc:/system/rcap:default (resource capping daemon)
 State: online since Fri May 12 02:12:40 2017
   See: man -M /usr/share/man -s 1M rcapd
   See: man -M /usr/share/man -s 1M rcapstat
   See: man -M /usr/share/man -s 1M rcapadm
   See: /var/svc/log/system-rcap:default.log
Impact: None.
neuromancer(~)[0]% su troot
Password:
OmniOS 5.11 omnios-r151022-f9693432c2   May 2017
(0)# svcadm disable rcap
(0)#


Hope this helps,
Dan



Re: [OmniOS-discuss] Losing NFS shares

2017-06-22 Thread Oliver Weinmann
Hi,

Running zfs mount -a from / shows the same errors.

I now ran the following commands to correct the mountpoints:

Re-enable inheritance:

zfs inherit -r mountpoint hgst4u60/ReferencePR

Reset mountpoint on the root folder:

zfs set mountpoint=/hgst4u60/ReferencePR hgst4u60/ReferencePR

unmount all subfolders:

for fs in `zfs mount | grep ReferencePR | awk '{print $2}'`; do zfs unmount 
$fs; done

Check that all subfolders are unmounted:

zfs mount | grep ReferencePR

Check that all folders are empty, just to be sure!!!:

du  /hgst4u60/ReferencePR

If they are empty remove the root folder:

rm -rf /hgst4u60/ReferencePR

Finally remount:

zfs mount -a


This solves the mount issues, but I wonder why this happened, and hopefully 
it doesn't happen again.
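The recovery steps above, collected into one dry-run script. The dataset name is the one from this thread; run() only echoes each command, so nothing destructive happens until the echo is removed:

```shell
# Dry-run version of the mountpoint recovery described above.
# run() echoes each command instead of executing it.
run() { echo "+ $*"; }

DS=hgst4u60/ReferencePR
MNT=/hgst4u60/ReferencePR

run "zfs inherit -r mountpoint ${DS}"   # re-enable inheritance
run "zfs set mountpoint=${MNT} ${DS}"   # reset the root mountpoint
# unmount every subfolder (guarded so the loop is a no-op on a machine
# without the pool, e.g. when rehearsing this dry run elsewhere):
for fs in $(zfs mount 2>/dev/null | grep ReferencePR | awk '{print $2}'); do
  run "zfs unmount ${fs}"
done
run "du ${MNT}"      # verify the stale directories are really empty first!
run "rm -rf ${MNT}"  # only then remove the stale directory tree
run "zfs mount -a"   # and remount everything
```

The "directory is not empty" errors happen because files (or leftover empty directories) ended up under the mountpoint while the dataset was unmounted; clearing the stale directory is what lets zfs mount succeed again.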




From: Sriram Narayanan [mailto:sriram...@gmail.com]
Sent: Donnerstag, 22. Juni 2017 10:26
To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>
Cc: Stephan Budach <stephan.bud...@jvm.de>; omnios-discuss 
<omnios-discuss@lists.omniti.com>
Subject: Re: [OmniOS-discuss] Loosing NFS shares



On Thu, Jun 22, 2017 at 3:45 PM, Oliver Weinmann 
<oliver.weinm...@telespazio-vega.de> wrote:
One more thing I just noticed is that the system seems to be unable to mount 
directories:

root@omnios01:/hgst4u60/ReferenceAC/AGDEMO# /usr/sbin/zfs mount -a
cannot mount '/hgst4u60/ReferenceAC': directory is not empty
cannot mount '/hgst4u60/ReferenceDF': directory is not empty
cannot mount '/hgst4u60/ReferenceGI': directory is not empty
cannot mount '/hgst4u60/ReferenceJL': directory is not empty
cannot mount '/hgst4u60/ReferenceMO': directory is not empty
cannot mount '/hgst4u60/ReferencePR': directory is not empty
cannot mount '/hgst4u60/ReferenceSU': directory is not empty
cannot mount '/hgst4u60/ReferenceVX': directory is not empty
cannot mount '/hgst4u60/ReferenceYZ': directory is not empty

Maybe this is where all problems are coming from?

Please issue the zfs mount -a command from elsewhere rather than from within 
"/hgst4u60/ReferenceAC"

It also seems that "" and the others may already have local files. If possible, 
then rename those Reference* directories and issue the zfs mount -a again.


From: Stephan Budach [mailto:stephan.bud...@jvm.de]
Sent: Donnerstag, 22. Juni 2017 09:30
To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>
Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
Subject: Re: [OmniOS-discuss] Loosing NFS shares

Hi Oliver,


From: "Oliver Weinmann" <oliver.weinm...@telespazio-vega.de>
To: "Tobias Oetiker" <t...@oetiker.ch>
CC: "omnios-discuss" <omnios-discuss@lists.omniti.com>
Sent: Donnerstag, 22. Juni 2017 09:13:27
Subject: Re: [OmniOS-discuss] Loosing NFS shares

Hi,

Don’t think so:

svcs -vx rcapd

shows nothing.




From: Tobias Oetiker [mailto:t...@oetiker.ch]
Sent: Donnerstag, 22. Juni 2017 09:11
To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>
Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
Subject: Re: [OmniOS-discuss] Loosing NFS shares
Subject: Re: [OmniOS-discuss] Loosing NFS shares

Oliver,

are you running rcapd? We found that (at least out of the box) it wreaks havoc on both
nfs and iscsi sharing ...

cheers
tobi

- On Jun 22, 2017, at 8:45 AM, Oliver Weinmann <oliver.weinm...@telespazio-vega.de> wrote:
Hi,

we are using OmniOS for a few months now and have big tr

Re: [OmniOS-discuss] Loosing NFS shares

2017-06-22 Thread Oliver Weinmann
One more thing I just noticed is that the system seems to be unable to mount 
directories:

root@omnios01:/hgst4u60/ReferenceAC/AGDEMO# /usr/sbin/zfs mount -a
cannot mount '/hgst4u60/ReferenceAC': directory is not empty
cannot mount '/hgst4u60/ReferenceDF': directory is not empty
cannot mount '/hgst4u60/ReferenceGI': directory is not empty
cannot mount '/hgst4u60/ReferenceJL': directory is not empty
cannot mount '/hgst4u60/ReferenceMO': directory is not empty
cannot mount '/hgst4u60/ReferencePR': directory is not empty
cannot mount '/hgst4u60/ReferenceSU': directory is not empty
cannot mount '/hgst4u60/ReferenceVX': directory is not empty
cannot mount '/hgst4u60/ReferenceYZ': directory is not empty

Maybe this is where all problems are coming from?

 

From: Stephan Budach [mailto:stephan.bud...@jvm.de] 
Sent: Donnerstag, 22. Juni 2017 09:30
To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>
Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
Subject: Re: [OmniOS-discuss] Loosing NFS shares

 

Hi Oliver,

 


From: "Oliver Weinmann" <oliver.weinm...@telespazio-vega.de>
To: "Tobias Oetiker" <t...@oetiker.ch>
CC: "omnios-discuss" <omnios-discuss@lists.omniti.com>
Sent: Thursday, 22 June 2017 09:13:27
Subject: Re: [OmniOS-discuss] Loosing NFS shares

 

Hi,

Don’t think so:

svcs -vx rcapd

shows nothing.

 

 




From: Tobias Oetiker [mailto:t...@oetiker.ch]
Sent: Thursday, 22 June 2017 09:11
To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>
Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
Subject: Re: [OmniOS-discuss] Loosing NFS shares

 

Oliver,

 

are you running rcapd? We found that (at least out of the box) it wreaks havoc on both
nfs and iscsi sharing ...

 

cheers

tobi

 

- On Jun 22, 2017, at 8:45 AM, Oliver Weinmann <oliver.weinm...@telespazio-vega.de> wrote:

Hi,

 

We have been using OmniOS for a few months now and are having big trouble with stability. 
We mainly use it for VMware NFS datastores. The last three nights we lost all NFS 
datastores and VMs stopped running. I noticed that even though zfs get sharenfs 
shows the datasets as shared, they become inaccessible. Setting sharenfs to off and 
sharing again resolves the issue. I have no clue where to start. I'm fairly new 
to OmniOS.

 

Any help would be highly appreciated.

 

Thanks and Best Regards,

Oliver

 




 

What is the output from fmdump / fmdump -v? Also, it would be good to have a 
better understanding of your setup. We have been using NFS shares from OmniOS 
since r006 on OVM and also VMware, and at least the NFS part has always been 
very solid for us. So, how did you set up your storage, and how many NFS clients 
do you have?
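For reference, a sketch of the fault-management commands being asked for here. The fm_report wrapper is hypothetical; fmdump, fmdump -v, and fmdump -e are the real illumos commands, and the FMDUMP override exists only so the sketch can be exercised off-box:

```shell
# Hypothetical wrapper around the illumos fault-management log dumper.
# FMDUMP defaults to the real fmdump(1M); override it to dry-run elsewhere.
fm_report() {
    fmdump_cmd=${FMDUMP:-fmdump}
    "$fmdump_cmd"        # one-line summary of diagnosed faults
    "$fmdump_cmd" -v     # verbose view: affected FMRIs and FRUs
    "$fmdump_cmd" -e     # error telemetry log
}
```

On the affected box you would simply run fm_report (or the three fmdump invocations directly) and paste the output.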

 

Cheers,

Stephan



___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Loosing NFS shares

2017-06-22 Thread Oliver Weinmann
Repair Attempted

 

Problem in: svc:///network/ldap/client:default

   Affects: svc:///network/ldap/client:default

   FRU: -

  Location: -

 

May 23 16:58:42.2836 af1d96ae-489e-46cb-b4da-b5adf5780018 SMF-8000-YX Diagnosed

  100%  defect.sunos.smf.svc.maintenance

 

Problem in: svc:///network/ldap/client:default

   Affects: svc:///network/ldap/client:default

   FRU: -

  Location: -

 

May 23 17:01:53.4846 af1d96ae-489e-46cb-b4da-b5adf5780018 FMD-8000-4M Repaired

  100%  defect.sunos.smf.svc.maintenance
Repair Attempted

 

Problem in: svc:///network/ldap/client:default

   Affects: svc:///network/ldap/client:default

   FRU: -

  Location: -

 

May 23 17:01:53.4857 af1d96ae-489e-46cb-b4da-b5adf5780018 FMD-8000-6U Resolved

  100%  defect.sunos.smf.svc.maintenance
Repair Attempted

 

Problem in: svc:///network/ldap/client:default

   Affects: svc:///network/ldap/client:default

   FRU: -

  Location: -

 

May 24 10:04:18.5145 34f126dd-fd71-4de2-cafd-dd084438d63a SMF-8000-YX Diagnosed

  100%  defect.sunos.smf.svc.maintenance

 

Problem in: svc:///network/ldap/client:default

   Affects: svc:///network/ldap/client:default

   FRU: -

  Location: -

 

May 24 12:15:47.8823 34f126dd-fd71-4de2-cafd-dd084438d63a FMD-8000-4M Repaired

  100%  defect.sunos.smf.svc.maintenance
Repair Attempted

 

Problem in: svc:///network/ldap/client:default

   Affects: svc:///network/ldap/client:default

   FRU: -

  Location: -

 

May 24 12:15:47.8838 34f126dd-fd71-4de2-cafd-dd084438d63a FMD-8000-6U Resolved

  100%  defect.sunos.smf.svc.maintenance
Repair Attempted

 

Problem in: svc:///network/ldap/client:default

   Affects: svc:///network/ldap/client:default

   FRU: -

  Location: -

 

I do see a lot of errors from mountd in /var/adm/messages:

 

e.g.

 

Jun 21 15:01:22 omnios01 mountd[766]: [ID 801587 daemon.notice] 
/hgst4u60/ReferenceAC/AGDEMO/Software/Unix: No such file or directory

 

These directories do exist, though, and I can access them just fine.

 

root@omnios01:/hgst4u60# zfs list hgst4u60/ReferenceAC/AGDEMO

NAME  USED  AVAIL  REFER  MOUNTPOINT

hgst4u60/ReferenceAC/AGDEMO   256K  5.00G   256K  /hgst4u60/ReferenceAC/AGDEMO

 

root@omnios01:/hgst4u60/ReferenceAC/AGDEMO# df -h .

Filesystem Size   Used  Available Capacity  Mounted on

hgst4u60/ReferenceAC/AGDEMO

   5.0G   256K   5.0G 1%
/hgst4u60/ReferenceAC/AGDEMO

 

Also, an error from smbd is shown on the console very often:

 

Jun 21 16:26:46 omnios01 smbd[628]: [ID 160719 auth.alert] adt_set_user: 
Invalid argument

 

From: Stephan Budach [mailto:stephan.bud...@jvm.de] 
Sent: Donnerstag, 22. Juni 2017 09:30
To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>
Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
Subject: Re: [OmniOS-discuss] Loosing NFS shares

 

Hi Oliver,

 


From: "Oliver Weinmann" <oliver.weinm...@telespazio-vega.de>
To: "Tobias Oetiker" <t...@oetiker.ch>
CC: "omnios-discuss" <omnios-discuss@lists.omniti.com>
Sent: Thursday, 22 June 2017 09:13:27
Subject: Re: [OmniOS-discuss] Loosing NFS shares

 

Hi,

Don’t think so:

svcs -vx rcapd

shows nothing.

 

 




From: Tobias Oetiker [mailto:t...@oetiker.ch]
Sent: Thursday, 22 June 2017 09:11
To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>
Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
Subject: Re: [OmniOS-discuss] Loosing NFS shares

 

Oliver,

 

are you running rcapd? We found that (at least out of the box) it wreaks havoc on both
nfs and iscsi sharing ...

 

cheers

tobi

 

- On Jun 22, 2017, at 8:45 AM, Oliver Weinmann <oliver.weinm...@telespazio-vega.de> wrote:

Hi,

 

we are using OmniOS for a few months now and have big trouble with stability. 
We mainly use it for VMware NFS datastores. The last 3 nights we lost all NFS 
datastores and VMs stopped running. I noticed that eve

Re: [OmniOS-discuss] Loosing NFS shares

2017-06-22 Thread Oliver Weinmann
Hi,

Don’t think so:

svcs -vx rcapd

shows nothing.




From: Tobias Oetiker [mailto:t...@oetiker.ch]
Sent: Thursday, 22 June 2017 09:11
To: Oliver Weinmann <oliver.weinm...@telespazio-vega.de>
Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
Subject: Re: [OmniOS-discuss] Loosing NFS shares

Oliver,

are you running rcapd? We found that (at least out of the box) it wreaks havoc on both
nfs and iscsi sharing ...

cheers
tobi

- On Jun 22, 2017, at 8:45 AM, Oliver Weinmann <oliver.weinm...@telespazio-vega.de> wrote:

Hi,

We have been using OmniOS for a few months now and are having big trouble with stability. 
We mainly use it for VMware NFS datastores. The last three nights we lost all NFS 
datastores and VMs stopped running. I noticed that even though zfs get sharenfs 
shows the datasets as shared, they become inaccessible. Setting sharenfs to off and 
sharing again resolves the issue. I have no clue where to start. I'm fairly new 
to OmniOS.

Any help would be highly appreciated.

Thanks and Best Regards,
Oliver




___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com<mailto:OmniOS-discuss@lists.omniti.com>
http://lists.omniti.com/mailman/listinfo/omnios-discuss

--
Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland
www.oetiker.ch t...@oetiker.ch +41 62 775 9902
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


[OmniOS-discuss] Loosing NFS shares

2017-06-22 Thread Oliver Weinmann
Hi,

We have been using OmniOS for a few months now and are having big trouble with stability. 
We mainly use it for VMware NFS datastores. The last three nights we lost all NFS 
datastores and VMs stopped running. I noticed that even though zfs get sharenfs 
shows the datasets as shared, they become inaccessible. Setting sharenfs to off and 
sharing again resolves the issue. I have no clue where to start. I'm fairly new 
to OmniOS.
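The off-then-on workaround described above can be wrapped so it is repeatable per dataset. A hedged sketch: reshare_nfs is a hypothetical helper, not an OmniOS command, and the ZFS_CMD override exists only so the logic can be exercised without a pool.

```shell
# Hypothetical helper: re-apply the sharenfs property on a dataset,
# i.e. the manual off-then-on workaround that restores the share.
reshare_nfs() {
    ds=$1
    opts=${2:-on}              # original share options, default plain "on"
    zfs_cmd=${ZFS_CMD:-zfs}    # real zfs(1M) unless overridden
    "$zfs_cmd" set sharenfs=off "$ds" &&
    "$zfs_cmd" set sharenfs="$opts" "$ds"
}
```

For example, reshare_nfs hgst4u60/ReferenceAC would toggle that dataset's share off and back on; this only masks the symptom, so the fmdump and log output should still be investigated.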

Any help would be highly appreciated.

Thanks and Best Regards,
Oliver



___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss