Hi,
we have an HGST 4U60 SATA JBOD with 24 x 10TB disks. I just noticed that back
when we created the pool we only cared about disk space, and so we created a
raidz2 pool with all 24 disks in one vdev. I have the impression that this is
great for disk space but really bad for IO, since this
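For context: a single raidz vdev delivers roughly the random IOPS of one member disk, so splitting the 24 disks into several smaller raidz2 vdevs multiplies random IO at the cost of some capacity. A minimal sketch of an alternative layout — the pool name and `c1tXd0` device names are placeholders, not the actual system's:

```shell
# Hypothetical rebuild: 4 x 6-disk raidz2 vdevs instead of 1 x 24-disk vdev.
# Device names are placeholders; check `format` output for the real ones.
# NOTE: this assumes the old pool is destroyed and the data restored from backup.
zpool create tank \
  raidz2 c1t0d0  c1t1d0  c1t2d0  c1t3d0  c1t4d0  c1t5d0  \
  raidz2 c1t6d0  c1t7d0  c1t8d0  c1t9d0  c1t10d0 c1t11d0 \
  raidz2 c1t12d0 c1t13d0 c1t14d0 c1t15d0 c1t16d0 c1t17d0 \
  raidz2 c1t18d0 c1t19d0 c1t20d0 c1t21d0 c1t22d0 c1t23d0
```

Usable space drops from about 22 disks' worth to about 16, but random IOPS roughly quadruple since each vdev can serve requests independently.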
McDonald :
On Jun 14, 2018, at 3:04 AM, Oliver Weinmann wrote:
I would be more than happy to use a zone instead of a full-blown VM, but since
there is no iSCSI or NFS server support in a zone I have to stick with the VM,
as we need NFS since the VM is also a datastore for a few VMs.
You
Dear All,
I’ve been struggling with this issue since day one and have not found any solution
for it yet. We use VEEAM to back up our VMs and an OmniOS VM as a CIFS target.
We have one OmniOS VM for internal use and one for the DMZ. VEEAM backups to the
internal one work fine. No problems at all. B
I think I really have to start investigating using 3rd party apps again.
Nexenta doesn't let me change the zfs send command. I can only adjust settings
for the autosync job.
Oliver Weinmann
Yes, it is recursive. We have hundreds of child datasets, so single filesystems
would be a real headache to maintain. :(
. I need to find some time to test it.
Best Regards,
Oliver
Oliver Weinmann
Head of Corporate ICT
Telespazio VEGA Deutschland GmbH
Europaplatz 5 - 64293 Darmstadt - Germany
Ph: +49 (0)6151 8257 744 | Fax: +49 (0)6151 8257 799
oliver.weinm...@telespazio-vega.de
www.telespazio-vega.de
ide so that the receiving side has to understand it.
Best Regards,
Oliver
vc/method/nfs-server stop
319"). ]
[ Apr 25 05:57:12 Method "stop" exited with status 0. ]
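When the NFS server's SMF method logs a clean stop like the line above, the service can be restarted and inspected with the standard illumos SMF tools (the FMRI below is the stock one):

```shell
svcadm restart svc:/network/nfs/server:default    # restart the NFS server service
svcs -xv nfs/server                               # show state and any fault explanation
tail /var/svc/log/network-nfs-server:default.log  # method output, like the lines above
```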
Best Regards,
Oliver
Hi,
I have done some more investigation and I found the cause for this problem. It
always happens when running zfs send from a Nexenta system.
Best Regards,
Oliver
would produce a
result similar to what you are seeing from bareos by default. >>
Thanks for this tip. I'm doing dump. I will now try tar instead. :)
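The reason tar helps here: dump/ufsdump read UFS on-disk structures directly and do not understand ZFS, while tar works through the normal POSIX filesystem interface and so archives any filesystem. A tiny self-contained demonstration (paths are placeholders, not the real backup paths):

```shell
# dump/ufsdump fail on ZFS datasets; tar reads through the POSIX layer,
# so it works regardless of the underlying filesystem.
mkdir -p /tmp/demo && echo hello > /tmp/demo/file.txt
tar cf /tmp/demo.tar -C /tmp demo
tar tf /tmp/demo.tar     # list archive contents as a sanity check
```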
's (zfs) subfolders. I can
get all the subfolders by specifying RECUSIVE=y as a meta parameter in bareos,
but the folders are all empty. No files are backed up this way. I wonder if I'm
just missing a setting?
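A likely explanation: each ZFS child dataset is a separate filesystem, and backup tools usually stop at mount-point boundaries by default, which would leave the subfolders present but empty. In Bareos this is controlled by the FileSet `onefs` option; a sketch, with the FileSet name and path purely hypothetical:

```
FileSet {
  Name = "zfs-tree"
  Include {
    Options {
      signature = MD5
      onefs = no        # descend into other filesystems (ZFS child datasets)
    }
    File = /tank/data   # hypothetical top-level dataset mountpoint
  }
}
```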
Oliver Weinmann
Senior Unix VMWare, Storage Engineer
Telespazio VEGA Deutsc
. :(
start investigating. I have nothing
suspicious in /var/adm/messages.
Best Regards,
Oliver
To: Oliver Weinmann
Cc: omnios-discuss
Subject: Re: [OmniOS-discuss] Ldap crash causing system to hang fixed in
illumos...
On Mon, 31 Jul 2017, Oliver Weinmann wrote:
> Hi Guys,
>
> I'm currently facing this bug under OmniOS 151022 and I just got informed
> that this has been fixed:
Hi Guys,
I'm currently facing this bug under OmniOS 151022 and I just got informed that
this has been fixed:
https://www.illumos.org/issues/8543
As this bug causes a complete system hang, only a reboot helps. Could this fix
maybe be integrated?
Best Regards,
Oliver
Hi,
We have a Nexenta system, and it has a few additional parameters for
zfs send and receive. These do not exist on OmniOS or OpenIndiana. I
found a very old feature request for this:
https://www.illumos.org/issues/2745
The main reason for this is to ensure that the replicated ZFS folders
failure. Minor code may provide more information (Client not found in Kerberos
database)
adutils: ldap_lookup_init failed
Hi,
What if I would like to report a possible bug? Do I need a valid support
contract for this?
Best Regards,
Oliver
and compared them to my test system but can't
find any difference. The only difference is that the production system was
upgraded twice from 1510xx to 1510xx to 151022.
I even deleted the computer object in AD and rejoined the domain but still the
same errors occur.
aclinherit. But so far this could be the solution I was looking
for. :)
-Original Message-
From: Jens Bauernfeind [mailto:bauernfe...@ipk-gatersleben.de]
Sent: Wednesday, 28 June 2017 08:09
To: Oliver Weinmann
Cc: omnios-discuss
Subject: RE: [OmniOS-discuss] CIFS access to a folder with traditional
(owner:group:other) Unix permissions
Yeah, AD with IDMU
According to this page (very old, but still the truth), you can't live
without A
From: Jens Bauernfeind [mailto:bauernfe...@ipk-gatersleben.de]
Sent: Tuesday, 27 June 2017 15:19
To: Oliver Weinmann
Cc: omnios-discuss
Subject: RE: [OmniOS-discuss] CIFS access to a folder with traditional
(owner:group:other) Unix permissions
also r151022
What is your /etc/nsswitch.conf saying?
Mine has nearly everywhere "files ldap", ex
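For comparison, a typical /etc/nsswitch.conf for combined local and LDAP lookups on illumos looks roughly like this (an illustrative excerpt, not the poster's actual file):

```
# /etc/nsswitch.conf (excerpt, illustrative)
passwd:     files ldap
group:      files ldap
hosts:      files dns
netgroup:   ldap
```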
What version of omnios are you using? I'm using R151022.
-Original Message-
From: Jens Bauernfeind [mailto:bauernfe...@ipk-gatersleben.de]
Sent: Tuesday, 27 June 2017 14:47
To: Oliver Weinmann
Cc: omnios-discuss
Subject: RE: [OmniOS-discuss] CIFS access to a folder with tradit
e that there is a
link between the problems and auto-sync or snapshotting.
Hi Dan,
Thanks for pointing this out. No the service is not running:
svcs -a | grep cap
Then remove the root folder:
rm -rf /hgst4u60/ReferencePR
Finally remount:
zfs mount -a
This solves the mount issues, but I wonder why this happened? And hopefully
it doesn't happen again?
cannot mount '/hgst4u60/ReferenceSU': directory is not empty
cannot mount '/hgst4u60/ReferenceVX': directory is not empty
cannot mount '/hgst4u60/ReferenceYZ': directory is not empty
Maybe this is where all problems are coming from?
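The "directory is not empty" errors mean stray files ended up inside the mountpoint directory while the dataset was not mounted (for example, a process writing there during boot before the mount happened). A way to check and recover, using the pool name from the messages above — double-check what `ls` shows before running any `rm`:

```shell
zfs unmount hgst4u60/ReferenceSU 2>/dev/null  # make sure the dataset itself is unmounted
ls -A /hgst4u60/ReferenceSU                   # whatever shows up here is blocking the mount
# Only if the contents are stray leftovers (the real data lives in the dataset):
rm -rf /hgst4u60/ReferenceSU
zfs mount -a                                  # remount everything
```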
From: Stephan Budach [mailto:stephan.bud...@jvm.de]
Sent: Thursday, 22 June 2017 09:30
To: Oliver Weinmann
Cc: omnios-discuss
Subject: Re: [OmniOS-discuss] Loosing NFS shares
Hi Oliver,
From: "Oliver Weinmann" <mailto:oliver.weinm...@telespazio-vega.de>
To: "Tobias Oetiker" mailt
Hi,
Don’t think so:
svcs -vx rcapd
shows nothing.
. Setting sharenfs to off and
sharing again solves the issue. I have no clue where to start. I'm fairly new
to OmniOS.
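The workaround described can be scripted while the root cause is investigated (the dataset name below is a placeholder, not the actual one):

```shell
# Re-publish an NFS share that silently disappeared (dataset name is hypothetical)
zfs set sharenfs=off hgst4u60/export
zfs set sharenfs=on  hgst4u60/export
showmount -e localhost   # verify the export is visible again
```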
Any help would be highly appreciated.
Thanks and Best Regards,
Oliver