[OmniOS-discuss] Poor read performance on fresh zpool

2018-06-14 Thread gijs

Hi,

On an OmniOSce r151022 system we have rebuilt our ZFS pool. The pool is 
constructed of 18 mirror vdevs, each consisting of two 1.8TB SAS 
drives. Since about half the disks are 512n (native 512-byte sectors) 
and the other half are 512e (logical 512, physical 4k), we opted to 
force 4k sectors via sd.conf for all devices; thus all vdevs report 
ashift=12. During testing today we experienced extremely poor read 
performance: sequential reads with dd from the pool run at about 
300MB/s, and benchmarking with bonnie++ gives about 600MB/s.
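
For reference, the sd.conf override we used is roughly the following; 
the inquiry strings below are placeholders for the actual vendor/model 
of our drives (vendor padded to 8 characters), and a reconfigure reboot 
is needed for it to take effect:

    # /kernel/drv/sd.conf -- force 4k physical sector size reporting
    # ("VENDOR  MODEL" is a placeholder for the real inquiry strings)
    sd-config-list =
        "VENDOR  MODEL", "physical-block-size:4096";

The dd test was along these lines (file name is illustrative):

    # sequential read of a large file from the pool
    dd if=/tank/bigfile of=/dev/null bs=1024k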

Write performance is as expected, around 1.4GB/s.
Scrub also performs as expected: zpool status shows a scrub speed of 
2GB/s, and iostat -x shows a total pool throughput of 4GB/s.
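
In case it helps, this is roughly how we verified the ashift and 
watched the disks during a test; "tank" is a stand-in for our actual 
pool name:

    # every vdev should report ashift: 12 in the cached config
    zdb -C tank | grep ashift

    # per-vdev throughput while the dd read is running
    zpool iostat -v tank 5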


Any hints as to what might be causing our poor read performance?

Sincerely,

Gijs Peskens


Re: [OmniOS-discuss] VEEAM backups to CIFS fail only on OmniOS hyperconverged VM

2018-06-14 Thread Dan McDonald



> On Jun 14, 2018, at 3:04 AM, Oliver Weinmann  wrote:
> I would be more than happy to use a zone instead of a full-blown VM, but 
> since there is no iSCSI or NFS server support in a zone I have to stick 
> with the VM, as we need NFS since the VM is also a datastore for a few VMs.

You rambled a bit here, so I'm not sure what exactly you're asking.  I do know 
that:

- CIFS-server-in-a-zone should work (see the sketch after this list).

- NFS-server-in-a-zone and iSCSI-target-in-a-zone are both unavailable right 
now.
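
For the CIFS case, the rough shape is a zone with a delegated ZFS 
dataset on which SMB sharing is enabled. A minimal sketch; the zone 
name, zonepath, and dataset are all made-up examples:

    # configure a zone and delegate a dataset to it (names are examples)
    zonecfg -z cifszone \
        "create; set zonepath=/zones/cifszone; add dataset; set name=tank/cifszone; end; commit"
    zoneadm -z cifszone install
    zoneadm -z cifszone boot

    # inside the zone: share a child of the delegated dataset over SMB
    zfs create tank/cifszone/share
    zfs set sharesmb=on tank/cifszone/share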

There is purported to be a prototype of NFS-server-in-a-zone kicking around 
*somewhere*, but it may have been tied up.  I'd watch the distros, especially 
those working on file service, to see if it shows up at some point, from where 
it can be upstreamed to illumos-gate (and then back down to illumos-omnios).

Dan



[OmniOS-discuss] VEEAM backups to CIFS fail only on OmniOS hyperconverged VM

2018-06-14 Thread Oliver Weinmann



Dear All,



I’ve been struggling with this issue since day one and have not found a 
solution yet. We use VEEAM to back up our VMs, with an OmniOS VM as the CIFS 
target. We have one OmniOS VM for the internal network and one for the DMZ. 
VEEAM backups to the internal one work fine, no problems at all. Backups to 
the DMZ one fail every time, even though I can access the CIFS share just 
fine from Windows: when the backup starts, two or three VMs are backed up 
and then it fails.

I requested support from VEEAM, and it turns out the same job running 
against a Windows server CIFS share works just fine. I couldn’t believe that 
OmniOS was the culprit, as the CIFS implementation from illumos is very 
good. So I set up a new OmniOS bare-metal server, created a zone for the 
DMZ, set up a CIFS share in it, and ran the same job: everything works fine. 
I compared the settings of the VM and the zone and they are 100% identical; 
the only difference is that one is a VM and the other is a zone. But since 
the VEEAM backup to the internal VM has no problems, I don’t think 
virtualization is the problem here. Is there anywhere I can start 
investigating further?

I would be more than happy to use a zone instead of a full-blown VM, but 
since there is no iSCSI or NFS server support in a zone I have to stick 
with the VM, as we need NFS since the VM is also a datastore for a few VMs.
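
For completeness, both shares were created the usual way, roughly as 
below; the pool, dataset, and share names are examples, not our real 
ones:

    # create a dataset tuned for SMB and publish it
    zfs create -o casesensitivity=mixed -o nbmand=on tank/veeam
    zfs set sharesmb=name=veeam tank/veeam

    # make the SMB server a workgroup member so Windows clients can
    # authenticate (or smbadm join -u for an AD domain)
    smbadm join -w WORKGROUP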



Any help is really appreciated.



Best Regards,

Oliver
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss