Re: [OmniOS-discuss] OmniOS r151018 is now out!

2016-04-17 Thread Mark Kushigian
There seems to be an issue with time zones. During a fresh installation,
I attempted to choose the US Eastern time zone. For one thing, it is not
listed; the entry only says "(most areas)". I chose it anyway, and the
installation then fails, reporting "Timezone value specified (Eastern) is not
valid". If I choose UTC instead, it installs fine. I have screenshots but I'm
new to this forum and don't know how to include them.

___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


[OmniOS-discuss] heavy NFS load causing server to become unavailable from memory starvation.

2016-04-17 Thread Doug Hughes
I've been seeing this a bit lately on a server with 96GB RAM where the zil is
limited to 48GB. Under heavy NFS load (caused by a user running a parallel
find/xargs/rm cleanup job), the machine goes into desperation memory reclaim
and becomes unreachable for a while. (The server is 24 x SSD, and I/O load
is ok.)


I have some mdb/kmastat/kmausers/memstat output.

Here's the kmastat breakdown (arena totals: memory in use, allocations succeeded, allocations failed):

Total [hat_memload]             15.8M      5155240       0
Total [kmem_msb]                22.0G    987112603       0
Total [kmem_firewall]            271M    411704261     895
Total [kmem_va]                 40.8G     22314937       0
Total [kmem_default]            31.8G   4275706645     619
Total [kmem_io_4P]              44.7M     26714639       0
Total [kmem_io_4G]               108K          908       0
Total [kmem_io_2G]               100K           68       0
Total [bp_map]                      0         7251       0
Total [umem_np]                     0          728       0
Total [zfs_file_data]            118M        93380       0
Total [zfs_file_data_buf]       19.4G     22030035       0
Total [segkp]                    256K         6787       0
Total [ip_minor_arena_sa]         512        72304       0
Total [ip_minor_arena_la]          64        29350       0
Total [spdsock]                     0            1       0
Total [namefs_inodes]              64           27       0
------------------------- ----------- ------------ -------


The kmausers output is almost 1 MB.
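For reference, a minimal sketch of the commands used to pull figures like the ones above (illumos/OmniOS, run as root with mdb; all read-only). Note that ::kmausers only gives detailed per-caller data when kmem auditing is enabled via kmem_flags:

```shell
# Page-level memory breakdown (kernel / ZFS file data / anon / free)
echo "::memstat"  | mdb -k

# Per-cache and per-arena kernel memory usage; the "Total [...]" lines
# at the end summarize each vmem arena
echo "::kmastat"  | mdb -k

# Largest kernel-memory consumers; output can be very large, and it
# needs kmem auditing (kmem_flags) enabled to show stack traces
echo "::kmausers" | mdb -k
```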



Re: [OmniOS-discuss] Slow scrub on SSD-only pool

2016-04-17 Thread Stephan Budach

Am 17.04.16 um 20:42 schrieb Dale Ghent:

On Apr 17, 2016, at 9:07 AM, Stephan Budach  wrote:

Well… searching the net somewhat more thoroughly, I came across an archived
discussion that deals with a similar issue. Somewhere down the
conversation, this parameter was suggested:

echo "zfs_scrub_delay/W0" | mdb -kw

I just tried that as well, and although the calculated speed climbs rather
slowly, iostat now shows approx. 380 MB/s read from the devices, which works
out to about 24 MB/s per single device (8 mirrors * 2 devices).

Being curious, I issued an echo "zfs_scrub_delay/W1" | mdb -kw to see what would
happen, and that command immediately dropped the rate on each device to 1.4 MB/s…

What is the rationale behind that? Who wants to wait weeks for a scrub to
finish? I usually have znapzend running as well, creating snapshots on a
regular basis. Wouldn't that hurt scrub performance even more?

zfs_scrub_delay is described here:

http://src.illumos.org/source/xref/illumos-gate/usr/src/uts/common/fs/zfs/dsl_scan.c#63

How busy are your disks if you subtract the IO caused by a scrub? Are you doing 
these scrubs with your VMs causing normal IO as well?

Scrubbing, overall, is treated as a background maintenance process. As such, it is 
designed to not interfere with "production IO" requests. It used to be that 
scrubs ran as fast as disk IO and bus bandwidth would allow, which in turn severely 
impacted the IO performance of running applications, and in some cases this would cause 
problems for production or user services.  The scrub delay setting which you've 
discovered is the main governor of this scrub throttle code[1], and by setting it to 0, 
you are effectively removing the delay it imposes on itself to allow 
non-scrub/resilvering IO requests to finish.

The solution in your case is specific to yourself and how you operate your 
servers and services. Can you accept degraded application IO while a scrub or 
resilver is running? Can you not? Maybe only during certain times?

/dale

[1] 
http://src.illumos.org/source/xref/illumos-gate/usr/src/uts/common/fs/zfs/dsl_scan.c#1841
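For anyone following along, here is a sketch of inspecting, clearing, and restoring the tunable with mdb (illumos/OmniOS, as root). The default of 4 ticks is what dsl_scan.c ships with at the time of writing; verify against your own build before relying on it:

```shell
# Read the current value as a 32-bit decimal
echo "zfs_scrub_delay/D" | mdb -k

# Remove the throttle for the duration of a scrub
echo "zfs_scrub_delay/W0" | mdb -kw

# Restore the stock value afterwards; 0t4 is mdb notation for decimal 4
echo "zfs_scrub_delay/W0t4" | mdb -kw
```

Like all mdb -kw writes, this change is live-kernel-only and does not survive a reboot.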
I do get the notion of this, but if the increase from 0 to 1 reduces the
throughput from 24 MB/s to 1.4 MB/s, that seems way overboard to me. Having
to wait a couple of hours when running with 0, as opposed to days (up
to 10) when running at 1, on a 1.3 TB zpool, doesn't seem like the
right trade-off. If this tunable offered some more room for choice, that
would be great, but it obviously doesn't.


It's the weekend and my VMs aren't exactly hogging their disks, so there
was plenty of I/O available… I wish this setting were more granular.


Anyway, the scrub finished a couple of hours later, and of course I can
always set this tunable to 0, should I need it.


Thanks,
Stephan


Re: [OmniOS-discuss] cifs anonymous troubles

2016-04-17 Thread Natxo Asenjo
hi Gordon,

On Sun, Apr 17, 2016 at 5:38 PM, Gordon Ross wrote:

> Hi Dan,
>
> So with that bug fixed, one can logon as "guest" only if:
> (1) you actually ask for guest in your logon request,
> (2) a local Unix account named "guest" exists, and
> (3) the guest account is enabled for SMB
>
> Therefore, if you were using guest access before 1122 was fixed,
> (and were depending on accidental guest access working),
> you'll need to do the following to re-enable guest access:
>
> useradd [options] guest
> smbadm enable-user guest


I confirm this works. Thanks!


Re: [OmniOS-discuss] cifs anonymous troubles

2016-04-17 Thread Gordon Ross
Hi Dan,

I can take a guess at what this might be about.

There were several bugs fixed as part of the "extended security" work:
1122 smbsrv should use SPNEGO (inbound authentication)

One of those was that we used to give a client a "guest" logon
if they tried to logon to SMB with _any_ unrecognized account.
No, that was never a good idea. Not only was it questionable
for security, but it confused issues about failed logon.  Example:
Windows user does NOT get the expected pop-up dialog asking
for new credentials when they try to connect to a share using
an invalid user name.  Instead, they would get connected,
but would fail to have access to anything in the share.

So with that bug fixed, one can logon as "guest" only if:
(1) you actually ask for guest in your logon request,
(2) a local Unix account named "guest" exists, and
(3) the guest account is enabled for SMB

Therefore, if you were using guest access before 1122 was fixed,
(and were depending on accidental guest access working),
you'll need to do the following to re-enable guest access:

useradd [options] guest
smbadm enable-user guest

The guest account password is ignored by SMB, so
all that matters to SMB is whether that account is
marked as enabled in /var/smb/smbpasswd.

To keep Unix users from using guest for login, you can
set the Unix password hash to something invalid, etc.
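Putting the steps above together, a minimal sketch (the -c comment is just an example, and passwd -N is one way to do the "invalid hash" part: on illumos it sets the no-login "NP" marker so the account cannot be used for Unix logins without being locked):

```shell
# Create the local Unix account that SMB guest logons map to
useradd -c "SMB guest" guest

# Mark the account as enabled for SMB in /var/smb/smbpasswd
smbadm enable-user guest

# Prevent Unix logins as "guest"; SMB ignores the Unix password anyway
passwd -N guest
```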

On Fri, Apr 15, 2016 at 4:05 PM, Natxo Asenjo  wrote:
> hi,
>
> trying to set up an anonymous share on workgroup mode  I do not get it
> working.
>
> I have a dataset tank/test with these sharesmb properties:
>
> zfs get sharesmb tank/testshare
> NAMEPROPERTY  VALUE   SOURCE
> tank/testshare  sharesmb  name=test,guestok=true  local
>
> These are the permissions on that path:
>
> # /usr/bin/ls -Vd /tank/testshare/
> drwxrwxrwx+ 14 root root  14 Sep 11  2015 /tank/testshare/
>   everyone@:rwxpdDaARWcCos:fd-:allow
>
> Using both a Windows client (Win 2012r2) and a Linux smbclient (Fedora 23),
> both quite modern, I cannot access the share:
>
> Linux smbclient:
> $ smbclient -U " " -L //192.168.0.172 -N
> Anonymous login successful
> Domain=[WORKGROUP] OS=[SunOS 5.11 omnios-r151018-ae314] Server=[Native SMB
> service]
>
> Sharename   Type  Comment
> -     ---
> c$  Disk  Default Share
>
> testDisk
> Connection to 192.168.0.172 failed (Error NT_STATUS_CONNECTION_REFUSED)
> NetBIOS over TCP disabled -- no workgroup available
>
>
> Windows client:
> C:\Users\Administrator>net view \\192.168.0.172
> System error 5 has occurred.
>
> Access is denied.
>
>
> Using a local user works, with smb2 ;-)
>
> Has anyone had success with guestok=true and cifs?
>
> --
> Groeten,
> natxo
>
>


[OmniOS-discuss] Slow scrub on SSD-only pool

2016-04-17 Thread Stephan Budach

Hi all,

I am running a scrub on an SSD-only zpool on r018. This zpool consists of
16 iSCSI targets, which are served from two other OmniOS boxes (currently
still running r016) over 10GbE connections.


This zpool serves as a NFS share for my Oracle VM cluster and it 
delivers reasonable performance. Even while the scrub is running, I can
get approx. 1200 MB/s throughput when dd'ing a vdisk from the ZFS
filesystem to /dev/null.


However, the running scrub is only progressing like this:

root@zfsha02gh79:/root# zpool status ssdTank
  pool: ssdTank
 state: ONLINE
  scan: scrub in progress since Sat Apr 16 23:37:52 2016
68,5G scanned out of 1,36T at 1,36M/s, 276h17m to go
0 repaired, 4,92% done
config:

NAME                                     STATE     READ WRITE CKSUM
ssdTank                                  ONLINE       0     0     0
  mirror-0                               ONLINE       0     0     0
    c3t600144F090D0961356B8A76C0001d0    ONLINE       0     0     0
    c3t600144F090D0961356B8A93C0009d0    ONLINE       0     0     0
  mirror-1                               ONLINE       0     0     0
    c3t600144F090D0961356B8A7BE0002d0    ONLINE       0     0     0
    c3t600144F090D0961356B8A948000Ad0    ONLINE       0     0     0
  mirror-2                               ONLINE       0     0     0
    c3t600144F090D0961356B8A7F10003d0    ONLINE       0     0     0
    c3t600144F090D0961356B8A958000Bd0    ONLINE       0     0     0
  mirror-3                               ONLINE       0     0     0
    c3t600144F090D0961356B8A7FC0004d0    ONLINE       0     0     0
    c3t600144F090D0961356B8A964000Cd0    ONLINE       0     0     0
  mirror-4                               ONLINE       0     0     0
    c3t600144F090D0961356B8A8210005d0    ONLINE       0     0     0
    c3t600144F090D0961356B8A96E000Dd0    ONLINE       0     0     0
  mirror-5                               ONLINE       0     0     0
    c3t600144F090D0961356B8A82E0006d0    ONLINE       0     0     0
    c3t600144F090D0961356B8A978000Ed0    ONLINE       0     0     0
  mirror-6                               ONLINE       0     0     0
    c3t600144F090D0961356B8A83B0007d0    ONLINE       0     0     0
    c3t600144F090D0961356B8A983000Fd0    ONLINE       0     0     0
  mirror-7                               ONLINE       0     0     0
    c3t600144F090D0961356B8A84A0008d0    ONLINE       0     0     0
    c3t600144F090D0961356B8A98E0010d0    ONLINE       0     0     0

errors: No known data errors

These are all 800GB Intel S3710s, and I can't figure out why
it's moving so slowly.
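As a sanity check, the "276h17m to go" estimate is consistent with the scanned/rate figures in the status output, assuming binary units for T, G and M:

```shell
# Cross-check the scrub ETA from the zpool status above:
# 1.36T total, 68.5G scanned, at a 1.36M/s scan rate.
remaining_gib=$(awk 'BEGIN { printf "%.1f", 1.36 * 1024 - 68.5 }')
eta_hours=$(awk 'BEGIN { printf "%.0f", (1.36 * 1024 - 68.5) * 1024 / 1.36 / 3600 }')
echo "remaining: ${remaining_gib} GiB, ETA: ~${eta_hours} hours"
# prints "remaining: 1324.1 GiB, ETA: ~277 hours"
```

That is roughly 11.5 days, so the kernel's estimate is just honest arithmetic on the 1.36 MB/s rate; the rate itself is the anomaly.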

Anything I can look at specifically?

Thanks,
Stephan