[OmniOS-discuss] OmniOS Panic on high ZFS Write Load

2014-05-15 Thread Rune Tipsmark
same, easy to reproduce. Googled forever but found nothing. Does anyone have any idea? I don't really want to abandon ZFS just yet. Venlig hilsen / Best regards, Rune Tipsmark ___ OmniOS-discuss mailing list OmniOS-di

Re: [OmniOS-discuss] OmniOS Panic on high ZFS Write Load

2014-05-16 Thread Rune Tipsmark
Hi guys, After having tried various distros as mentioned and after having tried SLC and MLC PCI-E devices as well as SSD disks I think I actually found the issue. Previously I had a bunch of SATA disks connected to my SAS controller as well as a bunch of SAS disks... now that I removed the SATA

Re: [OmniOS-discuss] OmniOS Panic on high ZFS Write Load

2014-05-16 Thread Rune Tipsmark
SAS expander and 9 western digital WD4003FZEX Now with 10 Seagate ST4000NM0023 instead things seem to work much better. /Rune -Original Message- From: Dan McDonald [mailto:dan...@omniti.com] Sent: Friday, May 16, 2014 10:41 AM To: Rune Tipsmark Cc: omnios-discuss@lists.omniti.com

Re: [OmniOS-discuss] Slow write performance

2014-05-16 Thread Rune Tipsmark
opied, 50.2413 s, 408 MB/s Maybe try creating a pool from disks on one of the controllers and test. Venlig hilsen / Best regards, Rune Tipsmark From: OmniOS-discuss [mailto:omnios-discuss-boun...@lists.omniti.com] On Behalf Of Matthew Lagoe Sent: Friday, May 16, 2014 9:15 PM To: omnios-di

[OmniOS-discuss] Onboard Intel X540-T2 10gbe NIC shows 1gbit

2014-06-09 Thread Rune Tipsmark
ready - no difference. Venlig hilsen / Best regards, Rune Tipsmark ___ OmniOS-discuss mailing list OmniOS-discuss@lists.omniti.com http://lists.omniti.com/mailman/listinfo/omnios-discuss

Re: [OmniOS-discuss] Onboard Intel X540-T2 10gbe NIC shows 1gbit

2014-06-09 Thread Rune Tipsmark
- From: Dan McDonald [mailto:dan...@omniti.com] Sent: Monday, June 09, 2014 4:26 PM To: Rune Tipsmark Cc: omnios-discuss Subject: Re: [OmniOS-discuss] Onboard Intel X540-T2 10gbe NIC shows 1gbit On Jun 9, 2014, at 7:14 PM, Rune Tipsmark wrote: > As stated above, > > Got a Super Mi

Re: [OmniOS-discuss] Onboard Intel X540-T2 10gbe NIC shows 1gbit

2014-06-09 Thread Rune Tipsmark
Thanks but it is already the latest version, I don’t know if there is a specific firmware available for the onboard LAN only. Br, Rune From: Chih-Hung Hsieh [mailto:flight@gmail.com] Sent: Monday, June 09, 2014 5:29 PM To: Rune Tipsmark Cc: Dan McDonald; omnios-discuss Subject: Re: [OmniOS

Re: [OmniOS-discuss] Onboard Intel X540-T2 10gbe NIC shows 1gbit

2014-06-16 Thread Rune Tipsmark
? Br, Rune -Original Message- From: Ian Collins [mailto:i...@ianshome.com] Sent: Monday, June 09, 2014 6:57 PM To: Rune Tipsmark Cc: omnios-discuss Subject: Re: [OmniOS-discuss] Onboard Intel X540-T2 10gbe NIC shows 1gbit Rune Tipsmark wrote: > > As stated above, > > Got a

[OmniOS-discuss] ZFS pool allocation remains after removing all files

2014-10-07 Thread Rune Tipsmark
hi guys, wondering if someone might know why my pool is still allocated 1.45T after I removed the files on the LU's provisioned onto that pool. pool: pool02 state: ONLINE scan: scrub in progress since Wed Oct 8 02:31:41 2014 29.1G scanned out of 1.45T at 52.4M/s, 7h54m to go 0 repai

Re: [OmniOS-discuss] ZFS pool allocation remains after removing all files

2014-10-09 Thread Rune Tipsmark
from vmware client, that you delete your data. Filip -- Date: Tue, 7 Oct 2014 23:00:47 + From: Rune Tipsmark To: omnios-discuss Subject: [OmniOS-discuss] ZFS pool allocation remains after removing all files Messa

Re: [OmniOS-discuss] ZFS pool allocation remains after removing all files

2014-10-09 Thread Rune Tipsmark
scuss@lists.omniti.com Cc: Rune Tipsmark Subject: Re: [OmniOS-discuss] ZFS pool allocation remains after removing all files Hello, yes, VAAI could be probably the answer for vmware, and for example NexentaStor have some VAAI support as I know, but there were many problems with that (based on posts from use

Re: [OmniOS-discuss] ZFS pool allocation remains after removing all files

2014-10-09 Thread Rune Tipsmark
i.com] Sent: Thursday, October 09, 2014 8:46 AM To: Richard Elling Cc: Rune Tipsmark; Filip Marvan; omnios-discuss@lists.omniti.com Subject: Re: [OmniOS-discuss] ZFS pool allocation remains after removing all files On Oct 9, 2014, at 11:21 AM, Richard Elling wrote: > DanMcD will know for s

Re: [OmniOS-discuss] ZFS pool allocation remains after removing all files

2014-10-09 Thread Rune Tipsmark
On OmniOS v11 r151010 -Original Message- From: Dan McDonald [mailto:dan...@omniti.com] Sent: Thursday, October 09, 2014 11:11 AM To: Rune Tipsmark Cc: Richard Elling; Filip Marvan; omnios-discuss@lists.omniti.com Subject: Re: [OmniOS-discuss] ZFS pool allocation remains after removing

Re: [OmniOS-discuss] ZFS pool allocation remains after removing all files

2014-10-09 Thread Rune Tipsmark
So if I just upgrade to latest it should be supported? Rune -Original Message- From: Dan McDonald [mailto:dan...@omniti.com] Sent: Thursday, October 09, 2014 11:37 AM To: Rune Tipsmark Cc: Richard Elling; Filip Marvan; omnios-discuss@lists.omniti.com Subject: Re: [OmniOS-discuss] ZFS

Re: [OmniOS-discuss] ZFS pool allocation remains after removing all files

2014-10-09 Thread Rune Tipsmark
Is there a command I can run to check? Rune -Original Message- From: Dan McDonald [mailto:dan...@omniti.com] Sent: Thursday, October 09, 2014 11:51 AM To: Rune Tipsmark Cc: omnios-discuss Subject: Re: [OmniOS-discuss] ZFS pool allocation remains after removing all files On Oct 9

Re: [OmniOS-discuss] ZFS pool allocation remains after removing all files

2014-10-09 Thread Rune Tipsmark
ift = 0x10 vdev_mirror_shift = 0x15 zfs_vdev_aggregation_limit = 0x2 Rune -Original Message- From: OmniOS-discuss [mailto:omnios-discuss-boun...@lists.omniti.com] On Behalf Of Rune Tipsmark Sent: Thursday, October 09, 2014 3:33 PM To: Dan McDonald Cc: omnios-discuss Subject: Re: [OmniOS-di

Re: [OmniOS-discuss] ZFS pool allocation remains after removing all files

2014-10-10 Thread Rune Tipsmark
] Sent: Friday, October 10, 2014 10:01 AM To: Rune Tipsmark Cc: Dan McDonald; omnios-discuss Subject: Re: [OmniOS-discuss] ZFS pool allocation remains after removing all files On Oct 9, 2014, at 4:58 PM, Rune Tipsmark wrote: > Just updated to latest version r151012 > > Still same... I ch

[OmniOS-discuss] zfs pool 100% busy, disks less than 10%

2014-10-30 Thread Rune Tipsmark
Hi all, Hope someone can help me get this pool running as it should, I am seeing something like 200-300 MB/sec max which is much much less than I want to see... 11 mirrored vdevs... 2 spares and 2 SLOG devices, 192gb ram in host... Why is this pool showing near 100% busy when the underlying dis

Re: [OmniOS-discuss] zfs pool 100% busy, disks less than 10%

2014-10-31 Thread Rune Tipsmark
g.com] Sent: Friday, October 31, 2014 9:03 AM To: Eric Sproul Cc: Rune Tipsmark; omnios-discuss@lists.omniti.com Subject: Re: [OmniOS-discuss] zfs pool 100% busy, disks less than 10% On Oct 31, 2014, at 7:14 AM, Eric Sproul wrote: > On Fri, Oct 31, 2014 at 2:33 AM, Rune Tipsmark wrote: > >

Re: [OmniOS-discuss] zfs pool 100% busy, disks less than 10%

2014-10-31 Thread Rune Tipsmark
-discuss-boun...@lists.omniti.com] On Behalf Of Rune Tipsmark Sent: Friday, October 31, 2014 12:38 PM To: Richard Elling; Eric Sproul Cc: omnios-discuss@lists.omniti.com Subject: Re: [OmniOS-discuss] zfs pool 100% busy, disks less than 10% Ok, makes sense. What other kind of indicators can I look at

[OmniOS-discuss] zfs send via SRP or other RDMA enabled protocol

2014-11-01 Thread Rune Tipsmark
Hi all, Is it possible to do zfs send/recv via SRP or some other RDMA enabled protocol? IPoIB is really slow, about 50 MB/sec between two boxes, no disks are more than 10-15% busy. If not, is there a way I can aggregate say 8 or 16 IPoIB partitions and push throughput to a more reasonable sp

Re: [OmniOS-discuss] zfs send via SRP or other RDMA enabled protocol

2014-11-01 Thread Rune Tipsmark
://www.ssec.wisc.edu/~scottn/Lustre_ZFS_notes/lustre_zfs_srp_mirror.html Br, Rune From: David Bomba [mailto:turbo...@gmail.com] Sent: Saturday, November 01, 2014 6:01 PM To: Rune Tipsmark Cc: omnios-discuss@lists.omniti.com Subject: Re: [OmniOS-discuss] zfs send via SRP or other RDMA enabled

Re: [OmniOS-discuss] zfs send via SRP or other RDMA enabled protocol

2014-11-02 Thread Rune Tipsmark
8086,3c08@3/pci10b5,8616@0/pci10b5,8616@6/pci103c,178e@0 Specify disk (enter its number): -Original Message- From: Johan Kragsterman [mailto:johan.kragster...@capvert.se] Sent: Sunday, November 02, 2014 9:56 AM To: Rune Tipsmark Cc: David Bomba; omnios-discuss@lists.omniti.com Subje

Re: [OmniOS-discuss] zfs send via SRP or other RDMA enabled protocol

2014-11-02 Thread Rune Tipsmark
I know, but how do I initiate a session from ZFS10? Br, Rune From: Johan Kragsterman [mailto:johan.kragster...@capvert.se] Sent: Sunday, November 02, 2014 10:33 AM To: Rune Tipsmark Cc: David Bomba; omnios-discuss@lists.omniti.com Subject: Ang: RE: Re: [OmniOS-discuss] zfs send via SRP or other

Re: [OmniOS-discuss] zfs pool 100% busy, disks less than 10%

2014-11-02 Thread Rune Tipsmark
the same time at say 1.50 ratio, will the pool show 100 MB/sec and the client write 75 MB/sec actual? Br, Rune -Original Message- From: Richard Elling [mailto:richard.ell...@richardelling.com] Sent: Sunday, November 02, 2014 6:07 PM To: Rune Tipsmark Cc: Eric Sproul; omnios-discuss

Re: [OmniOS-discuss] zfs send via SRP or other RDMA enabled protocol

2014-11-02 Thread Rune Tipsmark
connectX2 and drivers are loaded, both OmniOS servers have LUNs I can access from both ESX and Windows... just the connection between them that I can't figure out. Br, Rune From: Johan Kragsterman [mailto:johan.kragster...@capvert.se] Sent: Sunday, November 02, 2014 10:49 PM To: Rune Tipsmark Cc

Re: [OmniOS-discuss] zfs send via SRP or other RDMA enabled protocol

2014-11-02 Thread Rune Tipsmark
PM To: Rune Tipsmark Cc: omnios-discuss@lists.omniti.com Subject: Ang: RE: RE: RE: RE: Re: [OmniOS-discuss] zfs send via SRP or other RDMA enabled protocol Hej! Hmm, how about the target/initiiator configuration of the HCA's? When I think about it, I have never done this that you're

Re: [OmniOS-discuss] zfs send via SRP or other RDMA enabled protocol

2014-11-03 Thread Rune Tipsmark
That was a secondary thought, maybe worth testing one day. Primarily I was looking at a way of speeding up zfs send-recv. Guess it's a no go on a single HCA... From: Johan Kragsterman [mailto:johan.kragster...@capvert.se] Sent: Monday, November 03, 2014 12:41 AM To: Rune Tipsmark Cc: o

Re: [OmniOS-discuss] infiniband

2014-11-09 Thread Rune Tipsmark
What network throughput were you looking at before the tweaking? Br, Rune -Original Message- From: OmniOS-discuss [mailto:omnios-discuss-boun...@lists.omniti.com] On Behalf Of Michael Rasmussen Sent: Sunday, November 09, 2014 5:21 PM To: omnios-discuss@lists.omniti.com Subject: Re: [Omni

[OmniOS-discuss] No space left on device - upgrade failed

2014-11-10 Thread Rune Tipsmark
Hi all, Hoping someone can help here. root@zfs00:~# /usr/bin/pkg update --be-name=omnios-r151012 entire@11,5.11-0.151012 Creating Plan |pkg: An error was encountered while attempting to store information about the current operation in client history. pkg: [Errno 28] No space left on device: '/

Re: [OmniOS-discuss] No space left on device - upgrade failed

2014-11-10 Thread Rune Tipsmark
>What is the dataset breakout (zfs list)? >Maybe you have reservations like swap and dump (volumes in general) - their >unused space is not available for other datasets and not allocated on >backend >storage either (what zpool list reflects). root@zfs00:~# zfs list NAME USE
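The reservation question quoted above (swap/dump zvols holding space that `zfs list` counts as used but that never shows as allocated in `zpool list`) can be inspected directly. A minimal sketch of the usual checks on OmniOS/illumos, assuming the default rpool layout (`rpool/swap`, `rpool/dump`); run as root:

```shell
# Per-dataset space breakdown: the USEDREFRESERV column shows space
# held by refreservations (e.g. swap/dump zvols) even when unwritten.
zfs list -o space

# Inspect the reservations on the usual suspects directly:
zfs get refreservation,volsize rpool/swap rpool/dump
```

If `USEDREFRESERV` on the swap/dump volumes accounts for the "missing" space, shrinking or removing those reservations frees it for other datasets.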

Re: [OmniOS-discuss] No space left on device - upgrade failed

2014-11-10 Thread Rune Tipsmark
-discuss-boun...@lists.omniti.com] On Behalf Of Michael Rasmussen Sent: Monday, November 10, 2014 3:47 PM To: omnios-discuss@lists.omniti.com Subject: Re: [OmniOS-discuss] No space left on device - upgrade failed On Mon, 10 Nov 2014 23:32:14 + Rune Tipsmark wrote: > > root@zfs00:~# zf

Re: [OmniOS-discuss] ZFS pool allocation remains after removing all files

2014-11-11 Thread Rune Tipsmark
ining unsupported options? Br, Rune -Original Message- From: OmniOS-discuss [mailto:omnios-discuss-boun...@lists.omniti.com] On Behalf Of Rune Tipsmark Sent: Friday, October 10, 2014 1:58 PM To: Richard Elling Cc: omnios-discuss Subject: Re: [OmniOS-discuss] ZFS pool allocation remains after rem

Re: [OmniOS-discuss] infiniband

2014-11-12 Thread Rune Tipsmark
: omnios-discuss@lists.omniti.com Subject: Re: [OmniOS-discuss] infiniband On Mon, 10 Nov 2014 04:38:23 + Rune Tipsmark wrote: > What network throughput were you looking at before the tweaking? > Raised my performance from 5.2 gbps to 7.9 gbps (50% performance increase) -- Hilsen/Regards M

[OmniOS-discuss] slog limits write speed more than it should

2014-11-12 Thread Rune Tipsmark
Hi all, Got a problem... with my pool using sync=always I see a max write speed of about 6000 IOPS (64KB block size) during storage vMotion. It doesn't matter if I have one, two or three SLOGs; if I use one it will just do ~6000 w/s, 0% busy (SLC IO Drive), if I use two of these each will do ~30

Re: [OmniOS-discuss] slog limits write speed more than it should

2014-11-12 Thread Rune Tipsmark
http://www.fusionio.com/products/iodrive 160GB SLC -Original Message- From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us] Sent: Wednesday, November 12, 2014 3:17 PM To: Rune Tipsmark Cc: omnios-discuss@lists.omniti.com Subject: Re: [OmniOS-discuss] slog limits write speed more

Re: [OmniOS-discuss] infiniband

2014-11-12 Thread Rune Tipsmark
-boun...@lists.omniti.com] On Behalf Of Michael Rasmussen Sent: Wednesday, November 12, 2014 3:48 PM To: omnios-discuss@lists.omniti.com Subject: Re: [OmniOS-discuss] infiniband How exactly do you configure this? Is a switch required? On Wed, 12 Nov 2014 23:19:24 + Rune Tipsmark wrote

Re: [OmniOS-discuss] infiniband

2014-11-12 Thread Rune Tipsmark
Subject: Re: [OmniOS-discuss] infiniband On Thu, 13 Nov 2014 00:22:40 + Rune Tipsmark wrote: > > ipadm create-addr -T static -a 10.98.0.10 p.ibp0/ipv4 ipadm > create-addr -T static -a 10.99.0.10 p.ibp1/ipv4 ipadm create-addr > -T static -a 10.98.0.12 p.ibp2/ipv4

Re: [OmniOS-discuss] infiniband

2014-11-12 Thread Rune Tipsmark
On Thu, 13 Nov 2014 00:22:40 + Rune Tipsmark wrote: > > ipadm create-addr -T static -a 10.98.0.10 p.ibp0/ipv4 ipadm > create-addr -T static -a 10.99.0.10 p.ibp1/ipv4 ipadm create-addr > -T static -a 10.98.0.12 p.ibp2/ipv4 ipadm create-addr -T static -a > 10.99.

Re: [OmniOS-discuss] slog limits write speed more than it should

2014-11-14 Thread Rune Tipsmark
drive and then back around to the first. You don't actually increase your throughput beyond the performance of a single drive. http://nex7.blogspot.com/2013/04/zfs-intent-log.html http://www.nexentastor.org/boards/5/topics/6179 On Wed, Nov 12, 2014 at 6:21 PM, Rune Tipsmark mai

Re: [OmniOS-discuss] slog limits write speed more than it should

2014-11-14 Thread Rune Tipsmark
Well that sucks... I guess one more reason to move to NV-Dimms to replace slow SLC cards. Br, Rune -Original Message- From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us] Sent: Friday, November 14, 2014 6:48 AM To: Rune Tipsmark Cc: omnios-discuss@lists.omniti.com Subject: Re

Re: [OmniOS-discuss] slog limits write speed more than it should

2014-11-14 Thread Rune Tipsmark
r file Br, Rune -Original Message- From: OmniOS-discuss [mailto:omnios-discuss-boun...@lists.omniti.com] On Behalf Of Rune Tipsmark Sent: Friday, November 14, 2014 9:47 AM To: Bob Friesenhahn Cc: omnios-discuss@lists.omniti.com Subject: Re: [OmniOS-discuss] slog limits write speed mo

Re: [OmniOS-discuss] slog limits write speed more than it should

2014-11-14 Thread Rune Tipsmark
stripes and get the speed I want. Copying ~60GB from one LUN to another on the same ZFS box. Sync=Always: [inline screenshot] Sync=Disabled: [inline screenshot] -Original Message- From: Rune Tipsmark Sent: Friday, November 14, 2014 11:53 AM To: Rune

[OmniOS-discuss] need to change c17d0 to c15d0

2014-11-19 Thread Rune Tipsmark
I moved one of my PCI-E IOdrives and the disks changed from c14d0 and c15d0 to c16d0 and c17d0 How do I change it back so I can get my pool back online? 32. c16d0 /pci@79,0/pci8086,3c02@1/pci10b5,8616@0/pci10b5,8616@5/pci103c,178e@0 33. c17d0 /pci@79,0/pci8086,3c02@1/pci10b

Re: [OmniOS-discuss] need to change c17d0 to c15d0

2014-11-19 Thread Rune Tipsmark
AM To: Rune Tipsmark Cc: omnios-discuss@lists.omniti.com Subject: Re: [OmniOS-discuss] need to change c17d0 to c15d0 Put the drive/hba back and then export your pool (zpool export pool-name). Then move the drive/hba and import the pool (zpool import pool-name). If putting the drive/hba back isn&#

Re: [OmniOS-discuss] need to change c17d0 to c15d0

2014-11-19 Thread Rune Tipsmark
0 0 0 cannot open -Original Message- From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us] Sent: Wednesday, November 19, 2014 7:00 AM To: Rune Tipsmark Cc: omnios-discuss@lists.omniti.com Subject: Re: [OmniOS-discuss] need to change c17d0 to c15d0 On Wed, 19 Nov

Re: [OmniOS-discuss] need to change c17d0 to c15d0

2014-11-19 Thread Rune Tipsmark
Zpool destroy pool02 and then zpool import -f pool02 worked. Br, Rune -Original Message- From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us] Sent: Wednesday, November 19, 2014 7:24 AM To: Rune Tipsmark Cc: omnios-discuss@lists.omniti.com Subject: RE: [OmniOS-discuss] need to
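The device-renumbering problem in this thread is normally handled with an export/import cycle, which makes ZFS rescan device paths and pick up the new cXdY names; destroying the pool first (as in the message) is the riskier route, since a destroyed pool is meant to be recovered with `zpool import -D`. A sketch of the safer sequence, using the pool name from the thread (run as root):

```shell
# Export before moving the controller so the pool closes cleanly.
zpool export pool02
# ...physically move the PCIe IODrive / HBA, then import:
# import rescans the default device directory and finds the vdevs
# under their new controller numbers automatically.
zpool import pool02
# If the devices sit in a non-default directory, point import at it:
zpool import -d /dev/dsk pool02
```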

[OmniOS-discuss] Active-Active vSphere

2014-11-27 Thread Rune Tipsmark
Hi guys, Does anyone know if Active/Active and Round Robin is supported from vSphere towards OmniOS ZFS on Fiber Channel? Br Rune ___ OmniOS-discuss mailing list OmniOS-discuss@lists.omniti.com http://lists.omniti.com/mailman/listinfo/omnios-discuss

Re: [OmniOS-discuss] Active-Active vSphere

2014-11-27 Thread Rune Tipsmark
... -Original Message- From: OmniOS-discuss [mailto:omnios-discuss-boun...@lists.omniti.com] On Behalf Of Saso Kiselkov Sent: Thursday, November 27, 2014 12:47 PM To: omnios-discuss@lists.omniti.com Subject: Re: [OmniOS-discuss] Active-Active vSphere On 11/27/14 3:35 PM, Rune Tipsmark wrote: >

Re: [OmniOS-discuss] Active-Active vSphere

2014-11-28 Thread Rune Tipsmark
Okay, I noticed ALUA support is not enabled by default on OmniOS. How do I enable that? > On Nov 27, 2014, at 11:29 PM, Saso Kiselkov wrote: > >> On 11/27/14 11:40 PM, Rune Tipsmark wrote: >> so to simplify, say we have one esxi host that has two FC ports, one >> o

Re: [OmniOS-discuss] PCIe dedicated device for ZIL

2014-12-05 Thread Rune Tipsmark
it's a good idea, I use Fusion IO SLC drives but I still see a limit of about 750 MB/sec which is too little; also I need to run multiple streams to achieve this, if I only use a single data stream I only get around 350 MB/sec. It makes no difference if I use 1 or 4 SLC drives, speed remains prett

[OmniOS-discuss] hangs on reboot

2014-12-11 Thread Rune Tipsmark
hi all, I got a bunch (3) of OmniOS installations on SuperMicro hardware and all 3 have issues rebooting. They simply hang and never ever reboot. The install is the latest version and I only added the storage-server package, installed napp-it and changed the fibre channel setting in /kernel

Re: [OmniOS-discuss] hangs on reboot

2014-12-11 Thread Rune Tipsmark
d has Infiniband as well... I am leaning towards something with the SuperMicro hardware but can't really pinpoint it. br, Rune From: Dan McDonald Sent: Thursday, December 11, 2014 11:39 PM To: Rune Tipsmark Cc: omnios-discuss@lists.omniti.com Subject: R

Re: [OmniOS-discuss] hangs on reboot

2014-12-11 Thread Rune Tipsmark
still same... output can be seen here: http://i.imgur.com/BuwaGGn.png From: Dan McDonald Sent: Thursday, December 11, 2014 11:39 PM To: Rune Tipsmark Cc: omnios-discuss@lists.omniti.com Subject: Re: [OmniOS-discuss] hangs on reboot Nothing printed out on the

Re: [OmniOS-discuss] hangs on reboot

2014-12-11 Thread Rune Tipsmark
ember 11, 2014 11:32 PM To: Rune Tipsmark; omnios-discuss@lists.omniti.com Subject: RE: [OmniOS-discuss] hangs on reboot > Rune Tipsmark > Sent: Thursday, December 11, 2014 2:26 PM > > I got a bunch (3) installations of omnios on SuperMicro hardware and all 3 > have issues rebooting.

[OmniOS-discuss] latency spikes ~every hour

2014-12-14 Thread Rune Tipsmark
hi all, All my vSphere (ESXi5.1) hosts experience a big spike in latency every hour or so. I tested on Infiniband iSER and SRP and also 4Gbit FC and 8GBit FC. All exhibit the same behavior so I don't think it's the connection that is causing this. When I modify the arc_shrink_shift 10 (192GB

Re: [OmniOS-discuss] Fibre Target problems

2014-12-14 Thread Rune Tipsmark
did you ever find a solution? I have the same problem on a SuperMicro based system... FC drops and it causes Windows to lose connection and copying files fails... br, Rune From: OmniOS-discuss on behalf of Mark Sent: Sunday, September 14, 2014 11:30 AM

Re: [OmniOS-discuss] latency spikes ~every hour

2014-12-15 Thread Rune Tipsmark
ok I removed some of my SLOG devices and currently I am only using a single SLOG (no mirror or anything) and no spikes seen since. I wonder why multiple SLOG devices would cause this. br. Rune From: OmniOS-discuss on behalf of Rune Tipsmark Sent: Sunday

Re: [OmniOS-discuss] Fibre Target problems

2014-12-15 Thread Rune Tipsmark
where do you check that? br, Rune From: Mark Sent: Monday, December 15, 2014 7:19 AM To: Rune Tipsmark; omnios-discuss@lists.omniti.com Subject: Re: [OmniOS-discuss] Fibre Target problems On 15/12/2014 4:44 a.m., Rune Tipsmark wrote: > did you ever fin

[OmniOS-discuss] dedup causes zfs/omnios to drop connections.

2014-12-15 Thread Rune Tipsmark
hi all, got a new system I was intending to use as a backup repository. Whenever dedup is enabled it dies after anywhere between 5 and 30 minutes. I need to reboot OmniOS to get it back online. The files being copied onto the ZFS vols are rather large, ~2TB each... if I copy smaller fi

Re: [OmniOS-discuss] dedup causes zfs/omnios to drop connections.

2014-12-15 Thread Rune Tipsmark
On 12/15/2014 09:53 PM, Dan McDonald wrote: > >> On Dec 15, 2014, at 3:43 PM, Rune Tipsmark wrote: >> >> hi all, >> >> got a new system I was intending on using as backup repository. Whenever >> dedup is enabled it dies after anywhere between 5 and 30 mi

[OmniOS-discuss] mount/create volume lu from snapshot

2014-12-22 Thread Rune Tipsmark
hi all, I have two omnios boxes and zfs replication going between the two every 30 min. I am replicating a volume lu pool01/vol01 from hostA to hostB how can I mount this or create a volume lu out of it on my destination box? br, Rune ___ Omn

Re: [OmniOS-discuss] mount/create volume lu from snapshot

2014-12-22 Thread Rune Tipsmark
? br, Rune From: Dan McDonald Sent: Monday, December 22, 2014 8:07 PM To: Rune Tipsmark Cc: omnios-discuss@lists.omniti.com Subject: Re: [OmniOS-discuss] mount/create volume lu from snapshot > On Dec 22, 2014, at 1:59 PM, Rune Tipsmark wrote: &g

[OmniOS-discuss] zfs destroy takes forever, how do I set async_destroy?

2014-12-29 Thread Rune Tipsmark
hi all, as stated above, I have a server where I am syncing some 70 or so TB from and it has some very large snapshots and a destroy takes forever... has run for hours and hours now, 100% disk busy...I read there was a feature flag async_destroy but I don't seem to be able to find it. Any id

Re: [OmniOS-discuss] zfs destroy takes forever, how do I set async_destroy?

2014-12-30 Thread Rune Tipsmark
I found out the feature is already enabled, I guess destroying very large snapshots just takes a very long time regardless... br, Rune From: OmniOS-discuss on behalf of Rune Tipsmark Sent: Monday, December 29, 2014 10:59 PM To: omnios-discuss Subject
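As the follow-up notes, the feature was already enabled; on OmniOS this can be confirmed, and the progress of a background destroy watched, via pool properties. A sketch, assuming the pool name from earlier threads (run as root):

```shell
# Is the async_destroy feature flag enabled/active on the pool?
zpool get feature@async_destroy pool02

# The "freeing" property reports how much space a background
# (asynchronous) destroy is still reclaiming; it drops toward 0
# as the destroy completes.
zpool get freeing pool02
```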

[OmniOS-discuss] offline dedup

2015-01-05 Thread Rune Tipsmark
hi all, does anyone know if offline dedup is something we can expect in the future of ZFS? I have some backup boxes with 50+TB on them and only 32GB Ram and even zdb -S crashes due to lack of memory. Seems complete overkill to put 256+GB ram in a slow backup box... and if I enable dedup as is,

[OmniOS-discuss] ZFS Volumes and vSphere Disks - Storage vMotion Speed

2015-01-19 Thread Rune Tipsmark
hi all, just in case there are other people out there using their ZFS box against vSphere 5.1 or later... I found my storage vmotion were slow... really slow... not much info available and so after a while of trial and error I found a nice combo that works very well in terms of performance, l

Re: [OmniOS-discuss] ZFS Volumes and vSphere Disks - Storage vMotion Speed

2015-01-19 Thread Rune Tipsmark
From: Richard Elling Sent: Monday, January 19, 2015 1:57 PM To: Rune Tipsmark Cc: omnios-discuss@lists.omniti.com Subject: Re: [OmniOS-discuss] ZFS Volumes and vSphere Disks - Storage vMotion Speed On Jan 19, 2015, at 3:55 AM, Rune Tipsmark mailto:r

Re: [OmniOS-discuss] VAAI Testing

2015-01-20 Thread Rune Tipsmark
I would be able to help test if it's stable in my environment as well. I can't program though. br, Rune From: OmniOS-discuss on behalf of W Verb Sent: Tuesday, January 20, 2015 3:59 AM To: omnios-discuss@lists.omniti.com Subject: [OmniOS-discuss] VAAI Testing

[OmniOS-discuss] iostat skip first output

2015-01-24 Thread Rune Tipsmark
hi all, I am just writing some scripts to gather performance data from iostat... or at least trying... I would like to completely skip the first output since boot from iostat output and just get right to the period I specified with the data current from that period. Is this possible at all? b

Re: [OmniOS-discuss] iostat skip first output

2015-01-24 Thread Rune Tipsmark
zation last $varInterval seconds; echo 0 disk_latency_${tokens[$i]} ms=${tokens[$i-3]} ${tokens[$i-3]} ms response time average last $varInterval seconds; done From: OmniOS-discuss on behalf of Rune Tipsmark Sent: Saturday, January 24, 2015 6:25 PM To:
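The first iostat report covers the since-boot averages, and the usual trick in scripts like the one quoted above is simply to filter it out. A minimal sketch of that filtering, assuming `iostat -xn` style output where every report begins with the "extended device statistics" header line (the exact header text can vary with flags/platform):

```shell
# Drop the first (since-boot) report from `iostat -xn <interval>`:
# every report starts with the "extended device statistics" header,
# so suppress all output until the second header has been seen.
skip_first_report() {
  awk '/extended device statistics/ { n++ } n >= 2'
}

# Demo on canned text shaped like two iostat -xn reports; in a real
# script you would pipe `iostat -xn 5` through skip_first_report.
printf '%s\n' \
  'extended device statistics' \
  '    r/s    w/s   device   (since boot)' \
  'extended device statistics' \
  '    r/s    w/s   device   (interval)' \
| skip_first_report
```

Only the second report (and any later ones) reaches the rest of the parsing pipeline, so the gathered numbers reflect the sampling interval rather than averages since boot.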

Re: [OmniOS-discuss] iostat skip first output

2015-01-24 Thread Rune Tipsmark
hi Richard, thanks for that input, will see what I can do with it. I do store data and graph it so I can keep track of things :) br, Rune From: Richard Elling Sent: Sunday, January 25, 2015 1:02 AM To: Rune Tipsmark Cc: omnios-discuss@lists.omniti.com Subject

[OmniOS-discuss] Windows crashes my ZFS box

2015-02-01 Thread Rune Tipsmark
hi all, I got some major problems... when using Windows and Fibre Channel I am able to kill my ZFS box totally for at least 15 minutes... it simply drops all connections to all hosts connected via FC. This happens under load, for example doing backups writing to the ZFS, running IO Meter agai

[OmniOS-discuss] ZFS Slog - force all writes to go to Slog

2015-02-18 Thread Rune Tipsmark
hi all, I found an entry about zil_slog_limit here: http://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSWritesAndZILII it basically explains how writes larger than 1MB by default hit the main pool rather than my Slog device - I could not find much further information nor the equivalent sett

Re: [OmniOS-discuss] ZFS Slog - force all writes to go to Slog

2015-02-18 Thread Rune Tipsmark
From: Richard Elling Sent: Thursday, February 19, 2015 1:27 AM To: Rune Tipsmark Cc: omnios-discuss@lists.omniti.com Subject: Re: [OmniOS-discuss] ZFS Slog - force all writes to go to Slog On Feb 18, 2015, at 12:04 PM, Rune Tipsmark mailto:r...@steait.net

Re: [OmniOS-discuss] QLE2652 I/O Disconnect. Heat Sinks?

2015-03-05 Thread Rune Tipsmark
Same problem here… have noticed I can cause this easily by using Windows as initiator… I cannot cause this using VMware as initiator… No idea how to fix, but a big problem. Br, Rune From: OmniOS-discuss [mailto:omnios-discuss-boun...@lists.omniti.com] On Behalf Of Nate Smith Sent: Thursday, Mar

Re: [OmniOS-discuss] QLE2652 I/O Disconnect. Heat Sinks?

2015-03-05 Thread Rune Tipsmark
: Thursday, March 05, 2015 8:10 AM To: Rune Tipsmark; omnios-discuss@lists.omniti.com Subject: RE: [OmniOS-discuss] QLE2652 I/O Disconnect. Heat Sinks? Do you see the same problem with Windows and iSCSI as an initiator? I wish there was a way to turn up debugging to figure this out. From: Rune Tipsmark

Re: [OmniOS-discuss] QLE2652 I/O Disconnect. Heat Sinks?

2015-03-05 Thread Rune Tipsmark
Pls see below >> -Original Message- From: Johan Kragsterman [mailto:johan.kragster...@capvert.se] Sent: Thursday, March 05, 2015 9:00 AM To: Rune Tipsmark Cc: 'Nate Smith'; omnios-discuss@lists.omniti.com Subject: Ang: Re: [OmniOS-discuss] QLE2652 I/O Disconnect.

Re: [OmniOS-discuss] QLE2652 I/O Disconnect. Heat Sinks?

2015-03-05 Thread Rune Tipsmark
: Rune Tipsmark Cc: 'Nate Smith'; omnios-discuss@lists.omniti.com Subject: Ang: RE: Re: [OmniOS-discuss] QLE2652 I/O Disconnect. Heat Sinks? Hi! -Rune Tipsmark skrev: - Till: 'Johan Kragsterman' Från: Rune Tipsmark Datum: 2015-03-05 19:38 Kopia: 'Nate

Re: [OmniOS-discuss] QLE2652 I/O Disconnect. Heat Sinks?

2015-03-05 Thread Rune Tipsmark
-Original Message- From: Johan Kragsterman [mailto:johan.kragster...@capvert.se] Sent: Thursday, March 05, 2015 12:12 PM To: Rune Tipsmark Cc: 'Nate Smith'; omnios-discuss@lists.omniti.com Subject: Ang: RE: RE: Re: [OmniOS-discuss] QLE2652 I/O Disconnect. Heat Sinks? -Run

Re: [OmniOS-discuss] QLE2652 I/O Disconnect. Heat Sinks?

2015-03-06 Thread Rune Tipsmark
No idea to be honest, even if there is its scary if it can cause these kinds of problems… Br, Rune From: OmniOS-discuss [mailto:omnios-discuss-boun...@lists.omniti.com] On Behalf Of Nate Smith Sent: Friday, March 06, 2015 8:57 AM To: 'Richard Elling' Cc: omnios-discuss@lists.omniti.com Subject:

Re: [OmniOS-discuss] QLE2652 I/O Disconnect. Heat Sinks?

2015-03-07 Thread Rune Tipsmark
about it the more I lean towards SM having an issue... and Dell uses essentially SM so same same. br, Rune From: Johan Kragsterman Sent: Saturday, March 7, 2015 4:24 PM To: Rune Tipsmark Cc: 'Nate Smith'; 'Richard Elling'; omnios-disc

[OmniOS-discuss] crash dump analysis help

2015-04-18 Thread Rune Tipsmark
hi guys, my omnios zfs server crashed today and I got a complete core dump and I was wondering if I am on the right track... here is what I did so far... root@zfs10:/root# fmdump -Vp -u 775e0fc1-dcd2-4cb2-b800-88a1b9910f94 TIME UUID

Re: [OmniOS-discuss] crash dump analysis help

2015-04-20 Thread Rune Tipsmark
root@zfs10:/root# uname -a SunOS zfs10 5.11 omnios-10b9c79 i86pc i386 i86pc Any idea how I can troubleshoot further? br, Rune From: Dan McDonald Sent: Monday, April 20, 2015 3:58 AM To: Rune Tipsmark Cc: omnios-discuss; Dan McDonald Subject: Re: [OmniOS

Re: [OmniOS-discuss] crash dump analysis help

2015-04-20 Thread Rune Tipsmark
it's nearly 30 gigs... not sure anyone would download it :) maybe I can compress it or something. br, Rune From: Dan McDonald Sent: Monday, April 20, 2015 2:40 PM To: Rune Tipsmark Cc: omnios-discuss Subject: Re: [OmniOS-discuss] crash dump ana

Re: [OmniOS-discuss] disk failure causing reboot?

2015-05-19 Thread Rune Tipsmark
Same issue here around two months ago when a L2arc device failed… failmode was default and the device was actually an mSata SSD mounted in a PCI-E mSata card: http://www.addonics.com/products/ad4mspx2.php and the disk was one of four of these http://www.samsung.com/us/computer/memory-storage/MZ

[OmniOS-discuss] ZIL TXG commits happen very frequently - why?

2015-10-13 Thread Rune Tipsmark
Hi all. Wondering if anyone could shed some light on why my ZFS pool would perform TXG commits up to 5 times per second. It's set to the default 5 second interval and occasionally it does wait 5 seconds between commits, but only when nearly idle. I'm not sure if this impacts my performance but

Re: [OmniOS-discuss] ZIL TXG commits happen very frequently - why?

2015-10-14 Thread Rune Tipsmark
__ From: Schweiss, Chip Sent: Wednesday, October 14, 2015 2:44 PM To: Rune Tipsmark Cc: omnios-discuss@lists.omniti.com Subject: Re: [OmniOS-discuss] ZIL TXG commits happen very frequently - why? It all has to do with the write throttle and buffers filling. Here's a great b