Hi Ben,

Actually, the Windows server is virtualized under KVM, and the KVM host provides
all the necessary services, such as iSCSI. I suspect the problem here was a lack
of free space on the pool: it was more than 90% filled with data, and the OmniOS
pool reported 0% free space. The fastest solution was to delete this disk,
restore it from backup, and clean up some useless data (there were no databases
or system disks on it).
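For anyone else who hits this, the fill level is easy to confirm from the
OmniOS side (napp-it shows the same numbers; this is just the raw view):

  zpool list -o name,size,allocated,free,capacity epool
  zfs list -o name,used,avail,refer -r epool

Once capacity gets close to 100%, writes to the zvol can start failing even
though Windows still believes it has room.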
Now I unfortunately have a different problem, for which I will start a new thread.

Thank you for your time.
Martin Truhlar


-----Original Message-----
From: Ben Kitching [mailto:narrator...@icloud.com] 
Sent: Saturday, August 15, 2015 12:37 PM
To: Martin Truhlář
Cc: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] data gone ...?

Hi Martin,

You say that you are exporting a volume over iSCSI to your Windows server. I
assume that means you have an NTFS (or other Windows) filesystem sitting on top
of the iSCSI volume? It might be worth using Windows tools to check the
integrity of that filesystem, as the problem may lie there rather than in ZFS.
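For example, from an elevated command prompt on the Windows side (E: below is
just a placeholder for whatever drive letter the iSCSI volume is mounted as):

  chkdsk E: /f

Note that chkdsk needs exclusive access to the volume to repair anything, so it
may ask to dismount the volume or to schedule the check for later.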

Are you using the built-in Windows iSCSI initiator? I’ve had problems with it in
the past on versions of Windows older than Windows 8 / Server 2012, because it
does not support iSCSI UNMAP commands and is therefore unable to tell ZFS to
free blocks when files are deleted. You can see whether you are having this
problem by comparing the free space reported by Windows with that reported by
ZFS. If there is a disparity, you are likely hitting this problem and could
ultimately end up in a situation where ZFS stops allowing writes because it
thinks the volume is full, no matter how many files you delete from the Windows
end. I saw this manifest as NTFS filesystem errors on the Windows side: from
Windows’ point of view the volume still has free space, so when writes are
refused it treats that as an error.
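A quick way to compare the two views, assuming the zvol is named something like
epool/winvol (substitute the real name):

  # on the OmniOS side
  zfs get volsize,referenced,available epool/winvol

  # on the Windows side
  fsutil volume diskfree E:

If ZFS shows referenced creeping up towards volsize while Windows still reports
plenty of free space, deleted blocks are never being freed and you are probably
hitting this.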

On 15 Aug 2015, at 00:38, Martin Truhlář <martin.truh...@archcon.cz> wrote:

Hello everyone,

I have a little problem here. I'm using OmniOS v11 r151014 with napp-it 0.9f5
and 3 pools (2 data pools and a system pool). There is a problem with epool,
which I'm sharing over iSCSI to a Windows SBS 2008 server. The pool is only a
few days old, but the disks in it are about 5 years old. Something has obviously
happened to one 500GB disk (S:0 H:106 T:12), but the data on epool seemed to be
in good condition. However, I had a problem accessing some data on that pool,
and today most of it (roughly 2/3) has disappeared. Yet ZFS seems to be fine,
and the available space epool reports is the same as the day before.
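(The S/H/T numbers are the soft/hard/transport error counters, presumably taken
from iostat's error statistics; the raw output, and any related FMA error
reports, should be visible with:

  iostat -En
  fmdump -eV

if that helps with diagnosis.)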

I welcome any advice.
Martin Truhlar

  pool: dpool
 state: ONLINE
  scan: scrub repaired 0 in 14h11m with 0 errors on Thu Aug 13 14:34:21 2015
config:

        NAME                       STATE     READ WRITE CKSUM   CAP        Product /napp-it   IOstat mess
        dpool                      ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c1t50014EE00400FA16d0  ONLINE       0     0     0   1 TB       WDC WD1002F9YZ-0   S:0 H:0 T:0
            c1t50014EE2B40F14DBd0  ONLINE       0     0     0   1 TB       WDC WD1003FBYX-0   S:0 H:0 T:0
          mirror-1                 ONLINE       0     0     0
            c1t50014EE05950B131d0  ONLINE       0     0     0   1 TB       WDC WD1002F9YZ-0   S:0 H:0 T:0
            c1t50014EE2B5E5A6B8d0  ONLINE       0     0     0   1 TB       WDC WD1003FBYZ-0   S:0 H:0 T:0
          mirror-2                 ONLINE       0     0     0
            c1t50014EE05958C51Bd0  ONLINE       0     0     0   1 TB       WDC WD1002F9YZ-0   S:0 H:0 T:0
            c1t50014EE0595617ACd0  ONLINE       0     0     0   1 TB       WDC WD1002F9YZ-0   S:0 H:0 T:0
          mirror-3                 ONLINE       0     0     0
            c1t50014EE0AEAE7540d0  ONLINE       0     0     0   1 TB       WDC WD1002F9YZ-0   S:0 H:0 T:0
            c1t50014EE0AEAE9B65d0  ONLINE       0     0     0   1 TB       WDC WD1002F9YZ-0   S:0 H:0 T:0
        logs
          mirror-4                 ONLINE       0     0     0
            c1t55CD2E404B88ABE1d0  ONLINE       0     0     0   120 GB     INTEL SSDSC2BW12   S:0 H:0 T:0
            c1t55CD2E404B88E4CFd0  ONLINE       0     0     0   120 GB     INTEL SSDSC2BW12   S:0 H:0 T:0
        cache
          c1t55CD2E4000339A59d0    ONLINE       0     0     0   180 GB     INTEL SSDSC2BW18   S:0 H:0 T:0

errors: No known data errors

  pool: epool
 state: ONLINE
  scan: scrub repaired 0 in 6h26m with 0 errors on Fri Aug 14 07:17:03 2015
config:

        NAME                       STATE     READ WRITE CKSUM   CAP        Product /napp-it   IOstat mess
        epool                      ONLINE       0     0     0
          raidz1-0                 ONLINE       0     0     0
            c1t50014EE1578AC0B5d0  ONLINE       0     0     0   500.1 GB   WDC WD5002ABYS-0   S:0 H:0 T:0
            c1t50014EE1578B1091d0  ONLINE       0     0     0   500.1 GB   WDC WD5002ABYS-0   S:0 H:106 T:12
            c1t50014EE1ACD9A82Bd0  ONLINE       0     0     0   500.1 GB   WDC WD5002ABYS-0   S:0 H:1 T:0
            c1t50014EE1ACD9AC4Ed0  ONLINE       0     0     0   500.1 GB   WDC WD5002ABYS-0   S:0 H:1 T:0

errors: No known data errors

_______________________________________________
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss
