Re: [zfs-discuss] Snapshots and Data Loss

2010-04-16 Thread Maurilio Longo
Richard,

> Applications can take advantage of this and there are services available
> to integrate ZFS snapshots with Oracle databases, Windows clients, etc.

Which services are you referring to?

Best regards.

Maurilio.


Re: [zfs-discuss] strange pool disks usage pattern

2009-10-08 Thread Maurilio Longo
By the way,

there are more than fifty bugs logged for marvell88sx, many of them about 
problems with DMA handling and/or driver behaviour under stress.

Can it be that I'm stumbling upon something along these lines?

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6826483

Maurilio.


Re: [zfs-discuss] strange pool disks usage pattern

2009-10-05 Thread Maurilio Longo
Richard,

it is the same controller used inside Sun's Thumpers; it could be a problem with 
my unit (which is a couple of years old now), though.

Is there something I can do to find out if I owe you that steak? :)
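
For instance, I could capture the error counters before and after a long 
sequential read of every disk, something along these lines (the device path and 
the read size are just placeholders):

# iostat -En > /tmp/errors.before
# dd if=/dev/rdsk/c2t0d0s0 of=/dev/null bs=1024k count=10000
# iostat -En > /tmp/errors.after
# diff /tmp/errors.before /tmp/errors.after

Would that be a meaningful test?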

Thanks.

Maurilio.


Re: [zfs-discuss] strange pool disks usage pattern

2009-10-04 Thread Maurilio Longo
Richard,

thanks for the explanation.

So can we say that the problem is in the disks losing a command now and then 
under stress?

Best regards.

Maurilio.


Re: [zfs-discuss] strange pool disks usage pattern

2009-10-02 Thread Maurilio Longo
Errata,

they're ST31000333AS, not ST31000340AS.

Maurilio.


Re: [zfs-discuss] strange pool disks usage pattern

2009-10-02 Thread Maurilio Longo
Carson,

they're Seagate ST31000340AS drives with firmware release CC1H, which, from a 
quick search, should not be affected by any known firmware bug.

Anyway, setting NCQ depth to 1 

# echo zfs_vdev_max_pending/W0t1 | mdb -kw

did not solve the problem :(
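
For what it's worth, the value can be read back with a similar one-liner (/D 
prints it as a decimal), so it is easy to confirm whether the write actually took:

# echo zfs_vdev_max_pending/D | mdb -k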

Maurilio.


Re: [zfs-discuss] strange pool disks usage pattern

2009-10-02 Thread Maurilio Longo
Milek,

this is it


# iostat -En
c1t0d0   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: ST3808110AS  Revision: DSerial No:
Size: 80,03GB <80026361856 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 91 Predictive Failure Analysis: 0
c2t0d0   Soft Errors: 0 Hard Errors: 11 Transport Errors: 0
Vendor: ATA  Product: ST31000333AS Revision: CC1H Serial No:
Size: 1000,20GB <1000204886016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 101 Predictive Failure Analysis: 0
c2t1d0   Soft Errors: 0 Hard Errors: 4 Transport Errors: 0
Vendor: ATA  Product: ST31000333AS Revision: CC1H Serial No:
Size: 1000,20GB <1000204886016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 96 Predictive Failure Analysis: 0
c2t2d0   Soft Errors: 0 Hard Errors: 69 Transport Errors: 0
Vendor: ATA  Product: ST31000333AS Revision: CC1H Serial No:
Size: 1000,20GB <1000204886016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 105 Predictive Failure Analysis: 0
c2t3d0   Soft Errors: 0 Hard Errors: 5 Transport Errors: 0
Vendor: ATA  Product: ST31000333AS Revision: CC1H Serial No:
Size: 1000,20GB <1000204886016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 96 Predictive Failure Analysis: 0
c2t4d0   Soft Errors: 0 Hard Errors: 90 Transport Errors: 0
Vendor: ATA  Product: ST31000333AS Revision: CC1H Serial No:
Size: 1000,20GB <1000204886016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 96 Predictive Failure Analysis: 0
c2t5d0   Soft Errors: 0 Hard Errors: 30 Transport Errors: 0
Vendor: ATA  Product: ST31000333AS Revision: CC1H Serial No:
Size: 1000,20GB <1000204886016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 96 Predictive Failure Analysis: 0
c2t7d0   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: ST31000333AS Revision: CC1H Serial No:
Size: 1000,20GB <1000204886016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 94 Predictive Failure Analysis: 0
#


What are hard errors?

Maurilio.


Re: [zfs-discuss] strange pool disks usage pattern

2009-10-02 Thread Maurilio Longo
> Possible, but less likely. I'd suggest running some disk I/O tests,
> looking at the drive error counters before/after.

These disks are only a few months old and the pool is scrubbed weekly; no errors so far.

I did try smartmontools, but it can neither report the SMART logs nor start SMART 
tests, so I don't know how to look at the drives' internal state.
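
If anyone knows whether smartctl's SAT pass-through is supposed to work behind 
marvell88sx, this is roughly what I would try next (the device path below is just 
an example):

# smartctl -i /dev/rdsk/c2t0d0s0
# smartctl -a -d sat /dev/rdsk/c2t0d0s0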

> You could also have a firmware bug on your disks. You might try lowering
> the number of tagged commands per disk and see if that helps at all.

From man marvell88sx I read that this driver has no tunable parameters, so I 
don't know how I could change the NCQ depth at the driver level.
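
Unless it can be capped on the ZFS side instead: if I have the tunable name 
right, zfs_vdev_max_pending limits how many commands ZFS keeps queued per vdev, 
so something like this in /etc/system (plus a reboot), or the equivalent live 
poke with mdb -kw, might have the same effect:

set zfs:zfs_vdev_max_pending = 1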

Best regards.

Maurilio.


Re: [zfs-discuss] strange pool disks usage pattern

2009-10-02 Thread Maurilio Longo
Carson,

the strange thing is that this is happening on several disks (can it be that 
they are all failing?).

What is the controller bug you're talking about? I'm running snv_114 on this 
PC, so it is fairly recent.

Best regards.

Maurilio.


[zfs-discuss] strange pool disks usage pattern

2009-10-02 Thread Maurilio Longo
Hi,

I have a PC with a MARVELL AOC-SAT2-MV8 controller and a pool made up of six 
disks in a raid-z configuration with a hot spare.


-bash-3.2$ /sbin/zpool status
  pool: nas
 state: ONLINE
 scrub: scrub in progress for 9h4m, 81,59% done, 2h2m to go
config:

NAME        STATE     READ WRITE CKSUM
nas         ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    c2t1d0  ONLINE       0     0     0
    c2t4d0  ONLINE       0     0     0
    c2t5d0  ONLINE       0     0     0
    c2t3d0  ONLINE       0     0     0
    c2t2d0  ONLINE       0     0     0
    c2t0d0  ONLINE       0     0     0
spares
  c2t7d0    AVAIL

errors: No known data errors


Now, the problem is that when issuing an

iostat -Cmnx 10

(or with any other time interval) I sometimes see a complete stall of disk I/O 
due to one disk in the pool (not always the same one) being 100% busy.



$ iostat -Cmnx 10 

    r/s    w/s    kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0,0    0,3     0,0    2,0  0,0  0,0    0,0    0,1   0   0 c1
    0,0    0,3     0,0    2,0  0,0  0,0    0,0    0,1   0   0 c1t0d0
 1852,1  297,0 13014,9 4558,4  9,2  1,6    4,3    0,7   2 158 c2
  311,8   61,3  2185,3  750,7  2,0  0,3    5,5    0,7  17  25 c2t0d0
  309,5   34,7  2207,2  769,5  1,6  0,5    4,7    1,4  41  47 c2t1d0
  309,3   36,3  2173,0  770,0  1,0  0,3    2,9    0,7  18  26 c2t2d0
  296,0   65,5  2057,3  749,2  2,1  0,2    5,9    0,6  16  23 c2t3d0
  313,3   64,1  2187,3  748,8  1,7  0,2    4,6    0,5  15  21 c2t4d0
  311,9   35,1  2204,8  770,1  0,7  0,2    2,1    0,5  11  17 c2t5d0
    0,0    0,0     0,0    0,0  0,0  0,0    0,0    0,0   0   0 c2t7d0
                    extended device statistics
    r/s    w/s    kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0,4   14,7     3,2   30,4  0,0  0,2    0,0   13,2   0   2 c1
    0,4   14,7     3,2   30,4  0,0  0,2    0,0   13,2   0   2 c1t0d0
    1,7    0,0    58,9    0,0  3,0  1,0 1766,4  593,1   2 101 c2
    0,3    0,0     7,7    0,0  0,0  0,0    0,3    0,4   0   0 c2t0d0
    0,3    0,0    11,5    0,0  0,0  0,0    4,4    8,4   0   0 c2t1d0
    0,0    0,0     0,0    0,0  3,0  1,0    0,0    0,0 100 100 c2t2d0
    0,4    0,0    14,1    0,0  0,0  0,0    0,4    6,6   0   0 c2t3d0
    0,4    0,0    14,1    0,0  0,0  0,0    0,3    2,5   0   0 c2t4d0
    0,3    0,0    11,5    0,0  0,0  0,0    3,6    6,9   0   0 c2t5d0
    0,0    0,0     0,0    0,0  0,0  0,0    0,0    0,0   0   0 c2t7d0
                    extended device statistics
    r/s    w/s    kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0,0    3,1     0,0    3,1  0,0  0,0    0,0    0,7   0   0 c1
    0,0    3,1     0,0    3,1  0,0  0,0    0,0    0,7   0   0 c1t0d0
    0,0    0,0     0,0    0,0  3,0  1,0    0,0    0,0   2 100 c2
    0,0    0,0     0,0    0,0  0,0  0,0    0,0    0,0   0   0 c2t0d0
    0,0    0,0     0,0    0,0  0,0  0,0    0,0    0,0   0   0 c2t1d0
    0,0    0,0     0,0    0,0  3,0  1,0    0,0    0,0 100 100 c2t2d0
    0,0    0,0     0,0    0,0  0,0  0,0    0,0    0,0   0   0 c2t3d0
    0,0    0,0     0,0    0,0  0,0  0,0    0,0    0,0   0   0 c2t4d0
    0,0    0,0     0,0    0,0  0,0  0,0    0,0    0,0   0   0 c2t5d0
    0,0    0,0     0,0    0,0  0,0  0,0    0,0    0,0   0   0 c2t7d0
                    extended device statistics
    r/s    w/s    kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0,0    0,1     0,0    0,4  0,0  0,0    0,0    1,2   0   0 c1
    0,0    0,1     0,0    0,4  0,0  0,0    0,0    1,2   0   0 c1t0d0
    0,0   29,5     0,0  320,2  3,4  1,0  113,9   34,6   2 102 c2
    0,0    6,9     0,0   63,3  0,1  0,0   12,6    0,7   0   0 c2t0d0
    0,0    4,4     0,0   65,5  0,0  0,0    8,7    0,8   0   0 c2t1d0
    0,0    0,0     0,0    0,0  3,0  1,0    0,0    0,0 100 100 c2t2d0
    0,0    7,4     0,0   62,7  0,1  0,0   15,4    0,8   1   1 c2t3d0
    0,0    6,8     0,0   63,6  0,1  0,0   13,2    0,7   0   0 c2t4d0
    0,0    4,0     0,0   65,1  0,0  0,0    7,9    0,7   0   0 c2t5d0
    0,0    0,0     0,0    0,0  0,0  0,0    0,0    0,0   0   0 c2t7d0
                    extended device statistics
    r/s    w/s    kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0,0    0,3     0,0    2,4  0,0  0,0    0,0    0,1   0   0 c1
    0,0    0,3     0,0    2,4  0,0  0,0    0,0    0,1   0   0 c1t0d0
    0,0    0,0     0,0    0,0  3,0  1,0    0,0    0,0   2 100 c2
    0,0    0,0     0,0    0,0  0,0  0,0    0,0    0,0   0   0 c2t0d0
    0,0    0,0     0,0    0,0  0,0  0,0    0,0    0,0   0   0 c2t1d0
    0,0    0,0     0,0    0,0  3,0  1,0    0,0    0,0 100 100 c2t2d0
    0,0    0,0     0,0    0,0  0,0  0,0    0,0    0,0   0   0 c2t3d0
    0,0    0,0     0,0    0,0  0,0  0,0    0,0    0,0   0   0 c2t4d0
    0,0    0,0     0,0    0,0  0,0  0,0    0,0    0,0   0   0 c2t5d0
    0,0    0,0     0,0    0,0  0,0  0,0    0,0    0,0   0   0 c2t7d0
                    extended device statistics
    r/s    w/s    kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
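
If it helps, the same thing can also be watched per vdev at the pool level 
(10-second interval); I haven't pasted that output here:

$ zpool iostat -v nas 10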
 

Re: [zfs-discuss] zfs send of a cloned zvol

2009-09-10 Thread Maurilio Longo
> Neither.
> It'll send all necessary data (without having to promote anything) so
> that the receiving zvol has a working vol1, and it's not a clone.

Fajar,

thanks for clarifying; this is what I was calling 'promotion'.

It is as if a "promotion" happens on the receiving side.

Maurilio.


[zfs-discuss] zfs send of a cloned zvol

2009-09-10 Thread Maurilio Longo
Hi,

I have a question, let's say I have a zvol named vol1 which is a clone of a 
snapshot of another zvol (its origin property is tank/my...@mysnap).

If I send this zvol to a different zpool with zfs send, does it send the origin 
too? That is, does an automatic promotion happen, or do I end up with a broken 
zvol?
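
For what it's worth, the manual route I can picture (hypothetical names 
tank/myvol and tank/vol1, receiving into pool2 on another host) would be to send 
the origin snapshot first and then the clone as an incremental from that same 
snapshot, which should recreate the clone relationship on the other side:

# zfs send tank/myvol@mysnap | ssh otherhost /sbin/zfs recv pool2/myvol
# zfs snapshot tank/vol1@now
# zfs send -i tank/myvol@mysnap tank/vol1@now | ssh otherhost /sbin/zfs recv pool2/vol1

What I don't know is whether a single, non-incremental send of the clone already 
takes care of all this by itself.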

Best regards.

Maurilio.


Re: [zfs-discuss] Finding SATA cards for ZFS; was Lundman home NAS

2009-08-30 Thread Maurilio Longo
Robin,

The LSI 3041E-R and 3081E-R are PCI-E 4- and 8-port SATA cards; they are not 
hot-swap capable, as far as I know, but they work very well in JBOD (I'm using 
several of them) and they're not too expensive.

See this

http://www.lsi.com/storage_home/products_home/host_bus_adapters/sas_hbas/lsisas3041er/index.html


Re: [zfs-discuss] Status/priority of 6761786

2009-08-30 Thread Maurilio Longo
Hi,  

I'd like to have it fixed as well. I'm having the same problem with 20 zvols 
which are Windows XP images exported through iSCSI; they are auto-snapshotted 
every hour/day/month, and right now I've got nearly 1500 snapshots. Booting this 
4-core Xeon with 8 GB of RAM and 8 disks in four mirrored pairs takes around 
15-20 minutes (the last time I booted it was two months ago).

So, in my opinion, it is definitely a bug that such a machine takes 15 minutes 
to handle 1500 snapshots during boot, while it would take a few seconds at worst 
to create the same number of snapshots.
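
As a stopgap I'm considering turning the most frequent snapshots off on the 
exported zvols themselves, since the service honours the per-dataset properties; 
something like this for each image (the dataset name is just an example):

# zfs set com.sun:auto-snapshot:frequent=false iscsi/xp-image-01
# zfs set com.sun:auto-snapshot:hourly=false iscsi/xp-image-01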


Re: [zfs-discuss] zpool replace leaves pool degraded after resilvering

2009-07-09 Thread Maurilio Longo
I forgot to mention this is a 

SunOS biscotto 5.11 snv_111a i86pc i386 i86pc

version.

Maurilio.


[zfs-discuss] zpool replace leaves pool degraded after resilvering

2009-07-09 Thread Maurilio Longo
Hi,

I have a PC where a pool suffered a disk failure. I replaced the failed disk and 
the pool resilvered, but after resilvering it was in this state:

mauri...@biscotto:~# zpool status iscsi
  pool: iscsi
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: resilver completed after 12h33m with 0 errors on Thu Jul  9 00:07:12 2009
config:

NAME         STATE     READ WRITE CKSUM
iscsi        DEGRADED     0     0     0
  mirror     ONLINE       0     0     0
    c2t0d0   ONLINE       0     0     0
    c2t5d0   ONLINE       0     0     0
  mirror     ONLINE       0     0     0
    c2t6d0   ONLINE       0     0     0
    c2t8d0   ONLINE       0     0     0
  mirror     ONLINE       0     0     0
    c11t0d0  ONLINE       0     0     0
    c11t1d0  ONLINE       0     0     0
  mirror     DEGRADED     0     0     0
    c11t2d0  ONLINE       0     0     0
    c11t3d0  DEGRADED     0     0 23,0M  too many errors
cache
  c1t4d0     ONLINE       0     0     0
errors: No known data errors

It says it resilvered OK and that there are no known data errors, but the pool 
is still marked as degraded.

I did a zpool clear and now it says it is OK:

mauri...@biscotto:~# zpool status
  pool: iscsi
 state: ONLINE
 scrub: resilver completed after 12h33m with 0 errors on Thu Jul  9 00:07:12 2009
config:

NAME         STATE     READ WRITE CKSUM
iscsi        ONLINE       0     0     0
  mirror     ONLINE       0     0     0
    c2t0d0   ONLINE       0     0     0
    c2t5d0   ONLINE       0     0     0
  mirror     ONLINE       0     0     0
    c2t6d0   ONLINE       0     0     0
    c2t8d0   ONLINE       0     0     0
  mirror     ONLINE       0     0     0
    c11t0d0  ONLINE       0     0     0
    c11t1d0  ONLINE       0     0     0
  mirror     ONLINE       0     0     0
    c11t2d0  ONLINE       0     0     0
    c11t3d0  ONLINE       0     0     0  326G resilvered
cache
  c1t4d0     ONLINE       0     0     0

errors: No known data errors

Look at c11t3d0, which now reads 326G resilvered. My question is: is the pool 
OK? Why did I have to issue a zpool clear if the resilvering process completed 
without problems?

Best regards.

Maurilio.


[zfs-discuss] zfs send of a cloned zvol

2009-06-19 Thread Maurilio Longo
Hi,

I'd like to understand a thing or two ... :)

I have a zpool on which I've created a zvol, then I've snapshotted the zvol and 
I've created a clone out of that snapshot.

Now, what happens if I do a 

zfs send mycl...@mysnap > myfile?

I mean, is this stream enough to recover the clone (does it contain a promoted 
zvol?), or do I also need a stream of the zvol from which the clone was created? 
And if I do a zfs recv of the clone's stream, does it 'find' by itself the zvol 
from which it stemmed?

In other words, will I be able to recover the clone, and the zvol it depends 
on, just by having a zfs stream of them?
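
A self-contained way to check would be a throw-away, file-backed pool (names and 
sizes below are arbitrary), sending only the clone's snapshot to a file and 
receiving it back under a new name:

# mkfile 200m /var/tmp/zfstest.img
# zpool create scratch /var/tmp/zfstest.img
# zfs create -V 50m scratch/vol
# zfs snapshot scratch/vol@s1
# zfs clone scratch/vol@s1 scratch/clone
# zfs snapshot scratch/clone@c1
# zfs send scratch/clone@c1 > /var/tmp/clone.stream
# zfs recv scratch/restored < /var/tmp/clone.stream
# zfs get origin scratch/restored
# zpool destroy scratch

If the receive succeeds and scratch/restored reports no origin, then the full 
stream is evidently self-contained; I just don't know whether that is what 
actually happens.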

Best regards.

Maurilio.


Re: [zfs-discuss] fmd writes tons of errors during a resilver

2009-06-13 Thread Maurilio Longo
Eric,

thanks for the hint.

Maurilio.


[zfs-discuss] fmd writes tons of errors during a resilver

2009-06-12 Thread Maurilio Longo
Hi,

I'm trying to expand a raidz pool made up of six drives by replacing them one at 
a time with bigger disks and waiting for the resilver, on an snv_114 system.

While resilvering, fmd writes 8-10 MB per second into

/var/fm/fmd/errlog

I had to disable it since it was filling up my boot disk.

Is this expected?

# zpool status
  pool: nas
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 6h52m, 79,82% done, 1h44m to go
config:

NAME              STATE     READ WRITE CKSUM
nas               DEGRADED     0     0     0
  raidz1          DEGRADED     0     0     0
    c2t1d0        ONLINE       0     0     0
    replacing     DEGRADED     0     0 7,58M
      c2t4d0s0/o  FAULTED      0     0     0  corrupted data
      c2t4d0      ONLINE       0     0     0  141G resilvered
    c2t5d0        ONLINE       0     0     0
    c2t3d0        ONLINE       0     0     0
    c2t2d0        ONLINE       0     0     0
    c2t0d0        ONLINE       0     0     0

errors: No known data errors

Here c1t0d0 is the boot disk (still on UFS)

                    extended device statistics
    r/s    w/s    kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0,0 3711,7     0,0  9661,0  0,0  0,6    0,0    0,2   2  60 c1t0d0
  452,1    0,0  9707,5     0,0  0,4  0,2    0,9    0,4  13  20 c2t0d0
  464,1    0,0  9785,0     0,0  0,3  0,2    0,6    0,4  11  18 c2t1d0
  467,1    0,0  9695,0     0,0  0,4  0,2    0,8    0,4  12  19 c2t2d0
  445,1    0,0  9743,0     0,0  0,4  0,2    0,9    0,4  14  19 c2t3d0
  236,0  309,1  9842,1  9279,9 24,0  1,0   44,0    1,8  99  99 c2t4d0
  420,1    0,0  9682,5     0,0  0,4  0,2    1,0    0,5  14  20 c2t5d0
     cpu
 us sy wt id
 12 32  0 55
                    extended device statistics
    r/s    w/s    kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0,0 3992,5     0,0 10281,7  0,0  0,6    0,0    0,2   2  63 c1t0d0
  318,0    0,0  6042,7     0,0  0,3  0,2    1,1    0,7  13  22 c2t0d0
  307,0    0,0  6129,7     0,0  0,4  0,2    1,4    0,8  15  24 c2t1d0
  315,0    0,0  6137,7     0,0  0,4  0,2    1,2    0,7  17  22 c2t2d0
  351,0    0,0  6020,2     0,0  0,4  0,2    1,0    0,6  15  21 c2t3d0
  223,0  273,0  6453,7  6134,7 25,8  1,0   52,0    2,0  96  97 c2t4d0
  346,0    0,0  5988,2     0,0  0,3  0,2    0,8    0,5  11  17 c2t5d0

In a few seconds it had written this much:

# svcadm disable fmd
# ls -l /var/fm/fmd/
total 55934
drwx------   3 root sys       512 Apr 10 17:41 ckpt
-rw-r--r--   1 root root 28599664 Jun 12 15:56 errlog
-rw-r--r--   1 root root     3410 Jun 12 15:55 fltlog
drwx------   2 root sys       512 Jun 12 15:55 rsrc
drwx------   2 root sys       512 Dec 13  2007 xprt

The content of the file is not printable as plain text.
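
I suppose fmdump can decode it, though; if I understand the options right, -e 
selects the error log and -V adds the full ereport details:

# fmdump -e | tail -20
# fmdump -eV | tail -60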

Maurilio.


Re: [zfs-discuss] zfs send -R gets blocked by auto-snapshot service

2009-06-09 Thread Maurilio Longo
Tim,

I really was trying to get a full copy of my pool onto a different PC, so I 
think I have to use -R, otherwise I would lose all the history (monthly, weekly 
and daily snapshots) of my data, which is valuable to me.

That said, I fear that during a send -R the auto-snapshot service should be 
disabled on the receiving end as well, or it could start to change the incoming 
filesystems; am I right?

And if that is so, it means that zfs send/receive cannot really be used to take 
backups of a heavily used system: even apart from the auto-snapshot service, a 
user could be creating or deleting snapshots and break the send/receive process.
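
If that is the case, I guess the receiving side should be frozen for the 
duration of the transfer, either by tagging the target pool so the service skips 
it or by disabling the auto-snapshot instances over there (property name and 
FMRI as I understand Tim's service, so corrections are welcome):

# zfs set com.sun:auto-snapshot=false iscsi
# svcadm disable svc:/system/filesystem/zfs/auto-snapshot:frequent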

Anyway, thanks a lot for your help!

Maurilio.


[zfs-discuss] zfs send -R gets blocked by auto-snapshot service

2009-06-08 Thread Maurilio Longo
Hi,

I'm trying to send a pool (its filesystems) from one PC to another, so I first 
created a recursive snapshot:

# zfs list
NAME USED  AVAIL  REFER  MOUNTPOINT
nas  840G   301G  3,28G  /nas
nas/drivers 12,6G   301G  12,6G  /nas/drivers
nas/nwserv   110G   301G  49,0G  /nas/nwserv
nas/rsync_clienti166G   301G  17,5G  /nas/rsync_clienti
nas/rsync_privati560M   301G  36,5K  /nas/rsync_privati
nas/rsync_privati/gianluca   560M   301G   560M  /nas/rsync_privati/gianluca
nas/samba486G   301G   486G  /nas/samba
nas/sviluppo-bak1,91G   301G  1,60G  /nas/sviluppo-bak
nas/winsrv  60,0G   301G  55,1G  /nas/winsrv

# zfs snapshot -r n...@t1

zfs list -t snapshot -r nas | grep T1
n...@t1                            0      -  3,28G  -
nas/driv...@t1                     0      -  12,6G  -
nas/nws...@t1                      0      -  49,0G  -
nas/rsync_clie...@t1            101K      -  17,5G  -
nas/rsync_priv...@t1               0      -  36,5K  -
nas/rsync_privati/gianl...@t1      0      -   560M  -
nas/sa...@t1                       0      -   486G  -
nas/sviluppo-...@t1                0      -  1,60G  -
nas/win...@t1                      0      -  55,1G  -

So now I have a recursive snapshot encompassing the whole nas pool and its 
filesystems.

Then I issued a:

pfexec /sbin/zfs send -R n...@t1 | ssh biscotto pfexec /sbin/zfs recv -dF iscsi/nasone

I ran this from a user account which has automatic login on the receiving box; 
the copy started fine and kept running for more than an hour, but then it 
stopped with this error:

warning: cannot send 'nas/nws...@zfs-auto-snap.frequent-2009-06-08-11.00': no such pool or dataset
warning: cannot send 'nas/nws...@zfs-auto-snap.frequent-2009-06-08-11.15': no such pool or dataset
warning: cannot send 'nas/nws...@t1': incremental source (@zfs-auto-snap.frequent-2009-06-08-11.15) does not exist

Now, from the error it seems that T1 needs all the snapshots which were active 
at the time it was created, which is not what I would expect from a snapshot.

I've now stopped the auto-snapshot service and restarted the send, but I'm 
wondering whether this is the correct and expected behaviour, or if I'm doing 
something wrong, or maybe I did not fully understand what send -R is supposed 
to do.

Maurilio


[zfs-discuss] zfs snapshots sometimes don't report correct used space

2008-03-10 Thread Maurilio Longo
Hi,

I'm using Tim Foster's automatic ZFS snapshot service to keep copies of a ZFS 
filesystem.

I keep 30 daily, seven weekly and 12 monthly snapshots; they get created at 
00:00, and when a new week (or a new month) starts, two or three snapshots are 
taken at the same time.

Using zfs list I see that when more than one snapshot starts at the same time, 
the USED column shows no space occupied, while when just a single snapshot is 
taken I can see how much space is being used.

The snapshots themselves are fine: if I go inside one of the 0-byte ones I can 
see that the files are there and their content is what they really contained at 
that point in time, so it seems to be just a 'cosmetic' problem.

Here is zfs list on that filesystem:

---8<-
# zfs list
NAME                                               USED  AVAIL  REFER  MOUNTPOINT
nas                                                262G   879G  11,2G  /nas
nas/nwserv                                        13,0G   879G  6,86G  /nas/nwserv
nas/[EMAIL PROTECTED]:daily-2008-02-27-00:00:00    519M      -  6,99G  -
nas/[EMAIL PROTECTED]:daily-2008-02-28-00:00:00    491M      -  7,02G  -
nas/[EMAIL PROTECTED]:daily-2008-02-29-00:00:00       0      -  6,91G  -
nas/[EMAIL PROTECTED]:weekly-2008-02-29-00:00:00      0      -  6,91G  -
nas/[EMAIL PROTECTED]:daily-2008-03-01-00:00:00       0      -  6,93G  -
nas/[EMAIL PROTECTED]:monthly-2008-03-01-00:00:00     0      -  6,93G  -
nas/[EMAIL PROTECTED]:weekly-2008-03-01-00:00:00      0      -  6,93G  -
nas/[EMAIL PROTECTED]:daily-2008-03-02-00:00:00    241M      -  6,94G  -
nas/[EMAIL PROTECTED]:daily-2008-03-03-00:00:00    242M      -  6,94G  -
nas/[EMAIL PROTECTED]:daily-2008-03-04-00:00:00    538M      -  6,96G  -
nas/[EMAIL PROTECTED]:daily-2008-03-05-00:00:00    499M      -  6,74G  -
nas/[EMAIL PROTECTED]:daily-2008-03-06-00:00:00    519M      -  6,80G  -
nas/[EMAIL PROTECTED]:daily-2008-03-07-00:00:00    514M      -  6,81G  -
nas/[EMAIL PROTECTED]:daily-2008-03-08-00:00:00       0      -  6,80G  -
nas/[EMAIL PROTECTED]:weekly-2008-03-08-00:00:00      0      -  6,80G  -
nas/[EMAIL PROTECTED]:daily-2008-03-09-00:00:00    256M      -  6,85G  -
--->8-

As you can see, whenever there is also a weekly or monthly snapshot, the used space is reported as 0.

Is this a known issue?

Best regards.

Maurilio.

PS. nas/nwserv is compressed and has atime=off

# zfs get all nas/nwserv | grep local
nas/nwserv  compression                    on      local
nas/nwserv  atime                          off     local
nas/nwserv  snapdir                        hidden  local
nas/nwserv  com.sun:auto-snapshot:weekly   true    local
nas/nwserv  com.sun:auto-snapshot:monthly  true    local
nas/nwserv  com.sun:auto-snapshot:daily    true    local
#
 
 


Re: [zfs-discuss] [storage-discuss] dos programs on a

2008-02-06 Thread Maurilio Longo
Alan,

I'm using nexenta core rc4 which is based on nevada 81/82.

zfs casesensitivity is set to 'insensitive'

Best regards.

Maurilio.
 
 


[zfs-discuss] dos programs on a ZFS+CIFS server setup

2008-02-05 Thread Maurilio Longo
Hi,

I'm testing the ZFS+CIFS server using Nexenta Core RC4; everything seems fine 
and speed is also OK, but DOS programs don't see sub-directories (command.com 
sees them, though).

I've set casesensitivity=insensitive in the ZFS filesystem that I'm sharing.

I've made this test using Windows 2000, Windows 2003 Server and Windows XP (to 
connect to the share), with Norton Commander for DOS plus a vertical application 
written in Clipper 5.2e.

Is this a known issue? Should I file a bug?
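
In case it matters, I could also re-test against a scratch filesystem created 
with casesensitivity=mixed, which is what the CIFS documentation seems to 
suggest for SMB shares (the names below are just an example, and casesensitivity 
can only be set at creation time):

# zfs create -o casesensitivity=mixed -o sharesmb=name=dostest tank/dostest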

Best regards.

Maurilio.
 
 