Re: [zfs-discuss] Clearing space nearly full zpool

2010-10-30 Thread Cuyler Dingwell
It wasn't a completely full volume, so I wasn't getting the classic 'no space' 
issue. 

What I ended up doing was booting OpenIndiana (build 147), which seemed to have 
more success clearing up the space.  I also set up some scripts to clear out 
space more slowly: deleting a 4GB file would take 1-2 minutes, so the script 
paused after each delete to let the system quiesce before continuing.  Once I 
got the pool down to ~90% I blew away the OpenSolaris install (build 130), 
installed OI, and then upgraded the pool from version 22 to version 28.  Things 
seem smoother now, but that probably has more to do with the space being freed 
up.
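
For what it's worth, the script was nothing more elaborate than this (a rough 
sketch, not the exact one I ran; the path and pause length are placeholders):

#!/bin/sh
# Delete files one at a time, pausing between removals so the pool has
# time to quiesce before the next delete.
for f in /tank/directory_to_clear/*; do
        echo "removing $f"
        rm -f "$f"
        sync
        sleep 60    # give the pool a minute to settle
done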

It would have been nice if performance didn't take a nosedive when merely 
nearing (and not even at) capacity.  In my case I would have preferred that the 
necessary space be reserved and that I hit a 'no space' error before the pool 
degraded to the point of uselessness.
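
What I will probably do going forward is keep a reservation on an otherwise 
empty dataset, so the pool can never be filled to the point where it falls 
apart.  Something along these lines (an untested sketch; the dataset name and 
size are arbitrary):

# Empty dataset whose only job is to hold back some free space, so the
# other datasets hit "no space" well before the pool is actually full.
zfs create tank/slack
zfs set reservation=50G tank/slack
# If the pool ever wedges near capacity, drop the reservation to get
# some working room back:
#   zfs set reservation=none tank/slack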
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Clearing space nearly full zpool

2010-10-26 Thread Cuyler Dingwell
No datasets in the pool.

As another data point, I've been slowly trying to clear things out, but 
eventually the I/O operations hang.


Pool Free   Dir Used    File Used   File
...
189,238,526 44,771,026  102,413 FileName.part103.rar
189,238,526 44,668,613  102,413 FileName.part104.rar
189,238,526 44,566,201  102,413 FileName.part105.rar
189,238,526 44,463,788  102,413 FileName.part106.rar
189,238,526 44,361,376  102,413 FileName.part107.rar
189,238,526 44,258,963  102,413 FileName.part108.rar
189,238,526 44,156,551  102,413 FileName.part109.rar
189,238,526 44,054,138  102,388 FileName.part110.rar
189,238,526 43,951,750  102,413 FileName.part111.rar
189,238,526 43,849,338  102,413 FileName.part112.rar
189,238,526 43,746,925  102,413 FileName.part113.rar
189,238,526 43,644,513  102,413 FileName.part114.rar
189,238,788 43,542,100  102,414 FileName.part115.rar
189,240,308 43,439,686  102,413 FileName.part116.rar
189,242,745 43,337,274  102,413 FileName.part117.rar
353,519,874 43,234,854  102,413 FileName.part118.rar

After this one the console stopped responding.  This is the only operation I 
have run on the file system since the reboot this morning.  Checking the I/O 
statistics, the pool is being accessed; I just have no idea what it's doing.

u...@server:/opt/DTT# zpool iostat tank 5 5
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        6.95T   320G    164     20   360K  45.5K
tank        6.95T   320G     67      0   148K      0
tank        6.95T   320G     71      0   153K      0
tank        6.95T   320G     70      0   151K      0
tank        6.95T   320G     69      0   149K      0
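
Since I already have the DTrace Toolkit on this box, I may try something like 
the following to see which ZFS routines the pool is spending its time in (just 
a guess at a useful one-liner, not something I've run yet):

# Count entries into zfs kernel module functions for ten seconds, then
# print the busiest ones.
dtrace -n 'fbt:zfs::entry { @[probefunc] = count(); } tick-10s { exit(0); }'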
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Clearing space nearly full zpool

2010-10-25 Thread Cuyler Dingwell
Oh, a few items to highlight.

There are no snapshots - never have been on this volume.

It's not just the directory in the example - it's any directory or file.  The 
system was running fine right up until the pool hit 96% full.  Also, a full 
scrub of the pool was done (it took nearly two days).
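
For reference, the scrub was nothing special, just the standard commands (pool 
name 'tank', as in my original post):

zpool scrub tank        # kicked off the scrub
zpool status -v tank    # checked progress and looked for any errors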
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Clearing space nearly full zpool

2010-10-25 Thread Cuyler Dingwell
I have a zpool whose performance degraded horribly once it hit 96% full.  So, 
to get things back into better shape, I'm trying to clear out some space.  The 
problem is that after I delete a directory it no longer shows up at the 
filesystem level (ls), but the free space isn't reclaimed.  After a reboot, the 
directory is back.

u...@server:/tank# df -h /tank
Filesystem            Size  Used Avail Use% Mounted on
tank                  5.4T  5.3T  124G  98% /tank
u...@server:/tank# du -csh directory_to_clear
18G     directory_to_clear
18G     total
u...@server:/tank# rm -Rf directory_to_clear
u...@server:/tank# df -h /tank
Filesystem            Size  Used Avail Use% Mounted on
tank                  5.4T  5.3T  124G  98% /tank
u...@server:/tank# zfs list -r -t snapshot tank
no datasets available

ZFS version 3, zpool version 22, OpenSolaris build 130.
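
One thing I haven't ruled out yet (just a guess at this point) is that some 
process still holds the deleted files open, which would keep their blocks 
referenced.  Roughly what I plan to check next:

# Processes with files open on the /tank file system; any hits here
# would pin the deleted blocks until the process exits.
fuser -c /tank
# A breakdown of where ZFS thinks the space has gone.
zfs get used,available,referenced,usedbysnapshots tank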

Any thoughts?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Issue with drive replacement

2009-01-28 Thread Cuyler Dingwell
In the process of replacing a raidz1 of four 500GB drives with four 1.5TB 
drives, I ran into an interesting issue on the third one.  The process was to 
remove the old drive, put the new drive in, and let it resilver.

The problem was that the third new drive had a hardware fault.  That caused 
both drives (c4t2d0) to show as FAULTED.  I couldn't put another new 1.5TB 
drive in as a replacement - it would still show as a faulted drive.  I couldn't 
remove the faulted drive either, since you can't remove a drive without enough 
replicas, and you can't do much of anything to a pool while a replace is in 
progress.

The remedy was to put the original drive back in and let it resilver.  Once 
that completed, a new 1.5TB drive was put in and the replacement was able to 
finish.

If I didn't have the original drive (or it was broken) I think I would have 
been in a tough spot.

Has anyone else experienced this - and if so, is there a way to force the 
replacement of a drive that failed during resilvering?
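
For the archives, this is roughly what I would try next time a replacement 
drive dies mid-resilver, though I don't know whether it would have worked in 
this case (the device names are examples only):

zpool status -v tank                  # find the stuck "replacing" vdev
zpool detach tank c4t2d0              # try to drop the dead new half of it
zpool replace -f tank c4t2d0 c5t0d0   # then re-replace with another drive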
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs recovery after create

2008-06-28 Thread Cuyler Dingwell
I had a friend rebuild his system, and instead of running a "zpool import" he 
ran a "zpool create".  Sadly, this means he now has an empty raidz pool.  A 
"zpool import" only shows the new pool he just created, and a "zpool import -D" 
shows no destroyed pools available for import.

Any suggestions for data recovery (ones that don't involve time travel)?
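
So far we have only tried things along these lines (the device name is just an 
example); checking the labels on the member disks might at least show whether 
any of the old pool's metadata survived:

zpool import -D                  # list destroyed pools; nothing shows up
zdb -l /dev/rdsk/c0t1d0s0        # dump the four ZFS labels on one of the
                                 # member disks to see what metadata, if
                                 # any, is still there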

Thanks.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss