Re: [zfs-discuss] Pure SSD Pool

2011-07-12 Thread Jim Klimov
2011-07-12 9:06, Brandon High wrote: On Mon, Jul 11, 2011 at 7:03 AM, Eric Sproul espr...@omniti.com wrote: Interesting-- what is the suspected impact of not having TRIM support? There shouldn't be much, since ZFS isn't changing data in place. Any drive with reasonable garbage collection

Re: [zfs-discuss] Summary: Dedup memory and performance (again, again)

2011-07-12 Thread Jim Klimov
2011-07-09 20:04, Edward Ned Harvey wrote: --- Performance gain: Unfortunately there was only one area where I found any performance gain. When you read back duplicate data that was previously written with dedup, you get a lot more cache hits, and as a result the reads go faster.
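One rough way to observe this effect on a test pool (the pool name 'tank' is only an example) is to watch the achieved dedup ratio and the ARC hit/miss counters around a re-read of the deduplicated data:

    zpool get dedupratio tank                            # how much duplication the pool actually found
    kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses   # ARC counters; sample before and after the read test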

[zfs-discuss] recover raidz from fried server ??

2011-07-12 Thread Brett
Hi Folks, Situation: an x86-based Solaris 11 Express server with 2 pools (rpool / data) got fried. I need to recover the raidz pool "data", which consists of 5 x 1TB SATA drives. I have individually checked the disks with the Seagate diag tool; they are all physically OK. Issue: the new Sandy Bridge-based x86
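For reference, the usual recovery path once the five disks are attached to the new box looks roughly like this (the pool name 'data' comes from the post; it assumes the new controller presents the disks at all):

    zpool import              # scan /dev/dsk for importable pools and report their health
    zpool import -f data      # -f is needed because the pool was never exported from the fried server
    zpool status -v data      # confirm all five raidz members were found and the pool is ONLINE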

Re: [zfs-discuss] recover raidz from fried server ??

2011-07-12 Thread Jim Klimov
Well, actually you've scored a hit on both ideas I had after reading the question ;) One more idea though: is it possible to change the disk controller mode in BIOS i.e. to a generic IDE? Hopefully that might work, even if sub-optimal... AFAIK FreeBSD 8.x is limited to stable ZFSv15, and

Re: [zfs-discuss] Pure SSD Pool

2011-07-12 Thread Fajar A. Nugraha
On Tue, Jul 12, 2011 at 6:18 PM, Jim Klimov jimkli...@cos.ru wrote: 2011-07-12 9:06, Brandon High wrote: On Mon, Jul 11, 2011 at 7:03 AM, Eric Sproul espr...@omniti.com wrote: Interesting-- what is the suspected impact of not having TRIM support? There shouldn't be much, since ZFS isn't

Re: [zfs-discuss] Summary: Dedup memory and performance (again, again)

2011-07-12 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Klimov: By the way, did you estimate how much dedup's overhead is in terms of metadata blocks? For example, it was often said on the list that you shouldn't bother with dedup unless you
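For anyone wanting to measure that overhead on their own pool, zdb can report or simulate the dedup table directly (the pool name 'tank' is illustrative, and -S can take a long time on a large pool):

    zdb -S tank    # simulate dedup on existing, non-dedup data: prints a DDT histogram and the ratio you would get
    zdb -D tank    # on a pool already running dedup: number of DDT entries plus their on-disk and in-core sizes

The in-core size times the entry count is essentially the RAM the table wants; figures of a few hundred bytes per entry are the ones commonly quoted on this list.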

Re: [zfs-discuss] Summary: Dedup memory and performance (again, again)

2011-07-12 Thread Jim Klimov
This dedup discussion (and my own bad experience) has also left me with another grim thought: some time ago, sparse-root zone support was ripped out of OpenSolaris. Among the published rationales were the transition to IPS and the assumption that most people used them to save on disk space (notion

Re: [zfs-discuss] Summary: Dedup memory and performance (again, again)

2011-07-12 Thread Jim Klimov
You and I seem to have different interpretations of the empirical 2x soft-requirement for making dedup worthwhile. Well, until recently I had little interpretation for it at all, so your approach may be better. I hope that the authors of the requirement statement will step forward and explain

Re: [zfs-discuss] recover raidz from fried server ??

2011-07-12 Thread Bob Friesenhahn
On Mon, 11 Jul 2011, Brett wrote: 1) to try FreeBSD as an alternative OS, hoping it has more recently updated drivers to support the SATA controllers. According to the ZFS wiki, FreeBSD 8.2 supports zpool version 28. I have a concern that when I updated the old (fried) server to sol11exp it
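The version concern is easy to check before committing to either OS; something along these lines reads it straight off the disk labels (the device name is only an example):

    zpool upgrade -v                          # pool versions the running ZFS implementation supports
    zdb -l /dev/dsk/c0t1d0s0 | grep version   # version recorded in a member disk's label

If sol11exp already upgraded the pool past whatever version the FreeBSD release ships, that import will fail, so checking first is worthwhile.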

[zfs-discuss] Can zpool permanent errors be fixed by scrub?

2011-07-12 Thread Ciaran Cummins
Hi, we had a server that lost its connection to a fibre-attached disk array where the data LUNs were housed, due to a 3510 power fault. After the connection was restored, a lot of the zpool status output had these permanent errors listed, as per below. I checked the files in question and as far as I could see they were
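For context, the usual sequence once the LUNs are reachable again looks like this (the pool name is a placeholder):

    zpool status -v datapool    # list the files currently flagged with permanent errors
    zpool scrub datapool        # re-read and re-verify everything now that the array is back
    zpool status -v datapool    # after the scrub finishes, see whether the error list shrank
    zpool clear datapool        # reset error counters once the affected files verify or are restored

Errors that no longer reproduce are normally dropped from the report after a completed scrub, though it can take a second scrub, or deleting and restoring the affected files, before the output is clean.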

Re: [zfs-discuss] Summary: Dedup memory and performance (again, again)

2011-07-12 Thread Bob Friesenhahn
On Tue, 12 Jul 2011, Edward Ned Harvey wrote: You know what? A year ago I would have said dedup still wasn't stable enough for production. Now I would say it's plenty stable enough... But it needs performance enhancement before it's truly useful for most cases. What has changed for you to

Re: [zfs-discuss] Pure SSD Pool

2011-07-12 Thread Eric Sproul
On Tue, Jul 12, 2011 at 1:06 AM, Brandon High bh...@freaks.com wrote: On Mon, Jul 11, 2011 at 7:03 AM, Eric Sproul espr...@omniti.com wrote: Interesting-- what is the suspected impact of not having TRIM support? There shouldn't be much, since zfs isn't changing data in place. Any drive with

Re: [zfs-discuss] Pure SSD Pool

2011-07-12 Thread Bob Friesenhahn
On Tue, 12 Jul 2011, Eric Sproul wrote: Now, others have hinted that certain controllers are better than others in the absence of TRIM, but I don't see how GC could know what blocks are available to be erased without information from the OS. Drives which keep spare space in reserve (as any

Re: [zfs-discuss] Pure SSD Pool

2011-07-12 Thread Garrett D'Amore
I think high-end SSDs, like those from Pliant, use a significant amount of over-provisioning, plus internal remapping and internal COW, so that they can garbage collect automatically when they need to, without TRIM. This only works if the drive has enough extra free space that it knows about

Re: [zfs-discuss] Pure SSD Pool

2011-07-12 Thread Henry Lau
It is hard to say, 90% or 80%. SSDs already reserve over-provisioned space for garbage collection and wear leveling. The OS level only knows file LBAs, not the physical LBA-to-flash-page/block mapping. Uberblock updates and COW from ZFS will use a new page/block each time. A TRIM command

Re: [zfs-discuss] Pure SSD Pool

2011-07-12 Thread Erik Trimble
FYI - virtually all non-super-low-end SSDs are already significantly over-provisioned, for GC and scratch use inside the controller. In fact, the only difference between the OCZ extended models and the non-extended models (e.g. Vertex 2 50G (OCZSSD2-2VTX50G) and Vertex 2 Extended 60G

Re: [zfs-discuss] Pure SSD Pool

2011-07-12 Thread Brandon High
On Tue, Jul 12, 2011 at 7:41 AM, Eric Sproul espr...@omniti.com wrote: But that's exactly the problem-- ZFS being copy-on-write will eventually have written to all of the available LBA addresses on the drive, regardless of how much live data exists. It's the rate of change, in other words,

Re: [zfs-discuss] Pure SSD Pool

2011-07-12 Thread Eric Sproul
On Tue, Jul 12, 2011 at 1:35 PM, Brandon High bh...@freaks.com wrote: Most enterprise SSDs use something like 30% for spare area. So a drive with 128GiB (base 2) of flash will have 100GB (base 10) of available storage. A consumer-level drive will have ~6% spare, or 128GiB of flash and 128GB
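Taking those figures at face value, the implied spare-area percentages work out roughly as follows:

    echo 'scale=1; (128*2^30 - 100*10^9) * 100 / (128*2^30)' | bc   # enterprise-style drive: about 27% of the flash held back
    echo 'scale=1; (128*2^30 - 128*10^9) * 100 / (128*2^30)' | bc   # consumer-style drive: about 6.8% held back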

Re: [zfs-discuss] Pure SSD Pool

2011-07-12 Thread Brandon High
On Tue, Jul 12, 2011 at 12:14 PM, Eric Sproul espr...@omniti.com wrote: I see, thanks for that explanation. So finding drives that keep more space in reserve is key to getting consistent performance under ZFS. More spare area might give you more performance, but the big difference is the

Re: [zfs-discuss] Pure SSD Pool

2011-07-12 Thread Jim Klimov
2011-07-12 23:14, Eric Sproul wrote: So finding drives that keep more space in reserve is key to getting consistent performance under ZFS. I think I've read in a number of early SSD reviews (possibly regarding Intel devices - not certain now) that the vendor provided some low-level formatting

Re: [zfs-discuss] Can zpool permanent errors be fixed by scrub?

2011-07-12 Thread Ian Collins
On 07/13/11 12:04 AM, Ciaran Cummins wrote: Hi, we had a server that lost its connection to a fibre-attached disk array where the data LUNs were housed, due to a 3510 power fault. After the connection was restored, a lot of the zpool status output had these permanent errors listed, as per below. I checked the files in

Re: [zfs-discuss] How create a FAT filesystem on a zvol?

2011-07-12 Thread Gary Mills
On Sun, Jul 10, 2011 at 11:16:02PM +0700, Fajar A. Nugraha wrote: On Sun, Jul 10, 2011 at 10:10 PM, Gary Mills mi...@cc.umanitoba.ca wrote: The `lofiadm' man page describes how to export a file as a block device and then use `mkfs -F pcfs' to create a FAT filesystem on it. Can't I do the

Re: [zfs-discuss] Pure SSD Pool

2011-07-12 Thread Orvar Korvar
I am now using S11E and an OCZ Vertex 3 240GB SSD. I am using it on a SATA 2 port (not the new 6 Gbps SATA 3). The PC seems to work better now; the worst lag is gone. For instance, I am using Sun Ray, and if my girlfriend is using the PC while I am doing BitTorrent downloads, the PC could lock up

Re: [zfs-discuss] How create a FAT filesystem on a zvol?

2011-07-12 Thread Andrew Gabriel
Gary Mills wrote: On Sun, Jul 10, 2011 at 11:16:02PM +0700, Fajar A. Nugraha wrote: On Sun, Jul 10, 2011 at 10:10 PM, Gary Mills mi...@cc.umanitoba.ca wrote: The `lofiadm' man page describes how to export a file as a block device and then use `mkfs -F pcfs' to create a FAT filesystem
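For what it's worth, one approach worth trying is to point mkfs_pcfs at the zvol's raw device directly with an explicit geometry, skipping lofi; a minimal sketch with hypothetical dataset names and sizes (whether pcfs is then happy mounting the zvol block device is worth testing):

    zfs create -V 512m rpool/fatvol
    # pcfs normally expects an fdisk label; 'nofdisk' plus an explicit sector count
    # (512 MiB / 512-byte sectors = 1048576) lets it build directly on the volume
    mkfs -F pcfs -o nofdisk,size=1048576 /dev/zvol/rdsk/rpool/fatvol
    mount -F pcfs /dev/zvol/dsk/rpool/fatvol /mnt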

Re: [zfs-discuss] Summary: Dedup memory and performance (again, again)

2011-07-12 Thread Edward Ned Harvey
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us] Sent: Tuesday, July 12, 2011 9:58 AM You know what? A year ago I would have said dedup still wasn't stable enough for production. Now I would say it's plenty stable enough... But it needs performance enhancement before it's