On Dec 31, 2009, at 6:14 PM, R.G. Keen wrote:
On Thu, 31 Dec 2009, Bob Friesenhahn wrote:
I like the nice and short answer from this "Bob Friesen" fellow the best. :-)
It was succinct, wasn't it? 8-)
Sorry - I pulled the attribution from the ID, not the
signature which was waiting below. DOH!
Make that 25MB/sec, and rising...
So it's 8x faster now.
mike
On Thu, 31 Dec 2009, R.G. Keen wrote:
Given the largish aggregate monetary value to RAIDZ builders of
sidestepping the doubled cost of RAID-specialized drives, it occurs
to me that having a special set of actions for desktop-ish drives
might be a good idea. Something like a fix-the-failed repair
I've written about my slow-to-dedupe RAIDZ.
After a week of waiting, I finally bought a little $100 30GB OCZ
Vertex and plugged it in as a cache.
After <2 hours of warmup, my zfs send/receive rate on the pool is
>16MB/sec (reading and writing each at 16MB/sec, as measured by
zpool iostat).
That's
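For anyone who wants to reproduce this, a minimal sketch of adding an SSD as an L2ARC cache device and watching the effect (pool and device names here are hypothetical):

    # add the SSD as a read cache (L2ARC); names are examples only
    zpool add tank cache c9t0d0
    # watch per-vdev bandwidth while the send/receive runs
    zpool iostat -v tank 5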
> On Thu, 31 Dec 2009, Bob Friesenhahn wrote:
> I like the nice and short answer from this "Bob Friesen" fellow the best. :-)
It was succinct, wasn't it? 8-)
Sorry - I pulled the attribution from the ID, not the
signature which was waiting below. DOH!
When you say:
> It does not really matter
In osol 2009.06, the rpool vdev was dying, but I was able to do a clean export of
the data pool. The data pool's ZIL was on a slice of the failed HDD, as was the
slog's GUID. As a result I have 4 out of 4 healthy raid5 data drives but
cannot import the zpool to access the data. This is obviously a disaster
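For reference: 2009.06 shipped zpool version 14, which cannot import a pool whose separate log device is gone; builds with zpool version 19 or later (log device removal) can. A hedged sketch, assuming the pool is named tank:

    # on a build with zpool >= v19, import despite the missing slog
    zpool import -m tank
    # then drop the dead log device from the pool configuration
    zpool remove tank <guid-of-failed-slog>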
Hello!
> This could be a broken disk, or it could be some other
> hardware/software/firmware issue. Check the errors on the
> device with
> iostat -En
Here's the output:
c7t1d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA Product: WDC WD10EADS-00L Revision
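If all three counters stay at zero, it can still be worth checking the fault manager's telemetry; a quick sketch using stock (Open)Solaris commands:

    fmdump -eV | more    # raw error reports FMA has collected
    iostat -En c7t1d0    # re-check the per-device counters after more load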
On Dec 31, 2009, at 13:44, Joerg Schilling wrote:
ZFS is COW, but does the SSD know which block is "in use" and which is not?
If the SSD did know whether a block is in use, it could erase unused
blocks in advance. But what is an "unused block" on a filesystem that
supports snapshots?
P
On Thu, 31 Dec 2009, R.G. Keen wrote:
I'm in full overthink/overresearch mode on this issue, preparatory
to ordering disks for my OS/zfs NAS build. So bear with me. I've
been reading manuals and code, but it's hard for me to come up to
speed on a new OS quickly.
The question(s) underlying this thread seem to be:
On 31 dec 2009, at 19.26, Richard Elling wrote:
> [I TRIMmed the thread a bit ;-)]
>
> On Dec 31, 2009, at 1:43 AM, Ragnar Sundblad wrote:
>> On 31 dec 2009, at 06.01, Richard Elling wrote:
>>>
>>> In a world with copy-on-write and without snapshots, it is obvious that
>>> there will be a lot of blocks running around that are no longer in use.
I'm in full overthink/overresearch mode on this issue, preparatory to ordering
disks for my OS/zfs NAS build. So bear with me. I've been reading manuals and
code, but it's hard for me to come up to speed on a new OS quickly.
The question(s) underlying this thread seem to be:
(1) Does zfs raidz/
mijoh...@gmail.com said:
> I've never had a lun go bad but bad things do happen. Does anyone else use
> ZFS in this way? Is this an unrecommended setup?
We used ZFS like this on a Hitachi array for 3 years. Worked fine, not
one bad block/checksum error detected. Still using it on an old Sun 61
Richard Elling wrote:
> The reason you want TRIM for SSDs is to recover the write speed.
> A freshly cleaned page can be written faster than a dirty page.
> But in COW, you are writing to new pages and not rewriting old
> pages. This is fundamentally different than FAT, NTFS, or HFS+,
> but it is
Bob Friesenhahn wrote:
> I have heard quite a few times that TRIM is "good" for SSD drives but
> I don't see much actual use for it. Every responsible SSD drive
> maintains a reserve of unused space (20-50%) since it is needed for
> wear leveling and to repair failing spots. This means that
[I TRIMmed the thread a bit ;-)]
On Dec 31, 2009, at 1:43 AM, Ragnar Sundblad wrote:
On 31 dec 2009, at 06.01, Richard Elling wrote:
In a world with copy-on-write and without snapshots, it is obvious
that there will be a lot of blocks running around that are no longer
in use.
Snapshots (a
On Dec 31, 2009, at 1:43 AM, Andras Spitzer wrote:
Let me sum up my thoughts in this topic.
To Richard [relling] : I agree with you this topic is even more
confusing if we are not careful enough to specify exactly what we
are talking about. Thin provision can be done on multiple layers,
a
Thomas Burgess wrote:
For the OS, I'd drop the adapter/compact-flash combo and use the
"stripped down" Kingston version of the Intel x25m MLC SSD. If you're
not familiar with it, the basic scoop is that this drive contains half
the flash memory (40GB) *and* half the controller channels (5 versus 10) of the
On 31 dec 2009, at 17.18, Bob Friesenhahn wrote:
> On Thu, 31 Dec 2009, Ragnar Sundblad wrote:
>>
>> Also, currently, when the SSDs for some very strange reason are
>> constructed from flash chips designed for firmware and slowly
>> changing configuration data and can only erase in very large chunks,
On Thu, 31 Dec 2009, Emily Grettel wrote:
I'm using OpenSolaris 127 from my previous posts to address CIFS problems.
I have a few zpools, but lately (with an uptime of 32 days) we've started
to get CIFS issues and really bad IO performance.
I've been running scrubs on a nightly basis.
I'm no
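A quick way to see whether the nightly scrubs are finding anything, and whether one device is dragging the pool down (pool name is hypothetical):

    zpool status -xv tank   # scrub results plus any files with errors
    iostat -xn 5            # look for one disk with an outsized asvc_t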
On Dec 31, 2009, at 2:49 AM, Robert Milkowski wrote:
judging by a *very* quick glance, it looks like you have an issue
with the c3t0d0 device, which is responding very slowly.
Yes, there is an I/O stuck on the device which is not getting serviced.
See below...
--
Robert Milkowski
http://milek.
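One way to confirm a stuck I/O from userland (a sketch; the device name is the one from the thread):

    # a device whose actv column stays pinned above zero with no
    # completions for many intervals suggests a hung command, not
    # just a slow disk
    iostat -xn c3t0d0 1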
On 30/12/2009 22:57, ono wrote:
will I be able to see which files were "affected" by dedup, or can I do a
zfs send/receive to another filesystem to clean it up?
send|recv will be enough.
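In other words, rewriting the data through send/receive re-evaluates every block, so the received copy is clean. A minimal sketch (dataset names are hypothetical):

    zfs snapshot tank/data@rewrite
    zfs send tank/data@rewrite | zfs receive tank/data_clean
    # verify the copy, then destroy the old dataset and rename
    # the new one into place with zfs rename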
Just an update:
Finally I found some technical details about this Thin Reclamation API:
(http://blogs.hds.com/claus/2009/12/i-love-it-when-a-plan-comes-together.html)
"This week, (December 7th), Symantec announced their “completing the thin
provisioning ecosystem” that includes the necessary
On Thu, 31 Dec 2009, Ragnar Sundblad wrote:
Also, currently, when the SSDs for some very strange reason are
constructed from flash chips designed for firmware and slowly
changing configuration data and can only erase in very large chunks,
TRIMming is good for the housekeeping in the SSD drive. A t
On Thu, Dec 31 at 2:14, Willy wrote:
Thanks, sounds like it should handle all but the
worst faults OK then; I believe the maximum retry
timeout is typically set to about 60 seconds in
consumer drives.
Are you sure about this? I thought these consumer-level drives
would try indefinitely to carry out their operation.
> Yeah, still no joy on getting my pool back. I think
> I might have to try grabbing another server with a
> lot more memory and slapping the HBA and the drives
> in that. Can ZFS deal with a controller change?
Just some more info that 'may' help.
After I upgraded to 8GB of RAM, I did not limit
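Assuming the truncated line above refers to capping the ARC, the usual knob in that era was zfs_arc_max in /etc/system (value in bytes, takes effect at boot). A sketch for a 4 GB cap:

    * /etc/system entry -- limit the ZFS ARC to 4 GB (0x100000000 bytes)
    set zfs:zfs_arc_max = 0x100000000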
http://mail.opensolaris.org/pipermail/zfs-discuss/
Henrik
http://sparcv9.blogspot.com
Hello there,
is there any possibility to receive all the old mailings from
the list? I would like to search them for know-how, so that
I don't double-post too often :-)
Thanks,
Florian
>
> For the OS, I'd drop the adapter/compact-flash combo and use the
> "stripped down" Kingston version of the Intel x25m MLC SSD. If you're
> not familiar with it, the basic scoop is that this drive contains half
> the flash memory (40GB) *and* half the controller channels (5 versus
> 10) of the
Mike wrote:
Just thought I would let you all know that I followed what Alex suggested
along with what many of you pointed out and it worked! Here are the steps
I followed:
1. Break root drive mirror
2. zpool export filesystem
3. run the command to start MPxIO and reboot the machine
4. zpool import
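For the record, a hedged sketch of those steps as Solaris commands (pool and device names are hypothetical; stmsboot -e is the standard way to enable MPxIO and offers the reboot itself):

    zpool detach rpool c0t1d0s0   # 1. break the root mirror
    zpool export tank             # 2. export the data pool
    stmsboot -e                   # 3. enable MPxIO, then reboot
    zpool import tank             # 4. re-import under the new device paths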
> Thanks, sounds like it should handle all but the
> worst faults OK then; I believe the maximum retry
> timeout is typically set to about 60 seconds in
> consumer drives.
Are you sure about this? I thought these consumer-level drives would try
indefinitely to carry out their operation. Even Sams
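Whether a consumer drive gives up after seconds or retries "indefinitely" is governed by its error-recovery-control setting (ERC, a.k.a. TLER/CCTL). Later smartmontools releases (newer than these posts) can query and set it on drives that support it; shown here in the Linux-style invocation purely as an illustration, with a hypothetical device:

    smartctl -l scterc /dev/sdX        # query the current ERC timeouts
    smartctl -l scterc,70,70 /dev/sdX  # cap read/write recovery at 7.0 s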
On 31 dec 2009, at 10.43, Andras Spitzer wrote:
> Then came Veritas with this brilliant idea of building a bridge between the
> FS and the SAN frame (this became the Thin Reclamation API), so they can
> communicate which blocks are not in use indeed.
This is exactly what TRIM is for, but could
On 31 dec 2009, at 00.31, Bob Friesenhahn wrote:
> On Wed, 30 Dec 2009, Mike Gerdts wrote:
>>
>> Should the block size be a tunable, so that it can match the page size
>> of SSDs (typically 4K, right?) and of upcoming hard disks that sport a
>> sector size > 512 bytes?
>
> Enterprise SSDs are still in their infancy.
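For what it's worth, the per-dataset logical block size is already tunable today; it is the device sector size (the pool's internal ashift) that has no user-visible knob. A sketch with a hypothetical dataset:

    zfs set recordsize=4K tank/db   # match record size to 4K flash pages
    zfs get recordsize tank/db      # confirm the setting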
On 31 dec 2009, at 06.01, Richard Elling wrote:
>
> On Dec 30, 2009, at 2:24 PM, Ragnar Sundblad wrote:
>
>>
>> On 30 dec 2009, at 22.45, Richard Elling wrote:
>>
>>> On Dec 30, 2009, at 12:25 PM, Andras Spitzer wrote:
>>>
Richard,
That's an interesting question, if it's worth
Let me sum up my thoughts in this topic.
To Richard [relling] : I agree with you this topic is even more confusing if we
are not careful enough to specify exactly what we are talking about. Thin
provision can be done on multiple layers, and though you said you like it to be
closer to the app th
Rather than hacking something like that, he could use a Disk on Module
(http://en.wikipedia.org/wiki/Disk_on_module) or something like
http://www.tomshardware.com/news/nanoSSD-Drive-Elecom-Japan-SATA,8538.html
(which I suspect may be a DOM but I've not poked around sufficiently to see).
Paul