Re: [zfs-discuss] Ext. UPS-backed SATA SSD ZIL?

2010-11-27 Thread Christopher George
> I'm doing compiles of the JDK, with a single ZFS-backed system handling 
> the files for 20-30 clients, each trying to compile a 15 million-line 
> JDK at the same time.

Very cool application!

Can you share any metrics, such as the aggregate size of source files
compiled and the size of the resultant binaries?

Thanks,

Christopher George
Founder/CTO
www.ddrdrive.com
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Ext. UPS-backed SATA SSD ZIL?

2010-11-27 Thread Maurice R Volaski
> TRIM was putback in July...  You're telling me it didn't make it into S11
> Express?
>
> http://mail.opensolaris.org/pipermail/onnv-notify/2010-July/012674.html

It looks like that putback covers the ability to issue the TRIM command at
the SATA framework level, but ZFS itself still doesn't use it:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6957655




Re: [zfs-discuss] Ext. UPS-backed SATA SSD ZIL?

2010-11-27 Thread Tim Cook
On Sat, Nov 27, 2010 at 9:29 PM, Erik Trimble wrote:

> On 11/27/2010 6:50 PM, Christopher George wrote:
>
>>> Furthermore, I don't think "1 hour sustained" is a very accurate benchmark.
>>> Most workloads are bursty in nature.
>>>
>> The IOPS degradation is additive; the length of the first and second
>> one-hour sustained periods is completely arbitrary.  The takeaway from
>> slides 1 and 2 is that drive inactivity has no effect on the eventual
>> outcome.  So with either a bursty or sustained workload the end result is
>> always the same: dramatic write IOPS degradation after unpackaging or
>> secure erase of the tested Flash-based SSDs.
>>
>> Best regards,
>>
>> Christopher George
>> Founder/CTO
>> www.ddrdrive.com
>>
>
> Without commenting on other threads, I often see sustained IO in my setups
> for extended periods of time - particularly, small IO which eats up my IOPS.
>  At this moment, I run with ZIL turned off for that pool, as it's a scratch
> pool and I don't care if it gets corrupted. I suspect that a DDRdrive or one
> of the STEC Zeus drives might help me, but I can overwhelm any other SSD
> quickly.
>
> I'm doing compiles of the JDK, with a single ZFS-backed system handling the
> files for 20-30 clients, each trying to compile a 15 million-line JDK at the
> same time.
>
> Lots and lots of small I/O.
>
> :-)
>
>
>

Sounds like you need lots and lots of 15krpm drives instead of 7200rpm SATA
;)

--Tim


Re: [zfs-discuss] Ext. UPS-backed SATA SSD ZIL?

2010-11-27 Thread Erik Trimble

On 11/27/2010 6:50 PM, Christopher George wrote:

Furthermore, I don't think "1 hour sustained" is a very accurate benchmark.
Most workloads are bursty in nature.

The IOPS degradation is additive; the length of the first and second one-hour
sustained periods is completely arbitrary.  The takeaway from slides 1 and 2 is
that drive inactivity has no effect on the eventual outcome.  So with either a
bursty or sustained workload the end result is always the same: dramatic write
IOPS degradation after unpackaging or secure erase of the tested Flash-based
SSDs.

Best regards,

Christopher George
Founder/CTO
www.ddrdrive.com


Without commenting on other threads, I often see sustained IO in my 
setups for extended periods of time - particularly, small IO which eats 
up my IOPS.  At this moment, I run with ZIL turned off for that pool, as 
it's a scratch pool and I don't care if it gets corrupted. I suspect 
that a DDRdrive or one of the STEC Zeus drives might help me, but I can 
overwhelm any other SSD quickly.


I'm doing compiles of the JDK, with a single ZFS-backed system handling 
the files for 20-30 clients, each trying to compile a 15 million-line 
JDK at the same time.


Lots and lots of small I/O.

:-)

--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



Re: [zfs-discuss] Ext. UPS-backed SATA SSD ZIL?

2010-11-27 Thread Christopher George
> Furthermore, I don't think "1 hour sustained" is a very accurate benchmark.  
> Most workloads are bursty in nature.

The IOPS degradation is additive; the length of the first and second one-hour 
sustained periods is completely arbitrary.  The takeaway from slides 1 and 2 is 
that drive inactivity has no effect on the eventual outcome.  So with either a 
bursty or sustained workload the end result is always the same: dramatic write 
IOPS degradation after unpackaging or secure erase of the tested Flash-based 
SSDs.

Best regards,

Christopher George
Founder/CTO
www.ddrdrive.com


Re: [zfs-discuss] Ext. UPS-backed SATA SSD ZIL?

2010-11-27 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Christopher George
> 
> Jump to slide 37 for the write IOPS benchmarks:
> 
> http://www.ddrdrive.com/zil_accelerator.pdf

Anybody who designs or works with NAND (flash) at a low level knows it can't
possibly come close to the sustainable speed of RAM, except in corner cases
where all the stars are aligned perfectly in favor of the NAND.  Think how
fast your system can fill its system RAM, and then think how fast it can
fill an equivalently sized hard drive.  If bus speed were actually the
limiting factor (and it isn't for any SSD that I know of) ... You've got NUMA
to system RAM, you've got NUMA to PCIe to DDRdrive, and you've got NUMA to
PCIe to SATA to the SSD - where you can't even fully utilize the SATA bus,
because the SSD can't keep up.

The above result isn't the slightest bit surprising to me.  The SSD
manufacturers report maximum statistics that aren't typical or sustainable
under anything resembling typical usage.  I think the SSDs can actually
live up to their claims if (a) they have a read-mostly workload, and either
(b)(1) they have mostly large sequential operations, or (b)(2) they have
random operations which are suitably sized to match the geometry of the NAND
cells internally.
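[As a rough illustration of the point above, here's a back-of-envelope sketch; both bandwidth figures are round assumptions for hardware of this era, not measurements:]

```python
# Illustrative only: time to write 8 GB at assumed sustained bandwidths.
# Both bandwidth numbers below are assumptions, not benchmark results.
size_gb = 8.0
ram_gb_per_s = 10.0   # assumed sustained DDR-class memory bandwidth
ssd_gb_per_s = 0.2    # assumed sustained SATA SSD write bandwidth

print(f"fill RAM: {size_gb / ram_gb_per_s:.1f} s")  # 0.8 s
print(f"fill SSD: {size_gb / ssd_gb_per_s:.0f} s")  # 40 s
```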




Re: [zfs-discuss] OCZ RevoDrive ZFS support

2010-11-27 Thread Tim Cook
On Sat, Nov 27, 2010 at 3:12 PM, Orvar Korvar <
knatte_fnatte_tja...@yahoo.com> wrote:

> I am waiting for the next gen Intel SSD drives, G3. They are arriving very
> soon. And from what I can infer by reading here, I can use it without
> issues. Solaris will recognize the Intel SSD without any drivers
> needed, or whatever?
>
> Intel's new SSD should work with Solaris 11 Express, yes?
>
>
You don't need drivers for any SATA-based SSD.  It shows up as a standard
hard drive and plugs into a standard SATA port.  By the time the G3 Intel
drive is out, the next gen SandForce should be out as well.  Unless Intel
does something revolutionary, they will still be behind the SandForce
drives.

--Tim


Re: [zfs-discuss] OCZ RevoDrive ZFS support

2010-11-27 Thread Moazam Raja
Agreed, SSDs with SandForce controllers are the only way to go. The
controller makes a world of difference.

-Moazam


On Sat, Nov 27, 2010 at 12:27 PM, Tim Cook  wrote:
>
>
> On Sat, Nov 27, 2010 at 2:16 PM, Orvar Korvar
>  wrote:
>>
>> "Your system drive on a Solaris system generally doesn't see enough I/O
>> activity to require the kind of IOPS you can get out of most modern SSD's. "
>>
>> My system drive sees a lot of activity, to the degree everything is going
>> slow. I have a SunRay that my girlfriend uses, and I have 5-10 torrents going
>> on, and surf the web - often my system crawls. Very often my girlfriend gets
>> irritated because everything lags and she frequently asks me if she can do
>> some task, or if she should wait until I have finished copying my files.
>> Unbearable.
>>
>> I have a quad core Intel 9450 at 2.66GHz, and 8GB RAM.
>>
>> I am planning to use a SSD and really hope it will be faster.
>>
>>
>>
>>
>> $ iostat -xcnXCTdz 1
>>
>> cpu
>> us sy wt id
>>  25  7  0 68
>>                    extended device statistics
>>    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>>    0,0    0,0    0,0    0,0  0,0  0,0    0,0    0,0   0   0 c8
>>    0,0    0,0    0,0    0,0  0,0  0,0    0,0    0,0   0   0 c8t0d0
>>   37,0  442,1 4489,6 51326,1  7,5  2,0   15,7    4,1  98 100 c7d0
>
> Desktop usage is a different beast, as I alluded to.  A dedicated server
> typically doesn't have any issues.  I'd strongly suggest getting one of the
> SandForce-controller-based SSDs.  They're the best on the market right now
> by far.
>
> --Tim


Re: [zfs-discuss] OCZ RevoDrive ZFS support

2010-11-27 Thread Orvar Korvar
I am waiting for the next gen Intel SSD drives, G3. They are arriving very 
soon. And from what I can infer by reading here, I can use it without issues. 
Solaris will recognize the Intel SSD without any drivers needed, or 
whatever? 

Intel's new SSD should work with Solaris 11 Express, yes?


Re: [zfs-discuss] OCZ RevoDrive ZFS support

2010-11-27 Thread Chris Mosetick
A word of caution on the Silicon Image 3124.  I have tested two
extremely cheap cards using the si3124 driver on b134 and OI b147.  One card
was PCI, the other PCI-X.  I found that both are unusable until the driver
is updated.  Large-ish file transfers, say over 1GB, would lock up the
machine and cause a kernel panic.  Investigation revealed it was si3124.
The driver is in serious need of an update, at least in the builds mentioned
above.  It's possible that a firmware update on the card would help, but I
never had time to explore that option.  If a device using the si3124 driver
works great for you in an L2ARC role after extensive testing, then by all
means use it; I just wanted to pass along my experience.

-Chris

> The RevoDrive should not require a custom device driver as it is based on the
> Silicon Image 3124 PCI-X RAID controller connected to a Pericom PCI-X to
> PCIe bridge chip (PI7C9X130).  The required driver would be the si3124(7D);
> I noticed the man page states NCQ is not supported.  I found the following
> link detailing the status:
>
> http://opensolaris.org/jive/thread.jspa?messageID=466436


Re: [zfs-discuss] Ext. UPS-backed SATA SSD ZIL?

2010-11-27 Thread Christopher George
> TRIM was putback in July...  You're telling me it didn't make it into S11
> Express?

Without top level ZFS TRIM support, SATA Framework (sata.c) support
has no bearing on this discussion.

Best regards,

Christopher George
Founder/CTO
www.ddrdrive.com
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS iscsitgtd & backing store no such file or directory after reboot

2010-11-27 Thread Thierry Delaitre
Hello,

 

A ZFS VDI-related question: I'm exporting an iSCSI share from a Linux
box, which I'm mounting on a Solaris 10 VDI broker for subsequent use
by the desktop providers. This is for a proof of concept. It works
fine under VDI 3.2.1 until I reboot the VDI broker.

After the broker reboots, the desktop providers are no longer able to
see the iSCSI share. The disk in the vdi pool and its snapshot are
still present. If I try to re-export the iSCSI disk that was in use by
the desktop provider before the reboot, using 'zfs set shareiscsi=on',
I get an 'iscsitgtd failed request to share' error.

I've spent a few days on this one and don't have a clue despite
googling.

 

vdi/ad03deb8-214f-4b8a-bd51-8dc8f819c460              114M  765G  114M  -
vdi/ad03deb8-214f-4b8a-bd51-8dc8f819c...@version1        0     -  114M  -

bash-3.00# zfs set shareiscsi=on vdi/ad03deb8-214f-4b8a-bd51-8dc8f819c460
cannot share 'vdi/ad03deb8-214f-4b8a-bd51-8dc8f819c460': iscsitgtd
failed request to share
cannot share 'vdi/ad03deb8-214f-4b8a-bd51-8dc8f819c...@version1':
iscsitgtd failed request to share

 

zfs set shareiscsi=off vdi/ad03deb8-214f-4b8a-bd51-8dc8f819c460
zfs set shareiscsi=on vdi/ad03deb8-214f-4b8a-bd51-8dc8f819c460
cannot share 'vdi/ad03deb8-214f-4b8a-bd51-8dc8f819c...@version1':
iscsitgtd failed request to share

 

iscsitadm list target -v

Target: vdi/e8945efc-b9ec-4423-88fb-9772a0a50296 (Created after the reboot)
    iSCSI Name: iqn.1986-03.com.sun:02:6a253034-c453-e81d-bc7e-f1af8f26063b
    Alias: vdi/e8945efc-b9ec-4423-88fb-9772a0a50296
    Connections: 1
        Initiator:
            iSCSI Name: iqn.2009-08.com.sun.virtualbox.initiator:01:192.168.7.11
            Alias: unknown
    ACL list:
    TPGT list:
    LUN information:
        LUN: 0
            GUID: 600144f04cf14639144f201a2c00
            VID: SUN
            PID: SOLARIS
            Type: disk
            Size: 20G
            Backing store: /dev/zvol/rdsk/vdi/e8945efc-b9ec-4423-88fb-9772a0a50296
            Status: online

Target: vdi/ad03deb8-214f-4b8a-bd51-8dc8f819c460 (Created before the reboot)
    iSCSI Name: iqn.1986-03.com.sun:02:a840ac46-41d9-41d7-c5c3-b930ac4a9852
    Alias: vdi/ad03deb8-214f-4b8a-bd51-8dc8f819c460
    Connections: 0
    ACL list:
    TPGT list:
    LUN information:
        LUN: 0
            GUID: 600144f04cd5e869144f201a2c00
            VID: SUN
            PID: SOLARIS
            Type: disk
            Size: 1.0G
            Backing store: /dev/zvol/rdsk/vdi/ad03deb8-214f-4b8a-bd51-8dc8f819c460
            Status: No such file or directory

 

Thanks,

Thierry.



--
The University of Westminster is a charity and a company limited by
guarantee.  Registration number: 977818 England.  Registered Office:
309 Regent Street, London W1B 2UW, UK.


Re: [zfs-discuss] Ext. UPS-backed SATA SSD ZIL?

2010-11-27 Thread Tim Cook
On Sat, Nov 27, 2010 at 2:24 PM, Christopher George wrote:

> > Why would you disable TRIM on an SSD benchmark?
>
> Because ZFS does *not* support TRIM, so the benchmarks
> are configured to replicate actual ZIL Accelerator workloads.
>
> > If you're doing sustained high-IOPS workloads like that, the
> > back-end is going to fall over and die long before the hour time-limit.
>
> The reason the graphs are done in a timeline fashion is so you can look
> at any point in the 1-hour series and see how each device performs.
>
> Best regards,
>
>
>
TRIM was putback in July...  You're telling me it didn't make it into S11
Express?

http://mail.opensolaris.org/pipermail/onnv-notify/2010-July/012674.html

--Tim


Re: [zfs-discuss] OCZ RevoDrive ZFS support

2010-11-27 Thread Tim Cook
On Sat, Nov 27, 2010 at 2:16 PM, Orvar Korvar <
knatte_fnatte_tja...@yahoo.com> wrote:

> "Your system drive on a Solaris system generally doesn't see enough I/O
> activity to require the kind of IOPS you can get out of most modern SSD's. "
>
> My system drive sees a lot of activity, to the degree everything is going
> slow. I have a SunRay that my girlfriend uses, and I have 5-10 torrents going
> on, and surf the web - often my system crawls. Very often my girlfriend gets
> irritated because everything lags and she frequently asks me if she can do
> some task, or if she should wait until I have finished copying my files.
> Unbearable.
>
> I have a quad core Intel 9450 at 2.66GHz, and 8GB RAM.
>
> I am planning to use a SSD and really hope it will be faster.
>
>
>
>
> $ iostat -xcnXCTdz 1
>
> cpu
> us sy wt id
>  25  7  0 68
>                     extended device statistics
>     r/s    w/s   kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b device
>     0,0    0,0    0,0     0,0   0,0   0,0     0,0     0,0   0   0 c8
>     0,0    0,0    0,0     0,0   0,0   0,0     0,0     0,0   0   0 c8t0d0
>    37,0  442,1 4489,6 51326,1   7,5   2,0    15,7     4,1  98 100 c7d0



Desktop usage is a different beast, as I alluded to.  A dedicated server
typically doesn't have any issues.  I'd strongly suggest getting one of the
SandForce-controller-based SSDs.  They're the best on the market right now
by far.

--Tim


Re: [zfs-discuss] Ext. UPS-backed SATA SSD ZIL?

2010-11-27 Thread Christopher George
> Why would you disable TRIM on an SSD benchmark?

Because ZFS does *not* support TRIM, so the benchmarks
are configured to replicate actual ZIL Accelerator workloads.

> If you're doing sustained high-IOPS workloads like that, the
> back-end is going to fall over and die long before the hour time-limit.

The reason the graphs are done in a timeline fashion is so you can look
at any point in the 1-hour series and see how each device performs.

Best regards,

Christopher George
Founder/CTO
www.ddrdrive.com
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] OCZ RevoDrive ZFS support

2010-11-27 Thread Orvar Korvar
"Your system drive on a Solaris system generally doesn't see enough I/O 
activity to require the kind of IOPS you can get out of most modern SSD's. "

My system drive sees a lot of activity, to the degree everything is going slow. 
I have a SunRay that my girlfriend uses, and I have 5-10 torrents going on, and 
surf the web - often my system crawls. Very often my girlfriend gets irritated 
because everything lags and she frequently asks me if she can do some task, or 
if she should wait until I have finished copying my files. Unbearable.

I have a quad core Intel 9450 at 2.66GHz, and 8GB RAM.

I am planning to use a SSD and really hope it will be faster.




$ iostat -xcnXCTdz 1

cpu
us sy wt id
 25  7  0 68
                    extended device statistics
    r/s    w/s   kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b device
    0,0    0,0    0,0     0,0   0,0   0,0     0,0     0,0   0   0 c8
    0,0    0,0    0,0     0,0   0,0   0,0     0,0     0,0   0   0 c8t0d0
   37,0  442,1 4489,6 51326,1   7,5   2,0    15,7     4,1  98 100 c7d0
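[For what it's worth, the busy c7d0 row above reduces to an average write size; a quick sketch, with the w/s and kw/s values read from that row (commas as decimal points):]

```python
# Average write size implied by the c7d0 row of the iostat output above.
w_per_s = 442.1       # writes per second (w/s column)
kw_per_s = 51326.1    # kilobytes written per second (kw/s column)

avg_write_kb = kw_per_s / w_per_s
print(f"average write size: {avg_write_kb:.0f} KB")  # ~116 KB per write
```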


Re: [zfs-discuss] OCZ RevoDrive ZFS support

2010-11-27 Thread Eugen Leitl
On Sat, Nov 27, 2010 at 01:19:50PM -0600, Tim Cook wrote:

> They're a standard SATA hard drive.  You can use them for whatever you'd
> like.  For the price though, they aren't really worth the money to buy just
> to put your OS on.   Your system drive on a Solaris system generally doesn't
> see enough I/O activity to require the kind of IOPS you can get out of most

I run hundreds of vserver guests from an SSD, only the /home is mounted
on a hard drive/RAID.

> modern SSD's.  If you were using the system as a workstation, it'd
> definitely help, as applications tend to feel more responsive with an SSD.
> That's all I run in my laptops now.

-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


Re: [zfs-discuss] OCZ RevoDrive ZFS support

2010-11-27 Thread Tim Cook
On Sat, Nov 27, 2010 at 8:10 AM, Orvar Korvar <
knatte_fnatte_tja...@yahoo.com> wrote:

> A noob question:
>
> These drives that people talk about, can you use them as a system disc too?
> Install Solaris 11 Express on them? Or can you only use them as a L2ARC or
> Zil?
> --
>
>
They're standard SATA drives.  You can use them for whatever you'd
like.  For the price, though, they aren't really worth the money to buy just
to put your OS on.  Your system drive on a Solaris system generally doesn't
see enough I/O activity to require the kind of IOPS you can get out of most
modern SSDs.  If you were using the system as a workstation, it'd
definitely help, as applications tend to feel more responsive with an SSD.
That's all I run in my laptops now.

--Tim


Re: [zfs-discuss] Ext. UPS-backed SATA SSD ZIL?

2010-11-27 Thread Marc Nicholas
That's a great deck, Chris.

-marc

Sent from my iPhone

On 2010-11-27, at 10:34 AM, Christopher George  wrote:

>> I haven't had a chance to test a Vertex 2 PRO against my 2 EX, and I'd 
>> be interested if anyone else has.
> 
> I recently presented at the OpenStorage Summit 2010 and compared
> exactly the three devices you mention in your post (Vertex 2 EX,
> Vertex 2 Pro, and the DDRdrive X1) as ZIL Accelerators.
> 
> Jump to slide 37 for the write IOPS benchmarks:
> 
> http://www.ddrdrive.com/zil_accelerator.pdf
> 
>> and you *really* want to make sure you get  the 4k alignment right
> 
> Excellent point, starting on slide 66 the performance impact of partition 
> misalignment is illustrated.  Considering the results, longevity might be
> an even greater concern than decreased IOPS performance as ZIL
> acceleration is a worst case scenario for a Flash based SSD.
> 
>> The DDRdrive is still the way to go for the ultimate ZIL acceleration, 
>> but it's pricey as hell.
> 
> In addition to product cost, I believe IOPS/$ is a relevant point of 
> comparison.
> 
> Google products gives the price range for the OCZ 50GB SSDs:
> Vertex 2 EX (OCZSSD2-2VTXEX50G: $870 - $1,011 USD)
> Vertex 2 Pro (OCZSSD2-2VTXP50G:  $399 - $525 USD)
> 
> 4KB Sustained and Aligned Mixed Write IOPS results (See pdf above):
> Vertex 2 EX (6325 IOPS)
> Vertex 2 Pro (3252 IOPS)
> DDRdrive X1 (38701 IOPS)
> 
> Using the lowest online price for both the Vertex 2 EX and Vertex 2 Pro,
> and the full list price (SRP) of the DDRdrive X1.
> 
> IOPS/Dollar($):
> Vertex 2 EX (6325 IOPS / $870)  =  7.27
> Vertex 2 Pro (3252 IOPS / $399)  =  8.15
> DDRdrive X1 (38701 IOPS / $1,995)  =  19.40
> 
> Best regards,
> 
> Christopher George
> Founder/CTO
> www.ddrdrive.com


Re: [zfs-discuss] Ext. UPS-backed SATA SSD ZIL?

2010-11-27 Thread Tim Cook
On Sat, Nov 27, 2010 at 9:34 AM, Christopher George wrote:

> > I haven't had a chance to test a Vertex 2 PRO against my 2 EX, and I'd
> > be interested if anyone else has.
>
> I recently presented at the OpenStorage Summit 2010 and compared
> exactly the three devices you mention in your post (Vertex 2 EX,
> Vertex 2 Pro, and the DDRdrive X1) as ZIL Accelerators.
>
> Jump to slide 37 for the write IOPS benchmarks:
>
> http://www.ddrdrive.com/zil_accelerator.pdf
>
> > and you *really* want to make sure you get  the 4k alignment right
>
> Excellent point, starting on slide 66 the performance impact of partition
> misalignment is illustrated.  Considering the results, longevity might be
> an even greater concern than decreased IOPS performance as ZIL
> acceleration is a worst case scenario for a Flash based SSD.
>
> > The DDRdrive is still the way to go for the ultimate ZIL acceleration,
> > but it's pricey as hell.
>
> In addition to product cost, I believe IOPS/$ is a relevant point of
> comparison.
>
> Google products gives the price range for the OCZ 50GB SSDs:
> Vertex 2 EX (OCZSSD2-2VTXEX50G: $870 - $1,011 USD)
> Vertex 2 Pro (OCZSSD2-2VTXP50G:  $399 - $525 USD)
>
> 4KB Sustained and Aligned Mixed Write IOPS results (See pdf above):
> Vertex 2 EX (6325 IOPS)
> Vertex 2 Pro (3252 IOPS)
> DDRdrive X1 (38701 IOPS)
>
> Using the lowest online price for both the Vertex 2 EX and Vertex 2 Pro,
> and the full list price (SRP) of the DDRdrive X1.
>
> IOPS/Dollar($):
> Vertex 2 EX (6325 IOPS / $870)  =  7.27
> Vertex 2 Pro (3252 IOPS / $399)  =  8.15
> DDRdrive X1 (38701 IOPS / $1,995)  =  19.40
>
> Best regards,
>



Why would you disable TRIM on an SSD benchmark?  I can't imagine anyone
intentionally crippling their drive in the real world.  Furthermore, I don't
think "1 hour sustained" is a very accurate benchmark.  Most workloads are
bursty in nature.  If you're doing sustained high-IOPS workloads like that,
the back-end is going to fall over and die long before the hour time-limit.
Your 38k IOPS would need nearly 500 drives to sustain that workload with any
kind of decent latency.  If you've got 500 drives, you're going to want a
hell of a lot more ZIL space than the DDRdrive currently provides.
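[The drive-count claim checks out as rough arithmetic; a sketch, assuming ~80 random write IOPS per 7200rpm spindle (a common rule of thumb, not a measured figure):]

```python
# Back-of-envelope check of the "nearly 500 drives" figure above.
zil_iops = 38701        # DDRdrive X1 sustained 4KB write IOPS (from the deck)
iops_per_spindle = 80   # assumed random write IOPS per 7200rpm SATA disk

drives_needed = zil_iops / iops_per_spindle
print(f"~{drives_needed:.0f} drives")  # ~484 drives
```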

I'm all for benchmarks, but try doing something a bit more realistic.

--Tim


Re: [zfs-discuss] Ext. UPS-backed SATA SSD ZIL?

2010-11-27 Thread Christopher George
> I haven't had a chance to test a Vertex 2 PRO against my 2 EX, and I'd 
> be interested if anyone else has.

I recently presented at the OpenStorage Summit 2010 and compared
exactly the three devices you mention in your post (Vertex 2 EX,
Vertex 2 Pro, and the DDRdrive X1) as ZIL Accelerators.

Jump to slide 37 for the write IOPS benchmarks:

http://www.ddrdrive.com/zil_accelerator.pdf

> and you *really* want to make sure you get  the 4k alignment right

Excellent point; starting on slide 66, the performance impact of partition 
misalignment is illustrated.  Considering the results, longevity might be
an even greater concern than decreased IOPS performance, as ZIL
acceleration is a worst-case scenario for a Flash-based SSD.
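[For readers checking their own setup, alignment is easy to verify from a partition's starting LBA; a minimal sketch (the helper name and example sectors are illustrative, not from the slides):]

```python
# Hypothetical helper: check whether a partition start is 4 KiB-aligned.
# start_lba is the partition's first sector, counted in 512-byte sectors.
def is_4k_aligned(start_lba, sector_size=512):
    return (start_lba * sector_size) % 4096 == 0

print(is_4k_aligned(63))    # False - classic DOS-era start sector (misaligned)
print(is_4k_aligned(2048))  # True  - 1 MiB offset (aligned)
```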

> The DDRdrive is still the way to go for the ultimate ZIL acceleration, 
> but it's pricey as hell.

In addition to product cost, I believe IOPS/$ is a relevant point of comparison.

Google products gives the price range for the OCZ 50GB SSDs:
Vertex 2 EX (OCZSSD2-2VTXEX50G: $870 - $1,011 USD)
Vertex 2 Pro (OCZSSD2-2VTXP50G:  $399 - $525 USD)

4KB Sustained and Aligned Mixed Write IOPS results (See pdf above):
Vertex 2 EX (6325 IOPS)
Vertex 2 Pro (3252 IOPS)
DDRdrive X1 (38701 IOPS)

Using the lowest online price for both the Vertex 2 EX and Vertex 2 Pro,
and the full list price (SRP) of the DDRdrive X1.

IOPS/Dollar($):
Vertex 2 EX (6325 IOPS / $870)  =  7.27
Vertex 2 Pro (3252 IOPS / $399)  =  8.15
DDRdrive X1 (38701 IOPS / $1,995)  =  19.40
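[The IOPS/$ ratios above can be reproduced directly from the quoted numbers:]

```python
# Reproducing the IOPS-per-dollar figures quoted above.
devices = {
    "Vertex 2 EX":  (6325, 870),    # (sustained 4KB write IOPS, price in USD)
    "Vertex 2 Pro": (3252, 399),
    "DDRdrive X1":  (38701, 1995),
}
for name, (iops, price) in devices.items():
    print(f"{name}: {iops / price:.2f} IOPS/$")
# Vertex 2 EX: 7.27, Vertex 2 Pro: 8.15, DDRdrive X1: 19.40
```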

Best regards,

Christopher George
Founder/CTO
www.ddrdrive.com


Re: [zfs-discuss] OCZ RevoDrive ZFS support

2010-11-27 Thread Orvar Korvar
A noob question:

These drives that people talk about, can you use them as a system disc too? 
Install Solaris 11 Express on them? Or can you only use them as a L2ARC or Zil?


Re: [zfs-discuss] OCZ RevoDrive ZFS support

2010-11-27 Thread Edward Ned Harvey
> > > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> >
> > In fact, I recently got one of these Samsung drives...
> > http://tinyurl.com/38s3ac3
> > The spec sheet says sequential read 220MB/s, sequential write 120MB/s...
> > Which is 2-4 times faster than the best SATA disk out there...  And of
> > course, negligible seek time and latency ...
> >
> > But in practice, I find that drive is no faster than my cheap 500G sata
> > disk.  Or maybe just barely faster.  Not much.
> >
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> 
> What kind of testing did you do on the Samsung SSD?

Nothing official yet.  Although I plan to run a benchmark on it sometime, I
haven't got the cycles available for now.  I am using these as the OS drive
in some laptops, and I expected it to make the laptop faster.  Well, maybe
it did, but it's pretty negligible.



Re: [zfs-discuss] Ext. UPS-backed SATA SSD ZIL?

2010-11-27 Thread Erik Trimble

On 11/26/2010 1:11 PM, Krunal Desai wrote:

What about powering the X25-E by an external power source, one that is also 
solid-state and backed by a UPS?  In my experience, smaller power supplies tend 
to be much more reliable than typical ATX supplies.

I don't think the different PSU would be an issue; the supply you've linked 
doesn't seem to care about linking grounds together.


or even more reliable would be a PicoPSU w/ a hack to make sure that the power 
is always on.

Has anyone tried something like this?  Powering ZILs using a second, more 
reliable PSU?  Thoughts?

I hacked up a PicoPSU for robotics use (running off +24V and providing +5/+3.3); your 
"always-on" should be as easy as shorting the green-black wires (short Pin 14 
to ground) with a little solder jumper.

But wouldn't you need some type of reset trigger for when the system is reset? 
Or is that performed by the SATA controller?


Frankly, adding something to the controller card (and that's where 
you'd have to put it, since just providing UPS power to the SSD wouldn't 
be sufficient) is going to be a nightmare, and I suspect it would 
ultimately create more unreliability and failure than it solves.



I've gone to using an OCZ Vertex 2 EX, which has a supercapacitor 
on-board to enable full consistency in case of a power outage.


OCZSSD2-2VTXEX50G

It's not cheap ($800 / 50G), and you *really* want to make sure you get 
the 4k alignment right, but I haven't had any real problems with it.





I haven't had a chance to test a Vertex 2 PRO against my 2 EX, and I'd 
be interested if anyone else has.  The EX is SLC-based, and the PRO is 
MLC-based, but the claimed performance numbers are similar.  If the PRO 
works well, it's less than half the cost, and would be a nice solution 
for most users who don't need ultra-super-performance from their ZIL.   
The DDRdrive is still the way to go for the ultimate ZIL acceleration, 
but it's pricey as hell.



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
