On 08/07/2012 02:18 AM, Christopher George wrote:
>> I mean this as constructive criticism, not as angry bickering. I totally
>> respect you guys doing your own thing.
> 
> Thanks, I'll try my best to address your comments...

Thanks for your kind reply, though there are some points I'd like to
address, if that's okay.

>> *) Increased capacity for high-volume applications.
> 
> We do have a select number of customers striping two
> X1s for a total capacity of 8GB, but for a majority of our customers 4GB
> is perfect.  Increasing capacity
> obviously increases the cost, so we wanted the baseline
> capacity to reflect a solution to most but not every need.

Certainly, for most uses this isn't an issue. I just threw that in
there because, considering how cheap DRAM and flash are nowadays and
how easy it is to build disk pools that push >2GB/s in write
throughput, I was hoping you guys would be keeping pace with that
(getting 4GB of sync writes in the txg commit window can be tough, but
not unthinkable). In any case, by dismissing it with "simply get two",
you are effectively doubling my slog costs; considering the
recommended practice is to mirror the slog, that would mean I have to
get four X1s. That's $8k in list prices and 8 full-height PCI-e slots
wasted (seeing as an X1 is wider than a standard PCI-e card). Not many
systems can do that (that's why I said solder the DRAM and go
low-profile).
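To put rough numbers on that, here's a back-of-the-envelope sketch in
Python; the per-unit price, capacity and two-slot width are simply the
figures implied above, so treat them as assumptions for illustration:

  # Illustrative slog cost/footprint arithmetic for the scenario above.
  X1_PRICE_USD = 2000   # assumed list price per X1 (implied by "$8k for 4")
  X1_GB = 4             # capacity per X1
  SLOTS_PER_X1 = 2      # wider than a standard card, so it blocks a neighbour

  target_slog_gb = 8                      # stripe two X1s to reach 8GB of slog
  stripe_width = target_slog_gb // X1_GB  # -> 2 units striped
  units = stripe_width * 2                # mirror the stripe -> 4 units

  print("units: %d, list price: $%d, slots blocked: %d"
        % (units, units * X1_PRICE_USD, units * SLOTS_PER_X1))
  # -> units: 4, list price: $8000, slots blocked: 8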

>> *) Remove the requirement to have an external UPS (couple of
>>    supercaps? microbattery?)
> 
> Done!  We will be formally introducing an optional DDRdrive
> SuperCap PowerPack at the upcoming OpenStorage Summit.

Great! Though I suppose that will inflate the price even further (seeing
as you used the word "optional").

>> *) Use cheaper MLC flash to lower cost - it's only written to in case
>>    of a power outage, anyway so lower write cycles aren't an issue and
>>    modern MLC is almost as fast as SLC at sequential IO (within 10%
>>    usually).
> 
> We will be staying with SLC not only for performance but
> longevity/reliability.
> Check out the specifications (ie erase/program cycles and required ECC)
> for a "modern" 20 nm MLC chip and then let me know if this is where you
> *really* want to cut costs :)

MLC is so much cheaper that you can simply slap on twice as much and
use the rest for ECC, mirroring or simply overprovisioning sectors.
A common practice for extending the lifecycle of MLC is to
"short-stroke" it, i.e. use only a fraction of the capacity. E.g. a
40GB MLC unit with 5-10k cycles per cell can be turned into a 4GB unit
(with the controller providing wear leveling) with effectively 50-100k
cycles (that's SLC land) for about a hundred bucks. Also, since I'm
already mirroring it, and ZFS checksums provide integrity checking,
your argument simply doesn't hold up.
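The arithmetic behind that claim, sketched out (ballpark figures only;
this ignores write amplification and wear-leveling overhead, which
would eat into the gain somewhat):

  # Effective endurance from heavy overprovisioning ("short-stroking").
  raw_capacity_gb = 40       # cheap MLC device
  exposed_capacity_gb = 4    # the fraction actually presented to the host
  overprovision_factor = raw_capacity_gb / float(exposed_capacity_gb)  # 10x

  for rated_cycles in (5000, 10000):   # typical MLC program/erase cycles
      effective = rated_cycles * overprovision_factor
      print("%d P/E cycles -> ~%d effective cycles per exposed GB"
            % (rated_cycles, effective))
  # -> ~50000 and ~100000 effective cycles, i.e. SLC territory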

Oh and don't count on Illumos missing support for SCSI Unmap or SATA
TRIM forever. Work is underway to rectify this situation.

>> *) PCI Express 3.0 interface (perhaps even x4)
> 
> Our product is FPGA based and the PCIe capability is the biggest factor
> in determining component cost.  When we introduced the X1, the FPGA cost
> *alone* to support just PCIe Gen2 x8 was greater than the current street
> price of the DDRdrive X1.

I always had a bit of an issue with non-hotswappable storage systems.
What if an X1 slog dies? I need to power the machine down, open it up,
take out the slog, put in another one and power it back up. Since ZFS
has slog removal support, there's no reason to go for non-hotpluggable
slogs anyway. What about 6G SAS? Dual-ported, you could push around
12Gbit/s of bandwidth to/from the device, way more than the current
250MB/s, and get hotplug support in there too.
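Rough bandwidth arithmetic behind the SAS suggestion (assuming the
usual 8b/10b encoding on a 6Gbit/s link; illustrative only):

  # Dual-ported 6G SAS payload bandwidth vs. the X1's ~250MB/s interface.
  SAS_LINE_RATE_GBPS = 6.0
  ENCODING_EFFICIENCY = 8 / 10.0   # 8b/10b line encoding
  PORTS = 2                        # dual-ported

  per_port_mbs = SAS_LINE_RATE_GBPS * 1000 / 8 * ENCODING_EFFICIENCY  # ~600
  total_mbs = per_port_mbs * PORTS                                    # ~1200
  print("~%.0f MB/s per port, ~%.0f MB/s dual-ported, vs. ~250 MB/s now"
        % (per_port_mbs, total_mbs))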

>> *) At least updated benchmarks your site to compare against modern
>>    flash-based competition (not the Intel X25-E, which is seriously
>>    stone age by now...)
> 
> I completely agree we need to refresh the website, not even the photos
> are representative of our shipping product (we now offer VLP DIMMs).
> We are engineers first and foremost, but an updated website is in the
> works.
> 
> In the mean time, we have benchmarked against both the Intel 320/710
> in my OpenStorage Summit 2011 presentation which can be found at:
> 
> http://www.ddrdrive.com/zil_rw_revelation.pdf

I always had a bit of an issue with your benchmarks. First off, you're
only ever doing synthetics. They are very nice, but they don't provide
much in terms of real-world perspective. Try to compare on price, too.
Take something like a Dell R720, stick in the equivalent (in terms of
cost!) of DRAM SSDs and Flash SSDs (i.e. for one X1 you're looking at
something like four Intel 710s) and run some real workloads (database
benchmarks, virtualization benchmarks, etc.). Experiment beats theory,
every time.

>> *) Lower price, lower price, lower price.
>>    I can get 3-4 200GB OCZ Talos-Rs for $2k FFS. That means I could
>>    equip my machine with one to two mirrored slogs and nearly 800GB
>>    worth of L2ARC for the price of a single X1.
> 
> I strongly believe the benefits of a DRAM/NAND based SSD (compared to a
> Flash only based SSD) make them exceptionally cost effective for
> enterprise focused ZIL acceleration.  Sustained write IOPS are paramount
> for a dedicated log device, I detail this key fact and compare against
> OCZ SSDs (older now but also sandforce based) in a OpenStorage Summit
> 2010 presentation:
> 
> http://www.ddrdrive.com/zil_accelerator.pdf

Never underestimate the power of brute-force scaling. A single
high-performance high-quality product is nice, but ZFS has always been
more about taking a pile of rubbish and getting it to perform better
than the cherished premium products...

> I do agree cost is always critical to wider acceptance.  Know this, our
> street price is *extremely* aggressive relative to our costs of
> production for such a targeted product.  We do what we do at DDRdrive
> for a single reason, our passion for ZFS.  We want nothing more than to
> continue to design and offer our unique ZIL accelerators as an
> alternative to Flash only SSDs and hopefully help (in some small way)
> the success of ZFS.



> Thanks again for taking the time to share your thoughts!

Thank you again for answering some of my questions. I understand that
you guys are Davids taking on the Goliaths of this world (OCZ, Intel,
LSI), but at the end of the day, sympathy from the ZFS-enthusiast
community (and count me as one among them) will only get you so far. We
build systems to do real work in the real world and we have to make our
numbers work.

Cheers,
--
Saso