Gabriele, I don't really understand the architecture from the press release, but here are my thoughts (as a simple ZFS/OpenIndiana system admin).
The data sheet states three typical needs:

* OLTP on Oracle(r) databases
* OLTP on Microsoft(r) SQL Server(r) databases running on VMware(r)
* VDI on VMware

http://www.dell.com/spredir.ashx/shared-content%7Edata-sheets%7Een/documents%7Edell-fluid-cache-for-san-claims.pdf

For the first one, if you have an Oracle database, it would be difficult not to choose an Oracle storage appliance. For the other two, Nexenta has a product for VMware called Nexenta VSA; my initial impression is that it is closer to, and better integrated with, the VMware server, and hence would probably perform better. Fitting Dell's solution to their IOPS problems into ZFS, which probably has different problems and a different design from the ground up, is probably wrong.

John

From: Gabriele Bulfon [mailto:discuss@lists.illumos.org]
Sent: 16 April 2014 11:44
To: discuss@lists.illumos.org
Subject: [discuss] Dell fluid cache for SAN

Hi, I'd like to know your opinion on this new Dell product:

http://www.dell.com/learn/us/en/555/campaigns/dell-fluid-cache-for-san?c=us&l=en&s=biz

Is there any possibility to reproduce it as a ZFS-based solution? Here are a couple of posts by one of the engineers who worked on it:

"I doubt ZFS can share in-server PCIe SSDs over the network using RDMA (RoCE / IB) for fault-tolerant write-back cache on 8 servers simultaneously."

"Every server has local PCIe SSDs used for caching. Dirty blocks are replicated to other nodes during write-back caching until the dirty data can be flushed to the backing store. We provide virtual block devices locally (our VMware solution uses iSCSI / iSER to expose the virtual block devices to guest VMs; our bare-metal solution just creates local block devices -- no iSCSI involved). Cache blocks are distributed among the nodes using a round-robin-like algorithm. No multicast involved. Writes to the backing store only happen on a single cache server -- whichever cache server owns the block does the write. Block clients read/write blocks directly to whichever cache server owns the block -- locally or remotely via RDMA over Ethernet (RoCE). Cache servers read/write to the backing store via multi-initiator iSCSI or FC SAN."

Gabriele Bulfon - Sonicle S.r.l.
Tel +39 028246016 Int. 30 - Fax +39 028243880
via Santa Maria Valle 3 - 20123 - Milano - Italy
http://www.sonicle.com
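For anyone trying to follow the engineer's description above, here is a minimal, purely illustrative Python sketch of that ownership scheme. The class and method names are invented, the modulo placement is only a stand-in for whatever "round-robin-like" algorithm Dell actually uses, and the real RDMA transport, failure handling, and cache eviction are all omitted:

# Minimal sketch (not Dell's actual code) of the scheme described above:
# cache blocks are spread across nodes round-robin style, every read/write
# is routed to the block's owner, and dirty blocks are mirrored to one
# other node until they are flushed to the backing store.

class CacheNode:
    def __init__(self, name):
        self.name = name
        self.blocks = {}    # block_id -> data owned by this node
        self.replicas = {}  # block_id -> data mirrored from a peer
        self.dirty = set()  # block_ids not yet flushed to the SAN

class FluidCacheSketch:
    def __init__(self, node_names, backing_store):
        self.nodes = [CacheNode(n) for n in node_names]
        self.backing_store = backing_store  # dict standing in for the SAN

    def owner(self, block_id):
        # "Round-robin-like" placement: block N belongs to node N mod count.
        return self.nodes[block_id % len(self.nodes)]

    def replica_target(self, block_id):
        # Mirror dirty data to the next node so a single node failure
        # cannot lose unflushed writes.
        return self.nodes[(block_id + 1) % len(self.nodes)]

    def write(self, block_id, data):
        # Clients send the write to the owner (locally or via RDMA);
        # the owner replicates it before acknowledging (write-back).
        node = self.owner(block_id)
        node.blocks[block_id] = data
        node.dirty.add(block_id)
        self.replica_target(block_id).replicas[block_id] = data

    def read(self, block_id):
        node = self.owner(block_id)
        if block_id in node.blocks:          # cache hit on the owner
            return node.blocks[block_id]
        data = self.backing_store[block_id]  # miss: fetch from the SAN
        node.blocks[block_id] = data
        return data

    def flush(self):
        # Only the owning node ever writes a block to the backing store.
        for node in self.nodes:
            for block_id in list(node.dirty):
                self.backing_store[block_id] = node.blocks[block_id]
                node.dirty.discard(block_id)
                # Replica is no longer needed once the block is clean.
                self.replica_target(block_id).replicas.pop(block_id, None)

cache = FluidCacheSketch(["node-a", "node-b", "node-c"], backing_store={})
cache.write(7, b"hello")         # owned by node-b, mirrored to node-c
assert cache.read(7) == b"hello" # served by the owner, no SAN access
cache.flush()                    # node-b alone writes block 7 to the SAN

Note how this keeps the two properties the engineer emphasizes: no multicast (each block has exactly one owner that clients talk to directly), and backing-store writes happen on a single cache server per block.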