Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-07 Thread Christopher George
we target (enterprise customers). The beauty of ZFS is the flexibility of its implementation. By supporting multiple log device types and configurations, it ultimately enables a broad range of performance capabilities! Best regards, Chris -- C

Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-06 Thread Christopher George
ing more than to continue to design and offer our unique ZIL accelerators as an alternative to Flash only SSDs and hopefully help (in some small way) the success of ZFS. Thanks again for taking the time to share your thoughts! The drive for speed, Chris ---- Christopher Geor

Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-06 Thread Christopher George
ta" to mean the SSD's internal meta data... I'm curious, any other interpretations? Thanks, Chris -------- Christopher George cgeorge at ddrdrive.com http://www.ddrdrive.com/ ___ zfs-discuss mailing list zf

Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-06 Thread Christopher George
? Yes! Customers using Illumos-derived distros make up a good portion of our customer base. Thanks, Christopher George www.ddrdrive.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-06 Thread Christopher George
rotection" at: http://www.intel.com/content/www/us/en/solid-state-drives/ssd-320-series-power-loss-data-protection-brief.html Intel's brief also clears up a prior controversy of what types of data are actually cached, per the brief it's both user and system data! Best regards, Christophe

Re: [zfs-discuss] Separate Log Devices

2011-06-07 Thread Christopher George
slice instead of the entire device will automatically disable the on-board write cache. Christopher George Founder / CTO http://www.ddrdrive.com/ -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://m

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-18 Thread George Wilson
Don, Try setting the zfs_scrub_delay to 1 but increase the zfs_top_maxinflight to something like 64. Thanks, George On Wed, May 18, 2011 at 5:48 PM, Donald Stahl wrote: > Wow- so a bit of an update: > > With the default scrub delay: > echo "zfs_scrub_delay/K" | mdb
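A minimal sketch of the live tuning suggested above, assuming both tunables are 32-bit ints as in builds of that era (0t marks decimal notation in mdb; the change does not survive a reboot):

  # echo "zfs_scrub_delay/W 0t1" | mdb -kw
  # echo "zfs_top_maxinflight/W 0t64" | mdb -kw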

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-17 Thread George Wilson
as to spare the absent > whiteboard ;) No. Imagine if you started allocations on a disk and used the metaslabs that are at the edge of the disk and some a 1/3 of the way in. Then you want all the metaslabs which are a 1/3 of the way in and lower to get the bonus. This keeps the allocations tow

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-17 Thread George Wilson
4) In one internet post I've seen suggestions about this > value to be set as well: > set zfs:metaslab_smo_bonus_pct = 0xc8 > > http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg40765.html This is used to add more weight (i.e. preference) to specific metaslabs. A metasla
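A sketch of how one might inspect the current bonus percentage before applying the /etc/system line quoted above (0xc8 is 200; this assumes the symbol name matches the running kernel):

  # echo "metaslab_smo_bonus_pct/D" | mdb -k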

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-17 Thread George Wilson
's or processor load - > so I'm wondering what else I might be missing. Scrub will impact performance although I wouldn't expect a 60% drop. Do you mind sharing more data on this? I would like to see the spa_scrub_* values I sent you earlier while you're running your test (in a loop s

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-16 Thread George Wilson
system so you may want to make this change during off-peak hours. Then check your performance and see if it makes a difference. - George On Mon, May 16, 2011 at 10:58 AM, Donald Stahl wrote: > Here is another example of the performance problems I am seeing: > > ~# dd if=/dev/zero of=/p
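The kind of quick sequential-write check being run in this thread looks roughly like the following; the file name and size are illustrative, not from the original mail (pool0 is the pool named in the thread):

  # dd if=/dev/zero of=/pool0/ddtest bs=1024k count=10000
  # zpool iostat pool0 1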

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-16 Thread George Wilson
Can you check that you didn't mistype this? Thanks, George On Mon, May 16, 2011 at 7:41 AM, Donald Stahl wrote: >> Can you share your 'zpool status' output for both pools? > Faster, smaller server: > ~# zpool status pool0 >  pool: pool0 >  state: ONLINE >  sc

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-15 Thread George Wilson
Can you share your 'zpool status' output for both pools? Also you may want to run the following a few times in a loop and provide the output: # echo "::walk spa | ::print spa_t spa_name spa_last_io spa_scrub_inflight" | mdb -k Thanks, George On Sat, May 14, 2011 at 8:29 AM,
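"A few times in a loop" could be as simple as the one-liner below; the 5-second interval is an arbitrary choice, not from the original mail:

  # while :; do echo "::walk spa | ::print spa_t spa_name spa_last_io spa_scrub_inflight" | mdb -k; sleep 5; done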

Re: [zfs-discuss] Repairing Faulted ZFS pool when zbd doesn't recognize the pool as existing

2011-02-06 Thread George Wilson
Chris, I might be able to help you recover the pool but will need access to your system. If you think this is possible just ping me off list and let me know. Thanks, George On Sun, Feb 6, 2011 at 4:56 PM, Chris Forgeron wrote: > Hello all, > > Long time reader, first ti

Re: [zfs-discuss] ZFS and TRIM

2011-02-04 Thread Christopher George
is immune to TRIM support status and thus unaffected. Actually, TRIM support would only add unnecessary overhead to the DDRdrive X1's device driver. Best regards, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from o

Re: [zfs-discuss] zpool-poolname has 99 threads

2011-01-31 Thread George Wilson
k that has always been there. Now you can monitor how much CPU is being used by the underlying ZFS I/O subsystem. If you're seeing a specific performance problem feel free to provide more details about the issue. - George On Mon, Jan 31, 2011 at 4:54 PM, Gary Mills wrote: > After an upgrad

Re: [zfs-discuss] Lower latency ZIL Option?: SSD behind Controller BB Write Cache

2011-01-26 Thread Christopher George
SATA cable, see slides 15-17. Best regards, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] ZFS/NFS benchmarking - is this normal?

2011-01-11 Thread Christopher George
ing.com/Home/scripts-and-programs-1/zilstat Best regards, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Looking for 3.5" SSD for ZIL

2010-12-23 Thread Christopher George
aster, > assuming that cache disabled on a rotating drive is roughly 100 > IOPS with queueing), that it'll still provide a huge performance boost > when used as a ZIL in their system. I agree 100%. I never intended to insinuate otherwise :-) Best regards, Christopher George Fou

Re: [zfs-discuss] Looking for 3.5" SSD for ZIL

2010-12-23 Thread Christopher George
Above excerpts were written by an OCZ-employed thread moderator (Tony). Best regards, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Looking for 3.5" SSD for ZIL

2010-12-22 Thread Christopher George
g in a larger context: http://www.oug.org/files/presentations/zfszilsynchronicity.pdf Best regards, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensol

Re: [zfs-discuss] Looking for 3.5" SSD for ZIL

2010-12-22 Thread Christopher George
ng to perform a Secure Erase every hour, day, or even week really be the most cost-effective use of an administrator's time? Best regards, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___ zfs-discuss mailin

Re: [zfs-discuss] Looking for 3.5" SSD for ZIL

2010-12-22 Thread Christopher George
y" than sync=disabled. Best regards, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Looking for 3.5" SSD for ZIL

2010-12-22 Thread Christopher George
> got it attached to a UPS with very conservative shut-down timing. Or > are there other host failures aside from power a ZIL would be > vulnerable too (system hard-locks?)? Correct, a system hard-lock is another example... Best regards, Christopher George Founder/CTO www.ddrdrive.com

Re: [zfs-discuss] Looking for 3.5" SSD for ZIL

2010-12-22 Thread Christopher George
e. Best regards, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Looking for 3.5" SSD for ZIL

2010-12-22 Thread Christopher George
are valid, the resulting degradation will vary depending on the controller used. Best regards, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.o

Re: [zfs-discuss] Ext. UPS-backed SATA SSD ZIL?

2010-11-27 Thread Christopher George
he size of the resultant binaries? Thanks, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Ext. UPS-backed SATA SSD ZIL?

2010-11-27 Thread Christopher George
is drive inactivity has no effect on the eventual outcome. So with either a bursty or sustained workload, the end result is always the same: dramatic write IOPS degradation after unpackaging or secure erase of the tested Flash-based SSDs. Best regards, Christopher George Founder/CTO www.

Re: [zfs-discuss] Ext. UPS-backed SATA SSD ZIL?

2010-11-27 Thread Christopher George
> TRIM was putback in July... You're telling me it didn't make it into S11 > Express? Without top level ZFS TRIM support, SATA Framework (sata.c) support has no bearing on this discussion. Best regards, Christopher George Founder/CTO www.ddrdrive.com -- This mes

Re: [zfs-discuss] Ext. UPS-backed SATA SSD ZIL?

2010-11-27 Thread Christopher George
the hour time-limit. The reason the graphs are done in a timeline fashion is so you can look at any point in the 1-hour series to see how each device performs. Best regards, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from o

Re: [zfs-discuss] Ext. UPS-backed SATA SSD ZIL?

2010-11-27 Thread Christopher George
IOPS / $1,995) = 19.40 Best regards, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] OCZ RevoDrive ZFS support

2010-11-25 Thread Christopher George
1 Express! Best regards, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Any opinoins on these SSD's?

2010-11-11 Thread Christopher George
> Any opinions? stories? other models I missed? I was a speaker at the recent OpenStorage Summit, my presentation "ZIL Accelerator: DRAM or Flash?" might be of interest: http://www.ddrdrive.com/zil_accelerator.pdf Best regards, Christopher George Founder/CTO www.ddrdrive.com --

Re: [zfs-discuss] Question about (delayed) block freeing

2010-10-29 Thread George Wilson
This value is hard-coded in. - George On Fri, Oct 29, 2010 at 9:58 AM, David Magda wrote: > On Fri, October 29, 2010 10:00, Eric Schrock wrote: > > > > On Oct 29, 2010, at 9:21 AM, Jesus Cea wrote: > > > >> When a file is deleted, its block are freed, and that

Re: [zfs-discuss] Recovering from corrupt ZIL

2010-10-24 Thread George Wilson
The guid is stored on the mirrored pair of the log and in the pool config. If your log device was not mirrored then you can only find it in the pool config. - George On Sun, Oct 24, 2010 at 9:34 AM, David Ehrmann wrote: > How does ZFS detect that there's a log device attached

Re: [zfs-discuss] Recovering from corrupt ZIL

2010-10-23 Thread George Wilson
If your pool is on version > 19 then you should be able to import a pool with a missing log device by using the '-m' option to 'zpool import'. - George On Sat, Oct 23, 2010 at 10:03 PM, David Ehrmann wrote: > > > From: zfs-discuss-boun...@opensolaris.org >
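A concrete sketch of that recovery path; the pool name here is illustrative:

  # zpool import -m tank      (import even though the log device is missing)
  # zpool status tank         (the missing log is then listed so it can be removed or replaced)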

Re: [zfs-discuss] Bursty writes - why?

2010-10-12 Thread Christopher George
nt (or aggregate) write pattern trends to random. Over 50% random with a pool containing just 5 filesystems. This makes intuitive sense knowing each filesystem has its own ZIL and they all share the dedicated log (ZIL Accelerator). Best regards, Christopher George Founder/CTO www.d

Re: [zfs-discuss] Is there any way to stop a resilver?

2010-09-29 Thread George Wilson
Can you post the output of 'zpool status'? Thanks, George LIC mesh wrote: Most likely an iSCSI timeout, but that was before my time here. Since then, there have been various individual drives lost along the way on the shelves, but never a whole LUN, so, theoretically, /except/

Re: [zfs-discuss] Resilver endlessly restarting at completion

2010-09-29 Thread George Wilson
ll the work over again. Are drives still failing randomly for you? 3. Can i force remove c9d1 as it is no longer needed but c11t3 can be resilvered instead? You can detach the spare and let the resilver work on only c11t3. Can you send me t

Re: [zfs-discuss] resilver that never finishes

2010-09-18 Thread George Wilson
tp://mail.opensolaris.org/mailman/listinfo/zfs-discuss It sounds like you're hitting '6891824 7410 NAS head "continually resilvering" following HDD replacement'. If you stop taking and destroying snapshots you should see the resilver finish. Thanks, George _

Re: [zfs-discuss] Hang on zpool import (dedup related)

2010-09-12 Thread George Wilson
be past the import phase and into the mounting phase. What I would recommend is that you 'zpool import -N zp' so that none of the datasets get mounted and only the import happens. Then one by one you can mount the datasets in order (starting with 'zp') so you can find out wh
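Sketched out with the pool name from this thread and an illustrative child dataset name:

  # zpool import -N zp        (import only; no datasets are mounted)
  # zfs mount zp              (mount the top-level dataset first)
  # zfs mount zp/data         (then each child in turn until the problem dataset is found)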

Re: [zfs-discuss] what is zfs doing during a log resilver?

2010-09-06 Thread George Wilson
remove the log device and then re-add it to the pool as a mirrored log device. - George ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] 4k block alignment question (X-25E)

2010-08-31 Thread Christopher George
en though we delineate the storage media used depending on host power condition. The X1 exclusively uses DRAM for all IO processing (host is on) and then Flash for permanent non-volatility (host is off). Thanks, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from op

Re: [zfs-discuss] 4k block alignment question (X-25E)

2010-08-30 Thread Christopher George
SSD does *not* suffer the same fate, as its performance is not bound by, nor does it vary with, partition (mis)alignment. Best regards, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-di

Re: [zfs-discuss] ZFS offline ZIL corruption not detected

2010-08-27 Thread George Wilson
Bob Friesenhahn wrote: On Thu, 26 Aug 2010, George Wilson wrote: What gets "scrubbed" in the slog? The slog contains transient data which exists for only seconds at a time. The slog is quite likely to be empty at any given point in time. Bob Yes, the typical ZIL block never

Re: [zfs-discuss] ZFS offline ZIL corruption not detected

2010-08-26 Thread George Wilson
g from the data portion of an empty device wouldn't really show us much as we're going to be reading a bunch of non-checksummed data. The best we can do is to "probe" the device's label region to determine its health. This

Re: [zfs-discuss] ZFS offline ZIL corruption not detected

2010-08-26 Thread George Wilson
oss is not possible should it fail. - George ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] ZFS offline ZIL corruption not detected

2010-08-26 Thread George Wilson
(one device failed, and the other device is good) ... Do you read the data from *both* sides of the mirror, in order to discover the corrupted log device, and correctly move forward without data loss? Yes, we read all sides of the mirror when we claim (i.e. read) the log blocks for a log device. Th

Re: [zfs-discuss] How do I Import rpool to an alternate location?

2010-08-16 Thread George Wilson
Robert Hartzell wrote: On 08/16/10 07:47 PM, George Wilson wrote: The root filesystem on the root pool is set to 'canmount=noauto' so you need to manually mount it first using 'zfs mount '. Then run 'zfs mount -a'. - George mounting the dataset failed because

Re: [zfs-discuss] How do I Import rpool to an alternate location?

2010-08-16 Thread George Wilson
The root filesystem on the root pool is set to 'canmount=noauto' so you need to manually mount it first using 'zfs mount '. Then run 'zfs mount -a'. - George On 08/16/10 07:30 PM, Robert Hartzell wrote: I have a disk which is 1/2 of a boot disk mirror from a fa
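Put together, importing an rpool at an alternate root looks roughly like the following; the altroot and the boot-environment dataset name are illustrative, not from the original mail:

  # zpool import -R /a rpool
  # zfs mount rpool/ROOT/snv_134    (the root filesystem is canmount=noauto, so mount it by hand)
  # zfs mount -a                    (then mount the remaining datasets)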

Re: [zfs-discuss] Best usage of SSD-disk in ZFS system

2010-08-09 Thread Christopher George
are a ZIL accelerator well matched to the 24/7 demands of enterprise use. Thanks, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.open

Re: [zfs-discuss] problem with zpool import - zil and cache drive are not displayed?

2010-08-03 Thread George Wilson
Darren, It looks like you've lost your log device. The newly integrated missing log support will help once it's available. In the meantime, you should run 'zdb -l' on your log device to make sure the label is still intact. Thanks, George Darren Taylor wrote: I'm a
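The label check mentioned above is run against the raw log device; the device path here is illustrative:

  # zdb -l /dev/rdsk/c1t2d0s0       (prints the ZFS labels if they are still intact)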

Re: [zfs-discuss] Fwd: zpool import despite missing log [PSARC/2010/292Self Review]

2010-07-30 Thread George Wilson
current version 22 (snv_129)? Dmitry, I can't comment on when this will be available but I can tell you that it will work with version 22. This requires that you have a pool that is running a minimum of version 19. Thanks, George [r...@storage ~]# zpool import pool: tank

Re: [zfs-discuss] How does zil work

2010-07-21 Thread Christopher George
Here is another very recent blog post from ConstantThinking: http://constantin.glez.de/blog/2010/07/solaris-zfs-synchronous-writes-and-zil-explained Very well done, a highly recommended read. Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org

Re: [zfs-discuss] zfs hangs with B141 when filebench runs

2010-07-15 Thread George Wilson
I don't recall seeing this issue before. Best thing to do is file a bug and include a pointer to the crash dump. - George zhihui Chen wrote: Looks that the txg_sync_thread for this pool has been blocked and never return, which leads to many other threads have been blocked. I have tri

[zfs-discuss] spreading data after adding devices to pool

2010-07-09 Thread George Helyar
I use ZFS (on FreeBSD) for my home NAS. I started on 4 drives then added 4 and have now added another 4, bringing the total up to 12 drives on 3 raidzs in 1 pool. I was just wondering if there was any advantage or disadvantage to spreading the data across the 3 raidz, as two are currently full

Re: [zfs-discuss] Kernel Panic on zpool clean

2010-07-08 Thread George
corrupted metadata. This seems to be caused by the functions print_import_config and print_status_config having slightly different case statements and not a difference in the pool itself. Hopefully I'll be able to complete the reinstall soon and see if that fixe

Re: [zfs-discuss] Kernel Panic on zpool clean

2010-07-03 Thread George
> Because of that I'm thinking that I should try > to change the hostid when booted from the CD to be > the same as the previously installed system to see if > that helps - unless that's likely to confuse it at > all...? I've now tried changing the hostid using the code from http://forums.sun.com

Re: [zfs-discuss] Kernel Panic on zpool clean

2010-07-02 Thread George
> I think I'll try booting from a b134 Live CD and see > that will let me fix things. Sadly it appears not - at least not straight away. Running "zpool import" now gives pool: storage2 id: 14701046672203578408 state: FAULTED status: The pool was last accessed by another system. action: Th

Re: [zfs-discuss] Kernel Panic on zpool clean

2010-06-30 Thread George
Aha: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6794136 I think I'll try booting from a b134 Live CD and see if that will let me fix things. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.

Re: [zfs-discuss] Kernel Panic on zpool clean

2010-06-30 Thread George
ge about being unable to find the device (output attached). George -- This message posted from opensolaris.org r...@crypt:~# zdb -C storage2 version=14 name='storage2' state=0 txg=1807366 pool_guid=14701046672203578408 hostid=8522651 hostname='crypt'

Re: [zfs-discuss] Kernel Panic on zpool clean

2010-06-29 Thread George
> I suggest you to try running 'zdb -bcsv storage2' and > show the result. r...@crypt:/tmp# zdb -bcsv storage2 zdb: can't open storage2: No such device or address then I tried r...@crypt:/tmp# zdb -ebcsv storage2 zdb: can't open storage2: File exists George --
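For a pool that is exported or cannot be opened through the cache file, zdb can be pointed at the on-disk labels directly; a sketch, with the device directory an assumption rather than something from the thread:

  # zdb -e -p /dev/dsk -bcsv storage2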

Re: [zfs-discuss] Kernel Panic on zpool clean

2010-06-29 Thread George
Another related question - I have a second enclosure with blank disks which I would like to use to take a copy of the existing zpool as a precaution before attempting any fixes. The disks in this enclosure are larger than those in the one with the problem. What would be the best way to do this

Re: [zfs-discuss] Kernel Panic on zpool clean

2010-06-28 Thread George
I've attached the output of those commands. The machine is a v20z if that makes any difference. Thanks, George -- This message posted from opensolaris.org mdb: logging to "debug.txt" > ::status debugging crash dump vmcore.0 (64-bit) from crypt operating system: 5.11 snv_

[zfs-discuss] Kernel Panic on zpool clean

2010-06-28 Thread George
Hi, I have a machine running 2009.06 with 8 SATA drives in a SCSI-connected enclosure. I had a drive fail and accidentally replaced the wrong one, which unsurprisingly caused the rebuild to fail. The status of the zpool then ended up as: pool: storage2 state: FAULTED status: An intent log reco

Re: [zfs-discuss] High-Performance ZFS (2000MB/s+)

2010-06-16 Thread Christopher George
. The same principles and benefits of multi-core processing apply here with multiple controllers. The performance potential of NVRAM based SSDs dictates moving away from a single/separate HBA based controller. Best regards, Christopher George Founder/CTO www.ddrdrive.com -- This message

Re: [zfs-discuss] SSDs adequate ZIL devices?

2010-06-15 Thread Christopher George
as not power protecting on-board volatile caches. The X25-E does implement the ATA FLUSH CACHE command, but it does not have the required power protection to avoid transaction (data) loss. Best regards, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from op

Re: [zfs-discuss] zfs-discuss Digest, Vol 56, Issue 78

2010-06-15 Thread Nikos George
/Nikos On Jun 15, 2010, at 4:04 PM, zfs-discuss-requ...@opensolaris.org wrote: Send zfs-discuss mailing list submissions to zfs-discuss@opensolaris.org To subscribe or unsubscribe via the World Wide Web, visit http://mail.opensolaris.org/mailman/listinfo/zfs-discuss or, via email, send

Re: [zfs-discuss] Scrub issues

2010-06-14 Thread George Wilson
zfs_vdev_max_pending defaults to 10 which helps. You can tune it lower as described in the Evil Tuning Guide. Also, as Robert pointed out, CR 6494473 offers a more resource-management-friendly way to limit scrub traffic (b143). Everyone can buy George a beer for implementing this change :-)
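Lowering the per-vdev queue depth described above, either live or persistently; the value 4 is only an example, and the live form assumes the tunable is a 32-bit int as in builds of that era:

  # echo "zfs_vdev_max_pending/W 0t4" | mdb -kw     (live change, not persistent)
  set zfs:zfs_vdev_max_pending = 4                  (/etc/system form, applied at boot)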

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-06-07 Thread Christopher George
SSDs that fully comply with the POSIX requirements for synchronous write transactions and do not lose transactions on a host power failure, we are competitively priced at $1,995 SRP. Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-06-07 Thread Christopher George
> No Slogs as I haven't seen a compliant SSD drive yet. As the architect of the DDRdrive X1, I can state categorically the X1 correctly implements the SCSI Synchronize Cache (flush cache) command. Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensol

Re: [zfs-discuss] dedup status

2010-05-20 Thread George Wilson
t practice to size your system accordingly such that the dedup table can stay resident in the ARC or L2ARC. - George ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Christopher George
to/from removable media. Thanks, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Christopher George
market. Thanks, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Christopher George
and an overwhelming attention to detail. Thanks, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Christopher George
. Thanks, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] recomend sata controller 4 Home server with zfs raidz2 and 8x1tb hd

2010-04-15 Thread george
Hi all, I'm brand new to OpenSolaris ... feel free to call me a noob :) I need to build a home server for media and general storage. ZFS sounds like the perfect solution, but I need to buy an 8-port (or more) SATA controller. Any suggestions for OpenSolaris-compatible products will be really appreciated

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-03 Thread Christopher George
ers well. We are actively designing our soon to be available support plans. Your voice will be heard, please email directly at for requests, comments and/or questions. Thanks, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___

Re: [zfs-discuss] sharing a ssd between rpool and l2arc

2010-03-30 Thread George Puchalski
http://fixunix.com/solaris-rss/570361-make-most-your-ssd-zfs.html I think this is what you are looking for. GParted FTW. Cheers, _GP_ -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.op

Re: [zfs-discuss] New ZFS Intent Log (ZIL) device available - Beta program now open!

2010-01-19 Thread Christopher George
rs (non-clustered) an additional option. Thanks, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] New ZFS Intent Log (ZIL) device available - Beta program now open!

2010-01-14 Thread Christopher George
> Personally I'd say it's a must. Most DC's I operate in wouldn't tolerate > having a card separately wired from the chassis power. May I ask the list, if this is a hard requirement for anyone else? Please email me directly "cgeorge at ddrdrive dot com". Th

Re: [zfs-discuss] New ZFS Intent Log (ZIL) device available - Beta program now open!

2010-01-14 Thread Christopher George
t for the DC jack to be unpopulated so that an internal power source could be utilized. We will make this modification available to any customer who asks. Thanks, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___

Re: [zfs-discuss] New ZFS Intent Log (ZIL) device available - Beta program now open!

2010-01-14 Thread Christopher George
ree to disagree. I respect your point of view, and do agree strongly that Li-Ion batteries play a critical and highly valued role in many industries. Thanks, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org

Re: [zfs-discuss] New ZFS Intent Log (ZIL) device available - Beta program now open!

2010-01-14 Thread Christopher George
because it is a proven and industry standard method of enterprise class data backup. Thanks, Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://m

Re: [zfs-discuss] New ZFS Intent Log (ZIL) device available - Beta program now open!

2010-01-14 Thread Christopher George
oduct cannot be supported by any of the BBUs currently found on RAID controllers. It would require either a substantial increase in energy density or a decrease in packaging volume, both of which incur additional risks. > Interesting product though! Thanks, Christopher George Founder/CTO www

Re: [zfs-discuss] New ZFS Intent Log (ZIL) device available - Beta program now open!

2010-01-14 Thread Christopher George
r HBA's which do require a x4 or x8 PCIe connection. Very appreciative of the feedback! Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mai

Re: [zfs-discuss] New ZFS Intent Log (ZIL) device available - Beta program now open!

2010-01-13 Thread Christopher George
ovides an optional (user configured) backup/restore feature. Christopher George Founder/CTO www.ddrdrive.com -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] New ZFS Intent Log (ZIL) device available - Beta program now open!

2010-01-13 Thread Christopher George
advancement of Open Storage and explore the far-reaching potential of ZFS based Hybrid Storage Pools? If so, please send an inquiry to "zfs at ddrdrive dot com". The drive for speed, Christopher George Founder/CTO www.ddrdrive.com *** Special thanks goes out to SUN employees Garrett D'

[zfs-discuss] Heads-Up: Changes to the zpool(1m) command

2009-12-02 Thread George Wilson
FREE CAP DEDUP HEALTH ALTROOT rpool 464G 64.6G 399G 13% 1.00x ONLINE - tank 2.27T 207K 2.27T 0% 1.00x ONLINE - jaggs# zpool get allocated,free rpool NAME PROPERTY VALUE SOURCE rpool allocated 64.6G - rpool free 399G - We realize that these

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-11-30 Thread George Wilson
its entirety. Not a situation to be tolerated in production. Expect the fix for this issue this month. Thanks, George ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] moving files from one fs to another, splittin/merging

2009-11-14 Thread george white
Is there a way to use only 2 or 3 digits for the second level of the var/pkg/download cache? This directory hierarchy is particularly problematic relative to moving, copying, sending, etc. This would probably speed up lookups as well. -- This message posted from opensolaris.org __

Re: [zfs-discuss] Fwd: [ilugb] Does ZFS support Hole Punching/Discard

2009-11-10 Thread George Janczuk
I've been following the use of SSD with ZFS and HSPs for some time now, and I am working (in an architectural capacity) with one of our IT guys to set up our own ZFS HSP (using a J4200 connected to an X2270). The best practice seems to be to use an Intel X25-M for the L2ARC (Readzilla) and an I

Re: [zfs-discuss] dedupe question

2009-11-08 Thread George Wilson
- zp_dd allocated 4.22G - The dedupe ratio has climbed to 1.95x with all those unique files that are less than %recordsize% bytes. You can get more dedup information by running 'zdb -DD zp_dd'. This should show you how we break things down. Add more 'D
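As noted above, each additional 'D' raises the verbosity of the dedup report; for example (pool name from the thread):

  # zdb -DD zp_dd       (dedup statistics plus a histogram of dedup table entries)
  # zdb -DDD zp_dd      (progressively more detail)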

Re: [zfs-discuss] ZFS dedup issue

2009-11-03 Thread George Wilson
working on it. We have a fix for this and it should be available in a couple of days. - George - Eric -- Regards, Cyril -- Eric Schrock, Fishworks http://blogs.sun.com/eschrock ___ zfs-discuss mailing list zf

Re: [zfs-discuss] zpool export taking hours

2009-07-29 Thread George Wilson
0 errors: No known data errors Can you run the following command and post the output: # echo "::pgrep zpool | ::walk thread | ::findstack -v" | mdb -k Thanks, George ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.ope
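While the 'zpool export' is hung, the requested stack traces can be captured to a file for posting, e.g.:

  # echo "::pgrep zpool | ::walk thread | ::findstack -v" | mdb -k > /tmp/zpool-stacks.txt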

Re: [zfs-discuss] zpool export taking hours

2009-07-27 Thread George Wilson
: No known data errors Can you run the following command and post the output: # echo "::pgrep zpool | ::walk thread | ::findstack -v" | mdb -k Thanks, George ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/

Re: [zfs-discuss] Another user loses his pool (10TB) in this case and 40 days work

2009-07-22 Thread George Wilson
Once these bits are available in Opensolaris then users will be able to upgrade rather easily. This would allow you to take a liveCD running these bits and recover older pools. Do you currently have a pool which needs recovery? Thanks, George Alexander Skwar wrote: Hi. Good to Know! But

Re: [zfs-discuss] Another user loses his pool (10TB) in this case and 40 days work

2009-07-21 Thread George Wilson
using the following CR as the tracker for this work: 6667683 need a way to rollback to an uberblock from a previous txg Thanks, George ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Increase size of ZFS mirror

2009-06-24 Thread George Wilson
this change was so that all device expansion would be managed in the same way. I'll try to blog about this soon but for now be aware that post snv_116 the typical method of growing pools by replacing devices will require at least one additional step. Thanks, George _

Re: [zfs-discuss] Issues with slightly different sized drives in raidz pool?

2009-06-08 Thread George Wilson
umber of metaslabs. With this change you should be okay. Thanks, George ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
