Re: [zfs-discuss] Sun X4200 Question...

2013-03-14 Thread Gary Driggs
On Mar 14, 2013, at 5:55 PM, Jim Klimov wrote:

> However, recently the VM "virtual hardware" clocks became way slow.

Does NTP help correct the guest's clock?
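
E.g., something like the following on the guest (the server name below is
just an example):

   # show configured peers plus the current offset and jitter:
   ntpq -p

   # step a badly drifted clock once, with ntpd stopped:
   ntpdate pool.ntp.org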
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Gary Driggs
On Feb 26, 2013, at 12:44 AM, "Sašo Kiselkov" wrote:

> I'd also recommend that you go and subscribe to z...@lists.illumos.org, since
> this list is going to get shut down by Oracle next month.


Whose description still reads, "everything ZFS running on illumos-based
distributions."

-Gary
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sonnet Tempo SSD supported?

2012-12-04 Thread Gary Driggs
On Dec 4, 2012, Eugen Leitl wrote:

> Either way I'll know the hardware support situation soon
> enough.

Have you tried contacting Sonnet?

-Gary
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] IOzone benchmarking

2012-05-01 Thread Gary Driggs
On May 1, 2012, at 1:41 AM, Ray Van Dolson wrote:

> Throughput:
>iozone -m -t 8 -T -r 128k -o -s 36G -R -b bigfile.xls
>
> IOPS:
>iozone -O -i 0 -i 1 -i 2 -e -+n -r 128K -s 288G > iops.txt

Do you expect to be reading or writing 36 or 288 GB files very often on
this array? The largest file size I've used in my own, already lengthy,
benchmarks was 16 GB. With the sizes you've proposed, the runs could
take days or weeks to complete. Try a web search for "iozone examples"
if you want more detail on the command switches; a scaled-down version
of your runs is sketched below.

-Gary
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Seagate Constellation vs. Hitachi Ultrastar

2012-04-06 Thread Gary Driggs
I've seen a couple of sources suggesting that prices should be dropping
by the end of April -- apparently not all the way back to pre-flood
levels, due in part to a rise in manufacturing costs, but about 10%
lower than they're priced today.

-Gary
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Comes to OS X Courtesy of Apple's Former Chief ZFS Architect

2012-01-31 Thread Gary Driggs
It looks like the first iteration has finally launched...

http://tenscomplement.com/our-products/zevo-silver-edition

http://www.macrumors.com/2012/01/31/zfs-comes-to-os-x-courtesy-of-apples-former-chief-zfs-architect
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any HP Servers recommendation for Openindiana (Capacity Server) ?

2012-01-03 Thread Gary Driggs
On Jan 3, 2012, at 10:36 PM, "Eric D. Mudama" wrote:

> Supposedly the H200/H700 cards are just their name for the 6gbit LSI SAS 
> cards, but I haven't tested them personally.

They might use the same chipset, but their firmware usually doesn't
support JBOD -- unless that's changed in the last couple of years. The
best you can do is try: if you don't see each drive individually,
you'll know it's by design and not a lack of skill on your part. A
couple of commands for checking are below.
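
E.g., on an illumos/OpenIndiana host (device names will vary), each
physical disk should show up on its own:

   # list the disks the OS can see (echo makes format non-interactive):
   echo | format

   # per-device details, including vendor, product, and serial number:
   iostat -En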

-Gary
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any HP Servers recommendation for Openindiana (Capacity Server) ?

2012-01-03 Thread Gary Driggs
I can't comment on their 4U servers, but the SAS controllers included
in HP's 1U and 2U boxes rarely allow JBOD discovery of drives. So I'd
recommend an LSI card and an external storage chassis like those
available from Promise and others.

-Gary
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] CPU sizing for ZFS/iSCSI/NFS server

2011-12-12 Thread Gary Driggs
On Dec 12, 2011, at 11:42 AM, "Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D." wrote:

> please check out the ZFS appliance 7120 spec 2.4Ghz /24GB memory and ZIL(SSD)

Do those appliances also use the F20 PCIe flash cards? I know the
Exadata storage cells use them, but those cells aren't running ZFS in
the Linux version of the X2-2. Has that changed with the Solaris x86
versions of the appliance? Also, does OCZ or someone else make an
equivalent to the F20 now?

-Gary
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-11 Thread Gary Driggs
What kind of drives are we talking about? Even SATA drives are
available by application type (desktop, enterprise server, home PVR,
surveillance PVR, etc.). Then there are drives with SAS and Fibre
Channel interfaces. Then you've got Winchester platters vs. SSDs vs.
hybrids. But even before considering those and all the other system
factors, throughput for direct-attached storage can vary greatly: not
only with the interface type and storage technology, but even small
differences in on-drive controller firmware can introduce variance.
That's why server manufacturers like HP, Dell, et al. prefer that you
replace failed drives with one of theirs instead of something off the
shelf -- they usually have firmware that's been fine-tuned in house or
in conjunction with the drive manufacturer.


On Dec 11, 2011, at 8:25 AM, Edward Ned Harvey wrote:

>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Nathan Kroenert
>>
>> That reminds me of something I have been wondering about... Why only 12x
>> faster? If we are effectively reading from memory - as compared to a
>> disk reading at approximately 100MB/s (which is about an average PC HDD
>> reading sequentially), I'd have thought it should be a lot faster than
> 12x.
>>
>> Can we really only pull stuff from cache at only a little over one
>> gigabyte per second if it's dedup data?
>
> Actually, cpu's and memory aren't as fast as you might think.  In a system
> with 12 disks, I've had to write my own "dd" replacement, because "dd
> if=/dev/zero bs=1024k" wasn't fast enough to keep the disks busy.  Later, I
> wanted to do something similar, using unique data, and it was simply
> impossible to generate random data fast enough.  I had to tweak my "dd"
> replacement to write serial numbers, which still wasn't fast enough, so I
> had to tweak my "dd" replacement to write a big block of static data,
> followed by a serial number, followed by another big block (always smaller
> than the disk block, so it would be treated as unique when hitting the
> pool...)
>
> 1 typical disk sustains 1Gbit/sec.  In theory, 12 should be able to sustain
> 12 Gbit/sec.  According to Nathan's email, the memory bandwidth might be 25
> Gbit, of which, you probably need to both read & write, thus making it
> effectively 12.5 Gbit...  I'm sure the actual bandwidth available varies by
> system and memory type.
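
A minimal sketch of the block-stamping trick Edward describes -- not
his actual code; the block size, block count, and output path here are
made up for illustration:

    /*
     * Fill a buffer with static data once, then overwrite only a small
     * serial-number field before each write, so every block is unique
     * to dedup but costs almost nothing to generate.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SIZE (128 * 1024)
    #define NBLOCKS    8192          /* 8192 * 128 KiB = 1 GiB total */

    int main(void)
    {
        static unsigned char buf[BLOCK_SIZE];
        memset(buf, 0xA5, sizeof buf);   /* static filler, generated once */

        FILE *out = fopen("testfile.bin", "wb");  /* or a raw device node */
        if (out == NULL) {
            perror("fopen");
            return 1;
        }

        for (uint64_t serial = 0; serial < NBLOCKS; serial++) {
            /* stamp the serial; the rest of the block is unchanged */
            memcpy(buf, &serial, sizeof serial);
            if (fwrite(buf, sizeof buf, 1, out) != 1) {
                perror("fwrite");
                return 1;
            }
        }

        fclose(out);
        return 0;
    }

With one 8-byte stamp per 128 KiB block, the per-block cost is a tiny
memcpy plus the write itself, which is why this can keep a dozen disks
busy where reading /dev/urandom cannot.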
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS forensics

2011-11-23 Thread Gary Driggs
On Nov 23, 2011, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D. wrote:

> did you see this link

Thank you for this. Some of the other refs it lists will come in handy as well.

kind regards,
Gary
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS forensics

2011-11-23 Thread Gary Driggs
Is zdb still the only way to dive into the file system? I've seen the 
extensive work by Max Bruning on this but wonder if there are any tools that 
make this easier...? A few of the usual zdb starting points are below for 
reference.
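
The pool, dataset, and device names here are just examples:

   # dump the vdev labels from a disk:
   zdb -l /dev/rdsk/c0t0d0s0

   # walk a dataset's objects; more d's means more verbosity:
   zdb -dddd tank/home

   # read a raw block as vdev:offset:size (hex), as in Max Bruning's posts:
   zdb -R tank 0:400000:200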

-Gary
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss