On 2/27/2013 2:05 PM, Tim Cook wrote:
On Wed, Feb 27, 2013 at 2:57 AM, Dan Swartzendruber
<dswa...@druber.com> wrote:
I've been using it since rc13. It's been stable for me as long as you don't
get into things like zvols and such...
Then it de...
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Sašo Kiselkov
Sent: Wednesday, February 27, 2013 6:37
ZFS on Linux (ZOL) has made some pretty impressive strides over the last
year or so...
Did you set the autoexpand property?
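(For the archive, a minimal sketch of checking and enabling it; pool and device names are placeholders:)

    zpool get autoexpand tank       # is automatic growth enabled?
    zpool set autoexpand=on tank    # grow into resized LUNs automatically
    zpool online -e tank c0t0d0     # expand a device that was already resized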
On 11/14/2012 9:44 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Dan Swartzendruber
Well, I think I give up for now. I spent quite a few hours over the last
couple of days...
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Klimov
Sent: Tuesday, November 13, 2012 10:08 PM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?
Well, I think I give up for now. I spent quite a few hours over the last
couple of days trying to get gnome desktop working on bare-metal OI,
followed by virtualbox. Supposedly that works in headless mode with RDP for
management, but nothing but fail for me. Found quite a few posts on various
forums...
Dan,
If you are going to do the all-in-one with vbox, you probably want to look at:
http://sourceforge.net/projects/vboxsvc/
It manages the starting/stopping of VirtualBox VMs via SMF.
Kudos to Jim Klimov for creating and maintaining it.
Geoff
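(Since vboxsvc wraps each VM in an SMF instance, the usual service commands apply. A sketch; the FMRI below is only a plausible instance name, check the svcs output for the real one:)

    svcs -a | grep vbox                     # list the vboxsvc instances
    svcadm enable svc:/site/xvm/vbox:myvm   # start the VM and keep it running
    svcadm disable svc:/site/xvm/vbox:myvm  # stop it cleanly via its stop method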
On Thu, Nov 8, 2012 at 7:32 PM, Dan
I have to admit Ned's (what do I call you?) idea is interesting. I may give
it a try...
Wait, my brain caught up with my fingers :) The guest is running on the
same host, so there is no virtual switch in this setup. I'm still going
to try the vmxnet3 and see what difference it makes...
On 11/8/2012 1:41 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: Dan Swartzendruber [mailto:dswa...@druber.com]
Now you have me totally confused. How does your setup get data from the
guest to the OI box? If thru a wire, if it's gig-e, it's going to be
1
On 11/8/2012 12:35 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
the VM running "a ZFS OS" enjoys PCI pass-through, so it gets dedicated
hardware access to the HBA
-Original Message-
From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
[mailto:opensolarisisdeadlongliveopensola...@nedharvey.com]
Sent: Wednesday, November 07, 2012 11:44 PM
To: Dan Swartzendruber; Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
Cc: Tiernan OToole
On 11/7/2012 10:53 AM, Edmund White wrote:
Same thing here. With the right setup, an all-in-one system based on
VMware can be very solid and perform well.
I've documented my process here: http://serverfault.com/a/398579/13325
But I'm surprised at the negative comments about VMware in this context...
On 11/7/2012 10:02 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
I formerly did exactly the same thing. Of course performance is abysmal
because you're booting a guest VM to share storage back to the host where the
actual VMs run. Not to mention, there's the startup de...
On 10/25/2012 11:44 AM, Sašo Kiselkov wrote:
It may be that you'll get reduced cabling range (only up to SATA
lengths, obviously), but it works. The voltage differences are very
small and should only come into play when you're pushing the envelope of
the cable length.
I have a two-drive eSATA...
On 10/4/2012 1:56 PM, Jim Klimov wrote:
What if the backup host is down (i.e. the ex-master after the failover)?
Will your failed-over pool accept no writes until both storage machines
are working?
What if internetworking between these two heads has a glitch, and as
a result both of them become masters?
Forgot to mention: my interest in doing this was so I could have my ESXi
host point at a CARP-backed IP address for the datastore, and I would
have no single point of failure at the storage level.
This whole thread has been fascinating. I really wish we (OI) had the
two following things that FreeBSD supports:
1. HAST - provides a block-level driver that mirrors a local disk to a
network "disk", presenting the result as a block device using the GEOM API.
2. CARP.
I have a prototype w...
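(For anyone curious what the FreeBSD side looks like, a minimal hast.conf sketch; hostnames, addresses and the device are made up, see hast.conf(5):)

    resource shared0 {
            on hosta {
                    local /dev/ada1
                    remote 10.0.0.2
            }
            on hostb {
                    local /dev/ada1
                    remote 10.0.0.1
            }
    }

hastd then exposes /dev/hast/shared0 on whichever head is made primary ("hastctl role primary shared0"), the pool is built on top of that device, and CARP decides which head owns the shared IP.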
Matt, how about running the same disk benchmark(s), with sync=disabled vs
sync=enabled and the ZIL accelerator in place?
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Matt Van Mater
Sent: Monday, October 01, 2012 9:19 AM
To: zfs-disc
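(For the archive: the toggle is a per-dataset property, one command each way; "tank/vm" is a stand-in for the datastore dataset. Note the actual values are sync=disabled and sync=standard, there is no literal "enabled":)

    zfs set sync=disabled tank/vm   # run the benchmark: pure async upper bound
    zfs set sync=standard tank/vm   # run it again: honest numbers with the slog
    zfs get sync tank/vm            # confirm the current setting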
On 9/26/2012 11:18 AM, Matt Van Mater wrote:
If the added device is slower, you will experience a slight drop in
per-op performance, however, if your working set needs another SSD,
overall it might improve your throughput (as the cache hit ratio will
increase).
Thanks for your
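(Adding a second cache device is a one-liner, and reads then spread across both; the device name is a placeholder:)

    zpool add tank cache c2t1d0   # attach another L2ARC device
    zpool iostat -v tank          # cache devices appear under the 'cache' section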
On 9/25/2012 3:38 PM, Jim Klimov wrote:
2012-09-11 16:29, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Dan Swartzendruber
My first thought was everything is hitting in ARC
On 9/18/2012 10:31 AM, Eugen Leitl wrote:
I'm currently thinking about rolling a variant of
http://www.napp-it.org/napp-it/all-in-one/index_en.html
with remote backup (via snapshot and send) to 2-3
other (HP N40L-based) zfs boxes for production in
our organisation. The systems themselves would
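(The snapshot-and-send backup cycle mentioned above is roughly the following; pool, snapshot and host names are placeholders:)

    zfs snapshot -r tank@2012-09-18
    zfs send -R -i tank@2012-09-17 tank@2012-09-18 | \
        ssh n40l-1 zfs receive -duF backup

The -R/-i pair produces an incremental replication stream, so only blocks changed since the previous snapshot cross the wire.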
...of the datastore and back (to get the new, smaller recordsize). I wonder
if that has an effect?
-Original Message-
From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
Sent: Tuesday, September 11, 2012 10:12 AM
To: Dan Swartzendruber
Cc: 'James H'; zfs-discuss@opensolaris.org
-Original Message-
From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
Sent: Tuesday, September 11, 2012 9:52 AM
To: Dan Swartzendruber
Cc: 'James H'; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Interesting question about L2ARC
On 09/11/2012 03:41 PM, Dan Swar
...like 160GB (thin provisioning in action), so it seems to me I should be
able to fit the entire thing in L2ARC?
-Original Message-
From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
Sent: Tuesday, September 11, 2012 9:35 AM
To: Dan Swartzendruber
Cc: 'James H'; zfs-discuss@opensola
I think you may have a point. I'm also inclined to enable prefetch caching
per Saso's comment, since I don't have massive throughput - latency is more
important to me.
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of J
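(The knob in question is the l2arc_noprefetch tunable, which defaults to 1, i.e. prefetched/streaming buffers are kept out of L2ARC. A sketch for illumos-derived systems; use with care:)

    # persistent, in /etc/system (takes effect at next boot):
    set zfs:l2arc_noprefetch = 0

    # or live, via mdb:
    echo "l2arc_noprefetch/W 0" | mdb -kw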
Hmmm, but the "real hit ratio" was 68%?
-Original Message-
From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
[mailto:opensolarisisdeadlongliveopensola...@nedharvey.com]
Sent: Tuesday, September 11, 2012 8:30 AM
To: Dan Swartzendruber; zfs-discuss@opensolaris.org
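(For reference, the hit ratio usually quoted is just hits over hits plus misses from the ARC kstats:)

    kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses
    # ratio = 100 * hits / (hits + misses)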
I got a 256GB Crucial M4 to use for L2ARC for my OpenIndiana box. I added
it to the tank pool and let it warm for a day or so. By that point, 'zpool
iostat -v' said the cache device had about 9GB of data, but (and this is
what has me puzzled) kstat showed ZERO l2_hits. That's right, zero.
kstat...
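(The message is cut off above, but the relevant counters can be pulled directly; a sketch:)

    kstat -p zfs:0:arcstats | grep l2   # all L2ARC counters
    kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses

Zero l2_hits with a warm cache device usually just means the working set is still being served out of the ARC itself, so reads never miss through to L2.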