George wrote:
I'm curious about something. Wouldn't ZFS `send` and `recv` be a
perfect fit for Apple Time Machine in Leopard if glued together by
some scripts? In this scenario you could have an external volume,
simply send snapshots to it, and restore as needed with recv.
Also, it would seem that Appl
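A minimal sketch of the kind of glue George describes, using only the stock
zfs commands; the pool names "tank" and "backup" and the dataset "home" are
hypothetical, with the external volume assumed to hold its own pool:

    # take a timestamped snapshot of the dataset to protect
    NOW=$(date +%Y%m%d%H%M)
    zfs snapshot tank/home@$NOW

    # first run: send the full snapshot to the external pool
    zfs send tank/home@$NOW | zfs recv backup/home

    # later runs: send only the delta since the previous snapshot
    # ($LAST is the name of that previous snapshot)
    zfs send -i tank/home@$LAST tank/home@$NOW | zfs recv -F backup/home

    # restore by reversing the direction
    zfs send backup/home@$NOW | zfs recv -F tank/home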
Ian Collins wrote:
> David Dyer-Bennet wrote:
>> Richard Elling wrote:
>>> What I would do:
>>> 2 disks: slice 0 & 3 root (BE and ABE), slice 1 swap/dump, slice 6 ZFS mirror
>>> 2 disks: whole disk mirrors
>> I don't understand "slice 6 zfs mirror". A mirror takes *two* things
>> of the same size.
> Note t
On 6/15/07, Brian Hechinger <[EMAIL PROTECTED]> wrote:
On Fri, Jun 15, 2007 at 02:27:18PM -0700, Neal Pollack wrote:
>
> So it only has room for one power supply. How many disk drives will you
> be installing?
> It's not the steady state current that matters, as much as it is the
> ability to ha
David Dyer-Bennet wrote:
> Richard Elling wrote:
>> What I would do:
>> 2 disks: slice 0 & 3 root (BE and ABE), slice 1 swap/dump, slice
>> 6 ZFS mirror
>> 2 disks: whole disk mirrors
>>
> I don't understand "slice 6 zfs mirror". A mirror takes *two* things
> of the same size.
>
Note the "
[EMAIL PROTECTED] said:
> Richard Elling wrote:
>> For the time being, these SATA disks will operate in IDE compatibility mode,
>> so don't worry about the write cache. There is some debate about whether
>> the write cache is a win at all, but that is another rat hole. Go ahead
>> and split off s
Richard Elling wrote:
> What I would do:
> 2 disks: slice 0 & 3 root (BE and ABE), slice 1 swap/dump, slice 6 ZFS mirror
> 2 disks: whole disk mirrors
I don't understand "slice 6 zfs mirror". A mirror takes *two* things of
the same size.
--
David Dyer-Bennet, [EMAIL PROTECTED]; http://dd
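For what it's worth, a sketch of how Richard's layout would presumably be
assembled: slice 6 of each of the two boot disks forms one mirror, and the
other two disks are handed to ZFS whole as a second mirror (device names
below are hypothetical):

    # c0t0d0/c0t1d0: sliced boot disks, c0t2d0/c0t3d0: whole-disk mirror
    zpool create tank \
        mirror c0t0d0s6 c0t1d0s6 \
        mirror c0t2d0 c0t3d0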
Rick Mann wrote:
Richard Elling wrote:
For the time being, these SATA disks will operate in IDE compatibility mode, so
don't worry about the write cache. There is some debate about whether the write
cache is a win at all, but that is another rat hole. Go ahead and split off
some
space for bo
Rick Mann wrote:
> Richard Elling wrote:
>
>
>> For the time being, these SATA disks will operate in IDE compatibility mode,
>> so
>> don't worry about the write cache. There is some debate about whether the
>> write
>> cache is a win at all, but that is another rat hole. Go ahead and split
Richard Elling wrote:
> For the time being, these SATA disks will operate in IDE compatibility mode,
> so
> don't worry about the write cache. There is some debate about whether the
> write
> cache is a win at all, but that is another rat hole. Go ahead and split off
> some
> space for boot a
Rick Mann wrote:
I'm having a heckuva time posting to individual replies (keep getting
exceptions).
I have a 1U rackmount server with 4 bays. I don't think there's any way to
squeeze in a small IDE drive, and I don't want to reduce the swap transfer rate
if I can avoid it.
The machine has 4
I'm having a heckuva time posting to individual replies (keep getting
exceptions).
I have a 1U rackmount server with 4 bays. I don't think there's any way to
squeeze in a small IDE drive, and I don't want to reduce the swap transfer rate
if I can avoid it.
The machine has 4 500 GB SATA drives,
On 6/15/07, Brian Hechinger <[EMAIL PROTECTED]> wrote:
Hmmm, that's an interesting point. I remember the old days of having to
stagger startup for large drives (physically large, not capacity large).
Can that be done with SATA?
I had to link 2 600w power supplies together to be able to power
Rob Windsor wrote:
>
> What 8-port-SATA motherboard models are Solaris-friendly? I've hunted
> and hunted and have finally resigned myself to getting a "generic"
> motherboard with PCIe-x16 and dropping in an Areca PCIe-x8 RAID card
> (in JBOD config, of course).
>
I don't know about 8 port SATA,
On Fri, Jun 15, 2007 at 02:27:18PM -0700, Neal Pollack wrote:
>
> So it only has room for one power supply. How many disk drives will you
> be installing?
> It's not the steady state current that matters, as much as it is the
> ability to handle the surge current
> of starting to spin 17 disks
Tom Kimes wrote:
Here's a start for a suggested equipment list:
Lian Li case with 17 drive bays (12 3.5" , 5 5.25")
http://www.newegg.com/Product/Product.aspx?Item=N82E1682064
So it only has room for one power supply. How many disk drives will you
be installing?
It's not the steady
Here's a start for a suggested equipment list:
Lian Li case with 17 drive bays (12 3.5" , 5 5.25")
http://www.newegg.com/Product/Product.aspx?Item=N82E1682064
Asus M2N32-WS motherboard has PCI-X and PCI-E slots. I'm using Nevada b64 for
iSCSI targets:
http://www.newegg.com/Product/Produc
Victor Engle wrote:
Well I suppose complexity is relative. Still, to use Sun Cluster at
all I have to install the cluster framework on each node, correct? And
even before that I have to install an interconnect with 2 switches
unless I direct connect a simple 2 node cluster.
Yes, rolling your ow
Well I suppose complexity is relative. Still, to use Sun Cluster at
all I have to install the cluster framework on each node, correct? And
even before that I have to install an interconnect with 2 switches
unless I direct connect a simple 2 node cluster.
My thinking was that ZFS seems to try and
On 6/15/07, Ian Collins <[EMAIL PROTECTED]> wrote:
Alec Muffett wrote:
> 2) I've considered pivot-root solutions based around a USB stick or
> drive; cute, but I want a single tower box and no "dongles"
You could buy a laptop disk, or mount one of these on the motherboard:
http://www.newegg.com/
Alec Muffett wrote:
As I understand matters, from my notes to design the "perfect" home NAS
server :-)
1) you want to give ZFS entire spindles if at all possible; that will
mean it can enable and utilise the drive's hardware write cache
properly, leading to a performance boost. You want to do
Vic Engle wrote:
Has there been any discussion here about the idea of integrating a virtual IP
into ZFS? It makes sense to me because of the integration of NFS and iSCSI with
the sharenfs and shareiscsi properties. Since these are both dependent on an IP,
it would be pretty cool if there was also a vi
comments from the peanut gallery...
Rob Windsor wrote:
Ian Collins wrote:
Alec Muffett wrote:
As I understand matters, from my notes to design the "perfect" home
NAS server :-)
1) you want to give ZFS entire spindles if at all possible; that will
mean it can enable and utilise the drive's har
On Fri, Jun 15, 2007 at 04:37:06AM -0700, Douglas Atique wrote:
>
> I have the impression (didn't check though) that the pool is made
> available by just setting some information in its main superblock or
> something like that (sorry for the imprecision in ZFS jargon). I
> understand the OS knows whic
Ian Collins wrote:
Alec Muffett wrote:
As I understand matters, from my notes to design the "perfect" home
NAS server :-)
1) you want to give ZFS entire spindles if at all possible; that will
mean it can enable and utilise the drive's hardware write cache
properly, leading to a performance boos
Hi Rick,
> Hmm. Not sure I can do RAID5 (and boot from it). Presumably, though,
> this would continue to function if a drive went bad.
>
> It also prevents ZFS from managing the devices itself, which I think
> is undesirable (according to the ZFS Admin Guide).
>
> I'm also not sure if I have RAI
Same version on both systems.
On Monday I'll pull together the facts on what stuck out to me ...
there are some points that are very strange ...
On Friday, 15.06.2007 at 10:52 -0400, Torrey McMahon wrote:
> This sounds familiar, like something about the powerpath device not
> responding to the SCS
This sounds familiar, like something about the powerpath device not
responding to the SCSI inquiry strings. Are you using the same version
of powerpath on both systems? Same type of array on both?
Dominik Saar wrote:
Hi there,
I see strange behavior when I create a zfs pool on an EMC Powe
Customer asks:
Will SunCluster 3.2 support ZFS zpool created with MPxIO devices instead
of the corresponding DID devices?
Will it cause any support issues?
Thank you,
James Lefebvre
--
James Lefebvre - OS Technical Support  [EMAIL PROTECTED]
(800)USA-4SUN (Reference your
Has there been any discussion here about the idea of integrating a virtual IP
into ZFS? It makes sense to me because of the integration of NFS and iSCSI with
the sharenfs and shareiscsi properties. Since these are both dependent on an IP,
it would be pretty cool if there was also a virtual IP that w
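For reference, the two properties in question are per-dataset one-liners
(pool and dataset names below are hypothetical):

    # publish a filesystem over NFS
    zfs set sharenfs=on tank/export/home

    # carve out a volume and expose it as an iSCSI target
    zfs create -V 10G tank/vol0
    zfs set shareiscsi=on tank/vol0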
Hi there,
I see strange behavior when I create a zfs pool on an EMC PowerPath
pseudo device.
I can create a pool on emcpower0a
but not on emcpower2a;
zpool core dumps with "invalid argument".
That's my second machine with powerpath and zfs;
the first one works fine, even zfs/powerpath an
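A minimal checklist sketch for narrowing this down, comparing the working and
failing pseudo-devices (paths assume PowerPath's usual /dev/rdsk naming; the
scratch pool name is made up):

    # compare the labels/partition tables of the two pseudo-devices
    prtvtoc /dev/rdsk/emcpower0a
    prtvtoc /dev/rdsk/emcpower2a

    # confirm both devices look healthy to PowerPath itself
    powermt display dev=all

    # retry the create under truss to see which ioctl returns the error
    truss -f -t ioctl zpool create testpool emcpower2a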
Alec Muffett wrote:
> As I understand matters, from my notes to design the "perfect" home
> NAS server :-)
>
> 1) you want to give ZFS entire spindles if at all possible; that will
> mean it can enable and utilise the drive's hardware write cache
> properly, leading to a performance boost. You want
> No. There is nothing else the OS can do when it
> cannot mount the root
> filesystem.
I have the impression (didn't check though) that the pool is made available by
just setting some information in its main superblock or something like that
(sorry for the imprecision in ZFS jargon). I understand
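Roughly what that looks like from the command line; the kernel also keeps the
configuration of imported pools cached in /etc/zfs/zpool.cache, which is how
they come back automatically at boot (pool name below is hypothetical):

    # export forgets the pool (and drops it from /etc/zfs/zpool.cache)
    zpool export tank

    # import scans attached devices for pool labels and rebuilds the config
    zpool import        # list importable pools found on disk
    zpool import tank   # actually bring one back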
On 14 June, 2007 - Bill Sommerfeld sent me these 0,6K bytes:
> On Thu, 2007-06-14 at 09:09 +0200, [EMAIL PROTECTED] wrote:
> > The implication of which, of course, is that any app built for Solaris 9
> > or before which uses scandir may have picked up a broken one.
>
> or any app which includes i
> I definitely *don't* want to use flash for swap...
You could use a ZVOL on the RAID-Z. Ok, not the most efficient thing,
but there's no sort of flag to disable parity on a specific object. I
wish there was, exactly for this reason.
-mg
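A sketch of the zvol-for-swap arrangement under discussion (pool and volume
names are made up):

    # create a 2 GB volume on the raidz pool and add it as swap
    zfs create -V 2G tank/swapvol
    swap -a /dev/zvol/dsk/tank/swapvol

    # verify the new swap device is in use
    swap -l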
Ed Ravin <[EMAIL PROTECTED]> wrote:
> > 15 years ago, Novell Netware started to return a fixed size of 512 for all
> > directories via NFS.
> >
> > If there is still unfixed code, there is no help.
>
> The Novell behavior, commendable as it is, did not break the BSD scandir()
> code, because BSD
Tsk, turns out MySQL was holding on to some old files.
Thanks Daniel!
As I understand matters, from my notes to design the "perfect" home NAS
server :-)
1) you want to give ZFS entire spindles if at all possible; that will
mean it can enable and utilise the drive's hardware write cache
properly, leading to a performance boost. You want to do this if you
can. A
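As a concrete illustration of point 1 (device names are hypothetical): with no
slice suffix, ZFS writes its own label and can manage the drive's write cache
itself:

    # whole disks -- ZFS can safely enable the write cache
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0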