For background on what this is, see:
http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416
http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200
=
zfs-discuss 04/16 - 04/30
=
Nathan,
Some answers inline...
Nathan Huisman wrote:
= PROBLEM
To create a disk storage system that will act as an archive point for
user data (Non-recoverable data), and also act as a back end storage
unit for virtual machines at a block level.
= BUDGET
Currently I have about 25-30
Questions I don't know answers to are omitted. "I am but a nestling."
On 5/31/07, Nathan Huisman <[EMAIL PROTECTED]> wrote:
= STORAGE REQUIREMENTS
5-10 TB of redundant, fairly high-speed storage
What does "high speed" mean? How many users are there for this
system? Are they accessing it v
On May 31, 2007, at 12:15 AM, Nathan Huisman wrote:
= PROBLEM
To create a disk storage system that will act as an archive point for
user data (Non-recoverable data), and also act as a back end storage
unit for virtual machines at a block level.
Here are some tips from me. I notice you m
= PROBLEM
To create a disk storage system that will act as an archive point for
user data (Non-recoverable data), and also act as a back end storage
unit for virtual machines at a block level.
= BUDGET
Currently I have about 25-30k to start the project, more could be
allocated in the ne
Out of curiosity, I'm wondering if Lori, or anyone else who actually writes the
stuff, has any sort of 'current state of play' page that describes the latest
OS ON release and how it does ZFS boot and installs? There are blogs all over
the place, of course, which have a lot of stale information,
On 30-May-07, at 6:31 PM, Jerry Kemp wrote:
What comment in particular was that?
Sorry, I should have cited it. Blew my chance to moderate by posting
to the thread :)
http://ask.slashdot.org/comments.pl?sid=236627&cid=19319903
I computed the FUD factor by sorting the items into known bug
I have a simple fibre channel SAN setup, with 2 disc arrays and 2
SunFire boxes attached to a FC switch. Each disc array holds a ZFS
pool which should be mounted by one OpenSolaris system, and not the
other.
One of the two pairs was a recent addition to the FC switch (it was
previously direct-atta
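A minimal sketch of how a pool is handed between two hosts on a shared SAN
(the pool name is a placeholder):
  # on the host that currently owns the pool
  zpool export tank
  # on the other host: list importable pools, then import the one you want
  zpool import
  zpool import tank
Note that nothing in ZFS itself stops both hosts from importing the same pool
(import -f will force it), so the export/import discipline matters on a
shared fabric like this.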
The reliability calculations for these scenarios are described in several
articles on my blog.
http://blogs.sun.com/relling
You do get additional, mirror-like reliability from using the copies
property, also described in my blog.
Personally, I'd go with mirroring across the shelves. KISS.
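As a sketch of that, assuming the two shelves show up as controllers c2 and c3
(device names hypothetical):
  # mirror each vdev across the two shelves
  zpool create tank mirror c2t0d0 c3t0d0 mirror c2t1d0 c3t1d0
  # additionally keep two copies of everything in this filesystem
  zfs set copies=2 tank/important
  zfs get copies tank/important
copies=2 spreads the extra copy across the pool (ZFS tries to place copies on
different disks where it can), so it helps against localized media errors, but
it is not a substitute for the mirror.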
What comment in particular was that?
Jerry K
Toby Thain wrote:
On 30-May-07, at 4:28 PM, Mark A. Carlson wrote:
http://ask.slashdot.org/article.pl?sid=07/05/30/0135218&from=rss
One highly rated comment features some of the first real ZFS FUD I've
seen in the wild. Does this signify that
On 30-May-07, at 4:28 PM, Mark A. Carlson wrote:
http://ask.slashdot.org/article.pl?sid=07/05/30/0135218&from=rss
One highly rated comment features some of the first real ZFS FUD I've
seen in the wild. Does this signify that ZFS is being taken seriously
now? :)
--Toby
Will Murnane wrote:
> Sorry for singling you out, Ian; I meant "Reply to All". This list
> doesn't set "reply-to"...
> On 5/30/07, Ian Collins <[EMAIL PROTECTED]> wrote:
>> How about 8 two way mirrors between shelves and a couple of hot spares?
> That's fine and good, but then losing just one disk from each shelf fast
> enough means the whole array is gone.
http://ask.slashdot.org/article.pl?sid=07/05/30/0135218&from=rss
I have a Solaris 11 build server with build 58 and a zfs scratch
filesystem. When trying to upgrade to build 63 using liveupgrade
I get the following upon reboot. The machine never comes up. Just
keeps giving the error/warning below.
Is there something I am doing wrong?
WARNING: /[EMAIL PROTECT
Hey all,
I'm having the following issue:
We have been setting up ZVOLs and we share them via iSCSI.
All goes well until we want to secure this via CHAP authentication.
When we try to do that, we never succeed in discovering the target from an
external initiator. We tested both Solaris (b57)
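For reference, a minimal sketch of the non-CHAP part of such a setup (pool and
volume names hypothetical); the CHAP credentials themselves are configured on
the target side through iscsitadm:
  # create a 20 GB zvol and export it as an iSCSI target
  zfs create -V 20g tank/iscsivol
  zfs set shareiscsi=on tank/iscsivol
  # confirm the target daemon is exporting it
  iscsitadm list target -v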
On 30-May-07, at 12:33 PM, Roch - PAE wrote:
Torrey McMahon writes:
Toby Thain wrote:
On 25-May-07, at 1:22 AM, Torrey McMahon wrote:
Toby Thain wrote:
On 22-May-07, at 11:01 AM, Louwtjie Burger wrote:
On 5/22/07, Pål Baltzersen <[EMAIL PROTECTED]> wrote:
What if your HW-RAID-control
[EMAIL PROTECTED] said:
> On 5/30/07, Ian Collins <[EMAIL PROTECTED]> wrote:
> > How about 8 two way mirrors between shelves and a couple of hot spares?
>
> That's fine and good, but then losing just one disk from each shelf fast
> enough means the whole array is gone. Then one strong enough pow
Torrey McMahon writes:
> Toby Thain wrote:
> >
> > On 25-May-07, at 1:22 AM, Torrey McMahon wrote:
> >
> >> Toby Thain wrote:
> >>>
> >>> On 22-May-07, at 11:01 AM, Louwtjie Burger wrote:
> >>>
> On 5/22/07, Pål Baltzersen <[EMAIL PROTECTED]> wrote:
> > What if your HW-RAID-cont
Thanks. Actually, I already saw the script mentioned there.
Is it possible to use zfs send/receive when the disk is not mounted?
I.e., give it a device name as a parameter and not ZFS filesystem names?
-me2unix
Hello all,
Sorry if you think this question is stupid, but I need to ask...
Imagine a normal situation on an NFS server with "N" client nodes. The object
of the shares is software (/usr, for instance), and the admin wants to make
new versions of a few packages available.
So, it would be nice if t
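One hedged sketch of how ZFS could help here, assuming the shared software
lives on its own dataset (all names hypothetical): snapshot before publishing
the new packages, and clone the snapshot if the old version must stay visible
as its own share:
  zfs snapshot pool/usr@pre-upgrade
  # keep the old version available as a separate NFS share
  zfs clone pool/usr@pre-upgrade pool/usr-old
  zfs set sharenfs=on pool/usr-old
Clients can then mount either the live share or the cloned old version.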
On May 29, 2007, at 2:59 PM, [EMAIL PROTECTED] wrote:
When sequential I/O is done to the disk directly there is no performance
degradation at all.
All filesystems impose some overhead compared to the rate of raw disk
I/O. It's going to be hard to store data on a disk unless some kind of
H E wrote:
Does it sound possible at all, or can it not be done with the current ZFS
commands yet?
zpool replace
-- richard
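A minimal sketch of that, with hypothetical pool and device names:
  # swap the old device for the new one; ZFS resilvers onto it
  zpool replace tank c1t2d0 c1t3d0
  # watch the resilver progress
  zpool status tank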
Sorry for singling you out, Ian; I meant "Reply to All". This list
doesn't set "reply-to"...
On 5/30/07, Ian Collins <[EMAIL PROTECTED]> wrote:
How about 8 two way mirrors between shelves and a couple of hot spares?
That's fine and good, but then losing just one disk from each shelf
fast enough means the whole array is gone.
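As a sketch of that layout, assuming shelf one appears as controller c1 and
shelf two as c2 (device names hypothetical):
  zpool create tank \
    mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0 \
    mirror c1t2d0 c2t2d0 mirror c1t3d0 c2t3d0 \
    mirror c1t4d0 c2t4d0 mirror c1t5d0 c2t5d0 \
    mirror c1t6d0 c2t6d0 mirror c1t7d0 c2t7d0 \
    spare c1t8d0 c2t8d0
Each mirror spans the shelves, so losing a whole shelf still leaves every
vdev with one working side.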
Louwtjie Burger wrote:
I know the above mentioned kit (2530) is new, but has anybody tried a
direct attached SAS setup using zfs? (and the Sun SG-XPCIESAS-E-Z
card, 3Gb PCI-E SAS 8-Port Host Adapter, RoHS:Y - which is the
preferred HBA, I suppose)
Did it work correctly?
Yes, it was tested as part
You can do this using zfs send and receive. See
http://blogs.sun.com/chrisg/entry/recovering_my_laptop_using_zfs for an
example. If the file system was remote then you would need to squeeze some ssh
commands into the script but the concept is the same.
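A minimal sketch of both variants, with hypothetical dataset, snapshot, and
host names:
  # local copy: snapshot, then send into another pool
  zfs snapshot tank/home@backup
  zfs send tank/home@backup | zfs receive backup/home
  # remote copy: squeeze ssh into the pipeline
  zfs send tank/home@backup | ssh backuphost zfs receive backup/home
  # later, send only the changes since the first snapshot
  zfs snapshot tank/home@backup2
  zfs send -i tank/home@backup tank/home@backup2 | ssh backuphost zfs receive backup/home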
Hi there
I know the above mentioned kit (2530) is new, but has anybody tried a
direct attached SAS setup using zfs? (and the Sun SG-XPCIESAS-E-Z
card, 3Gb PCI-E SAS 8-Port Host Adapter, RoHS:Y - which is the
preferred HBA, I suppose)
Did it work correctly?
Thank you
Does it sound possible at all, or can it not be done with the current ZFS
commands yet?