Ed Saipetch wrote:
> To answer a number of questions:
>
> Regarding different controllers, I've tried 2 Syba Sil 3114 controllers
> purchased about 4 months apart. I've tried 5.4.3 firmware with one and
> 5.4.13 with another. Maybe Syba makes crappy Sil 3114 cards but it's the
> same one that
Ima wrote:
>
> 3. Can anyone recommend a PCI-Express SATA controller that will work with
> 64-bit x86 Solaris 10?
>
I believe these cards support SAS and SATA devices just fine:
http://www.sun.com/storagetek/storage_networking/hba/sas/
Matthew Flanagan wrote:
> Mike,
>
> I followed your procedure for cloning zones and it worked well up until
> yesterday when I tried applying the S10U4 kernel patch 12001-14 and it
> wouldn't apply because I had my zones on zfs :(
>
> I'm still figuring out how to fix this other than moving all o
according to the zoneadm(1M) man page on s10u4:

     clone [-m copy] [-s zfs_snapshot] source_zone

         Install a zone by copying an existing installed zone.
         This subcommand is an alternative way to install the
         zone.

         -m copy

             Force the clone to
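for example, a snapshot-based clone would look roughly like the following (the zone names, zonepath and the pool/snapshot path are only illustrative, and the snapshot has to be one zoneadm recognizes as a valid snapshot of the source zone's zonepath):

  # zonecfg -z newzone 'create -t oldzone; set zonepath=/zones/newzone'
  # zoneadm -z newzone clone -s tank/zones/oldzone@SUNWzone1 oldzone

without -s, zoneadm will try to snapshot and clone the source zone's ZFS dataset itself, and -m copy forces an old-style file copy instead.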
hi all,
I was extracting an 8GB tar and encountered this panic. the system was
just installed last week with Solaris 10 update 3 and the latest
recommended patches as of June 26. I can provide more output from mdb,
or the crashdump itself if it would be of any use.
any ideas what's going on here?
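for anyone wanting to dig in, a minimal sketch of pulling the panic stack out of a crash dump with mdb (assuming the default /var/crash/<hostname> savecore directory and dump number 0):

  # cd /var/crash/`hostname`
  # mdb unix.0 vmcore.0
  > ::status
  > ::stack
  > ::msgbuf

::status shows the panic string and dump summary, ::stack the panicking thread's stack, and ::msgbuf the tail of the console message buffer.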
On Mon, Jul 31, 2006 at 11:51:09AM -0400, George Wilson wrote:
> We have putback a significant number of fixes and features from
> OpenSolaris into what will become Solaris 10 11/06. For reference here's
> the list:
George,
this is great! any idea when these will be available as patches for
s1
On Thu, Jul 27, 2006 at 10:41:10PM -0700, Frank Cusack wrote:
> On July 28, 2006 11:59:50 AM +1000 grant beattie <[EMAIL PROTECTED]> wrote:
> >
> >ZFS won't automatically import a pool unless it is explicitly exported
> >first via "zpool export", so it shou
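for what it's worth, a minimal sketch of moving a pool between two hosts (pool name and host prompts are made up):

  hostA# zpool export tank
  hostB# zpool import tank

running "zpool import" with no arguments lists pools that are visible but not yet imported, and "zpool import -f" forces import of a pool that was not cleanly exported. never import the same pool on two hosts at once; ZFS is not a cluster filesystem.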
On Thu, Jul 27, 2006 at 06:35:06PM -0700, Frank Cusack wrote:
> Hi
>
> I have a SAS array with a zfs pool on it. zfs automatically searches for
> and mounts the zfs pool I've created there. I want to attach another
> host to this array, but it doesn't have any provision for zoning or the
> like.
On Thu, Jul 13, 2006 at 11:42:21AM -0700, Richard Elling wrote:
> >Yes, and while it's not an immediate showstopper for me, I'll want to
> >know that expansion is coming imminently before I adopt RAID-Z.
>
> [in brainstorming mode, sans coffee so far this morning]
>
> Better yet, buy two disks,
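as a rough sketch of what "buy two disks" looks like in practice today (device names invented): you can't grow an existing raidz vdev, but you can add a whole new top-level vdev to the pool, e.g. a mirror pair:

  # zpool add tank mirror c3t0d0 c3t1d0

new writes are then striped across the original raidz vdev and the new mirror. zpool add will likely want -f here, since the redundancy level of the new vdev differs from the existing raidz.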
On Tue, Jun 27, 2006 at 12:07:47PM +0200, Roch wrote:
> > > for small file workloads, setting recordsize to a value lower than the
> > > default (128k) may prove useful.
> >
> > When changing things like recordsize, can i do it on the fly on a
> > volume ? ( and then if i can what happens to
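as far as I understand it, recordsize can be changed on a live dataset, but it only affects blocks of files written after the change; existing files keep the record size they were created with. a quick sketch (dataset name is illustrative):

  # zfs get recordsize tank/fs
  # zfs set recordsize=8k tank/fs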
On Tue, Jun 27, 2006 at 11:16:40AM +0200, Patrick wrote:
> >sounds like your workload is very similar to mine. is all public
> >access via NFS?
>
> Well it's not 'public directly', courier-imap/pop3/postfix/etc... but
> the maildirs are accessed directly by some programs for certain
> things.
ye
On Tue, Jun 27, 2006 at 10:14:06AM +0200, Patrick wrote:
> Hi,
>
> I've just started using ZFS + NFS, and i was wondering if there is
> anything i can do to optimise it for being used as a mailstore ? (
> small files, lots of them, with lots of directories and high
> concurrent access )
>
> So a
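a couple of commonly suggested starting points for a maildir store, offered as things to experiment with rather than tested recommendations (dataset name invented):

  # zfs set atime=off tank/mail
  # zfs set recordsize=8k tank/mail

turning off atime is usually safe for maildir (check your mail software first), and a smaller recordsize can help when messages are small and rewritten individually.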
On Mon, Jun 19, 2006 at 04:48:13PM -0600, Mark Shellenbaum wrote:
> grant beattie wrote:
> >On Mon, Jun 19, 2006 at 01:37:55PM +0200, Detlef Drewanz wrote:
> >
> >>Hi,
> >>moving from ufs to zfs ufsdump-on-ufs --> ufsrestore within
> >>zfs is possible
On Mon, Jun 19, 2006 at 01:37:55PM +0200, Detlef Drewanz wrote:
> Hi,
> moving from ufs to zfs ufsdump-on-ufs --> ufsrestore within
> zfs is possible to run. I also tried it and it worked for my
> 2.6 GB Home directory very well. Does anyone see any issues ?
ufsrestore can't restore ACLs onto ZFS yet.
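for the basic data move (ignoring ACLs), the usual pipeline looks something like this; the pool, dataset and paths are only examples:

  # zfs create tank/home
  # cd /tank/home
  # ufsdump 0f - /export/home | ufsrestore rf -

just be aware, per the above, that the POSIX-draft ACLs on the UFS side won't be recreated on ZFS.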
On Thu, Jun 01, 2006 at 06:40:15PM -0500, Tao Chen wrote:
> >ABR> What about small random writes? Won't those also require reading
> >ABR> from all disks in RAID-Z to read the blocks for update, where in
> >ABR> mirroring only one disk need be accessed? Or am I missing something?
> >
> >If I under
On Wed, May 31, 2006 at 03:28:12PM +0200, Roch Bourbonnais - Performance
Engineering wrote:
> Hi Grant, this may provide some guidance for your setup;
>
> it's somewhat theoretical (take it for what it's worth) but
> it spells out some of the tradeoffs in the RAID-Z vs Mirror
> battle:
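to put rough numbers on that tradeoff (back-of-envelope only, assuming ~150 random IOPS per spindle): a single 6-disk RAID-Z vdev serves a small random read at roughly the speed of one disk, since each block is spread across the whole vdev, so on the order of 150 IOPS per vdev; the same 6 disks arranged as three 2-way mirrors can serve independent reads from every spindle, so roughly 6 x 150 = 900 random read IOPS. sequential bandwidth and usable capacity favour RAID-Z, small random reads favour mirrors.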
hi all,
I am hoping to move roughly 1TB of maildir format email to ZFS, but
I am unsure of what the most appropriate disk configuration on a 3510
would be.
based on the desired level of redundancy and usable space, my thought
was to create a pool consisting of 2x RAID-Z vdevs (either double
parity
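as a concrete sketch of that layout (device names are invented, and in practice would be the LUNs exported by the 3510):

  # zpool create mail \
      raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
      raidz c2t8d0 c2t9d0 c2t10d0 c2t11d0 c2t12d0 c2t13d0

for the double-parity variant, substitute raidz2 for raidz once running a build or update that supports it.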
On Fri, May 26, 2006 at 10:33:34AM -0700, Eric Schrock wrote:
> RAID-Z is single-fault tolerant. If you take out two disks, then you
> no longer have the required redundancy to maintain your data. Build 42
> should contain double-parity RAID-Z, which will allow you to sustain two
> simultaneous
I updated an i386 system to b39 yesterday, and noticed this when
running iostat:
   r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   0.0    0.5    0.0   10.0  0.0  0.0    0.0    0.5   0   0 c0t0d0
   0.0    0.5    0.0   10.0  0.0  0.0    0.0    0.6   0   0 c0t1d0
   0.0   65.
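for comparison, a pool-level view of the same traffic is available with (pool name assumed):

  # zpool iostat -v tank 5

which breaks the I/O down per vdev and per device every 5 seconds.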
On Thu, May 18, 2006 at 11:40:53PM -0600, Sanjay Nadkarni wrote:
> Since it's not exactly clear what you did with SVM I am assuming the
> following:
>
> You had a file system on top of the mirror and there was some I/O
> occurring to the mirror. The *only* time, SVM puts a device into
> maint
On Tue, May 16, 2006 at 10:13:46AM -0700, Eric Schrock wrote:
> What has happened is that your device has started reporting errors, but
> is still available on the system. i.e. ZFS is still able to ldi_open()
> the underlying device. This seems like a strange failure mode for the
> device (you m
On Tue, May 16, 2006 at 07:02:37PM +1000, grant beattie wrote:
> running b37 on amd64. after removing power from a disk configured as
> a mirror, 10 minutes has passed and ZFS has still not offlined it.
I should have mentioned, the disks are connected to an Adaptec 2120S
card (aac). not
running b37 on amd64. after removing power from a disk configured as
a mirror, 10 minutes has passed and ZFS has still not offlined it.
# zpool status tank
  pool: tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the er
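if the drive never gets faulted automatically, it can be taken out of service by hand; a minimal sketch (the pool name comes from the output above, the device name is invented):

  # zpool offline tank c0t1d0
  # zpool status -x

zpool status -x reports only pools with problems, which makes it easy to watch for the state change.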
On Fri, May 12, 2006 at 01:49:38PM -0700, Marion Hakanson wrote:
> Greetings,
>
> I've seen discussion that tar & cpio are "ZFS ACL aware"; And that
> Veritas NetBackup is not. GNU tar is not (at this time); Joerg's "star"
> probably will be Real Soon Now. Feel free to correct me if I'm wrong
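as a data point, the ACL-aware copy I've seen suggested uses Solaris cpio in pass-through mode (paths are examples):

  # cd /export/home
  # find . -print | cpio -pdmP /tank/home

if I recall correctly, the -P flag is what asks Solaris cpio to carry ACLs across; GNU tar, as noted, will not.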
On Wed, May 03, 2006 at 04:03:18PM +1000, James C. McPherson wrote:
> >Does there exist (or will there exist) any method or tool to migrate
> >UFS/SVM filesystems with soft partitions to ZFS filesystems with pools?
> >Any ideas for migrating an installed base: Solaris 10 UFS/Solaris Volume
> >Manager to Solar