> I don't know, I'm not a UFS expert (heck, I'm not an expert
> on _anything_). Have you investigated putting your paying
> customers onto zfs and managing quotas with zfs properties
> instead of ufs?
Yep, we spent about six weeks during the trial period of the X4500 trying
to find a way for ZFS to
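(For anyone following along: the property-based approach people usually mean here is one dataset per customer with a quota set on it. A minimal sketch, with made-up pool and dataset names:

# zfs create tank/home/cust001
# zfs set quota=10G tank/home/cust001
# zfs get quota tank/home/cust001

The quota property caps the whole dataset, so it works per-directory rather than per-UID, which is exactly where UFS-style user quotas and ZFS differ.)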
Jorgen Lundman wrote:
> > Since the panic stack only ever goes through ufs, you should
> log a call with Sun support.
>
> We do have support, but they only speak Japanese, and I'm still quite
> poor at it. But I have started the process of having it translated and
> passed along to the next person.
> Since the panic stack only ever goes through ufs, you should
log a call with Sun support.
We do have support, but they only speak Japanese, and I'm still quite
poor at it. But I have started the process of having it translated and
passed along to the next person. It is always fun to see what
Jorgen Lundman wrote:
> On Saturday the X4500 system panicked and rebooted. For some reason the
> /export/saba1 UFS partition was corrupt, and needed "fsck". This is why
> it did not come back online. /export/saba1 is mounted "logging,noatime",
> so fsck should never (-ish) be needed.
>
> SunOS
I'm not sure how to interpret the output of fmdump:
-bash-3.2# fmdump -ev
TIME                 CLASS                          ENA
Jul 06 23:25:39.3184 ereport.fs.zfs.vdev.bad_label  0x03b3e4e8b1900401
Jul 07 03:32:14.3561 ereport.fs.zfs.checksum        0xdaffb466a7e1
Jul 07 03:32:14.3561 ere
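(For what it's worth, 'fmdump -ev' only prints a one-line summary per error report; the capital-V form dumps the full payload, including which pool and vdev each ereport refers to, which makes the bad_label and checksum entries above easier to pin down:

# fmdump -eV | more

This is just a pointer to the verbose output, not a diagnosis of the entries above.)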
On Saturday the X4500 system panicked and rebooted. For some reason the
/export/saba1 UFS partition was corrupt, and needed "fsck". This is why
it did not come back online. /export/saba1 is mounted "logging,noatime",
so fsck should never (-ish) be needed.
SunOS x4500-01.unix 5.11 snv_70b i86pc
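(For reference, a vfstab entry for a UFS filesystem mounted that way would look roughly like the following; the device names are placeholders, not the actual saba1 devices:

/dev/dsk/c0t1d0s6  /dev/rdsk/c0t1d0s6  /export/saba1  ufs  2  yes  logging,noatime

With logging enabled the intent log is replayed at mount time, which is why fsck should normally not be required after a panic.)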
Hello Ross,
We're trying to accomplish the same goal over here, i.e. serving multiple
VMware images from an NFS server.
Could you tell us what kind of NVRAM device you ended up choosing? We bought
a Micromemory PCI card but can't get a Solaris driver for it...
Thanks
Gilberto
On 7/6/08 9:54 AM,
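(A hedged note, in case it helps: once an NVRAM card does show up as a disk device under Solaris, the usual way to put it in front of NFS/ZIL traffic is to add it to the pool as a separate log device. Pool and device names below are made up:

# zpool add tank log c4t0d0
# zpool status tank

This assumes a build recent enough to support separate log devices; it is only a sketch of the mechanism, not a statement about the Micromemory card.)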
Indeed, after rebooting we see the following. You'll have to trust me that
/ehome and /ehome/v1 are the relevant ZFS filesystems. If it makes any
difference, this file system had been previously mounted. My memory is
suggesting that zpool import works in this situation whenever the FS
hasn't been pr
Currently on snv_92 + some BFUs, but this has been going on for quite a while.
If I boot my system without a USB drive plugged in and then plug it in,
rmformat sees it but ZFS seems not to. If I reboot the system, ZFS
will have no problem with using the disk.
> # zpool import
> # rmformat
> Lookin
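(A guess at a workaround, not something I have verified on snv_92: force a device-link rescan before asking ZFS to look, e.g.

# devfsadm -c disk
# zpool import

If the pool on the USB drive shows up in the import listing after that, the problem is in the device links rather than in ZFS itself.)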
As a first step, 'fmdump -ev' should indicate why it's complaining
about the mirror.
Jeff
On Sun, Jul 06, 2008 at 07:55:22AM -0700, Pete Hartman wrote:
> I'm doing another scrub after clearing "insufficient replicas" only to find
> that I'm back to the report of insufficient replicas, which basi
Ross Smith wrote:
> Thanks Richard, filebench sounds ideal for testing the abilities of
> the server, far better than I expected to find actually.
>
> NFSstat might be tricky however, since the clients are going to be
> running XP :). I've got a very basic free benchmark that I'll use to
> ch
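(In case it saves someone a search: a minimal filebench run looks roughly like the interactive session below. The workload name and target directory are just examples, and the exact personality names vary between filebench versions:

# filebench
filebench> load fileserver
filebench> set $dir=/tank/fbtest
filebench> run 60

It prints aggregate IOPS and throughput at the end of the run.)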
Ross wrote:
> Can anybody tell me how to measure the raw performance of a new system I'm
> putting together? I'd like to know what it's capable of in terms of IOPS and
> raw throughput to the disks.
>
> I've seen Richard's raidoptimiser program, but I've only seen results for
> random read iops
> Then I went and bought an Intel PCI Gigabit Ethernet card for 25€ which seems
> to have solved the problem.
Is this really the case? If so, that is an important clue to finding out why
virtualized OpenSolaris performance is so poor. I tried every network adapter
in VirtualBox and VMware and
Tommaso Boccali wrote:
> Is there a way to do it "via software"? (attach / remove / add / detach)
>
Skeleton process:
1. detach c1t7d0 from the root mirror
2. replace c5t4d0 with c1t7d0
In the details, you will need to be careful with the partitioning
for the root mirror. You will need
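(A sketch of the commands for the skeleton above; the pool names rpool and datapool are placeholders, and the root-mirror device will usually be a slice rather than the whole disk:

# zpool detach rpool c1t7d0s0
# zpool replace datapool c5t4d0 c1t7d0
# prtvtoc /dev/rdsk/<source>s2 | fmthard -s - /dev/rdsk/<target>s2

The last line is the usual way to copy an SMI label/slice layout when a disk later has to match its root-mirror partner; device names there are placeholders too.)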
I'm doing another scrub after clearing "insufficient replicas" only to find
that I'm back to the report of insufficient replicas, which basically leads me
to expect this scrub (due to complete in about 5 hours from now) won't have any
benefit either.
-bash-3.2# zpool status local
pool: local
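(For completeness, the sequence I would sanity-check before burning another five hours on a scrub; nothing here is specific to this pool:

# zpool status -xv local
# fmdump -ev
# zpool clear local

If fmdump keeps logging fresh errors against the same devices, the scrub result is unlikely to change, which matches Jeff's suggestion elsewhere in the thread.)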
Is there a way to do it "via software"? (attach / remove / add / detach)
If nothing else, it would help me quite a lot to understand the underlying
ZFS mechanism ...
thanks
;)
tom
On Sun, Jul 6, 2008 at 10:27 AM, Jeff Bonwick <[EMAIL PROTECTED]> wrote:
> I would just swap the physical locations of t
I have a zpool which has grown "organically". I had a 60Gb disk, I added a
120, I added a 500, I got a 750 and sliced it and mirrored the other pieces.
The 60 and the 120 are internal PATA drives, the 500 and 750 are Maxtor
OneTouch USB drives.
The original system I created the 60+120+500 pool
On Sun, Jul 6, 2008 at 3:46 PM, Ross <[EMAIL PROTECTED]> wrote:
>
> For your second one I'm less sure what's going on:
> # zpool create temparray raidz c1t2d0 c1t4d0 raidz c1t3d0 c1t5d0 raidz
> c1t6d0 c1t8d0
>
> This creates three two-disk raid-z sets and stripes the data across them.
> The probl
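(One small habit that catches this sort of layout surprise before it happens, offered as a suggestion rather than something from the original post: -n does a dry run and prints the vdev tree without creating anything:

# zpool create -n temparray raidz c1t2d0 c1t4d0 raidz c1t3d0 c1t5d0 raidz c1t6d0 c1t8d0

The output shows the three separate raidz vdevs, so the "three two-disk raid-z sets" structure is visible up front.)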
I'm no expert in ZFS, but I think I can explain what you've created there:
# zpool create temparray1 mirror c1t2d0 c1t4d0 mirror c1t3d0 c1t5d0 mirror
c1t6d0 c1t8d0
This creates a stripe of three mirror sets (or, in old-fashioned terms, you have
a raid-0 stripe made up of three raid-1 sets of two
Can anybody tell me how to measure the raw performance of a new system I'm
putting together? I'd like to know what it's capable of in terms of IOPS and
raw throughput to the disks.
I've seen Richard's raidoptimiser program, but I've only seen results for
random read iops performance, and I'm p
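(Not a real benchmark, but for a quick first look at raw sequential throughput I tend to run a large dd while watching zpool iostat from another terminal; pool name and sizes below are arbitrary:

# zpool iostat tank 5
# dd if=/dev/zero of=/tank/ddtest bs=1024k count=8192

That gives a rough ceiling for streaming writes; random IOPS needs something like filebench or Richard's raidoptimiser, as mentioned elsewhere in the thread.)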
On Sun, Jul 6, 2008 at 10:13 AM, Rob Clark <[EMAIL PROTECTED]> wrote:
> Is there a way to get mirror performance (double speed) with raid integrity
> (one drive can fail and you are OK)? I can't imagine that nobody would want
> that configuration.
That's what mirroring does - y
On Sun, Jul 6, 2008 at 10:27 AM, Jeff Bonwick <[EMAIL PROTECTED]> wrote:
> I would just swap the physical locations of the drives, so that the
> second half of the mirror is in the right location to be bootable.
> ZFS won't mind -- it tracks the disks by content, not by pathname.
> Note that SATA
> Peter Tribble wrote:
> Because what you've created is a pool containing two
> components:
> - a 3-drive raidz
> - a 3-drive mirror
> concatenated together.
>
OK. It seems odd that ZFS would allow that (would anyone want that configuration
instead of what I am attempting to do?).
> I think that w
I would just swap the physical locations of the drives, so that the
second half of the mirror is in the right location to be bootable.
ZFS won't mind -- it tracks the disks by content, not by pathname.
Note that SATA is not hotplug-happy, so you're probably best off
doing this while the box is powe
On Sun, Jul 6, 2008 at 8:48 AM, Rob Clark <[EMAIL PROTECTED]> wrote:
> I am new to SX:CE (Solaris 11) and ZFS but I think I found a bug.
>
> I have eight 10GB drives.
...
> I have 6 remaining 10 GB drives and I desire to "raid" 3 of them and "mirror"
> them to the other 3 to give me raid security
I am new to SX:CE (Solaris 11) and ZFS but I think I found a bug.
I have eight 10GB drives.
When I installed SX:CE (snv_91) I chose "3" ("Solaris Interactive Text (Desktop
Session)"), and the installer found all my drives, but I told it to only use
two, giving me a 10GB mirrored rpool.
Immediat