Hi,
I recently upgraded from 2009.06 to b131 (mainly to get dedup
support). The upgrade to b131 went fairly smoothly, but then I ran
into an issue trying to get the old datasets snapshotted and
send/recv'd to dedup the existing data. Here are the steps I ran:
zfs snapshot -r data/me...@prereplica
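The rewrite-for-dedup approach described above can be sketched as follows; the pool and dataset names here are placeholders (the original dataset name is truncated in the message), not the poster's actual ones:

```shell
# Hedged sketch: re-writing existing data through send/recv so it is
# deduplicated on write. "data/home" is a placeholder dataset name.
zfs set dedup=on data                    # dedup only applies to new writes
zfs snapshot -r data/home@prereplica     # recursive snapshot of the old data
zfs send -R data/home@prereplica | zfs recv data/home-dedup
                                         # receiving rewrites every block,
                                         # so the copy passes through dedup
```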
> If you've got hardware raid-5, why not just run
> regular (non-raid)
> pools on top of the raid-5?
>
> I wouldn't go back to JBOD. Hardware arrays offer a
> number of
> advantages over JBOD:
> - disk microcode management
> - optimized access to storage
> - large write cache
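The suggestion quoted above (a plain, non-redundant pool on top of the hardware RAID-5 LUN) can be sketched like this; the device name is a placeholder:

```shell
# Hedged sketch: let the hardware array provide redundancy, and give
# ZFS the whole LUN as a single vdev. "c2t0d0" is a placeholder.
zpool create data c2t0d0
zpool status data          # shows a single-device, non-redundant pool
```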
> Is this a bug?
>
>
>                capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> zfs         14.2G  1.35T      0     62      0  5.46M
>   raidz2    14.2G  1.35T      0     62      0  5.46M
>     c0d0        -
> Just some random thoughts on this...
>
> One of the initial design criteria of ZFS is that
> it's simple. If it's
> not, that was a bug...
>
> If we need tutorials to use the zfs commands, has
> something missed the
> mark?
>
> If the information that is needed to do the work is
> NOT in the m
> Currently the Genunix facility, including the wiki,
> is a resource for the
> OpenSolaris Community, run by OpenSolaris community
> members. Anyone
> wishing to contribute OpenSolaris related content is
> welcome to make use
> of it.
>
> Down the road, Sun may decide to provide a wiki
> facilit
I was browsing around the OpenSolaris pages and came across the OpenSolaris wiki at genunix (http://www.genunix.org/wiki/). I did a
quick search for "zfs" on the wiki and it returned no results. Are there plans
for putting content related to ZFS on a wiki? Would the OpenSolaris wiki be the
des
By the way, putting the URL for the ZFS version information in the subject line of
the topic prevents a user from clicking the thread and reading it. It
takes you directly to the URL instead of the thread. The workaround is to
click the user name of the person who made the last post, which ta
> Hi experts,
>
> I have a few questions about ZFS and virtualization:
>
> [b]Virtualization and performance[/b]
> When filesystem traffic occurs on a zpool containing
> only spindles dedicated to that zpool, I/O can be
> distributed evenly. When the zpool is located on a
> LUN sliced from a RAID group
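One way to see whether I/O is being spread evenly, per the question quoted above, is to watch per-vdev statistics; the pool name here is a placeholder:

```shell
# Hedged sketch: per-device columns reveal whether traffic is
# distributed across spindles or funnelled through one sliced LUN.
zpool iostat -v tank 5     # "tank" is a placeholder pool name
```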
Mark,
I might know a little bit more about what's causing this particular panic. I'm
currently running OpenSolaris as a guest OS under VMware Server RC1 on a CentOS
4.3 host OS. I have three 300GB (~280GB usable) SATA disks in the server that
are all formatted under CentOS like so:
[EMAIL PROT
Do you want the vmcore file from /var/crash or something else? Where can I
upload it to, supportfiles.sun.com? The bzip'd vmcore file is ~35MB.
Thanks,
Nate
This message posted from opensolaris.org
I believe ZFS is causing a panic whenever I attempt to mount an iso image (SXCR
build 39) that happens to reside on a ZFS file system. The problem is 100%
reproducible. I'm quite new to OpenSolaris, so I may be incorrect in saying
it's ZFS' fault. Also, let me know if you need any additional
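For context, mounting an ISO image on Solaris goes through a lofi device; a hedged sketch of the reproduction (the image path and mount point are placeholders, and on the affected build the mount step reportedly triggers the panic when the image resides on ZFS):

```shell
lofiadm -a /tank/images/build39.iso   # attach image; prints e.g. /dev/lofi/1
mount -F hsfs -o ro /dev/lofi/1 /mnt  # mount the ISO read-only
```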
Thanks for the help!
So I BFU'd to the following:
bash-3.00# uname -a
SunOS mathrock-opensolaris 5.11 opensol-20060605 i86pc i386 i86pc
I blew away all my old ZFS pools and created a new raidz pool with my three
disks. The file system now correctly reports the right size, and df/du report
th
I'm seeing odd behaviour when I create a ZFS raidz pool using three disks. The
output of "zpool status" shows the pool size as the size of the three disks
combined (as if it were a RAID 0 volume). This isn't expected behaviour, is it?
When I create a mirrored volume in ZFS everything is as one
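The size difference is expected: zpool status and zpool list report the raw capacity of all devices, parity included, while zfs list and df show the usable space after parity. A hedged sketch with placeholder device names:

```shell
zpool create tank raidz c0d0 c1d0 c2d0
zpool list tank   # raw size: roughly 3 x disk size (includes parity)
zfs list tank     # usable space: roughly 2 x disk size
```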