On Sat, Jul 5, 2008 at 9:48 PM, Brian Hechinger <[EMAIL PROTECTED]> wrote:
> On Sat, Jul 05, 2008 at 03:03:34PM -0500, Mike Gerdts wrote:
>> $ kstat -p ::vopstats_zfs:{nread,read_bytes,nwrite,write_bytes}
>> unix:0:vopstats_zfs:nread 418787
>> unix:0:vopstats_zfs:
ostat. While iostat shows physical reads and writes only, "zpool
iostat" and fsstat show reads that are satisfied by a cache and never
result in physical I/O activity. As such, a workload that looks
write-intensive on UFS when monitored via iostat may seem to have
shifted to being very
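As a quick illustration of the point above (a sketch, not from the
original message; "tank" is a placeholder pool name), logical and
physical activity can be watched side by side with:
$ fsstat zfs 5          # logical (VFS-level) reads and writes
$ zpool iostat tank 5   # pool-level I/O
$ iostat -xn 5          # physical device I/O only
Reads satisfied from cache appear in the fsstat numbers but generate
no traffic in iostat.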
elling/entry/zfs_raid_recommendations_space_performance
http://blogs.sun.com/relling/entry/a_story_of_two_mttdl
http://opensolaris.org/jive/thread.jspa?threadID=65564#255257
--
Mike Gerdts
http://mgerdts.blogspot.com/
Good explanation at
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-April/046937.html
--
Mike Gerdts
http://mgerdts.blogspot.com/
suggest it is
fixed) proper dependencies do not exist to prevent paging activity
after boot from trashing the crash dump in a shared swap+dump device -
even when savecore is enabled. It is only by luck that you get
anything out of it. Arguably this should be fixed by proper SMF
depend
ly as physically large as the combined size of your fridge,
your mom's fridge, and those of your three best friends that are out
of college and have fridges significantly larger than a keg.
2. "Shared" as in one server's behavior can and may be somewhat likely
t
On Tue, Jul 1, 2008 at 7:31 AM, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> Mike Gerdts wrote:
>>
>> On Tue, Jul 1, 2008 at 5:56 AM, Darren J Moffat <[EMAIL PROTECTED]>
>> wrote:
>>>
>>> Instead we should take it completely out of their hands an
on
"why does my machine suck?" can say that it has been excessively short
on memory X times in recent history. Any of these approaches is miles
above the Linux approach of finding a memory hog to kill.
--
Mike Gerdts
http://mgerdts.blogspot.com/
problems with I/O errors when doing a stat()
of a file. Repeated tries fail, but a reboot seems to clear it.
zpool scrub reports no errors and the pool consists of a single mirror
vdev. I haven't filed a bug on this yet.
--
Mike Gerdts
http:
On Mon, Jun 30, 2008 at 9:19 AM, jan damborsky <[EMAIL PROTECTED]> wrote:
> Hi Mike,
>
>
> Mike Gerdts wrote:
>>
>> On Wed, Jun 25, 2008 at 11:09 PM, Jan Damborsky <[EMAIL PROTECTED]>
>> wrote:
>>>
>>> Thank you very much all for this v
The fact that the system is not resilient to any misstep is not a
bug. If you removed /sbin/init the system would be hosed even worse,
but you would have gotten no error message before the reboot.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Wed, Jun 25, 2008 at 3:36 PM, Mike Gerdts <[EMAIL PROTECTED]> wrote:
> On Wed, Jun 25, 2008 at 3:09 PM, Robert Milkowski <[EMAIL PROTECTED]> wrote:
>> Well, I've seen core dumps bigger than 10GB (even without ZFS)... :)
>
> Was that the size in the dump device o
eported
on the console after the dump completed.
--
Mike Gerdts
http://mgerdts.blogspot.com/
to
enable the (as yet non-existent) svc:/system/savecore:default.
--
Mike Gerdts
http://mgerdts.blogspot.com/
According to the timestamps in my prompt, I'm
thinking that virtualbox reset the time to zero while the command was
running. This seems to happen from time to time, but this is the most
entertaining result I have seen.
--
Mike Gerdts
http://mgerd
ument can be made for VMware, LDoms, Xen, etc., but those
are much more likely to use jumpstart for installations than
laptop-based VM's.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Tue, Jun 24, 2008 at 7:24 AM, Gary Mills <[EMAIL PROTECTED]> wrote:
> On Mon, Jun 23, 2008 at 10:25:09PM -0500, Mike Gerdts wrote:
>>
>> Really it boils down to this: lots of file systems to hold the OS adds
>> administrative complexity and rarely saves more work than it c
ems then patch. Of course today's development work will make the
3 hour outage for patching a thing of ancient history as well.
--
Mike Gerdts
http://mgerdts.blogspot.com/
th /. For
example, /var/sadm has lots of information about which packages and
patches are installed. There is a lot of other stuff that shouldn't
be snapshotted with it. I have proposed /var/share to cope with this.
http://mgerdts.blogspot.com/2008/03/future-of-opensolaris-boot-
y supported.
3. There were numerous complaints of repeated timeouts when the snv_90
packages were released, forcing the upgrade to be restarted from the
beginning.
--
Mike Gerdts
http://mgerdts.blogspot.com/
s should
be easier and safer than patching is today.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Wed, Jun 11, 2008 at 12:58 AM, Robin Guo <[EMAIL PROTECTED]> wrote:
> Hi, Mike,
>
> It's like 6452872; it needs enough space for 'zfs promote'.
Not really - in 6452872 a file system is at its quota before the
promote is issued. I expect that a promote may cause
Is it a bug in the
documentation or zfs?
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Fri, Jun 06, 2008 at 03:43:29PM -0700, eric kustarz wrote:
>
> On Jun 6, 2008, at 3:27 PM, Brian Hechinger wrote:
>
> > On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
> >>
> clients do not. Without per-filesystem mounts, 'df' on the client
> will not report correct da
On Fri, Jun 06, 2008 at 06:27:01PM -0400, Brian Hechinger wrote:
> On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
> >
> > >> clients do not. Without per-filesystem mounts, 'df' on the client
> > >> will not report correct data though.
> > >
> > > I expect that mirror mounts will be
ments
and suggestions and wrote a blog entry.
http://mgerdts.blogspot.com/2008/03/future-of-opensolaris-boot-environment.html
--
Mike Gerdts
http://mgerdts.blogspot.com/
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 05, 2008 1:56 PM
To: Ellis, Mike
Cc: ZFS discuss
Subject: Re: [zfs-discuss] ZFS root finally here in SNV90
Mike,
As we discussed, you can't currently break out other datasets besides
/var. I'll add thi
In addition to the standard "containing the carnage" arguments used to
justify splitting /var/tmp, /var/mail, /var/adm (process accounting
etc), is there an interesting use-case where one would split out /var
for "compression reasons" (as in, turn on compression for /var so that
process accounting,
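(A sketch, not from the original message; the dataset name below is a
made-up example.) If /var were split out, enabling compression for it
would just be a per-dataset property:
# zfs create -o compression=on rpool/ROOT/be1/var
# zfs get compression,compressratio rpool/ROOT/be1/var
Accounting files, logs, and the like tend to compress well, which is
the sort of use-case being asked about.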
The FAQ document (
http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/ ) has a
jumpstart profile example:
install_type initial_install
pool newpool auto auto auto mirror c0t0d0 c0t1d0
bootenv installbe bename sxce_xx
The B90 jumpstart "check" program (SPARC) flags th
On Sat, May 31, 2008 at 9:38 PM, Mike Gerdts <[EMAIL PROTECTED]> wrote:
> $ find /ws/mount/onnv-gate/usr/src/uts/sun4u/serengeti/unix
> /ws/mount/onnv-gate/usr/src/uts/sun4u/serengeti/unix
> /ws/mount/onnv-gate/usr/src/uts/sun4u/serengeti/unix/.make.state.lock
> /ws/mount/onn
mittedly, it is sloppy to just get
rid of the undo.Z file - the existence of the other related
directories (save/) may trip something up.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Sat, May 31, 2008 at 8:48 PM, Mike Gerdts <[EMAIL PROTECTED]> wrote:
> I just experienced a zfs-related crash. I have filed a bug (don't
> know number - grumble). I have a crash dump but little free space. If
> someone would like some more info from the core, please let m
default
pool0  bootfs       -        default
pool0  delegation   on       default
pool0  autoreplace  off      default
pool0  temporary    off      default
--
Mike Gerdts
http://mgerdts.blogspot.com/
ll/copyright
var/sadm/pkg/SPROcc/save/pspool/SPROcc/install/depend
var/sadm/pkg/SPROcc/save/pspool/SPROcc/pkginfo
var/sadm/pkg/SPROcc/save/pspool/SPROcc/pkgmap
Notice the lack of undo.Z files (and associated patch directories),
but the rest looks the same.
--
Mike Ge
says that this is a good idea - but I haven't
seen any better method for getting rid of the cruft that builds up in
/var/sadm either.
I suspect that further discussion on this topic would be best directed
to [EMAIL PROTECTED] or sun-managers mailing list (see
http://www.sunmanagers.
dac_read and
file_dac_write. A backup program that has those privileges has
everything it needs to gain full root access.
I wish that there was a flag to open(2) to say not to update the atime
and that there was a privilege that could be granted to allow this
flag without granting file_dac_write
Is there a way to create a zfs file system
(e.g. zpool create boot /dev/dsk/c0t0d0s1)
Then, (after vacating the old boot disk) add another
device and make the zpool a mirror?
(as in: zpool create boot mirror /dev/dsk/c0t0d0s1 /dev/dsk/c1t0d0s1)
Thanks!
emike
This message posted from op
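(Sketch of the usual answer, not part of the original posts; device
names are the ones from the question.) A single-device pool can be
converted to a mirror later with zpool attach rather than recreating
it:
# zpool create boot c0t0d0s1
# ... later, after vacating the old boot disk ...
# zpool attach boot c0t0d0s1 c1t0d0s1
# zpool status boot    # watch the new side resilver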
GB. However, it is *a lot* less than feeding system-board DIMM
slots to workloads that use a lot of RAM but are fairly inactive. As
such, a $10k PCIe card may be able to allow a $42k 64 GB T5240 to
handle 5+ times the number of not-too-busy J2EE instances.
If anyone's done any modelling or test
I like the link you sent along... They did a nice job with that.
(but it does show that mixing and matching vastly different drive-sizes
is not exactly optimal...)
http://www.drobo.com/drobolator/index.html
Doing something like this for ZFS allowing people to create pools by
mixing/match
ot;zfs iostat", or how can I get the stats with general
> systemtools of a particular directory?
>
> any idea would be appreciated
> karsten
Have you tried fsstat? I think it will do what you are looking for
whether it is zfs, ufs, tmpf
> Mike DeMarco wrote:
> > I currently have a zpool with two 8Gbyte disks in
> it. I need to replace them with a single 56Gbyte
> disk.
> >
> > with veritas I would just add the disk in as a
> mirror and break off the other plex then destroy it.
> >
> >
I currently have a zpool with two 8Gbyte disks in it. I need to replace them
with a single 56Gbyte disk.
With Veritas I would just add the disk in as a mirror and break off the other
plex then destroy it.
I see no way of being able to do this with zfs.
Being able to migrate data without having
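(Not from the original thread; pool and device names are placeholders,
and a recursive send requires a ZFS version that supports 'zfs send
-R'.) One workaround is to build a new pool on the large disk and copy
the data over:
# zpool create newpool c2t0d0
# zfs snapshot -r oldpool@migrate
# zfs send -R oldpool@migrate | zfs receive -F -d newpool
# zpool destroy oldpool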
es over between
systems independently, either I need to have a zpool per zone or I need
to have per-dataset replication. Considering that with some workloads
20+ zones on a T2000 is quite feasible, a T5240 could be pushing 80+
zones and as such a relatively large number of zpools.
--
Mike Gerdts
http
Could someone kindly provide some details on using a zvol in sparse-mode?
Wouldn't the COW nature of zfs (assuming COW still applies on ZVOLS) quickly
erode the sparse nature of the zvol?
Would sparse data-presentation only work by delegating a part of a zpool to a
zone, but that's at the file-
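(For reference, a sketch rather than part of the original message;
names are placeholders.) A sparse zvol is simply one created without a
reservation, via -s:
# zfs create -s -V 100g pool/sparsevol
# zfs get volsize,reservation,used pool/sparsevol
Space is charged only as blocks are actually written.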
mport -R" mentioned the temporary attribute.
--
Mike Gerdts
http://mgerdts.blogspot.com/
e system reboots. This property can also be referred to by its
shortened column name, "temp".
> (I am trying to move this thread over to zfs-discuss, since I originally
> posted to the wrong alias)
storage-discuss trimmed in my reply.
--
Mike Gerdt
Activating the updated OS should take only a few
seconds longer than a standard "init 6". Failback is similarly easy.
I can't remember the last time I swapped physical drives to minimize
the outage during an upgrade.
--
Mike Gerdts
http://mgerdts.blogspot.com/
ol and then import it to get
the additional space to be seen.
--
Mike Gerdts
http://mgerdts.blogspot.com/
Use zpool replace to swap one side of the mirror with the iscsi lun.
-- mikee
- Original Message -
From: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
To: zfs-discuss@opensolaris.org
Sent: Tue Jan 15 08:46:40 2008
Subject: Re: [zfs-discuss] Moving zfs to an iscsci equallogic LUN
What would be
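(Sketch only, not from the original messages; 'tank' and the device
names are placeholders, with c3t0d0 standing in for the iSCSI LUN.)
Swapping one side of an existing mirror onto the new LUN would look
like:
# zpool status tank                # note the current mirror devices
# zpool replace tank c0t1d0 c3t0d0
# zpool status tank                # wait for the resilver to complete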
except in my experience it is piss poor slow... but yes it is another
option that is -basically- built on standards (i say that only because
it's not really a traditional filesystem concept)
On 1/14/08, David Magda <[EMAIL PROTECTED]> wrote:
>
> On Jan 14, 2008, at 17:15, mike
On 1/14/08, eric kustarz <[EMAIL PROTECTED]> wrote:
>
> On Jan 14, 2008, at 11:08 AM, Tim Cook wrote:
>
> > www.mozy.com appears to have unlimited backups for 4.95 a month.
> > Hard to beat that. And they're owned by EMC now so you know they
> > aren't going anywhere anytime soon.
mozy's been oka
usy?
# fuser /homes
If you still can't resolve it
# zfs set mountpoint=/somewhere_else homespool/homes
# zfs mount -a (not sure this is needed)
# cd /somewhere_else
--
Mike Gerdts
http://mgerdts.blogspot.com/
modes (missing license key, especially after major system
changes; on-disk corruption)
- Opportunities to do things previously not possible
ZFS doesn't win on many of those, but with the improvements that I
have seen throughout the storage stack it is somewhat likely that the
require
  mirror      ONLINE       0     0     0
    c0t1d0s7  ONLINE       0     0     0
    c0t0d0s7  ONLINE       0     0     0
errors: No known data errors
I'll keep the crash dump around for a while in the event that someone
has interest in digging into it more.
--
Mike Gerdts
ht
s failing...
What command line is used to compile the code? I would guess that you
don't have large file support. A variant of the following would
probably be good:
cc -c $CFLAGS `getconf LFS_CFLAGS` myprog.c
cc -o myprog $LDFLAGS `getconf LFS_LDFLAGS` myprog.o
re hours of runtime and likely more space in production use than
ZFS.
I think that ZFS holds a lot of promise for shared-nothing database
clusters, such as is being done by Greenplum with their extended
variant of Postgres.
--
Mike Gerdts
http://m
the usage error, man page, etc. would be
appropriate too. You can see a few other "#ifdef XPG4" blocks that
show the quite small differences between the two variants.
Also... since there is nothing zfs-specific here, opensolaris-code may
be a more appropriate forum.
--
Mike Gerdts
http:/
wing:
$ ls df*
df        df.o        df.po.xpg4  df.xpg4
df.c      df.po       df.xcl      df.xpg4.o
It looks to me as though df becomes /usr/bin/df and df.xpg4 becomes
/usr/xpg4/bin/df.
--
Mike Gerdts
http://mgerdts.blogspot.com/
19246.
>
> Is there a patch that was not included with 10_Recommended?
>
>
> This message posted from opensolaris.org
--
Mike Dotson
ation to me.
Am I wrong?
--
Mike Gerdts
http://mgerdts.blogspot.com/
you can likely get the same IDR
(errr, an IDR with the same fix - mine was SPARC) to see if it
addresses your problem.
--
Mike Gerdts
http://mgerdts.blogspot.com/
I actually have a related motherboard, chassis, dual power-supplies
and 12x400 gig drives already up on ebay too. If I recall Areca cards
are supported in OpenSolaris...
http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&item=300172982498
On 11/22/07, Jason P. Warr <[EMAIL PROTECTED]> wrote:
> If you
ter/failover, etc), could be
either or.
>
> Thanks for any help and advice.
>
> Brian.
>
>
--
Mike Dotson
On Thu, 2007-11-15 at 05:25 -0800, Boris Derzhavets wrote:
> Thank you very much Mike for your feedback.
> Just one more question.
> I noticed five devices under /dev/rdsk:
> c1t0d0p0
> c1t0d0p1
> c1t0d0p2
> c1t0d0p3
> c1t0d0p4
> been created by system immediately after
none requested
config:
        NAME        STATE     READ WRITE CKSUM
        lpool       ONLINE       0     0     0
          c0d0p4    ONLINE       0     0     0
errors: No known data errors
So creating the pool in my case would be: zpool create lpool c0d0p4
--
Mike Dotson
(CIFS,
NFS, iSCSI, FC) file and block serving with remote replication seem
intuitive. Kinda makes you understand why Netapp no longer feels that
they can compete on features + ease of use.
--
Mike Gerdts
http://mgerdts.blogspot.com/
> Mike DeMarco wrote:
> > Looking for a way to mount a zfs filesystem ontop
> of another zfs
> > filesystem without resorting to legacy mode.
>
> doesn't simply 'zfs set mountpoint=...' work for you?
>
> --
>
Looking for a way to mount a zfs filesystem on top of another zfs
filesystem without resorting to legacy mode.
This message posted from opensolaris.org
Ideally it would go like:
host1# zpool export pool
host2# zpool import pool
If you know (really know) that it is offline on the other server (e.g. you
can verify the host is dead), you can use:
# zpool import -f
Mike
On 10/19/07, Mertol Ozyoney <[EMAIL PROTECTED]> wrote:
&
On 10/18/07, Gary Mills <[EMAIL PROTECTED]> wrote:
> What's the command to show cross calls?
mpstat will show it on a system basis.
xcallsbypid.d from the DTraceToolkit (ask google) will tell you which
PID is responsible.
--
Mike Gerdts
http://mgerdt
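(A sketch, not from the post above.) If the DTraceToolkit is not
handy, a rough equivalent one-liner that attributes cross calls to
processes is:
# dtrace -n 'sysinfo:::xcalls { @[pid, execname] = count(); }'
Press Ctrl-C to print the per-process counts.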
nd thread migrations, and ...) are much
cheaper on systems with lower latency between CPUs.
--
Mike Gerdts
http://mgerdts.blogspot.com/
fact-checking, the code rarely finds its way in front of
you.
--
Mike Gerdts
http://mgerdts.blogspot.com/
ed was that any application that linked against the included
version of OpenSSL automatically gets to take advantage of the N2
crypto engine, so long as it is using one of the algorithms supported
by the N2 engine.
--
Mike Gerdts
http://mgerdts.blogspot.com/
y changes the importance of 2 a bit.
--
Mike Gerdts
http://mgerdts.blogspot.com/
LABEL 3
version=3
name='tank'
state=0
txg=43
pool_guid=8219303556773256880
top_guid=4844356610838567439
guid=4844356610838567439
vdev_tree
type='disk'
id=0
df now turns
into 40+ screens[1] on the default sized terminal window.
1. If you are in this situation, there is a good chance that the
formatting of df causes line folding or wrapping that doubles the
number of lines to 80+ screens of df output.
--
Mike Gerdts
http://
On 9/24/07, Paul B. Henson <[EMAIL PROTECTED]> wrote:
> but checking the actual release notes shows no ZFS mention. 3.0.26 to
> 3.2.0? That seems an odd version bump...
3.0.x and before are GPLv2. 3.2.0 and later are GPLv3.
http://news.samba.org/announcements/samba_gplv3/
--
Mike
le system cloning
capabilities play in coordination with iSCSI.
Oh, wait! What if the NAS device runs out of space while I'm
patching? Better rule out the thin provisioning capabilities of the
HDS storage that Sun sells as well.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On 9/20/07, Matthew Flanagan <[EMAIL PROTECTED]> wrote:
> Mike,
>
> I followed your procedure for cloning zones and it worked
> well up until yesterday when I tried applying the S10U4
> kernel patch 120011-14 and it wouldn't apply because I had
> my zones on zfs :(
Th
It worked quite well for giving one place to
administer the location mapping while providing transparency to the
end-users.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
ne
zoneadm -z master attach
zonecfg -z newzone create -t master
# change IPs, etc.
zoneadm -z newzone attach
zoneadm -z newzone boot -s
zlogin newzone sys-unconfig
zoneadm -z newzone boot
zlogin -C newzone
--
Mike Gerdts
http://mgerdts.blogspot.com/
roject in the
works (Snap Upgrade) that is very much targeted at environments that
use zfs, I would be surprised to see zfs support come into live
upgrade.
--
Mike Gerdts
http://mgerdts.blogspot.com/
Yup...
With Leadville/MPXIO targets in the 32-digit range, identifying the "new
storage/LUNs" is not a trivial operatrion.
-- MikeE
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Russ
Petruzzelli
Sent: Monday, September 17, 2007 1:51 PM
To: zfs-discus
bet that Live Upgrade never does, but Snap Upgrade does.
http://opensolaris.org/os/project/caiman/Snap_Upgrade/
It is likely worth considering more of the roadmap when reading that page.
http://opensolaris.org/os/project/caiman/Roadmap/
--
Mike Gerdts
http:/
f various failure modes.
Of course, I can see how writes could be batched, coalesced, and applied
in a journaled manner such that each batch fully applies or is rolled
back on the target. I haven't heard of this being done.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
" so that the array can reclaim
the space? I could see this as useful when doing re-writes of
data (e.g. crypto rekey) to concentrate data that had become
scattered into contiguous space.
--
Mike Gerdts
http://mgerdts.blogspot.com/
have you tried zpool clear?
Peter Tribble wrote:
On 9/13/07, Solaris <[EMAIL PROTECTED]> wrote:
Try exporting the pool, then importing it. I have seen this after moving disks
between systems, and on a couple of occasions just rebooting.
Doesn't work. (How can you export something that is
> On 9/12/07, Mike DeMarco <[EMAIL PROTECTED]> wrote:
>
> > Striping several disks together with a stripe width
> that is tuned for your data
> > model is how you could get your performance up.
> Striping has been left out
> > of the ZFS model for some reason
> On 11/09/2007, Mike DeMarco <[EMAIL PROTECTED]>
> wrote:
> > > I've got 12Gb or so of db+web in a zone on a ZFS
> > > filesystem on a mirrored zpool.
> > > Noticed during some performance testing today
> that
> > > its i/o bound but
&
> I've got 12Gb or so of db+web in a zone on a ZFS
> filesystem on a mirrored zpool.
> Noticed during some performance testing today that
> its i/o bound but
> using hardly
> any CPU, so I thought turning on compression would be
> a quick win.
If it is I/O bound, won't compression make it worse?
>
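(For reference, not from the thread; the dataset name is a made-up
example.) Compression is a per-dataset property and only affects newly
written blocks, so its benefit can be measured after the fact:
# zfs set compression=on tank/zones/webzone
# zfs get compression,compressratio tank/zones/webzone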
ave reliable
backups, etc. Pushing that out to desktop or laptop machines is not
really a good idea.
--
Mike Gerdts
http://mgerdts.blogspot.com/
"cp ~course/hugefile ~"
become not so expensive - you would be charging quota to each user but
only storing one copy. Depending on the balance of CPU power vs. I/O
bandwidth, compressed zvols could be a real win, more than paying back
the space required to have a few
On 9/7/07, Mike Gerdts <[EMAIL PROTECTED]> wrote:
> For me, quotas are likely to be a pain point that prevents me from
> making good use of snapshots. Getting changes in application teams'
> understanding and behavior is just too much trouble. Others are:
not to mention the
at paging (file system and pager)
will begin soon. This may be fine on a file server, but it really
messes with me if it is a J2EE server and I'm trying to figure out how
many more app servers I can add.
I have a lot of hopes for ZFS and have used it with success (and
On 9/6/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> This is my personal opinion and all, but even knowing that Sun
> encourages open conversations on these mailing lists and blogs it seems to
> falter common sense for people from @sun.com to be commenting on this
> topic. It seems like
On 9/5/07, Joerg Schilling <[EMAIL PROTECTED]> wrote:
> As I wrote before, my wofs (designed and implemented 1989-1990 for SunOS 4.0,
> published May 23rd 1991) is copy on write based, does not need fsck and always
> offers a stable view on the media because it is COW.
Side question:
If COW is su
From the primary LDOM, there is no corruption. An
unexpected reset (panic, I believe) of the primary LDOM seems to have
caused the corruption in the guest LDOM. What was that about having
the redundancy as close to the consumer as possible? :)
--
Mike Gerdts
http://mgerdts.blogspot.com/
On 8/29/07, Jeffrey W. Baker <[EMAIL PROTECTED]> wrote:
> I have a lot of people whispering "zfs" in my virtual ear these days,
> and at the same time I have an irrational attachment to xfs based
> entirely on its lack of the 32000 subdirectory limit. I'm not afraid of
> ext4's newness, since real
"zpool is corrupt" "restore from backups"
S10u4 Beta, snv69 and I think snv59:
panic - S10u4 backtrace is very different from snv*
--
Mike Gerdts
http://mgerdts.blogspot.com/