write it up for my blog, which
can then be pointed to when this comes up again.
Thanks in advance,
Steve
--
Stephen Green // stephen.gr...@sun.com
Principal Investigator \\ http://blogs.sun.com/searchguy
The AURA Project // Voice
I guess I should turn off the auto snapshots and clear out the old ones,
but those snapshots saved my behind when the wife's Mac went crazy...
Thanks!
Steve
--
Stephen Green // stephen.gr...@sun.com
Principal Investigator \\ http://blogs.sun.com/sea
I'm having trouble booting with one of my zpools. It looks like this:
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c4d0    ONLINE
Greg Mason wrote:
How about the bug "removing slog not possible"? What if this slog
fails? Is there a plan for such a situation (the pool becomes inaccessible
in this case)?
You can "zpool replace" a bad slog device now.
And I can testify that it works as described.
St
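(For the archives, a rough sketch of what that slog replacement looks like; the pool and device names below are made up, not taken from the original thread:)

$ zpool status tank                        # the failed log device shows up under the "logs" section
$ pfexec zpool replace tank c7t1d0 c8t0d0  # swap the dead log device for the new one
$ zpool status tank                        # the replacement resilvers like any other vdev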
r alternative is to clone one of the volume's snapshots
from a time the backup was working and then see if that can be mounted.
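(The clone-and-check approach is only a couple of commands; the dataset and snapshot names below are invented, and if the backup lives on a zvol it would need to be re-shared to the Mac rather than mounted locally:)

$ zfs list -t snapshot -r tank/macbackup             # pick a snapshot from before the trouble started
$ pfexec zfs clone tank/macbackup@2009-08-01 tank/macbackup-test
$ zfs get mounted,mountpoint tank/macbackup-test     # confirm the clone came up and where it lives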
Any advice would be greatly appreciated.
Steve
--
Stephen Green // stephen.gr...@sun.com
Principal Investigator \\ http://blogs.su
Stephen Green wrote:
I'll let you know
how it works out. Suggestions as to pre/post installation IO tests
welcome.
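(A crude before/after check, for anyone following along at home; the file name and sizes are arbitrary, and plain dd only exercises streaming I/O, not the synchronous writes a slog actually helps with:)

$ dd if=/dev/zero of=/tank/ddtest bs=1024k count=4096   # streaming write, roughly 4 GB
$ dd if=/tank/ddtest of=/dev/null bs=1024k              # streaming read back
$ zpool iostat -v tank 5                                # per-vdev activity while a test runs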
The installation went off without a hitch (modulo a bad few seconds
after reboot). Story here:
http://blogs.sun.com/searchguy/entry/homebrew_hybrid_storage_pool
I
Stephen Green wrote:
Also, I got my wife to agree to a new SSD, so I presume that I can
simply do the re-silver with the new drive when it arrives.
And the last thing for today, I ended up getting:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820609330
which is 16GB and should be
Stephen Green wrote:
Oh, and for those following along at home, the re-silvering of the slog
to a file is proceeding well. 72% done in 25 minutes.
And, for the purposes of the archives, the re-silver finished in 34
minutes and I successfully removed the RAM disk. Thanks, Erik for the
Scott Meilicke wrote:
Note - this has a mini PCIe interface, not PCIe.
Well, that's an *excellent* point. I guess that lets that one out.
It turns out I do have an open SATA port, so I might just go for a disk
that has a SATA interface, since that should just work.
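(Assuming the SSD shows up as an ordinary SATA disk, attaching it as a dedicated log device is a one-liner; the device name below is made up, so check what format reports first:)

$ pfexec format                        # note the cNtNdN name of the new disk
$ pfexec zpool add tank log c5t0d0     # attach the SSD as a dedicated log device
$ zpool status tank                    # it appears under its own "logs" section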
I had the 64GB version
Stephen Green wrote:
Thanks for the advice, I think it might be time to convince the wife
that I need to buy an SSD. Anyone have recommendations for a reasonably
priced SSD for a home box?
For example, does anyone know if something like:
http://www.newegg.com/Product/Product.aspx?Item
erik.ableson wrote:
On 7 août 09, at 02:03, Stephen Green wrote:
Man, that looks so nice I think I'll change my mail client to do dates
in French :-)
Now my only question is: what do I do when it's done? If I reboot
and the ram disk disappears, will my tank be dead? Or wi
ally care about to
another pool (the Mac's already been backed up to a USB drive).
Have I meddled in the affairs of wizards? Is ZFS subtle and quick to anger?
Steve
--
Stephen Green
http://blogs.sun.com/searchguy
Darren J Moffat wrote:
Stephen Green wrote:
stgr...@blue:~$ pgrep -lf zfs
7471 zfs create tank/mysql
stgr...@blue:~$ pfexec truss -p 7471
door_call(7, 0x080F7008)        (sleeping...)
I suspect this is probably a nameservice lookup call running;
'pfiles 7471' should confirm.
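(For anyone hitting this later: pfiles on the hung process shows where that door leads; a door into nscd is the giveaway that the create is blocked on a name-service lookup, e.g. resolving a user or group:)

$ pfexec pfiles 7471       # look at fd 7: if it reports a door to nscd, the zfs create is
                           # waiting on a name-service lookup rather than on the pool itself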
Looks
On Jun 1, 2009, at 4:57 AM, Darren J Moffat wrote:
Stephen Green wrote:
Hi, folks. I just built a new box and I'm running the latest
OpenSolaris bits. uname says:
SunOS blue 5.11 snv_111b i86pc i386 i86pc Solaris
I just did an image-update last night, but I was seeing this
probl
Hi, folks. I just built a new box and I'm running the latest OpenSolaris
bits. uname says:
SunOS blue 5.11 snv_111b i86pc i386 i86pc Solaris
I just did an image-update last night, but I was seeing this problem in
111a too.
I built myself a pool out of four 1TB disks (WD Caviar Green, if tha
We have a pair of 3511s that are host to a couple of ZFS filesystems.
Over the weekend we had a power hit, and when we brought the server that
the 3511s are attached to back up, the ZFS filesystem was hosed. Are we
totally out of luck here? There's nothing here that we can't recover,
given enou