On 5/28/2010 1:24 PM, schatten wrote:
Hi,
whenever I create a new zfs my PC hangs at boot. Basically where the login
screen should appear. After booting from livecd and removing the zfs the boot
works again.
This also happened when I created a new zpool for the other half of my HDD.
Any idea?
Okay.
I had/have a running snv134 install on one half of my disk. I created a zfs
(zfs create rpool/VB) for my virtualbox. Then zfs set
mountpoint=/export/home/schatten/VirtulBox rpool/VB. Then a reboot and it hangs
right before the login should appear.
I removed the zfs with an OSOL livecd.
I should note that all of it works. I have access to the ZFS/zpool while
running OSOL. I can create files and stuff in the newly created zfs but the
reboot hangs. Looks like the reboot has a flaw.
Not to mention the reboot is no real reboot. 2009.06 had a reboot that powered
off the PC. snv134
Upsala.
And another note: a shutdown brings the same result. It hangs before the login
screen, so it makes no difference whether I do a reboot or a power cycle.
I also can't revert to OSOL 2009.06, as my hardware is not recognized. 2009.06
won't find my two SLI graphics cards.
On 5/29/2010 12:22 AM, schatten wrote:
Hi,
I'm running snv_134 on a 64-bit x86 motherboard with 2 SATA drives. The zpool
rpool uses the whole disk on each drive. I've installed grub on both disks, and
mirroring seems to be working great.
I just started testing what happens when a drive fails. I kicked off some
activities and unplugged
On Sat, May 29, 2010 at 12:54 AM, Matt Connolly
matt.connolly...@gmail.com wrote:
But with one of the drives unplugged, the system hangs at boot. On both
drives (with the other unplugged) grub loads, and the system starts to boot.
However, it gets stuck at the Hostname: Vault line and never
On 5/29/2010 12:48 AM, schatten wrote:
Yep, that is correct. The rpool also has stuff like swap and 1-2 other
mountpoints I forgot. Just the default installation layout.
I am really not sure if I did something wrong or if there is a bug. But if it
is a bug, why am I the only one seeing it?
Hmm
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Cassandra Pugh
I was wondering if there is a special option to share out a set of nested
directories? Currently if I share out a directory with /pool/mydir1/mydir2
on a system,
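The usual answer to this (a sketch, assuming Solaris-style ZFS with NFS sharing; pool and dataset names here are illustrative, not from the original question) is that the sharenfs property is inherited by descendant datasets, so sharing the parent shares the nested datasets too:

```shell
# Create nested datasets (names are illustrative).
zfs create -p pool/mydir1/mydir2

# Share the parent; mydir2 inherits sharenfs=on from mydir1.
zfs set sharenfs=on pool/mydir1

# Verify the property down the tree.
zfs get -r sharenfs pool/mydir1
```

One caveat: each dataset is still a separate NFS filesystem, so clients generally need NFSv4 mirror mounts (or an explicit mount per dataset) to descend from mydir1 into mydir2. If mydir2 is a plain directory rather than a dataset, it is simply part of mydir1's share and no extra option is needed.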
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Gregory J. Benscoter
After looking through the archives I haven't been able to assess the
reliability of a backup procedure which employs zfs send and recv.
If there's data corruption in the
On Thu, 20 May 2010 11:53:17 -0700, John Andrunas
j...@andrunas.net wrote:
Can I make a pool not mount on boot? I seem to recall reading
somewhere how to do it, but can't seem to find it now.
As Tomas said, export the pool before shutdown.
If you have a pool which causes unexpected trouble at
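A minimal sketch of that advice (the pool name "tank" is illustrative): an exported pool is not imported, and therefore not mounted, at the next boot.

```shell
# Export before shutdown; the pool will not auto-import at boot.
zpool export tank

# Re-import manually when it is needed again.
zpool import tank

# Alternative: leave the pool imported but keep a dataset from
# mounting automatically at boot.
zfs set canmount=noauto tank/data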
On Sat, 29 May 2010 20:34:54 +0200, Kees Nuyt k.n...@zonnet.nl wrote:
I have 6 zfs pools, and after rebooting (init 6) the vpath device path names
have changed for some unknown reason. But I can't detach, remove, and reattach
to the new device names. ANY HELP! please
pjde43m01 - - - - FAULTED -
pjde43m02 - - - - FAULTED -
On 30 maj 2010, at 01.53, morris hooten wrote:
Can you find the devices in /dev/rdsk? I see there is a path in /pseudo at
least, but the zpool import command only looks in /dev. One thing you can try
is doing this:
# mkdir /tmpdev
# ln -s /pseudo/vpat...@1:1 /tmpdev/vpath1a
And then see if 'zpool import -d /tmpdev' finds the pool.
On
Also, the zpool.cache may be out of date. To clear its entries,
zpool export poas43m01
and ignore any errors.
Then
zpool import
and see if the pool is shown as importable, perhaps with new device names.
If not, then try the zpool import -d option that Mark described.
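Put together, the sequence described above might look like this (the pool name is taken from the thread, but the /pseudo path is a placeholder for the real vpath device, which was elided in the original message):

```shell
# Drop the stale zpool.cache entry; ignore any errors.
zpool export pjde43m01

# Scan for importable pools; new device names may show up here.
zpool import

# If the devices are only visible under /pseudo, build a directory of
# symlinks and point the import there, as Mark described.
mkdir /tmpdev
ln -s /pseudo/<vpath-device> /tmpdev/vpath1a
zpool import -d /tmpdev pjde43m01
```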
On May 28, 2010, at 10:35 AM, Bob Friesenhahn wrote:
On Fri, 28 May 2010, Gregory J. Benscoter wrote:
I'm primarily concerned with the possibility of a bit flip. If this
occurs, will the stream be lost? Or will the file that the bit flip occurred
in be the only degraded file? Lastly how
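For what it's worth, a send stream is checksummed, so a single flipped bit in a stored stream typically makes zfs receive reject the whole stream rather than degrading one file. A common mitigation (a sketch; pool, dataset, and host names are illustrative) is to pipe send straight into receive, so corruption is detected at transfer time and only that transfer needs retrying:

```shell
# Full initial replication.
zfs snapshot rpool/data@backup1
zfs send rpool/data@backup1 | ssh backuphost zfs receive -F tank/data

# Incremental follow-up; only changes since @backup1 are sent.
zfs snapshot rpool/data@backup2
zfs send -i rpool/data@backup1 rpool/data@backup2 | \
    ssh backuphost zfs receive tank/data
```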
Are the indicated devices actually under /pseudo or are they really
under /devices/pseudo ?
Also, have you tried 'devfsadm -C' to re-configure the /dev links?
This might allow the system to recognize the new vpath devices...
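A sketch of that check (assuming the vpath driver has re-attached to the renamed devices):

```shell
# Remove dangling /dev links, then rebuild links for attached devices.
devfsadm -C
devfsadm

# Look for the vpath entries before retrying the import.
ls -l /dev/rdsk | grep -i vpath
zpool import
```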
-Erik
On 5/29/2010 4:53 PM, morris hooten wrote:
I have 6 zfs