> I see, thanks.
> And as Jörg said, I only need a 64 bit binary. I
> didn't know, but there is one for ls, and it does
> work as expected:
>
> $ /usr/bin/amd64/ls -l .gtk-bookmarks
> -rw-r--r--   1 user  opc   0 oct. 16 20:57 .gtk-bookmarks
>
> This is a bit absurd. I thought Solaris
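One quick way to check which ISA a given binary was built for (just a sketch; output omitted, and nothing here is specific to the original poster's setup):
  isainfo -kv              # reports whether a 64-bit kernel is running
  file /usr/bin/ls         # the default ls, typically a 32-bit ELF on x86
  file /usr/bin/amd64/ls   # the 64-bit variant used in the example above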
My hunch is that you need to boot from the live CD (in the VirtualBox case, the ISO) and run:
installgrub -m /boot/grub/stage1 /boot/grub/stage2 $diskB_root_slice
Ivan.
Currently, GRUB hasn't been installed on the disk that was attached to the root mirror later.
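For completeness, the sequence from the live CD might look roughly like this (the pool name and device path are assumptions, not taken from this thread):
  # boot the OpenSolaris ISO, then from a terminal:
  zpool import -f -R /a rpool    # optional: import the root pool under an alternate root to inspect it
  zpool status rpool             # identify the later-attached mirror disk, e.g. c4d1s0
  installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4d1s0
  zpool export rpool             # export again before rebooting from the hard disk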
> Hi all,
>
> after installing OpenSolaris 2008.05 in VirtualBox
> I
> On Thu, Aug 14, 2008 at 09:00:12AM -0700, Rich
> Teer wrote:
> >Summary: Solaris Express Community Edition
> (SXCE) is like the OpenSolaris
> >of old; OpenSolaris .xx is apparently Sun's
> intended future direction
> >for Solaris. Based on what I've heard, I've not
> tried the latter.
> Richard Elling wrote:
> > For ZFS, there are some features which conflict
> with the
> > notion of user quotas: compression, copies, and
> snapshots come
> > immediately to mind. UFS (and perhaps VxFS?) do
> not have
> > these features, so accounting space to users is
> much simpler.
> > Indeed,
> Steve,
>
> > Can someone tell me or point me to links that
> describe how to
> > do the following.
> >
> > I had a machine that crashed and I want to move to
> a newer machine
> > anyway. The boot disk on the old machine is fried.
> The two disks I
> was
> using for a zfs pool on that machi
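For what it's worth, moving a data pool between machines is normally just a matter of attaching the disks to the new host and importing the pool; a rough sketch (the pool name "tank" is an assumption, and -f is needed because the crashed machine never exported it):
  zpool import              # list pools visible on the newly attached disks
  zpool import -f tank      # force-import, since the old host never exported the pool
  zpool status tank         # confirm both disks were found and the pool is healthy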
> Hello andrew,
>
> Thursday, April 24, 2008, 11:03:48 AM, you wrote:
>
> a> What is the reasoning behind ZFS not enabling the
> write cache for
> a> the root pool? Is there a way of forcing ZFS to
> enable the write cache?
>
> The reason is that EFI labels are not supported for
> booting.
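If you do want to turn it on anyway, one manual route (an assumption on my part, not something confirmed in this thread, and dependent on the disk driver) is format(1M) in expert mode:
  format -e                 # select the root-pool disk from the menu
  # format> cache
  # cache> write_cache
  # write_cache> enable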
IIRC
> On Mon, 10 Mar 2008, Ivan Wang wrote:
>
> > Hi Lori,
> >
> > Do you happen to know any update on Live Update zfs
> support?
>
> You mean Live Upgrade? It's being worked on as we
> speak to make it
> zfs-aware. It should be available at the sa
Hi Lori,
Do you happen to know any update on Live Update zfs support?
Thanks,
Ivan.
Moving to indiana-discuss..
Please do not start battling each other; bringing this issue to indiana-discuss
is only meant to show a potential gotcha when assuming a specific PATH setting in
utilities/scripts.
Cheers,
Ivan
> What does 'which chmod' show? I think that Indiana
> chose to have
> /usr/gnu/bin at the head of the path, so you're
> probably picking up the
> GNU chmod, which doesn't handle NFSv4 ACLs. Manually
> running
> /usr/bin/chmod should solve your problem.
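A small illustration of the difference (the file name and user here are only examples):
  $ which chmod
  # on Indiana's default PATH this resolves to /usr/gnu/bin/chmod
  $ /usr/bin/chmod A+user:webservd:read_data:allow file.txt   # the Solaris chmod accepts NFSv4 ACL syntax
  $ /usr/bin/ls -v file.txt                                   # -v lists the ACL entries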
Would it be better if this issue is brough
ow, estimating the size of a zfs root pool is still
required; better not to go with a carefree grow-as-needed mindset.
Ivan.
>
> Ivan Wang wrote:
> >>> Erik Trimble wrote:
> >>> After both drives are replaced, you will
> automatically see the
> >>> add
> > Erik Trimble wrote:
> > After both drives are replaced, you will automatically see the
> > additional space.
>
> I believe currently after the last replace an
> import/export sequence
> is needed to force zfs to see the increased size.
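On the builds current at the time, the whole sequence might look roughly like this (pool and device names are examples only):
  zpool replace tank c1t0d0     # swap in the first larger disk in place; wait for resilver to finish
  zpool replace tank c1t1d0     # then the second one
  zpool export tank
  zpool import tank             # after re-import, the pool reports the larger size
  zpool list tank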
What if the root fs is also in this pool? Will there be any
Hi all,
Forgive me if this is a dumb question. Is it possible for a two-disk mirrored
zpool to be seamlessly enlarged by gradually replacing the existing disks with
larger ones?
Say, in a constrained desktop where there is only space for two internal disks,
could I just begin with two 160G disks, the
> This bug was rendered moot via 6528732 in build
> snv_68 (and s10_u5). We
> now store physical devices paths with the vnodes, so
> even though the
> SATA framework doesn't correctly support open by
> devid in early boot, we
But if I read it right, there is still a problem in SATA framework (fai
> > If I put the database in hot backup mode, then I will
> have to ensure
> > that the filesystem is consistent as well. So, you
> are saying that
> > taking a ZFS snapshot is the only method to
> guarantee consistency in
> > the filesystem since it flushes all the buffers to
> the filesystem, so
>
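Purely as a sketch of the snapshot step (the dataset name and backup target are assumptions; the hot-backup commands themselves are database-specific and omitted):
  # 1. put the database into hot backup mode (DB-specific command)
  zfs snapshot tank/db@hotbackup                            # 2. atomic, point-in-time snapshot of the data files
  # 3. take the database out of hot backup mode
  zfs send tank/db@hotbackup > /backup/db-hotbackup.zfs     # 4. stream the snapshot off-host at leisure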
> On Tue, Apr 10, 2007 at 09:43:39PM -0700, Anton B.
> Rang wrote:
> >
> > That's only one cause of panics.
> >
> > At least two of gino's panics appear due to
> corrupted space maps, for
> > instance. I think there may also still be a case
> where a failure to
> > read metadata during a transact
So, is there any date for when the install utility will support a fresh install to a ZFS root?
I almost can't wait for that.
Cheers,
Ivan.
Hi all,
ZFS boot was recently delivered as scheduled in b62, so is there any word on when and
how we may use Live Upgrade with a ZFS root? Since I only use SXCR now, and
sometimes you just need to boot back to an older BE in case of a not-so-good build,
Live Upgrade is very handy for me there.
Cheers,
I
>
> Ivan Wang wrote:
> >
> > Hi,
> >
> > However, this raises another concern that during
> recent discussions regarding the disk layout of a zfs
> system
> (http://www.opensolaris.org/jive/thread.jspa?threadID=
> 25679&tstart=0) it was said that cur
> Ian Collins wrote:
> > Thanks for the heads up.
> >
> > I'm building a new file server at the moment and
> I'd like to make sure I
> > can migrate to ZFS boot when it arrives.
> >
> > My current plan is to create a pool on 4 500GB
> drives and throw in a
> > small boot drive.
> >
> > Will I be ab