Many thanks for answering my question. Hopefully my noisy X4200
will be installed in the data centre tomorrow (Thursday); I had
a setback today while fighting with the Remote Console feature
of ILOM 1.1.1 (i.e., it doesn't work). :-(
Just ssh into it and use the serial console from within
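For what it's worth, this is roughly how I get at the serial console
through the SP (hostname made up; exact syntax depends on the ILOM
firmware version):

  $ ssh root@x4200-sp
  -> start /SP/console

IIRC the escape sequence to get back to the SP prompt is ESC followed
by '('.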
The setup below works fine for me.
macmini:~ jimb$ mount | grep jimb
ride:/xraid2/home/jimb on /private/var/automount/home/jimb (nosuid, automounted)
macmini:~ jimb$ nidump fstab / | grep jimb
ride:/xraid2/home/jimb /home/jimb nfs rw,nosuid,tcp 0 0
NFS server: Solaris 10 11/06 x86_64 + patches,
Same problem here after some patching :(((
42GB free in a 4.2TB zpool
We can't upgrade to U3 without planning it.
Is there any way to solve the problem? Remove the latest patches?
Our uptime with ZFS is getting really low ...
thanks
Gino
This message posted from opensolaris.org
OS X *loves* NFS - it's a lot faster than Samba - but
it takes a bit of extra work.
You need a user on the other end with the right uid and gid
(assuming you're using NFSv3 - you probably are).
Have a look at :
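A quick way to check (using the mount from the example earlier in the
thread) is to compare the numeric IDs on both ends:

  macmini:~ jimb$ id
  server$ ls -ln /xraid2/home/jimb

The uid/gid that id reports on the Mac should match the numeric owner
and group that ls -ln shows on the server - NFSv3 matches on the raw
numbers, not on user names.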
With the CPU overhead imposed by ZFS's block checksumming, the CPU was
heavily loaded in a large sequential write test that I ran. By
turning off the checksum, the CPU load was greatly reduced. Obviously, this
traded reliability for CPU cycles.
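For anyone wanting to repeat the experiment, checksums can be disabled
per dataset (pool/dataset names below are made up):

  # zfs set checksum=off tank/scratch
  # zfs get checksum tank/scratch

Note that this only affects newly written blocks, and ZFS metadata is
always checksummed regardless of this setting.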
Would the logic behind
With the CPU overhead imposed by ZFS's block checksumming, the CPU was
heavily loaded in a large sequential write test that I ran.
By turning off the checksum, the CPU load was greatly reduced.
Obviously, this traded reliability for CPU cycles.
What hardware platform
Ivan Buetler wrote:
Is this true for OpenSolaris? My experience:
I was trying to upgrade from SunOS 5.11 snv_28 to SunOS 5.11 snv_54 where
my NGZ zone roots were set to a zfs mount point like below:
NAME USED AVAIL REFER MOUNTPOINT
zpool   93.8G   40.1G     26K
In short - make sure the UID on your Mac has permission to access the
files over NFS (as it would need to if you tried to access those files
locally).
Or perhaps you tried from a user with uid=0, in which case it's mapped
to the nobody user by default.
--
Best regards,
Robert
Exactly as Robert
Ivan Buetler wrote:
Jerry, Thank you for your response. See my zonecfg of the named NGZ here:
[EMAIL PROTECTED] ~ # zonecfg -z named export
create -b
set zonepath=/zpool/zones/named
set autoboot=true
add inherit-pkg-dir
set dir=/lib
end
add inherit-pkg-dir
set dir=/platform
end
add
I believe there is a write limit (commonly 10
writes) on CF and
similar storage devices, but I don't know for sure.
Apart from that
I think it's a good idea.
James C. McPherson
As a consequence, the /tmp, /var, and swap could eventually be moved to the ZFS
hard drives to greatly
[EMAIL PROTECTED] wrote:
I believe there is a write limit (commonly 10
writes) on CF and
similar storage devices, but I don't know for sure.
Apart from that
I think it's a good idea.
James C. McPherson
As a consequence, the /tmp, /var, and swap could eventually be moved
to the ZFS hard
Dave Sneddon wrote:
Can anyone shed any light on whether the actual software side of this can
be achieved? Can I share my entire ZFS pool as a folder or network drive
so WinXP can read it? Will this be fast enough to read/write to at DV speeds
(25mbit/s)? Once the pool is set up and I have
On Thu, 8 Feb 2007, [EMAIL PROTECTED] wrote:
Many thanks for answering my question. Hopefully my noisy X4200
will be installed in the data centre tomorrow (Thursday); I had
a setback today while fighting with the Remote Console feature
of ILOM 1.1.1 (i.e., it doesn't work). :-(
Just ssh
I am seeing what I think is very peculiar behaviour of ZFS after sending a
full stream to a remote host - the upshot being that I can't send an
incremental stream afterwards.
What I did was this:
host1 is Solaris 10 Update 2 SPARC
host2 is Solaris 10 Update 2 x86
host1 # zfs snapshot
We've gotten a lot of questions lately about when we'll have
an updated version of support for booting from zfs. We
are aiming at a new version of this going in to build 60. New
instructions for setting up this configuration will be
made available at the same time. If build 60 turns out
not
Hello Trevor,
Thursday, February 8, 2007, 6:23:21 PM, you wrote:
TW I am seeing what I think is very peculiar behaviour of ZFS after sending a
TW full stream to a remote host - the upshot being that I can't send an
TW incremental stream afterwards.
TW What I did was this:
TW host1 is Solaris
TW Am I using send/recv incorrectly or is there something else
going on here that
TW I am missing?
It's a known bug.
Unmount and roll back the file system on host2. You should see 0 used
space on the snapshot, and then it should work.
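In commands, that would be something like the following on host2 (file
system and snapshot names are placeholders):

  host2 # zfs umount poolname/fs
  host2 # zfs rollback poolname/fs@snap1

After that, the incremental send from host1 should go through:

  host1 # zfs send -i snap1 poolname/fs@snap2 | ssh host2 zfs recv poolname/fs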
Bug ID? Is it related to atime changes?
--
Best
Hello Kory,
Thursday, February 8, 2007, 12:33:13 AM, you wrote:
KW I ran the ZFS command and got the output below. How do you fix a degraded
KW disk?
KW zpool replace moodle c1t3d0
KW invalid vdev specification
KW use '-f' to override the following errors:
KW /dev/dsk/c1t3d0s0 is part of active
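If you've verified that c1t3d0 really is the disk you want to reuse
(the old label from a previous pool is what triggers that message),
the override is exactly what the error text suggests:

  # zpool status moodle
  # zpool replace -f moodle c1t3d0

Be careful with -f, though: it overwrites whatever is on that disk.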
That's how I usually use the console on the X4200. However, that
arrangement doesn't work when one wants to (re)install Solaris.
Unless there's a way of telling the installer to use the serial
console while booting from DVD, rather than using the GUI?
I thought there was a grub use ttya and
On Thu, 8 Feb 2007, [EMAIL PROTECTED] wrote:
I thought there was a grub "use ttya" and "use ttyb" line on the DVD?
Yes but one needs to be able to see that menu in order to select
the correct item first. A chicken-and-egg situation!
Not that it matters so much for this case now, as I've hooked
Hello Wade,
Thursday, February 8, 2007, 8:00:40 PM, you wrote:
TW Am I using send/recv incorrectly or is there something else
going on here that
TW I am missing?
It's a known bug.
umount and rollback file system on host 2. You should see 0 used space
on a snapshot and then it should
Bill Moloney wrote:
Thanks for the input Darren, but I'm still confused about DNODE
atomicity ... it's difficult to imagine that a change that is made
anyplace in the zpool would require copy operations all the way back
up to the uberblock
This is in fact what happens. However, these changes
Would the logic behind ZFS take full advantage of a heavily multicored
system, such as on the Sun Niagara platform? Would it utilize all of the
32 concurrent threads for generating its checksums? Has anyone
compared ZFS on a Sun Tx000, to that of a 2-4 thread x64 machine?
Pete and I are working
For background on what this is, see:
http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416
http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200
=
zfs-discuss 01/16 - 01/31
=
Size of all threads during
This month's FROSUG (Front Range OpenSolaris User Group) meeting is on
Thursday, February 22, 2007. Our presentation is ZFS as a Root File
System by Lori Alt. In addition, Jon Bowman will be giving an OpenSolaris
Update, and we will also be doing an InstallFest. So, if you want help
installing an