On Wed, Darren J Moffat wrote:
I have 12 36G disks (in a single D2 enclosure) connected to a V880 that
I want to share to a v40z that is on the same gigabit network switch.
I've already decided that NFS is not the answer - the performance of ON
consolidation builds over NFS just doesn't cut it.
I have a StorEdge 3510 FC array which is currently configured in the following
way:
* logical-drives
  LD    LD-ID     Size    Assigned   Type    Disks  Spare  Failed  Status
  ld0   255ECBD0  2.45TB  Primary    RAID5   10
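(For what it's worth, once a logical drive like that is mapped to the host as a single LUN, putting ZFS on it is a one-liner. The pool and device names below are placeholders, not taken from the array:)

    # hypothetical device name for the 2.45TB LUN presented by the 3510
    zpool create tank c4t0d0
    zfs create tank/data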
Actually, just tried this on a non-cloned filesystem with the same results. I
can't believe there is a bug with rm -rf, so is this something to do with
ACLs ?
Help!
Tom
Why does the Java Web Console service keep going into maintenance mode? This
has happened for the past few builds (current is nv44). It works for a day or
so after a new install, then it breaks. Here are the symptoms:
sol11:$ svcs -x
svc:/system/webconsole:console (java web console)
State:
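(When it drops into maintenance, the usual way to dig further is via the service log; the FMRI is the one shown by svcs -x above, and the log path is assumed to follow the standard SMF naming:)

    svcs -xv svc:/system/webconsole:console
    tail /var/svc/log/system-webconsole:console.log
    # after fixing the cause, clear the maintenance state
    svcadm clear svc:/system/webconsole:console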
I'm not at the machine to check at the moment, but I didn't create the /u05
mountpoint manually. ZFS created it automatically when I did :-
% zfs set mountpoint=/u05 zfspool/u05
You would hope that ZFS didn't get the underlying permissions wrong!
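(A quick way to check what ZFS actually created, using the dataset and path from the message above:)

    zfs get mountpoint,mounted zfspool/u05
    ls -ld /u05        # mode and ownership of the directory ZFS created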
Tom Simpson wrote:
Can anyone help? I have a cloned filesystem (/u05) from a snapshot of /u02.
The owner/group of the clone is (oracle:dba).
If I do
oracle% cd /u05/app
oracle% rm -rf R2DIR
Are you sure you have adequate permissions to descend into and remove
the subdirectories?
..
Luke Lonergan wrote:
Torrey,
On 8/1/06 10:30 AM, Torrey McMahon [EMAIL PROTECTED] wrote:
http://www.sun.com/storagetek/disk_systems/workgroup/3510/index.xml
Look at the specs page.
I did.
This is 8 trays, each with 14 disks and two active Fibre channel
attachments.
That means
Darren J Moffat wrote:
performance, availability, space, retention.
OK, something to work with. I would recommend taking advantage of ZFS'
dynamic stripe over 2-disk mirrors. This should give good performance,
with good data availability. If you monitor the status of the disks
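(A minimal sketch of that layout for the 12 disks; the pool name and device names are placeholders:)

    # six 2-disk mirrors; ZFS dynamically stripes across all six top-level vdevs
    zpool create tank \
        mirror c1t0d0 c1t1d0 \
        mirror c1t2d0 c1t3d0 \
        mirror c1t4d0 c1t5d0 \
        mirror c2t0d0 c2t1d0 \
        mirror c2t2d0 c2t3d0 \
        mirror c2t4d0 c2t5d0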
Tom Simpson wrote:
After I created the filesystem and moved all the data in, I did :-
root% chown -R oracle:dba /u05
All that does is change the owner/group of the files/directories. It
doesn't change the permissions of the directories and files.
What are the permissions of the
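(A quick way to check, sketched with the paths from this thread; -v on Solaris ls also prints any ACL entries:)

    ls -ld /u05/app /u05/app/R2DIR     # owner, group, and mode bits
    ls -dv /u05/app/R2DIR              # includes ZFS/NFSv4 ACL entries, if any
    # removing R2DIR's contents needs write+execute on the directories for oracle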
Your suspicions are correct, it's not possible to upgrade an
existing raidz pool to raidz2. You'll actually have to create the
raidz2 pool from scratch.
Noel
On Aug 2, 2006, at 10:02 AM, Frank Cusack wrote:
Will it be possible to update an existing raidz to a raidz2? I wouldn't think
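(For what it's worth, a rough sketch of the rebuild-and-copy approach; the pool, dataset, and device names here are made up:)

    # build the new double-parity pool on a separate set of disks
    zpool create newpool raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
    # then copy the data over, e.g. via a snapshot and zfs send/receive
    zfs snapshot oldpool/data@migrate
    zfs send oldpool/data@migrate | zfs receive newpool/data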
On Aug 1, 2006, at 22:23, Luke Lonergan wrote:
Torrey,
On 8/1/06 10:30 AM, Torrey McMahon [EMAIL PROTECTED] wrote:
http://www.sun.com/storagetek/disk_systems/workgroup/3510/index.xml
Look at the specs page.
I did.
This is 8 trays, each with 14 disks and two active Fibre channel
Jonathan Edwards wrote:
Now with thumper - you are SPoF'd on the motherboard and operating
system - so you're not really getting the availability aspect from dual
controllers .. but given the value - you could easily buy 2 and still
come out ahead .. you'd have to work out some sort of timely
Richard,
On 8/2/06 11:37 AM, Richard Elling [EMAIL PROTECTED] wrote:
Now with thumper - you are SPoF'd on the motherboard and operating
system - so you're not really getting the availability aspect from dual
controllers .. but given the value - you could easily buy 2 and still
come out
Richard Elling wrote:
Jonathan Edwards wrote:
Now with thumper - you are SPoF'd on the motherboard and operating
system - so you're not really getting the availability aspect from
dual controllers .. but given the value - you could easily buy 2 and
still come out ahead .. you'd have to work
From talking with the web console (Lockhart) folks, this appears to
be a manifestation of:
6430996 The SMF services related to smcwebserver goes to maintenance
state after node reboot
This will be fixed in build 46 of Solaris Nevada.
Details, including workaround:
I believe this is
prasad wrote:
I have a StorEdge 3510 FC array which is currently configured in the following
way:
* logical-drives
  LD    LD-ID     Size    Assigned   Type    Disks  Spare  Failed  Status
  ld0   255ECBD0  2.45TB
Dave,
I'm copying the zfs-discuss alias on this as well...
It's possible that not all necessary patches have been installed or they
may be hitting CR# 6428258. If you reboot the zone, does it continue to
end up in maintenance mode? Also do you know if the necessary ZFS/Zones
patches have been
Torrey McMahon [EMAIL PROTECTED] wrote:
Are any other hosts using the array? Do you plan on carving LUNs out of
the RAID5 LD and assigning them to other hosts?
There are no other hosts using the array. We need all the available space
(2.45TB) on just one host. One option was to create
I know this is going to sound a little vague but...
A coworker said he read somewhere that ZFS is more efficient if you
configure pools from entire disks instead of just slices of disks. I'm
curious if there is any merit to this?
The use case that we had been discussing was something to the
On Wed, 2 Aug 2006, Joseph Mocker wrote:
The use case that we had been discussing was something to the effect of
building a 2 disk system, install the OS on slice 0 of disk 0 and make the
rest of the disk available for 1/2 of a zfs mirror. Then disk 1 would probably
be partitioned the same,
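(The difference in practice looks like this; device names are placeholders. When ZFS is handed whole disks it labels them itself and can safely enable the disks' write caches, which it won't do when it only owns a slice:)

    zpool create tank mirror c1t0d0 c1t1d0        # whole disks
    zpool create tank mirror c1t0d0s7 c1t1d0s7    # slices shared with other uses (e.g. the OS on s0)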
Thanks Steve. The workaround (rm -f /var/webconsole/tmp/console_*.tmp) and a
restart fixed it.
I appreciate the quick response. You guys are good!
Ron
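(For anyone hitting the same thing, the workaround boils down to something like the following; the service FMRI is the one shown by svcs -x earlier in the thread:)

    rm -f /var/webconsole/tmp/console_*.tmp
    svcadm clear svc:/system/webconsole:console      # if it is sitting in maintenance
    svcadm restart svc:/system/webconsole:console    # or: smcwebserver restart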
Hello Joseph,
Thursday, August 3, 2006, 2:02:28 AM, you wrote:
JM I know this is going to sound a little vague but...
JM A coworker said he read somewhere that ZFS is more efficient if you
JM configure pools from entire disks instead of just slices of disks. I'm
JM curious if there is any
Spencer Shepler wrote:
On Wed, Darren J Moffat wrote:
I have 12 36G disks (in a single D2 enclosure) connected to a V880 that
I want to share to a v40z that is on the same gigabit network switch.
I've already decided that NFS is not the answer - the performance of ON
consolidation builds over
On Aug 2, 2006, at 17:03, prasad wrote:
Torrey McMahon [EMAIL PROTECTED] wrote:
Are any other hosts using the array? Do you plan on carving LUNs out of the RAID5 LD and assigning them to other hosts?
There are no other hosts using the array. We need all the available space (2.45TB) on