Hello Richard,
Thursday, June 12, 2008, 6:54:29 AM, you wrote:
RE Oracle bails out after 10 minutes (ORA-27062) ask me how I know... :-P
So how do you know?
--
Best regards,
Robert Milkowski [EMAIL PROTECTED]
One thing I should mention on this is that I've had _very_ bad
experience with using single-LUN ZFS filesystems over FC.
that is, using an external SAN box to create a single LUN, export that
LUN to a FC-connected host, then creating a pool as follows:
zpool create tank LUN_ID
It works fine,
Hi,
I've got an external hard disk and I've done the stuff with zpool - so
it's all working.
The problem I have, however, is whether it is possible to actually set
it up so that zfs devices mount just like CDs and drives formatted as
FAT.
___
I've got a couple of identical old sparc boxes running nv90 - one
on ufs, the other zfs. Everything else is the same. (SunBlade 150
with 1G of RAM, if you want specifics.)
The zfs root box is significantly slower all around. Not only is
initial I/O slower, but it seems much less
Peter Hawkins wrote:
Can zpool on U3 be patched to V4? I've applied the latest cluster and it
still seems to be V3.
Yes, you can patch your way up to the Sol 10 U4 kernel (or even U5
kernel) which will give you zpool v4 support. The particular patch you
need is 120011-14 or 120012-14
I am seeing the same problem using a separate virtual disk for the pool.
This is happening with Solaris 10 U3, U4 and U5.
SCSI reservations are known to be an issue with clustered Solaris:
http://blogs.sun.com/SC/entry/clustering_solaris_guests_that_run
I wonder if this is the same problem. Maybe
On Mon, Jun 16, 2008 at 12:05 PM, Matthew Gardiner
[EMAIL PROTECTED] wrote:
I think that if you notice the common thread: those who run SPARC
are having performance issues vs. those who are running x86.
Not that simple. I'm seeing performance issues on x86 just as
much as sparc. My sparc
Answer is:
# zpool import
(which will pick up the zpool on the HDD and lists its name and id)
# zpool import rpool
(rpool is default opensolaris zpool)
This message posted from opensolaris.org
___
zfs-discuss mailing list
Well, I have a zpool created that contains four vdevs. Each vdev is a mirror of
a T3B LUN and a corresponding LUN of an SE3511 brick. I did this since I was new
with ZFS and wanted to ensure that my data would survive an array failure. It
turns out that I was smart for doing this :)
I had a
On Mon, 16 Jun 2008 16:21:26 +0100
Peter Tribble [EMAIL PROTECTED] wrote:
The *real* common thread is that you need ridiculous amounts
of memory to get decent performance out of ZFS
That's FUD. Older systems might not have enough memory, but newer ones
can hardly be bought with less than
I'm doing some simple testing of ZFS block reuse and was wondering when
deferred frees kick in. Is it on some sort of timer to ensure data
consistency? Does an other routine call it? Would something as simple as
sync(1M) get the free block list written out so future allocations could
use the
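One crude way to probe the question from the shell (a sketch, assuming a scratch pool named tank mounted at /tank; caching and txg timing make the result approximate) is to delete a file, force a sync, and watch the pool's space accounting:

```shell
# Hypothetical probe: does sync(1M) push deferred frees out?
# Assumes a scratch pool named "tank" mounted at /tank.
mkfile 100m /tank/junk   # allocate ~100MB (mkfile is Solaris-specific)
zpool list tank          # note the used space
rm /tank/junk
zpool list tank          # space may not be reclaimed immediately
sync                     # flush; deferred frees should follow a txg sync
zpool list tank          # compare used space again
```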
Added a vdev using RDM and that seems to be stable over reboots;
however, the pools based on a virtual disk now also seem to be stable after
doing an export and import -f.
Hi,
zpool does not create a pool on a USB disk (formatted in FAT32).
# /usr/sbin/zpool create alpha c5t0d0p0
cannot open '/dev/dsk/c5t0d0p0': Device busy
or
# /usr/sbin/zpool create alpha /dev/rdsk/c5t0d0p0
cannot use '/dev/rdsk/c5t0d0p0': must be a block device or regular file
What is gonna
On Mon, 16 Jun 2008 18:10:14 +0100
Andrius [EMAIL PROTECTED] wrote:
zpool does not create a pool on a USB disk (formatted in FAT32).
It's already been formatted.
Try zpool create -f alpha c5t0d0p0
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ + SunOS sxce snv90 ++
dick hoogendijk wrote:
On Mon, 16 Jun 2008 18:10:14 +0100
Andrius [EMAIL PROTECTED] wrote:
zpool does not create a pool on a USB disk (formatted in FAT32).
It's already been formatted.
Try zpool create -f alpha c5t0d0p0
The same story
# /usr/sbin/zpool create -f alpha c5t0d0p0
cannot
On Mon, 16 Jun 2008 18:23:35 +0100
Andrius [EMAIL PROTECTED] wrote:
The same story
# /usr/sbin/zpool create -f alpha c5t0d0p0
cannot open '/dev/dsk/c5t0d0p0': Device busy
Are you sure you're not on that device?
Are you also sure your usb stick is called c5t0d0p0?
What does rmformat (as
dick hoogendijk wrote:
On Mon, 16 Jun 2008 18:23:35 +0100
Andrius [EMAIL PROTECTED] wrote:
The same story
# /usr/sbin/zpool create -f alpha c5t0d0p0
cannot open '/dev/dsk/c5t0d0p0': Device busy
Are you sure you're not on that device?
Are you also sure your usb stick is called c5t0d0p0?
What
Andrius wrote:
dick hoogendijk wrote:
On Mon, 16 Jun 2008 18:10:14 +0100
Andrius [EMAIL PROTECTED] wrote:
zpool does not create a pool on a USB disk (formatted in FAT32).
It's already been formatted.
Try zpool create -f alpha c5t0d0p0
The same story
#
Has anybody stored 1/2 billion small (< 50KB) files in a ZFS data store?
If so, any feedback in how many file systems [and sub-file systems, if
any] you used?
How were ls times? And insights in snapshots, clones, send/receive, or
restores in general?
How about NFS access?
Thanks
Steffen
I'm not sure why people obsess over this issue so much. Disk is cheap.
We have a fair number of 3510 and 2540 on our SAN. They make RAID-5 LUNs
available to various servers.
On the servers we take RAID-5 LUNs from different arrays and ZFS mirror them.
So if any array goes away we are still
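A sketch of that layout, with hypothetical LUN device names (one from each array):

```shell
# ZFS mirror across RAID-5 LUNs from two different arrays,
# so the pool survives the loss of an entire array.
# c2t0d0 = LUN from the 3510, c3t0d0 = LUN from the 2540 (hypothetical names).
zpool create tank mirror c2t0d0 c3t0d0
zpool status tank   # shows a single mirror vdev spanning both arrays
```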
Neal Pollack wrote:
Andrius wrote:
dick hoogendijk wrote:
On Mon, 16 Jun 2008 18:10:14 +0100
Andrius [EMAIL PROTECTED] wrote:
zpool does not create a pool on a USB disk (formatted in FAT32).
It's already been formatted.
Try zpool create -f alpha c5t0d0p0
The same story
Thanks to the help in a previous post I have imported my pool. However I would
appreciate some help with my next problem.
This all arose because my motherboard failed while my zpool was resilvering
from a failed disk. I moved the disks to a new motherboard and imported the
pool with the help
On Mon, 16 Jun 2008 18:38:11 +0100
Andrius [EMAIL PROTECTED] wrote:
The device is on, but it is empty. It is not a stick, it is a mobile
hard disk Iomega 160 GB.
Like Neal writes: check if the drive is mounted. Do a df -h
Unmount it if necessary (umount /dev/dsk/c5t0d0) and then do a zpool
Andrius wrote:
Neal Pollack wrote:
Andrius wrote:
dick hoogendijk wrote:
On Mon, 16 Jun 2008 18:10:14 +0100
Andrius [EMAIL PROTECTED] wrote:
zpool does not create a pool on a USB disk (formatted in FAT32).
It's already been formatted.
Try zpool create -f alpha c5t0d0p0
On Mon, 16 Jun 2008 20:00:59 +0200
dick hoogendijk [EMAIL PROTECTED] wrote:
Unmount it if necessary (umount /dev/dsk/c5t0d0)
Should be /dev/dsk/c5t1d0 --
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ + SunOS sxce snv90 ++
On Mon, 16 Jun 2008 20:04:08 +0200
dick hoogendijk [EMAIL PROTECTED] wrote:
Should be /dev/dsk/c5t1d0 --
Sh***t! No it should not. rmformat showed c5t0d0, didn't it?
So be careful. A typo is quickly made (see my msgs) ;-)
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ +
Miles Nordin wrote:
a == Andrius [EMAIL PROTECTED] writes:
a # umount /dev/rdsk/c5t0d0p0
maybe there is another problem, too, but this is wrong. type 'df -k'
as he suggested and use the device or pathname listed there.
This is the end of df -k:
/vol/dev/dsk/c5t0d0/unnamed_rmdisk:c
dick hoogendijk wrote:
On Mon, 16 Jun 2008 18:54:04 +0100
Andrius [EMAIL PROTECTED] wrote:
That is true, disc is detected automatically. But
# umount /dev/rdsk/c5t0d0p0
umount: warning: /dev/rdsk/c5t0d0p0 not in mnttab
umount /dev/dsk/c5t0d0 should do it.
The same
# umount
Try 'zpool replace'.
- Eric
On Mon, Jun 16, 2008 at 10:57:40AM -0700, Peter Hawkins wrote:
Thanks to the help in a previous post I have imported my pool. However I
would appreciate some help with my next problem.
This all arose because my motherboard failed while my zpool was resilvering
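For reference, the suggested command takes the pool, the failed device, and (optionally) its replacement; the device names here are hypothetical:

```shell
# Replace a failed device; resilvering starts automatically.
zpool replace tank c1t2d0 c1t3d0   # old device, new device
# or, if the new disk sits in the same slot as the old one:
zpool replace tank c1t2d0
zpool status tank                  # watch resilver progress
```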
On Mon, 16 Jun 2008 19:10:18 +0100
Andrius [EMAIL PROTECTED] wrote:
/rmdisk/unnamed_rmdisk
umount /rmdisk/unnamed_rmdisk should do the trick
It's probably also mounted on /media depending on your solaris version.
If so, umount /media/unnamed_rmdisk unmounts the disk too.
--
Dick Hoogendijk --
dick hoogendijk wrote:
On Mon, 16 Jun 2008 20:00:59 +0200
dick hoogendijk [EMAIL PROTECTED] wrote:
Unmount it if necessary (umount /dev/dsk/c5t0d0)
Should be /dev/dsk/c5t1d0 --
Still the same
# umount /dev/rdsk/c5t1d0
umount: warning: /dev/rdsk/c5t1d0 not in mnttab
umount:
dick hoogendijk wrote:
On Mon, 16 Jun 2008 19:10:18 +0100
Andrius [EMAIL PROTECTED] wrote:
/rmdisk/unnamed_rmdisk
umount /rmdisk/unnamed_rmdisk should do the trick
It's probably also mounted on /media depending on your solaris version.
If so, umount /media/unnamed_rmdisk unmounts the disk
Is RFE 4852783 (need for an equivalent to LVM2's pvmove) likely to
happen within the next year?
My use-case is home user. I have 16 disks spinning, two towers of
eight disks each, exporting some of them as iSCSI targets. Four disks
are 1TB disks already in ZFS mirrors, and 12 disks are 180 -
On Mon, 16 Jun 2008, Andrius wrote:
dick hoogendijk wrote:
On Mon, 16 Jun 2008 19:10:18 +0100
Andrius [EMAIL PROTECTED] wrote:
/rmdisk/unnamed_rmdisk
umount /rmdisk/unnamed_rmdisk should do the trick
It's probably also mounted on /media depending on your solaris version.
If so, umount
On Mon, Jun 16, 2008 at 6:42 PM, Steffen Weiberle
[EMAIL PROTECTED] wrote:
Has anybody stored 1/2 billion small (< 50KB) files in a ZFS data store?
If so, any feedback in how many file systems [and sub-file systems, if
any] you used?
I'm not quite there yet, although I have a thumper with about
Martin Winkelman wrote:
On Mon, 16 Jun 2008, Andrius wrote:
dick hoogendijk wrote:
On Mon, 16 Jun 2008 19:10:18 +0100
Andrius [EMAIL PROTECTED] wrote:
/rmdisk/unnamed_rmdisk
umount /rmdisk/unnamed_rmdisk should do the trick
It's probably also mounted on /media depending on your solaris
On Mon, 16 Jun 2008, Andrius wrote:
# eject /rmdisk/unnamed_rmdisk
No such file or directory
# eject /dev/rdsk/c5t0d0s0
/dev/rdsk/c5t0d0s0 is busy (try 'eject floppy' or 'eject cdrom'?)
# eject rmdisk
/vol/dev/rdsk/c5t0d0/unnamed_rmdisk: Inappropriate ioctl for device
# eject
Martin Winkelman wrote:
On Mon, 16 Jun 2008, Andrius wrote:
# eject /rmdisk/unnamed_rmdisk
No such file or directory
# eject /dev/rdsk/c5t0d0s0
/dev/rdsk/c5t0d0s0 is busy (try 'eject floppy' or 'eject cdrom'?)
# eject rmdisk
/vol/dev/rdsk/c5t0d0/unnamed_rmdisk: Inappropriate ioctl for device
#
This is actually quite a tricky fix as obviously data and metadata have
to be relocated. Although there's been no visible activity in this bug
there has been substantial design activity to allow the RFE to be easily
fixed.
Anyway, to answer your question, I would fully expect this RFE would
be
On Mon, Jun 16, 2008 at 5:20 PM, dick hoogendijk [EMAIL PROTECTED] wrote:
On Mon, 16 Jun 2008 16:21:26 +0100
Peter Tribble [EMAIL PROTECTED] wrote:
The *real* common thread is that you need ridiculous amounts
of memory to get decent performance out of ZFS
That's FUD. Older systems might not
Since Volume Management has control and eject didn't work, just turning
off Volume Management will do the trick.
# svcadm disable volfs
Now you can remove it safely.
Paul
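Putting this thread's pieces together, the assumed flow looks like this (device name hypothetical, taken from the earlier messages):

```shell
# Free the removable disk from volume management, then use it for ZFS.
svcadm disable volfs        # stop vold so it releases the device
zpool create alpha c5t0d0   # whole-disk pool on the USB drive
svcadm enable volfs         # optionally re-enable volume management afterwards
```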
Paul Gress wrote:
Since Volume Management has control and eject didn't work, just turning
off Volume Management will do the trick.
# svcadm disable volfs
Now you can remove it safely.
Paul
Thanks! It works. Volume management is that thing that does not exist
in zfs perhaps and made
On Mon, 16 Jun 2008, Andrius wrote:
Thanks! It works. Volume management is that thing that does not exist in
zfs perhaps and made disk management easier. Thanks to everybody for the
advice.
Volume Manager should be off before creating pools on removable disks.
Probably it will work to
Hi guys,
we are proposing to a customer a couple of X4500s (24 TB) used as NAS
(i.e. NFS servers).
Both servers will contain the same files and should be accessed by
different clients at the same time (i.e. they should both be active).
So we need to guarantee that both x4500s contain the same
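There is no built-in synchronous replication here; one common approximation is periodic zfs send/receive from one head to the other. This is only a sketch (dataset, host, and snapshot names are hypothetical), and it gives asynchronous copies, not true active-active access:

```shell
# Periodic one-way replication between the two X4500s (run on serverA).
# tank/nfs, serverB, and the snapshot names are hypothetical.
zfs snapshot tank/nfs@2008-06-17
zfs send -i tank/nfs@2008-06-16 tank/nfs@2008-06-17 | \
    ssh serverB zfs recv -F tank/nfs
```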
On Mon, 16 Jun 2008 20:04:47 +0100
Peter Tribble [EMAIL PROTECTED] wrote:
Hogwash. What is the reasonable minimum? I'm suspecting it's well
over 2G.
2GB is perfectly alright.
And as for being unable to get machines with less than 2G, just look
at Sun's price list
I'm not saying you can't
Why would you have to buy smaller disks? You can replace the 320s
with 1TB drives and after the last 320 is out of the raidgroup, it
will grow automatically.
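The replacement dance sketched, one disk at a time (device names are hypothetical; wait for each resilver to finish before moving on):

```shell
# Grow a vdev by swapping each 320GB disk for a 1TB one, one at a time.
zpool replace tank c1t0d0 c2t0d0   # first 320GB disk -> first 1TB disk
zpool status tank                  # wait until the resilver completes
zpool replace tank c1t1d0 c2t1d0   # ...repeat for each remaining 320GB disk
# After the last small disk leaves the vdev, capacity grows
# (some releases need an export/import for the new size to show).
```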
On 6/16/08, Miles Nordin [EMAIL PROTECTED] wrote:
Is RFE 4852783 (need for an equivalent to LVM2's pvmove) likely to
happen within
Bob Friesenhahn wrote:
On Mon, 16 Jun 2008, Andrius wrote:
Thanks! It works. Volume management is that thing that does not
exist in zfs perhaps and made disk management easier. Thanks to
everybody for the advice.
Volume Manager should be off before creating pools on removable disks.
On Mon, 16 Jun 2008, Andrius wrote:
After commenting
# kill -HUP 'pgrep vold'
kill: invalid id
We're in the 21st century, so
# pkill -HUP vold
should work just fine.
--
Rich Teer, SCSA, SCNA, SCSECA
CEO,
My Online Home Inventory
URLs: http://www.rite-group.com/rich
Andrius wrote:
Bob Friesenhahn wrote:
On Mon, 16 Jun 2008, Andrius wrote:
Thanks! It works. Volume management is that thing that does not
exist in zfs perhaps and made disk management easier. Thanks to
everybody for the advice.
Volume Manager should be off before creating pools in
Bob Friesenhahn wrote:
On Mon, 16 Jun 2008, Andrius wrote:
After commenting
# kill -HUP 'pgrep vold'
kill: invalid id
It looks like you used forward quotes rather than backward quotes.
I did just try this procedure myself with my own USB drive and it works
fine.
Bob
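The difference is plain shell quoting, not anything ZFS-specific: single quotes pass the literal text, while backquotes (or the equivalent $(...)) substitute the command's output. A small demo using echo in place of kill:

```shell
# Single quotes suppress command substitution: kill saw the literal
# string "pgrep vold" instead of a PID, hence "kill: invalid id".
echo 'pgrep vold'      # prints the literal text: pgrep vold
# Backquotes and $(...) run the command and splice in its output:
echo $(echo 4242)      # prints: 4242  (stands in for a real PID)
# So the working forms are:
#   kill -HUP `pgrep vold`
#   kill -HUP $(pgrep vold)
# or simply: pkill -HUP vold
```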
Andrius wrote:
That is true, but
# kill -HUP `pgrep vold`
usage: kill [ [ -sig ] id ... | -l ]
I think you already did this as per a previous message:
# svcadm disable volfs
As such, vold isn't running. Re-enable the service and you should be fine.
-Brian
Brian H. Nelson wrote:
Andrius wrote:
That is true, but
# kill -HUP `pgrep vold`
usage: kill [ [ -sig ] id ... | -l ]
I think you already did this as per a previous message:
# svcadm disable volfs
As such, vold isn't running. Re-enable the service and you should be fine.
-Brian
| I guess I find it ridiculous you're complaining about ram when I can
| purchase 4gb for under 50 dollars on a desktop.
|
| Its not like were talking about a 500 dollar purchase.
'On a desktop' is an important qualification here. Server RAM is
more expensive, and then you get to multiply it by
Remind me again what a Veritas license costs. If you can't find RAM for
less than that you need to find a new VAR/disti.
On 6/16/08, Chris Siebenmann [EMAIL PROTECTED] wrote:
| I guess I find it ridiculous you're complaining about ram when I can
| purchase 4gb for under 50 dollars on a
Matthew C Aycock wrote:
Well, I have a zpool created that contains four vdevs. Each vdev is a mirror
of a T3B LUN and a corresponding LUN of an SE3511 brick. I did this since I
was new with ZFS and wanted to ensure that my data would survive an array
failure. It turns out that I was smart
Tried zpool replace. Unfortunately that takes me back into the cycle where as
soon as the resilver starts the system hangs, not even CAPS Lock works. When I
reset the system I have about a 10 second window to detach the device again to
get the system back before it freezes. Finally detached it
Richard Elling wrote:
Matthew C Aycock wrote:
Well, I have a zpool created that contains four vdevs. Each vdev is a mirror
of a T3B LUN and a corresponding LUN of an SE3511 brick. I did this since I
was new with ZFS and wanted to ensure that my data would survive an array
failure. It
Hello,
I am new to OpenSolaris and am trying to set up a ZFS-based storage solution.
I am looking at setting up a system with the following specs:
Intel BOXDG33FBC
Intel Core 2 Duo 2.66Ghz
2 or 4 GB ram
For the drives I am looking at using a
LSI SAS3081E-R
I've been reading around and it
Aaron Moore wrote:
I am new to OpenSolaris and am trying to set up a ZFS-based storage
solution.
I am looking at setting up a system with the following specs:
Intel BOXDG33FBC Intel Core 2 Duo 2.66Ghz 2 or 4 GB ram
For the drives I am looking at using a LSI SAS3081E-R
I've been
60 matches