Hi, Roman
If you want to occupy a whole disk in the rootpool, you need at least
two disks in the system in that case.
Use c[m]t[n]d[p]s0 as the second device, assuming you've SMI-labeled it
and let s0 take up the entire space of that disk.
Good luck!
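A minimal sketch of that setup, assuming hypothetical device names
(c0t0d0 as the current root disk, c0t1d0 as the new one):

  # Copy the SMI label from the existing root disk to the new one, so
  # that s0 on both disks covers the same space (device names here are
  # made up; substitute your own c[m]t[n]d[p]):
  prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

  # Attach the new slice as a mirror of the existing root device:
  zpool attach rootpool c0t0d0s0 c0t1d0s0

  # On SPARC you'd also install the boot block on the new disk with
  # installboot; on x86, installgrub.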
Roman Morokutti wrote:
> Hi,
>
> I am new to ZFS and
Excellent news, Tim.
That utility will be handy and popular under SMF. Looking forward to
it.
Tim Foster wrote:
Hi all,
I put together the attached one-pager on the ZFS Automatic Snapshots
service which I've been maintaining on my blog to date.
I would like to see if this could be integ
Hi, Roman,
The disable-disk-cache option is a manual setting used in some
particular environments.
The ZIL defaults to 'on', regardless of whether it's a regular pool or
the rootpool.
CR #6648965 might also be related to this. It talks about
slog/l2cache/spare devices that should
be able to be supported in
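For reference, a sketch of the two manual knobs being discussed; both
are assumptions about what's meant here, and neither is recommended
outside of testing:

  # ZIL: on this era of Solaris/OpenSolaris it can be disabled
  # system-wide via /etc/system (effective after reboot); this trades
  # crash safety for speed, so it is a test-only setting:
  #   set zfs:zil_disable = 1

  # Per-disk write cache: toggled interactively from the expert menu
  # of format(1M):
  #   format -e   ->  cache  ->  write_cache  ->  disable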
----- Original Message -----
> From: Marion Hakanson <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Cc: zfs-discuss@opensolaris.org
> Sent: Friday, February 1, 2008 1:01:46 PM
> Subject: Re: [zfs-discuss] Un/Expected ZFS performance?
>
> [EMAIL PROTECTED] said:
> > . . .
> > ZFS filesystem [on Stora
You'd have to go back and read my previous thread; I spent about six
weeks trying to find a solution using ZFS and quotas, with a mind to
directly replacing the NetApps we have. It just cannot be done (yet).
The closest would be to use mirror mounts, but that would require
upgrading all 500 ser
Darren J Moffat wrote:
> Dave Lowenstein wrote:
>> Nope, doesn't work.
>>
>> Try presenting one of those lun snapshots to your host, run cfgadm -al,
>> then run zpool import.
>>
>> #zpool import
>> no pools available to import
>
> Does format(1M) see the luns ? If format(1M) can't see them it is
> unlikely that ZFS
[EMAIL PROTECTED] said:
> FYI, you can use the '-c' option to compare results from various runs and
> have one single report to look at.
That's a handy feature. I've added a couple of such comparisons:
http://acc.ohsu.edu/~hakansom/thumper_bench.html
Marion
On Mon, 4 Feb 2008, Darren J Moffat wrote:
> At this time the libzfs C interfaces are not stable public documented
> interfaces so there are no Perl bindings for them either.
>
> The commands are the only stable and documented interfaces to ZFS at this
> time.
Perhaps not stable, but it's hard to
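Since the commands are the stable interface, scripts (Perl or
otherwise) generally shell out to them and parse the output. A minimal
sketch in plain shell, with hypothetical dataset names:

  #!/bin/sh
  # -H suppresses headers and emits tab-separated columns, which is
  # the script-friendly output mode of the zfs command:
  zfs list -H -o name,used,avail,mountpoint |
  while read name used avail mnt; do
      echo "dataset $name: $used used, $avail free, mounted at $mnt"
  done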
We're looking at building out several ZFS servers, and are considering an
x86 platform vs a Sun 5520 as the base platform. Any comments from the
floor on comparative performance as a ZFS server? We'd be using the LSI
3801 controllers in either case.
Hi,
On Sat, 12 Jan 2008, Alan Romeril wrote:
> Hello All,
> In a moment of insanity I've upgraded from a 5200+ to a Phenom 9600 on my
> zfs server and I've had a lot of problems with hard hangs when accessing the
> pool.
> The motherboard is an Asus M2N32-WS, which has had the latest availab
I've got ZFS running on Solaris s10x_u3wos_10 X86 on a v40z, which has
two PCI SCSI controllers, each connected to its own external HP
disk array (MSA30) with 7 disks + a hot spare.
Both controllers are:
LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI
The disks are a mix o
On Feb 4, 2008 4:37 PM, Robin Guo <[EMAIL PROTECTED]> wrote:
> If you use a whole disk for a rootpool, you must use a slice notation
> (e.g. c0d0s0) so that it is labeled with an SMI label.
Will ZFS recognize that it has the whole disk at this point (and thus
leave cache enabled on it) or not? Man
Try it, it doesn't work.
Format sees both but you can't import a clone of pool "u001" if pool
"u001" is already imported, even by giving it a new name.
Darren J Moffat wrote:
> Dave Lowenstein wrote:
>> Nope, doesn't work.
>>
>> Try presenting one of those lun snapshots to your host, run cfgad
Andrew Robb writes:
> The big problem that I have with non-directio is that buffering delays
> program execution. When reading/writing files that are many times
> larger than RAM without directio, it is very apparent that system
> response drops through the floor; it can take several minutes f
On Mon, Feb 04, 2008 at 03:14:15PM +, Tim Foster wrote:
> Filesystems are grouped together either by setting their names as a
> space separated list in an SMF instance property, or queried dynamically
SMF supports multi-valued properties. I think you should use that,
rather than d
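A sketch of what a multi-valued SMF property could look like; the
service FMRI and property names below are made up for illustration:

  # Store the filesystem list as a multi-valued astring property:
  svccfg -s svc:/system/zfs-auto-snapshot:default \
      setprop config/filesystems = astring: ("tank/home" "tank/projects")

  # Refresh so the instance sees the change, then read the values back:
  svcadm refresh svc:/system/zfs-auto-snapshot:default
  svcprop -p config/filesystems svc:/system/zfs-auto-snapshot:default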
Hi, Roman
You can use 'zpool attach' to attach a mirror to it, but you cannot
'zpool add' a new slice to it.
The rootpool can be a single disk device, a device slice, or a
mirrored configuration.
If you use a whole disk for a rootpool, you must use slice notation
(e.g. c0d0s0) so that it
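To illustrate the attach-versus-add distinction (pool and slice names
are hypothetical):

  # Attaching a slice as a mirror of the existing root device works:
  zpool attach rootpool c0d0s4 c1d0s4

  # Adding a slice as a separate top-level vdev is rejected for the
  # root pool, since the boot pool can't be a striped configuration;
  # expect an error here (exact wording varies by release):
  zpool add rootpool c0d0s5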
Tim;
Excellent work. This is one great feature that should have been
implemented in ZFS long ago.
I also recommend integrating this functionality with the ZFS GUI. And a
global manager that handles snapshots of multiple servers would be a
system admin's dream.
Keep up the good wo
Hi all,
I put together the attached one-pager on the ZFS Automatic Snapshots
service which I've been maintaining on my blog to date.
I would like to see if this could be integrated into ON and believe that
a first step towards this is a project one-pager: so I've attached a
draft version.
I'm ha
Dave Lowenstein wrote:
> Nope, doesn't work.
>
> Try presenting one of those lun snapshots to your host, run cfgadm -al,
> then run zpool import.
>
>
> #zpool import
> no pools available to import
Does format(1M) see the luns ? If format(1M) can't see them it is
unlikely that ZFS will either
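A sketch of the verification sequence being suggested here (the pool
and clone names are hypothetical):

  # Rescan and list attachment points so the new LUNs show up:
  cfgadm -al

  # Confirm format(1M) can see the presented LUNs before involving ZFS:
  echo | format

  # Then look for importable pools. Note that a clone of a pool that is
  # already imported carries the same pool GUID, so the import can fail
  # even when you supply a new name:
  zpool import
  zpool import u001 u001_clone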
Jorgen Lundman wrote:
> If we were to get two x4500s, with the idea of keeping one as a passive
> standby (serious hardware failure) are there any clever solutions in
> doing so?
>
> We cannot use ZFS itself, but rather zpool volumes, with UFS on top. I
Why can't you use ZFS filesystems and i
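For context, the zvol-plus-UFS arrangement being described looks
roughly like this (pool, volume name, size, and mount point are
hypothetical):

  # Create a 100 GB ZFS volume (zvol) in the pool:
  zfs create -V 100g tank/vol01

  # Put UFS on the raw zvol device and mount the block device:
  newfs /dev/zvol/rdsk/tank/vol01
  mkdir -p /export/vol01
  mount /dev/zvol/dsk/tank/vol01 /export/vol01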
Jan Dreyer wrote:
> Hi,
>
> this may be a perl question more than a zfs question, but anyway:
> are there any perl modules hanging around to access the zfs
> administrative commands?!
> I wish to write some scripts to do some scheduled jobs with our ZFS
> systems; preferably in perl. But I found
Just another thought. After setting up a ZFS root on
slice c0d0s4, it should be possible, after booting
into it, to add the remaining slices to the created
ZFS pool. Is this possible?
Roman