> [EMAIL PROTECTED] said:
> > But I don't see how copying a label will do any good. Won't that just
> > confuse ZFS and make it think it's talking to one of the other disks?
>
> No, the disk label doesn't contain any ZFS info, it just tells the disk
> drivers (scsi_vhci, in this case) where the
Hi
I have a Supermicro PDSGE with an AMI BIOS, a UFS boot disk connected via
IDE, and a ZFS pool comprising 4 x 400GB whole disks on the SATA ports.
After I created the ZFS pool, the BIOS freezes after the SATA disks are
detected; this is because the EFI label is not recognised by the BIOS,
[EMAIL PROTECTED] said:
> But I don't see how copying a label will do any good. Won't that just
> confuse ZFS and make it think it's talking to one of the other disks?
No, the disk label doesn't contain any ZFS info, it just tells the disk
drivers (scsi_vhci, in this case) where the disk slices
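For what it's worth, the difference is easy to see from the command line (the
disk name below is just a placeholder): the slice layout lives in the VTOC/EFI
label that prtvtoc prints, while ZFS keeps its own labels inside the vdev,
which zdb reads:
  # prtvtoc /dev/rdsk/<disk>s0     (slice table from the disk label)
  # zdb -l /dev/dsk/<disk>s0       (ZFS's own vdev labels inside the slice)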
On Tue, Aug 21, 2007 at 03:55:59PM -0700, Marion Hakanson wrote:
> [EMAIL PROTECTED] said:
> > With this OS version, format is giving lines such as:
> > 9. c2t2104D9600099d0
> > /[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1077,[EMAIL
> > PROTECTED]/[EMAIL PROTECTED],0/[EMAIL
> If the disk/LUN was really not reformatted, you might be able to restore
> the label by first converting the disk to EFI, and then copying a label
> from one of the working disks.
Well, you might get somewhere by putting an EFI label on (because then
the top of the data portion will move down th
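In practice the mechanics would look roughly like this (a sketch only; the
good-disk name is a placeholder, and I'd verify that prtvtoc and fmthard
handle EFI labels on your release before relying on it -- format -e lets you
relabel by hand either way):
  # format -e c2t2104D9600099d0       (run "label" and choose EFI)
  # prtvtoc /dev/rdsk/<good_disk>s0 | fmthard -s - /dev/rdsk/c2t2104D9600099d0s0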
[EMAIL PROTECTED] said:
> With this OS version, format is giving lines such as:
> 9. c2t2104D9600099d0
> /[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1077,[EMAIL
> PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0
> whereas, again to my recollection, previously the drive man
> On Tue, Aug 21, 2007 at 02:56:11PM -0700, Eric Schrock wrote:
> > What does 'zdb -l /dev/dsk/<device>s0' show for each device?
>
> bash-3.00# zdb -l /dev/dsk/c2t2104D9600099d0s0
> cannot open '/dev/dsk/c2t2104D9600099d0s0': I/O error
> bash-3.00# zdb -l /dev/dsk/c2t2104D9600099d0s1
> cannot
On Tue, Aug 21, 2007 at 03:09:47PM -0700, Eric Schrock wrote:
> There are no ZFS-recognizable labels on this device. Did you explicitly
> create it on slice 2? From the looks of things, it seems like your disk
> label is corrupt...
Not at all (to my recollection, I specified the devices as
"c2t2
There are no ZFS-recognizable labels on this device. Did you explicitly
create it on slice 2? From the looks of things, it seems like your disk
label is corrupt...
- Eric
On Tue, Aug 21, 2007 at 05:00:37PM -0500, Jeff Bachtel wrote:
> On Tue, Aug 21, 2007 at 02:56:11PM -0700, Eric Schrock wrote
On Tue, Aug 21, 2007 at 02:56:11PM -0700, Eric Schrock wrote:
> What does 'zdb -l /dev/dsk/<device>s0' show for each device?
bash-3.00# zdb -l /dev/dsk/c2t2104D9600099d0s0
cannot open '/dev/dsk/c2t2104D9600099d0s0': I/O error
bash-3.00# zdb -l /dev/dsk/c2t2104D9600099d0s1
cannot open '/dev/dsk
What does 'zdb -l /dev/dsk/<device>s0' show for each device?
- Eric
On Tue, Aug 21, 2007 at 04:45:40PM -0500, Jeff Bachtel wrote:
> On Tue, Aug 21, 2007 at 01:39:26PM -0700, Richard Elling wrote:
> > Jeff Bachtel wrote:
> > >On Tue, Aug 21, 2007 at 09:38:17AM -0700, Richard Elling wrote:
> > >>can you se
On Tue, Aug 21, 2007 at 01:39:26PM -0700, Richard Elling wrote:
> Jeff Bachtel wrote:
> >On Tue, Aug 21, 2007 at 09:38:17AM -0700, Richard Elling wrote:
> >>can you see the LUN in format?
> >
> >Yes. Additionally, I can dd from the rdsk device to a file, and get
> >enough data to confirm that the d
Mario Goebbels wrote:
>> There are however a few cases where it will not be optimal. Eg, 129k files
>> will use up 256k of space. However, you can work around this problem by
>> turning on compression.
>
> Doesn't ZFS pack the last block into a multiple of 512 bytes?
Unfortunately, not yet.
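For what it's worth, turning compression on is just a property change (a quick
sketch; the dataset name is made up, and existing files only get recompressed
when they are rewritten):
  # zfs set compression=on tank/images
  # zfs get compression,compressratio tank/images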
Sort of... I couldn't even do a prtvtoc /dev/rdsk/c0t3d0s2. I found
that if I exported the pool I could view the partition table with format
or prtvtoc, etc. but no luck doing a newfs on the disk as it complained
about a ZFS filesystem being on it, just like the attach command did.
I ended up re
Have you tried to "blank" out c0t3d0s2 using dd and zeros?
Btw, "zpool attach -f zpol01 ..." won't work ;) (zpol01 = zpool01?)
On 8/21/07, Alderman, Sean <[EMAIL PROTECTED]> wrote:
>
> I'm looking for ideas to resolve the problem below…
>
> # zpool attach -f zpol01 c0t2d0 c0t3d0
> invalid vd
Hello,
In this post I will try to gather the information about the configuration
I'm deploying, thinking that it should be useful for somebody else.
The objective of my tests is:
High Availability services with ZFS/NFS on Solaris 10 using a two-node Sun
Cluster.
The scenarios are
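To give an idea of the shape such a setup usually takes (a sketch only; the
resource and group names are invented, and the exact steps depend on the Sun
Cluster release -- HAStoragePlus is what imports the zpool on the active node):
  # clresourcetype register SUNW.HAStoragePlus
  # clresourcegroup create nfs-rg
  # clresource create -g nfs-rg -t SUNW.HAStoragePlus -p Zpools=zpool01 hasp-rs
  # clresourcegroup online -M nfs-rg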
I'm looking for ideas to resolve the problem below...
# zpool attach -f zpol01 c0t2d0 c0t3d0
invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c0t3d0s0 is part of an active ZFS pool on zpool01. Please see
zpool(1M)
# zpool status
pool: zpool01
state: ONLINE
s
The OpenSolaris ZFS FAQ is here:
http://www.opensolaris.org/os/community/zfs/faq
Other resources are listed here:
http://www.opensolaris.org/os/community/zfs/links/
Cindy
Brandorr wrote:
> P.S. - Is there a ZFS FAQ somewhere?
>
On 8/21/07, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
> Brandorr wrote:
> > Is ZFS efficient at handling huge populations of tiny-to-small files -
> > for example, 20 million TIFF images in a collection, each between 5
> > and 500k in size?
>
> Do you mean efficient in terms of space used? If so,
On Tue, Aug 21, 2007 at 02:39:00PM +0200, Mario Goebbels wrote:
> > There are however a few cases where it will not be optimal. Eg, 129k files
> > will use up 256k of space. However, you can work around this problem by
> > turning on compression.
>
> Doesn't ZFS pack the last block into one of
Yaniv Aknin wrote:
> It looks to me a lot like a conditional program flow (first we calculate
> the duration, then we commit() the speculation if duration is > limit and
> discard() it otherwise) rather than discrete probes that fire
> independently. I read the manual as saying that conditional f
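For reference, the usual shape of that commit/discard pattern (a minimal,
untested sketch -- the read(2) probes and the 10ms limit are only placeholders):
syscall::read:entry
{
        self->ts = timestamp;
        self->spec = speculation();
}

syscall::read:return
/self->spec/
{
        speculate(self->spec);
        printf("%s: read took %d ns\n", execname, timestamp - self->ts);
}

syscall::read:return
/self->spec && timestamp - self->ts > 10000000/
{
        commit(self->spec);
        self->spec = 0;
        self->ts = 0;
}

syscall::read:return
/self->spec/
{
        discard(self->spec);
        self->spec = 0;
        self->ts = 0;
}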
Ralf,
> Torrey McMahon wrote:
>> AVS?
>>
> Jim Dunham will probably shoot me, or worse, but I recommend thinking
> twice about using AVS for ZFS replication.
That is why they call this a discussion group: it encourages
differing opinions,
> Basically, you only have a
> few options:
>
> 1)
> Is ZFS efficient at handling huge populations of tiny-to-small files -
> for example, 20 million TIFF images in a collection, each between 5
> and 500k in size?
>
> I am asking because I could have sworn that I read somewhere that it
> isn't, but I can't find the reference.
It depends, what typ
> There are however a few cases where it will not be optimal. Eg, 129k files
> will use up 256k of space. However, you can work around this problem by
> turning on compression.
Doesn't ZFS pack the last block into a multiple of 512 bytes?
If not, it's a surprise that there isn't a pseudo-com
Torrey McMahon wrote:
> AVS?
>
Jim Dunham will probably shoot me, or worse, but I recommend thinking
twice about using AVS for ZFS replication. Basically, you only have a
few options:
1) Using a battery buffered hardware RAID controller, which leads to
bad ZFS performance in many cases,
2)
Brandorr wrote:
> Is ZFS efficient at handling huge populations of tiny-to-small files -
> for example, 20 million TIFF images in a collection, each between 5
> and 500k in size?
>
> I am asking because I could have sworn that I read somewhere that it
> isn't, but I can't find the reference.
>
I
hello Robert,
I already tried this :( When you mount the fs, you need to set the zoned param
to off. If you mount the dataset and then set the zoned param to on, the
mountpoint disappears :(
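Roughly the sequence being described, I think (the dataset name is made up):
  # zfs set zoned=off zpool01/data
  # zfs set mountpoint=/export/data zpool01/data
  # zfs set zoned=on zpool01/data     (after this, the mountpoint is no longer
                                       honoured in the global zone, as above)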
Kind regards,
Christophe
Thanks Michael.
Here is my (slightly corrected) version of the script, after I've invested a bit
of time in the DTrace manual (I always knew it was powerful, but it's more than
that...).
It appears to run, and when I lowered the limit I started getting
results, but I'd appreciate it if
Hello christophe,
Thursday, August 16, 2007, 8:53:31 AM, you wrote:
c> Hello,
c> is there a way to add a zfs dataset to a zone online?
c> The only info I found is to configure the dataset in the zone via zonecfg
then reboot the zone.
c> I would like to be able to add new separated dataset for