Re: [zfs-discuss] Resilver misleading output

2010-12-14 Thread Lin Ling

On Dec 14, 2010, at 1:58 AM, Giovanni Tirloni wrote:

> 
> 
> On Tue, Dec 14, 2010 at 6:34 AM, Bruno Sousa wrote:
> Hello everyone,
> 
> I have a pool consisting of 28 1TB SATA disks configured as 15 mirrored
> vdevs (2 disks per mirror), plus 2 SSDs mirrored for the ZIL and 3 SSDs
> for L2ARC, and recently I added two more disks.
> For some reason the resilver process kicked in, and the system is
> noticeably slower, but I'm clueless as to what I should do, because zpool
> status says that the resilver process has finished.
> 
> This system is running opensolaris snv_134, has 32GB of memory, and here's
> the zpool output
> 
> zpool status -xv vol0
>  pool: vol0
>  state: ONLINE
> status: One or more devices is currently being resilvered.  The pool will
>continue to function, possibly in a degraded state.
> action: Wait for the resilver to complete.
>  scrub: resilver in progress for 13h24m, 100.00% done, 0h0m to go
> config:
> 
> zpool iostat snip
> 
mirror-12                ONLINE   0 0 0
  c8t5000C5001A11A4AEd0  ONLINE   0 0 0
  c8t5000C5001A10CFB7d0  ONLINE   0 0 0  1.71G resilvered
mirror-13                ONLINE   0 0 0
  c8t5000C5001A0F621Dd0  ONLINE   0 0 0
  c8t5000C50019EB3E2Ed0  ONLINE   0 0 0
mirror-14                ONLINE   0 0 0
  c8t5000C5001A0F543Dd0  ONLINE   0 0 0
  c8t5000C5001A105D8Cd0  ONLINE   0 0 0
mirror-15                ONLINE   0 0 0
  c8t5000C5001A0FEB16d0  ONLINE   0 0 0
  c8t5000C50019C1D460d0  ONLINE   0 0 0  4.06G resilvered
> 
> 
> Any idea for this type of situation?
> 
> 
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6899970


If you have snapshot deletion/creation ongoing, then see 6981250/6891824.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
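When the scrub line reports 100.00% done yet the resilver never clears (the symptom of bug 6899970 linked above), it can help to log the reported percentage over time and see whether it actually moves. A minimal sketch that extracts the percentage from the scrub line; it parses the sample output quoted above rather than querying a live pool:

```shell
#!/bin/sh
# Pull the completion percentage out of the "scrub:" line of zpool status.
# On a live system you would pipe in `zpool status vol0`; here we parse the
# sample line quoted in this thread.
sample='  scrub: resilver in progress for 13h24m, 100.00% done, 0h0m to go'

pct=$(printf '%s\n' "$sample" | awk '/resilver in progress/ {
    for (i = 1; i <= NF; i++)
        if ($i == "done,") { sub(/%$/, "", $(i - 1)); print $(i - 1) }
}')
echo "resilver percent: $pct"
```

Logging this every few minutes (e.g. from cron) makes it obvious whether the resilver is genuinely stuck at 100.00% or still making progress.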


Re: [zfs-discuss] Is there any way to stop a resilver?

2010-09-29 Thread Lin Ling

What caused the resilvering to kick off in the first place?

Lin

On Sep 29, 2010, at 8:46 AM, LIC mesh wrote:

> It's always been running for less than an hour.
> 
> It usually starts at around a 300,000h estimate (at 1m in), goes up to an
> estimate in the millions (about 30 mins in), and restarts.
> 
> It never gets past 0.00% completion, and K resilvered on any LUN.
> 
> 64 LUNs, 32x5.44T, 32x10.88T in 8 vdevs.
> 
> 
> 
> 
> On Wed, Sep 29, 2010 at 11:40 AM, Scott Meilicke wrote:
> Has it been running long? Initially the numbers are way off. After a while it 
> settles down into something reasonable.
> 
> How many disks, and what size, are in your raidz2?  
> 
> -Scott
> 
> 
> On 9/29/10 8:36 AM, "LIC mesh" wrote:
> 
> Is there any way to stop a resilver?
> 
> We gotta stop this thing - at minimum, completion time is 300,000 hours, and 
> maximum is in the millions.
> 
> Raidz2 array, so it has the redundancy, we just need to get data off.
> 
> 
> 
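A resilver that never gets past 0.00% while its estimate swings into the millions of hours is usually one that keeps restarting rather than one that is slow. One way to confirm this from the outside is to sample the scrub line periodically and check whether the elapsed-time field ever goes backwards. A rough sketch against two hard-coded sample lines (the format is modeled on the zpool status output quoted in these threads, not captured from the poster's system):

```shell
#!/bin/sh
# Detect a restarting resilver: sample the "scrub:" line twice and compare
# the elapsed times.  If the elapsed time decreases, the resilver restarted.
first='  scrub: resilver in progress for 0h47m, 0.00% done, 300000h0m to go'
second='  scrub: resilver in progress for 0h03m, 0.00% done, 2100000h0m to go'

elapsed_minutes() {
    # Convert the "XhYm," elapsed field (6th word) to minutes.
    printf '%s\n' "$1" | awk '{ split($6, t, /[hm,]/); print t[1] * 60 + t[2] }'
}

e1=$(elapsed_minutes "$first")
e2=$(elapsed_minutes "$second")
if [ "$e2" -lt "$e1" ]; then
    echo "resilver restarted (elapsed dropped from ${e1}m to ${e2}m)"
fi
```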


Re: [zfs-discuss] rquota didnot show userquota (Solaris 10)

2009-12-11 Thread Lin Ling


This is CR 6907830: rquotad(1M) doesn't return quotas for ZFS if the NFS
client mountpoint differs from the entry in /etc/mnttab.


Fix is in progress.
Thanks,
Lin

On 11/26/09 04:59, Willi Burmeister wrote:

Hi,

we have a new fileserver running on X4275 hardware with Solaris 10U8.

On this fileserver we created one test dir with a quota and mounted it
on another Solaris 10 system. There the quota command did not show the
used quota. Does this feature only work with OpenSolaris, or is it
intended to work on Solaris 10 as well?


Here what we did on the server:

# zfs create -o mountpoint=/export/home2 zpool1/home
# zfs set sharenfs=rw=sparcs zpool1/home
# zfs set userquota@wib=1m zpool1/home

# mkdir /export/home2/wib
# cp  /export/home2/wib
# chown -Rh wib:sysadmin /export/home2/wib

# zfs userspace zpool1/home
TYPE        NAME   USED  QUOTA
POSIX User  root     3K   none
POSIX User  wib    154K     1M


# quota -v wib
Disk quotas for wib (uid 90):
Filesystem usage  quota  limit  timeleft  files  quota  limit  timeleft
/export/home2
              154   1024   1024      -      -      -      -      -

and the client:

# mount :/export/home2/wib /mnt

% cd /mnt
% du -sk .
154 .

% quota -v wib
Disk quotas for wib (uid 90):
Filesystem usage  quota  limit  timeleft  files  quota  limit  timeleft


A simple snoop on the network shows us:

  client -> server   PORTMAP C GETPORT prog=100011 (RQUOTA) vers=1 proto=UDP
  server -> client   PORTMAP R GETPORT port=32865
  client -> server   RQUOTA C GETQUOTA Uid=90 Path=/export/home2/wib
  server -> client   RQUOTA R GETQUOTA No quota

Why 'no quota'?

Both systems are nearly fully patched.


Any help is appreciated. Thanks in advance.

Willi
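For what it's worth, CR 6907830 cited at the top of this thread comes down to a path lookup: rquotad matches the path the client sends against entries in /etc/mnttab, and the snoop trace shows the client asking for /export/home2/wib while mnttab only lists /export/home2. A toy illustration of that mismatch, using a hard-coded sample mnttab line (the lookup logic is a sketch of the failure mode, not rquotad's actual source):

```shell
#!/bin/sh
# Simulate rquotad's mnttab lookup: the shared dataset is mounted at
# /export/home2, so a query for /export/home2/wib finds no entry and the
# daemon answers "No quota".
mnttab=$(printf 'zpool1/home\t/export/home2\tzfs\trw\t1259000000')

lookup() {
    printf '%s\n' "$mnttab" | awk -F'\t' -v path="$1" '
        $2 == path { found = 1; print "quota lookup ok for " path }
        END        { if (!found) print "No quota for " path }'
}

lookup /export/home2        # matches the mnttab entry
lookup /export/home2/wib    # the path the client sent in the snoop trace
```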



Re: [zfs-discuss] ZFS userquota groupquota test

2009-05-20 Thread Lin Ling


Jorgen,

"quota user1" only prints out information if user1's quota is exceeded.

Try "quota -v user1".

e.g.

(server)
suzuki# zfs set userquota@lling=10m foo/fs
suzuki# share
-@foo/fs        /foo/fs   rw   ""



(client)
headers# quota -v lling
Disk quotas for lling (uid 23498):
Filesystem usage  quota  limit  timeleft  files  quota  limit  timeleft

/net/suzuki/foo/fs
                0  10240  10240      -      -      -      -      -


headers# quota -v 23498
Disk quotas for lling (uid 23498):
Filesystem usage  quota  limit  timeleft  files  quota  limit  timeleft

/net/suzuki/foo/fs
                0  10240  10240      -      -      -      -      -



On 05/20/09 09:29, Matthew Ahrens wrote:

Jorgen Lundman wrote:


I have been playing around with osol-nv-b114 version, and the ZFS 
user and group quotas.


First of all, it is fantastic. Thank you all! (Sun, Ahrens and anyone 
else involved).


Thanks for the feedback!

I was unable to get ZFS quota to work with rquota. (Ie, NFS mount the 
volume on another server, and issue "quota 1234". It returns nothing).


This should work, at least on Solaris clients.  Perhaps you can only 
request information about yourself from the client?


--matt



Re: [zfs-discuss] [Fwd: ZFS user/group quotas & space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-04-27 Thread Lin Ling


On 04/27/09 14:13, Olga Kryzhanovskaya wrote:

Will this work with Linux rquota clients, too?

Olga
  


It should, yes.

The ZFS userquota support for rquotad (CR 6824968) went into snv_114.
It uses the same rquotad protocol. As long as the client can talk to
rquotad, it will receive the usage/quota/limit values (but not the
timeleft/files/fquota/flimit fields, which are not applicable).


Lin


Re: [zfs-discuss] ZFS: Log device for rpool (/ root partition) not supported?

2009-01-08 Thread Lin Ling

This is bug 6727463.

On 01/07/09 13:49, Robert Bauer wrote:
> Why is it impossible to have a ZFS pool with a log device for the rpool
> (the device used for the root partition)?
> Is this a bug?
> I can't boot a ZFS root (/) on a zpool which also uses a log device. Maybe
> it's not supported because grub would then need to support it too?


Re: [zfs-discuss] Possible ZFS Bug - Causes OpenSolaris Crash

2007-10-15 Thread Lin Ling

Hi Duff,

The OpenSolaris bug reporting system is not very robust yet; the team is
aware of this and plans to improve it.
So the bugs you filed might have been lost.

I have filed bug 6617080 for you.  You should be able to see it through
bugs.opensolaris.org tomorrow.
I will contact Larry to get the core file for the bug.

Thanks,
Lin

J Duff wrote:
> I've tried to report this bug through the http://bugs.opensolaris.org/ site 
> twice. The first time on September 17, 2007 with the title "ZFS Kernel Crash 
> During Disk Writes (SATA and SCSI)". The second time on September 19, 2007 
> with the title "ZFS or Storage Subsystem Crashes when Writing to Disk". After 
> initial entry of the bug and confirmation screen, I've never heard anything 
> back. I've search the bug database repeatedly looking for the entry and a 
> corresponding bug ID. I've found nothing familiar.
>
> Larry (from the sd group?) requested I upload the corefile which I did, but I 
> haven't heard from him again.
>
> It would be good if an email were sent to the submitter of a bug indicating 
> the state of the submission. If for some reason it was filtered out, or is in 
> a hold state for a long period of time, the email would be reassuring.
>
> This is a serious bug which causes a crash during heavy disk writes. We 
> cannot complete our quality testing as long as this bug remains. Thanks for
> your interest.
>
> Duff
>  
>  
> This message posted from opensolaris.org


Re: [zfs-discuss] ZFS Boot for Solaris SPARC

2007-08-13 Thread Lin Ling

Hi Paul,

ZFS Boot on SPARC is not available yet.
We are aiming to have it in the onnv gate in a couple of months.

Thanks,
Lin


Paul Lippai wrote:

> Hi,
>
> Searching this alias I can find a number of guides and scripts that
> describe the configuration of Solaris to boot from a ZFS rootpool.
>
> However, these guides appear to be Solaris 10 x86 specific.
>
> Is the creation of a ZFS boot disk available for Solaris SPARC ?
>
> If so, could you point me in the direction of where I can obtain details
> of this new feature.
>
> Thanks and Regards,
>
> Paul.
>
> PS: Please email me directly as I am not subscribed to this alias. Thanks.
> -- 
>   * Paul Lippai *
>
> *Sun Microsystems, Inc.*
> University of Warwick Science Park,
> Millburn Hill Road, Coventry CV4 7HS. UK
> Mobile +44 (0)7803-229-452
> Email [EMAIL PROTECTED]
> *http://uk.sun.com/proactive-services*
>
>
>


Re: [zfs-discuss] ZFS Boot != ZFS Root

2007-07-02 Thread Lin Ling
Jesse Hallio wrote:

>I've been trying to upgrade to snv66, and I've been using a single-disk ZFS
>pool (not the whole 8-disk pool, more about that in another post). I installed
>snv66 in Parallels and ufsdumped the filesystem to the one-disk pool. The
>one-disk pool also has a snv55b install that I can boot into.
>  
>

Do you mean you have an old mountroot zfs root running snv55b, and now you
are trying to upgrade it to snv66?

If so, take a look at
http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/mntroot-transition/

>(I'm not sure if the new ZFS boot's single-disk-pools-only limitation still
>applies at this point.)
>
>  
>

Yes, the limitation still applies: single disk or mirrored config, and 
SMI label only.

Lin



Re: [zfs-discuss] Re: Re: Re: Re: ZFS Boot manual setup in b65

2007-06-13 Thread Lin Ling


Douglas Atique wrote:

Now that I know *what*, could you perhaps explain to me *why*? I understood
zpool import and export operations much like mount and unmount: maybe some
checks on the integrity of the pool and updates to some structure in the OS
to maintain the imported/exported state of that pool. But now I suspect this
state information is in fact maintained in the pool itself. Does this make
sense?

  


In short, once a pool is exported, it's not available/visible for live
usage, even after reboot.



In that case, may I suggest that you add a note to the manual 
(http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/) stating that 
the pool should not be exported prior to booting off it?

  


Done.


Thanks for your help!


You are welcome.

Lin


Re: [zfs-discuss] Re: Re: Re: ZFS Boot manual setup in b65

2007-06-12 Thread Lin Ling


Douglas Atique wrote:

Right. But can I generate them automatically somehow on the next boot? I have
followed the instructions that loop-mount / and tar up the contents of
/devices and /dev and untar them to the root pool. I just want to know if
there is an alternative way to do it. For example, what if I add some new
hardware after I have switched to ZFS for my root fs? How will it be added to
/devices and /dev? Couldn't the same principle be applied to generate all of
these directories on the first boot off the ZFS root pool?

  


Once you switch over to zfs root, adding new hardware should just behave
as you expect on ufs root.
Copying /devices and /dev is just a one-time step (part of the
'installation') to set up the initial zfs root.



Sorry, I didn't make myself clear. When grub is installed on a ZFS pool (no
matter whether I installgrub to my c0d0s5 or create a ZFS pool on my c0d0s0
and installgrub to it) the GRUB menu is not shown. Ever. Just installing GRUB
back on a UFS slice makes it work again. This "doesn't work" refers to the
difficulties with installing GRUB on a ZFS pool and having it work
successfully. Apparently the only trouble is with the video, as the menu is
not shown but the options remain functional (if I remember the order by
heart, that is). This is a different problem from my panic on boot off the
ZFS pool in c0d0s5, though.

  


'installgrub' will always put grub at the same disk location (the first 3
cylinders), not on a ZFS pool.

You only need to run installgrub when you want new grub bits on the disk.
Since you are using s0 as the default menu.lst location, you should
always installgrub for s0:


# installgrub new-stage1 new-stage2 /dev/rdsk/c0d0s0

At this point, my suggestion would be:

1. boot up s4 (SXCEb65)
2. destroy the ZFS rootpool/rootfs on s5
3. use TimS's script to set up the ZFS rootpool/rootfs on s5 from SXCEb65 (s4)

If this works, switch to using your s0 for menu control:
1. boot up the ZFS root
2. # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d0s0
3. in the s0 menu.lst, add the ZFS entry

If this works, then you can start playing around.

Lin



Re: [zfs-discuss] Re: Re: ZFS Boot manual setup in b65

2007-06-11 Thread Lin Ling


Hi Doug,

I need more information:
   You need /devices and /dev on the zfs root to boot.  Not sure what you
mean by 'it doesn't work'?

   What OS version is running on your boot slice (s0)?
   Is this where your zfs root pool (s5) was built from?

'installgrub new-stage1 new-stage2 /dev/rdsk/c0d0s0' puts the new grub on
c0d0 (with the boot device pointing to s0), and that should be sufficient
for your case.
'-m' writes grub to the MBR; you don't need to do that. Not sure what the
real impact is, but it might be ok in your case.

Lin

Douglas Atique wrote:

Hi Doug, from the information I read so far, I assume
you have

c0d0s0  - ufs root
c0d0s5  - zfs root pool 'snv' and root filesystem
'b65'



Hi Lin,
My complete layout follows:
c0d0s0: boot slice (holds a manually maintained /boot) -- UFS
c0d0s1: the usual swap slice
c0d0s3: S10U3 root -- UFS
c0d0s4: SXCE root -- UFS
c0d0s5: "snv" pool -- ZFS latest version
c0d0s6: an experimental "boot from sources the hard way" slice -- UFS
c0d0s7: "common" pool, mounted on /export -- ZFS older version

My menu.lst entries all use "root" to direct grub to one of the slices: 
c0d0s3 --> (hd0,0,d),

c0d0s4 --> (hd0,0,e),
c0d0s5 --> (hd0,0,f),
c0d0s6 --> (hd0,0,g).

  

installgrub on c0d0s0 puts grub on the same disk as
c0d0s5, but 
indicates which slice is the default boot slice.
So, once you default boot from c0d0s5, you should not
need
'root (hd0,0,f)' in your menu.lst entry which could
confuse the mapping.
(I'll try to make the doc a little bit more clear.)


Yes, that was a mistake of mine. I just copied the menu.lst from c0d0s0 to
c0d0s5 as I tried to make that the grub partition. However, changing that
didn't work, and neither did reformatting c0d0s0 as a ZFS pool. I don't quite
understand what is going on when I boot, but I observe that whenever the grub
slice is ZFS the menu is not shown, as if somehow the video drivers (or BIOS
routines, whatever) couldn't be accessed by GRUB on ZFS, but could on UFS.
Could there be any correlation between the video access and the filesystem
type?

  

That said, your original menu.lst does look fine
(assuming you did copy
the new grub into c0d0s0).

Play around a little bit more and send us more detail
information.
e.g. menu.lst entries from the rootpool or ufs root,
where is grub 
installed,

have you copied the devices dir to the zfs root
filesystem (step 4), is 
snv/b65 good...etc.



This is another issue. I have followed the /devices and /dev creation steps in
the ZFS-boot manual, but it doesn't work. Couldn't I create a clean /devices
and /dev instead? Does a reconfiguration boot work when /devices and /dev are
empty?

I will post the complete menu.lst grub entries, but not right now, as I don't 
have the notebook with me.

One additional comment: I have also tried installgrub -m, i.e. on the master
boot record. That also didn't work, but I am wondering whether it would make
any difference to wipe the hard disk completely and start again from scratch.
Do you think there could be some older GRUB stuff hidden somewhere on the disk
that I didn't upgrade? Consider that the first installation on the disk (when
I partitioned it the way it is today) was of S10U3, which used an older grub.

-- Doug
 
 


Re: [zfs-discuss] Re: ZFS Boot manual setup in b65

2007-06-06 Thread Lin Ling


Hi Doug, from the information I read so far, I assume you have

c0d0s0  - ufs root
c0d0s5  - zfs root pool 'snv' and root filesystem 'b65'

installgrub on c0d0s0 puts grub on the same disk as c0d0s5, but
indicates which slice is the default boot slice.
So, once you default boot from c0d0s5, you should not need
'root (hd0,0,f)' in your menu.lst entry, which could confuse the mapping.
(I'll try to make the doc a little bit clearer.)

That said, your original menu.lst does look fine (assuming you did copy
the new grub into c0d0s0).

Play around a little bit more and send us more detailed information,
e.g. the menu.lst entries from the rootpool or ufs root, where grub is
installed, whether you have copied the devices dir to the zfs root
filesystem (step 4), whether snv/b65 is good, etc.


Lin


Douglas Atique wrote:

Some additional information:
I noticed that I was overlooking steps 6 and 7 in the instructions
(http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/). I already
have slice s0 on my disk dedicated to GRUB and it features a /boot of its own,
so I was thinking that it wouldn't make a difference to have GRUB in one or
another slice. But reading the instructions more carefully, I noticed that it
says clearly that GRUB has to be installed in the ZFS slice, even if there is
another UFS slice, even if they are on different disks.
So I tried installgrub into c0d0s5, my ZFS root pool slice. I also mounted
that pool as a filesystem and copied my /boot to it. And then something
strange happens. When GRUB is loaded from c0d0s0 it works fine. Booting GRUB
from c0d0s5, the menu is not displayed and only a blank screen is seen until
the default option is loaded by timeout.
Could I have done something wrong in the GRUB installation?

-- Doug
 
 


Re: [zfs-discuss] ZFS boot: Now, how can I do a pseudo live upgrade?

2007-05-25 Thread Lin Ling


Malachi de Ælfweald wrote:
No, I did mean 'snapshot -r' but I thought someone on the list said 
that the '-r' wouldn't work until b63... hmmm...




'snapshot -r' is available before b62; however, '-r' may run into a
stack overflow (bug 6533813), which is fixed in b63.


Lin


Re: [zfs-discuss] ZFS boot from compressed zfs

2007-04-22 Thread Lin Ling

Eric Schrock wrote:

This is:

6538017 ZFS boot to support gzip decompression

This should be fixed in the near future.  In the meantime, lzjb should
work just fine (albeit with lower compression ratio).
  


Unfortunately, lzjb is not working well either and needs to be fixed; see:

6541114 GRUB/ZFS fails to load files from a default compressed (lzjb) root

Lin


Re: [zfs-discuss] Re: Restrictions on ZFS boot?

2007-04-22 Thread Lin Ling

Mario Goebbels wrote:

With "one disk" I basically mean pools consisting of a single top-level vdev.
The current documentation poses this restriction: either a single disk or a mirror.

  


Yes, it is still the case that the root pool has to be either a single-vdev
pool or a mirror.

Currently, we have no plans to support additional kinds of root pool.

Lin



Re: [zfs-discuss] ZFS Boot support for the x86 platform

2007-03-29 Thread Lin Ling


We will make the manual and netinstall instructions available to
non-SWAN folks shortly.

The manual instructions are available at
http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/


We are still working on the Netinstall/DVD binary/setup kit and will
send out a notice when it's available.

Thank you for your support and enthusiasm.
Lin


Re: [zfs-discuss] ZFS Boot support for the x86 platform

2007-03-28 Thread Lin Ling


We currently have a working prototype for SPARC (via the newboot SPARC
project). We don't have a firm date yet, but it shouldn't be too far away :-).

Lin

Matty wrote:

Howdy,

This is awesome news that ZFS boot support is available for x86
platforms. Do any of the ZFS developers happen to know when ZFS boot
support for SPARC will be available?

Thanks,
- Ryan



Re: [zfs-discuss] ZFS Boot support for the x86 platform

2007-03-28 Thread Lin Ling


Malachi de Ælfweald wrote:

Should I:
a) install b60, figure out how to bfu to b62, then try to convert to 
the zfs root

b) wait to install Solaris until b62 comes out
c) follow the original instructions from last year (with b60) and then 
figure out how to switch to the new mechanism when it is public




You can do either (a) or (b).
I would avoid (c) if you can just wait a day for a bfu nightly, or a week
or two for b62.


Lin



Re: [zfs-discuss] ZFS Boot support for the x86 platform

2007-03-28 Thread Lin Ling


We will make the manual and netinstall instructions available to
non-SWAN folks shortly.

Tim Foster also has a script to do the setup; wait for his blog post.

Lin

Richard Elling wrote:

Cyril Plisko wrote:

First of all I'd like to congratulate the ZFS boot team with the
integration of their work into ON. Great job ! I am sure there
are plenty of people waiting anxiously for this putback.

I'd also like to suggest that the material referenced by HEADS UP
message [1] be made available to non-SWAN folks as well.

[1] http://opensolaris.org/os/community/on/flag-days/pages/2007032801/


This has already occurred.
http://www.opensolaris.org/os/community/on/flag-days/61-65/

maybe you were too quick on the trigger? :-)
 -- richard


Re: [zfs-discuss] Re: Re: update on zfs boot support

2007-03-11 Thread Lin Ling

Matty wrote:

How will /boot/grub/menu.lst be updated? Will the admin have to run
bootadm after the root clone is created, or will the zfs utility be
enhanced to populate / remove entries from the menu.lst?



The details of how menu.lst will be updated are still being worked out.
We don't plan on using the zfs utility to handle it, though.

Lin


Re: [zfs-discuss] Re: Re: update on zfs boot support

2007-03-11 Thread Lin Ling

Matty wrote:

I am curious how snapshots and clones will be integrated with grub.
Will it be possible to boot from a snapshot? I think this would be
useful when applying patches, since you could snapshot /, /var and
/opt, patch the system, and revert back (by choosing a snapshot from
the grub menu) to the snapshot if something went awry. Is this how the
zfs boot team envisions this working?


You can snapshot/clone, and revert back by choosing the clone from the
grub menu to boot.
Since a snapshot is a read-only filesystem, booting directly from it is
not supported in the initial release.

However, it is on our to-investigate list.

Lin


Re: [zfs-discuss] Re: update on zfs boot support

2007-03-09 Thread Lin Ling


Ivan Wang wrote:


Hi, 

However, this raises another concern: during recent discussions regarding the
disk layout of a zfs system
(http://www.opensolaris.org/jive/thread.jspa?threadID=25679&tstart=0) it was
said that currently we'd better give zfs whole devices (rather than slices)
and keep swap off zfs devices for better performance.
If the above recommendation still holds, we still have to have a swap device
somewhere other than the devices managed by zfs. Is this limited by the
design or the implementation of zfs?


Ivan.
  


ZFS supports swap on a /dev/zvol device; however, I do not have data on
its performance.

Also note that ZFS does not support dump yet; see RFE 5008936.
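For reference, swapping to a zvol looks roughly like this (a sketch only: the 2g size and the rpool/swapvol names are placeholders, and the commands need a live Solaris system):

```shell
# Create a 2G zvol and add it as a swap device (size and names are examples).
zfs create -V 2g rpool/swapvol
swap -a /dev/zvol/dsk/rpool/swapvol
# Confirm the new swap device shows up.
swap -l
```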

Lin


Re: [zfs-discuss] update on zfs boot support

2007-03-09 Thread Lin Ling


Hi Ian,

I might have misunderstood your plan. I assumed you'll throw in a small
boot drive as the zfs root pool.

A ZFS root pool can be a mirrored pool, so you don't need to use an SVM mirror.

Lin

Ian Collins wrote:

Lin Ling wrote:

  

Ian Collins wrote:



Thanks for the heads up.

I'm building a new file server at the moment and I'd like to make sure I
can migrate to ZFS boot when it arrives.

My current plan is to create a pool on 4 500GB drives and throw in a
small boot drive.

Will I be able to drop the boot drive and move / over to the pool when
ZFS boot ships?
  
  

Yes, you should be able to, given that you already have a UFS boot
drive running root.



Thanks.

As I intend setting up my pool as a striped mirror, it looks from the
other postings like this will not be suitable for the boot device.

So an SVM mirror on a couple of small drives may still be the best bet
for a small sever.

Ian

  



Re: [zfs-discuss] update on zfs boot support

2007-03-08 Thread Lin Ling

Ian Collins wrote:

Thanks for the heads up.

I'm building a new file server at the moment and I'd like to make sure I
can migrate to ZFS boot when it arrives.

My current plan is to create a pool on 4 500GB drives and throw in a
small boot drive.

Will I be able to drop the boot drive and move / over to the pool when
ZFS boot ships?
  


Yes, you should be able to, given that you already have a UFS boot
drive running root.


Lin


Re: [zfs-discuss] update on zfs boot support

2007-03-08 Thread Lin Ling


Yes, the initial release of bootable zfs has restrictions on the root
pool: no concatenation or RAIDZ, only a single-device pool or a mirrored
configuration.
This is mainly due to limitations on how many disks the firmware can
access at boot time.


Lin

Francois Dion wrote:

On Thu, 2007-03-08 at 14:22 -0800, Darren Dunham wrote:
  

This thread from a year ago suggests that at least the first round of
ZFS root pools will have restrictions that are not necessary on other
pools (like no concatenation or RAIDZ).

http://www.opensolaris.org/jive/thread.jspa?threadID=7089

I've not noticed any posts since that modify its content.



That would be too bad if raidz is not supported. I have been running a
server with "bootable" zfs (3 disks w/raidz) for the past 6 months (a 1U
server).

I've simply been using the trick that tabriz posted on her blog a while
back; I lost only a small amount of space on each drive by using a
USB drive for the initial install and putting grub on its own.

Performance is not earth-shattering, due to (I think) /var/tmp
and /var/log. And it's old ZFS code; I've not rebooted or upgraded since
then.

Francois


Re: [zfs-discuss] Source to the on disk spec document

2006-07-24 Thread Lin Ling

Darren J Moffat wrote:

Where can I get the source file for the on-disk spec document?  I want
to update it with the changes to the structures for crypto support.  I
need to do this for a customer I'm meeting with to discuss the crypto
features being added to ZFS.




You can find a copy of the on-disk spec at
http://www.opensolaris.org/os/community/zfs/docs/.
As far as I know, there is no centralized place for the source file. You
can find Tabriz's sources in
~tabriz/projects/zfs/zfs_boot/docs/ondiskformat*. Tabriz is on vacation
until 8/7.


Lin
