Re: [OpenIndiana-discuss] [developer] HBA recommended except LSI and ARECA

2014-04-30 Thread Fred Liu
Keith Wesolowski via smartos-discuss wrote on Thursday, May 1, 2014: > On Wed, Apr 30, 2014 at 10:13:49AM -0700, Fred Liu wrote: > > > Is 6805H/6405H(http://www.adaptec.com//en-us/products/series/6h/) > worthy of recommendation too? > > That uses the PM8001, which I believe is the sa

[smartos-discuss] Re: [developer] HBA recommended except LSI and ARECA

2014-04-30 Thread Fred Liu via smartos-discuss
Keith Wesolowski via smartos-discuss wrote on Thursday, May 1, 2014: > On Wed, Apr 30, 2014 at 10:13:49AM -0700, Fred Liu wrote: > > > Is 6805H/6405H(http://www.adaptec.com//en-us/products/series/6h/) > worthy of recommendation too? > > That uses the PM8001, which I believe is the sa

Re: [OpenIndiana-discuss] [developer] HBA recommended except LSI and ARECA

2014-04-30 Thread Fred Liu
Is 6805H/6405H(http://www.adaptec.com//en-us/products/series/6h/) worthy of recommendation too? Thanks. Fred > -Original Message- > From: Keith Wesolowski [mailto:keith.wesolow...@joyent.com] > Sent: Thursday, May 01, 2014 0:29 > To: Fred Liu > Cc: develo...@lists.illumos.or

[smartos-discuss] RE: [developer] HBA recommended except LSI and ARECA

2014-04-30 Thread Fred Liu via smartos-discuss
Is 6805H/6405H(http://www.adaptec.com//en-us/products/series/6h/) worthy of recommendation too? Thanks. Fred > -Original Message- > From: Keith Wesolowski [mailto:keith.wesolow...@joyent.com] > Sent: Thursday, May 01, 2014 0:29 > To: Fred Liu > Cc: develo...@lists.illumos.or

[smartos-discuss] failed to add cache device

2014-04-30 Thread Fred Liu via smartos-discuss
Hi, I failed to add a cache device as follows: [root@00-25-90-74-f5-04 ~]# zpool add zones cache c0t50015179596E5EB1d0p1 Assertion failed: rchildren == 2, file zpool_vdev.c, line 641 Abort (core dumped) [root@00-25-90-74-f5-04 ~]# uname -a SunOS 00-25-90-74-f5-04 5.11 joyent_20140418T031241Z i86
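For anyone hitting the same assertion, here is a minimal sketch of the whole-disk form of the command, under the assumption that the failure is related to passing the fdisk partition (the p1 suffix) rather than the disk or a labeled slice; this is an illustration, not the fix confirmed in the thread:
  # hand zpool the whole disk instead of the fdisk partition
  zpool add zones cache c0t50015179596E5EB1d0
  # or, after labeling the disk, a specific slice such as s0
  zpool add zones cache c0t50015179596E5EB1d0s0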

Re: [smartos-discuss] hacking grub to enable smartos booting from multi-version.

2014-04-29 Thread Fred Liu via smartos-discuss
Daniel, Can you post the detailed configuration of your iPXE? Thanks. Fred From: Daniel Malon [daniel.ma...@me.com] Sent: April 29, 2014 4:34 To: smartos-discuss@lists.smartos.org; Fred Liu Subject: Re: [smartos-discuss] hacking grub to enable smartos

[smartos-discuss] hacking grub to enable smartos booting from multi-version.

2014-04-29 Thread Fred Liu via smartos-discuss
Hi, It seems my grub only works with the fixed path name "/platform/" and cannot work with a different name like "/platform-xxx/". Is it possible to switch between different versions? Thanks. Fred --- smartos-discuss Archives: https://www.lis

Re: [Spice-devel] Windows Guest Tools 0.65

2013-10-08 Thread Fred Liu
Tested and works well! Big thanks! Fred > -Original Message- > From: spice-devel-bounces+fred_liu=issi@lists.freedesktop.org > [mailto:spice-devel-bounces+fred_liu=issi@lists.freedesktop.org] On > Behalf Of Christophe Fergeau > Sent: Tuesday, October 08, 2013 0:10 > To: spice-devel@lists.f

[slurm-dev] Re: slurm integration with FlexLM license manager

2013-07-03 Thread Fred Liu
David, Can you shed more light on the mechanism of LSF License Scheduler, to the extent you are permitted to? It is the closest solution to the ideal as far as I know. In a nutshell, FlexLM is not open source and nobody knows the internals except the legal owner. And the only “interface” for
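As background, a minimal sketch of the static license counting Slurm already supports, with a hypothetical token name; the FlexLM-aware scheduling discussed in the thread would have to go beyond this:
  # slurm.conf: declare a fixed pool of 16 tokens for a hypothetical "abc_compiler" feature
  Licenses=abc_compiler:16
  # job submission: request two tokens so Slurm will not oversubscribe the pool
  sbatch -L abc_compiler:2 job.sh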

Re: [OpenIndiana-discuss] [UNSUBSCRIBE]

2013-06-09 Thread Fred Liu
Ken, I *DO* like this clear, professional, objective, and rational reply and this way of communicating. Thank you. Fred From: ken mays [mailto:maybird1...@yahoo.com] Sent: Monday, June 10, 2013 1:06 To: Fred Liu; openindiana-discuss@openindiana.org Cc: mar...@martux.org Subject: Re: RE: [OpenIndiana

Re: [OpenIndiana-discuss] [UNSUBSCRIBE]

2013-06-09 Thread Fred Liu
...@yahoo.com] Sent: Sunday, June 09, 2013 17:47 To: openindiana-discuss@openindiana.org; Fred Liu Cc: mar...@martux.org Subject: Re: RE: [OpenIndiana-discuss] [UNSUBSCRIBE] Fred, The question can pertain to 'does Oracle Linux 6.4, Solaris 11.0, and Solaris 10u1-u10 work or install on the new SPARC

Re: [OpenIndiana-discuss] [UNSUBSCRIBE]

2013-06-08 Thread Fred Liu
> So, you may not drive a Mini Cooper or 2013 Koenigsegg Agera R but that > does not mean they have no purpose to specific markets. I think > OpenSXCE is kinda like an open-source 'Agera R' for high-end users. > Just my opinion... > > ~ Ken Mays Do you mean OpenSXCE will work on SPARC T5? Fred

Re: [OpenIndiana-discuss] opensolaris.org shutting down next month

2013-02-19 Thread Fred Liu
This one is really good! Thanks. Fred > -Original Message- > From: Hugh McIntyre [mailto:li...@mcintyreweb.com] > Sent: Saturday, February 16, 2013 1:29 > To: Discussion list for OpenIndiana > Subject: Re: [OpenIndiana-discuss] opensolaris.org shutting down next > month > > Is this going to be any

Re: [OmniOS-discuss] [zfs-discuss] zfs-discuss mailing list & opensolaris EOL

2013-02-16 Thread Fred Liu
Is it possible to replicate the whole opensolaris site to the illumos/openindiana/smartos/omnios sites in a sub-catalog, as an archive? >-Original Message- >From: zfs-discuss-boun...@opensolaris.org >[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Klimov >Sent: Sunday, February 17, 2

Re: [OpenIndiana-discuss] [zfs-discuss] zfs-discuss mailing list & opensolaris EOL

2013-02-16 Thread Fred Liu
Is it possible to replicate the whole opensolaris site to the illumos/openindiana/smartos/omnios sites in a sub-catalog, as an archive? >-Original Message- >From: zfs-discuss-boun...@opensolaris.org >[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Klimov >Sent: Sunday, February 17, 2

Re: [zfs-discuss] zfs-discuss mailing list & opensolaris EOL

2013-02-16 Thread Fred Liu
Is it possible to replicate the whole opensolaris site to the illumos/openindiana/smartos/omnios sites in a sub-catalog, as an archive? >-Original Message- >From: zfs-discuss-boun...@opensolaris.org >[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Klimov >Sent: Sunday, February 17, 2

[slurm-dev] RE: LSF command wrappers for Slurm?

2013-01-20 Thread Fred Liu
> > Does anyone have LSF command wrappers for Slurm or is interested in > working on that? > > (That's LSF commands and options running over the Slurm workload > manager, like what we have for PBS/Torque today) It is an attractive project. Any doc about what has been done for PBS/Torque? Fred

[zfs-discuss] .send* TAG will be left if there is a corrupt/aborted zfs send?

2013-01-06 Thread Fred Liu
It is the first time I have seen this TAG. zfs holds cn03/3/is8119aw@issi-backup:daily-2012-12-14-17:26 NAME TAG TIMESTAMP cn03/3/is8119aw@issi-backup:daily-2012-12-14-17:26 .send-24928-0 Sun Jan 6 17:49:59 2013 cn03/3/is8119aw@issi-b
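If the tag really is an orphan left behind by an aborted send, a minimal sketch of inspecting and releasing it (verify nothing still depends on the hold before releasing):
  # list the holds on the snapshot
  zfs holds cn03/3/is8119aw@issi-backup:daily-2012-12-14-17:26
  # release the leftover tag so the snapshot can later be destroyed
  zfs release .send-24928-0 cn03/3/is8119aw@issi-backup:daily-2012-12-14-17:26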

Re: [zfs-discuss] any more efficient way to transfer snapshot between two hosts than ssh tunnel?

2012-12-15 Thread Fred Liu
> >Even with infinite wire speed, you're bound by the ability of the source server >to generate the snapshot stream and the ability of the destination server to >write the snapshots to the media. > >Our little servers in-house using ZFS don't read/write that fast when pulling >snapshot contents of

Re: [zfs-discuss] any more efficient way to transfer snapshot between two hosts than ssh tunnel?

2012-12-14 Thread Fred Liu
> > We have found mbuffer to be the fastest solution. Our rates for large > transfers on 10GbE are: > > 280MB/s mbuffer > 220MB/s rsh > 180MB/s HPN-ssh unencrypted > 60MB/s standard ssh > > The tradeoff: mbuffer is a little more complicated to script; rsh is, > well, you know;
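A minimal sketch of the mbuffer pipeline behind those numbers, with hypothetical host names, pool/dataset names, and buffer sizes:
  # receiver (start first): listen on TCP 9090 with a 1 GB in-memory buffer
  mbuffer -I 9090 -s 128k -m 1G | zfs receive -F tank/backup
  # sender: stream the snapshot into mbuffer aimed at the receiver
  zfs send tank/data@snap | mbuffer -s 128k -m 1G -O receiver-host:9090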

Re: [zfs-discuss] any more efficient way to transfer snapshot between two hosts than ssh tunnel?

2012-12-14 Thread Fred Liu
Posting to the list. > -Original Message- > From: Fred Liu > Sent: Friday, December 14, 2012 23:41 > To: 'real-men-dont-cl...@gmx.net' > Subject: RE: [zfs-discuss] any more efficient way to transfer snapshot > between two hosts than ssh tunnel? > > > >

Re: [zfs-discuss] any more efficient way to transfer snapshot between two hosts than ssh tunnel?

2012-12-14 Thread Fred Liu
> > I've heard you could, but I've never done it. Sorry I'm not much help, > except as a cheer leader. You can do it! I think you can! Don't give > up! heheheheh > Please post back whatever you find, or if you have to figure it out for > yourself, then blog about it and post that. Aha! Gotc

Re: [zfs-discuss] any more efficient way to transfer snapshot between two hosts than ssh tunnel?

2012-12-13 Thread Fred Liu
>Add the HPN patches to OpenSSH and enable the NONE cipher. We can saturate a >gigabit link (980 mbps) between two FreeBSD hosts using that. >Without it, we were only able to hit ~480 mbps on a good day. >If you want 0 overhead, there's always netcat. :) 980mbps is awesome! I am thinking runnin
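A minimal sketch of the zero-overhead netcat variant mentioned above, assuming a trusted network and placeholder hosts, ports, and dataset names (flag spelling varies between netcat implementations):
  # receiver (start first): accept the stream on TCP 3333 and feed it to zfs receive
  nc -l 3333 | zfs receive -F tank/backup
  # sender: push the snapshot stream over plain TCP
  zfs send tank/data@snap | nc receiver-host 3333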

Re: [zfs-discuss] any more efficient way to transfer snapshot between two hosts than ssh tunnel?

2012-12-13 Thread Fred Liu
. Fred From: Adrian Smith [mailto:adrian.sm...@rmit.edu.au] Sent: Friday, December 14, 2012 12:08 To: Fred Liu Cc: zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] any more efficient way to transfer snapshot between two hosts than ssh tunnel? Hi Fred, Try mbuffer (http://www.maier-komor.de

[zfs-discuss] any more efficient way to transfer snapshot between two hosts than ssh tunnel?

2012-12-13 Thread Fred Liu
Assuming a secure and trusted environment, we want to get the maximum transfer speed without the overhead of ssh. Thanks. Fred ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [google-appengine] I am a new GAE user, it prompts as below, what should I do?

2012-10-09 Thread Fred Liu
But actually, it works well on my local server. Is it related to the Python version? My Python version is Python 2.7.1+. Thanks, Fred On Tuesday, October 9, 2012 at 9:28:01 AM UTC+8, Takashi Matsuo (Google) wrote: > > > Please go to: > https://appengine.google.com/logs?&app_id=YOUR_APP_ID > &g

[google-appengine] I am a new GAE user, it prompts as below, what should I do?

2012-10-08 Thread Fred Liu
Error: Server Error The server encountered an error and could not complete your request. If the problem persists, please report your problem and mention this error message and the query that caused it. -- You received this message because you a

[cifs-discuss] smb server failed to authenticate win7 cifs client via win2008 DC

2012-07-19 Thread Fred Liu
Hi, I use smartos-0713 and I set the smb server to domain mode, but my win7 cifs client fails to authenticate. [root@00-25-90-74-f5-04 /zones/cross]# sharectl get smb system_comment= max_workers=1024 netbios_scope= lmauth_level=4 keep_alive=5400 wins_server_1= wins_server_2= wins_exclude= signing_ena
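A minimal first-pass diagnostic sketch for this setup, using the illumos smb tooling and a placeholder domain name; it only checks and refreshes the domain membership and is not the resolution reported in the thread:
  # show the domain the server believes it has joined
  smbadm list
  # re-join the AD domain if membership looks stale (example credentials and domain)
  smbadm join -u Administrator example.local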

[389-users] anyone successfully configured autofs via ldap in solaris11?

2012-07-17 Thread Fred Liu
I use version 1.2.10.12. Thanks. Fred -- 389 users mailing list 389-users@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/389-users

[OpenIndiana-discuss] anyone successfully configured autofs via ldap in 151a?

2012-07-17 Thread Fred Liu
I use the 389-DS server. Thanks. Fred ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss

Re: [zfs-discuss] [developer] Setting default user/group quotas[usage accounting]?

2012-05-02 Thread Fred Liu
> > I don't think "accurate equations" are applicable in this case. > You can have estimates like "no more/no less than X" based on, > basically, level of redundancy and its overhead. ZFS metadata > overhead can also be smaller or bigger, depending on your data's > typical block size (fixed for zv

Re: [zfs-discuss] [developer] Setting default user/group quotas[usage accounting]?

2012-05-02 Thread Fred Liu
> >What problem are you trying to solve? How would you want referenced or >userused@... to work? > >To be more clear: space shared between a clone and its origin is >"referenced" by both the clone and the origin, so it is charged to both the >clone's and origin's userused@... properties. The add

Re: [zfs-discuss] current status of SAM-QFS?

2012-05-02 Thread Fred Liu
>IIRC, the senior product architects and perhaps some engineers have >left Oracle. A better question for your Oracle rep is whether there is a >plan to do anything other than sustaining engineering for the product. I see. Thanks. Fred ___ zfs-discuss

Re: [zfs-discuss] current status of SAM-QFS?

2012-05-02 Thread Fred Liu
. > >If you want to know Oracle's roadmap for SAM-QFS then I recommend >contacting your Oracle account rep rather than asking on a ZFS discussion list. >You won't get SAM-QFS or Oracle roadmap answers from this alias. > My original purpose is to ask if there is an effort to integrate open-sourced

Re: [zfs-discuss] [developer] Setting default user/group quotas[usage accounting]?

2012-05-02 Thread Fred Liu
> >The size accounted for by the userused@ and groupused@ properties is the >"referenced" space, which is used as the basis for many other space >accounting values in ZFS (e.g. "du" / "ls -s" / stat(2), and the zfs accounting >properties "referenced", "refquota", "refreservation", "refcompressratio

Re: [zfs-discuss] [developer] Setting default user/group quotas[usage accounting]?

2012-05-02 Thread Fred Liu
>The time is the creation time of the snapshots. Yes. That is true. Thanks. Fred ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] current status of SAM-QFS?

2012-05-02 Thread Fred Liu
> > Still a fully supported product from Oracle: > > http://www.oracle.com/us/products/servers-storage/storage/storage- > software/qfs-software/overview/index.html > Yeah. But it seems there have been no more updates since the Sun acquisition. I don't know Oracle's roadmap with respect to data tiering. Thanks. Fred __

Re: [zfs-discuss] [developer] Setting default user/group quotas[usage accounting]?

2012-05-01 Thread Fred Liu
>On Apr 26, 2012, at 12:27 AM, Fred Liu wrote: >"zfs 'userused@' properties" and "'zfs userspace' command" are good enough to >gather usage statistics. >I think I mix that with NetApp. If my memory is correct, we have to set quotas >to ge

[zfs-discuss] current status of SAM-QFS?

2012-05-01 Thread Fred Liu
The subject says it all. ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] [developer] Setting default user/group quotas[usage accounting]?

2012-04-26 Thread Fred Liu
>On 2012-04-26 11:27, Fred Liu wrote: >> "zfs 'userused@' properties" and "'zfs userspace' command" are good >> enough to gather usage statistics. >... >> Since no one is focusing on enabling default user/group quota now, th

Re: [zfs-discuss] [developer] Setting default user/group quotas[usage accounting]?

2012-04-26 Thread Fred Liu
> >2012/4/26 Fred Liu > >Currently, dedup/compression is pool-based right now; they don't have >granularity at the file system, user, or group level. There is also a lot of >room for improvement in this aspect. > >Compression is not pool-based, you can control it with t

Re: [zfs-discuss] [developer] Setting default user/group quotas[usage accounting]?

2012-04-26 Thread Fred Liu
“zfs 'userused@' properties” and “'zfs userspace' command” are good enough to gather usage statistics. I think I mixed that up with NetApp. If my memory is correct, we have to set quotas to get usage statistics under Data ONTAP. Further, if we can add an ILM-like feature to poll the time-related info(
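For reference, a minimal sketch of the two accounting interfaces named above, with a hypothetical dataset and user:
  # per-user and per-group space consumed on a dataset, no quotas required
  zfs userspace tank/home
  zfs groupspace tank/home
  # the same figure for a single user, via the property interface
  zfs get userused@alice tank/home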

Re: [zfs-discuss] cluster vs nfs

2012-04-25 Thread Fred Liu
I jump into this loop with a different alternative -- an IP-based block device. I have seen a few successful cases with "HAST + UCARP + ZFS + FreeBSD". If zfsonlinux is robust enough, trying "DRBD + PACEMAKER + ZFS + LINUX" is definitely encouraged. Thanks. Fred > -Original Message- > From: zfs-

Re: [zfs-discuss] [developer] Setting default user/group quotas[usage accounting]?

2012-04-25 Thread Fred Liu
it possible to do it > without setting > such quotas? > >Thanks. > >Fred > _________ From: Fred Liu Sent: Wednesday, April 25, 2012 20:05 To: develo...@lists.illumos.org Cc: 'zfs-discuss@opensolaris.org' Subject: RE: [developer] Setting defau

Re: [zfs-discuss] [developer] Setting default user/group quotas[usage accounting]?

2012-04-25 Thread Fred Liu
On Apr 24, 2012, at 2:50 PM, Fred Liu wrote: Yes. Thanks. I am not aware of anyone looking into this. I don't think it is very hard, per se. But such quotas don't fit well with the notion of many file systems. There might be some restricted use cases where it makes good sense, b

[zfs-discuss] FW: Setting default user/group quotas?

2012-04-24 Thread Fred Liu
-Original Message- From: Fred Liu Sent: Tuesday, April 24, 2012 11:41 To: develo...@lists.illumos.org Subject: Setting default user/group quotas? It seems this feature is still not there yet. Are there any plans to do it? Or is it hard to do? Thanks. Fred
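For later readers: per-user and per-group quotas do exist, but they are set one principal at a time, which is exactly why a default is being asked for. A minimal sketch with hypothetical dataset, user, and group names:
  # per-principal quotas must be set individually
  zfs set userquota@alice=20G tank/home
  zfs set groupquota@staff=200G tank/home
  # check usage against those quotas
  zfs userspace -o name,used,quota tank/home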

Re: SAS Drive identification LEDs

2012-04-10 Thread Fred Liu
Yes. I understand. I have been trying for a long time to enable SES in my home server, which is built from commodity hardware, but I have had no luck. I will try sg3_utils if possible. Many thanks. Fred On April 10, 2012 at 4:52 PM, Mike Pumford wrote: > Fred Liu wrote: >> >> Thanks. Can you

SAS Drive identification LEDs

2012-04-09 Thread Fred Liu
-- Forwarded message -- From: Fred Liu Date: April 9, 2012 3:56 PM Subject: Re: SAS Drive identification LEDs To: Mike Pumford Thanks. Can you recommend a drive and enclosure that provide working SES in 9.0? Thanks. Fred On March 30, 2012 at 5:18 PM, Mike Pumford wrote: > Fred Liu wrote: >>>

Re: SAS Drive identification LEDs

2012-03-30 Thread Fred Liu
> > How would you identify such a drive on any other system? > Normally, there are printed labels as the backup solution. Thanks. Fred ___ freebsd-stable@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-stable To unsubscribe,

Re: [zfs-discuss] Windows 8 ReFS (OT)

2012-01-17 Thread Fred Liu
Looks really beautiful... > -Original Message- > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of David Magda > Sent: Tuesday, January 17, 2012 8:06 > To: zfs-discuss > Subject: [zfs-discuss] Windows 8 ReFS (OT) > > Kind of off topic, but I fig

Re: [OpenIndiana-discuss] The lx brand for OpenIndiana 151a

2012-01-09 Thread Fred Liu
Is there an English version? Thanks Fred >-Original Message- >From: Volker A. Brandt [mailto:v...@bb-c.de] >Sent: Monday, January 09, 2012 5:04 PM >To: Andrey Sokolov >Cc: Discussion list for OpenIndiana >Subject: Re: [OpenIndiana-discuss] The lx brand for OpenIndiana 151a > >Hello Andre

[discuss] SR-IOV?

2011-12-16 Thread Fred Liu
Hi, Is it also possible to enable SR-IOV in KVM-on-Illumos? Thanks. Fred --- illumos-discuss Archives: https://www.listbox.com/member/archive/182180/=now RSS Feed: https://www.listbox.com/member/archive/rss/182180/21175430-2e6923be Modify Your Subscrip

Re: [Lxc-users] autofs in lxc containers?

2011-11-15 Thread Fred Liu
Any progress on autofs now? Thanks. Fred >-Original Message- >From: Daniel Lezcano [mailto:daniel.lezc...@free.fr] >Sent: Sunday, June 26, 2011 12:48 AM >To: Fred Liu >Cc: Lxc-users@lists.sourceforge.net >Subject: Re: [Lxc-users] autofs in lxc containers? > >On 0

Re: [zfs-discuss] Oracle releases Solaris 11 for Sparc and x86 servers

2011-11-09 Thread Fred Liu
> > ... so when will zfs-related improvement make it to solaris- > derivatives :D ? > I am also very curious about Oracle's policy about source code. ;-) Fred ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailm

[zfs-discuss] Oracle releases Solaris 11 for Sparc and x86 servers

2011-11-09 Thread Fred Liu
___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[OpenIndiana-discuss] enable SR-IOV in KVM on OI?

2011-11-05 Thread Fred Liu
Is there any plan? Thanks. Fred ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss

Re: [zfs-discuss] FS Reliability WAS: about btrfs and zfs

2011-10-25 Thread Fred Liu
> Some people have trained their fingers to use the -f option on every > command that supports it to force the operation. For instance, how > often do you do rm -rf vs. rm -r and answer questions about every > file? > > If various zpool commands (import, create, replace, etc.) are used > against

Re: [zfs-discuss] FS Reliability WAS: about btrfs and zfs

2011-10-25 Thread Fred Liu
Paul, Thanks. I understand now. Fred > -Original Message- > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Paul Kraus > Sent: Monday, October 24, 2011 22:38 > To: ZFS Discussions > Subject: Re: [zfs-discuss] FS Reliability WAS: about btrfs

Re: [zfs-discuss] FS Reliability WAS: about btrfs and zfs

2011-10-21 Thread Fred Liu
> 3. Do NOT let a system see drives with more than one OS zpool at the > same time (I know you _can_ do this safely, but I have seen too many > horror stories on this list that I just avoid it). > Can you elaborate on #3? In what situation would that happen? Thanks. Fred ___

Re: [OpenIndiana-discuss] Dennis Ritchie

2011-10-13 Thread Fred Liu
R.I.P. Dennis. > -Original Message- > From: Apostolos Syropoulos [mailto:asyropou...@yahoo.com] > Sent: Thursday, October 13, 2011 19:55 > To: openindiana; lista solaris > Subject: [OpenIndiana-discuss] Dennis Ritchie > > A very very sad day for the world of computing: > Dennis Ritchie, co-creator of Un

Re: [zfs-discuss] all the history

2011-09-19 Thread Fred Liu
0 0 c4t500151795910D221d0s0 ONLINE 0 0 0 errors: No known data errors Thanks. Fred From: Fred Liu Sent: Tuesday, September 20, 2011 9:23 To: Tony Kim; 'Richard Elling' Subject: all the history Hi, Following is the history: The whole history is I found a ZIL dev

Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
LABEL 2 failed to unpack label 2 LABEL 3 failed to unpack label 3 > -Original Message- > From: Fred Liu > Sent: Tuesday, September 20,

Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
> -Original Message- > From: Richard Elling [mailto:richard.ell...@gmail.com] > Sent: Tuesday, September 20, 2011 3:57 > To: Fred Liu > Cc: zfs-discuss@opensolaris.org > Subject: Re: [zfs-discuss] remove wrongly added device from zpool > > more below… > > On Sep

Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
> > No, but your pool is not imported. > YES. I see. > and look to see which disk is missing"? > > The label, as displayed by "zdb -l" contains the heirarchy of the > expected pool config. > The contents are used to build the output you see in the "zpool import" > or "zpool status" > commands.

Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
> > For each disk, look at the output of "zdb -l /dev/rdsk/DISKNAMEs0". > 1. Confirm that each disk provides 4 labels. > 2. Build the vdev tree by hand and look to see which disk is missing > > This can be tedious and time consuming. Do I need to export the pool first? Can you give more details
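A minimal sketch of that label check as a loop, with placeholder device names; a healthy pool member prints four labels and no "failed to unpack" lines:
  # dump the labels from each candidate disk and flag any that cannot be unpacked
  for d in /dev/rdsk/c*t*d*s0; do
      echo "== $d"
      zdb -l "$d" | grep 'failed to unpack'
  done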

Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
/pci15d9,400@1f,2/disk@5,0 > -Original Message- > From: Fred Liu > Sent: Monday, September 19, 2011 23:35 > To: Fred Liu; Richard Elling > Cc: zfs-discuss@opensolaris.org > Subject: RE: [zfs-discuss] remove wrongly added device from zpool > > I get some good progress like foll

Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
c22t5d0 ONLINE Additional devices are known to be part of this pool, though their exact configuration cannot be determined. Any suggestions? Thanks. Fred > -Original Message- > From: Fred Liu > Sent: Monday, September 19, 2011 22:28 > To: 'Richard

Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
> > You don't mention which OS you are using, but for the past 5 years of > [Open]Solaris > releases, the system prints a warning message and will not allow this > to occur > without using the force option (-f). > -- richard > Yes. There is a warning message, I used zpool add -f. Thanks. Fr

Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
I use opensolaris b134. Thanks. Fred > -Original Message- > From: Richard Elling [mailto:richard.ell...@gmail.com] > Sent: Monday, September 19, 2011 22:21 > To: Fred Liu > Cc: zfs-discuss@opensolaris.org > Subject: Re: [zfs-discuss] remove wrongly added device from zpool >

Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
100% done: 1041082 pages dumped, dump succeeded rebooting... > -Original Message- > From: Fred Liu > Sent: Monday, September 19, 2011 22:00 > To: Fred Liu; 'Edward Ned Harvey'

Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
I also used zpool import -fFX cn03 in b134 and b151a (via a live SX11 CD). It resulted in a core dump and reboot after about 15 min. I can see all the LEDs blinking on the HDDs during those 15 min. Can replacing the empty ZIL devices help? Thanks. Fred > -Original Message- > From

Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
> > I'll tell you what does not help. This email. Now that you know what > you're trying to do, why don't you post the results of your "zpool > import" command? How about an error message, and how you're trying to > go about fixing your pool? Nobody here can help you without > information. >

Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
> > So... You accidentally added non-redundant disks to a pool. They were > not > part of the raidz2, so the redundancy in the raidz2 did not help you. > You > removed the non-redundant disks, and now the pool is faulted. > > The only thing you can do is: > Add the disks back to the pool (re-in

Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
> > You can add mirrors to those lonely disks. > Can it repair the pool? Thanks. Fred ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
> > This one missing feature of ZFS, IMHO, does not result in "a long way > for > zfs to go" in relation to netapp. I shut off my netapp 2 years ago in > favor > of ZFS, because ZFS performs so darn much better, and has such > immensely > greater robustness. Try doing ndmp, cifs, nfs, iscsi on n

Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
> > That's a huge bummer, and it's the main reason why device removal has > been a > priority request for such a long time... There is no solution. You > can > only destroy & recreate your pool, or learn to live with it that way. > > Sorry... > Yeah, I also realized this when I sent out this

[zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
Hi, Due to my carelessness, I added two disks to a raidz2 zpool as normal data disks, when in fact I wanted to make them ZIL devices. Are there any remedies? Many thanks. Fred ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensol
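For anyone landing here later, the intended command and the accidental one differ by a single keyword; a minimal illustration with placeholder pool and device names (this does not undo the mistake, since top-level data vdevs could not be removed on pools of that era):
  # intended: add the two SSDs as a mirrored separate log (ZIL) device
  zpool add tank log mirror c10t0d0 c10t1d0
  # accidental: the same disks become ordinary top-level data vdevs
  zpool add tank c10t0d0 c10t1d0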

Re: [OpenIndiana-discuss] OpenCDE in OpenIndiana

2011-09-15 Thread Fred Liu
Adding Karsten to this list... Thanks. Fred > -Original Message- > From: Alan Coopersmith [mailto:alan.coopersm...@oracle.com] > Sent: Friday, September 16, 2011 7:33 > To: Discussion list for OpenIndiana > Subject: Re: [OpenIndiana-discuss] OpenCDE in OpenIndiana > > On 09/15/11 09:18, Gary wro

RE: [discuss] I'm back!

2011-09-03 Thread Fred Liu
Great! Glad to hear it. ;-) Fred >-Original Message- >From: Erik Trimble [mailto:tr...@netdemons.com] >Sent: Saturday, September 03, 2011 4:20 PM >To: discuss@lists.illumos.org >Cc: Garrett D'Amore >Subject: Re: [discuss] I'm back! > >On 9/2/2011 10:48 PM, Garrett D'Amore wrote: >> Are you

Re: [zfs-discuss] zfs send/receive and ashift

2011-07-27 Thread Fred Liu
> > I believe so, also it is more than just the T1C drive you need it > needs to be in a library and you also need the Oracle Key Management > system to be able to do the key management for it. > Yes. Single T1C is not a big deal. I mean the whole backup system(tape lib & drive, backup

Re: [zfs-discuss] zfs send/receive and ashift

2011-07-27 Thread Fred Liu
> > The only way you will know if decrypting and decompressing causes a > problem in that case is if you try it on your systems. I seriously > doubt it will be unless the system is already heavily CPU bound and > your > backup window is already very tight. > That is true. > > My understanding

Re: [zfs-discuss] zfs send/receive and ashift

2011-07-26 Thread Fred Liu
> > Yes, which is exactly what I said. > > All data as seen by the DMU is decrypted and decompressed; the DMU > layer > is what the ZPL layer is built on top of, so it has to be that way. > Understood. Thank you. ;-) > > There is always some overhead for doing a decryption and decompression, > t

Re: [zfs-discuss] zfs send/receive and ashift

2011-07-26 Thread Fred Liu
> > The ZFS send stream is at the DMU layer; at this layer the data is > uncompressed and decrypted - i.e. exactly how the application wants it. > Even the data compressed/encrypted by ZFS will be decrypted? If that is true, will there be any CPU overhead? And ZFS send/receive tunneled by ssh becomes th

Re: [zfs-discuss] Encryption accelerator card recommendations.[GPU acceleration of ZFS]

2011-06-29 Thread Fred Liu
> -Original Message- > From: David Magda [mailto:dma...@ee.ryerson.ca] > Sent: Tuesday, June 28, 2011 10:41 > To: Fred Liu > Cc: Bill Sommerfeld; ZFS Discuss > Subject: Re: [zfs-discuss] Encryption accelerator card > recommendations.[GPU acceleration of ZFS] > >

Re: [zfs-discuss] Encryption accelerator card recommendations.[GPU acceleration of ZFS]

2011-06-27 Thread Fred Liu
FYI, there is another thread named "GPU acceleration of ZFS" in this list discussing the possibility of utilizing the power of GPGPU. I posted here: Good day, I think ZFS can take advantage of using the GPU for sha256 calculation, encryption and maybe compression. Modern video card, like 5xxx or 6x

Re: [OpenIndiana-discuss] Question about drive LEDs

2011-06-25 Thread Fred Liu
> In brief, I cloned a bootable SATA disk from the VMware (now > discontinued) Sun Unified Storage simulator, and then created the > graphics and definitions to match a Supermicro 4U chassis and system > board. > Everything worked exactly as a real one would, including disk locator > led's, disk pr

Re: [Lxc-users] autofs in lxc containers?

2011-06-25 Thread Fred Liu
> http://comments.gmane.org/gmane.linux.kernel.containers.lxc.general/894 > > Yeah. I sent the patchset to the mailing list maintainer but in the > meantime, with some stress testing, I hung the kernel, so there is a race > somewhere I should have fixed but forgot to continue with this patch. > >

Re: [OpenIndiana-discuss] Question about drive LEDs

2011-06-24 Thread Fred Liu
Mark, Can you post the content of your blog on this list? Many thanks. >> >> look here. >> >> http://stored-on-zfs.blogspot.com >> >> Fred > -Original Message- > From: Mark [mailto:mark0...@gmail.com] > Sent: Friday, June 24, 2011 16:24 > To: openindiana-discuss@openindiana.org > Subject: R

Re: [zfs-discuss] zfs global hot spares?

2011-06-24 Thread Fred Liu
> > zpool status -x output would be useful. These error reports do not > include a > pointer to the faulty device. fmadm can also give more info. > Yes. Thanks. > mpathadm can be used to determine the device paths for this disk. > > Notice how the disk is offline at multiple times. There is so

Re: [Lxc-users] autofs in lxc containers?

2011-06-24 Thread Fred Liu
Also adding Helmut here. Hi, Helmut, Can you kindly help? Many thanks. Fred > -Original Message- > From: Fred Liu > Sent: Friday, June 24, 2011 17:54 > To: 'Daniel Lezcano' > Cc: Lxc-users@lists.sourceforge.net > Subject: RE: [Lxc-users] autofs in lxc containe

Re: [Lxc-users] autofs in lxc containers?

2011-06-24 Thread Fred Liu
nel it works like > a smart. Great. Cool ! thanks for reporting the problem ! -- Daniel Can you kindly elaborate on how to do the patching? Many thanks. Fred > -Original Message- > From: Daniel Lezcano [mailto:daniel.lezc...@free.fr] > Sent: Monday, June 20, 2011 20:

Re: [OpenIndiana-discuss] Question about drive LEDs

2011-06-22 Thread Fred Liu
>look here. > >http://stored-on-zfs.blogspot.com > Oops, it is blocked by the Great Firewall! Very frustrating. Anyway, thanks. Fred ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/open

Re: [OpenIndiana-discuss] Question about drive LEDs

2011-06-21 Thread Fred Liu
> >As an aside, I have built a fully functional "7210 Unified Storage >Server Clone" running on Supermicro hardware and some customised >definitions in the management software to match the new hardware. >That setup fully supported drive locator and failure indications, so the >generic hardware can

[Lxc-users] autofs in lxc containers?

2011-06-20 Thread Fred Liu
Hi, Has anyone successfully run autofs in lxc containers? Thanks. Fred -- EditLive Enterprise is the world's most technically advanced content authoring tool. Experience the power of Track Changes, Inline Image Editing

[Lxc-users] good os images

2011-06-17 Thread Fred Liu
Hi, Are there already good OS images for LXC, such as RHEL, to try? Thanks. Fred -- EditLive Enterprise is the world's most technically advanced content authoring tool. Experience the power of Track Changes, Inline Image

Re: [zfs-discuss] zfs global hot spares?

2011-06-17 Thread Fred Liu
> -Original Message- > From: Fred Liu > Sent: Thursday, June 16, 2011 17:28 > To: Fred Liu; 'Richard Elling' > Cc: 'Jim Klimov'; 'zfs-discuss@opensolaris.org' > Subject: RE: [zfs-discuss] zfs global hot spares? > > Fixing a typo in my las

Re: [zfs-discuss] zfs global hot spares?

2011-06-16 Thread Fred Liu
Fixing a typo in my last thread... > -Original Message- > From: Fred Liu > Sent: Thursday, June 16, 2011 17:22 > To: 'Richard Elling' > Cc: Jim Klimov; zfs-discuss@opensolaris.org > Subject: RE: [zfs-discuss] zfs global hot spares? > > > This message is f

Re: [zfs-discuss] zfs global hot spares?

2011-06-16 Thread Fred Liu
> This message is from the disk saying that it aborted a command. These > are > usually preceded by a reset, as shown here. What caused the reset > condition? > Was it actually target 11 or did target 11 get caught up in the reset > storm? > It happened in the middle of the night and nobody touched the file

Re: [zfs-discuss] zfs global hot spares?

2011-06-15 Thread Fred Liu
> This is only true if the pool is not protected. Please protect your > pool with mirroring or raidz*. > -- richard > Yes. We use a raidz2 without any spares. In theory, with one disk broken, there should be no problem. But in reality, we saw NFS service interrupted: Jun 9 23:28:59 cn03 scsi_v

Re: [zfs-discuss] zfs global hot spares?

2011-06-15 Thread Fred Liu
> -Original Message- > From: Richard Elling [mailto:richard.ell...@gmail.com] > Sent: Wednesday, June 15, 2011 14:25 > To: Fred Liu > Cc: Jim Klimov; zfs-discuss@opensolaris.org > Subject: Re: [zfs-discuss] zfs global hot spares? > > On Jun 14, 2011, at
