Re: [zfs-discuss] zfs + NFS + FreeBSD with performance prob

2013-02-05 Thread Albert Shih
 On 04/02/2013 at 11:21:12 -0500, Paul Kraus wrote:
> On Jan 31, 2013, at 5:16 PM, Albert Shih wrote:
> 
> > I have a server running FreeBSD 9.0 with a 36-disk ZFS pool (not
> > counting /, which is on different disks).
> > 
> > Performance on the server itself is very good.
> > 
> > I have one NFS client running FreeBSD 8.3, and its performance over
> > NFS is very good:
> > 
> > For example, reading on the client and writing over NFS to ZFS:
> > 
> > [root@ .tmp]# time tar xf /tmp/linux-3.7.5.tar
> > 
> > real    1m7.244s   user    0m0.921s   sys     0m8.990s
> > 
> > This client is on a 1 Gbit/s link and on the same network switch as
> > the server.
> > 
> > I have a second NFS client running FreeBSD 9.1-STABLE, and on this
> > second client the performance is catastrophic: after one hour the tar
> > still hasn't finished. Granted, this second client is connected at
> > 100 Mbit/s and is not on the same switch, but going from ~2 min to
> > ~90 min... :-(
> > 
> > For this second client I tried changing, on the ZFS/NFS server,
> > 
> > zfs set sync=disabled
> > 
> > and that changed nothing.
> 
> I have been using FreeBSD 9 with ZFS and NFS to a couple of Mac OS X
> (10.6.8 Snow Leopard) boxes and I get between 40 and 50 MB/sec

Thanks for your answer.

Can you give me the average ping time between your client and the NFS server?
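
For reference, something like this gives that number (the host name is just a
placeholder):

# 20 probes; the summary line reports round-trip min/avg/max in ms
ping -c 20 nfs-server.example.org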

Regards.

JAS
-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: j...@obspm.fr
Heure local/Local time:
mar 5 fév 2013 16:15:11 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs + NFS + FreeBSD with performance prob

2013-01-31 Thread Albert Shih
Hi all,

I'm not sure whether the problem is with FreeBSD or ZFS or both, so I'm
cross-posting (I know it's bad).

I have a server running FreeBSD 9.0 with a 36-disk ZFS pool (not counting /,
which is on different disks).

Performance on the server itself is very good.

I have one NFS client running FreeBSD 8.3, and its performance over NFS is
very good:

For example, reading on the client and writing over NFS to ZFS:

[root@ .tmp]# time tar xf /tmp/linux-3.7.5.tar 

real    1m7.244s
user    0m0.921s
sys     0m8.990s

This client is on a 1 Gbit/s link and on the same network switch as the
server.

I have a second NFS client running FreeBSD 9.1-STABLE, and on this second
client the performance is catastrophic: after one hour the tar still hasn't
finished. Granted, this second client is connected at 100 Mbit/s and is not
on the same switch, but going from ~2 min to ~90 min... :-(

For this second client I tried changing, on the ZFS/NFS server,

zfs set sync=disabled 

and that changed nothing.

On a third NFS client running Linux (a recent Ubuntu) I get almost the same
catastrophic performance, with or without sync=disabled.

All three NFS clients use TCP.

If I do a plain scp I get normal speed (~9-10 MB/s), so the raw network is
not the problem.
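
As a side note, a rough way to compare how the fast and the slow client have
mounted the share (server name and paths are placeholders):

# on each FreeBSD client, list the NFS mounts and the options in effect
mount -t nfs

# example of an explicit test mount with larger transfer sizes
mount -t nfs -o nfsv3,tcp,rsize=65536,wsize=65536 server:/export/data /mnt/test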

I tried tuning some sysctls (found with Google):

net.inet.tcp.sendbuf_max: 2097152 -> 16777216
net.inet.tcp.recvbuf_max: 2097152 -> 16777216
net.inet.tcp.sendspace: 32768 -> 262144
net.inet.tcp.recvspace: 65536 -> 262144
net.inet.tcp.mssdflt: 536 -> 1452
net.inet.udp.recvspace: 42080 -> 65535
net.inet.udp.maxdgram: 9216 -> 65535
net.local.stream.recvspace: 8192 -> 65535
net.local.stream.sendspace: 8192 -> 65535


and that changed nothing either.
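
For reference, values like these are applied at runtime with sysctl(8) and go
into /etc/sysctl.conf to survive a reboot; a sketch with two of the values
listed above:

# apply at runtime
sysctl net.inet.tcp.sendbuf_max=16777216
sysctl net.inet.tcp.recvbuf_max=16777216

# persist across reboots: add the same name=value lines to /etc/sysctl.conf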

Does anyone have any idea?

Regards.

JAS

-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: j...@obspm.fr
Heure local/Local time:
jeu 31 jan 2013 23:04:47 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove disk

2012-12-06 Thread Albert Shih
 On 01/12/2012 at 08:33:31 -0700, Jan Owoc wrote:
Hi,

Sorry, I have been very busy these past few days.

> >> >
> >> > http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html
> 
> The commands described on that page do not have direct equivalents in
> zfs. There is currently no way to reduce the number of "top-level
> vdevs" in a pool or to change the RAID level.

OK. 

> >> > I have a zpool of 48 disks arranged as 4 raidz2 vdevs (12 disks each).
> >> > Of those 48 disks, 36 are 3 TB and 12 are 2 TB.
> >> > Can I buy 12 new 4 TB disks, put them in the server, add them to the
> >> > zpool, ask the pool to migrate all the data from the 12 old disks onto
> >> > the new ones, and then remove the old disks?
> 
> In your specific example this means that you have 4 RAIDZ2 vdevs of 12
> disks each. ZFS doesn't allow you to reduce the number of vdevs (you
> have 4). ZFS doesn't allow you to change any of them from RAIDZ2 to any
> other configuration (eg RAIDZ). ZFS doesn't allow you to change the
> fact that you have 12 disks in a vdev.

OK thanks. 

> 
> If you don't have a full set of new disks on a new system, or enough
> room on backup tapes to do a backup-restore, there are only two ways
> to add capacity to the pool:
> 1) add a 5th top-level vdev (eg. another set of 12 disks)

That's not a problem. 

> 2) replace the disks with larger ones one-by-one, waiting for a
> resilver in between

This is the point where I don't see how to do it. I currently have 48 disks,
/dev/da0 -> /dev/da47 (I'm running FreeBSD 9.0), let's say 3 TB each.

I have 4 raidz2 vdevs, the first spanning /dev/da0 -> /dev/da11, etc.

So I physically add a new enclosure with 12 new disks, say 4 TB each.

They will show up as /dev/da48 -> /dev/da59.

Say I want to remove /dev/da0 -> /dev/da11. First I pull out /dev/da0;
the first raidz2 is then in a «degraded» state, so I tell the pool that
the replacement disk is /dev/da48.

I repeat this process until /dev/da11 has been replaced by /dev/da59, as
sketched below.
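
For illustration, each replacement step would look roughly like this
(assuming the pool is named «tank»; the name is a placeholder):

# tell the pool that /dev/da48 takes over from the pulled /dev/da0
zpool replace tank da0 da48

# wait for the resilver to finish before moving on to the next disk
zpool status tank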

But in the end, how much space will be used on /dev/da48 -> /dev/da59?
Will I get 3 TB or 4 TB per disk? Since during the whole process ZFS only
uses 3 TB of each new disk, how does it magically end up using the full
4 TB at the end?
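
(From what I read in the zpool(8) man page, the vdev only grows to the new
size once all 12 of its disks have been replaced, and only if expansion is
enabled; a sketch, again with «tank» as a placeholder pool name:)

# let vdevs grow automatically once all their disks have been replaced
zpool set autoexpand=on tank

# or expand an already-replaced disk by hand
zpool online -e tank da48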

Second question: when I pull out the first enclosure (the old /dev/da0 ->
/dev/da11) and reboot the server, the kernel will renumber the remaining
disks, meaning

old /dev/da12 --> /dev/da0
old /dev/da13 --> /dev/da1
etc...
old /dev/da59 --> /dev/da47

How is ZFS going to handle that?
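
(As far as I understand, ZFS identifies pool members by the labels and GUIDs
written on the disks themselves, not by the /dev/daN numbers, so an export
before the shuffle and an import afterwards should be enough; a sketch:)

zpool export tank
# pull the old enclosure, reboot, let the kernel renumber the disks
zpool import tank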

> > When I change the disks, I would also like to change the disk
> > enclosures; I don't want to keep using the old ones.
> 
> You didn't give much detail about the enclosure (how it's connected,
> how many disk bays it has, how it's used etc.), but are you able to
> power off the system and transfer all the disks at once?

Server: Dell PowerEdge 610
4 x enclosures: MD1200, each with 12 disks of 3 TB
Connection: SAS
SAS card: LSI
The enclosures are daisy-chained:

server --> MD1200.1 --> MD1200.2 --> MD1200.3 --> MD1200.4


> 
> 
> > And what happens if I have 24 or 36 disks to change? It would take
> > months to do that.
> 
> Those are the current limitations of zfs. Yes, with 12x2TB of data to
> copy it could take about a month.

OK. 

> 
> If you are feeling particularly risky and have backups elsewhere, you
> could swap two drives at once, but then you lose all your data if one
> of the remaining 10 drives in the vdev failed.

OK. 

Thanks for the help

Regards.

JAS
-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: j...@obspm.fr
Heure local/Local time:
jeu 6 déc 2012 09:20:55 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove disk

2012-12-01 Thread Albert Shih
 On 30/11/2012 at 15:52:09 +0100, Tomas Forsman wrote:
> On 30 November, 2012 - Albert Shih sent me these 0,8K bytes:
> 
> > Hi all,
> > 
> > I would like to know whether it's possible to do something like this with ZFS:
> > 
> > http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html
> > 
> > meaning:
> > 
> > I have a zpool of 48 disks arranged as 4 raidz2 vdevs (12 disks each).
> > Of those 48 disks, 36 are 3 TB and 12 are 2 TB.
> > Can I buy 12 new 4 TB disks, put them in the server, add them to the
> > zpool, ask the pool to migrate all the data from the 12 old disks onto
> > the new ones, and then remove the old disks?
> 
> You pull out one 2T, put in a 4T, wait for resilver (possibly tell it to
> replace, if you don't have autoreplace on)
> Repeat until done.

Well, in fact it's a little more complicated than that.

When I change the disks, I would also like to change the disk enclosures;
I don't want to keep using the old ones.

And what happens if I have 24 or 36 disks to change? It would take months
to do that.

Regards.

JAS
-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: j...@obspm.fr
Heure local/Local time:
sam 1 déc 2012 12:17:39 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Remove disk

2012-11-30 Thread Albert Shih
Hi all,

I would like to know whether it's possible to do something like this with ZFS:

http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html

meaning:

I have a zpool of 48 disks arranged as 4 raidz2 vdevs (12 disks each).
Of those 48 disks, 36 are 3 TB and 12 are 2 TB.
Can I buy 12 new 4 TB disks, put them in the server, add them to the zpool,
ask the pool to migrate all the data from the 12 old disks onto the new
ones, and then remove the old disks?

Regards.


-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: j...@obspm.fr
Heure local/Local time:
ven 30 nov 2012 15:18:32 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How many disk in one pool

2012-10-05 Thread Albert Shih
Hi all,

I'm currently running ZFS under FreeBSD. I have a question about how many
disks I «can» have in one pool.

At the moment I'm running one server (FreeBSD 9.0) with 4 Dell MD1200
enclosures, i.e. 48 disks. I've configured the pool with 4 raidz2 vdevs
(one per MD1200).

From what I understand I can add more MD1200s, but if I lose one MD1200
for any reason I lose the entire pool.

In your experience, what's the «limit»? 100 disks?

How does FreeBSD handle 100 disks? /dev/da100?

Regards.

JAS

-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: j...@obspm.fr
Heure local/Local time:
ven 5 oct 2012 22:52:22 CEST
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snapshot size

2012-06-05 Thread Albert Shih
 On 05/06/2012 at 17:08:51 +0200, Stefan Ring wrote:
> > Two questions from a newbie.
> >
> >        1/ What does REFER mean in zfs list?
> 
> The amount of data that is reachable from the file system root. It's
> just what I would call the contents of the file system.

OK thanks. 

> 
> >        2/ How can I know the total size of all snapshots for a filesystem?
> >        (OK, I can add up the output of zfs list -t snapshot)
> 
> zfs get usedbysnapshots 

Thanks.

Can I say 

    USED - REFER = snapshot size ?
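
(For reference, the accounting can be checked directly; «tank/home» is a
placeholder dataset, and as far as I understand USED also covers children
and reservations, so USED - REFER only equals the snapshot space in the
simple case:)

# usedbysnapshots is the space that would be freed by destroying all snapshots
zfs list -o name,used,refer,usedbysnapshots tank/home
zfs get usedbysnapshots tank/home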


Regards.

JAS
-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: j...@jabber.obspm.fr
Heure local/Local time:
mar 5 jui 2012 17:16:07 CEST
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] snapshot size

2012-06-05 Thread Albert Shih
Hi all,

Two questions from a newbie.

1/ What does REFER mean in zfs list?

2/ How can I know the total size of all snapshots for a filesystem?
(OK, I can add up the output of zfs list -t snapshot)

Regards.

JAS


-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: j...@jabber.obspm.fr
Heure local/Local time:
mar 5 jui 2012 16:57:38 CEST
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs disappeared on FreeBSD.

2012-01-17 Thread Albert Shih
 On 17/01/2012 at 06:31:22 -0800, Brad Stone wrote:
> Try zpool import

Thanks. 

It's working.

Regards.

-- 
Albert SHIH
DIO batiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
Heure local/Local time:
mar 17 jan 2012 15:35:31 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs disappeared on FreeBSD.

2012-01-17 Thread Albert Shih
Hi all. 

I'm a total newbie with ZFS, so if I ask a stupid question, please don't
send angry mail to the mailing list, send it directly to me ;-)

I have a Dell server running FreeBSD 9.0 with 4 MD1200 enclosures, 48 disks
in total. They are connected through an LSI card, so I can see all of
/dev/da0 -> /dev/da47.

I created a zpool with 4 raidz2 vdevs (one for each MD1200).

After that I reinstalled the server (I had set the wrong swap size), and now
I can't find my zpool at all.

Is that normal?
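
(As the follow-up above shows, zpool import turned out to be the answer: the
reinstalled system simply has no record of the pool, so it has to be
re-imported. A sketch, with the pool name as a placeholder:)

# list the pools ZFS can find on the attached disks
zpool import

# import the pool it finds, by name or by numeric id
zpool import mypool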

Regards.

JAS
-- 
Albert SHIH
DIO batiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
Heure local/Local time:
mar 17 jan 2012 15:12:47 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on Dell with FreeBSD

2011-10-27 Thread Albert Shih
 On 19/10/2011 at 19:23:26 -0700, Rocky Shek wrote:

 Hi. 

 Thanks for this information. 

> I also recommend the LSI 9200-8E or the newer 9205-8E with the IT firmware,
> based on past experience

Do you know whether the LSI 9205-8E HBA or the LSI 9202-16E HBA works under
FreeBSD 9.0?

Best regards.

Regards.
-- 
Albert SHIH
DIO batiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
Heure local/Local time:
jeu 27 oct 2011 17:20:11 CEST
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on Dell with FreeBSD

2011-10-20 Thread Albert Shih
 On 19/10/2011 at 10:52:07 -0400, Krunal Desai wrote:
> On Wed, Oct 19, 2011 at 10:14 AM, Albert Shih  wrote:
> > When we buy an MD1200 we need a PERC H800 RAID card in the server, so we
> > have two options:
> >
> >        1/ create one logical volume on the PERC H800, so the server sees a
> >        single volume, put the zpool on that unique volume, and let the
> >        hardware manage the RAID;
> >
> >        2/ create 12 logical volumes on the PERC H800 (so no hardware RAID)
> >        and let FreeBSD and ZFS manage the RAID.
> >
> > Which one is the better solution?
> >
> > Any advice about the RAM I need on the server (currently one MD1200, i.e.
> > 12 x 2 TB disks)?
> 
> I know the PERC H200 can be flashed with IT firmware, making it in
> effect a "dumb" HBA perfect for ZFS usage. Perhaps the H800 has the
> same? (If not, can you get the machine configured with a H200?)

I'm not sure what you mean by «H200 flashed with IT firmware».

> If that's not an option, I think Option 2 will work. My first ZFS
> server ran on a PERC 5/i, and I was forced to make 8 single-drive RAID
> 0s in the PERC Option ROM, but Solaris did not seem to mind that.

OK.

I don't have a choice (too complex to explain, and it doesn't matter here):
I can only buy from Dell at the moment.

On the Dell website I have the choice between:


SAS 6Gbps External Controller
PERC H800 RAID Adapter for External JBOD, 512MB Cache, PCIe 
PERC H800 RAID Adapter for External JBOD, 512MB NV Cache, PCIe 
PERC H800 RAID Adapter for External JBOD, 1GB NV Cache, PCIe
PERC 6/E SAS RAID Controller, 2x4 Connectors, External, PCIe 256MB Cache
PERC 6/E SAS RAID Controller, 2x4 Connectors, External, PCIe 512MB Cache
LSI2032 SCSI Internal PCIe Controller Card

I have no idea what the first item is. But from what I understand, the best
choice is the first one or the last one?

Regards.

JAS

-- 
Albert SHIH
DIO batiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
Heure local/Local time:
jeu 20 oct 2011 11:44:39 CEST
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on Dell with FreeBSD

2011-10-20 Thread Albert Shih
 On 19/10/2011 at 21:30:31 +0700, Fajar A. Nugraha wrote:
> > Sorry for cross-posting; I don't know which mailing list I should post
> > this message to.
> >
> > I would like to use FreeBSD with ZFS on some Dell servers with some
> > MD1200 enclosures (classic DAS).
> >
> > When we buy an MD1200 we need a PERC H800 RAID card in the server, so we
> > have two options:
> >
> >        1/ create one logical volume on the PERC H800, so the server sees a
> >        single volume, put the zpool on that unique volume, and let the
> >        hardware manage the RAID;
> >
> >        2/ create 12 logical volumes on the PERC H800 (so no hardware RAID)
> >        and let FreeBSD and ZFS manage the RAID.
> >
> > Which one is the better solution?
> 
> Neither.
> 
> The best solution is to find a controller which can pass the disk as
> JBOD (not encapsulated as virtual disk). Failing that, I'd go with (1)
> (though others might disagree).

Thanks. That's going to be complicated... but I'm going to try.

> 
> >
> > Any advice about the RAM I need on the server (currently one MD1200, i.e.
> > 12 x 2 TB disks)?
> 
> The more the better :)

Well, my employer is not so rich. 

It's the first time I'm going to use ZFS on FreeBSD in production (I use it
on my laptop, but that doesn't mean much), so in your opinion what's the
minimum RAM I need? Is something like 48 GB enough?

> Just make sure you do NOT use dedup until you REALLY know what you're
> doing (which usually means buying lots of RAM and SSD for L2ARC).

Ok. 

Regards.

JAS
--
Albert SHIH
DIO batiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
Heure local/Local time:
jeu 20 oct 2011 11:30:49 CEST
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS on Dell with FreeBSD

2011-10-19 Thread Albert Shih
Hi 

Sorry for cross-posting; I don't know which mailing list I should post this
message to.

I would like to use FreeBSD with ZFS on some Dell servers with some MD1200
enclosures (classic DAS).

When we buy an MD1200 we need a PERC H800 RAID card in the server, so we have
two options:

1/ create one logical volume on the PERC H800, so the server sees a single
volume, put the zpool on that unique volume, and let the hardware manage
the RAID;

2/ create 12 logical volumes on the PERC H800 (so no hardware RAID) and let
FreeBSD and ZFS manage the RAID.

Which one is the better solution?

Any advice about the RAM I need on the server (currently one MD1200, i.e.
12 x 2 TB disks)?

Regards.

JAS
-- 
Albert SHIH
DIO batiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
Heure local/Local time:
mer 19 oct 2011 16:11:40 CEST
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration for a thumper

2008-02-01 Thread Albert Shih
 On 01/02/2008 at 11:17:14 -0800, Marion Hakanson wrote:
> [EMAIL PROTECTED] said:
> > Depending on needs for space vs. performance, I'd probably pick either 5*9
> > or 9*5, with 1 hot spare.
> 
> [EMAIL PROTECTED] said:
> > How can I check the speed? (I'm a total newbie on Solaris.)
> 
> We're deploying a new Thumper w/750GB drives, and did space vs performance
> tests comparing raidz2 4*11 (2 spares, 24TB) with 7*6 (4 spares, 19TB).
> Here are our bonnie++ and filebench results:
>   http://acc.ohsu.edu/~hakansom/thumper_bench.html
> 

Many thanks for making this work available. Let me read it.

Regards.

--
Albert SHIH
Observatoire de Paris Meudon
SIO batiment 15
Heure local/Local time:
Ven 1 fév 2008 23:03:59 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration for a thumper

2008-01-30 Thread Albert Shih
 On 30/01/2008 at 11:01:35 -0500, Kyle McDonald wrote:
> Albert Shih wrote:
>> What kind of pool do you use with 46 disks? (46 = 2*23 and 23 is a prime
>> number, which means I can make raidz vdevs with 6 or 7 or any number of disks.)
>> 
>>   
> Depending on needs for space vs. performance, I'd probably pick either 5*9
> or 9*5, with 1 hot spare.

Thanks for the tips...

How can I check the speed? (I'm a total newbie on Solaris.)

I've used

mkfile 10g

for writes and I get the same performance with 5*9 and 9*5.

Do you have any advice about tools like iozone?
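
For reference, a typical sequential iozone run looks roughly like this (file
path and sizes are placeholders; the test file should be much larger than RAM
so caching doesn't hide the disks):

# sequential write (test 0) and read (test 1), 128 kB records, 64 GB file
iozone -i 0 -i 1 -r 128k -s 64g -f /zpool1/iozone.tmp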

Regards.

--
Albert SHIH
Observatoire de Paris Meudon
SIO batiment 15
Heure local/Local time:
Mer 30 jan 2008 17:10:55 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS configuration for a thumper

2008-01-30 Thread Albert Shih
Hi all

I have a Sun X4500 with 48 disks of 750 GB.

The server comes with Solaris installed on two disks, which means I have 46
disks for ZFS.

When I look at the default configuration of the zpool:

zpool create -f zpool1 raidz c0t0d0 c1t0d0 c4t0d0 c6t0d0 c7t0d0
zpool add -f zpool1 raidz c0t1d0 c1t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0
zpool add -f zpool1 raidz c0t2d0 c1t2d0 c4t2d0 c5t2d0 c6t2d0 c7t2d0
zpool add -f zpool1 raidz c0t3d0 c1t3d0 c4t3d0 c5t3d0 c6t3d0 c7t3d0
zpool add -f zpool1 raidz c0t4d0 c1t4d0 c4t4d0 c6t4d0 c7t4d0
zpool add -f zpool1 raidz c0t5d0 c1t5d0 c4t5d0 c5t5d0 c6t5d0 c7t5d0
zpool add -f zpool1 raidz c0t6d0 c1t6d0 c4t6d0 c5t6d0 c6t6d0 c7t6d0
zpool add -f zpool1 raidz c0t7d0 c1t7d0 c4t7d0 c5t7d0 c6t7d0 c7t7d0

which means some vdevs have 5 disks and others have 6.

When I try to do the same I get this message:

mismatched replication level: pool uses 5-way raidz and new vdev uses 6-way raidz

I can force this with the «-f» option.

But what does that mean (sorry if the question is stupid)?

What kind of pool do you use with 46 disks? (46 = 2*23 and 23 is a prime
number, which means I can make raidz vdevs with 6 or 7 or any number of disks.)

Regards.

--
Albert SHIH
Observatoire de Paris Meudon
SIO batiment 15
Heure local/Local time:
Mer 30 jan 2008 16:36:49 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shared ZFS pools

2006-12-06 Thread Albert Shih
 On 06/12/2006 at 05:05:55 +0100, Flemming Danielsen wrote:
> Hi
> I have 2 questions on the use of ZFS.
> How do I ensure I have site redundancy using zfs pools? As I see it, we only
> ensure mirrors between 2 disks. I have 2 HDS arrays, one on each site, and I
> want to be able to lose one of them and have my pools still running. For
> instance:
>  
> I have created 2 LUNs on each site (A and B), named AA, AB and BA, BB. I then
> create my pool and mirror AA to BA and AB to BB. If I lose site B, hosting BA
> and BB, can I be sure they do not hold both copies of any data?
>  
I have a (maybe stupid) question: what do you use to attach the disks across
the 2 different sites? Are you using FC attachment?

Regards.


--
Albert SHIH
Universite de Paris 7 (Denis DIDEROT)
U.F.R. de Mathematiques.
7 ième étage, plateau D, bureau 10
Heure local/Local time:
Wed Dec 6 09:06:30 CET 2006
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] need Clarification on ZFS

2006-12-05 Thread Albert Shih
 On 04/12/2006 at 23:34:39 -0800, Jason A. Hoffman wrote:
> Hi Mastan,
> 
> >Like this, can we share a zfs file system between two machines? If
> >so, please explain it.
> 
> It's always going from machine 1 to machine 2?
> 
> zfs send [EMAIL PROTECTED] | ssh [EMAIL PROTECTED] | zfs  
> recv filesystem-one-machine2
> 
> will stream a snapshot from the first machine to a filesystem/device/ 
> snapshot on machine2

That's impressive. What's the size of the stream you send through ssh? Is it
exactly the size of the filesystem, or only the space actually used? Can I
send just the diff? For example:

At t=0    I send a big stream using your command
At t=t+1  I just send the diff, not a big stream
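
(For what it's worth, zfs send has an incremental mode, so after the first
full stream only the differences between two snapshots need to cross the
wire; a sketch with placeholder dataset, snapshot and host names:)

# t=0: send the first snapshot in full
zfs send tank/data@snap0 | ssh machine2 zfs recv backup/data

# t=t+1: send only what changed between snap0 and snap1
zfs send -i tank/data@snap0 tank/data@snap1 | ssh machine2 zfs recv backup/data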

Regards.

 
--
Albert SHIH
Universite de Paris 7 (Denis DIDEROT)
U.F.R. de Mathematiques.
7 ième étage, plateau D, bureau 10
Heure local/Local time:
Tue Dec 5 14:53:13 CET 2006
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS on multi-volume

2006-12-05 Thread Albert Shih
 On 04/12/2006 at 21:24:26 -0800, Anton B. Rang wrote:
> It is possible to configure ZFS in the way you describe, but your performance 
> will be limited by the older array.
> 
> All mirror writes have to be stored on both arrays before they are considered 
> complete, so writes will be as slow as the slowest disk or array involved.
> 

OK. 

Is it possible to configure the server, the high-end RAID array, and the
pool made of my old RAID arrays so that:

1/ the server reads and writes only from/to the high-end RAID array;
2/ the server copies all data from the high-end RAID array to the
pool of old arrays «when it has the time»? I want this to be
automatic; I don't want to do it with something like rsync.

What I want to do is build an NFS server with the primary data on the new
high-end RAID array, but also use my old low-end RAID arrays for backups
(in case I lose the high-end array), and only for backups.

Do you think ZFS can help me?

Best regards

--
Albert SHIH
Universite de Paris 7 (Denis DIDEROT)
U.F.R. de Mathematiques.
7 ième étage, plateau D, bureau 10
Tel  : 01 44 27 86 88
FAX  : 01 44 27 69 35
GSM(UFR) : 06 85 05 58 43
Heure local/Local time:
Tue Dec 5 14:16:01 CET 2006
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS on multi-volume

2006-12-04 Thread Albert Shih
Hi all

Sorry if my question is not very clear; I'm not very familiar with ZFS
(which is why I'm asking).

Suppose I have a lot of low-cost RAID arrays (Brownie-type, i.e. IDE/SATA
disks), all with SCSI attachment (about 10 of them, roughly 20 TB in total).
Now, if I buy a big «high-end» RAID array with FC attachment and a big Sun
server, can I create a ZFS filesystem over all these disks such that:

all data is on the new big RAID array (using hardware RAID)

and 

all data is mirrored onto the combined space of my old low-cost RAID arrays

? 

If it's possible, what do you think of the performance?

The purpose is to build a big NFS server with the primary data on the
high-end RAID array, but using ZFS to mirror all data onto the old arrays.

Regards.
--
Albert SHIH
Universite de Paris 7 (Denis DIDEROT)
U.F.R. de Mathematiques.
7 ième étage, plateau D, bureau 10
Heure local/Local time:
Mon Dec 4 23:04:04 CET 2006
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss