[zfs-discuss] UFS or ZFS for MySQL and Apache web server data and databases
I have two 200GB iSCSI LUNs set up, one on each X4600 M2, running Solaris 10 x86 Update 4. The data on these iSCSI disks will be Apache on one server and MySQL on the other. My question is: should I set these disks up as a ZFS pool or as a UFS soft partition (eventually the LUN will be expanded)? Which would offer the best performance and be easier to expand later on? Which file system has fewer problems with this kind of setup? These are production systems, so I need the best results. The disks are connected to an Equallogic box configured as RAID 50. This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
[zfs-discuss] ZFS disk gives a kstat_create namespace collision
We have a ZFS pool set up on an iSCSI disk attached to an Equallogic box. When we boot the system we get the following warning: WARNING: kstat_create('mdi', 0, 'ssd0.t1.iscsi0'): namespace collision I believe this relates to the ZFS iSCSI disks. This may not be the right forum, but I don't know where else to find a solution.
Re: [zfs-discuss] Moving ZFS to an iSCSI Equallogic LUN
What would be the commands for the three-way mirror, or an example of what you're describing? I thought the 200GB LUN would have to be the same size to attach to the existing mirror, and that you would have to attach two LUN disks rather than one. Once it attaches, it automatically resilvers (syncs) the disk; then, if I wanted to, could I remove the two 73GB disks, or keep them in the pool and expand the pool later?
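A sketch of the three-way-mirror approach being discussed, with hypothetical pool and device names (the pool, the two internal 73GB disks, and the iSCSI LUN will have different names on the actual system); these commands require a live pool and are illustrative only:

```shell
# Hypothetical names: pool "tank", existing 73GB mirror members
# c1t2d0/c1t3d0, new 200GB iSCSI LUN c4t0d0.
# Attaching one new device to an existing mirror member makes a
# three-way mirror; only ONE new device is needed, and it may be
# larger than the others (the extra space stays unusable until the
# smaller disks are detached).
zpool attach tank c1t2d0 c4t0d0   # resilver starts automatically
zpool status tank                 # wait until the resilver completes

# Afterwards, either keep all three sides of the mirror, or detach
# the old 73GB disks one at a time:
zpool detach tank c1t2d0
zpool detach tank c1t3d0
# With only the 200GB LUN left, the pool can grow to the LUN's size
# (depending on the release, this may need an export/import cycle).
```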
[zfs-discuss] Moving ZFS to an iSCSI Equallogic LUN
We have a 73GB ZFS mirror (two internal disks on a Sun Fire V440). We are going to attach this system to an Equallogic box and present an iSCSI LUN of about 200GB from it to the V440. The Equallogic box is configured as hardware RAID 50 (with two hot spares for redundancy). My question is: what's the best approach to moving the data off the ZFS mirror onto this LUN, or joining this LUN to the pool, without keeping the mirror layout (because of the disk space a mirror wastes)? Whatever the approach, the current data cannot be lost, and the least downtime would help too. I thought the best approach might be to create the LUN as a 200GB ZFS pool, then do a zpool export from the 73GB mirror and a zpool import on the 200GB iSCSI LUN. The LUN would still be RAID 50 underneath even though it's a ZFS file system, so we would still have some redundancy and could eventually grow the pool. Or would it be better to use UFS on the LUN and copy the data over? What would be the best approach for moving this data or configuring the disks involved? I'm open to any suggestions.
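One caveat with the export/import idea: zpool export and import move an existing pool between hosts, they don't copy data onto new disks. A hedged alternative is snapshot plus send/receive, sketched below with hypothetical pool and dataset names (the real names will differ, and the commands need a live system):

```shell
# Hypothetical names: old pool "oldpool" with dataset "data",
# new pool "newpool" built on the 200GB iSCSI LUN c4t0d0.
zpool create newpool c4t0d0

# Initial bulk copy while the old pool stays online:
zfs snapshot oldpool/data@migrate
zfs send oldpool/data@migrate | zfs receive newpool/data

# For minimal downtime: stop writers briefly, then send only the
# changes made since the first snapshot:
zfs snapshot oldpool/data@final
zfs send -i @migrate oldpool/data@final | zfs receive newpool/data
# Verify the copy before destroying anything on the old mirror.
```

The old mirror stays intact until the copy is verified, which satisfies the no-data-loss requirement.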
Re: [zfs-discuss] Break a ZFS mirror and concatenate the disks
Currently c2t2d0 and c2t3d0 are set up in a mirror (both drives are 73GB). I want to break the mirror while keeping the data on c2t2d0, then concatenate c2t2d0 with c2t3d0 so I have a 146GB pool that is no longer mirrored, just concatenated. Since they're mirrored right now, I need the data kept safe on one disk so I don't lose everything. Adding new disks is not an option; I want to break the mirror so I can join the disks together in one pool while keeping the data.
[zfs-discuss] Break a ZFS mirror and concatenate the disks
We have a ZFS mirror of two 73GB disks, but we are running out of space. I want to break the mirror and join the disks to get a 146GB pool, and of course not lose the data doing this. What are the commands? To break the mirror I would do zpool detach moodle c1t3d0:

  NAME        STATE     READ WRITE CKSUM
  moodle      ONLINE       0     0     0
    mirror    ONLINE       0     0     0
      c1t2d0  ONLINE       0     0     0
      c1t3d0  ONLINE       0     0     0

Then could I do zpool add moodle c1t3d0 without losing the data?
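The two commands the post proposes do work in that order; a sketch, using the pool name from the status output above (run against a live pool only):

```shell
# Step 1: detach one side of the mirror. The pool keeps all data on
# the remaining disk; the detached disk's ZFS label is cleared.
zpool detach moodle c1t3d0

# Step 2: add the freed disk back as a second top-level vdev.
# Existing data is untouched; the pool grows to roughly 146GB and
# new writes are striped across both disks.
zpool add moodle c1t3d0

# Caveats: the pool now has NO redundancy (one disk failure loses
# everything), and zpool add is one-way -- a top-level vdev cannot
# be removed again later.
```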
[zfs-discuss] changing mdb memory values
I want to set the values of arc c and arc p (C_max and P_addr) to different memory values. What would be the hexadecimal values for 256MB and for 128MB? I'm trying to use "mdb -k" to limit the amount of memory ZFS uses.
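The conversion itself is just megabytes times 2^20, rendered in hex. A small helper (illustrative, not part of any Solaris tooling) shows the two requested values:

```python
# Convert a size in megabytes (2**20 bytes each) to the hex byte
# count mdb expects when poking ARC tunables.
def mb_to_hex(mb: int) -> str:
    """Return the byte count for `mb` megabytes as a hex string."""
    return hex(mb * 1024 * 1024)

print(mb_to_hex(256))  # 0x10000000
print(mb_to_hex(128))  # 0x8000000
```

So 256MB is 0x10000000 bytes and 128MB is 0x8000000 bytes.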
[zfs-discuss] Re: ZFS with raidz
The reason for this question is that we currently have our disks set up in a hardware RAID 5 on an EMC device, and these disks are configured as a ZFS file system. Would it benefit us to lay the disks out as raidz on top of the hardware RAID 5 that is already there? Or would this double RAID, software on top of hardware, slow our performance? Or would a raidz setup be better than the hardware RAID 5? Also, if we do set the disks up as raidz, would it benefit us more to specify each disk in the raidz, or to create them as LUNs and then build the raidz from those?
[zfs-discuss] ZFS with raidz
When using raidz or raidz2 in ZFS, do all the disks have to be the same size?
[zfs-discuss] ZFS causing slow boot up
We created 10,000 ZFS file systems with no data in them yet, and it seems that after we did this our boot process takes over an hour. This is on a V210 with 1GB of memory. We've rebooted the machine twice and it still takes an hour, and the system has the latest Solaris 10 patches installed. Do any kernel parameters need to be changed in Solaris 10 SPARC to make the mount process at boot faster? We don't have any alerts or warnings in the /var/adm/messages log. Before we created all these ZFS file systems, a reboot took minutes.
[zfs-discuss] ZFS Degraded Disks
What are the necessary steps to troubleshoot a degraded disk, and what are the steps for replacing a disk in a ZFS mirrored pool? I have an identical disk, but it has a UFS filesystem on it (not used for any purpose); can I format the disk and then use it as a replacement in the ZFS mirror?
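A hedged sketch of the usual replacement sequence, with hypothetical pool and device names (the stale UFS filesystem doesn't block this: ZFS relabels a whole disk when it takes it over, though clearing the label first with format(1M) does no harm):

```shell
# Hypothetical names: pool "tank", degraded disk c1t3d0, spare disk
# c2t0d0 that currently holds the unused UFS filesystem.
zpool status -x        # list unhealthy pools and the DEGRADED device
zpool status -v tank   # per-device read/write/checksum error counts

# Option 1: same physical slot -- swap the hardware, then:
zpool replace tank c1t3d0

# Option 2: replace with the other disk already in the system:
zpool replace tank c1t3d0 c2t0d0

zpool status tank      # watch the resilver run to completion
```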
[zfs-discuss] Re: Re: ZFS Storage Pool advice
Basically then, with the data being stored on the ZFS disks (no applications) plus web server logs, it would benefit us more to have the 3 LUNs set up in one ZFS storage pool?
[zfs-discuss] Re: ZFS Storage Pool advice
The LUNs will be on separate "SPA" controllers, not all on the same controller, so that's why I thought that if we split our data across different disks and ZFS storage pools we would get better I/O performance. Correct?
[zfs-discuss] Re: ZFS Storage Pool advice
Also, there will be no NFS services on this system.
[zfs-discuss] Re: ZFS Storage Pool advice
We're looking for pure performance. The LUNs will contain student user account files that students will access, and department share files such as MS Word documents, Excel files, and PDFs. There will be no applications on the ZFS storage pool or pools. Does this help on what strategy might be best?
[zfs-discuss] ZFS Storage Pool advice
This question concerns ZFS. We have a Sun Fire V890 attached to an EMC disk array. Here's our plan to incorporate ZFS: on the EMC array we will create 3 LUNs. Now, how should ZFS be laid out for the best performance? What I'm trying to ask is: if you have 3 LUNs and want to create ZFS storage, would it be better to have one storage pool per LUN, or to combine the 3 LUNs as one big disk under ZFS and create one huge storage pool? Example:

  LUN1 200GB  ZFS storage pool "pooldata1"
  LUN2 200GB  ZFS storage pool "pooldata2"
  LUN3 200GB  ZFS storage pool "pooldata3"

or

  LUN  600GB  ZFS storage pool "alldata"
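For comparison, the single-pool variant can be sketched as follows (device names are hypothetical; the real LUN device names will differ):

```shell
# One pool built from all three 200GB LUNs (~600GB usable).
# ZFS stripes writes across all three top-level vdevs, so a single
# pool generally gives better aggregate throughput than three
# separate 200GB pools, and free space is shared rather than
# siloed per pool.
zpool create alldata c4t0d0 c4t1d0 c4t2d0

# Separate datasets still keep the data administratively distinct:
zfs create alldata/users
zfs create alldata/shares
```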