[zfs-discuss] Solaris 11 Express

2010-11-15 Thread Wolfraider
Just went to Oracle's website and noticed that you can now download Solaris 11 Express.


[zfs-discuss] ZFS pool issues with COMSTAR

2010-10-07 Thread Wolfraider
We have a weird issue with our ZFS pool and COMSTAR. The pool shows online with 
no errors and everything looks good, but when we try to access zvols shared out 
with COMSTAR, Windows reports that the devices have bad blocks. Everything had 
been working great until last night, and no changes have been made to this 
system in weeks. We are really hoping we can get our data back from this 
system. The 3 volumes that start with CV are shared with a server called CV, 
and likewise for DPM.
The drives are connected to the OpenSolaris box with Fibre Channel, and we are 
also using Fibre Channel in target mode for the host side.

OpenSolaris b134

ad...@comstar2:~$ zpool status
  pool: pool_1
 state: ONLINE
 scrub: scrub in progress for 0h0m, 0.01% done, 93h49m to go
config:

        NAME                            STATE     READ WRITE CKSUM
        pool_1                          ONLINE       0     0     0
          raidz2-0                      ONLINE       0     0     0
            c12t60050CC000F01A8E007Dd0  ONLINE       0     0     0
            c12t60050CC000F01A8E007Ed0  ONLINE       0     0     0
            c12t60050CC000F01A8E007Fd0  ONLINE       0     0     0
            c12t60050CC000F01A8E0080d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0081d0  ONLINE       0     0     0
          raidz2-1                      ONLINE       0     0     0
            c12t60050CC000F01A8E0082d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0083d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0084d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0085d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0086d0  ONLINE       0     0     0
          raidz2-2                      ONLINE       0     0     0
            c12t60050CC000F01A8E0087d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0088d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0089d0  ONLINE       0     0     0
            c12t60050CC000F01A8E008Ad0  ONLINE       0     0     0
            c12t60050CC000F01A8E008Bd0  ONLINE       0     0     0
          raidz2-3                      ONLINE       0     0     0
            c12t60050CC000F01A8E008Cd0  ONLINE       0     0     0
            c12t60050CC000F01A8E008Dd0  ONLINE       0     0     0
            c12t60050CC000F01A8E008Ed0  ONLINE       0     0     0
            c12t60050CC000F01A8E008Fd0  ONLINE       0     0     0
            c12t60050CC000F01A8E0090d0  ONLINE       0     0     0
          raidz2-4                      ONLINE       0     0     0
            c12t60050CC000F01A8E0091d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0092d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0093d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0094d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0095d0  ONLINE       0     0     0
          raidz2-5                      ONLINE       0     0     0
            c12t60050CC000F01A8E0096d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0097d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0098d0  ONLINE       0     0     0
            c12t60050CC000F01A8E0099d0  ONLINE       0     0     0
            c12t60050CC000F01A8E009Ad0  ONLINE       0     0     0
          raidz2-6                      ONLINE       0     0     0
            c12t60050CC000F01A8E009Bd0  ONLINE       0     0     0
            c12t60050CC000F01A8E009Cd0  ONLINE       0     0     0
            c12t60050CC000F01A8E009Dd0  ONLINE       0     0     0
            c12t60050CC000F01A8E009Ed0  ONLINE       0     0     0
            c12t60050CC000F01A8E009Fd0  ONLINE       0     0     0
          raidz2-7                      ONLINE       0     0     0
            c12t60050CC000F01A8E00A0d0  ONLINE       0     0     0
            c12t60050CC000F01A8E00A1d0  ONLINE       0     0     0
            c12t60050CC000F01A8E00A2d0  ONLINE       0     0     0
            c12t60050CC000F01A8E00A3d0  ONLINE       0     0     0
            c12t60050CC000F01A8E00A4d0  ONLINE       0     0     0
          raidz2-8                      ONLINE       0     0     0
            c12t60050CC000F01A8E00A5d0  ONLINE       0     0     0
            c12t60050CC000F01A8E00A6d0  ONLINE       0     0     0
            c12t60050CC000F01A8E00
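
A few standard checks can help narrow down whether the corruption is visible at the 
ZFS layer or only on the COMSTAR/initiator side. This is only a sketch; the pool name 
comes from the output above, and the rest of the names are whatever your setup uses:

  zpool status -v pool_1   # -v lists any objects with permanent checksum errors
  stmfadm list-lu -v       # confirm the logical units are still online
  sbdadm list-lu           # check the LU GUIDs and the zvol paths backing them
  fmdump -eV | tail -40    # recent FMA error telemetry from the disk/FC drivers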

Re: [zfs-discuss] dedicated ZIL/L2ARC

2010-09-16 Thread Wolfraider
We have the following setup configured. The drives are running on a couple of PAC 
PS-5404s. Since these units do not support JBOD, we have configured each 
individual drive as a single-drive RAID0 and shared out all 48 RAID0s per box. This is 
connected to the Solaris box through a dual-port 4Gb Emulex Fibre Channel card 
with MPIO enabled (round-robin). This is configured as 18 raidz2 vdevs 
in 1 big pool. We currently have 2 zvols created, sized around 
40TB sparse (30T in use). These are in turn shared out using a QLogic QLA2462 
Fibre Channel card in target mode, using both ports. We have 1 zvol connected to 1 
Windows server and the other zvol connected to another Windows server, with both 
Windows servers having a QLogic 2462 Fibre Channel adapter, using both ports and 
MPIO enabled. The Windows servers are running Windows 2008 R2. The zvols are 
formatted NTFS and used as a staging area and D2D2T system for both Commvault 
and Microsoft Data Protection Manager backup solutions. The SAN system sees 
mostly writes since it is used for backups.

We are using Cisco 9124 Fibre Channel switches, and we have recently upgraded to 
Cisco 10G Nexus switches on our Ethernet side. Fibre Channel support on the 
Nexus switches is a few years out for us due to the cost. We are just trying to 
fine-tune our SAN for the best performance possible; we don't really have any 
firm expectations right now. We are always looking to improve something. :)
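
For reference, a sparse zvol of that size and its COMSTAR logical unit are typically 
created along these lines. This is only a sketch: the pool/volume names are made up, 
and the GUID passed to stmfadm is whatever sbdadm prints when the LU is created.

  zfs create -s -V 40T pool_1/cv_staging        # -s makes the 40TB volume sparse
  sbdadm create-lu /dev/zvol/rdsk/pool_1/cv_staging
  stmfadm add-view 600144f0...                  # GUID as printed by sbdadm create-lu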


Re: [zfs-discuss] dedicated ZIL/L2ARC

2010-09-16 Thread Wolfraider
We downloaded zilstat from 
http://www.richardelling.com/Home/scripts-and-programs-1 but we could never get 
the script to run. We are not really sure how to debug it. :(

./zilstat.ksh 
dtrace: invalid probe specifier 
#pragma D option quiet
 inline int OPT_time = 0;
 inline int OPT_txg = 0;
 inline int OPT_pool = 0;
 inline int OPT_mega = 0;
 inline int INTERVAL = 1;
 inline int LINES = -1;
 inline int COUNTER = -1;
 inline int FILTER = 0;
 inline string POOL = "";
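
A couple of hedged guesses on the debugging side: the script needs to be run with 
DTrace kernel privileges (root or pfexec), and the probes it uses (fbt probes on the 
zil routines, if I am reading the script right) have to exist under those names on 
b134. Something like the following would confirm both:

  pfexec dtrace -ln 'fbt::zil*:entry'   # list the zil fbt probes available on this build
  pfexec ./zilstat.ksh 1 10             # zilstat normally takes an interval and a count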


Re: [zfs-discuss] dedicated ZIL/L2ARC

2010-09-14 Thread Wolfraider
Cool, we can get the Intel X25-Es for around $300 apiece from HP with the 
sled. I don't see the X25-M available, so we will look at 4 of the X25-Es.

Thanks :)


[zfs-discuss] dedicated ZIL/L2ARC

2010-09-14 Thread Wolfraider
We are looking into the possibility of adding dedicated ZIL and/or L2ARC 
devices to our pool. We are looking at getting four 32GB Intel X25-E SSDs. 
Would this be a good solution for slow write speeds? We are currently 
sharing out different slices of the pool to Windows servers using COMSTAR and 
Fibre Channel. We are currently getting around 300MB/sec with the disks 
70-100% busy.

OpenSolaris snv_134
Dual 3.2GHz quad-cores with hyperthreading
16GB RAM
Pool_1 – 18 raidz2 groups with 5 drives apiece and 2 hot spares
Disks are around 30% full
No dedup
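
If the X25-Es end up presented as plain disks, attaching them would look roughly like 
the following. The device names here are invented, and a mirrored SLOG is the common 
recommendation; note that a dedicated log device only speeds up synchronous writes, so 
it is worth confirming (e.g. with zilstat) that the COMSTAR workload is actually 
sync-heavy first.

  zpool add pool_1 log mirror c13t0d0 c13t1d0   # mirrored dedicated ZIL (SLOG)
  zpool add pool_1 cache c13t2d0 c13t3d0        # L2ARC read-cache devices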


[zfs-discuss] corrupt pool?

2010-07-19 Thread Wolfraider
Our server locked up hard yesterday and we had to hard-power it off and back 
on. The server locked up again while reading the ZFS config (I left it trying to read 
the ZFS config for 24 hours). I went through and removed the drives for the 
data pool we created, powered on the server, and it booted successfully. I 
removed the pool from the system, reattached the drives, and tried to 
re-import the pool. It has now been trying to import for about 6 hours. Does 
anyone know how to recover this pool? Running build 134.

Thanks
Travis
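
For anyone searching the archives: the usual escalation path for an import that will 
not complete on b134 looks something like this. The pool name is assumed, and -F is a 
last resort since it can discard the last few seconds of writes.

  zpool import              # list what is visible for import without importing anything
  zpool import -f pool_1    # force it if the pool is flagged as in use by another system
  zpool import -F pool_1    # recovery mode: roll back the most recent txgs if they are damaged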


Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Wolfraider
3 shelves with 2 controllers each, 48 drives per shelf. These are Fibre Channel 
attached. We would like all 144 drives added to the same large pool.


Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Wolfraider
> Mirrors are made with vdevs (LUs or disks), not
> pools.  However, the
> vdev attached to a mirror must be the same size (or
> nearly so) as the
> original. If the original vdevs are 4TB, then a
> migration to a pool made
> with 1TB vdevs cannot be done by replacing vdevs
> (mirror method).
>  -- richard

Both LUNs that we are sharing out with COMSTAR are vdevs. It sounds like we can 
create the new temporary pool, create a couple of new LUNs the same size as the 
old ones, and then create mirrors between the two. Wait until they are synced, then 
break the mirror. This is what we were thinking we could do; we just wanted to 
make sure.


Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Wolfraider
> On Apr 28, 2010, at 6:37 AM, Wolfraider wrote:
> > The original drive pool was configured with 144 1TB
> drives and a hardware raid 0 strip across every 4
> drives to create 4TB luns.
> 
> For the archives, this is not a good idea...

Exactly. This is the reason I want to blow all the old configuration away. :)


Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Wolfraider
We are running the latest dev release.

I was hoping to just mirror the ZFS volumes and not the whole pool. The original 
pool is around 100TB in size. The spare disks I have come up with will 
total around 40TB. We only have 11TB of space in use on the original ZFS pool.


Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Wolfraider
The original drive pool was configured with 144 1TB drives and a hardware RAID 
0 stripe across every 4 drives to create 4TB LUNs. These LUNs were then 
combined into 6 raidz2 vdevs and added to the ZFS pool. I would like to delete 
the original hardware RAID 0 stripes and add the 144 drives directly to the ZFS 
pool. This should improve performance considerably, since we would no longer be doing 
RAID on top of RAID, and it would fix the whole stripe-size issue. Since this pool will 
be deleted and recreated, I will need to move the data off to something else.
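
Once the data is off, recreating the pool over the raw drives might look roughly like 
this. It is purely a sketch: the device names are invented and the exact raidz2 
grouping is up to you.

  zpool destroy pool_1
  zpool create pool_1 \
      raidz2 c12t0d0 c12t1d0 c12t2d0 c12t3d0 c12t4d0 \
      raidz2 c12t5d0 c12t6d0 c12t7d0 c12t8d0 c12t9d0
  # ...repeat the raidz2 groups for the remaining drives, then add the spares:
  zpool add pool_1 spare c12t140d0 c12t141d0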


[zfs-discuss] Migrate ZFS volume to new pool

2010-04-27 Thread Wolfraider
We would like to delete and recreate our existing ZFS pool without losing any 
data. The way we thought we could do this was to attach a few HDDs and create a new 
temporary pool, migrate our existing ZFS volumes to the new pool, delete and 
recreate the old pool, and migrate the ZFS volumes back. The big problem we have 
is that we need to do all this live, without any downtime. We have 2 volumes taking 
up around 11TB, and they are shared out to a couple of Windows servers with 
COMSTAR. Anyone have any good ideas?
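
One approach that gets close, though it is not strictly zero-downtime, is to replicate 
each zvol to the temporary pool with incremental zfs send/receive and take a short I/O 
pause at cutover for the final increment before re-pointing the COMSTAR LUs at the new 
zvols. A rough sketch with made-up names:

  zfs snapshot pool_1/vol1@mig1
  zfs send pool_1/vol1@mig1 | zfs receive temp_pool/vol1
  # ...later, after quiescing I/O from the Windows side:
  zfs snapshot pool_1/vol1@mig2
  zfs send -i @mig1 pool_1/vol1@mig2 | zfs receive temp_pool/vol1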


Re: [zfs-discuss] ZFS backup configuration

2010-03-25 Thread Wolfraider
We are sharing the LUNs out with COMSTAR from 1 big pool. In essence, we 
created our own low-cost SAN. We currently have our Windows clients connected 
with Fibre Channel to the COMSTAR target.


Re: [zfs-discuss] ZFS backup configuration

2010-03-25 Thread Wolfraider
> On Mar 25, 2010, at 7:20 AM, Wolfraider wrote:
> > This assumes that you have the storage to replicate
> or at least restore all data to a DR site. While this
> is another way to do it, it is not really cost
> effective in our situation.
> 
> If the primary and DR site aren't compatible, then it
> won't be much of a 
> DR solution... :-P

Which assumes that we have a DR site. lol We are working towards a full DR site 
but the funding has been a little tight. That project should be started in the 
next year or 2.
 
> > What I am thinking is basically having 2 servers.
> One has the zpool attached and sharing out our data.
> The other is a cold spare. The zpool is stored on 3
> JBOD chassis attached with Fibrechannel. I would like
> to export the config at specific intervals and have
> it ready to import on the cold spare if the hot spare
> ever dies. The eventual goal would be to configure an
> active/passive cluster for the zpool.
> 
> You don't need to export the pool to make this work.
>  Just import it
> on the cold spare when the primary system dies.  KISS.
> If you'd like
> that to be automatic, then run the HA cluster
> software.

Which, when I asked the question, I wasn't sure how it all worked. I didn't 
know if the import process needed a config file or not. I am learning a lot, very 
quickly. We will be looking into the HA cluster in the future. The spare is a 
cold spare for a lot of different roles, so we can't dedicate it to just the 
Solaris box (yet :) ).
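
For the archives, the manual failover on the cold spare amounts to little more than an 
import once the FC-attached JBODs are visible there. The pool name is assumed, and the 
COMSTAR part is from memory, so treat it as an assumption rather than a recipe:

  zpool import -f pool_1    # -f because the dead primary never cleanly exported the pool
  # The COMSTAR LU/view configuration lives in the stmf SMF service, so periodically
  # capturing it on the primary with 'svccfg export -a stmf > /backup/stmf.bak' (and
  # importing it on the spare) is one way to carry that piece across.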


Re: [zfs-discuss] ZFS backup configuration

2010-03-25 Thread Wolfraider
This assumes that you have the storage to replicate or at least restore all 
data to a DR site. While this is another way to do it, it is not really cost 
effective in our situation.

What I am thinking is basically having 2 servers. One has the zpool attached 
and is sharing out our data. The other is a cold spare. The zpool is stored on 3 
JBOD chassis attached with Fibre Channel. I would like to export the config at 
specific intervals and have it ready to import on the cold spare if the primary 
ever dies. The eventual goal would be to configure an active/passive 
cluster for the zpool.


Re: [zfs-discuss] ZFS backup configuration

2010-03-25 Thread Wolfraider
It seems like zpool export will quiesce the drives and mark the pool as 
exported. This would be good if we wanted to move the pool at that time, but we 
are thinking of a disaster recovery scenario. It would be nice to export just 
the config so that, if our controller dies, we can use zpool import on 
another box to get back up and running.


[zfs-discuss] ZFS backup configuration

2010-03-24 Thread Wolfraider
Sorry if this has been discussed before. I tried searching but I couldn't find 
any info about it. We would like to export our ZFS configuration in case we 
need to import the pool onto another box. We do not want to back up the actual 
data in the ZFS pool; that is already handled through another program.
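
For what it is worth, the pool configuration itself lives in the vdev labels on the 
disks, so a plain zpool import on the other box rediscovers it; there is no separate 
config file that has to be carried over. If you also want a human-readable record of 
how the pool was built, something like this works (the output path is arbitrary):

  zpool history pool_1 > /var/tmp/pool_1.history     # the pool's recorded command history
  zpool status -v pool_1 >> /var/tmp/pool_1.history  # current layout and health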