Re: [zfs-discuss] Is the J4200 SAS array suitable for Sun Cluster?

2010-05-16 Thread Charles Hedrick
We use this configuration. It works fine. However I don't know enough about the details to answer all of your questions. The disks are accessible from both systems at the same time. Of course with ZFS you had better not actually use them from both systems. Actually, let me be clear about what
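
For anyone setting this up, the key constraint is that a ZFS pool on the shared J4200 may only be imported on one node at a time; Sun Cluster (via its HAStoragePlus resource type) handles the export/import on failover, and a manual switch looks roughly like the sketch below (pool name is hypothetical):

  # on the node giving up the storage
  zpool export tank
  # on the node taking over
  zpool import tank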

[zfs-discuss] can't destroy snapshot

2010-03-31 Thread Charles Hedrick
We're getting the notorious "cannot destroy ... dataset already exists" error. I've seen a number of reports of this, but none of the reports seem to get any response. Fortunately this is a backup system, so I can recreate the pool, but it's going to take me several days to get all the data back. Is
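
One low-risk check, assuming the OIRT_BAK pool that appears later in this thread, is to look for a clone whose origin is the stuck snapshot, since a clone blocks the destroy:

  # list every file system in the pool together with its origin snapshot;
  # any dataset whose origin is the stuck snapshot must be destroyed or promoted first
  zfs list -r -o name,origin OIRT_BAK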

Re: [zfs-discuss] can't destroy snapshot

2010-03-31 Thread Charles Hedrick
Incidentally, this is on Solaris 10, but I've seen identical reports from Opensolaris.

Re: [zfs-discuss] can't destroy snapshot

2010-03-31 Thread Charles Hedrick
# zfs destroy -r OIRT_BAK/backup_bad
cannot destroy 'OIRT_BAK/backup_...@annex-2010-03-23-07:04:04-bad': dataset already exists

No, there are no clones.
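
Given that no clones are reported, another thing worth checking (a hedged suggestion, not a confirmed diagnosis) is whether an interrupted zfs receive left partially received state behind; zdb can list every dataset in the pool, including ones that zfs list hides:

  # dump the dataset directory of the pool; look for leftover partially
  # received datasets (often shown with a "%" in the name) under the
  # file system whose snapshot will not go away
  zdb -d OIRT_BAK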

Re: [zfs-discuss] can't destroy snapshot

2010-03-31 Thread Charles Hedrick
So we tried recreating the pool and sending the data again. 1) Compression wasn't set on the copy, even though I did send -R, which is supposed to send all properties. 2) I tried killing the send | receive pipe. Receive couldn't be killed. It hung. 3) This is Solaris Cluster. We tried forcing a
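
On point 1, a quick way to confirm what actually arrived on the receiving side is to compare the property and its source on the copy; this assumes the same OIRT_BAK pool as the rest of the thread:

  # compression should show source "local" (or "received" on newer builds)
  # on the copy rather than "default" if send -R carried it across
  zfs get -r -o name,value,source compression OIRT_BAK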

Re: [zfs-discuss] can't destroy snapshot

2010-03-31 Thread Charles Hedrick
Ah, I hadn't thought about that. That may be what was happening. Thanks.

Re: [zfs-discuss] can't destroy snapshot

2010-03-31 Thread Charles Hedrick
So that eliminates one of my concerns. However, the other one is still an issue. Presumably Solaris Cluster shouldn't import a pool that's still active on the other system. We'll be looking more carefully into that.
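
For reference, ZFS itself records which host last wrote to a pool, so an import attempt on the second node should at least be refused without a force; whether Sun Cluster adds stronger fencing on top of that is the part worth verifying. A rough illustration, with behaviour hedged:

  # run on the standby node while the pool is active on the other one;
  # a pool last accessed by another system is flagged and a plain import is refused
  zpool import
  # importing it anyway requires an explicit force, which is exactly the
  # dangerous case the clustering software is supposed to prevent
  zpool import -f OIRT_BAK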

Re: [zfs-discuss] shrinking a zpool - roadmap

2010-02-22 Thread Charles Hedrick
I talked with our enterprise systems people recently. I don't believe they'd consider ZFS until it's more flexible. Shrink is a big one, as is removing an slog. We also need to be able to expand a raidz, possibly by striping it with a second one and then rebalancing the sizes.

[zfs-discuss] performance problem with Mysql

2010-02-20 Thread Charles Hedrick
We recently moved a Mysql database from NFS (Netapp) to a local disk array (J4200 with SAS disks). Shortly after moving production, the system effectively hung. CPU was at 100%, and one disk drive was at 100%. I had tried to follow the tuning recommendations for Mysql, mostly: * recordsize set
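
For context, the commonly cited starting point for MySQL/InnoDB on ZFS is to match the record size to InnoDB's 16 KB page and to keep data and log files on separate datasets; a minimal sketch with hypothetical pool and dataset names:

  # InnoDB data files: 16 KB records to match the InnoDB page size
  zfs create -o recordsize=16k tank/mysql/data
  # logs are written sequentially; the default 128 KB recordsize is fine
  zfs create tank/mysql/logs
  # then point datadir and the log locations in my.cnf at the two datasets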

Re: [zfs-discuss] performance problem with Mysql

2010-02-20 Thread Charles Hedrick
We had been using the same pool for a backup Mysql server for 6 months before using it for the primary server. Neither zpool status -v nor fmdump shows any signs of problems.

Re: [zfs-discuss] performance problem with Mysql

2010-02-20 Thread Charles Hedrick
I hadn't considered stress testing the disks. Obviously that's a good idea. We'll look at doing something in May, when we have the next opportunity to take down the database. I doubt that doing testing during production is a good idea...

Re: [zfs-discuss] available space

2010-02-15 Thread Charles Hedrick
Thanks. That makes sense. This is raidz2.

[zfs-discuss] available space

2010-02-13 Thread Charles Hedrick
I have the following pool:

  NAME   SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
  OIRT   6.31T  3.72T  2.59T  58%  ONLINE  /

zfs list shows the following for a typical file system:

  NAME                   USED   AVAIL  REFER  MOUNTPOINT
  OIRT/sakai/production  1.40T  1.77T  1.40T
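
The gap between the two AVAIL figures is expected on raidz2 (as the follow-up above confirms): zpool list reports raw space including parity, while zfs list reports space usable after parity. As a purely illustrative calculation, assuming a six-disk raidz2 (the actual geometry isn't shown):

  2.59T raw free x 4/6 data-to-total ratio ≈ 1.73T

which is in the same ballpark as the 1.77T that zfs list reports; the two figures never match exactly because of metadata and allocation overhead.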

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-13 Thread Charles Hedrick
I have a similar situation. I have a system that is used for backup copies of logs and other non-critical things, where the primary copy is on a Netapp. Data gets written in batches a few times a day. We use this system because storage on it is a lot less expensive than on the Netapp. It's only
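
This is the case the log-device removal work addresses: on pools at version 19 or later, a pool whose separate log device has died can still be brought in, roughly as sketched below (pool and device names hypothetical; older pools can't do this, which is what the original question was about):

  # allow the import even though the slog is missing
  zpool import -m tank
  # then remove the dead log device for good
  zpool remove tank c3t2d0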

Re: [zfs-discuss] New ZFS Intent Log (ZIL) device available - Beta program now open!

2010-01-18 Thread Charles Hedrick
From the web page it looks like this is a card that goes into the computer system. That's not very useful for enterprise applications, as they are going to want to use an external array that can be used by a redundant pair of servers. I'm very interested in a cost-effective device that will

[zfs-discuss] getting decent NFS performance

2009-12-22 Thread Charles Hedrick
We have a server using Solaris 10. It's a pair of systems with a shared J4200, with Solaris Cluster. It works very nicely. Solaris Cluster switches over transparently. However, as an NFS server it is dog-slow. This is the usual synchronous write problem. Setting zil_disable fixes the problem.
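
For completeness, the blunt instrument on Solaris 10 is the zil_disable tunable, which trades away the synchronous-write guarantee NFS relies on; it applies to every pool on the host, so it is only defensible where losing the last few seconds of acknowledged writes after a crash is acceptable. A sketch:

  # permanent (takes effect at next boot): add this line to /etc/system
  set zfs:zil_disable = 1
  # or flip it on a live kernel with mdb
  echo zil_disable/W0t1 | mdb -kw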

Re: [zfs-discuss] getting decent NFS performance

2009-12-22 Thread Charles Hedrick
Thanks. That's what I was looking for. Yikes! I hadn't realized how expensive the Zeus is. We're using Solaris cluster, so if the system goes down, the other one takes over. That means that if the ZIL is on a local disk, we lose it in a crash. Might as well just set zil_disable (something I'm
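
One hedged alternative to disabling the ZIL in this cluster setup: put the slog on the shared J4200 rather than on a node-local disk, mirrored so a single device failure doesn't lose it; after a failover the surviving node then imports the pool with its log intact. Pool and device names below are hypothetical:

  # add a mirrored log to the shared pool using two disks in the J4200
  zpool add tank log mirror c2t10d0 c2t11d0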

Re: [zfs-discuss] getting decent NFS performance

2009-12-22 Thread Charles Hedrick
It turns out that our storage is currently being used for
  * backups of various kinds, run daily by cron jobs
  * saving old log files from our production application
  * saving old versions of java files from our production application
Most of the usage is write-only, and a fair amount of it

Re: [zfs-discuss] core dump on zfs receive

2009-06-27 Thread Charles Hedrick
I'd like to maintain a backup of the main pool on an external drive. Can you suggest a way to do that? I was hoping to do zfs send | zfs receive and then do that with incrementals. It seems that this isn't going to work. How do people actually back up ZFS-based systems?
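
The usual pattern is a recursive snapshot plus one full send, then incrementals; a minimal sketch with hypothetical pool names ("tank" for the source, "backup" for the external drive), using -u so the received copies aren't mounted:

  # one-time full copy
  zfs snapshot -r tank@base
  zfs send -R tank@base | zfs receive -Fdu backup
  # thereafter, send only the changes since the previous snapshot
  zfs snapshot -r tank@next
  zfs send -R -I tank@base tank@next | zfs receive -du backup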

[zfs-discuss] core dump on zfs receive

2009-06-22 Thread Charles Hedrick
I'm trying to do a simple backup. I did

  zfs snapshot -r rp...@snapshot
  zfs send -R rp...@snapshot | zfs receive -Fud external/rpool
  zfs snapshot -r rp...@snapshot2
  zfs send -RI rp...@snapshot1 rp...@snapshot2 | zfs receive -d external/rpool

The receive coredumps $c

[zfs-discuss] two pools on boot disk?

2009-06-20 Thread Charles Hedrick
I have a small system that is going to be a file server. It has two disks. I'd like just one pool for data. Is it possible to create two pools on the boot disk, and then add the second disk to the second pool? The result would be a single small pool for root, and a second pool containing the
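
This should work: with a ZFS root the boot disk carries an SMI label and the root pool sits on a slice, so a second slice on the same disk can host a second pool, and the whole second disk can then be added to it. A rough sketch, with hypothetical slice and disk names:

  # the root pool on slice 0 is normally created by the installer;
  # create the data pool on another slice of the boot disk
  zpool create data c0t0d0s3
  # then grow it with the entire second disk
  zpool add data c0t1d0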

[zfs-discuss] how to do backup

2009-06-20 Thread Charles Hedrick
I have a USB disk, to which I want to do a backup. I've used send | receive. It works fine until I try to reboot. At that point the system fails to come up, because the backup copy is set to be mounted at the original location, so the system tries to mount two different things in the same place. I
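
The usual fixes are to receive with -u so nothing mounts during the copy, and then either keep the received datasets from auto-mounting or give the backup pool an alternate root whenever it is imported; a sketch with hypothetical names ("backup" for the USB pool):

  # don't mount anything while receiving
  zfs send -R tank@snap | zfs receive -Fdu backup
  # stop each received file system from mounting at its original path on boot
  # (canmount is not inherited, so set it per dataset)
  zfs list -H -o name -t filesystem -r backup | xargs -n1 zfs set canmount=noauto
  # or, instead, import the backup pool under an alternate root
  zpool import -R /backup backup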