Re: [zfs-discuss] Re: Re: Heavy writes freezing system
If some aspect of the load is writing large amounts of data into the pool (through the memory cache, as opposed to the ZIL) and that leads to a frozen system, I think a possible contributor is:

    6429205: each zpool needs to monitor its throughput and throttle heavy writers

-r

Anantha N. Srirama writes:
> Bug 6413510 is the root cause. ZFS maestros please correct me if I'm quoting an incorrect bug.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] Re: Re: Heavy writes freezing system
Rainer Heilke wrote:
>> If you plan on RAC, then ASM makes good sense. It is unclear (to me anyway) if ASM over a zvol is better than ASM over a raw LUN.
>
> Hmm. I thought ASM was really the _only_ effective way to do RAC, but then, I'm not a DBA (and don't want to be ;-) We'll be just using raw LUNs. While the zvol idea is interesting, the DBAs are very particular about making sure the environment is set up in a way Oracle will support (and not hang up when we have a problem).

ASM is relatively new technology. Traditionally, OPS and RAC were built over raw devices, directly or as represented by cluster-aware logical volume managers. DBAs tend not to like raw, so Sun Cluster (Solaris Cluster) supports RAC over QFS, which is a very good solution. Some Sun Cluster customers run RAC over NFS, which also works surprisingly well. Meanwhile, Oracle continues to develop ASM to appease the DBAs who want filesystem-like solutions. IMHO, in the long run, Oracle will transition many customers to ASM, and this means that it probably isn't worth the effort to make a file system be the best for Oracle at the expense of other features and workloads.
-- richard
Re: [zfs-discuss] Re: Re: Heavy writes freezing system
> We had a 2TB filesystem. No matter what options I set explicitly, the UFS filesystem kept getting written with a 1 million file limit. Believe me, I tried a lot of options, and they kept getting set back on me.

The limit is documented as 1 million inodes per TB, so something must not have gone right. But many people have complained, and you could take the newfs source and fix the limitation.

The discontinuity when going from 1TB to over 1TB is appalling (1TB allows for 137 million inodes; > 1TB allows for 1 million per TB). The rationale is fsck time (but logging is forced anyway). The 1 million limit is arbitrary and too low...

Casper
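The discontinuity Casper describes can be sketched numerically. This is a minimal illustration using only the figures quoted in his message (roughly 137 million inodes at 1TB, 1 million per TB above that); `ufs_max_inodes` is a hypothetical helper for the arithmetic, not the actual newfs logic:

```python
# Illustration of the UFS inode-count discontinuity at the 1TB boundary.
# Figures are taken from the message above; the function is a sketch,
# not a reconstruction of the newfs source.

def ufs_max_inodes(size_tb: float) -> int:
    """Approximate maximum inode count for a UFS filesystem of the
    given size, per the limits quoted in the thread."""
    if size_tb <= 1.0:
        # Up to 1TB: density scales with size, on the order of
        # 137 million inodes for a full 1TB filesystem.
        return int(137_000_000 * size_tb)
    # Over 1TB: capped at 1 million inodes per TB, regardless of
    # any inode-density option passed to newfs.
    return int(1_000_000 * size_tb)

print(ufs_max_inodes(1.0))  # 137000000
print(ufs_max_inodes(2.0))  # 2000000 -- a bigger disk gets far fewer inodes
```

So doubling the filesystem from 1TB to 2TB cuts the inode budget by roughly two orders of magnitude, which is the "appalling" jump being complained about.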
Re: [zfs-discuss] Re: Re: Heavy writes freezing system
Rainer Heilke wrote on 01/17/07 15:44:
> It turns out we're probably going to go the UFS/ZFS route, with 4 filesystems (the DB files on UFS with directio). It seems that the pain of moving from a single-node ASM to a RAC'd ASM is great, and not worth it. The DBA group decided that doing the migration to UFS for the DB files now, and then to a RAC'd ASM later, will end up being the easiest, safest route.
>
> Rainer
>
> Still curious as to if and when this bug will get fixed...

If you're referring to bug 6413510 that Anantha mentioned, then my earlier post today answered that: this problem was fixed in snv_48 last September and will be in S10_U4.

Neil