Re: [zfs-discuss] Oracle DB sequential dump questions

2008-10-02 Thread Louwtjie Burger
Ta on the comments. I'm going to use Jorg's 'star' to simulate some sequential backup workloads, using different blocksizes, and see what the system does. I'll save some output and post it for people that might match the same config, now or in the future. To be clear though: (currently) #tar cvf
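For anyone who wants to repeat this, a rough sketch of the runs I have in mind (tape device and path are just examples, and the star options are from memory, so check the man page before trusting them):

  # write the same dataset at a few different block sizes and compare MB/s
  star -c f=/dev/rmt/0cn bs=64k fs=64m /oracle/dump
  star -c f=/dev/rmt/0cn bs=256k fs=64m /oracle/dump
  star -c f=/dev/rmt/0cn bs=1m fs=64m /oracle/dump
  # plain tar for comparison; b is a blocking factor in 512-byte units (512 = 256K blocks)
  tar cvbf 512 /dev/rmt/0cn /oracle/dump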

[zfs-discuss] Oracle DB sequential dump questions

2008-09-30 Thread Louwtjie Burger
Server: T5120 on Solaris 10 U5 Storage: 8 internal drives on SAS HW RAID (R5) Oracle: ZFS fs, recordsize=8K and atime=off Tape: LTO-4 (half height) on SAS interface. Dumping a large file from memory using tar to LTO yields 44 MB/s ... I suspect the CPU cannot push more since it's a single thread doing al
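A quick way to check that theory, assuming the usual Solaris tools (not measured here, just how I'd look at it):

  # per-thread microstate accounting; USR+SYS near 100% means the tar thread is CPU-bound
  prstat -mL -p `pgrep -x tar` 5
  # compare against what the tape and the LUN are actually pushing
  iostat -xnz 5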

Re: [zfs-discuss] LVM on ZFS

2008-01-21 Thread Louwtjie Burger
the right way ? > What is the best way to manage volumes in Solaris? > Do you have a URL or document describing this !? > > cheers, > > TS

Re: [zfs-discuss] Does Oracle support ZFS as a file system with Oracle RAC?

2007-12-18 Thread Louwtjie Burger
On 12/19/07, David Magda <[EMAIL PROTECTED]> wrote: > > On Dec 18, 2007, at 12:23, Mike Gerdts wrote: > > > 2) Database files - I'll lump redo logs, etc. in with this. In Oracle > >RAC these must live on a shared-rw (e.g. clustered VxFS, NFS) file > >system. ZFS does not do this. > > If y

Re: [zfs-discuss] JBOD performance

2007-12-16 Thread Louwtjie Burger
> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device > 0.0 48.0 0.0 3424.6 0.0 35.0 0.0 728.9 0 100 c2t8d0 > 0.0 60.0 0.0 4280.8 0.0 35.0 0.0 583.1 0 100 c2t9d0 > 0.0 55.0 0.0 3938.2 0.0 35.0 0.0 636.1 0 100 c2t10d0 > 0.0 56.0

Re: [zfs-discuss] JBOD performance

2007-12-14 Thread Louwtjie Burger
> The throughput when writing from a local disk to the > zpool is around 30MB/s, when writing from a client Err.. sorry, the internal storage would be good old 1Gbit FCAL disks @ 10K rpm. Still, not the fastest around ;)

Re: [zfs-discuss] Best stripe-size in array for ZFS mail storage?

2007-11-30 Thread Louwtjie Burger
On Dec 1, 2007 7:15 AM, Vincent Fox <[EMAIL PROTECTED]> wrote: > We will be using Cyrus to store mail on 2540 arrays. > > We have chosen to build 5-disk RAID-5 LUNs in 2 arrays which are both > connected to same host, and mirror and stripe the LUNs. So a ZFS RAID-10 set > composed of 4 LUNs. Mu

Re: [zfs-discuss] Expanding a Harware RAID 5 Array vdev in ZFS?

2007-11-27 Thread Louwtjie Burger
On Nov 28, 2007 12:58 AM, Justin Tuttle <[EMAIL PROTECTED]> wrote: > I have searched high and low and cannot find the answer. I read about how zfs > uses a Device ID for identification, usually provided by the firmware of the > device. So if a controller presents an (array) lun w/a unique device

Re: [zfs-discuss] ZFS + DB + "fragments"

2007-11-19 Thread Louwtjie Burger
> > Poor sequential read performance has not been quantified. > > > - COW probably makes that conflict worse > > This needs to be proven with a reproducible, real-world workload before it makes sense to try to solve it. After all, if we cannot measure where we are, how can we prov

Re: [zfs-discuss] [storage-discuss] zpool io to 6140 is really slow

2007-11-18 Thread Louwtjie Burger
On Nov 17, 2007 9:40 PM, Asif Iqbal <[EMAIL PROTECTED]> wrote: > (Including storage-discuss) > > I have 6 6140s with 96 disks. Out of which 64 of them are Seagate > ST337FC (300GB - 10K RPM FC-AL) Those disks are 2Gb disks, so the tray will operate at 2Gb. > I created 16k seg size raid0 lun

Re: [zfs-discuss] zpool io to 6140 is really slow

2007-11-17 Thread Louwtjie Burger
You have a 6140 with SAS drives ?! When did this happen? On Nov 17, 2007 12:30 AM, Asif Iqbal <[EMAIL PROTECTED]> wrote: > I have the following layout > > A 490 with 8 1.8GHz and 16G mem. 6 6140s with 2 FC controllers using > A1 and B1 controller port 4Gbps speed. > Each controller has 2G NVRAM

Re: [zfs-discuss] ZFS + DB + "fragments"

2007-11-15 Thread Louwtjie Burger
> We are all anxiously awaiting data... > -- richard Would it be worthwhile to build a test case: - Build a postgresql database and import 1 000 000 (or more) lines of data. - Run single and multiple large table-scan queries ... and watch the system then, - Update a column of each row in th
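A minimal sketch of that test case, assuming a reasonably recent PostgreSQL (with generate_series) and a scratch database; the table and column names are made up:

  createdb testdb
  psql testdb -c "CREATE TABLE t (id int, val int, pad char(200))"
  psql testdb -c "INSERT INTO t SELECT i, i, 'x' FROM generate_series(1,1000000) i"
  # single large table scan -- watch iostat / zpool iostat while it runs
  psql testdb -c "SELECT count(*) FROM t"
  # update a column of every row, then repeat the scan to see the COW effect
  psql testdb -c "UPDATE t SET val = val + 1"
  psql testdb -c "SELECT count(*) FROM t"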

[zfs-discuss] ZFS + DB + "fragments"

2007-11-12 Thread Louwtjie Burger
Hi After a clean database load a database would (should?) look like this, if a random stab at the data is taken... [8KB-m][8KB-n][8KB-o][8KB-p]... The data should be fairly (100%) sequential in layout ... after some days though that same spot (using ZFS) would probably look like: [8KB-m][ ][

Re: [zfs-discuss] ZFS + DB + default blocksize

2007-11-08 Thread Louwtjie Burger
On 11/8/07, Richard Elling <[EMAIL PROTECTED]> wrote: > Louwtjie Burger wrote: > > Hi > > > > What is the impact of not aligning the DB blocksize (16K) with ZFS, > > especially when it comes to random reads on single HW RAID LUN. > > > > Potentially

Re: [zfs-discuss] Yager on ZFS

2007-11-08 Thread Louwtjie Burger
On 11/8/07, Mark Ashley <[EMAIL PROTECTED]> wrote: > Economics for one. Yep, for sure ... it was a rhetorical question ;) > > Why would I consider a new solution that is safe, fast enough, stable > > .. easier to manage and lots cheaper? Rephrase, "Why would I NOT consider ...?" :)

[zfs-discuss] ZFS + DB + default blocksize

2007-11-07 Thread Louwtjie Burger
Hi What is the impact of not aligning the DB blocksize (16K) with ZFS, especially when it comes to random reads on a single HW RAID LUN. How would one go about measuring the impact (if any) on the workload? Thank you
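For what it's worth, the alignment itself is just a dataset property, set before the datafiles are created (the pool/dataset names below are placeholders):

  zfs create tank/oradata
  zfs set recordsize=16k tank/oradata   # match the 16K db_block_size
  zfs set atime=off tank/oradata
  zfs get recordsize,atime tank/oradata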

Re: [zfs-discuss] Yager on ZFS

2007-11-07 Thread Louwtjie Burger
On 11/7/07, can you guess? <[EMAIL PROTECTED]> wrote: > > Monday, November 5, 2007, 4:42:14 AM, you wrote: > > > > cyg> Having gotten a bit tired of the level of ZFS > > hype floating I think a personal comment might help here ... I spend a large part of my life doing system administration, and l

Re: [zfs-discuss] zfs mounting

2007-10-30 Thread Louwtjie Burger
The regular mount/umount commands can only be used if the filesystems are present in /etc/vfstab. To create a zfs filesystem with the idea of using mount/umount you must specify 'mountpoint=legacy'. Now you can 'mount /d/d5' ... as per regular ufs. Zpools don't need mountpoints ... ie 'mount
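A small sketch of the legacy route (names are examples):

  zfs create -o mountpoint=legacy tank/d5
  # /etc/vfstab entry:
  # tank/d5  -  /d/d5  zfs  -  yes  -
  mount /d/d5
  umount /d/d5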

Re: [zfs-discuss] ZFS & array NVRAM cache

2007-10-08 Thread Louwtjie Burger
Battery-backed cache... Interestingly enough, I've seen this configuration in production (V880/SAP on Oracle) running Solaris 8 + Veritas Storage Foundation (for the RAID-1 part). Speed is good ... redundancy is good ... price is not (2/3). Uptime 499 days :) On 10/9/07, Wee Yeh Tan <[EMAIL PR

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-03 Thread Louwtjie Burger
Would it be easier to ... 1) Change ZFS code to enable a sort of directIO emulation and then run various tests... or 2) Use Sun's performance team, which has all the experience in the world when it comes to performing benchmarks on Solaris and Oracle .. + a Dtrace master to drill down and see wh

Re: [zfs-discuss] SAS-controller recommodations

2007-09-13 Thread Louwtjie Burger
http://www.sun.com/servers/entry/x4200/optioncards.jsp#m2pcie SG-XPCIE8SAS-E-Z ? On 9/13/07, Thomas Liesner <[EMAIL PROTECTED]> wrote: > Hi all, > i am about to put together a one month test configuration for a > graphics-production server (prepress-filer that is). I would like to test zfs > on

Re: [zfs-discuss] Problem attaching a disk to a mirror...

2007-08-21 Thread Louwtjie Burger
Have you tried to "blank" out c0t3d0s2 using dd and zeros? Btw, "zpool attach -f zpol01 ..." won't work ;) (zpol01 = zpool01?) On 8/21/07, Alderman, Sean <[EMAIL PROTECTED]> wrote: > > > > I'm looking for ideas to resolve the problem below… > > # zpool attach -f zpol01 c0t2d0 c0t3d0 > invalid vd
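Something along these lines is what I mean by blanking it; the device name is the one from the thread, so double-check before pointing dd at anything:

  # wipe the start of the slice where old labels/metadata live
  dd if=/dev/zero of=/dev/rdsk/c0t3d0s2 bs=1024k count=100
  # then retry, with the pool name spelled correctly
  zpool attach -f zpool01 c0t2d0 c0t3d0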

Re: [zfs-discuss] Performance Tuning - ZFS, Oracle and T2000

2007-08-19 Thread Louwtjie Burger
http://blogs.sun.com/realneel/entry/zfs_and_databases http://www.sun.com/servers/coolthreads/tnb/parameters.jsp http://www.sun.com/servers/coolthreads/tnb/applications_oracle.jsp Be careful with long running single queries... you want to throw lots of users at it ... or parallelize as much as p

[zfs-discuss] Ready for production? - zfs/oracle/oltp

2007-07-17 Thread Louwtjie Burger
Hi What is the general feeling for production readiness when it comes to: ZFS, Oracle 10G R2, 6140-type storage, OLTP workloads, 1-3TB sizes. Running UFS with directio is stable, fast and one can sleep at night. Can the same be said for zfs at this moment? Should one hold out for Solaris 10 U4? (I b

Re: [zfs-discuss] ZFS - DB2 Performance

2007-06-26 Thread Louwtjie Burger
Roshan Perera writes: > Hi all, > > I am after some help/feedback to the subject issue explained below. > > We are in the process of migrating a big DB2 database from a > > 6900 24 x 200MHz CPU's with Veritas FS 8TB of storage Solaris 8 to > 25K 12 CPU dual core x 1800Mhz with ZFS 8TB s

Re: [zfs-discuss] zfs and 2530 jbod

2007-06-04 Thread Louwtjie Burger
On 5/30/07, James C. McPherson <[EMAIL PROTECTED]> wrote: Louwtjie Burger wrote: > I know the above mentioned kit (2530) is new, but has anybody tried a > direct attached SAS setup using zfs? (and the Sun SG-XPCIESAS-E-Z > card, 3Gb PCI-E SAS 8-Port Host Adapter, RoHS:Y - which is

[zfs-discuss] zfs and 2530 jbod

2007-05-30 Thread Louwtjie Burger
Hi there I know the above mentioned kit (2530) is new, but has anybody tried a direct attached SAS setup using zfs? (and the Sun SG-XPCIESAS-E-Z card, 3Gb PCI-E SAS 8-Port Host Adapter, RoHS:Y - which is the preferred HBA I suppose) Did it work correctly? Thank you

Re: [zfs-discuss] zfs backup and restore

2007-05-24 Thread Louwtjie Burger
A good place to start is: http://www.opensolaris.org/os/community/zfs/ Have a look at: http://www.opensolaris.org/os/community/zfs/docs/ as well as http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide# Create some files, which you can use as disks within zfs and demo to you
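A quick file-backed demo along those lines (sizes and names are arbitrary):

  mkfile 128m /var/tmp/d1 /var/tmp/d2
  zpool create demo mirror /var/tmp/d1 /var/tmp/d2
  zfs create demo/data
  zfs snapshot demo/data@backup1
  zfs send demo/data@backup1 > /var/tmp/data.zfs    # "backup"
  zfs receive demo/restore < /var/tmp/data.zfs      # "restore"
  zpool destroy demo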

Re: [zfs-discuss] ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-23 Thread Louwtjie Burger
HW RAID can offload some I/O bandwidth from the system, but new systems, like Thumper, should have more than enough bandwidth, so why bother with HW RAID? *devils advocate mode = on* Why bother you say... I'll ask the Storagetek division this, next time they come round asking (begging?) me

Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-22 Thread Louwtjie Burger
On 5/22/07, Pål Baltzersen <[EMAIL PROTECTED]> wrote: What if your HW-RAID-controller dies? in say 2 years or more.. What will read your disks as a configured RAID? Do you know how to (re)configure the controller or restore the config without destroying your data? Do you know for sure that a

Re: [zfs-discuss] Re: How does ZFS write data to disks?

2007-05-12 Thread Louwtjie Burger
I think it's also important to note _how_ one measures performance (which is black magic at the best of times). I personally like to see averages, since doing #iostat -xnz 10 doesn't tell me anything really. Since zfs likes to "bundle and flush" I want my (very expensive ;) Sun storage to give me a
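What I mean by averages, roughly; a sketch, assuming kw/s is column 4 of the iostat -xnz output:

  # sample for 10 minutes and average kw/s for one disk instead of eyeballing 10-second snapshots
  iostat -xnz 10 60 | grep c2t8d0 | awk '{ sum += $4; n++ } END { if (n) print "avg kw/s:", sum/n }'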

Re: [zfs-discuss] Filesystem Benchmark

2007-05-09 Thread Louwtjie Burger
> LUN are configured as RAID5 across 15 disks. Won't such a large number of spindles have a negative impact on performance (in a single RAID-5 setup) ... a single I/O from the system generates lots of backend I/Os ?

Re: [zfs-discuss] Re: Re: Re: Re: concatination & stripe - zfs?

2007-04-26 Thread Louwtjie Burger
There are 3 slots in a V240; 1 x 64-bit @ 33/66MHz, 2 x 64-bit @ 33MHz. His suggestion was that you might be saturating the PCI slot, since their respective throughput (in theory) is 528MB/s and 264MB/s. A 2342 should (again, in theory) do 256MB/s (per port) ... so slotting the card into the 33MHz slots

Re: [zfs-discuss] 6410 expansion shelf

2007-04-21 Thread Louwtjie Burger
The controller unit contains all of the cache. On 4/21/07, Albert Chin <[EMAIL PROTECTED]> wrote: On Thu, Mar 22, 2007 at 01:21:04PM -0700, Frank Cusack wrote: > Does anyone have a 6140 expansion shelf that they can hook directly to > a host? Just wondering if this configuration works. Previo

Re: [zfs-discuss] 6410 expansion shelf

2007-03-26 Thread Louwtjie Burger
Pity the price of a JBOD is so close to a Controller unit... On 3/27/07, Wee Yeh Tan <[EMAIL PROTECTED]> wrote: On 3/24/07, Frank Cusack <[EMAIL PROTECTED]> wrote: > On March 23, 2007 5:38:20 PM +0800 Wee Yeh Tan <[EMAIL PROTECTED]> wrote: > > I should be able to reply to you next Tuesday -- my

Re: [zfs-discuss] 6410 expansion shelf

2007-03-25 Thread Louwtjie Burger
Greetings... Although I've not tried to directly connect a 6140 JBOD unit to a host, I've noticed that the JBOD's disk drives do not come online on their own. Without the controller unit activated, the drives continue to flash as if waiting to online... when the hardware controller switches on it dis

Re: [zfs-discuss] ZFS with SAN Disks and mutipathing

2007-02-17 Thread Louwtjie Burger
http://docs.sun.com/source/819-0139/index.html On 2/17/07, Vikash Gupta <[EMAIL PROTECTED]> wrote: Hi, I just deploy the ZFS on an SAN attach disk array and it's working fine. How do i get dual pathing advantage of the disk ( like DMP in Veritas). Can someone point to correct doc and setup. T
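The short version of what that doc covers, as I recall it (Solaris 10; verify against the guide): enable MPxIO so scsi_vhci presents one device per LUN, no DMP needed:

  stmsboot -e          # enable MPxIO on the FC ports; it will ask for a reboot
  # after the reboot the dual-pathed LUNs show up as single c*t<WWN>d* devices
  mpathadm list lu     # confirm both paths are seen (if mpathadm is on your release)
  zpool status         # the pool now rides on the multipathed device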

Re: [zfs-discuss] Filebench, X4200 and Sun Storagetek 6140

2006-11-04 Thread Louwtjie Burger
time with filebench, but I will try to stick to what I've seen at clients ito db sizes, users, type of app, etc. On 11/4/06, Jason J. W. Williams <[EMAIL PROTECTED]> wrote: Hi Louwtjie, Are you running FC or SATA-II disks in the 6140? How many spindles too? Best Regards, Jason On 11/3/0

[zfs-discuss] Filebench, X4200 and Sun Storagetek 6140

2006-11-03 Thread Louwtjie Burger
Hi there I'm busy with some tests on the above hardware and will post some scores soon. For those that do _not_ have the above available for tests, I'm open to suggestions on potential configs that I could run for you. Pop me a mail if you want something specific _or_ you have suggestions conc

[zfs-discuss] Solaris 6/06 ZFS and OpenSolaris ZFS

2006-08-30 Thread Louwtjie Burger
What are the major differences between the "first" zfs shipped in 06/06 Solaris 10, compared to the latest builds of OpenSolaris? Will there be any major functionality released to 06/06 Solaris zfs via patches? Will major zfs updates only be integrated into Solaris with the regular release cyc

[zfs-discuss] Re: commercial backup software and zfs

2006-08-17 Thread Louwtjie Burger
No ACLs ...

[zfs-discuss] Re: commercial backup software and zfs

2006-08-17 Thread Louwtjie Burger
Hi there Did a backup/restore on TSM, works fine.

[zfs-discuss] Re: Removing a device from a zfs pool

2006-08-11 Thread Louwtjie Burger
Hi there Has any consideration been given to this feature...? I would also agree that this will not only be a "testing" feature, but will find its way into production. It would probably work on the same principle as swap -a and swap -d ;) Just a little bit more complex.

[zfs-discuss] 3510 JBOD ZFS vs 3510 HW RAID

2006-07-28 Thread Louwtjie Burger
Hi there Is it fair to compare the 2 solutions using Solaris 10 U2 and a commercial database (SAP SD scenario)? The cache on the HW raid helps, and the CPU load is less... but the solution costs more and you _might_ not need the performance of the HW RAID. Has anybody with access to these unit