Ta for the comments.
I'm going to use Jorg's 'star' to simulate some sequential backup workloads,
using different blocksizes, and see what the system does.
I'll save some output and post it for people who might have the same config, now
or in the future.
To be clear though: (currently)
#tar cvf
Server: T5120 on 10 U5
Storage: Internal 8 drives on SAS HW RAID (R5)
Oracle: ZFS fs, recordsize=8K and atime=off
Tape: LTO-4 (half height) on SAS interface.
Dumping a large file from memory using tar to LTO yields 44 MB/s ... I suspect
the CPU cannot push more since it's a single thread doing al
the right way ?
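As a rough sanity check on that 44 MB/s figure, here's a back-of-the-envelope sketch (the 800 GB native capacity is standard LTO-4; the ~80 MB/s rated streaming speed for a half-height drive is my assumption, not from the thread):

```shell
# Time to fill one LTO-4 tape (800 GB native) at a given streaming rate.
cap_mb=$(( 800 * 1000 ))            # native capacity in MB
for rate in 44 80; do               # observed vs (assumed) rated MB/s
  secs=$(( cap_mb / rate ))
  printf '%3d MB/s -> %dh %dm per tape\n' "$rate" \
    $(( secs / 3600 )) $(( secs % 3600 / 60 ))
done
```

If the drive really is rated around twice the observed rate, the single tar thread (or the 8K recordsize) is leaving roughly half the tape speed on the table.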
> What is the best way to manage volumes in Solaris?
> Do you have a URL or document describing this?
>
> cheers,
>
> TS
>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> ht
On 12/19/07, David Magda <[EMAIL PROTECTED]> wrote:
>
> On Dec 18, 2007, at 12:23, Mike Gerdts wrote:
>
> > 2) Database files - I'll lump redo logs, etc. in with this. In Oracle
> >RAC these must live on a shared-rw (e.g. clustered VxFS, NFS) file
> >system. ZFS does not do this.
>
> If y
>  r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
>  0.0   48.0    0.0  3424.6  0.0 35.0    0.0  728.9   0 100 c2t8d0
>  0.0   60.0    0.0  4280.8  0.0 35.0    0.0  583.1   0 100 c2t9d0
>  0.0   55.0    0.0  3938.2  0.0 35.0    0.0  636.1   0 100 c2t10d0
>  0.0   56.0
> The throughput when writing from a local disk to the
> zpool is around 30MB/s, when writing from a client
Err.. sorry, the internal storage would be good old 1Gbit FCAL disks @
10K rpm. Still, not the fastest around ;)
On Dec 1, 2007 7:15 AM, Vincent Fox <[EMAIL PROTECTED]> wrote:
> We will be using Cyrus to store mail on 2540 arrays.
>
> We have chosen to build 5-disk RAID-5 LUNs in 2 arrays which are both
> connected to same host, and mirror and stripe the LUNs. So a ZFS RAID-10 set
> composed of 4 LUNs. Mu
On Nov 28, 2007 12:58 AM, Justin Tuttle <[EMAIL PROTECTED]> wrote:
> I have searched high and low and cannot find the answer. I read about how zfs
> uses a Device ID for identification, usually provided by the firmware of the
> device. So if a controller presents an (array) lun w/a unique device
>
> Poor sequential read performance has not been quantified.
>
> >- COW probably makes that conflict worse
> >
> >
>
> This needs to be proven with a reproducible, real-world workload before it
> makes sense to try to solve it. After all, if we cannot measure where
> we are,
> how can we prov
On Nov 17, 2007 9:40 PM, Asif Iqbal <[EMAIL PROTECTED]> wrote:
> (Including storage-discuss)
>
> I have 6 6140s with 96 disks. Out of which 64 of them are Seagate
> ST337FC (300GB - 10K RPM FC-AL)
Those disks are 2Gb disks, so the tray will operate at 2Gb.
> I created 16k seg size raid0 lun
You have a 6140 with SAS drives ?! When did this happen?
On Nov 17, 2007 12:30 AM, Asif Iqbal <[EMAIL PROTECTED]> wrote:
> I have the following layout
>
> A 490 with 8 1.8Ghz and 16G mem. 6 6140s with 2 FC controllers using
> A1 and B1 controller port 4Gbps speed.
> Each controller has 2G NVRAM
> We are all anxiously awaiting data...
> -- richard
Would it be worthwhile to build a test case:
- Build a postgresql database and import 1 000 000 (or more) lines of data.
- Run a single and multiple large table scan queries ... and watch the system
then,
- Update a column of each row in th
Hi
After a clean database load a database would (should?) look like this,
if a random stab at the data is taken...
[8KB-m][8KB-n][8KB-o][8KB-p]...
The data should be fairly (100%) sequential in layout ... after some
days though that same spot (using ZFS) would probably look like:
[8KB-m][ ][
On 11/8/07, Richard Elling <[EMAIL PROTECTED]> wrote:
> Louwtjie Burger wrote:
> > Hi
> >
> > What is the impact of not aligning the DB blocksize (16K) with ZFS,
> > especially when it comes to random reads on single HW RAID LUN.
> >
>
> Potentially
On 11/8/07, Mark Ashley <[EMAIL PROTECTED]> wrote:
> Economics for one.
Yep, for sure ... it was a rhetorical question ;)
> > Why would I consider a new solution that is safe, fast enough, stable
> > .. easier to manage and lots cheaper?
Rephrase, "Why would I NOT consider ...?" :)
Hi
What is the impact of not aligning the DB blocksize (16K) with ZFS,
especially when it comes to random reads on single HW RAID LUN.
How would one go about measuring the impact (if any) on the workload?
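Before reaching for a benchmark, one way to reason about the worst case (a sketch only; the 128K figure is just the default ZFS recordsize, and this ignores caching):

```shell
# Read amplification for random reads when the DB blocksize is smaller than
# the filesystem recordsize: ZFS reads and checksums whole records, so a
# 16K random read may cost a full record off the spindles.
db_block=16                   # DB blocksize in KB
for recsize in 16 128; do     # matched vs default ZFS recordsize, in KB
  echo "recordsize=${recsize}K: up to $(( recsize / db_block ))x data read per ${db_block}K I/O"
done
```

Measuring it for real would mean comparing iostat kr/s against what the DB itself reports reading, with recordsize=16K vs the default.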
Thank you
On 11/7/07, can you guess? <[EMAIL PROTECTED]> wrote:
> > Monday, November 5, 2007, 4:42:14 AM, you wrote:
> >
> > cyg> Having gotten a bit tired of the level of ZFS
> > hype floating
I think a personal comment might help here ...
I spend a large part of my life doing system administration, and l
The regular mount/umount commands can only be used if you have the
filesystems present in /etc/vfstab. To create a zfs filesystem with
the idea of using mount/umount you must specify 'mountpoint=legacy'.
Now you can 'mount /d/d5' ... as per regular ufs.
Zpools don't need mountpoints ... ie 'mount
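For the record, a minimal sketch of the legacy route (the pool/filesystem names here are placeholders matching the /d/d5 example, not anything from the thread):

```shell
# Create the filesystem and hand mount control back to /etc/vfstab.
zfs create d/d5
zfs set mountpoint=legacy d/d5

# /etc/vfstab entry (the device-to-fsck and fsck-pass fields don't apply to zfs):
#   d/d5  -  /d/d5  zfs  -  yes  -

mount /d/d5      # now behaves like a regular ufs mount
umount /d/d5
```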
Battery-backed cache...
Interestingly enough, I've seen this configuration in production
(V880/SAP on Oracle) running Solaris 8 + Veritas Storage Foundation
(for the RAID-1 part).
Speed is good ... redundancy is good ... price is not (2/3).
Uptime 499 days :)
On 10/9/07, Wee Yeh Tan <[EMAIL PR
Would it be easier to ...
1) Change ZFS code to enable a sort of directIO emulation and then run
various tests... or
2) Use Sun's performance team, which has all the experience in the
world when it comes to performing benchmarks on Solaris and Oracle ..
+ a Dtrace master to drill down and see wh
http://www.sun.com/servers/entry/x4200/optioncards.jsp#m2pcie
SG-XPCIE8SAS-E-Z ?
On 9/13/07, Thomas Liesner <[EMAIL PROTECTED]> wrote:
> Hi all,
> i am about to put together a one month test configuration for a
> graphics-production server (prepress-filer that is). I would like to test zfs
> on
Have you tried to "blank" out c0t3d0s2 using dd and zeros?
Btw, "zpool attach -f zpol01 ..." won't work ;) (zpol01 = zpool01?)
On 8/21/07, Alderman, Sean <[EMAIL PROTECTED]> wrote:
>
>
>
> I'm looking for ideas to resolve the problem below…
>
> # zpool attach -f zpol01 c0t2d0 c0t3d0
> invalid vd
http://blogs.sun.com/realneel/entry/zfs_and_databases
http://www.sun.com/servers/coolthreads/tnb/parameters.jsp
http://www.sun.com/servers/coolthreads/tnb/applications_oracle.jsp
Be careful with long running single queries... you want to throw lots
of users at it ... or parallelize as much as p
Hi
What is the general feeling for production readiness when it comes to:
ZFS
Oracle 10G R2
6140-type storage
OLTP workloads
1-3TB sizes
Running UFS with directio is stable, fast and one can sleep at night.
Can the same be said for zfs at this moment?
Should one hold out for Solaris 10 U4? (I b
Roshan Perera writes:
> Hi all,
>
> I am after some help/feedback to the subject issue explained below.
>
> We are in the process of migrating a big DB2 database from a
>
> 6900 24 x 200MHz CPU's with Veritas FS 8TB of storage Solaris 8 to
> 25K 12 CPU dual core x 1800Mhz with ZFS 8TB s
On 5/30/07, James C. McPherson <[EMAIL PROTECTED]> wrote:
Louwtjie Burger wrote:
> I know the above mentioned kit (2530) is new, but has anybody tried a
> direct attached SAS setup using zfs? (and the Sun SG-XPCIESAS-E-Z
> card, 3Gb PCI-E SAS 8-Port Host Adapter, RoHS:Y - which is
Hi there
I know the above mentioned kit (2530) is new, but has anybody tried a
direct attached SAS setup using zfs? (and the Sun SG-XPCIESAS-E-Z
card, 3Gb PCI-E SAS 8-Port Host Adapter, RoHS:Y - which is the
prefered HBA I suppose)
Did it work correctly?
Thank you
A good place to start is: http://www.opensolaris.org/os/community/zfs/
Have a look at:
http://www.opensolaris.org/os/community/zfs/docs/
as well as
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#
Create some files, which you can use as disks within zfs and demo to
you
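The file-backed demo mentioned above, sketched out (pool name, file sizes and paths are arbitrary; mkfile is the Solaris way to make the backing files):

```shell
# Create four 128 MB backing files and build a raidz pool on them -
# handy for experimenting with zfs/zpool commands without spare disks.
mkdir -p /tmp/zdemo
for i in 1 2 3 4; do
  mkfile 128m /tmp/zdemo/disk$i
done
zpool create demo raidz /tmp/zdemo/disk1 /tmp/zdemo/disk2 \
                        /tmp/zdemo/disk3 /tmp/zdemo/disk4
zpool status demo

# ... experiment, then clean up:
zpool destroy demo
rm -rf /tmp/zdemo
```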
HW RAID can offload some I/O bandwidth from the system, but new systems,
like Thumper, should have more than enough bandwidth, so why bother with
HW RAID?
*devils advocate mode = on*
Why bother you say...
I'll ask the Storagetek division this, next time they come round
asking (begging?) me
On 5/22/07, Pål Baltzersen <[EMAIL PROTECTED]> wrote:
> What if your HW-RAID-controller dies? In say 2 years or more...
> What will read your disks as a configured RAID? Do you know how to (re)configure the
> controller or restore the config without destroying your data? Do you know for sure
> that a
I think it's also important to note _how_ one measures performance
(which is black magic at the best of times).
I personally like to see averages since doing #iostat -xnz 10 doesn't
tell me anything really. Since zfs likes to "bundle and flush" I want
my (very expensive ;) Sun storage to give me a
> LUN are configured as RAID5 accross 15 disks.
Won't such a large number of spindles have a negative impact on
performance (in a single RAID-5 setup) ... single I/O from system
generates lots of backend I/O's ?
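The arithmetic behind that worry (a sketch of the classic RAID-5 small-write penalty; the 1000 IOPS figure is just an example, and cache on the controller will absorb some of this in practice):

```shell
# Random small writes on RAID-5 go through read-modify-write:
# read old data + read old parity + write new data + write new parity = 4 I/Os.
frontend_wps=1000
echo "$frontend_wps frontend writes/s -> $(( frontend_wps * 4 )) backend IOPS"

# And a single full-stripe write on a 15-disk RAID-5 touches every spindle:
echo "one full-stripe write -> 15 disk I/Os"
```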
There are 3 slots in a V240:
1 x 64-bit @ 33/66 MHz
2 x 64-bit @ 33 MHz
His suggestion was that you might be saturating the PCI slot, since
their respective throughputs (in theory) are 528 MB/s and 264 MB/s.
A 2342 should (again, in theory) do 256 MB/s (per port) ... so slotting
the card into the 33 MHz slots
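Those theoretical figures come straight from bus width times clock:

```shell
# Theoretical PCI bandwidth: bus width (in bytes) x clock (MHz) = MB/s.
for slot in "64 66" "64 33"; do
  set -- $slot
  echo "${1}-bit @ ${2} MHz: $(( $1 / 8 * $2 )) MB/s"
done
```

So a dual-port card in a 33 MHz slot is already past the slot's ceiling if both ports run flat out.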
The controller unit contains all of the cache.
On 4/21/07, Albert Chin
<[EMAIL PROTECTED]> wrote:
On Thu, Mar 22, 2007 at 01:21:04PM -0700, Frank Cusack wrote:
> Does anyone have a 6140 expansion shelf that they can hook directly to
> a host? Just wondering if this configuration works. Previo
Pity the price of a JBOD is so close to a Controller unit...
On 3/27/07, Wee Yeh Tan <[EMAIL PROTECTED]> wrote:
On 3/24/07, Frank Cusack <[EMAIL PROTECTED]> wrote:
> On March 23, 2007 5:38:20 PM +0800 Wee Yeh Tan <[EMAIL PROTECTED]> wrote:
> > I should be able to reply to you next Tuesday -- my
Greetings...
Although I've not tried to directly connect a 6140 JBOD unit to a
host, I've noticed that the JBOD's disk drives do not online on their
own.
Without the controller unit activated, the drives continue to flash as
if waiting to online... when the hardware controller switches on it
dis
http://docs.sun.com/source/819-0139/index.html
On 2/17/07, Vikash Gupta <[EMAIL PROTECTED]> wrote:
Hi,
I just deployed ZFS on a SAN-attached disk array and it's working fine.
How do I get the dual-pathing advantage of the disk (like DMP in Veritas)?
Can someone point me to the correct doc and setup.
T
time with filebench, but I will try to stick to what I've seen at clients in terms of db sizes, users, type of app, etc.
On 11/4/06, Jason J. W. Williams <[EMAIL PROTECTED]> wrote:
> Hi Louwtjie,
> Are you running FC or SATA-II disks in the 6140? How many spindles too?
> Best Regards,
> Jason
On 11/3/0
Hi there
I'm busy with some tests on the above hardware and will post some scores soon.
For those that do _not_ have the above available for tests, I'm open to
suggestions on potential configs that I could run for you.
Pop me a mail if you want something specific _or_ you have suggestions
conc
What are the major differences between the "first" zfs shipped in 06/06 Solaris
10, compared to the latest builds of OpenSolaris?
Will there be any major functionality released to 06/06 Solaris zfs via patches?
Will major zfs updates only be integrated into Solaris with the regular release
cyc
No ACLs ...
This message posted from opensolaris.org
Hi there
Did a backup/restore on TSM, works fine.
Hi there
Are there any consideration given to this feature...?
I would also agree that this will not only be a "testing" feature, but will
find its way into production.
It would probably work on the same principle as swap -a and swap -d ;) Just a
little bit more complex.
Hi there
Is it fair to compare the 2 solutions using Solaris 10 U2 and a commercial
database (SAP SD scenario).
The cache on the HW raid helps, and the CPU load is less... but the solution
costs more and you _might_ not need the performance of the HW RAID.
Has anybody with access to these unit