Let ZFS deal with the redundancy part. I'm not
counting redundancy offered by traditional RAID, as
you can see from the posts in this forum that:
1. It doesn't work.
2. It bites when you least expect it to.
3. You can do nothing but resort to tapes and a LOT of
aspirin when you get bitten.
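A minimal sketch of the advice above, assuming the controller exposes the drives as plain disks (the device names c1t2d0 through c1t5d0 are placeholders, not real devices):

```shell
# Hypothetical sketch: hand raw disks to ZFS and let it own redundancy.
# Device names are placeholders; use format(1M) to find yours.
zpool create data \
  mirror c1t2d0 c1t3d0 \
  mirror c1t4d0 c1t5d0

# ZFS checksums every block, so it can detect and self-heal corruption
# that a traditional RAID controller would silently pass through:
zpool status -v data
```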
Christiaan Willemsen wrote:
Hi Richard,
Richard Elling wrote:
It should cost less than a RAID array...
Advertisement: Sun's low-end servers have 16 DIMM slots.
Sadly, those are by far more expensive than what I have here from our
own server supplier...
ok, that pushed a button. Let's
Why not go to 128-256 GBytes of RAM? It isn't that
expensive and would
significantly help give you a big performance boost
;-)
Would be nice, but it's not that inexpensive, since we'd have to move up a
class in server choice, which, besides the extra memory cost, also brings some
more money
Christiaan Willemsen wrote:
Why not go to 128-256 GBytes of RAM? It isn't that
expensive and would
significantly help give you a big performance boost
;-)
Would be nice, but it's not that inexpensive, since we'd have to move up a
class in server choice, and besides the extra
On Mon, Jun 30, 2008 at 10:17 AM, Christiaan Willemsen
[EMAIL PROTECTED] wrote:
The question is: how can we maximize IO by using the best possible
combination of hardware and ZFS RAID?
Here are some generic concepts that still hold true:
More disks can handle more IOs.
Larger disks can put more data on the outer edge, where performance is better.
On Tue, 1 Jul 2008, Johan Hartzenberg wrote:
Larger disks can put more data on the outer edge, where performance is
better.
On the flip side, disks with a smaller form factor produce less
vibration and are less sensitive to it so seeks stabilize faster with
less chance of error. The
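To put rough numbers on "more disks can handle more IOs", here is a back-of-the-envelope sketch; the 180 random IOPS per 15K RPM drive and the 12-disk count are assumptions for illustration, not measurements:

```shell
# Estimate aggregate random IOPS for a pool of striped mirrors.
# Assumed figures: 12 data disks, ~180 random IOPS per 15K RPM drive.
DISKS=12
PER_DISK_IOPS=180
MIRROR_PAIRS=$((DISKS / 2))
READ_IOPS=$((DISKS * PER_DISK_IOPS))          # reads spread over all spindles
WRITE_IOPS=$((MIRROR_PAIRS * PER_DISK_IOPS))  # each write hits both halves
echo "pairs=$MIRROR_PAIRS read=$READ_IOPS write=$WRITE_IOPS"
```

By the same arithmetic a raidz vdev delivers roughly one disk's worth of random IOPS, which is one reason mirrors tend to be suggested for database workloads.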
On Mon, Jun 30, 2008 at 11:43 AM, Akhilesh Mritunjai
[EMAIL PROTECTED] wrote:
I'll probably be having 16 Seagate 15K5 SAS disks,
150 GB each. Two in HW raid1 for the OS, two in HW
raid 1 or 10 for the transaction log. The OS does not
need to be on ZFS, but could be.
Whatever you do, DO NOT mix zfs and HW RAID.
I feel I'm being mis-understood.
RAID - Redundant Array of Inexpensive Disks.
I meant to state that - Let ZFS deal with redundancy.
If you want to have an AID, by all means have your RAID controller do all
kinds of striping/mirroring it can to help with throughput or ease of managing
drives.
I'm new to opensolaris and very new to ZFS. In the past we have always used
Linux for our database backends.
So now we are looking for a new database server to give us a big performance
boost, and also the possibility for scalability.
Our current database consists mainly of a huge table
Christiaan Willemsen wrote:
...
And that is exactly where ZFS comes in, at least as far as I read.
The question is: how can we maximize IO by using the best possible
combination of hardware and ZFS RAID?
...
From what I read, mirroring and striping should get me better performance
than
On Monday 30 June 2008 11:14:10 James C. McPherson wrote:
Christiaan Willemsen wrote:
...
And that is exactly where ZFS comes in, at least as far as I
read.
The question is: how can we maximize IO by using the best
possible combination of hardware and ZFS RAID?
...
From what I
Christiaan,
As ZFS tuning has already been suggested, remember:
a) Never tune unless you need to.
b) Never tune unless you have an untuned benchmark set of figures to
compare against after the system has been tuned - especially in ZFS-land
which, whilst it may not be quite there, is designed
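In that spirit, a minimal way to capture the untuned baseline before touching any knobs might look like this (the pool name "data", the file paths, and the 60-second sampling window are all assumptions):

```shell
# Record the untuned state and a baseline I/O profile before tuning.
zfs get -r all data > /var/tmp/zfs-settings-before.txt        # every property, pre-tune
zpool iostat -v data 5 12 > /var/tmp/zpool-iostat-before.txt  # 5s samples, ~60s total

# Re-run the same workload after each tuning change and diff the numbers
# against these files to see whether the change actually helped.
```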
Christiaan,
As ZFS tuning has already been suggested, remember:
a) Never tune unless you need to.
b) Never tune unless you have an untuned benchmark set of figures to
compare against after the system has been tuned - especially in ZFS-land
which, whilst it may not be quite there, is
Another thing: what about a separate disk (or disk set) for the ZIL?
Would it be worth sacrificing two SAS disks for two SSD disks in raid 1
handling the ZIL?
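One way the SSD-pair idea could look as a sketch (the device names are hypothetical; mirroring the log protects in-flight synchronous writes if one SSD dies):

```shell
# Hypothetical: add two SSDs as a mirrored separate log (slog) device.
# Only synchronous writes (e.g. database commits) go through the ZIL,
# so even small SSDs are usually sufficient.
zpool add data log mirror c2t0d0 c2t1d0

zpool status data   # the log mirror appears as its own vdev
```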
This message posted from opensolaris.org
zfs-discuss mailing list
This is a bit of a sidebar to the discussion about getting the
best performance for PostgreSQL from ZFS, but may affect
you if you're doing sequential scans through the 70GB table
or its segments.
ZFS copy-on-write results in tables' contents being spread across
the full width of their stripe.
Christiaan,
So right now, I'm not babbling about some ZFS tuning setting, but about the
advantages and disadvantages of using ZFS, hardware RAID, or a combination of
the two.
I never accused you of babbling; I opened my response with "As ZFS
tuning has already been suggested" and gave some
I'll probably be having 16 Seagate 15K5 SAS disks,
150 GB each. Two in HW raid1 for the OS, two in HW
raid 1 or 10 for the transaction log. The OS does not
need to be on ZFS, but could be.
Whatever you do, DO NOT mix zfs and HW RAID.
ZFS likes to handle redundancy all by itself. It's much
Christiaan Willemsen wrote:
I'm new to opensolaris and very new to ZFS. In the past we have always used
Linux for our database backends.
So now we are looking for a new database server to give us a big performance
boost, and also the possibility for scalability.
Our current database
Akhilesh Mritunjai wrote:
I'll probably be having 16 Seagate 15K5 SAS disks,
150 GB each. Two in HW raid1 for the OS, two in HW
raid 1 or 10 for the transaction log. The OS does not
need to be on ZFS, but could be.
Whatever you do, DO NOT mix zfs and HW RAID.
I disagree. There
David Collier-Brown wrote:
ZFS copy-on-write results in tables' contents being spread across
the full width of their stripe, which is arguably a good thing
for transaction processing performance (or at least can be), but
makes sequential table-scan speed degrade.
If you're doing
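If sequential scans over the 70GB table matter, one commonly discussed mitigation is matching the ZFS recordsize to PostgreSQL's 8 KB page size before loading data; this is a sketch with a hypothetical dataset name, not a recommendation:

```shell
# Hypothetical: create a dataset for PostgreSQL data with an 8K recordsize,
# matching PostgreSQL's block size so a random page write doesn't rewrite
# a full 128K record. Trade-off: smaller records can fragment more under
# copy-on-write, which is exactly the table-scan concern raised above.
zfs create -o recordsize=8k data/pgdata

zfs get recordsize data/pgdata   # confirm before initdb / data load
```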