I had an unclean shutdown because of a hang, and suddenly my pool is degraded
(I realized something was wrong when python dumped core a couple of times).
This is before I ran a scrub:
pool: mypool
state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
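(The usual recovery sequence after an unclean shutdown like this, assuming the
devices themselves are healthy, is a scrub followed by clearing the error
counters; pool name as above:)

  zpool scrub mypool
  zpool status -v mypool    # -v lists any files with permanent errors
  zpool clear mypool        # reset error counters once the scrub comes back clean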
On 6/5/2010 1:30 PM, zfsnoob4 wrote:
I was talking about a write cache (slog/ZIL, I suppose). This is just a media
server for home. The idea is that when I copy HD video from my camera to the
network drive, it is always several GB. So if it could copy the file to the SSD
first and then have it
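(Worth noting: a dedicated log device only absorbs synchronous writes, so a bulk
file copy over CIFS may bypass it entirely; it is not a general write-back
staging area. Adding one is a single command; the device name here is
hypothetical:)

  zpool add mypool log c5t0d0    # attach an SSD as a separate intent log (slog)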
- Brandon High bh...@freaks.com wrote:
Decreasing the block size increases the size of the dedup table
(DDT).
Every entry in the DDT uses somewhere around 250-270 bytes.
Are you sure it's that high? I was told it's ~150 bytes per block, or ~1.2GB per
terabyte of storage with only 128k blocks.
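(Sanity-checking both figures: 1TB of unique data in 128k records is about 8
million blocks, so at 150 bytes/entry the DDT costs roughly 8M x 150B ~ 1.2GB
per TB, and at 270 bytes/entry closer to 2.2GB per TB. You can also read the
real table size off a live pool; the pool name is the one from earlier in the
thread:)

  zdb -DD mypool    # prints DDT entry counts and on-disk/in-core entry sizes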
On Sun, 6 Jun 2010, Roy Sigurd Karlsbakk wrote:
I mean, I don't mind if I create or modify a file and it doesn't land
on disk because an unclean shutdown happened, but a bunch of unrelated
files getting corrupted is sort of painful to digest.
ZFS guarantees consistency in a redundant setup,
I think both Bob and Thomas have it right. I am using VirtualBox and just
checked: the host I/O is cached on the SATA controller, although I thought I had
it enabled (this is VB 3.2.0).
Let me run in this mode for a while and see if this happens again.
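(For anyone else chasing this: the two relevant VirtualBox knobs are the
controller's host I/O cache flag and the IgnoreFlush extradata, which controls
whether guest cache-flush commands are dropped. VM and controller names below
are hypothetical:)

  # toggle host I/O caching on the virtual SATA controller
  VBoxManage storagectl "MyVM" --name "SATA Controller" --hostiocache on
  # 0 = honor flush requests from the guest (important for ZFS integrity)
  VBoxManage setextradata "MyVM" \
      "VBoxInternal/Devices/ahci/0/LUN#0/Config/IgnoreFlush" 0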
Very interesting. This could be useful for a number of us. Would you be willing
to share your work?
On Sun, Jun 6, 2010 at 10:46 AM, Brandon High bh...@freaks.com wrote:
No, that's the number that stuck in my head though.
Here's a reference from Richard Elling:
(http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/038018.html)
Around 270 bytes, or one 512-byte sector.
-B
--
Brandon
Hi,
I'm looking to build a virtualized web hosting server environment accessing
files on a hybrid storage SAN. I was looking at using the Sun Fire X4540
with the following configuration:
- 6 RAID-Z vdevs with one hot spare each (all 500GB 7200RPM SATA drives)
- 2 Intel X25 32GB SSDs
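(A sketch of what that layout could look like at creation time; device names
are hypothetical and the raidz width is a guess, since the X4540 has 48 bays
and the message is cut off:)

  zpool create webpool \
      raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 \
      raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0
  # ...repeat for the remaining four raidz vdevs, then add spares and cache:
  zpool add webpool spare c6t0d0 c6t1d0 c6t2d0 c6t3d0 c6t4d0 c6t5d0
  zpool add webpool cache c7t0d0 c7t1d0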
Sequential write for large files has no real difference in speed between
an SSD and a HD.
That's not true. Indilinx-based SSDs can write up to 200MB/s sequentially, and
Sandforce-based even more. I don't know of any HD that can do that. Most HDs are
considered good if they do half of that.
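(For anyone who wants to check numbers like these on their own hardware, a
quick-and-dirty sequential-write test; the path is hypothetical, and the
trailing sync keeps the host cache from flattering the result:)

  # write ~4GB sequentially to the pool and time it
  # (use a non-compressible source instead of /dev/zero if compression is on)
  time sh -c 'dd if=/dev/zero of=/mypool/ddtest bs=1024k count=4096; sync'
  rm /mypool/ddtest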
FWIW,
I use 4 Intel 32GB SSDs as read cache for each pool of 10 Patriot Torx drives,
which are running in a raidz2 configuration. No slogs, as I haven't seen an
SSD that properly honors cache flushes yet.
I am pleased with the results. The bottleneck really turns out to be the
24-port RAID card they are plugged into.
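(If you want to see that bottleneck directly, per-device throughput can be
watched while the pool is under load; the pool name below is hypothetical:)

  zpool iostat -v tank 5    # per-vdev and per-device bandwidth, sampled every 5s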