> I am leaning towards AMD because of ECC support
Well, let's look at Intel's offerings... RAM is faster than AMD's
at 1333MHz DDR3, and one gets ECC and a thermal sensor for $10 over non-ECC:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820139040
This MB has two Intel Ethernet ports and for a
On 02/05/2010 03:21 AM, Edward Ned Harvey wrote:
> FWIW ... 5 disks in raidz2 will have capacity of 3 disks. But if you bought
> 6 disks in mirrored configuration, you have a small extra cost, and much
> better performance.
But the raidz2 can survive
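For concreteness, a back-of-envelope comparison of the two layouts under
discussion (using the 1.5 TB drives proposed elsewhere in the thread; sizes
illustrative):

  5 x 1.5 TB raidz2      -> (5 - 2) x 1.5 = 4.5 TB usable; survives any 2 disk failures
  6 x 1.5 TB (3 mirrors) -> (6 / 2) x 1.5 = 4.5 TB usable; survives 1 failure per
                            mirror pair (a second failure in the same pair loses the pool)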
Brian wrote:
Interesting comments..
But I am confused.
Performance for my backups (compression/deduplication) would most likely not be
#1 priority.
I want my VMs to run fast - so is it deduplication that really slows things
down?
Dedup requires a fair amount of CPU, but it really wants a
> I want my VMs to run fast - so is it deduplication that really slows
> things down?
>
> Are you saying raidz2 would overwhelm current I/O controllers to where
> I could not saturate 1 GB network link?
>
> Is the CPU I am looking at not capable of doing dedup and compression?
> Or are no CPUs ca
> Data in raidz2 is striped so that it is split across multiple disks.
Partial truth.
Yes, the data is on more than one disk, but it's parity-protected, requiring
computation overhead and a write operation on each and every disk. It's not
simply striped. Whenever you read or write, you need to acce
On Thu, Feb 4, 2010 at 10:35 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Thu, 4 Feb 2010, Marc Nicholas wrote:
>
>>
>> The write IOPS between the X25-M and the X25-E are different since with
>> the X25-M, much
>> more of your data gets completely lost. Most of us prefer not to
On Thu, 4 Feb 2010, Marc Nicholas wrote:
The write IOPS between the X25-M and the X25-E are different since with the
X25-M, much
more of your data gets completely lost. Most of us prefer not to lose our data.
Would you like to qualify your statement further?
Google is your friend. And chec
On Thu, Feb 4, 2010 at 10:18 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Thu, 4 Feb 2010, Marc Nicholas wrote:
>
> Very interesting stats -- thanks for taking the time and trouble to share
>> them!
>>
>> One thing I found interesting is that the Gen 2 X25-M has higher write
>>
On Thu, 4 Feb 2010, Marc Nicholas wrote:
Very interesting stats -- thanks for taking the time and trouble to share them!
One thing I found interesting is that the Gen 2 X25-M has higher write IOPS
than the
X25-E according to Intel's documentation (6,600 IOPS for 4K writes versus 3,300
IOPS fo
On Thu, 4 Feb 2010, Brian wrote:
Was my raidz2 performance comment above correct? That the write
speed is that of the slowest disk? That is what I believe I have
read.
Data in raidz2 is striped so that it is split across multiple disks.
In this (sequential) sense it is faster than a single
Interesting comments..
But I am confused.
Performance for my backups (compression/deduplication) would most likely not be
#1 priority.
I want my VMs to run fast - so is it deduplication that really slows things
down?
Are you saying raidz2 would overwhelm current I/O controllers to where I cou
> I plan to start with 5 1.5 TB drives in a raidz2 configuration and 2
> mirrored boot drives.
You want to use compression and deduplication and raidz2. I hope you didn't
want to get any performance out of this system, because all of those are
compute- or IO-intensive.
FWIW ... 5 disks in raidz2
On Thu, Feb 4, 2010 at 7:54 PM, Brian wrote:
> It sounds like the consensus is more cores over clock speed. Surprising to
> me since the difference in clock speed was over 1 GHz. So, I will go with a
> quad core.
>
Four cores @ 1.8GHz = 7.2GHz of threaded performance ([Open]Solaris is
relative
It sounds like the consensus is more cores over clock speed. Surprising to me
since the difference in clock speed was over 1 GHz. So, I will go with a quad
core.
I was leaning towards 4GB of RAM - which hopefully should be enough for dedup
as I am only planning on dedupping my smaller file sy
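Worth noting: dedup and compression are per-dataset properties, so they can be
confined to the filesystems that benefit. A minimal sketch (pool and dataset
names are made up):

  zfs set compression=on tank/backups   # compress only the backup dataset
  zfs set dedup=on tank/backups         # dedup only where data repeats
  # tank/vms is left with both off, so the VMs stay fast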
Hi Brian,
If you are considering testing dedup, particularly on large datasets,
see the list of known issues, here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup
Start with build 132.
Thanks,
Cindy
On 02/04/10 16:19, Brian wrote:
I am starting to put together a home NAS s
On 05/02/10 01:00, Brian wrote:
Thanks for the reply.
Are cores better because of the compression/deduplication being multi-threaded or because of multiple streams? It is a pretty big difference in clock speed - so curious as to why cores would be better. Glad to see your 4-core system i
I have a single zfs volume, shared out using COMSTAR and connected to a Windows
VM. I am taking snapshots of the volume regularly. I now want to mount a
previous snapshot, but when I go through the process, Windows sees the new
volume, but thinks it is blank and wants to initialize it. Any ideas
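One approach worth trying, sketched with made-up names (a snapshot of a zvol is
read-only, so it generally has to be cloned and exposed as a separate logical
unit before Windows can mount it):

  zfs clone tank/winvol@snap1 tank/winvol-snap1
  sbdadm create-lu /dev/zvol/rdsk/tank/winvol-snap1
  stmfadm add-view <GUID-printed-by-create-lu>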
Peter Radig wrote:
I was interested in the impact the type of an SSD has on the performance of the
ZIL. So I did some benchmarking and just want to share the results.
My test case is simply untarring the latest ON source (528 MB, 53k files) on a
Linux system that has a ZFS file system mounted
Put your money into RAM, especially for dedup.
-- richard
On Feb 4, 2010, at 3:19 PM, Brian wrote:
> I am starting to put together a home NAS server that will have the following
> roles:
>
> (1) Store TV recordings from SageTV over either iSCSI or CIFS. Up to 4 or 5
> HD streams at a time.
Very interesting stats -- thanks for taking the time and trouble to share
them!
One thing I found interesting is that the Gen 2 X25-M has higher write IOPS
than the X25-E according to Intel's documentation (6,600 IOPS for 4K writes
versus 3,300 IOPS for 4K writes on the "E"). I wonder if it'd perf
Thanks for the reply.
Are cores better because of the compression/deduplication being multi-threaded
or because of multiple streams? It is a pretty big difference in clock speed -
so curious as to why cores would be better. Glad to see your 4-core system is
working well for you - so seems like
On 04/02/10 20:26, Tonmaus wrote:
Hi again,
thanks for the answer. Another thing that came to my mind is that you mentioned that you mixed the disks among the controllers. Does that mean you mixed them as well among pools? Unsurprisingly, the WD20EADS is slower than the Hitachi that is
I would go with cores (threads) rather than clock speed here. My home system
is a 4-core AMD @ 1.8GHz and performs well.
I wouldn't use drives that big and you should be aware of the overheads of
RaidZ[x].
-marc
On Thu, Feb 4, 2010 at 6:19 PM, Brian wrote:
> I am Starting to put together a h
I was interested in the impact the type of an SSD has on the performance of the
ZIL. So I did some benchmarking and just want to share the results.
My test case is simply untarring the latest ON source (528 MB, 53k files) on a
Linux system that has a ZFS file system mounted via NFS over gigabit
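The test case above, sketched as commands (server name, mount point, and
tarball path are illustrative):

  mount -t nfs server:/tank/test /mnt/test               # Linux client, gigabit link
  cd /mnt/test && time tar xjf /var/tmp/on-src.tar.bz2   # 528 MB, ~53k files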
* Brian (broco...@vt.edu) wrote:
> I am starting to put together a home NAS server that will have the
> following roles:
>
> (1) Store TV recordings from SageTV over either iSCSI or CIFS. Up to
> 4 or 5 HD streams at a time. These will be streamed live to the NAS
> box during recording. (2) Pla
I am starting to put together a home NAS server that will have the following
roles:
(1) Store TV recordings from SageTV over either iSCSI or CIFS. Up to 4 or 5 HD
streams at a time. These will be streamed live to the NAS box during recording.
(2) Playback TV (could be stream being recorded, co
Hi Ross,
Yes - zdb is dumping out info in the form of:

    Object  lvl  iblk  dblk  dsize  lsize   %full  type
        19    1   16K   512    512    512  100.00  ZFS plain file
                              264  bonus   ZFS znode
    dnode flags: USED_BYTES USERUSED_ACCOUN
Supermicro USAS-L8i controllers.
I agree with you, I'd much rather have the drives respond properly and promptly
than save a little power if that means I'm going to get strange errors from the
array. And these are the "green" drives, they just don't seem to cause me any
problems. The issues pe
Hi all,
I'm trying to replace a broken LUN in a pool using zpool replace -f,
but it fails. The physical disk is already replaced, and the new LUN has the
same address as the broken one. But zpool detach/attach works.
This is a simple configuration:
pool: mypool
state: DEGRADED
status: One or more devices has e
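The detach/attach workaround mentioned above, sketched with illustrative
device names (this assumes the vdev is a mirror, which is what detach
requires):

  zpool detach mypool c1t2d0            # drop the broken half of the mirror
  zpool attach mypool c1t1d0 c1t2d0     # attach the replacement LUN to the survivor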
On Thu, Feb 04, 2010 at 04:03:19PM -0500, Frank Cusack wrote:
> On 2/4/10 2:46 PM -0600 Nicolas Williams wrote:
> >In Frank's case, IIUC, the better solution is to avoid the need for
> >unionfs in the first place by not placing pkg content in directories
> >that one might want to be writable from z
On 2/4/10 2:46 PM -0600 Nicolas Williams wrote:
In Frank's case, IIUC, the better solution is to avoid the need for
unionfs in the first place by not placing pkg content in directories
that one might want to be writable from zones. If there's anything
about Perl5 (or anything else) that causes t
On Thu, Feb 04, 2010 at 03:19:15PM -0500, Frank Cusack wrote:
> BTW, I could just install everything in the global zone and use the
> default "inheritance" of /usr into each local zone to see the data.
> But then my zones are not independent portable entities; they would
> depend on some non-defaul
On 2/4/10 8:21 AM -0500 Ross Walker wrote:
Find -newer doesn't catch files added or removed; it assumes identical
trees.
This may be redundant in light of my earlier post, but yes it does.
Directory mtimes are updated when a file is added or removed, and
find -newer will detect that.
-frank
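A minimal illustration of that behaviour (paths and timestamp are made up):

  touch -t 202002010000 /tmp/ref     # reference point in time
  find /tank/fs -newer /tmp/ref      # prints changed files, plus any directory
                                     # whose entries were added or removed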
On 2/4/10 8:00 AM +0100 Tomas Ögren wrote:
rsync by default compares metadata first, and only checks through every
byte if you add the -c (checksum) flag.
I would say rsync is the best tool here.
Ah, I didn't know that was the default. No wonder recently when I was
incremental-rsyncing a few
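For reference, the two comparison modes (paths illustrative; -n makes it a
dry run that only reports differences):

  rsync -avn  /src/ /dst/    # default: compare size + mtime (fast)
  rsync -avnc /src/ /dst/    # -c: checksum every file on both sides (thorough, slow)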
BTW, I could just install everything in the global zone and use the
default "inheritance" of /usr into each local zone to see the data.
But then my zones are not independent portable entities; they would
depend on some non-default software installed in the global zone.
Just wanted to explain why
On 2/4/10 12:39 AM -0500 Ross Walker wrote:
On Feb 3, 2010, at 8:59 PM, Frank Cusack
wrote:
I think you misread the thread. Either find or ddiff will do it and
either will be better than rsync.
Find can find files that have been added or removed between two directory
trees?
How?
When a fi
On February 4, 2010 12:12:04 PM +0100 dick hoogendijk
wrote:
Why don't you just export that directory with NFS (rw) to your sparse zone
and mount it on /usr/perl5/mumble ? Or is this too simple a thought?
On February 4, 2010 1:41:20 PM +0100 Thomas Maier-Komor
wrote:
What about lofs? I thin
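A sketch of the lofs idea via zonecfg, with made-up zone and path names (this
assumes the mount point already exists under the zone's inherited /usr):

  zonecfg -z myzone
  zonecfg:myzone> add fs
  zonecfg:myzone:fs> set dir=/usr/perl5/site_perl
  zonecfg:myzone:fs> set special=/export/zoneperl/myzone
  zonecfg:myzone:fs> set type=lofs
  zonecfg:myzone:fs> end
  zonecfg:myzone> commit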
On 04/02/2010 12:42, Darren J Moffat wrote:
On 04/02/2010 12:13, Roshan Perera wrote:
Hi Darren,
Thanks - IBM basically haven't tested ClearCase with ZFS compression;
therefore, they don't currently support it. That may change in future; as such
my customer cannot use compression. I have asked IBM for roadma
On 4 Feb 2010, at 16:35, Bob Friesenhahn wrote:
> On Thu, 4 Feb 2010, Darren J Moffat wrote:
>>> Thanks - IBM basically haven't tested ClearCase with ZFS compression;
>>> therefore, they don't currently support it. That may change in future; as such my
>>> customer cannot use compression. I have asked IBM for
Hi again,
thanks for the answer. Another thing that came to my mind is that you mentioned
that you mixed the disks among the controllers. Does that mean you mixed them
as well among pools? Unsurprisingly, the WD20EADS is slower than the Hitachi
that is a fixed 7200 rpm drive. I wonder what imp
>>> Richard Elling 2/3/2010 6:06 PM >>>
On Feb 3, 2010, at 3:46 PM, Ross Walker wrote:
> On Feb 3, 2010, at 12:35 PM, Frank Cusack
> wrote:
>
> So was there a final consensus on the best way to find the difference between
> two snapshots (files/directories added, files/directories deleted an
On 03/02/2010 21:45, Aleksandr Levchuk wrote:
Hardware RAID6 + hot spare worked well for us, so I wanted to stick
with our SAN for data protection. I understand that the end-to-end checks
of ZFS make it better at detecting corruption.
In my case, I can imagine that ZFS would freeze the whole volume
On 04/02/2010 13:45, Karl Pielorz wrote:
--On 04 February 2010 11:31 + Karl Pielorz
wrote:
What would happen when I tried to 'online' ad2 again?
A reply to my own post... I tried this out, when you make 'ad2' online
again, ZFS immediately logs a 'vdev corrupt' failure, and marks 'ad2
Copying storage-discuss@ and zfs-discuss@ as well.
On 04/02/2010 16:33, Robert Milkowski wrote:
Hi,
S10, SC3.2 + patches, Generic_142900-03, 2x T5220 with QLE2462 connected to
6540s.
We started to observe below messages yesterday at both nodes at the same time
after several weeks of runnin
On 04/02/10 16:57, Tonmaus wrote:
Hi Arnaud,
which type of controller is this?
Regards,
Tonmaus
I use two LSI SAS3081E-R in each server (16 hard disk trays, passive
backplane AFAICT, no expander).
Works very well.
Arnaud
On Thu, 4 Feb 2010, Darren J Moffat wrote:
Thanks - IBM basically haven't tested ClearCase with ZFS compression;
therefore, they don't currently support it. That may change in future; as such my
customer cannot use compression. I have asked IBM for roadmap info to find
whether/when it will be supported.
--On 04 February 2010 08:58 -0500 Jacob Ritorto
wrote:
Seems your controller is actually doing only harm here, or am I missing
something?
The RAID controller presents the drives as both a mirrored pair, and JBOD -
*at the same time*...
The machine boots off the partition on the 'mirrore
Hi Arnaud,
which type of controller is this?
Regards,
Tonmaus
Hi all,
it might not be a ZFS issue (and thus on the wrong list), but maybe there's
someone here who might be able to give us a good hint:
We are operating 13 x4500 and started to play with non-Sun blessed SSDs in
there. As we were running Solaris 10u5 before and wanted to use them as log
devi
On Thu, 4 Feb 2010, Karl Pielorz wrote:
The reason for testing this is because of a weird RAID setup I have
where if 'ad2' fails, and gets replaced - the RAID controller is going
to mirror 'ad1' over to 'ad2' - and cannot be stopped.
Does the raid controller not support a JBOD mode?
Regards
I think you'll do just fine then. And I think the extra platter will
work to your advantage.
-marc
On 2/3/10, Simon Breden wrote:
> Probably 6 in a RAID-Z2 vdev.
>
> Cheers,
> Simon
Seems your controller is actually doing only harm here, or am I missing
something?
On Feb 4, 2010 8:46 AM, "Karl Pielorz" wrote:
--On 04 February 2010 11:31 + Karl Pielorz
wrote:
> What would happen...
A reply to my own post... I tried this out, when you make 'ad2' online
again, ZFS immed
Hi,
I'm kind of stuck trying to get my aging Netra 240 machine to boot
OpenSolaris. The live CD and installation worked perfectly, but when I
reboot and try to boot from the installed disk, I get:
Rebooting with command: boot disk0
Boot device: /p...
--On 04 February 2010 11:31 + Karl Pielorz
wrote:
What would happen when I tried to 'online' ad2 again?
A reply to my own post... I tried this out, when you make 'ad2' online
again, ZFS immediately logs a 'vdev corrupt' failure, and marks 'ad2'
(which at this point is a byte-for-byte
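The sequence being tested, roughly (pool name illustrative; ad1/ad2 are the
FreeBSD device names from the post):

  zpool offline tank ad2    # take one half of the ZFS mirror offline
  # ...the RAID controller silently copies ad1 over ad2, behind ZFS's back...
  zpool online tank ad2     # ZFS immediately logs a 'vdev corrupt' failure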
The delete queue and related blocks need further investigation...
r...@osol-dev:/data/zdb-test# zdb -dd data/zdb-test | more
Dataset data/zdb-test [ZPL], ID 641, cr_txg 529804, 24.5K, 6 objects
    Object  lvl  iblk  dblk  dsize  lsize  %full  type
         0    7   16K   16K  15.0K    16K
Interesting, can you explain what zdb is dumping exactly?
I suppose you would be looking for blocks referenced in the snapshot
that have a single reference and print out the associated file/
directory name?
-Ross
On Feb 4, 2010, at 7:29 AM, Darren Mackay wrote:
Hi Ross,
zdb - f..
On Feb 4, 2010, at 2:00 AM, Tomas Ögren wrote:
On 03 February, 2010 - Frank Cusack sent me these 0,7K bytes:
On February 3, 2010 12:04:07 PM +0200 Henu
wrote:
Is there a possibility to get a list of changed files between two
snapshots? Currently I do this manually, using basic file sys
Looking through some more code... I was a bit premature in my last post - been a
long day.
Extracting the guids and querying the metadata seems logical -> I think
running a zfs send just to parse the data stream is a lot of overhead, when you
really only need to traverse metadata directly.
z
Hardware RAID6 + hot spare worked well for us, so I wanted to stick
with our SAN for data protection. I understand that the end-to-end checks
of ZFS make it better at detecting corruption.
In my case, I can imagine that ZFS would freeze the whole volume when a
single block or file is found to be corr
Hi Darren,
I totally agree with you and have raised some of the points mentioned but you
have given even more items to pass on.
I will update the alias when I hear further.
Many Thanks
Roshan
- Original Message -
From: Darren J Moffat
Date: Thursday, February 4, 2010 12:42 pm
Subject
On Wed, Feb 03, 2010 at 03:02:21PM -0800, Brandon High wrote:
> Another solution, for a true DIY x4500: BackBlaze has schematics for
> the 45 drive chassis that they designed available on their website.
> http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/
On 04/02/2010 12:13, Roshan Perera wrote:
Hi Darren,
Thanks - IBM basically haven't tested ClearCase with ZFS compression; therefore,
they don't currently support it. That may change in future; as such my customer cannot use
compression. I have asked IBM for roadmap info to find whether/when it will be
sup
On 04.02.2010 12:12, dick hoogendijk wrote:
>
> Frank Cusack wrote:
>> Is it possible to emulate a unionfs with zfs and zones somehow? My zones
>> are sparse zones and I want to make part of /usr writable within a zone.
>> (/usr/perl5/mumble to be exact)
>
> Why don't you just export that direct
Hi Ross,
zdb - f...@snapshot | grep "path" | nawk '{print $2}'
Enjoy!
Darren Mackay
Hi Darren,
Thanks - IBM basically haven't tested ClearCase with ZFS compression; therefore,
they don't currently support it. That may change in future; as such my customer cannot use
compression. I have asked IBM for roadmap info to find whether/when it will be
supported.
Thanks
Roshan
- Original Mess
On Thu, Feb 4, 2010 at 2:09 AM, Frank Cusack
wrote:
> Is it possible to emulate a unionfs with zfs and zones somehow? My zones
> are sparse zones and I want to make part of /usr writable within a zone.
> (/usr/perl5/mumble to be exact)
>
> I can't just mount a writable directory on top of /usr/pe
On 04/02/2010 11:54, Roshan Perera wrote:
Anyone in the group using ZFS compression on clearcase vobs? If so any issues,
gotchas?
There shouldn't be any issues, and I'd be very surprised if there were.
IBM support informs that ZFS compression is not supported. Any views on this?
Need more da
Hi All,
Anyone in the group using ZFS compression on clearcase vobs? If so any issues,
gotchas?
IBM support informs that ZFS compression is not supported. Any views on this?
Rgds
Roshan
Hi All,
I've been using ZFS for a while now - and everything's been going well. I
use it under FreeBSD - but this question almost certainly should be the
same answer, whether it's FreeBSD or Solaris (I think/hope :)...
Imagine if I have a zpool with 2 disks in it, that are mirrored:
"
NAME
Frank Cusack wrote:
> Is it possible to emulate a unionfs with zfs and zones somehow? My zones
> are sparse zones and I want to make part of /usr writable within a zone.
> (/usr/perl5/mumble to be exact)
Why don't you just export that directory with NFS (rw) to your sparse zone
and mount it on /
Henu wrote:
So do you mean I cannot gather the names and locations of
changed/created/removed files just by analyzing a stream of
(incremental) zfs_send?
That's correct, you can't. Snapshots do not work at the file level.
--
Ian.
Whoa! That is exactly what I've been looking for. Is there any
development version publicly available for testing?
Regards,
Henrik Heino
Quoting Matthew Ahrens :
This is RFE 6425091 "want 'zfs diff' to list files that have changed
between snapshots", which covers both file & directory chang
So do you mean I cannot gather the names and locations of
changed/created/removed files just by analyzing a stream of
(incremental) zfs_send?
Quoting Andrey Kuzmin :
On Wed, Feb 3, 2010 at 6:11 PM, Ross Walker wrote:
On Feb 3, 2010, at 9:53 AM, Henu wrote:
Okay, so first of all, it's tr
Hi Simon
> I.e. you'll have to manually intervene
> if a consumer drive causes the system to hang, and
> replace it, whereas the RAID edition drives will
> probably report the error quickly and then ZFS will
> rewrite the data elsewhere, and thus maybe not kick
> the drive.
IMHO the relevant aspe
We've got 50+ X4500/X4540s running happily with ZFS in the same DC.
Approximately 2500 drives, and growing every day...
Br
Mertol
Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email mertol.ozyo...@sun
Sorry for the late answer.
It's approximately 150 bytes per individual block, so increasing the
blocksize is a good idea.
Also, when the L1 and L2 ARC are not enough, the system will start issuing disk
IOPS, and RAID-Z is not very effective for random IOPS; it's likely that when your
DRAM is not enough you
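A rough sizing sketch using that 150-bytes-per-block estimate (numbers are
illustrative):

  1 TB of unique data / 128 KB recordsize  ~= 8.4M blocks
  8.4M blocks x 150 bytes                  ~= 1.2 GB of dedup table
  the same 1 TB at 4 KB blocks             ~= 268M blocks ~= 40 GB of DDT

which is why the advice earlier in the thread is to put your money into RAM.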