[zfs-discuss] ZFS: clarification on meaning of the autoreplace property

2010-03-17 Thread Dave Johnson
From pages 29, 83, 86, 90, and 284 of the 10/09 Solaris ZFS Administration
guide, it sounds like a disk designated as a hot spare will:
1. Automatically take the place of a bad drive when needed.
2. Automatically be detached back to the spare pool when a new
   device is inserted and brought up to replace the original
   compromised one.

Should this work the same way for slices?

I have four active disks in a RAID 10 configuration
for a storage pool, and the same disks are used
for mirrored root configurations, but only
one of the possible mirrored root slice
pairs is currently active.

I wanted to designate slices on a 5th disk as
hot spares for the two existing pools, so
after partitioning the 5th disk (#4) identically
to the four existing disks, I ran:

# zpool add rpool spare c0t4d0s0
# zpool add store1 spare c0t4d0s7
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t0d0s0  ONLINE       0     0     0
            c0t1d0s0  ONLINE       0     0     0
        spares
          c0t4d0s0    AVAIL

errors: No known data errors

  pool: store1
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        store1        ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t0d0s7  ONLINE       0     0     0
            c0t1d0s7  ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t2d0s7  ONLINE       0     0     0
            c0t3d0s7  ONLINE       0     0     0
        spares
          c0t4d0s7    AVAIL

errors: No known data errors
--
So it looked like everything was set up how I was
hoping until I emulated a disk failure by pulling
one of the online disks. The root pool responded
how I expected, but the storage pool, on slice 7,
did not appear to perform the autoreplace.
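
(One thing worth double-checking here, as an aside: the behavior under
test depends on the pool-level autoreplace property, which defaults to
off. A quick sketch of checking and enabling it on both pools:

# zpool get autoreplace rpool store1
# zpool set autoreplace=on rpool
# zpool set autoreplace=on store1

The spare should still kick in either way; autoreplace only governs what
happens when a replacement device shows up in the failed device's slot.)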

Not too long after pulling one of the online disks:


# zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: resilver in progress for 0h0m, 10.02% done, 0h5m to go
config:

        NAME            STATE     READ WRITE CKSUM
        rpool           DEGRADED     0     0     0
          mirror        DEGRADED     0     0     0
            c0t0d0s0    ONLINE       0     0     0
            spare       DEGRADED    84     0     0
              c0t1d0s0  REMOVED      0     0     0
              c0t4d0s0  ONLINE       0     0    84  329M resilvered
        spares
          c0t4d0s0      INUSE     currently in use

errors: No known data errors

  pool: store1
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        store1        ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t0d0s7  ONLINE       0     0     0
            c0t1d0s7  ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t2d0s7  ONLINE       0     0     0
            c0t3d0s7  ONLINE       0     0     0
        spares
          c0t4d0s7    AVAIL

errors: No known data errors

I was able to convert the state of store1 to DEGRADED by
writing to a file in that storage pool, but it always listed
the spare as available, even while it was showing
c0t1d0s7 as REMOVED in the same pool.

Based on the manual, I expected the system to bring a
reinserted disk back on line automatically, but zpool status
still showed it as REMOVED. To get it back on line:

# zpool detach rpool c0t4d0s0
# zpool clear rpool
# zpool clear store1

Then status showed *both* pools resilvering. So the questions are:

1. Does autoreplace work on slices, or just complete disks?
2. Is there a problem replacing a bad disk with the same disk
   to get the autoreplace function to work?
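
(For reference, and only as a sketch: the manual fallback when the
automatic path doesn't kick in is an explicit replace. With the original
disk reinserted in the same bay, the single-argument form of replace
rebuilds onto that same device, and the spare is then detached by hand
once the resilver finishes:

# zpool replace rpool c0t1d0s0
# zpool detach rpool c0t4d0s0

The detach returns c0t4d0s0 to the AVAIL spare list.)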
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS: clarification on meaning of the autoreplace property

2010-03-17 Thread Dave Johnson
 Hi Dave,
 
 I'm unclear about the autoreplace behavior with one spare that is
 connected to two pools. I don't see how it could work if the autoreplace
 property is enabled on both pools, which formats and replaces a spare

Because I already partitioned the disk into slices. Then
I indicated the proper slice as the spare.

 disk that might be in-use in another pool (?) Maybe I misunderstand.
 
 1. I think autoreplace behavior might be inconsistent when a device is
 removed. CR 6935332 was filed recently but is not available yet through
 our public bug database.
 
 2. The current issue with adding a spare disk to a ZFS root pool is that
 if a root pool mirror disk fails and the spare kicks in, the bootblock
 is not applied automatically. We're working on improving this experience.

While the bootblock may not have been applied automatically,
the root pool did show resilvering, but the storage pool
did not (at least per the status report).
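
(For anyone hitting this in the meantime, the bootblock can be applied
by hand to the spare that took over; this is a sketch from the standard
docs rather than something tested here. On x86:

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t4d0s0

and on SPARC:

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t4d0s0

where c0t4d0s0 is the spare slice from the status output above.)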

 
 My advice would be to create a 3-way mirrored root pool until we have a
 better solution for root pool spares.

That would be sort of a different topic. I'm just interested
in understanding the functionality of the hot spare at this
point.

 
 3. For simplicity and ease of recovery, consider using your disks as
 whole disks, even though you must use slices for the root pool.

I can't do this with a RAID 10 configuration on the
storage pool and a mirrored root pool. I only have
room for 5 disks in a 2RU server with 3.5" drive bays.

 If one disk is part of two pools and it fails, two pools are impacted.

Yes. This is why I used slices instead of a whole disk
for the hot spare.

 The beauty of ZFS is no longer having to deal with slice administration,
 except for the root pool.
 
 I like your mirror pool configuration but I would simplify it by
 converting store1 to using whole disks, and keep separate spare disks.

I would have done that from the beginning with more
chassis space.

 One for the store1 pool, and either create a 3-way mirrored root pool
 or keep a spare disk connected to the system but unconfigured.

I still need confirmation on whether the hot spare function
will work with slices. I saw no errors when executing the commands
for the hot spare slices, but I got this funny response when I ran the
test.
 
Dave
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 3ware support

2008-02-19 Thread Dave Johnson
Nice putrid spew of FUD regarding 3Ware cards.

Regarding the SuperMicro 8-port SATA PCI-X card, yes, that is a good 
recommendation.

-=dave
  - Original Message - 
  From: Rob Windsor 
  To: zfs-discuss@opensolaris.org 
  Sent: Tuesday, February 12, 2008 12:39 PM
  Subject: Re: [zfs-discuss] 3ware support

  3ware cards do not work (as previously specified).  Even in 
  linux/windows, they're pretty flaky -- if you had Solaris drivers, you'd 
  probably shoot yourself in a month anyway.

  I'm using the SuperMicro aoc-sat2-mv8 at the recommendation of someone 
  else on this list.  It's a JBOD card, which is perfect for ZFS.  Also, 
  you won't be paying for RAID functionality that you're wanting to 
  disable anyway.

  Rob++
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] HAMMER

2007-11-06 Thread Dave Johnson
again i say (eventually) some zfs send|ndmp type of mechanism seems the right
way to go here *shrug*
 
-=dave



 Date: Mon, 5 Nov 2007 05:54:15 -0800
 From: [EMAIL PROTECTED]
 To: zfs-discuss@opensolaris.org
 Subject: Re: [zfs-discuss] HAMMER

 Peter Tribble wrote:
  I'm not worried about the compression effect. Where I see problems is
  backing up million/tens of millions of files in a single dataset.
  Backing up each file is essentially a random read (and this isn't
  helped by raidz which gives you a single disk's worth of random read
  I/O per vdev). I would love to see better ways of backing up huge
  numbers of files.

 It's worth correcting this point... the RAIDZ behavior you mention
 only occurs if the read size is not aligned to the dataset's block size.
 The checksum verifier must read the entire stripe to validate the data,
 but it does that in parallel across the stripe's vdevs. The whole block
 is then available for delivery to the application.

 Although, backing up millions/tens of millions of files in a single
 backup dataset is a bad idea anyway. The metadata searches will kill
 you, no matter what backend filesystem is supporting it.

 zfs send is the faster way of backing up huge numbers of files. But
 you pay the price in restore time. (But that's the normal tradeoff)

 --Joe
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] HAMMER

2007-10-17 Thread Dave Johnson
From: Robert Milkowski [EMAIL PROTECTED]
 LDAP servers with several dozen millions accounts?
 Why? First you get about 2:1 compression ratio with lzjb, and you also
 get better performance.

a busy ldap server certainly seems a good fit for compression but when i 
said large i meant large, as in bytes and numbers of files :)

seriously, is anyone out there using zfs for large storage servers?  you 
know, the same usage that 90% of the storage sold in the world is used for ? 
(yes, i pulled that figure out of my *ss ;)

are my concerns invalid with the current implementation of zfs with 
compression ?  is the compression so lightweight that it can be decompressed 
as fast as the disks can stream uncompressed backup data to tape while the 
server is still servicing clients ?  the days of nightly backups seem long 
gone in the space I've been working in the last several years... backups run 
almost 'round the clock it seems on our biggest systems (15-30TB and 
150-300mil files, which may be small by the standard of others of you out 
there.)

what really got my eyes rolling about c9n and prompted my question was all 
this talk about gzip compression and other even heavier-weight compression 
algor's.  lzjb is relatively lightweight but i could still see it being a 
bottleneck in a 'weekly full backups' scenario unless you had a very new 
system with kilowatts of cpu to spare.  gzip ? pulease.  bzip and lzma 
someone has *got* to be joking ?  i see these as ideal candidates for AVS 
scenarios where the application never requires full dumps to tape, but on a 
typical storage server ?  the compression would be ideal but would also make 
it impossible to back up in any reasonable window.
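
(worth noting, as a sketch with a made-up dataset name: compression is a 
per-dataset property, so the heavier algorithms can at least be confined to 
the datasets that can afford them, and the payoff shows up afterwards in the 
read-only compressratio property --

# zfs set compression=lzjb tank/archive
# zfs get compression,compressratio tank/archive

-- none of which changes the backup-window question above.)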

back to my postulation, if it is correct, what about some NDMP interface to 
ZFS ?  it seems a more than natural candidate.  in this scenario, 
compression would be a boon since the blocks would already be in a 
compressed state.  I'd imagine this fitting into the 'zfs send' codebase 
somewhere.
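
(as a rough illustration of what the send path looks like today -- the 
snapshot, dataset, tape device, and host names here are made up --

# zfs snapshot store1/data@weekly
# zfs send store1/data@weekly > /dev/rmt/0n
# zfs send store1/data@weekly | ssh backuphost zfs receive backup/store1-data

an NDMP front end would presumably sit where the redirect/pipe is now.)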

thoughts (on either c9n and/or 'zfs send ndmp') ?

-=dave

- Original Message - 
From: Robert Milkowski [EMAIL PROTECTED]
To: Dave Johnson [EMAIL PROTECTED]
Cc: roland [EMAIL PROTECTED]; zfs-discuss@opensolaris.org
Sent: Wednesday, October 17, 2007 2:35 AM
Subject: Re[2]: [zfs-discuss] HAMMER


 Hello Dave,

 Tuesday, October 16, 2007, 9:17:30 PM, you wrote:

 DJ you mean c9n ? ;)

 DJ does anyone actually *use* compression ?  i'd like to see a poll on how many
 DJ people are using (or would use) compression on production systems that are
 DJ larger than your little department catch-all dumping ground server.  i mean,
 DJ unless you had some NDMP interface directly to ZFS, daily tape backups for
 DJ any large system will likely be an exercise in futility unless the systems
 DJ are largely just archive servers, at which point it's probably smarter to
 DJ perform backups less often, coinciding with the workflow of migrating
 DJ archive data to it.  otherwise wouldn't the system just plain get pounded?

 LDAP servers with several dozen millions accounts?
 Why? First you get about 2:1 compression ratio with lzjb, and you also
 get better performance.


 -- 
 Best regards,
 Robert Milkowskimailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] HAMMER

2007-10-16 Thread Dave Johnson
you mean c9n ? ;)

does anyone actually *use* compression ?  i'd like to see a poll on how many 
people are using (or would use) compression on production systems that are 
larger than your little department catch-all dumping ground server.  i mean, 
unless you had some NDMP interface directly to ZFS, daily tape backups for 
any large system will likely be an exercise in futility unless the systems 
are largely just archive servers, at which point it's probably smarter to 
perform backups less often, coinciding with the workflow of migrating 
archive data to it.  otherwise wouldn't the system just plain get pounded?

-=dave

- Original Message - 
From: roland [EMAIL PROTECTED]
To: zfs-discuss@opensolaris.org
Sent: Tuesday, October 16, 2007 12:44 PM
Subject: Re: [zfs-discuss] HAMMER


 and what about compression?

 :D


 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-05 Thread Dave Johnson
From: Anton B. Rang [EMAIL PROTECTED]
 For many databases, most of the I/O is writes (reads wind up
 cached in memory).

2 words:  table scan

-=dave
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] New zfs pr0n server :)))

2007-09-07 Thread Dave Johnson
Yes, if you have any MFM/RLL drives in your possession, please disregard my 
recommendation ;)

-=dave

- Original Message - 
From: Paul Kraus [EMAIL PROTECTED]
To: zfs-discuss@opensolaris.org
Sent: Friday, September 07, 2007 5:31 AM
Subject: Re: [zfs-discuss] New zfs pr0n server :)))


 On 9/6/07, Dave Johnson [EMAIL PROTECTED] wrote:

 However, you may be able to lower the sound ever so slightly more by
 staggering the drives so that every other one is upside down, spinning the
 opposite direction and thus minimizing accumulative rotational vibration.

Be careful here. I know some older disks are not designed to
 run upside down. Check the drive manufacturers data sheet on the
 drives you are using.

 -- 
 Paul Kraus
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] New zfs pr0n server :)))

2007-09-07 Thread Dave Johnson
the up/down/up/down/... scenario should give the best results in minimizing 
accumulative rotational vibration.

-=dave

- Original Message - 
From: [EMAIL PROTECTED]
To: Dave Johnson [EMAIL PROTECTED]
Cc: Christopher Gibbs [EMAIL PROTECTED]; zfs-discuss@opensolaris.org
Sent: Friday, September 07, 2007 2:35 AM
Subject: Re: [zfs-discuss] New zfs pr0n server :)))



 I've seen the page with the pics of that server, and I agree with this
 issue. So I'd like to try to reverse half of the disks too; how would you
 advise doing this?
 My current setup is as follows, where up is the normal disk placement and
 down is with the top plate down and electronics side up:
 up
 up
 up
 up
 up
 up
 up
 up

 would it be better to do this:
 up
 down
 up
 down
 up
 down
 up
 down

 or this:
 up
 up
 up
 up
 down
 down
 down
 down

 or maybe this?
 up
 up
 down
 down
 up
 up
 down
 down





 On Thu, Sep 06, 2007 at 03:24:25PM -0700, Dave Johnson wrote:
 Agreed !

 However, you may be able to lower the sound ever so slightly more by
 staggering the drives so that every other one is upside down, spinning the
 opposite direction and thus minimizing accumulative rotational vibration.

 I had to make a makeshift temporary server when our NAS gateway device had
 a problem that required we reinitialize the array (after moving all data
 off of course).  I used a Coolermaster CM Stacker with 16x750GB drives in
 SATA 4-drive carriers all going the same direction and the system made a
 horrendous oscillating buzz, as well as the occasional drive timeout
 warning from the RAID controller when the system was under high load (all
 drives part of single RAID6 array).

 After some thought, I decided to turn 2 of the 4 drive cages upside down so
 that the config had 4 drives spinning normally, 4 upside down, 4 normally,
 and finally another 4 upside down.  The oscillation was gone completely as
 were the rare drive timeouts under load.

 Your laced setup places the drives in so much dampening that it might not
 make much of a difference but still, might as well take care of it now
 rather than later when it's all buttoned up if it starts to buzz.  It
 certainly couldn't hurt.

 -=dave

 - Original Message - 
 From: Christopher Gibbs [EMAIL PROTECTED]
 To: Diego Righi [EMAIL PROTECTED]
 Cc: zfs-discuss@opensolaris.org
 Sent: Thursday, September 06, 2007 8:06 AM
 Subject: Re: [zfs-discuss] New zfs pr0n server :)))


 Wow, what a creative idea. And I'll bet that allows for much more
 airflow than the 4-in-3 drive cages do. Very nice.
 
 On 9/6/07, Diego Righi [EMAIL PROTECTED] wrote:
 Unfortunately it only comes with 4 adapters, bare metal adapters without
 any dampening/silencing and so on...
 ...anyway I wanted to make it the most silent I could, so I suspended all
 the 10 disks (8 sata 320gb and a little 2,5 pata root disk) with a
 flexible wire, like I posted in this italian forum, the page is in
 italian, but the pictures show the concept well enough:
 http://www.pcsilenzioso.it/forum/showthread.php?t=2397
 
 
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 
 
 -- 
 Christopher Gibbs
 Email / LDAP Administrator
 Web Integration & Programming
 Abilene Christian University
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 

 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] New zfs pr0n server :)))

2007-09-06 Thread Dave Johnson
Agreed !

However, you may be able to lower the sound ever so slightly more by 
staggering the drives so that every other one is upside down, spinning the 
opposite direction and thus minimizing accumulative rotational vibration.

I had to make a makeshift temporary server when our NAS gateway device had a 
problem that required we reinitialize the array (after moving all data off 
of course).  I used a Coolermaster CM Stacker with 16x750GB drives in SATA 
4-drive carriers all going the same direction and the system made a 
horrendous oscillating buzz, as well as the occasional drive timeout warning 
from the RAID controller when the system was under high load (all drives 
part of single RAID6 array).

After some thought, I decided to turn 2 of the 4 drive cages upside down so 
that the config had 4 drives spinning normally, 4 upside down, 4 normally, 
and finally another 4 upside down.  The oscillation was gone completely as 
were the rare drive timeouts under load.

Your laced setup places the drives in so much dampening that it might not 
make much of a difference but still, might as well take care of it now 
rather than later when it's all buttoned up if it starts to buzz.  It certainly 
couldn't hurt.

-=dave

- Original Message - 
From: Christopher Gibbs [EMAIL PROTECTED]
To: Diego Righi [EMAIL PROTECTED]
Cc: zfs-discuss@opensolaris.org
Sent: Thursday, September 06, 2007 8:06 AM
Subject: Re: [zfs-discuss] New zfs pr0n server :)))


 Wow, what a creative idea. And I'll bet that allows for much more
 airflow than the 4-in-3 drive cages do. Very nice.

 On 9/6/07, Diego Righi [EMAIL PROTECTED] wrote:
 Unfortunately it only comes with 4 adapters, bare metal adapters without 
 any dampening/silencing and so on...
 ...anyway I wanted to make it the most silent I could, so I suspended all 
 the 10 disks (8 sata 320gb and a little 2,5 pata root disk) with a 
 flexible wire, like I posted in this italian forum, the page is in 
 italian, but the pictures show the concept well enough:
 http://www.pcsilenzioso.it/forum/showthread.php?t=2397


 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



 -- 
 Christopher Gibbs
 Email / LDAP Administrator
 Web Integration & Programming
 Abilene Christian University
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Compression algorithms - Project Proposal

2007-07-09 Thread Dave Johnson
roland [EMAIL PROTECTED] wrote:
  there is also no filesystem based approach in compressing/decompressing a 
  whole filesystem. you can have 499gb of data on a 500gb partition - and if 
  you need some more space you would think turning on compression on that fs 
  would solve your problem. but compression only affects files which are new. 
  i wished there was some zfs set compression=gzip zfs , zfs compress fs, 
  zfs uncompress fs and it would be nice if we could get compresion 
  information for single files. (as with ntfs)
 
one could kludge this by setting the desired compression parameters on the tree 
and then using a perl script to walk the tree, copying each file to a tmp file, 
renaming the original to an arbitrary name, renaming the tmp to the name of the 
original, then updating the new file with the original file's metadata, doing a 
checksum sanity check, and finally deleting the uncompressed original.
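
a bare-bones shell sketch of the same idea (the path and dataset are made 
up, and it glosses over open files, ACLs/extended attributes, snapshots, and 
free space, which the perl version would need to handle):

zfs set compression=gzip tank/data
find /tank/data -type f | while IFS= read -r f; do
    cp -p "$f" "$f.tmp" &&      # the copy gets rewritten with compression on
    cmp -s "$f" "$f.tmp" &&     # sanity-check the contents
    mv "$f.tmp" "$f"            # swap it in over the uncompressed original
done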
 
-=dave
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Compression algorithms - Project Proposal

2007-07-09 Thread dave johnson
Richard Elling [EMAIL PROTECTED] wrote:
 Dave Johnson wrote:
 roland [EMAIL PROTECTED] wrote:
  
   there is also no filesystem based approach in 
 compressing/decompressing a whole filesystem.
  one could kludge this by setting the compression parameters desired on 
 the tree then using a perl script to walk the tree, copying each file to 
 a tmp file, renaming the original to an arbitrary name, renaming the tmp 
 to the name of the original, then updating the new file with the original 
 file's metadata, do a checksum sanity check, then delete the uncompressed 
 original.

 This solution has been proposed several times on this forum.
 It is simpler to use an archiving or copying tool (tar, cpio, pax,
 star, cp, rsync, rdist, install, zfs send/receive et.al.) to copy
 the tree once, then rename the top directory.  It makes no sense to
 me to write a copying tool in perl or shell.  KISS :-)

That's not compressing an existing file tree, that's creating a compressed 
copy, which isn't the problem asked.  How do you do that if your tree is 
full (which is probably the #1 reason anyone would want to compress an 
existing tree) ?

You must be lucky enough to use BLISS (buying luns increases storage 
st...) :)

-=dave 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs space efficiency

2007-06-24 Thread dave johnson
How other storage systems do it is by calculating a hash value for said file 
(or block), storing that value in a db, then checking every new file (or 
block) commit against the db for a match and, if found, replacing the file 
(or block) with a reference to the duplicate entry in the db.


The most common non-proprietary hash calc for file-level deduplication seems 
to be the combination of SHA1 and MD5 together.  Collisions have been 
shown to exist in MD5 and theorized to exist in SHA1 by extrapolation, but 
the probability of collisions occurring simultaneously in both is as small 
as the capacity of ZFS is large :)
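
As a toy illustration of the file-level fingerprint idea (nothing to do 
with how ZFS itself would have to implement it; the path is made up and 
filenames without spaces are assumed), the Solaris digest(1) utility can 
produce both hashes, which a script could index to spot candidate 
duplicates:

find /tank/data -type f | while IFS= read -r f; do
    printf '%s%s %s\n' "$(digest -a md5 "$f")" "$(digest -a sha1 "$f")" "$f"
done | sort | awk 'fp == $1 { print "dup candidate:", $2 } { fp = $1 }'

Real block-level dedup would of course key on blocks and live inside the 
filesystem, not in a userland pass like this.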


While computationally intense, this would be a VERY welcome feature addition 
to ZFS, and given the infrastructure already existing within the filesystem, 
it seems a prime candidate even though it is non-trivial by any means.  I am 
not a programmer so I do not have the expertise to spearhead such a movement, 
but I would think getting at least a placeholder Goals and Objectives page 
into the OZFS community pages would be a good start even if movement on this 
doesn't come for a year or more.


Thoughts ?

-=dave

- Original Message - 
From: Gary Mills [EMAIL PROTECTED]

To: Erik Trimble [EMAIL PROTECTED]
Cc: Matthew Ahrens [EMAIL PROTECTED]; roland [EMAIL PROTECTED]; 
zfs-discuss@opensolaris.org

Sent: Sunday, June 24, 2007 3:58 PM
Subject: Re: [zfs-discuss] zfs space efficiency



On Sun, Jun 24, 2007 at 03:39:40PM -0700, Erik Trimble wrote:

Matthew Ahrens wrote:
Will Murnane wrote:
On 6/23/07, Erik Trimble [EMAIL PROTECTED] wrote:
Now, wouldn't it be nice to have syscalls which would implement cp and
mv, thus abstracting it away from the userland app?



A copyfile primitive would be great!  It would solve the problem of
having all those friends to deal with -- stat(), extended
attributes, UFS ACLs, NFSv4 ACLs, CIFS attributes, etc.  That isn't to
say that it would have to be implemented in the kernel; it could
easily be a library function.

I'm with Matt.  Having a copyfile library/sys call would be of
significant advantage.  In this case, we can't currently take advantage
of the CoW ability of ZFS when doing 'cp A B'  (as has been pointed out
to me).  'cp' simply opens file A with read(), opens a new file B with
write(), and then shuffles the data between the two.  Now, if we had a
copyfile(A,B) primitive, then the 'cp' binary would simply call this
function, and, depending on the underlying FS, it would get implemented
differently.  In UFS, it would work as it does now. For ZFS, it would
work like a snapshot, where file A and B share data blocks (at least
until someone starts to update either A or B).


Isn't this technique an instance of `deduplication', which seems to be
a hot idea in storage these days?  I wonder if it could be done
automatically, behind the scenes, in some fashion.

--
-Gary Mills--Unix Support--U of M Academic Computing and 
Networking-

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Holding disks for home servers

2007-06-08 Thread dave johnson

I only see 15 disks in your CM stacker.

I designed and built a system for work with the CMStacker and relocated the 
power and IO panel from the top slot to the side cover (where the spot for a 
small fan is) and it works great.  A single Seasonic 600AS powers the entire 
system nicely with PF of 0.98.  This unit is designed for small heat 
signature nearline storage so performance wasn't a primary factor.  With 
16x750GB drives and a Geode-NX processor board the entire system runs right around 
253W.


I ran into accumulative vibration issues right off the bat and had 3 drive 
failures within the first 2 months, not to mention the slow oscillating drone 
it produced.  Taking 2 of the 4 drive carriers and flipping them upside down 
so that 1/2 the drives were spinning the other direction solved the 
vibration problem and it's been running solidly for 2+ years now in near 
silence.


For anyone using more than a single one of these drive sleds, if your data 
is important to you, I seriously urge you to consider staggering the 
orientation of them, however ugly it may appear.


You've been warned ;)

-=dave

- Original Message - 
From: Rob Logan [EMAIL PROTECTED]

To: ZFS discussion list zfs-discuss@opensolaris.org
Sent: Thursday, June 07, 2007 10:33 AM
Subject: [zfs-discuss] Holding disks for home servers




On the third upgrade of the home nas, I chose
http://www.addonics.com/products/raid_system/ae4rcs35nsa.asp to hold the
disks. each hold 5 disks, in the space of three slots and 4 fit into a
http://www.google.com/search?q=stacker+810 case for a total of 20
disks.

But if given a chance to go back in time, the
http://www.supermicro.com/products/accessories/mobilerack/CSE-M35TQ.cfm
has LEDs next to the drive, and doesn't vibrate as much.

photos in http://rob.com/sun/zfs/

Rob
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss