Hello Mika,

Tuesday, June 27, 2006, 10:19:05 AM, you wrote:

>>but there may not be filesystem space for double the data.
>>Sounds like there is a need for a zfs-defragment-file utility
>>perhaps?
>>Or if you want to be politically cagey about naming choice, perhaps,
>>zfs-seq-read-optimize-file ?  :-)

MB> For data warehouse and streaming applications a
MB> "seq-read-optimization" could bring additional performance. For
MB> "normal" databases this should be benchmarked...

MB> This brings me back to another question. We have a production database
MB> that is cloned at the end of every month for end-of-month processing
MB> (currently with a feature of our storage array).

MB> I'm thinking about a ZFS version of this task. Requirements: the
MB> production database should not suffer performance degradation
MB> while the clone runs in parallel. Since a ZFS clone does not copy
MB> the blocks, I wonder how much the production database will suffer from
MB> sharing most of its data with the clone (concurrent access vs. caching).
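
For reference, that copy-on-write clone would look roughly like this
(dataset and snapshot names below are made up):

  # snapshot the production dataset, then clone it -- no data blocks
  # are copied, the clone initially shares everything with the origin
  zfs snapshot tank/proddb@eom-2006-06
  zfs clone tank/proddb@eom-2006-06 tank/proddb-eom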

MB> Maybe we need a feature in ZFS to do a full clone (that is, copy all
MB> blocks) inside the pool, if performance is an issue... just like the
MB> "Quick Copy" vs. "Shadow Image" features on HDS arrays...

I believe you want the clone on a different pool (so on different
disks); that way you get real separation.

The most important problem with two DBs after a ZFS clone would be the
shared spindles.
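
To get around that, the same send/receive approach can target a second
pool on separate disks (pool names are made up):

  # replicate the snapshot to another pool so the end-of-month run
  # does not compete with production for the same spindles
  zfs send tank/proddb@eom-2006-06 | zfs receive tank2/proddb-eom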


-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com

