I'm doing a zfs send -R | zfs receive on a snv_129 system. The
target filesystem has dedup enabled, but since it was upgraded from
b125 the existing data is not deduped.
The pool is an 8-disk raidz2. The system has 8 GB of memory and a
dual-core Athlon 4850e CPU.
I've set dedup=verify at the top
Michael Herf wrote:
I have also had slow scrubbing on filesystems with lots of files, and I
agree that it does seem to degrade badly. For me, it seemed to go from
24 hours to 72 hours in a matter of a few weeks.
I did these things on a pool in-place, which helped a lot (no rebuilding):
2.
Hi, I hope there's someone here who can possibly provide some assistance.
I've had this read problem now for the past 2 months and just can't get to the
bottom of it. I have a home snv_111b server, with a zfs raid pool (4 x Samsung
750GB SATA drives). The motherboard is an ASUS M2N68-CM (4
The closest bug I can find is this: 6772082 (ahci: ZFS hangs when IO happens)
On 2009-Dec-16 00:26:28 +0800, Luca Morettoni l...@morettoni.net wrote:
As reported here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/zfsbootFAQ
we can't boot from a pool with raidz. Are there any plans to add this feature?
Note that FreeBSD currently supports booting from RAIDZ (at least on
I just realized that with b129 there is now a system process running
for each existing zpool, e.g. zpool-rpool with pid 5. What is
the purpose of this process?
Thanks
Detlef
--
Sent from my OpenSolaris Laptop
Hello;
How do we dedup existing data?
Will a ZFS send to an output file in a temporary staging area in the
same pool and a subsequent reconstruct (zfs receive) from the file be
sufficient?
Or do I have to completely move the data out of the pool and back in again?
Warmest Regards
Steven
On 16.12.09 16:03, Detlef Drewanz wrote:
I just realized that with b129 there is now a system process running
for each existing zpool, e.g. zpool-rpool with pid 5. What is the
purpose of this process?
Please read the ARC case materials:
System Duty Cycle Scheduling Class and ZFS IO
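As a quick illustration (not from the ARC case itself; pid 5 is taken from
the message above), the scheduling class of such a per-pool process can be
checked with ps, where the CLS column should show the new SDC class:
# ps -p 5 -o pid,class,comm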
Detlef Drewanz wrote:
I just realized that with b129 there is now a system process running
for each existing zpool, e.g. zpool-rpool with pid 5. What is the
purpose of this process?
A new observability interface introduced in PSARC/2009/615:
Steven Sim wrote:
Hello;
How do we dedup existing data?
Currently by running a zfs send | zfs recv.
Will a ZFS send to an output file in a temporary staging area in the
same pool and a subsequent reconstruct (zfs receive) from the file be
sufficient?
Yes, but you can avoid the temp file.
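For example, a minimal sketch of the no-staging-file variant (dataset
names are hypothetical; verify the copy before destroying the original):
# zfs snapshot tank/data@prededup
# zfs send tank/data@prededup | zfs receive tank/data-dedup
# zfs destroy -r tank/data
# zfs rename tank/data-dedup tank/data
The receive rewrites every block, so the data lands deduped without ever
leaving the pool.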
On Wed, Dec 16, 2009 at 3:32 PM, Darren J Moffat
darr...@opensolaris.org wrote:
Steven Sim wrote:
Hello;
How do we dedup existing data?
Currently by running a zfs send | zfs recv.
Will a ZFS send to an output file in a temporary staging area in the same
pool and a subsequent reconstruct
Darren;
A zfs send | zfs receive onto the same filesystem???
er... I tried the following...
# zfs snapshot myplace/myd...@prededup
The above created the following...
ad...@sunlight:~$ zfs list -t snapshot -r myplace/mydata
NAME USED AVAIL REFER
Andrey Kuzmin andrey.v.kuz...@gmail.com writes:
Kjetil Torgrim Homme wrote:
for some reason I, like Steve, thought the checksum was calculated on
the uncompressed data, but a look in the source confirms you're right,
of course.
thinking about the consequences of changing it, RAID-Z recovery
Hi All,
We are looking at introducing EMC CLARiiON into our environment here. We
were discussing the following scenario and wonder if someone has an opinion.
Our product spans a number of servers with some of the data held within Veritas
and some held within ZFS. We have a requirement to
Yet again, I don't see how RAID-Z reconstruction is related to the
subject discussed (what data should be sha256'ed when both dedupe and
compression are enabled, raw or compressed). sha256 has nothing to do
with bad block detection (maybe it will when encryption is
implemented, but for now
Hi Richard,
How's the ranch? ;-)
This is most likely a naive question on my part. If recordsize is
set to 4k (or a multiple of 4k), will ZFS ever write a record that
is less than 4k or not a multiple of 4k?
Yes. The recordsize is the upper limit for a file record.
This includes
On Dec 15, 2009, at 6:24 PM, Bill Sommerfeld wrote:
On Tue, 2009-12-15 at 17:28 -0800, Bill Sprouse wrote:
After
running for a while (couple of months) the zpool seems to get
fragmented, backups take 72 hours and a scrub takes about 180
hours.
Are there periodic snapshots being created in
I'll first suggest questioning the measurement of speed you're getting,
12.5Mb/sec. I'll suggest another, more accurate method:
date ; zfs send somefilesystem | pv -b | ssh somehost zfs receive foo ; date
At any given time, you can see how many bytes have transferred in aggregate,
and what time
Hi Bob,
On Dec 15, 2009, at 6:41 PM, Bob Friesenhahn wrote:
On Tue, 15 Dec 2009, Bill Sprouse wrote:
Hi Everyone,
I hope this is the right forum for this question. A customer is
using a Thumper as an NFS file server to provide the mail store for
multiple email servers (Dovecot). They
Thanks Michael,
Useful stuff to try. I wish we could add more memory, but the x4500
is limited to 16GB. Compression was a question. It's currently off,
but they were thinking of turning it on.
bill
On Dec 15, 2009, at 7:02 PM, Michael Herf wrote:
I have also had slow scrubbing on
Hello;
Is the ZFS dedup single instancing across the entire pool or is it only
single instance inside each filesystem and not across the entire pool?
Warmest Regards
Steven Sim
Steven Sim wrote:
Hello;
Is the ZFS dedup single instancing across the entire pool or is it only
single instance inside each filesystem and not across the entire pool?
Opting in is per filesystem (dataset), but duplicates are
searched for and matched pool-wide.
--
Darren J Moffat
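For example (pool and dataset names hypothetical): dedup is switched on
per dataset, while the dedup table and ratio are maintained per pool:
# zfs set dedup=on tank/home
# zpool get dedupratio tank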
Hi Steve,
I'm collecting dedup related questions/responses here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup
Thanks,
Cindy
On 12/16/09 08:56, Steven Sim wrote:
Hello;
Is the ZFS dedup single instancing across the entire pool or is it only
single instance inside each
On Wed, Dec 16, 2009 at 7:41 AM, Edward Ned Harvey
sola...@nedharvey.com wrote:
I'll first suggest questioning the measurement of speed you're getting,
12.5Mb/sec. I'll suggest another, more accurate method:
date ; zfs send somefilesystem | pv -b | ssh somehost zfs receive foo ; date
The
On Wed, Dec 16, 2009 at 8:05 AM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
In his case 'zfs send' to /dev/null was still quite fast and the network
was also quite fast (when tested with benchmark software). The implication
is that ssh network transfer performance may have dropped
On Wed, 16 Dec 2009, Tim wrote:
read for a few seconds, then hang up again. I tried swapping the
drives around on different SATA ports on the motherboard (exported
pool, swapped SATA cables around, imported pool), but the problem
remains with the same drive (i.e. the problem moves to the next SATA
Andrey Kuzmin andrey.v.kuz...@gmail.com writes:
Yet again, I don't see how RAID-Z reconstruction is related to the
subject discussed (what data should be sha256'ed when both dedupe and
compression are enabled, raw or compressed). sha256 has nothing to do
with bad block detection (maybe it
On Wed, Dec 16, 2009 at 7:25 PM, Kjetil Torgrim Homme
kjeti...@linpro.no wrote:
Andrey Kuzmin andrey.v.kuz...@gmail.com writes:
Yet again, I don't see how RAID-Z reconstruction is related to the
subject discussed (what data should be sha256'ed when both dedupe and
compression are enabled, raw
Before Veritas VM had support for this, you needed to use a different server
to import a disk group. You could use a different server for ZFS, which
would also take the backup load off the server?
Cheers
Andrey Kuzmin wrote:
On Wed, Dec 16, 2009 at 7:25 PM, Kjetil Torgrim Homme
kjeti...@linpro.no wrote:
Andrey Kuzmin andrey.v.kuz...@gmail.com writes:
Yet again, I don't see how RAID-Z reconstruction is related to the
subject discussed (what data should be sha256'ed when both dedupe and
On Wed, Dec 16, 2009 at 6:41 PM, Chris Murray chrismurra...@gmail.com wrote:
Hi,
I run a number of virtual machines on ESXi 4, which reside in ZFS file
systems and are accessed over NFS. I've found that if I enable dedup,
the virtual machines immediately become unusable, hang, and whole
On Wed, Dec 16, 2009 at 7:46 PM, Darren J Moffat
darr...@opensolaris.org wrote:
Andrey Kuzmin wrote:
On Wed, Dec 16, 2009 at 7:25 PM, Kjetil Torgrim Homme
kjeti...@linpro.no wrote:
Andrey Kuzmin andrey.v.kuz...@gmail.com writes:
Yet again, I don't see how RAID-Z reconstruction is related
I've set dedup to what I believe are the least resource-intensive
settings - checksum=fletcher4 on the pool, dedup=on rather than
I believe checksum=fletcher4 is acceptable in dedup=verify mode only.
What you're doing is seemingly deduplication with weak checksum w/o
verification.
I think
So if the ZFS checksum is set to fletcher4 at the pool level, and
dedup=on, which checksum will it be using?
If I attempt to set dedup=fletcher4, I do indeed get this:
cannot set property for 'zp': 'dedup' must be one of 'on | off | verify | sha256[,verify]'
Could it be that my performance
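One way to check what is actually in effect (using the pool name 'zp' from
the error message above; with dedup=on the dedup checksum is sha256 even
if the checksum property is set to fletcher4):
# zfs get checksum,dedup zp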
On Wed, Dec 16, 2009 at 8:09 PM, Cyril Plisko cyril.pli...@mountall.com wrote:
I've set dedup to what I believe are the least resource-intensive
settings - checksum=fletcher4 on the pool, dedup=on rather than
I believe checksum=fletcher4 is acceptable in dedup=verify mode only.
What you're
I believe that fletcher4 was disabled for dedup in 128a. Setting dedup=on
overrides the checksum setting and forces sha256.
On Dec 16, 2009 9:10 AM, Cyril Plisko cyril.pli...@mountall.com wrote:
I've set dedup to what I believe are the least resource-intensive
settings - checksum=fletche...
I
On Dec 15, 2009, at 11:04 PM, Frank Cusack wrote:
I'm considering setting up a poor man's cluster. The hardware I'd
like to use for some critical services is especially attractive for
price/space/performance reasons, however it only has a single power
supply. I'm using S10 U8 and can't
Hi;
After some very hairy testing, I came up with the following procedure
for sending a zfs send datastream to a gzip staging file and later
receiving it back to the same filesystem in the same pool.
The above was to enable the filesystem data to be deduped.
Here is the procedure and under
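A minimal sketch of such a staging procedure (dataset and file names
hypothetical; the staging file sits in a different dataset of the same
pool, so it survives the destroy):
# zfs snapshot tank/data@prededup
# zfs send tank/data@prededup | gzip > /tank/staging/data.zfs.gz
# zfs destroy -r tank/data
# gzip -dc /tank/staging/data.zfs.gz | zfs receive tank/data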
Jack,
We'd like to get a crash dump from this system to determine the root
cause of the system hang. You can get a crash dump from a live system
like this:
# savecore -L
dumping to /dev/zvol/dsk/rpool/dump, offset 65536, content: kernel
0:18 100% done
100% done: 49953 pages dumped,
Hi,
We have approximately 3 million active users and have a storage capacity of
300 TB in ZFS zpools.
The ZFS is mounted on Sun cluster using 3*T2000 servers connected with FC to
SAN storage.
Each zpool is a LUN in the SAN, which already provides RAID, so we're not
doing raidz on top of it.
We started
On 16-Dec-09, at 10:47 AM, Bill Sprouse wrote:
Hi Brent,
I'm not sure why Dovecot was chosen. It was most likely a
recommendation by a fellow University. I agree that it is lacking in
efficiency in a lot of areas. I don't think I would be
successful in suggesting a change at this
Hi all,
Is it sound to put rpool and ZIL on a pair of SSDs (with rpool
mirrored)? I have (16) 500GB SATA disks for the data pools and they're
doing lots of database work, so I'd been hoping to cut down the seeks a
bit this way. Is this a sane, safe, practical thing to do and if so,
how
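One possible layout, as a sketch only (device names hypothetical; slice 0
of each SSD for rpool, slice 1 for the log; the attach assumes rpool
already lives on the first SSD):
# zpool attach rpool c5t0d0s0 c5t1d0s0
# zpool add datapool log mirror c5t0d0s1 c5t1d0s1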
On Wed, 16 Dec 2009, Toby Thain wrote:
(As Damon pointed out) The problem seems not Dovecot per se but the choice of
mbox format, which is rather self-evidently inefficient.
Note that Bill never told us what mail storage format was used. I was
the one who suggested/assumed that 'mbox'
Mine is similar (4-disk RAIDZ1)
- send/recv with dedup on: 4MB/sec
- send/recv with dedup off: ~80MB/sec
- send > /dev/null: ~200MB/sec.
I know dedup can save some disk bandwidth on write, but it shouldn't save
much read bandwidth (so I think these numbers are right).
There's a warning in a
In some cases, root logged into the console can still function,
but if not, then you'd need to shut down the system and run sync.
I can walk you through those steps if you need them.
If you've been tortured long enough, then feel free to upload a
crash dump and let us know.
Thanks,
Cindy
On
Bob, that was my initial thought as well when I saw the problem stay with the
drive after moving it to a different SATA port, but then it doesn't explain why
a dd test runs fine. I guess I could try a longer dd test. My dd test could
have been just lucky and hit an ok part of disk.
Would a
On Wed, 16 Dec 2009, Tim wrote:
Bob, that was my initial thought as well when I saw the problem stay
with the drive after moving it to a different SATA port, but then it
doesn't explain why a dd test runs fine. I guess I could try a
longer dd test. My dd test could have been just lucky and
On Dec 16, 2009, at 7:35 AM, Bill Sprouse wrote:
Hi Richard,
How's the ranch? ;-)
Good. Sunny, warm, turning green... perfect for the holidays :-)
This is most likely a naive question on my part. If recordsize is
set to 4k (or a multiple of 4k), will ZFS ever write a record that
is less
I'll dd the whole disk tonight. I was thinking it was bad spots, given how
some files I can copy (admittedly they are small ones) better than others,
but in saying that, seeing the throughput at 349k/sec often is rather odd on
different files. And then the files that manage to copy ok, the
Andrey Kuzmin andrey.v.kuz...@gmail.com writes:
Darren J Moffat wrote:
Andrey Kuzmin wrote:
Resilvering has nothing to do with sha256: one could resilver long
before dedupe was introduced in zfs.
SHA256 isn't just used for dedup it is available as one of the
checksum algorithms right back to
I never formatted these drives when I built the box, I just added them to zfs.
I can try format's analyze/read test as well.
Some of you may recall my complaints in July about very poor Solaris
10 performance when re-reading large collections of medium-sized (5MB)
files, and recall verifying the situation on your own systems
(including OpenSolaris). I now have an IDR installed under Solaris
10U8 which includes a
On Wed, Dec 16, 2009 at 12:19 PM, Michael Herf mbh...@gmail.com wrote:
Mine is similar (4-disk RAIDZ1)
- send/recv with dedup on: 4MB/sec
- send/recv with dedup off: ~80MB/sec
- send > /dev/null: ~200MB/sec.
I know dedup can save some disk bandwidth on write, but it shouldn't save
much read
This is great, I remember reading about this. Wow, it sure did take a while,
huh? Glad they finally got it working for you. What exactly caused this
bug? If I remember right it wasn't just related to Solaris. I remember
seeing the same behavior in FreeBSD.
On Wed, Dec 16, 2009 at 8:42 PM, Bob
On Wed, 16 Dec 2009, Richard Elling wrote:
the same way. A quick dtrace script would show how writes are
aligned to the partition boundaries, but the partition alignment is
left as an exercise for the implementer. -- richard
With 128K reads and writes, not very much apparent alignment in my
On Wed, 16 Dec 2009, Thomas Burgess wrote:
This is great, I remember reading about this. Wow, it sure did take a while,
huh? Glad they finally got it working for you. What exactly caused this
bug? If I remember right it wasn't just related to Solaris. I remember
seeing the same behavior in
I'm seeing similar results, though my file systems currently have
de-dupe disabled and only compression enabled, both systems being
I can't say this is your issue, but you can count on slow writes with
compression on. How slow is slow? Don't know. Irrelevant in this case?
Possibly.
On Wed, Dec 16, 2009 at 7:43 PM, Edward Ned Harvey
sola...@nedharvey.com wrote:
I'm seeing similar results, though my file systems currently have
de-dupe disabled and only compression enabled, both systems being
I can't say this is your issue, but you can count on slow writes with
compression
Hmm, interesting... haven't tried dd yet... just been running a read test via
format's analyze/read and it's showing up the slowdown.
Starts off reading fast... next time I looked at it, it was reading slowly and
was up to 5210. I started another read test on one of the other drives and
in a few
On Dec 16, 2009, at 6:26 PM, Jacob Ritorto wrote:
OK, it's been used twice now. Please closely define 'temporal
failure.'
Failure in the time domain. For example, a DBA drops a database
accidentally.
If the primary site and DR site are synced in time, then the DR site
will also have
At exactly the same spot it slows down? I've just run the test a number of
times, and without fail, at exactly the same spot, the read will just crawl
along erratically. It's at approx 51256xxx.
I'm just doing a surface scan via the Samsung utility to see if I see the
same slowdown...
Hi,
Reposting as I have not gotten any response.
Here is the issue. I created a zpool with 64k recordsize and enabled dedupe on
it.
# zpool create -O recordsize=64k TestPool device1
# zfs set dedup=on TestPool
I copied files onto this pool over nfs from a windows client.
Here is the output of
On 17 Dec 2009 at 03:19, Brent Jones wrote:
Something must've changed in either SSH, or the ZFS receive bits to
cause this, but sadly since I upgrade my pool, I cannot roll back
these hosts :(
I'm not sure that's the best way, but to look at how ssh is slowing
down the transfer, I'm
Hmm, not seeing the same slowdown when I boot from the Samsung ESTool CD and
run a diag which performs a surface scan...
Could this still be a hardware issue, or possibly something with the Solaris
data format on the disk?