On 03/ 5/13 12:16 PM, Ram Chander wrote:
Hi,
I am importing a zfs snapshot to Oracle Solaris 11 from another host running
Oracle Solaris 11. When the import happens via zfs recv, it locks the
filesystem, df hangs, and the filesystem is unusable. Once the import
completes, the filesystem is back to normal and read/write works fine. The
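For reference, a sketch of the kind of pipeline being described (host and
dataset names here are made up):
# zfs snapshot tank/data@xfer
# zfs send tank/data@xfer | ssh solaris11-b zfs recv -F pool2/data
While the recv is in flight, stat()-family calls on the receiving dataset
(df, ls) can block until the receive completes, which matches the symptom
above.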
I speak for myself... :-)
If the real bug is in procfs, I can file a CR.
When xattrs were designed right down the hall from me,
I don't think /proc interactions were considered, which
is why I mentioned an RFE.
Thanks,
Cindy
On 07/15/12 15:59, Cedric Blancher wrote:
On 14 July 2012 02:33, Cindy Swearingen cindy.swearin...@oracle.com wrote:
I don't think that xattrs were ever intended or designed
for /proc content.
I could file an RFE for you if you wish.
So Oracle Newspeak now calls it an RFE if you want a real bug fixed, huh? ;-)
This is a real bug in
On Fri, Jul 13, 2012 at 2:16 AM, ольга крыжановская
olga.kryzhanov...@gmail.com wrote:
Can someone here explain why accessing an NFSv4/ZFS xattr directory
through /proc is forbidden?
[...]
truss says the syscall fails with
open(/proc/3988/fd/10/myxattr, O_WRONLY|O_CREAT|O_TRUNC, 0666) Err#13
Yes, accessing the files through runat works.
I think /proc (and /dev/fd, which has the same trouble but only works
if the same process accesses the fds, for obvious reasons, since
/dev/fd is per-process and cannot be shared between processes unlike
/proc/$pid/fd/) gets confused because the
Cindy, I was not trying to open an xattr for files in /proc.
1. Please read the openat() manual page.
2. I opened an fd to the directory the xattrs are in.
3. My process, for example pid 123456, now has an open fd, for example
number 12, which points to this xattr directory.
4. Now I
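To make the failure concrete, a minimal sketch (file and attribute names are
made up): the runat path works while the /proc path does not:
# touch /tank/f
# runat /tank/f 'cp /etc/motd myxattr'
# runat /tank/f ls
Addressing the same attribute from another process as
/proc/<pid>/fd/<n>/myxattr is what fails with EACCES (Err#13), as the truss
output above shows.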
Hi Edward,
My question was: does a snapshot instantly get committed to disk, or is it
kept in memory till the next TXG? If it is kept in memory till the next TXG,
will sync=always solve this problem?
Regards,
Justin Skariah.
It's a problem if I am going to take a backup of the disk on which ZFS
resides, e.g. if it's enterprise storage. So I need to know exactly
when the snapshot gets committed to the disk so I can back up my disk.
Regards,
Justin Skariah.
I need to know exactly when the snapshot gets committed to the disk so I can
back up my disk.
You access your disk (for backup purposes) via the operating system. The OS will
show you the disk in the logical state it reached after all write
operations, regardless of whether they were
From: Gmail [mailto:justin.skar...@gmail.com]
My question was: does a snapshot instantly get committed to disk, or is it
kept in memory till the next TXG? If it is kept in memory till the next TXG,
will sync=always solve this problem?
So, *if* it's committed in the next TXG, why do you care and why are
Hello everybody,
I have a question regarding zfs snapshots. When I take a zfs snapshot, does it
commit to the disk after logging in the ZIL, or does it remain in the ZIL till a DMU
transaction group commit happens?
If it remains in the ZIL, is it possible to commit it to the disk at that time?
Will the following
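For anyone experimenting with this, a minimal sketch (the dataset name is
made up); sync=always makes writes synchronous via the ZIL, and the snapshot
itself is committed with the transaction group that contains it:
# zfs set sync=always tank/data
# zfs snapshot tank/data@backup1
# zfs list -t snapshot -r tank/data
Note that the per-dataset sync property only exists on newer builds; check
that your release supports it.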
I've been fighting with getting the backup part of zfs-auto-snapshot
working properly on Nexenta Core 3.x. I can get the snapshotting going, but as
soon as I run one of the backup jobs, it works the first time but not
after that, and it then prevents the snapshotting from working
[..SNIP..]
we decided to do just as you did,
get a truckload of drives and let others fiddle with
dedup until it's stable.
Agreed, deduplication on ZFS has never been stable enough to use in production
and I even regret using it in a testlab environment; it just grinds all I/O to
a halt
Interesting story. So basically: avoid dedup. But you didn't lose any
data. Right?
Not losing data is of course the primary issue, but having a system down for
days just to get a dataset removed hurts a bit, especially when upper management
comes around.
Vennlige hilsener / Best regards
I also strongly encourage people to just stay away from dedup -- it seems
awesome in a lot of respects, but days' worth of zfs destroy, etc. are
absolutely insane even if you don't make the blunders that I did.
Apparently, if you read enough of the shared horrors of this, you
should delete your
Interesting story. So basically: avoid dedup. But you
didn't lose any data. Right?
Correct, I did not lose any data.
pace
Deleting the dedup'ed data won't work better, since ZFS will have to process it
quite the same way as if you're destroying a ZFS volume.
The only thing that really cuts through such a dataset is to destroy the
underlying zpool. So maybe it would have been better to zfs send/recv all the
data
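A sketch of that escape route (pool and dataset names are made up): copy the
data to a non-dedup dataset on another pool, then destroy the whole pool
rather than the dataset:
# zfs snapshot tank/deduped@migrate
# zfs send tank/deduped@migrate | zfs recv backup/plain
# zpool destroy tank
Destroying the pool does not have to walk the dedup table block by block the
way zfs destroy on a single dataset does.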
Deleting the dedup'ed data won't work better, since
ZFS will have to process it quite the same way as if
you're destroying a ZFS volume.
This was my only dedup'd dataset, so I have no other experience deleting them.
Do you think that, had there been no data, my zfs destroy would have taken 3
I know there are lots of threads on this issue, but I figured I'd start a new one
to share my 3-day horror story. My hope is that someone won't repeat my
mistakes and, if they do, they might find some hope of how to determine how
long it will take to get your pool back.
I began my adventure
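Before starting down this road, one thing that helps estimate the pain (pool
name made up): zdb can print the dedup-table histogram, which gives a rough
idea how many DDT entries a destroy will have to update:
# zdb -DD tank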
I don't have any support contract.
Can you tell me the procedure to create one?
I emailed Oracle a couple of weeks ago and this is the site they pointed me to for my
non-Sun/Oracle unit if it has 1-4 sockets:
Oracle Solaris Premier Subscription for Non-Oracle Hardware (1-4 socket
server)
When I tried to change the host id, I got the following error:
bash-3.00# ./test
ERROR2: Invalid hostid 0xac32f5c.
Please use hexadecimal.
Please help me.
When I tried to change the host id, I got the
following error:
bash-3.00# ./test
ERROR2: Invalid hostid 0xac32f5c.
Please use hexadecimal.
Please help me.
I suggest using your support contract instead of trying to guess at a fix.
The issue may be small and easily repairable, but
I don't have any support contract.
Can you tell me the procedure to create one?
Yeah, I can't add the faulty disk again.
Unfortunately, the original disk was severely damaged when the capacitor
overloaded. The backside of the disk was melted to the point that it would not
connect to a system anymore.
By the way, I tried import -f using the pool id and it's not working.
One more question:
if I remove all the disks from the server, attach them again, and then give an
import, will it work?
Did you run devfsadm after you added the replacement disk?
Akhilesh Nair wrote:
OK, let me explain our problem.
We have 11 disks inside the server:
1) using 10 disks we have created a zfs pool using a RAID 1+0 config.
2) one disk was used for the operating system.
3) we are using Solaris 10 update 5.
4) suddenly one of the disks inside the pool got faulty.
5) we have replaced the disk inside the pool.
Have you used zfs commands to detach/replace the disk before physically
replacing it? (I am not a zfs guru, and not sure what the commands should be:
zpool detach? zpool replace?)
If not, maybe putting the faulty disk back is an option (it contains
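For the record, the usual sequence looks something like this (device names
are made up); treat it as a sketch, not gospel:
# zpool offline bigfs c1t4d0
(physically swap the disk)
# zpool replace bigfs c1t4d0
# zpool status bigfs
zpool replace with a single device argument resilvers onto the new disk in
the same slot; zpool status shows the resilver progress.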
Yeah, that was my fault.
We haven't done any detach command; I simply went for the replacement.
Now when I again import using the force option, I get the following error:
bash-3.00# zpool import -Ff poolname
cannot open 'bigfs': no such pool
Assertion failed: (zhp =
Now I am getting this error:
bash-3.00# zdb -e mypool
WARNING: pool 'bigfs' could not be loaded as it was last accessed by another
system (host: myserver3 hostid: 0xac32f5c). See:
http://www.sun.com/msg/ZFS-8000-EY
zdb: can't open mypool: No such file or directory
bash-3.00# zpool import -f
We haven't done any detach command; I simply went for the replacement.
Probably you can undo such a replacement: plug the faulty disk back in, then
import the pool, and only then do something like
# zpool remove ...
then replace the faulty disk with the new one and
# zpool add ...
- Dmitry.
Have you tried your last import step with the pool id instead of the name?
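Something like this, reusing the id that a bare zpool import prints (the id
below is just an example):
# zpool import
# zpool import -f 7238661365053190141
The bare zpool import lists the name and numeric id of every pool that can
be imported; passing the id avoids problems with the name.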
Hi,
We had an urgent issue on our production system: we can't import our zfs pool.
One of the disks in our pool was faulty and we replaced it; however,
after that, the pool cannot be imported.
a) I tried removing the /etc/zfs/zpool.cache file and importing, but I can't.
b) I tried using force
On 12/ 6/10 10:16 AM, Akhilesh Nair wrote:
Hi,
We had an urgent issue on our production system: we can't import our zfs pool.
Then I suggest you call in and log a support call. This email list does
not guarantee a response time.
Are you using Solaris 10 or OpenSolaris / Solaris 11 Express?
The
Thanks.
Can you tell me the phone number to log the issue?
Also, please let me know the procedure.
Thanks,
anair
On 12/ 6/10 01:27 PM, Akhilesh Nair wrote:
Thanks.
Can you tell me the phone number to log the issue?
Also, please let me know the procedure.
Thanks,
anair
This page http://www.oracle.com/us/support/contact-068555.html contains
the contact details for support.
Regards,
Brian
OK, let me explain our problem.
We have 11 disks inside the server:
1) using 10 disks we have created a zfs pool using a RAID 1+0 config.
2) one disk was used for the operating system.
3) we are using Solaris 10 update 5.
4) suddenly one of the disks inside the pool got faulty, and also the one for the OS
Hello Victor,
Thanks a lot for your help. I'm just going to buy some new drives here to be
able to back up this pool, but anyway, what do you suggest doing with such a pool
later? Completely destroy and reinstall? Or is it possible to run a scrub on a
read-only pool to recover it? I see this bug report:
Hi all,
I have one issue. I want to install Sol11Xp on a Dell PE R510 server.
It has 2x146GB SAS + 3x1TB SATA HDDs. The configuration is RAID 1 for the first two and
RAID 5 for the rest of them. Dell has its own RAID controller -
PERC 6/i, in my case.
When I go to install Solaris 11 Express, it can't find
I have an X4500 thumper box with 48x 500GB drives set up in a pool and split
into raidz2 sets of 8-10 drives within the single pool.
I had a failed disk which I cfgadm-unconfigured and replaced, no problem, but it
wasn't recognised as a Sun drive in format, and unbeknown to me someone else
From: Richard Elling [mailto:richard.ell...@gmail.com]
It is relatively easy to find the latest, common snapshot on two file
systems. Once you know the latest, common snapshot, you can send the
incrementals up to the latest.
I've always relied on the snapshot names matching. Is there a
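A sketch of doing it by hand (host and dataset names are made up): list
snapshots on both sides sorted by creation, pick the newest name present in
both lists, and send everything after it:
# zfs list -H -t snapshot -o name -s creation -r tank/data
# ssh backuphost zfs list -H -t snapshot -o name -s creation -r backup/data
Suppose tank/data@2010-09-20 is the latest snapshot both sides have:
# zfs send -I tank/data@2010-09-20 tank/data@2010-09-26 | ssh backuphost zfs recv -F backup/data
zfs send -I (capital i) sends all the intermediate snapshots between the two
named ones, so the hourly/daily/weekly rotation arrives intact.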
hi all
I'm using a custom snapshot scheme which snapshots every hour, day, week and
month, rotating 24h, 7d, 4w and so on. What would be the best way to zfs
send/receive these things? I'm a little confused about how this works for delta
updates...
Vennlige hilsener / Best regards
roy
Update to my own post. Further tests more consistently resulted in closer to
150MB/s.
When I took one disk offline, it was just shy of 100MB/s on the single disk.
There is both an obvious improvement with the mirror, and a trade-off (perhaps
the latter is controller related?).
I did the
I appear to be getting between 2 and 9MB/s reads from individual disks in my zpool,
as shown in iostat -v.
I expect upwards of 100MB/s per disk, or at least aggregate performance on par
with the number of disks that I have.
My configuration is as follows:
Two quad-core 5520 processors
48GB ECC/REG
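In case it helps others reproduce this kind of measurement, a sketch (pool
and device names are made up): compare per-vdev numbers from the pool with a
raw read from one disk:
# zpool iostat -v tank 5
# dd if=/dev/rdsk/c5t0d0s0 of=/dev/null bs=1024k count=1024
If the raw dd also crawls, the problem is below ZFS (disk, cabling,
controller); if dd is fast, look at the pool layout instead.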
Also with a raidz (4 disk) + raidz2 (8 disk) configuration that's slow as molasses
on reads. So I would love to hear any troubleshooting tips, guides, etc.
I'm kinda embarrassed to chime in since I can't help much, but I'm curious
about this.
What is your OS version, and what are the zpool and zfs versions? Can you
provide a relatively straightforward method to reproduce your results? I'd
like to see how you're testing this for my own
Why does zfs produce a batch of writes every 30 seconds on opensolaris b134
(5 seconds on a post b142 kernel), when the system is idle?
On an idle OpenSolaris 2009.06 (b111) system, /usr/demo/dtrace/iosnoop.d
shows no I/O activity for at least 15 minutes.
The same dtrace test on an idle b134
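A sketch for watching this directly (assuming the fbt provider can see the
zfs module on your build): trace each transaction-group sync as it fires:
# dtrace -n 'fbt::spa_sync:entry { printf("%Y", walltimestamp); }'
On an idle pool this should tick at the txg interval (30s on b134, 5s on
post-b142 kernels, per the observation above).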
Was at a Sun office today for a talk on ZFS, and I asked the Sun
engineer giving the talk about using ufsdump/restore for ZFS, and he said yeah,
that works if you want to use it.
Is this true? I can find NOTHING on it, even in the ufsdump man page...
I've googled till I am
Erik Trimble erik.trim...@oracle.com wrote:
'zfs send' and 'zfs receive' are the closest equivalents.
ufsdump/ufsrestore are restricted solely to UFS filesystems.
This is why star implements the ideas behind ufsdump/ufsrestore in a
filesystem-independent way. Star is even faster than
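For completeness, the send side can also dump to a file, which is the
closest thing to the ufsdump workflow (dataset and paths are made up):
# zfs snapshot rpool/export@dump1
# zfs send rpool/export@dump1 > /backup/export.zfs
# zfs recv rpool/restored < /backup/export.zfs
Unlike ufsdump, the stream format is not an archive you can pick single
files out of; it only makes sense fed back into zfs recv.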
On 5/05/10 10:42 PM, Bruno Sousa wrote:
Hi all,
I have faced yet another kernel panic that seems to be related to the mpt
driver.
This time I was trying to add a new disk to a running system (snv_134)
and this new disk was not being detected... following a tip I ran the
lsitool to reset the bus and
Justin Lee Ewing wrote:
So I can obviously see what zpools I have imported... but how do I see
pools that have been exported? Kind of like being able to see
deported volumes using vxdisk -o alldgs list.
Justin
On 21 April, 2010 - Justin Lee Ewing sent me these 0,3K bytes:
So I can obviously see what zpools I have imported... but how do I see
pools that have been exported? Kind of like being able to see deported
volumes using vxdisk -o alldgs list.
'zpool import'
/Tomas
--
Tomas Ögren,
Hi Justin,
Maybe I misunderstand your question...
When you export a pool, it becomes available for import by using
the zpool import command. For example:
1. Export tank:
# zpool export tank
2. See what pools are available for import:
# zpool import
  pool: tank
    id: 7238661365053190141
Why do OpenSolaris Bible and Pro OpenSolaris both mention that zfs has
encryption? I was under the impression that it has not yet been
included in the shipping code.
I've downloaded build 134 and can find no evidence that zfs has
encryption capability, but I'm no expert either.
I'm assuming that
IIRC, you can do encryption with loopback devices, which, while not
ideal, does work as an interim solution until encrypted datasets are
available.
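A sketch of that interim approach (the file name and cipher are illustrative;
lofiadm grew the -c crypto option in roughly this timeframe, so check your
build):
# mkfile 1g /export/secretvol
# lofiadm -c aes-256-cbc -a /export/secretvol
/dev/lofi/1
# zpool create secret /dev/lofi/1
lofiadm prompts for a passphrase when attaching; everything the pool writes
to /dev/lofi/1 is encrypted in the backing file.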
Hi all.
The default version of Python in the latest OS b134 is 2.6.4,
but zfs depends on Python 2.4.
Will there be an update to python2.5 or python2.6?
Thanks.
eXeC001er wrote:
Hi all.
The default version of Python in the latest OS b134 is 2.6.4,
but zfs depends on Python 2.4.
Will there be an update to python2.5 or python2.6?
Sounds like:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6908227
The folks on zfs-discuss would be
I have two storage systems, both on snv_133, both filled with 1TB drives.
1) a stripe over two raidz vdevs, 7 disks in each. The total available size is
(7-1)*2 = 12TB.
2) a zfs pool over HW RAID, also 12TB.
Both storage systems keep the same data with minor differences. The first pool keeps 24
hourly snapshots + 7 daily
Just to make it a bit clearer, this is the first pool:
NAME          STATE     READ WRITE CKSUM
export        ONLINE       0     0     0
  raidz1-0    ONLINE       0     0     0
    c5t0d0    ONLINE       0     0     0
    c5t1d0    ONLINE       0     0     0
Never mind, I'm just an idiot.
I mixed up marketing TBs with real ones again.
For the backup storage the real size is 12.6TB.
The primary zpool seems to report the total size of all drives in the raidz
(including those used for redundancy), which is ~14 marketing TB and ~12.8 real ones.
Then after subtraction of
Hello all,
I am a complete newbie to OpenSolaris, and must set up a ZFS NAS. I do have
Linux experience, but have never used ZFS. I have tried to install OpenSolaris
Developer 134 on an 11TB HW RAID-5 virtual disk, but after the installation I
can only use one 2TB disk, and I cannot partition
Is it possible for you to destroy the virtual drive and present the 12 disks
independently to the OS? The term for this is JBOD. If so, you
might leave the RAID job to the ZFS pool.
Karel
From pages 29, 83, 86, 90 and 284 of the 10/09 Solaris ZFS Administration
Guide, it sounds like a disk designated as a hot spare will:
1. Automatically take the place of a bad drive when needed
2. Automatically be detached back to the spare
pool when a new device is inserted and
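A sketch of those two behaviors in practice (pool and device names are made
up):
# zpool add tank spare c1t5d0
# zpool set autoreplace=on tank
# zpool status tank
With autoreplace on, a new disk placed in the failed disk's slot is
resilvered automatically and the hot spare then returns to AVAIL; with it
off, you detach the spare by hand with zpool detach.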
Hi, Erik,
I've always wondered what the benefit (and difficulty of adding to ZFS) would
be of having an async write cache for ZFS - that is, ZFS currently buffers
async writes in RAM until it decides it has aggregated enough of them to flush
to disk. I think it would be interesting to see what would
Just disregard this thread. I'm resolving the issue using other methods (not
including Solaris).
//Svein
On Mar 2, 2010, at 12:36 PM, Edward Ned Harvey wrote:
I have a system with a bunch of disks, and I’d like to know how much faster
it would be if I had an SSD for the ZIL; however, I don’t have the SSD and I
don’t want to buy one right now. The reasons are complicated, but it’s not a
cost
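Not an answer to the cost question, but the usual trick for estimating the
upper bound of what a slog would buy (builds of this era only; never leave
this on in production) is to disable the ZIL temporarily and rerun the
workload:
# echo zil_disable/W0t1 | mdb -kw
(remount the filesystem, run the benchmark)
# echo zil_disable/W0t0 | mdb -kw
If the workload barely speeds up with the ZIL off, an SSD slog will not help
either; note that zil_disable was later replaced by the per-dataset sync
property.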
No problem, I figured it out... file format is not the issue any more.
Although, zfs dedup at the block level is not making any sense to me, as my
testing shows that these image files contain similar data but do not dedup
unless they are an exact match.
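One way to see what dedup would actually find, rather than guessing from
file contents (the pool name is made up): let zdb simulate dedup over
everything in the pool and print the would-be DDT histogram:
# zdb -S tank
Blocks only count as duplicates when the whole record's checksum matches, so
two images with similar but shifted content share nothing.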
Hi, any idea why zfs does not dedup files with this format?
file /opt/XXX/XXX/data
VAX COFF executable - version 7926
Hi, any idea why ZFS will not dedup image files with
the VAX COFF data type format?
#file /opt/XXX/XXX/data
VAX COFF executable - version 7926
Unless by extraordinary chance someone guesses what's happening for you, I'd
say you need to provide a lot more info to get an answer.
Start with
ZFS has intelligent prefetching. AFAIK, Solaris disk drivers do not
prefetch.
Can you point me to any reference? I didn't find anything stating yea or
nay for either of these.
Doesn't this mean that if you enable write-back, and you have
a single, non-mirrored RAID controller, and your RAID controller
dies on you so that you lose the contents of the NVRAM, you have
a potentially corrupt file system?
It is understood that any single point of failure could result
One more thing I'd like to add here:
The PERC cache measurably and significantly accelerates small disk writes.
However, for read operations, it is insignificant compared to system RAM,
both in terms of size and speed. There is no significant performance
improvement from enabling adaptive
Hello,
I have made some benchmarks with my napp-it zfs-server.
Screenshot: http://www.napp-it.org/bench.png
Benchmark results via bonnie++ 1.03: www.napp-it.org/bench.pdf
- 2GB vs 4GB vs 8GB RAM
- mirror
If I understand correctly, ZFS nowadays will only flush data to
non-volatile storage (such as a RAID controller's NVRAM), and not
all the way out to the disks. (This was done to solve performance problems
with some storage systems, and I believe that it also is the right thing
to do under normal circumstances.)
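The knob tied to this behavior, for what it's worth (an /etc/system setting,
takes effect after reboot; only safe when the controller cache really is
non-volatile):
set zfs:zfs_nocacheflush = 1
With that set, ZFS stops sending cache-flush commands entirely, which is the
right call for battery-backed NVRAM and a disaster for plain disk write
caches.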
OK, I've done all the tests I plan to complete. For highest performance, it
seems:
- The measure I think is most relevant for typical operation is
the fastest random read/write/mix. (Thanks, Bob, for suggesting I do this
test.) The winner is clearly striped mirrors in ZFS.
Whatever. Regardless of what you say, it does show:
- Which is faster, raidz, or a stripe of mirrors?
- How much does raidz2 hurt performance compared to raidz?
- Which is faster, raidz, or hardware raid 5?
- Is a mirror twice as fast as a single disk for
Richard Elling wrote:
...
As you can see, so much has changed, hopefully for the better, that running
performance benchmarks on old software just isn't very interesting.
NB: Oracle's Sun OpenStorage systems do not use Solaris 10, and if they did, they
would not be competitive in the market. The