requirements seem like a
big downer for *my* configuration, as I have just the one SSD, but I'll
persist and see what I can get out of it.
Thanks for the thoughts thus far!
Cheers,
Nathan.
On 21/11/2012 8:33 AM, Fajar A. Nugraha wrote:
On Wed, Nov 21, 2012 at 12:07 AM, Edward Ned Harvey
(open
gs and a few other
things but it doesn't seem to change the behaviour.
Again - I'm looking for thoughts here - as I have only really just
started looking into this. Should I happen across anything interesting,
I'll followup this post.
Cheers,
Nathan. :)
On 29/05/2012 11:10 PM, Jim Klimov wrote:
2012-05-29 16:35, Nathan Kroenert wrote:
Hi John,
Actually, last time I tried the whole AF (4k) thing, its performance
was worse than woeful.
But admittedly, that was a little while ago.
The drives were the Seagate Green Barracuda, IIRC
just replace the current
ones...)
I might just have to bite the bullet and try something with current SW. :).
Nathan.
On 05/29/12 08:54 PM, John Martin wrote:
On 05/28/12 08:48, Nathan Kroenert wrote:
Looking to get some larger drives for one of my boxes. It runs
exclusively ZFS and has been us
On 29/05/2012 6:39 AM, Richard Elling wrote:
On May 28, 2012, at 5:48 AM, Nathan Kroenert wrote:
Hi folks,
Looking to get some larger drives for one of my boxes. It runs
exclusively ZFS and has been using Seagate 2TB units up until now
(which are 512 byte sector).
Anyone offer up
se so called 'advanced format'
drives (which as far as I can tell are in no way actually advanced, and
only benefit HDD makers and not the end user).
Cheers!
Nathan.
Jim Klimov wrote:
>> It is hard enough already to justify to an average wife that...
That made my night. Thanks, Jim. :)
On 03/20/12 10:29 PM, Jim Klimov wrote:
2012-03-18 23:47, Richard Elling wrote:
...
Yes, it is wrong to think that.
Ok, thanks, we won't try that :)
copy out, co
be looking at layers below ZFS. If you *can*, then
you start looking further up the stack.
Hope this helps somewhat. Let us know how you go.
Cheers!
Nathan.
On 02/ 1/12 04:52 AM, Mohammed Naser wrote:
Hi list!
I have seen less-than-stellar ZFS performance on a setup of one main
head connecte
worth considering something different ;)
Cheers!
Nathan.
On 12/19/11 09:05 AM, Jan-Aage Frydenbø-Bruvoll wrote:
Hi,
On Sun, Dec 18, 2011 at 22:00, Fajar A. Nugraha wrote:
From http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
(or at least Google's cache of it, since it s
Do note, that though Frank is correct, you have to be a little careful
around what might happen should you drop your original disk, and only
the large mirror half is left... ;)
On 12/16/11 07:09 PM, Frank Cusack wrote:
You can just do fdisk to create a single large partition. The
attached mi
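(For the archives, the kind of attach being discussed looks roughly like this; a minimal sketch, with hypothetical pool and device names:
# zpool attach tank c0t0d0 c0t2d0   # attach the larger disk as a mirror of the old one
# zpool status tank                 # let the resilver finish before detaching anything
)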
tely 100MB/s (which is about an average PC HDD
reading sequentially), I'd have thought it should be a lot faster than 12x.
Can we really only pull stuff from cache at a little over one
gigabyte per second if it's dedup data?
Cheers!
Nathan.
to be
able to claim more available space for the same device, and to be lazy
in the CRC generation/checking arena. And to profoundly impact the time
it takes to read or update anything less than 4K. But - then again,
maybe I'm missing something.
simply catastrophically slow.)
Hope this helps at least a little.
Cheers,
Nathan.
On 06/14/11 03:20 PM, Maximilian Sarte wrote:
Hi,
I am posting here in a tad of desperation. FYI, I am running FreeNAS 8.0.
Anyhow, I created a raidz1 (tank1) with 4 x 2Tb WD EARS hdds.
All was doing ok until I dec
Hi Karl,
Is there any chance at all that some other system is writing to the
drives in this pool? You say other things are writing to the same JBOD...
Given that the amount flagged as corrupt is so small, I'd imagine not,
but thought I'd ask the question anyways.
Cheers!
Nath
te you get when you disable the disk cache.
Nathan.
On 8/03/2011 11:53 PM, Edward Ned Harvey wrote:
From: Jim Dunham [mailto:james.dun...@oracle.com]
ZFS only uses system RAM for read caching,
If your email address didn't say oracle, I'd just simply come out and say
you're craz
have done something administratively silly... ;)
Nathan.
On 7/03/2011 12:14 PM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Yaverot
We're heading into the 3rd hour of the zpool destroy on "others".
T
Actually, I find that tremendously encouraging. Lots of internal
Oracle folks still subscribed to the list!
Much better than none... ;)
Nathan.
On 02/26/11 03:29 PM, Yaverot wrote:
Sorry all, didn't realize that half of Oracle would auto-reply to a public
mailing list since they'
particularly when they are sequential - using eSATA.
Note: All of this is with the 'cheap' view... You can most certainly buy
much better hardware... But bang for buck - I have been happy with the
above.
Cheers!
Nathan.
On 02/26/11 01:58 PM, Brandon High wrote:
On Fri, Feb 25, 2011 at 4:3
pushing 4 disks pretty much flat out on a PCI-X 133
3124 based card. (note that there was a pci and a pci-x version of the
3124, so watch out.)
Cheers!
Nathan.
On 02/24/11 02:10 AM, Andrew Gabriel wrote:
Krunal Desai wrote:
On Wed, Feb 23, 2011 at 8:38 AM, Mauricio Tavares
wrote:
I se
ng to tune zfs_vdev_max_pending...
Nonetheless, I'm now at a far more balanced point than when I started,
so that's a good thing. :)
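(For anyone else poking at that knob, it can be read and set on a live system with mdb; a sketch, and the value of 10 is purely illustrative:
# echo zfs_vdev_max_pending/D | mdb -k        # read the current value
# echo zfs_vdev_max_pending/W0t10 | mdb -kw   # set it to 10 until next reboot
)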
Cheers,
Nathan.
On 15/02/2011 6:44 AM, Richard Elling wrote:
Hi Nathan,
comments below...
On Feb 13, 2011, at 8:28 PM, Nathan Kroenert wrote:
On 14/02/2
On 14/02/2011 4:31 AM, Richard Elling wrote:
On Feb 13, 2011, at 12:56 AM, Nathan Kroenert wrote:
Hi all,
Exec summary: I have a situation where I'm seeing lots of large reads starving
writes from being able to get through to disk.
What is the average service time of each disk? Mul
ue is identical. (Though I have since determined
that my HP raid controller is actually *slowing* my reads and writes to
disk! ;)
Cheers!
Nathan.
On 14/02/2011 4:08 AM, gon...@comcast.net wrote:
Hi Nathan,
Maybe it is buried somewhere in your email, but I did not see what
zfs version
I get the chance, I'll give the rpool thing a crack again, but
overall, it seems to me that the behavior I'm observing is not great...
I'm also happy to supply lockstats / dtrace output etc if it'll help.
Thoughts?
Cheers!
Nathan.
http://www.stringliterals.com/?p=77
This guy talks about it too under "Hard Drives".
Sorry, I probably didn't make myself exactly clear.
Basically, drives without particular TLER settings drop out of RAID arrays randomly.
* Error Recovery - This is called various things by various manufacturers
(TLER, ERC, CCTL). In a Desktop drive, the goal is to do everything possible to
recover the d
http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery
Is there a way, short of buying enterprise (RAID-specific) drives for an array,
to use normal drives?
Does anyone have any success stories regarding a particular model?
The TLER cannot be edited on newer drives from Western Digital unfortu
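(On drives that do still honour it, the recovery timeout can usually be queried and set via smartmontools; a sketch, device name hypothetical, and note the setting is volatile on many models:
# smartctl -l scterc /dev/rdsk/c1t0d0         # query the current ERC/TLER setting
# smartctl -l scterc,70,70 /dev/rdsk/c1t0d0   # 7.0 second read/write recovery limit
)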
While I am about to embark on building a home NAS box using OpenSolaris with
ZFS.
Currently I have a chassis that will hold 16 hard drives, although not in
caddies - downtime doesn't bother me if I need to switch a drive; I could
probably even do it with the system running, just a bit of a pain. :)
I am afte
I figured out what I did wrong. The filesystem as received on the external HDD
had multiple snapshots, but I failed to check for them. So I had created a
snapshot in order to send/recv on System2. That doesn't work, obviously.
A new local send/recv of the filesystem's correct snapshot did the trick.
What is the best way to use an external HDD for initial replication of a large
ZFS filesystem?
System1 had filesystem; System2 needs to have a copy of filesystem.
Used send/recv on System1 to put filesys...@snap1 on connected external HDD.
Exported external HDD pool and connected/imported on Syst
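(For anyone searching the archives later, the working sequence is roughly this; pool and dataset names hypothetical:
system1# zfs snapshot tank/fs@snap1
system1# zfs send tank/fs@snap1 | zfs recv extpool/fs
system1# zpool export extpool
(physically move the external HDD to System2)
system2# zpool import extpool
system2# zfs send extpool/fs@snap1 | zfs recv tank/fs
)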
Regards,
Nathan
>
> Sorry,
> James
Andre, I've seen this before. What you have to do is ask James each question 3
times and on the third time he will tell the truth. ;)
I know it's not in the preview of 2010.2 (build 118).
On a serious note, James - do you know the status of the presentation r
I have not carried out any research into this area, but when I was
building my home server I wanted to use a Promise SATA-PCI card, but
alas (Open)Solaris has no support at all for the Promise chipsets.
Instead I used a rather old card based on the sil3124 chipset.
On Mon, Aug 3, 2009 at 9:35
Yes, please write more about this. The photos are terrific and I
appreciate the many useful observations you've made. For my home NAS I
chose the Chenbro ES34069 and the biggest problem was finding a
SATA/PCI card that would work with OpenSolaris and fit in the case
(technically impossible without
I'll maintain hope for seeing/hearing the presentation until you guys announce
that you had NASA store the tape for safe-keeping.
Bump'd.
This is probably bug #6462803. The work-around goes something like this:
$ pfexec bash
# beadm mount opensolaris /mnt
# beadm unmount opensolaris
# svcadm clear svc:/system/filesystem/zfs/auto-snapshot:frequent
# svcadm clear svc:/system/filesystem/zfs/auto-snapshot:hourly
# svcadm clear svc:/system/filesystem/zfs/auto-snapshot:daily
(and likewise for the weekly and monthly instances, if they're in maintenance)
Regarding the SATA card and the mainboard slots, make sure that
whatever you get is compatible with the OS. In my case I chose
OpenSolaris which lacks support for Promise SATA cards. As a result,
my choices were very limited since I had chosen a Chenbro ES34069 case
and Intel Little Falls 2 mainboa
:04:31.2783 ereport.fs.zfs.checksum
Score one more for ZFS! This box has a measly 300GB mirrored, and I have
already seen dud data. (heh... It's also got non-ECC memory... ;)
Cheers!
Nathan.
Dennis Clarke wrote:
On Tue, 24 Mar 2009, Dennis Clarke wrote:
You would think so eh?
But a tr
LI-DS4
Cheers!
Nathan.
On 13/03/09 09:21 AM, Dave wrote:
Tim wrote:
On Thu, Mar 12, 2009 at 2:22 PM, Blake <mailto:blake.ir...@gmail.com>> wrote:
I've managed to get the data transfer to work by rearranging my disks
so that all of them sit on the integrated SATA contr
definitely time to bust out some mdb -K or boot -k and see what it's
moaning about.
I did not see the screenshot earlier... sorry about that.
Nathan.
Blake wrote:
I start the cp, and then, with prstat -a, watch the cpu load for the
cp process climb to 25% on a 4-core machine.
Load, mea
g up all your memory, and your
physical backing storage is taking a while to catch up?
Nathan.
Blake wrote:
My dump device is already on a different controller - the motherboards
built-in nVidia SATA controller.
The raidz2 vdev is the one I'm having trouble with (copying the same
and sorting out its own
ZIL and L2ARC would be interesting, though, given the propensity for
SSDs to be either fast read or fast write at the moment, you may well
require some whacky knobs to get it to do what you actually want it to...
hm.
Nathan.
Bill Sommerfeld wrote:
On Wed,
You could be the first...
Man up! ;)
Nathan.
Will Murnane wrote:
> On Thu, Jan 29, 2009 at 21:11, Nathan Kroenert
> wrote:
>> Seems a little pricey for what it is though.
> For what it's worth, there's also a 9010B model that has only one sata
> port and room for s
device...
Seems a little pricey for what it is though.
It's going onto my list of what I'd buy if I had the money... ;)
Nathan.
On 01/30/09 12:10, Janåke Rönnblom wrote:
> ACARD have launched a new RAM disk which can take up to 64 GB of ECC RAM
> while still looking like a standar
akes?
If you're keen to test the *actual* disk performance, you should just
use the underlying disk device like /dev/rdsk/c0t0d0s0
Beware, however, that any writes to these devices will indeed result in
the loss of the data on those devices, zpools or other.
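(A read-only test avoids that risk entirely; a minimal sketch, device name hypothetical:
# dd if=/dev/rdsk/c0t0d0s0 of=/dev/null bs=1024k count=1024   # raw sequential read
)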
Cheers.
Nathan.
Richard Elling wrote:
> Ro
command to work, but it would have its merits...
Cheers!
Nathan.
Jacob Ritorto wrote:
> Hi,
> I just said zfs destroy pool/fs, but meant to say zfs destroy
> pool/junk. Is 'fs' really gone?
>
> thx
> jake
Interesting. I'll have a poke...
Thanks!
Nathan.
Brandon High wrote:
> On Thu, Jan 22, 2009 at 1:29 PM, Nathan Kroenert
> wrote:
>> Are you able to qualify that a little?
>>
>> I'm using a realtek interface with OpenSolaris and am yet to experience an
Are you able to qualify that a little?
I'm using a realtek interface with OpenSolaris and am yet to experience
any issues.
Nathan.
Brandon High wrote:
> On Wed, Jan 21, 2009 at 5:40 PM, Bob Friesenhahn
> wrote:
>> Several people reported this same problem. They changed
An interesting interpretation of using hot spares.
Could it be that the hot-spare code only fires if the disk goes down
whilst the pool is active?
hm.
Nathan.
Scot Ballard wrote:
> I have configured a test system with a mirrored rpool and one hot spare.
> I powered the systems off,
Hey, Tom -
Correct me if I'm wrong here, but it seems you are not allowing ZFS any
sort of redundancy to manage.
I'm not sure how you can class it a ZFS fail when the Disk subsystem has
failed...
Or - did I miss something? :)
Nathan.
Tom Bird wrote:
> Morning,
>
> F
quick
explanation...
It would be interesting to see if you see the same issues using a
Solaris or other OS client.
Hope this helps somewhat. Let us know how it goes.
Nathan.
fredrick phol wrote:
> I'm currently experiencing exactly the same problem and it's been driving me
>
enable stuff like gzip-9 compression, which
might, on the slower Atom style chips, get in the way.
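(Turning it on to test is a one-liner; a sketch, dataset name hypothetical:
# zfs set compression=gzip-9 tank/data
# zfs get compressratio tank/data   # see what you get back for the CPU you spend
)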
Looking forward to any reports.
Nathan.
On 13/01/09 01:47 PM, JZ wrote:
> ok, was I too harsh on the list?
> sorry folks, as I said, I have the biggest ego.
>
> no one can hurt that by trying
So - will it be arriving in a patch? :)
Nathan.
Richard Elling wrote:
> Marion Hakanson wrote:
>> richard.ell...@sun.com said:
>>
>>> L2ARC arrived in NV at the same time as ZFS boot, b79, November 2007. It was
>>> not back-ported to Solaris 10u6.
>>
I've had some success.
I started with the ZFS on-disk format PDF.
http://opensolaris.org/os/community/zfs/docs/ondiskformat0822.pdf
The uberblocks all have magic value 0x00bab10c. Used od -x to find that value
in the vdev.
r...@opensolaris:~# od -A x -x /mnt/zpool.zones | grep "b10c 00ba"
0200
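(zdb can also dump the vdev labels directly, which is less fiddly than grepping od output; a sketch:
# zdb -l /mnt/zpool.zones   # print the vdev labels from the file-backed device
)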
Thanks for the reply. I tried the following:
$ zpool import -o failmode=continue -d /mnt -f zones
But the situation did not improve. It still hangs on the import.
I don't know if this is relevant or merely a coincidence but the zdb command
fails an assertion in the same txg_wait_synced function.
r...@opensolaris:~# zdb -p /mnt -e zones
Assertion failed: tx->tx_threads == 2, file ../../../uts/common/fs/zfs/txg.c,
line 423, function txg_wait_synced
Abort (core dumped)
I have moved the zpool image file to an OpenSolaris machine running 101b.
r...@opensolaris:~# uname -a
SunOS opensolaris 5.11 snv_101b i86pc i386 i86pc Solaris
Here I am able to attempt an import of the pool and at least the OS does not
panic.
r...@opensolaris:~# zpool import -d /mnt
pool: zones
I have a ZFS pool that has been corrupted. The pool contains a single device
which was actually a file on UFS. The machine was accidentally halted and now
the pool is corrupt. There are (of course) no backups and I've been asked to
recover the pool. The system panics when trying to do anything w
each
'surprise!'.
:)
I scrub once every month or so, depending on the system.
So, in direct answer to your question, No - You don't *need* to scrub.
But - It's better if you do. ;)
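(If you want it hands-off, a root crontab entry does the job; a sketch, pool name hypothetical:
0 3 1 * * /usr/sbin/zpool scrub tank   # scrub on the 1st of each month at 3am
)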
My 2c.
Nathan.
On 10/11/08 11:38 AM, Douglas Walker wrote:
> Hi,
>
> I'm
A quick google shows that it's not so much about the mirror, but the BE...
http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/
Might help?
Nathan.
On 7/11/08 02:39 PM, Krzys wrote:
> What am I doing wrong? I have sparc V210 and I am having difficulty with boot
> -L, I wa
Not wanting to hijack this thread, but...
I'm a simple man with simple needs. I'd like to be able to manually spin
down my disks whenever I want to...
Anyone come up with a way to do this? ;)
Nathan.
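(On stock Solaris, the per-device thresholds in /etc/power.conf are at least intended for this; a sketch, with a hypothetical physical device path, and run pmconfig afterwards to apply it:
device-thresholds /pci@0,0/pci-ide@4/ide@0/sd@0,0 5m   # spin down after 5 min idle
)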
Jens Elkner wrote:
> On Mon, Nov 03, 2008 at 02:54:10PM -0800, Yuan Ch
options are available in that current zfs / zpool version...
That way, you would never need to do anything to bash/zfs once it was
done the first time... do it once, and as ZFS changes, the prompts
change automatically...
Or - is this old hat, and how we do it already? :)
Nathan.
On 10/10/08 05:0
Interesting.
heh - I was piping to tail -10, so output rate was not an issue.
That being said, there is a large delta in your results and mine... If I
get a chance, I'll look into it...
I suspect it's a cached versus I/O issue...
Nathan.
On 1/10/08 10:02 AM, Bob Friesenhahn wrote
ication...
I generally look to keep directories to a size that allows the utilities
that work on and in it to perform at a reasonable rate... which for the
most part is around the 100K files or less...
Perhaps you are using larger hardware than I am for some of this stuff? :)
Nathan.
On 1/10
I second that question, and also ask what brand folks like for
performance and compatibility?
eBay is killing me with vast choice and no detail... ;)
Nathan.
Al Hopper wrote:
> On Wed, Aug 20, 2008 at 12:57 PM, Neal Pollack <[EMAIL PROTECTED]> wrote:
>> Ian Collins wrote:
>
AHCI ports.
It might seem like it'll be a lot of hassle getting it working, but in
the ZFS space, it works great pretty much out of the box (plus ethernet
address change if the nvidia driver is still busted... ;)
Cheers!
Nathan.
*Going like stink means going like a hairy goat - like lig
It starts with Z, which makes it one of the last to be considered if
it's listed alphabetically?
Nathan.
Rahul wrote:
> hi
> can you give some disadvantages of the ZFS file system??
>
> plzz its urgent...
>
> help me.
>
>
eadful xen
experiment :) so I'll be watching this thread with renewed interest to
see who else is doing what...
Nathan.
Bob Friesenhahn wrote:
> On Thu, 17 Jul 2008, Ben Rockwood wrote:
>
>> zfs list is mighty slow on systems with a large number of objects,
>> but ther
Even better would be using the ZFS block checksums (assuming we are only
summing the data, not its position or time :)...
Then we could have two files that have 90% the same blocks, and still
get some dedup value... ;)
Nathan.
Charles Soto wrote:
> A really smart nexus for dedup
at on Monday...
Awesome. Now to work on audio...
heh.
Nathan.
Nathan Kroenert wrote:
> Hey all -
>
> Just spent quite some time trying to work out why my 2 disk mirrored ZFS
> pool was running so slow, and found an interesting answer...
>
> System: new Gigabyte M750sli-DS4,
from the nvidia
website
Seems snappy enough. With 4 cores @ 2.2Ghz (phenom 9550) it's looking
like it'll do what I wanted quite nicely.
Later...
Nathan.
Software based raid / volume
manager operations are going to be pretty crappy.
I'm in the process of putting together a new play box that'll be AMD
Quad Core, 8GB memory and some newish SATA-II disks. I'll let you know
how that goes... It should smoke...
Cheers!
Nathan.
Bob F
I'm
hoping to recapture that magic. (I actually wanted to buy some more 570
based MBs but cannot get 'em in Australia any more... :)
Cheers!
Nathan.
l be grabbing a couple of 'better' USB hubs (Mine
are pretty much the cheapest I could buy) and see how that goes.
For gags, take ZFS out of the equation and validate that your hardware
is actually providing a stable platform for ZFS... Mine wasn't...
Nathan.
Evan Geller wrote:
sized disks, I know many don't consider
this an issue these days, but I'd still be inclined to keep /var (and
especially /var/tmp) separated from /
In ZFS, this is, of course, just two filesystems in the same pool, with
differing quotas...
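(Concretely, something like this; a sketch, with hypothetical dataset names and quota sizes:
# zfs create -o quota=10g rpool/var
# zfs create -o quota=4g rpool/var/tmp
)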
:)
Nathan.
Rich Teer wrote:
> On Wed,
seems to ring true.
Not at all sure about SAS.
If I'm wrong here, hopefully someone else will provide the complete set
of logic for determining cache enabling semantics.
:)
Nathan.
Brian Hechinger wrote:
> On Wed, Jun 04, 2008 at 09:17:05PM -0400, Ellis, Mike wrote:
>> The FAQ
When on leased equipment and previously using VxVM we were able to migrate even
a lowly UFS filesystems from one storage array to another storage array via the
evacuate process. I guess this makes us only the 3rd customer waiting for this
feature.
It would be interesting to ask other users of
you get the chance, deliberately panic the box to
make sure you can actually capture a dump...
dumpadm is your friend as far as checking where you are going to dump
to, and if it's one side of your swap mirror, that's bad, M'Kay?
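(A quick sanity check that doesn't require an actual panic; a sketch:
# dumpadm       # show the current dump device and savecore directory
# savecore -L   # capture a live dump to prove the path works end to end
)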
:)
Nathan.
Jorgen Lundman wrote:
> OK, this is
are that many of our disk /
target drivers were actually FMA'd up yet. heh - Shows what I know.
Does any of this make you feel any better (or worse)?
Nathan.
Mark A. Carlson wrote:
> fmd(1M) can log faults to syslogd that are already diagnosed. Why
> would you want the random spew as well
d pre caffeine... :)
Nathan
Vic Engle wrote:
> I'm hoping someone can help me understand a zfs data corruption symptom. We
> have a zpool with checksum turned off. Zpool status shows that data
> corruption occurred. The application using the pool at the time reported a
> "read"
yways. :)
Nathan.
Nicolas Williams wrote:
> On Wed, Apr 09, 2008 at 11:38:03PM -0400, Jignesh K. Shah wrote:
>> Can zfs send utilize multiple-streams of data transmission (or some sort
>> of multipleness)?
>>
>> Interesting read for background
>> http://people.planetpos
Did you do anything specific with the drive caches?
How is your ZFS performance?
Nathan. :)
Rich Teer wrote:
> On Wed, 19 Mar 2008, Terence Ng wrote:
>
>> I am new to Solaris. I have Sun X2100 with 2 x 80G harddisks (run as
>> email server, run tomcat, jboss and postgresql)
experience any sort of issues.
An external 500GB disk + external USB enclosure runs for what - $150?
That's what I use anyways. :)
Nathan.
Paul Kraus wrote:
> On Thu, Mar 6, 2008 at 10:22 AM, Brian D. Horn <[EMAIL PROTECTED]> wrote:
>
>> ZFS is not 32-bit safe. There
love the opportunity to roll their own.
OK - I'm going to shut up now. I think I have done this to death, and I
don't want to end up in everyone's kill filter.
Cheers!
Nathan.
Bob Friesenhahn wrote:
> On Tue, 4 Mar 2008, Nathan Kroenert wrote:
>>> The circus trick can
Bob Friesenhahn wrote:
> On Tue, 4 Mar 2008, Nathan Kroenert wrote:
>>
>> It does seem that some of us are getting a little caught up in disks
>> and their magnificence in what they write to the platter and read
>> back, and overlooking the potential value of a sim
blink of an eye. Or two. OK - maybe three... ;)
Maybe we could also use the SPU's as well... OK - So, I'm possibly
dreaming here, but hell, if I'm dreaming, why not dream big. :)
Nathan.
Bob Friesenhahn wrote:
> On Mon, 3 Mar 2008, me wrote:
>
>> I'm sure people using
directory? Does the rm operation need to hold that lock for all that
time? Is there a better way?
Oh - a 'find .' from the root of that filesystem will also hang waiting
for the lock. I can create new files and rm other files though, which is
good. I wonder what else might be potentially
other hardware RAID(X) environments that might find this useful?
Thoughts?
And of course, sorry if we already do this... :)
Nathan.
Jeff Bonwick wrote:
>> I thought RAIDZ would correct data errors automatically with the parity data.
>
> Right. However, if the data is corrupted while in
Hm -
Based on this detail from the page:
Change lever for switching between "Rotation
+ Hammering" , "Neutral" and "Hammering only"
I'd hope it could still hammer... Though I'd suspect the size of nails
it would hammer would be somewhat limited... ;)
wishes.
Uwe - am I close here?
Nathan.
Nicolas Williams wrote:
> On Tue, Feb 26, 2008 at 06:34:04PM -0800, Uwe Dippel wrote:
>>> The rub is this: how do you know when a file edit/modify has completed?
>> Not to me, I'm sorry, this is task of the engineer, the implementer.
useful with this over and above, say, 1 minute
snapshots.
Nathan.
Uwe Dippel wrote:
>> atomic view?
>
> Your post was on the gory details on how ZFS writes. "Atomic View" here is,
> that 'save' of a file is an 'atomic' operation: at one moment in time you
are no newer patches for it, just in case it's one for which
there was a known problem. (which was worked around in the driver)
I *think* there was an issue with at least one or two...
Cheers!
Nathan.
Sandro wrote:
> hi folks
>
> I've been running my fileserver at home with
And would drive storage requirements through the roof!!
I like it!
;)
Nathan.
Jonathan Loran wrote:
>
> David Magda wrote:
>> On Feb 24, 2008, at 01:49, Jonathan Loran wrote:
>>
>>> In some circles, CDP is big business. It would be a great ZFS offering.
>>
how existing
files are updated as well...
hm.
Cheers!
Nathan.
Richard Elling wrote:
> Nathan Kroenert wrote:
>> And something I was told only recently - It makes a difference if you
>> created the file *before* you set the recordsize property.
>
> Actually, it has always been
What about new blocks written to an existing file?
Perhaps we could make that clearer in the manpage too...
hm.
Mattias Pantzare wrote:
>> >
>> > If you created them after, then no worries, but if I understand
>> > correctly, if the *file* was created with 128K recordsize, then it'll
>> > k
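(The property is per-dataset and only takes effect for files created after it is set; a sketch, dataset name and value hypothetical:
# zfs set recordsize=8k tank/db
# zfs get recordsize tank/db
)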
Assuming I understand correctly.
Hopefully someone else on the list will be able to confirm.
Cheers!
Nathan.
Richard Elling wrote:
> Anton B. Rang wrote:
>>> Create a pool [ ... ]
>>> Write a 100GB file to the filesystem [ ... ]
>>> Run I/O against that file, doing
so looking for any other ideas on what
might be hurting me.
I also have set
zfs:zfs_nocacheflush = 1
in /etc/system
The Oracle Logs are on a separate Zpool and I'm not seeing the issue on
those filesystems.
The lockstats I have run are not yet all that interesting. If anyo
writes.
(A single thread of an N2 is only so fast... Just think of what you
could do with 64 of them ;)
I'll be interested to see what the others have to say. :)
Hope this helps.
Nathan.
Michael Stalnaker wrote:
> We're looking at building out several ZFS servers, and are considering