On 03/19/13 20:27, Jim Klimov wrote:
I disagree; at least, I've always thought differently:
the "d" device is the whole disk denomination, with a
unique number for a particular controller link ("c+t").
The disk has some partitioning table, MBR or GPT/EFI.
In these tables, partition "p0" stands f
Andrew Werchowiecki wrote:
             Total disk size is 9345 cylinders
             Cylinder size is 12544 (512 byte) blocks

                                                 Cylinders
      Partition   Status    Type          Start   End   Length    %
      =========   ======    ============  =====   ===   ======   ===
ilisation and performance for a ZFS COMSTAR target.
--
Andrew Gabriel
for large
transfers on 10GbE are:
   280 MB/s   mbuffer
   220 MB/s   rsh
   180 MB/s   HPN-ssh unencrypted
    60 MB/s   standard ssh
The tradeoffs: mbuffer is a little more complicated to script; rsh is, well, you know; and HPN-ssh requires rebuilding ssh and (probably) maintaining a second copy of it.
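For anyone who hasn't scripted mbuffer before, a minimal sketch of the send/receive pairing (host name, port, pool and snapshot names are only placeholders):

  # on the receiving host
  mbuffer -s 128k -m 1G -I 9090 | zfs receive -F tank/backup

  # on the sending host
  zfs send tank/data@snap | mbuffer -s 128k -m 1G -O recvhost:9090

The -m buffer size is the main knob; it just needs to be big enough to ride out bursts on either side.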
are portable to a different controller), are you able/willing to
swap it for one that Solaris is known to support well?
--------
--
Andrew Gabriel
or if you don't care about existing snapshots, use Shadow Migration to
move the data across.
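A minimal sketch of the Shadow Migration route, assuming Solaris 11's shadow property and placeholder paths/dataset names:

  # create the new filesystem with the old one as its shadow source;
  # data is pulled across as it is accessed (and in the background)
  zfs create -o shadow=file:///export/old_home rpool/export/new_home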
--
Andrew Gabriel
Arne Jansen wrote:
We have finished a beta version of the feature.
What does FITS stand for?
Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Schweiss, Chip
How can I determine for sure that my ZIL is my bottleneck? If it is the
bottleneck, is it possible to keep adding m
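One rough first check (assuming a pool called tank with a dedicated log device) is to watch whether synchronous writes are all queuing on the log vdev while the data disks sit idle:

  zpool iostat -v tank 1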
-r export/home | wc -l
1951
$ echo 1951 / 365 | bc -l
5.34520547945205479452
$
So you're slightly ahead of my 5.3 years of daily snapshots:-)
--
Andrew Gabriel
On 05/28/12 20:06, Iwan Aucamp wrote:
I'm getting sub-optimal performance with an mmap-based database
(mongodb) which is running on ZFS on Solaris 10u9.
System is Sun-Fire X4270-M2 with 2xX5680 and 72GB (6 * 8GB + 6 * 4GB)
ram (installed so it runs at 1333MHz) and 2 * 300GB 15K RPM disks
-
On 05/17/12 15:03, Bob Friesenhahn wrote:
On Thu, 17 May 2012, Paul Kraus wrote:
Why are you trying to tune the ARC as _low_ as possible? In my
experience the ARC gives up memory readily for other uses. The only
place I _had_ to tune the ARC in production was a couple systems
running an app
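For reference, the usual way to cap the ARC on Solaris is a zfs_arc_max setting in /etc/system (reboot required; the 8 GB value below is only an illustration):

  set zfs:zfs_arc_max = 0x200000000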
I just played and knocked this up (note the stunning lack of comments,
missing optarg processing, etc)...
Give it a list of files to check...
#define _FILE_OFFSET_BITS 64
/* The header names were lost in the archive (angle brackets stripped);
 * these five are a guess at the originals. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>

int
main(int argc, char **argv)
{
    int i;

    for (i = 1; i < argc; i++) {
        /* the per-file check was truncated in the archived post */
    }
    return (0);
}
relatively new, and the
controllers may not have been designed with SSDs in mind. That's likely
to be somewhat different nowadays, but I don't have any data to show
that either way.
--
Andrew Gabriel
aster, send only the difference between the current and recent
snapshots on the backup and then deploy it on backup.
Any ideas how this can be done?
It's called an incremental - it's part of the zfs send command line options.
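A minimal sketch, with placeholder dataset/snapshot/host names, of sending only the difference between two snapshots:

  zfs send -i tank/fs@monday tank/fs@tuesday | \
      ssh backuphost zfs receive -F backup/fs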
--
Andrew Gabriel
;s nothing in between.
Actually, there are a number of disk firmware and cache faults
in between, which ZFS has picked up over the years.
--
Andrew Gabriel
10,000 synchronous write IOPs, but the underlying
devices are only performing about 1/10th of that, due to ZFS coalescing
multiple outstanding writes.
Sorry, I'm not familiar with what type of load bonnie generates.
--
Andrew Gabriel |
Solaris Systems Architect
, which is just silly to
fight with anyway.
Gregg
--
Andrew Gabriel
On 11/15/11 23:40, Tim Cook wrote:
On Tue, Nov 15, 2011 at 5:17 PM, Andrew Gabriel
<andrew.gabr...@oracle.com> wrote:
On 11/15/11 23:05, Anatoly wrote:
Good day,
The speed of send/recv is around 30-60 MBytes/s for initial
send and 17-25 MBytes
sec, so it's pretty much limited by
the ethernet.
Since you have provided none of the diagnostic data you collected, it's
difficult to guess what the limiting factor is for you.
--
Andrew Gabriel
s of
disk (SSD), block numbers are moved around to achieve wear leveling, so
blacklisting a block number won't stop you reusing that real block.
--
Andrew Gabriel (from mobile)
--- Original message ---
From: Edward Ned Harvey
To: didier.reb...@u-bourgogne.fr, zfs-discuss@opensolari
a ufs root disk, but any attempt to put a serious load
on it, and it corrupted data all over the place. So if you're going to
try one, make sure you hammer it very hard in a test environment before
you commit anything important to it.
--
Andrew Gabriel
Block: 1380679072    Error Block: 1380679072
Aug 16 13:14:16 nas-hz-02 scsi: Vendor: DELL    Serial Number:
Aug 16 13:14:16 nas-hz-02 scsi: Sense Key: Unit Attention
Aug 16 13:14:16 nas-hz-02 scsi: ASC: 0x29 (device internal re
size (although that alone doesn't necessarily tell you much - a dtrace
quantize aggregation would be better). Also check service times on the
disks (iostat) to see if there's one which is significantly worse and
might be going bad.
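For example (placeholder pool/devices aside), a quantize aggregation of I/O sizes per device, and per-disk service times:

  dtrace -n 'io:::start { @[args[1]->dev_statname] = quantize(args[0]->b_bcount); }'
  iostat -xn 5      # watch asvc_t for a disk that stands out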
--
Andrew Gabriel
none of my 'data' disks have
been 'configured' yet. I wanted to ID them before adding them to pools.
Use p0 on x86 (whole disk, without regard to any partitioning).
Any other s or p device node may or may not be there, depending on what
partitions/slices are on
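One way to identify an unconfigured disk (device name below is a placeholder) is to keep it busy reading through p0 and watch which activity LED stays lit:

  dd if=/dev/rdsk/c1t3d0p0 of=/dev/null bs=1024k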
e end of the URL). It conks out at version 31 though.
I have systems back to build 125, so I tend to always force zpool
version 19 for that (and that automatically limits zfs version to 4).
There's also some info about some builds on the zfs wikipedia page
http://en.wikipedia.org/wiki/Zfs
e to be able to find a non corrupt version of the data.
When you have a new hardware setup, I would perform scrubs more
frequently as a further check that the hardware doesn't have any
systemic problems, until you have gained confidence in it.
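For example (placeholder pool name), a manual scrub and a weekly one from cron:

  zpool scrub tank
  # crontab entry, 03:00 every Sunday:
  0 3 * * 0 /usr/sbin/zpool scrub tank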
What's the RAID layout of your pool ("zpool status")?
--
Andrew Gabriel
Does anyone know if it's OK to do zfs send/receive between zpools with
different ashift values?
--
Andrew Gabriel
ssion
and/or if you wish to reserve a minimum space...
zfs set reservation=50g logs/oracle
zfs set reservation=100g logs/session
Do I have to use the legacy mount options?
You don't have to.
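A minimal sketch of the two options, using the dataset names above and a placeholder mountpoint:

  # let ZFS manage the mount
  zfs set mountpoint=/u01/oradata logs/oracle

  # or keep it under vfstab control
  zfs set mountpoint=legacy logs/oracle
  # /etc/vfstab line:
  # logs/oracle  -  /u01/oradata  zfs  -  yes  -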
--
Andrew Gabriel
0 bytes in 0 directories.
0 bytes in 0 files.
10143232 bytes free.
512 bytes per allocation unit.
19811 total allocation units.
19811 available allocation units.
andrew@opensolaris:~# mount -F pcfs /dev/zvol/dsk/rpool/vol1 /mnt
andrew@opensolaris:~#
--
Andrew Gabriel
gate
Barracuda XT 2Tb disks (which are a bit more Enterprise than the list
above), just plugged them in, and so far they're OK. Not had them long
enough to report on longevity.
--
Andrew Gabriel
On 06/30/11 08:50 PM, Orvar Korvar wrote:
I have a 1.5TB disk that has several partitions. One of them is 900GB. Now I
can only see 300GB. Where is the rest? Is there a command I can do to reach the
rest of the data? Will scrub help?
Not much to go on - no one can answer this.
How did you g
On 06/27/11 11:32 PM, Bill Sommerfeld wrote:
On 06/27/11 15:24, David Magda wrote:
Given the amount of transistors that are available nowadays I think
it'd be simpler to just create a series of SIMD instructions right
in/on general CPUs, and skip the whole co-processor angle.
see: http://en.wi
Richard Elling wrote:
On Jun 19, 2011, at 6:04 AM, Andrew Gabriel wrote:
Richard Elling wrote:
Actually, all of the data I've gathered recently shows that the number of IOPS
does not significantly increase for HDDs running random workloads. However the
response time does :-( My
taking into account priority, such as whether the I/O
is synchronous or asynchronous, and the age of existing queue entries). I
had much fun playing with this at the time.
--
Andrew Gabriel
diff <snapshot> [<snapshot> | <filesystem>]
--
Andrew Gabriel
On 05/14/11 01:08 PM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Donald Stahl
Running a zpool scrub on our production pool is showing a scrub rate
of about 400K/s. (When this pool was first set up we saw rates in
is synchronous.
--
Andrew Gabriel
Certainly, fastfs (a similar although more dangerous option for ufs)
makes ufs to ufs copying significantly faster.
*ufsrestore works fine on ZFS filesystems (although I haven't tried it
with any POSIX ACLs on the original ufs filesystem, which would probably
simply get l
f oSol 134?
What does "zfs get sync" report?
--
Andrew Gabriel
ss, and I think you'll need a Windows system to
actually flash the BIOS.
You might want to do a google search on "3114 data corruption" too,
although it never hit me back when I used the cards.
--
Andrew Gabriel
drivers had been developed. I would suggest
looking for something more modern.
--
Andrew Gabriel
e scanned all the
surfaces on startup to build up an internal table of the relative
misalignment of tracks across the surfaces, but this rapidly became
unviable as drive capacity increased and this scan would take an
unreasonable length of time. It may be that modern drives learn this as
they
ely
provisioned, in order to deallocate blocks in the LUN which have
previously been allocated, but whose contents have since been invalidated.
In this case, both ZFS and whatever is providing the storage LUN would
need to support TRIM.
Out of interest, what other filesystems out there toda
dicator of impending failure, such
as the various error and retry counts.
--
Andrew Gabriel
On 01/15/11 11:32 PM, Gal Buki wrote:
Hi
I have a pool with a raidz2 vdev.
Today I accidentally added a single drive to the pool.
I now have a pool that partially has no redundancy as this vdev is a single
drive.
Is there a way to remove the vdev
Not at the moment, as far as I know.
and
Sridhar,
You have switched to a new disruptive filesystem technology, and it has
to be disruptive in order to break out of all the issues older
filesystems have, and give you all the new and wonderful features.
However, you are still trying to use old filesystem techniques with it,
which is
earlier opensolaris versions, but it
no longer works).
If you have a support contract, raise a call and ask to be added to
RFE 6744320.
--
Andrew Gabriel
em) immediately
(so you can repeat the hardware snapshot again if it fails), maybe you
will be lucky.
The right way to do this with zfs is to send/recv the datasets to a
fresh zpool, or (S10 Update 9) to create an extra zpool mirror and then
split it off with zpool split.
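A minimal sketch of the zpool split route (pool and device names are placeholders):

  zpool attach tank c0t2d0 c0t3d0    # mirror onto the extra disk, wait for resilver
  zpool split tank tankcopy          # detach one side as a new, exported pool
  zpool import tankcopy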
--
3017015200 50
format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c7t0d0
          /p...@0,0/pci1028,2...@1f,2/d...@0,0
Thanks for any idea.
--
Andrew Gabriel
itch to disable pseudo 512b access so
you can use the 4k native. The industry as a whole will transition to 4k
sector size over the next few years, but these first 4k-sector HDs are
rather less useful with 4k-sector-aware OS's. Let's hope other
manufacturers get this right in their first
del is no longer available now. I'm going to have to swap out for
bigger disks in the not too distant future.
--
Andrew Gabriel
S is 3 rather than 2?
If you look at zfs_create_fs(), you will see the first 3 items created
are:
Create zap object used for SA attribute registration
Create a delete queue.
Create root znode.
Hence, inode 3.
--
Andrew Gabriel
ve you poor performance if
you are accessing both at the same time, as you are forcing head seeking
between them.
--
Andrew Gabriel
if the target is a new hard drive can I use
this zfs send al...@3 > /dev/c10t0d0 ?
That command doesn't make much sense for the purpose of doing anything
useful.
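What would make more sense (with placeholder pool/snapshot names, since the original ones are truncated above) is to create a pool on the new drive and receive into it:

  zpool create newpool c10t0d0
  zfs send tank/fs@snap | zfs receive newpool/fs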
--
Andrew Gabriel
e error you included isn't a timeout.
The SSD's themselves are all Intel X-25E's (32GB) with firmware 8860
and the LSI 1068 is a SAS1068E B3 with firmware 011c0200 (1.28.02.00).
I'm not intimately familiar with the firmware versions, but if you're
having problems, making s
What you say is true only on the system itself. On an NFS client system, 30
seconds of lost data in the middle of a file (as per my earlier example) is a
corrupt file.
-original message-
Subject: Re: [zfs-discuss] Solaris startup script location
From: Edward Ned Harvey
Date: 18/08/2010 17:17
>
ing a file sequentially,
you will likely find an area of the file is corrupt because the data was
lost.
--
Andrew Gabriel
Andrew Gabriel wrote:
Alxen4 wrote:
Is there any way run start-up script before non-root pool is mounted ?
For example I'm trying to use ramdisk as ZIL device (ramdiskadm )
So I need to create ramdisk before actual pool is mounted otherwise
it complains that log device is missing :)
way to do this is to "zfs set sync=disabled ..." on relevant
filesystems.
I can't recall which build introduced this, but prior to that, you can
set zfs:zil_disable=1 in /etc/system, but that applies to all
pools/filesystems.
--
Andrew Gabriel
undancy on flaky storage is not a good place to be.
--
Andrew Gabriel
o see output of: zfs list -t all -r zpool/filesystem
There is a problem - the snapshot is too old, and, consequently, there is a
question -- can I browse the pre-rollback corrupted branch of the FS? And, if I
can, how?
--
Andrew Gabriel
Tony MacDoodle wrote:
I have 2 ZFS pools all using the same drive type and size. The
question is can I have 1 global hot spare for both of those pools?
Yes. A hot spare disk can be added to more than one pool at the same time.
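For example (placeholder pool and device names):

  zpool add pool1 spare c4t0d0
  zpool add pool2 spare c4t0d0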
--
Andrew Gabriel
ped out drives, this works well,
and avoids ending up with sprawling lower capacity drives as your pool
grows in size. This is what I do at home. The freed-up drives get used
in other systems and for off-site backups. Over the last 4 years, I've
upgraded from 1/4TB, to 1/2TB, and now on 1TB dri
e you do a planned reduction of the pool
redundancy (e.g. if you're going to detach a mirror side in order to
attach a larger disk), most particularly if you are reducing the
redundancy to nothing.
--
Andrew Gabriel
use for one which is stopped.
However, you haven't given anything like enough detail here of your
situation and what's happening for me to make any worthwhile guesses.
--
Andrew Gabriel
Just wondering if anyone has experimented with working out the best zvol
recordsize for a zvol which is backing a zpool over iSCSI?
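(For a zvol the property is volblocksize rather than recordsize, and it can only be set at creation time.) A minimal sketch, with placeholder names and an 8k block size purely as an illustration:

  zfs create -V 100g -o volblocksize=8k tank/iscsivol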
--
Andrew Gabriel
ential
implications before embarking on this route.
(As I said before, the zpool itself is not at any additional risk of
corruption, it's just that you might find the zfs filesystems with
sync=disabled appear to have been rewound by up to 30 seconds.)
If you're unsure, then adding SSD no
dea
for some other applications though (although Linux ran this way for many
years, seemingly without many complaints). Note that there's no
increased risk of the zpool going bad - it's just that after the reboot,
filesystems with sync=disabled will look like they were rewo
tcat.
I haven't figured out where to get netcat nor the syntax for using it yet.
I used a buffering program of my own, but I presume mbuffer would work too.
--
Andrew Gabriel
between the machines due to the CPU limiting on the scp and gunzip
processes.
Also, if you have multiple datasets to send, might be worth seeing if
sending them in parallel helps.
--
Andrew Gabriel
takes nearly 19 hours now, and
hammers the heads quite hard. I keep meaning to reduce the scrub
frequency now it's getting to take so long, but haven't got around to
it. What I really want is pause/resume scrub, and the ability to trigger
the pause/resume from the screensaver (or
I find my home data growth is slightly less than the rate
of disk capacity increase, so every 18 months or so, I simply swap out
the disks for higher capacity ones.
--
Andrew Gabriel
FS, amongst other Solaris features.
--
Andrew Gabriel
evious installs). It's an amd64 box.
Both OS versions show the same problem.
Do I need to run a scrub? (will take days...)
Other ideas?
It might be interesting to run it under truss, to see which syscall is
returning that error.
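For example (the output file name and command are only placeholders):

  truss -f -o /tmp/app.truss failing_command
  grep ' Err#' /tmp/app.truss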
--
Andrew Gabriel
for disks.
(Actually, vanity naming for disks should probably be brought out into
a separate RFE.)
--
Andrew Gabriel |
Solaris Systems Architect
few lines above, another test (for a valid bootfs name) does get
bypassed in the case of clearing the property.
Don't know if that alone would fix it.
--
Andrew Gabriel |
Solaris Systems Architect
if NV ZIL. Trouble is that no other operating systems or
filesystems work this well with such relatively tiny amounts of NV
storage, so such a hardware solution is very ZFS-specific.
--
Andrew Gabriel |
Solaris Systems Architect
up on the ARC (memory) anyway. If you don't have enough
RAM for this to help, then you could add more memory, and/or an SSD as a
L2ARC device ("cache" device in zpool command line terms).
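Adding the cache device is a one-liner (placeholder pool and device names):

  zpool add tank cache c5t0d0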
--
Andrew Gabriel
Robert Milkowski wrote:
To add my 0.2 cents...
I think starting/stopping scrub belongs to cron, smf, etc. and not to
zfs itself.
However what would be nice to have is an ability to freeze/resume a
scrub and also limit its rate of scrubbing.
One of the reason is that when working in SAN envi
Thomas Burgess wrote:
I scrub once a week.
I think the general rule is:
once a week for consumer grade drives
once a month for enterprise grade drives.
and before any planned operation which will reduce your
redundancy/resilience, such as swapping out a disk for a new larger one
when growin
Dedhi Sujatmiko wrote:
As a user of an el-cheapo US$18 SIL3114, I managed to make the system
freeze continuously when one of the SATA cables got disconnected. I am
using an 8-disk RAIDZ2 driven by 2 x SIL3114
System is still able to answer the ping, but SSH and console are no
longer responsive, obviously
Gary Mills wrote:
We have an IMAP e-mail server running on a Solaris 10 10/09 system.
It uses six ZFS filesystems built on a single zpool with 14 daily
snapshots. Every day at 11:56, a cron command destroys the oldest
snapshots and creates new ones, both recursively. For about four
minutes ther
Ian Collins wrote:
I was running zpool iostat on a pool comprising a stripe of raidz2
vdevs that appears to be writing slowly and I notice a considerable
imbalance of both free space and write operations. The pool is
currently feeding a tape backup while receiving a large filesystem.
Is this
Jesse Reynolds wrote:
Does ZFS store a log file of all operations applied to it? It feels like someone has gained access and run 'zfs destroy mailtmp' to me, but then again it could just be my own ineptitude.
Yes...
zpool history rpool
--
Andr
Darren J Moffat wrote:
You have done a risk analysis and if you are happy that your NTFS
filesystems could be corrupt on those ZFS ZVOLs if you lose data then
you could consider turning off the ZIL. Note though that it isn't
just those ZVOLs you are serving to Windows that lose access to a ZIL
Darren J Moffat wrote:
On 12/02/2010 09:55, Andrew Gabriel wrote:
Can anyone suggest how I can get around the above error when
sending/receiving a ZFS filesystem? It seems to fail when about 2/3rds
of the data have been passed from send to recv. Is it possible to get
more diagnostics out?
You
m is
currently running build 125 and receiving system something approximating
to 133, but I've had the same problem with this filesystem for all
builds I've used over the last 2 years.
--
Cheers
Andrew Gabriel
hen I demonstrate this on the
SSD/Flash/Turbocharge Discovery Days I run in the UK from time to time (the
name changes over time ;-).
--
Andrew Gabriel
Brandon High wrote:
On Wed, Feb 3, 2010 at 3:13 PM, David Dyer-Bennet wrote:
Which is to say that 45 drives is really quite a lot for a HOME NAS.
Particularly when you then think about backing up that data.
The origin of this thread was how to buy a J4500 (48 drive chassis).
One thin
LICON, RAY (ATTPB) wrote:
Thanks for the reply.
In many situations, the hardware design isn't up to me and budgets tend
to dictate everything these days. True, nobody wants to swap, but the
question is "if" you had to -- what design serves you best. Independent
swap slices or putting it all unde
so it doesn't need to swap.
Then it doesn't matter what the performance of the swap device is.
--
Andrew Gabriel
Michelle Knight wrote:
Fair enough.
So where do you think my problem lies?
Do you think it could be a limitation of the driver I loaded to read the ext3
partition?
Without knowing exactly what commands you typed and exactly what error
messages they produced, and which directories/files are
Robert Milkowski wrote:
I think one should actually compare whole solutions - including servers,
fc infrastructure, tape drives, robots, software costs, rack space, ...
Servers like x4540 are ideal for zfs+rsync backup solution - very
compact, good $/GB ratio, enough CPU power for its capacity
Edward Ned Harvey wrote:
A poster in another forum mentioned that Seagate (and Hitachi, amongst
others) is now selling something labeled as "NearLine SAS" storage
(e.g. Seagate's NL35 series).
Industry has moved again. Better get used to it.
Nearline SAS is a replacement for SATA. It's a low
Mark Grant wrote:
Yeah, this is my main concern with moving from my cheap Linux server with no
redundancy to ZFS RAID on OpenSolaris; I don't really want to have to pay twice
as much to buy the 'enterprise' disks which appear to be exactly the same
drives with a flag set in the firmware to lim
Bob Friesenhahn wrote:
The interesting thing for the future will be non-volatile main memory,
with the primary concern being how to firewall damage due to a bug.
You would be able to turn your computer off and back on and be working
again almost instantaneously.
Some of us are old enough (just)
export/zones/s...@20091122 0 - 5.21G -
a20$
All the ones with USED = 0 haven't changed. Don't know if this info is
available without spinning up disks though.
--
Andrew Gabriel
Bill Sommerfeld wrote:
Yesterday's integration of
6678033 resilver code should prefetch
as part of changeset 74e8c05021f1 (which should be in build 129 when it
comes out) may improve scrub times, particularly if you have a large
number of small files and a large number of snapshots. I recentl
Colin Raven wrote:
Hi all!
I've decided to take the "big jump" and build a ZFS home filer
(although it might also do "other work" like caching DNS, mail,
usenet, bittorent and so forth). YAY! I wonder if anyone can shed some
light on how long a pool scrub would take on a fairly decent rig.
Th