Sign me up; I'm not fluent in Forth, but testing something new is always fun.
Cool; to start with, do you have virtual appliance software like VMware or
VirtualBox? Any experience with creating ZFS pools in such software?
VirtualBox/QEMU; QEMU is able
I think that we may have something to test next month.
Right now
who is interested in tackling this with me? I can't do it alone... I at least
need testers who will provide feedback and edge-case testing.
Woohoo! Great! I am using ZFS boot environments with beadm, so I can
test a bit.
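Since beadm is mentioned, here is a minimal sketch of the boot-environment
round trip such testing would exercise (assuming sysutils/beadm is installed;
the BE name is illustrative):

  beadm create testBE      # snapshot the running system as a new boot environment
  beadm list               # verify it appears alongside the active BE
  beadm activate testBE    # make it the default for the next boot
  # reboot, test, then switch back by activating the previous BE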
29.09.2013 00:30, Teske, Devin wrote:
Interested in feedback, but moreover I would like to see who is
interested in tackling this with me? I can't do it alone... I at least
need testers who will provide feedback and edge-case testing.
In my recent interview on bsdnow.tv, I was pinged on BEs in Forth.
I'd like to revisit this.
Back on Sept 20th, 2012, I posted some pics demonstrating what
exactly the code that was in HEAD (at the time) was, and still is, capable of.
These three pictures (posted the same day) tell a story:
1. You boot to the
Hi,
I want to encrypt some disks on my server with the ZFS encryption property,
but it is not available.
Does anybody have any experience with this?
http://docs.oracle.com/cd/E23824_01/html/821-1448/gkkih.html#scrolltoc
http://www.oracle.com/technetwork/articles/servers
On 03/09/2013 14:14, Emre Çamalan wrote:
Hi,
I want to encrypt some disks on my server with the ZFS encryption property,
but it is not available.
That would require ZFS v30. As far as I am aware Oracle has not
released the code under CDDL.
From http://forums.freebsd.org/showthread.php?t=30036
On 03/09/2013 16:53, Alan Somers wrote:
GELI is full-disk encryption. It's far superior to ZFS encryption.
Yup, but is there a possibility to encrypt a ZFS volume (not a whole
pool) with a separate GELI partition?
Also, in-ZFS encryption would be a nice thing if it could work like an
LVM
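One way to get exactly that is to layer GELI on a zvol rather than on a
partition; a minimal sketch, assuming a pool named tank and illustrative
names/sizes (/mnt/secret assumed to exist):

  zfs create -V 10g tank/secret              # carve a 10 GB ZFS volume out of the pool
  geli init -s 4096 /dev/zvol/tank/secret    # set up GELI on it (prompts for a passphrase)
  geli attach /dev/zvol/tank/secret          # yields /dev/zvol/tank/secret.eli
  newfs -U /dev/zvol/tank/secret.eli         # any filesystem can go on the encrypted device
  mount /dev/zvol/tank/secret.eli /mnt/secret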
On Fri, 30 Aug 2013, Patrick wrote:
On Fri, Aug 30, 2013 at 1:30 AM, Andriy Gapon a...@freebsd.org wrote:
I don't have an exact recollection of what is installed by freebsd-update -
are *.symbols files installed?
Doesn't look like it. I wonder if I can grab that from a distro site
On Sat, 31 Aug 2013, Dmitry Morozovsky wrote:
Doesn't look like it. I wonder if I can grab that from a distro site
or somewhere?
it seems so:
On Thu, Aug 29, 2013 at 2:32 PM, Andriy Gapon a...@freebsd.org wrote:
on 29/08/2013 19:37 Patrick said the following:
I've got a system running on a VPS that I'm trying to upgrade from 8.2
to 8.4. It has a ZFS root. After booting the new kernel, I get:
Fatal trap 12: page fault while
on 30/08/2013 11:17 Patrick said the following:
Hmm...
(kgdb) list *vdev_mirror_child_select+0x67
No symbol table is loaded. Use the file command.
Do I need to build the kernel from source myself? This kernel is what
freebsd-update installed during part 1 of the upgrade.
I don't have
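For reference, a sketch of the kgdb session being attempted, assuming a
kernel with matching symbol files and a crash dump in /var/crash (paths
illustrative):

  kgdb /boot/kernel/kernel /var/crash/vmcore.0
  # within kgdb, resolve the fault address to a source line:
  list *vdev_mirror_child_select+0x67
  backtrace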
I've got a system running on a VPS that I'm trying to upgrade from 8.2
to 8.4. It has a ZFS root. After booting the new kernel, I get:
Fatal trap 12: page fault while in kernel mode
cpuid = 0; apic id = 00
fault virtual address = 0x40
fault code = supervisor read data, page
uberblocks with their respective
transaction ids. You can take the highest one (it is not necessarily the
last one) and try to import the pool with:
zpool import -N -o readonly=on -f -R /pool -F -T transaction_id pool
I had good luck with ZFS recovery with the following approach:
1) Use zdb to identify a TXG
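A sketch of that first step, assuming the pool sits on ada0p3 (device name
illustrative):

  zdb -ul /dev/ada0p3    # dump every uberblock in the vdev labels, with txg and timestamp
  # pick a txg from the output and hand it to zpool import -T as shown above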
11.07.2013 17:43, Reid Linnemann wrote:
So recently I was trying to transfer a root-on-ZFS zpool from one pair of
disks to a single, larger disk. As I am wont to do, I botched the transfer
up and decided to destroy the ZFS filesystems on the destination and start
again. Naturally I was up
Hey presto!
# zfs list
NAME            USED   AVAIL  REFER  MOUNTPOINT
bucket          485G   1.30T   549M  legacy
bucket/tmp       21K   1.30T    21K  legacy
bucket/usr      29.6G  1.30T  29.6G  /mnt/usr
bucket/var      455G   1.30T  17.7G  /mnt/var
bucket/var/srv  437G   1.30T   437G  /mnt/var
So recently I was trying to transfer a root-on-ZFS zpool from one pair of
disks to a single, larger disk. As I am wont to do, I botched the transfer
up and decided to destroy the ZFS filesystems on the destination and start
again. Naturally I was up late working on this, being sloppy and drowsy
Change 624068 by willa@willa_SpectraBSD on 2012/08/09 09:28:38
Allow multiple opens of geoms used by vdev_geom.
Also ignore the pool guid for spares
But it's probably safe. An alternative, much more
complicated, solution would be to have ZFS open the device
non-exclusively. This patch will do that. Caveat programmer: I
haven't tested this patch in isolation.
This change is quite a bit more than necessary, and probably wouldn't
apply to FreeBSD
Will,
Thanks, that makes sense. I know this is all a crap shoot, but I've really
got nothing to lose at this point, so this is just a good opportunity to
rummage around the internals of ZFS and learn a few things. I might even
get lucky and recover some data!
On Thu, Jul 11, 2013 at 10:59 AM
The attached patch causes ZFS to base the minimum transfer size for a
new vdev on the GEOM provider's stripesize (physical sector size) rather
than sectorsize (logical sector size), provided that stripesize is a
power of two larger than sectorsize and smaller than or equal to
VDEV_PAD_SIZE
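For context, a sketch of how to see the two sizes the patch distinguishes,
plus the gnop workaround commonly used to force 4K alignment before this
(device and pool names illustrative):

  diskinfo -v ada0 | grep -E 'sectorsize|stripesize'
  #   512     sectorsize    (logical)
  #   4096    stripesize    (physical)
  gnop create -S 4096 ada0      # shim that reports 4K sectors
  zpool create tank ada0.nop    # a pool created through the shim gets ashift=12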
for everyone.
Regards
Steve
- Original Message -
From: Dag-Erling Smørgrav d...@des.no
To: freebsd...@freebsd.org; freebsd-hackers@freebsd.org
Cc: ivo...@freebsd.org
Sent: Wednesday, July 10, 2013 10:02 AM
Subject: Make ZFS use the physical sector size when computing initial ashift
Steven Hartland kill...@multiplay.co.uk writes:
Hi DES, unfortunately you need quite a bit more than this to work
compatibly.
*chirp* *chirp* *chirp*
DES
--
Dag-Erling Smørgrav - d...@des.no
there will be a nice conclusion coming from that about how people want to
proceed, and we'll be able to get a change in that works for everyone.
Hmm. I wonder if the simplest approach would be the best. I mean, adding a
flag to zpool.
At home I have a playground FreeBSD machine with a ZFS zmirror, and, you
There's lots more to consider when considering a way forward, not least of
all that ashift isn't a zpool configuration option (it is per top-level
vdev), plus the space implications of moving from 512b to 4k; see previous
and current discussions on zfs-de...@freebsd.org and z...@lists.illumos.org
for details
On 07/10/13 02:02, Dag-Erling Smørgrav wrote:
The attached patch causes ZFS to base the minimum transfer size for
a new vdev on the GEOM provider's stripesize (physical sector size)
rather than sectorsize (logical sector size), provided
This is on my list of things to upstream in the next week or so, after
I add logic to the userspace tools to report whether or not the TLVs
(top-level vdevs) in a pool are using an optimal allocation size. This is
only possible if you actually make ZFS fully aware of the logical, physical,
and configured allocation sizes. All of the other patches I've seen just
treat physical as logical.
Reading through your patch it seems that your logical_ashift equates to
the current ashift values
- Original Message -
From: Justin T. Gibbs
...
One issue I did spot in your patch is that you currently expose
zfs_max_auto_ashift as a sysctl but don't clamp its value, which would
cause problems should a user configure values > 13.
I would expect the zio pipeline to simply insert an
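For illustration, the knob in question as it appears at runtime; clamping
means a write above 13 should be rejected rather than silently accepted
(values illustrative):

  sysctl vfs.zfs.max_auto_ashift       # read the current cap
  sysctl vfs.zfs.max_auto_ashift=12    # cap new vdevs at 4K allocation size
  sysctl vfs.zfs.max_auto_ashift=14    # above 13: should fail once clamping is in place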
here is my real world production example of users' mail as well as
documents:
/dev/mirror/home1.eli  2788  1545  1243  55%  1941057  20981181  8%  /home
Not the same data, I imagine.
A mix. 90% mailboxes and user data (documents, pictures); the rest are some
.tar.gz
On 2013-01-23 21:22, Wojciech Puchar wrote:
While RAID-Z is already a king of bad performance,
I don't believe RAID-Z is any worse than RAID5. Do you have any actual
measurements to back up your claim?
it is clearly described even in ZFS papers. Both on reads and writes it
gives single
then stored on a different disk. You could think of it as a regular RAID-5
with stripe size of 32768 bytes.
PostgreSQL uses 8192 byte pages that fit evenly both into ZFS record size and
column size. Each page access requires only a single disk read. Random i/o
performance here should be 5
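A sketch of the alignment described above, assuming a dataset dedicated to
PostgreSQL (names illustrative):

  zfs create tank/pgdata
  zfs set recordsize=8k tank/pgdata    # match ZFS records to PostgreSQL's 8192-byte pages
  zfs get recordsize tank/pgdata       # verify before loading data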
Wow! OK. It sounds like you (or someone like you) can answer some of my
burning questions about ZFS.
On Thu, Jan 24, 2013 at 8:12 AM, Adam Nowacki nowa...@platinum.linux.pl wrote:
Let's assume a 5-disk raidz1 vdev with ashift=9 (512-byte sectors).
A worst case scenario could happen if your
good with ZFS.
On 2013-01-24 15:24, Wojciech Puchar wrote:
For me the reliability ZFS offers is far more important than pure
performance.
Except it is "on paper" reliability.
This "on paper" reliability in practice saved a 20TB pool. See one of my
previous emails. Any other filesystem or hardware/software raid
On 2013-01-24 15:45, Zaphod Beeblebrox wrote:
Ok... so my question then would be... what of the small files? If I write
several small files at once, does the transaction use a record, or does
each file need to use a record? Additionally, if small files use
sub-records, when you delete that
$size : $count;size=$[size*2]; count=0; fi; done) imapfilesizelist
... now the new machine has two 2T disks in a ZFS mirror --- so I suppose
it won't waste as much space as a RAID-Z ZFS --- in that files less than
512 bytes will take 512 bytes? By far the most common case is 2048 bytes
... so
So far I've not lost a single ZFS pool or any data stored.
so far my house wasn't robbed.
There are 3,236,316 files summing to 97,500,008,691 bytes. That puts the
average file at 30,127 bytes. But for the full breakdown:
quite low. what do you store.
here is my real world production example of users mail as well as
documents.
/dev/mirror/home1.eli 2788 1545 124355%
On Thu, Jan 24, 2013 at 2:26 PM, Wojciech Puchar
woj...@wojtek.tensor.gdynia.pl wrote:
Apparently you're not really following
because of this.
I have never ever personally lost any data on ZFS. Yes, performance is
another topic, and you must know what you are doing and what your
usage pattern is, but from a reliability standpoint, to me ZFS looks more
durable than anything else.
P.S.: My home NAS is running freebsd
While RAID-Z is already a king of bad performance,
I don't believe RAID-Z is any worse than RAID5. Do you have any actual
measurements to back up your claim?
it is clearly described even in ZFS papers. Both on reads and writes it
gives single drive random I/O performance
This is because RAID-Z spreads each block out over all disks, whereas RAID5
(as it is typically configured) puts each block on only one disk. So to
read a block from RAID-Z, all data disks must be involved, vs. for RAID5
only one disk needs to have its head moved.
For other workloads
On Wed, 23 Jan 2013 14:26:43 -0600, Chris Rees utis...@gmail.com wrote:
So we have to take your word for it?
Provide a link if you're going to make assertions, or they're no more
than your own opinion.
I've heard this same thing -- every vdev == 1 drive in performance. I've
never seen
of data integrity checks and other bells
and whistles ZFS provides.
--Artem
I've heard this same thing -- every vdev == 1 drive in performance. I've
never seen any proof/papers on it though.
read original ZFS papers.
gives single drive random I/O performance.
For reads - true. For writes it probably behaves better than RAID5
yes, because as with reads it gives single drive performance. small writes
on RAID5 give lower than single disk performance.
If you need higher performance, build your pool out
On 23 January 2013 21:24, Wojciech Puchar
woj...@wojtek.tensor.gdynia.pl wrote:
No, you are making the assertion, provide a link.
Chris
"1 drive in performance" only applies to the number of random I/O
operations a vdev can perform. You still get increased throughput, i.e. a
5-drive RAIDZ will have 4x the bandwidth of the individual disks in the vdev,
but unless your work is serving movies it doesn't matter.
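To put illustrative numbers on that: assuming each disk does about 100
random IOPS and 150 MB/s sequential, a 5-drive RAIDZ1 vdev delivers roughly
100 random IOPS (one disk's worth) but up to about 4 x 150 = 600 MB/s of
streaming bandwidth from the four data disks.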
If you need higher performance, build your pool out of multiple RAID-Z
vdevs.
even if you just need normal performance, use gmirror and UFS
I've no objection. If it works for you -- go for it.
For me personally ZFS performance is good enough, and data integrity
Here is a blog post that describes why this is true for IOPS:
http://constantin.glez.de/blog/2010/04/ten-ways-easily-improve-oracle-solaris-zfs-filesystem-performance
associated with mirroring.
Thanks for the link, but I could have done that; I am attempting to
explain to Wojciech that his habit of making bold assertions and
as you can see it is not a bold assertion; it's just that you use something
without even reading its docs.
Not to mention doing any more
On 01/23/13 14:27, Wojciech Puchar wrote:
both work. For today's trend of solving everything with more hardware,
ZFS may even have enough performance.
But it is still dangerous for the reasons I explained, as well as it
promotes bad setups and layouts like making a single filesystem out of
large
their own. As a ZFS developer, it should come as no surprise that
in my opinion and experience, the benefits of ZFS almost always outweigh
this downside.
--matt
pretty much everywhere. I don't like to lose data and disks
are cheap. I have a fair amount of experience with all flavors ... and ZFS
just like me. And because I want performance and - as you described -
disks are cheap - I use RAID-1 (gmirror).
has become a go-to filesystem for most
On 2013-Jan-21 12:12:45 +0100, Wojciech Puchar woj...@wojtek.tensor.gdynia.pl
wrote:
That's why I use properly tuned UFS and gmirror, and prefer not to use
gstripe but have multiple filesystems
When I started using ZFS, I didn't fully trust it so I had a gmirrored
UFS root (including a full src
of experience with all flavors ... and ZFS
has become a go-to filesystem for most of my applications.
One of the best recommendations I can give for ZFS is its
crash-recoverability. As a counter-example, with most hardware RAID or a
software whole-disk raid, after a crash it will generally
Hi,
On 01/20/13 23:26, Zaphod Beeblebrox wrote:
1) a pause for scrub... such that long scrubs could be paused during
working hours.
While not exactly a pause, wouldn't playing with scrub_delay work here?
vfs.zfs.scrub_delay: Number of ticks to delay scrub
Set this to a high value during
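A sketch of that approach, with illustrative values (4 ticks is the
default); for example from cron, throttle scrubs by day and release them
at night:

  sysctl vfs.zfs.scrub_delay=20    # 08:00: heavily throttle scrub I/O
  sysctl vfs.zfs.scrub_delay=4     # 20:00: back to the default so the scrub can run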
On Mon, Dec 17, 2012 at 05:22:50PM -0500, Rick Macklem wrote:
Zaphod Beeblebrox wrote:
Does windows 7 support nfs v4, then? Is it expected (i.e., is it
worthwhile trying) that nfsv4 would perform at a similar speed to iSCSI?
It would seem that this at least requires active directory (or
On 12/12/2012 17:57, Zaphod Beeblebrox wrote:
The performance of the iSCSI disk is
about the same as the local disk for some operations --- faster for
some, slower for others. The workstation has 12G of memory and it's
my perception that iSCSI is heavily cached and that this enhances its
With a network file system (either SMB or NFS, it doesn't matter), you
need a round trip to the server in *each* of the following situations:
* to ask the server whether a file has been changed, so the client can use
cached data (if the protocol supports it)
* to ask the server whether a file (or a
Does windows 7 support nfs v4, then? Is it expected (i.e., is it worthwhile
trying) that nfsv4 would perform at a similar speed to iSCSI? It would
seem that this at least requires active directory (or this user name
mapping ... which I remember being hard).
Zaphod Beeblebrox wrote:
As far as I
you cannot compare file serving and block device serving.
On Mon, 17 Dec 2012, Zaphod Beeblebrox wrote:
So... I have two machines. My Fileserver is a core-2-duo machine with
FreeBSD-9.1-ish ZFS, istgt and samba 3.6. My workstation is windows 7
on an i7. Both have GigE and are connected directly via a managed
switch with jumbo packets (specifically 9016) enabled. Both are using
tagged vlan
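For the curious, a minimal sketch of the ZFS side of such an istgt setup
(pool, volume name, and size illustrative); istgt.conf then points a LUN
at the zvol:

  zfs create -V 200g tank/win0    # block device to export as the iSCSI target
  # in /etc/istgt/istgt.conf, under the LogicalUnit section:
  #   LUN0 Storage /dev/zvol/tank/win0 Auto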
any REAL test means doing something that will not fit in cache.
But this is
as you show, your needs for unshared data for a single workstation are on
the order of a single large hard drive.
reducing the drive count on the file server by one and connecting that one
drive directly to the workstation is the best solution
knowing one or the other. Throughput
is a combination of these features. Pure disk performance serves as a
lower bound, but cache performance (especially on some of the ZFS
systems people are creating these days ... with hundreds of gigs of RAM)
is an equally valid statistic and optimization
common to move from area to area in the game, loading, unloading and
reloading the same data. My test is a valid comparison of the two
modes of loading the game ... from iSCSI and from SMB.
I don't know how Windows caches network shares (iSCSI is treated as
local, not network). Here is a main
-Original Message-
From: Zaphod Beeblebrox
Sent: Wednesday, December 12, 2012 6:57 PM
To: FreeBSD Hackers
Subject: iSCSI vs. SMB with ZFS.
Hi,
I am importing a zfs snapshot to freebsd-9 from another host running
freebsd-9. When the import happens, it locks the filesystem; df hangs
and the filesystem is unusable. Once the import completes, the filesystem
is back to normal and read/write works fine. The same doesn't happen in
Solaris
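Presumably the transfer in question looks something like this sketch (host
and dataset names illustrative):

  zfs send tank/data@snap | ssh newhost zfs receive tank/data
  # while the receive runs, df and access to the receiving filesystem hang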
We encountered a problem receiving a full ZFS stream from
a disk we had backed up. The problem was the receive was
aborting due to quota being exceeded so I did some digging
around and found that Oracle ZFS now has -x and -o options
as documented here:
http://docs.oracle.com/cd/E23824_01/html
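A sketch of the Oracle-documented usage being referred to, overriding or
excluding properties at receive time (dataset names illustrative; these
options were not in FreeBSD's ZFS at the time):

  zfs receive -x quota tank/restore < backup.stream       # ignore the quota carried in the stream
  zfs receive -o quota=none tank/restore < backup.stream  # or set it explicitly on receive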
Hello,
Excuse me for this newb question but exactly where are the current ZFS files
located? I have been looking at the CVS on freebsd.org under
/src/contrib/opensolaris/ but that does not seem to be the current ones. Is
this correct?
Regards
On Thu, Aug 02, 2012 at 22:48:50 +0200, Fredrik wrote:
http://svnweb.freebsd.org/base/head/sys/cddl/contrib/opensolaris/common/
http://svnweb.freebsd.org/base/head/cddl/contrib/opensolaris/lib/
amd64
gpart show
=>        34  625142381  ada0  GPT  (298G)
          34        128     1  freebsd-boot  (64k)
         162   26621952     2  freebsd-ufs   (12G)
    26622114    8388608     3  freebsd-swap  (4.0G)
    35010722  590131693     4  freebsd-zfs   (281G)
boot code MBR (pmbr) and gptzfsboot loader
In the old loader there were the F1,F2,F3 options; in the new one there are none :(
Is there a way to boot the freebsd-ufs system (ada0p2)?
`gpart set -a bootonce -i 2 ada0` should do.
--
Sphinx of black quartz judge my vow.
# gpart set
is insignificant for the vast majority of
users and there are no performance penalties, so it seems that switching
to 4K sectors by default for all file systems would actually be a good idea.
This is heavily dependent on the size distribution. I can't quickly
check for ZFS but I've done some quick checks
On 23 August 2011 10:52, Ivan Voras ivo...@freebsd.org wrote:
I agree but there are at least two things going for making the increase
anyway:
1) 2 TB drives cost $80
2) Where the space is really important, the person in charge usually knows
it and can choose a non-default size like 512b
and ZFS drivers be able to either read the right sector size
from the underlying device or at least issue a warning?
The device never reports the actual sector size, so unless FreeBSD
keeps a database of 4k sector hard drives that report as 512 byte
sector hard drives, there is nothing that can
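For what it's worth, a sketch of asking an ATA disk what it claims (device
name illustrative); drives that lie report 512 for both values, which is
exactly the problem described:

  camcontrol identify ada0 | grep -i 'sector size'
  # sector size           logical 512, physical 4096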