On Sun, 13 Oct 2013 13:17:20 +1000, yudi v wrote:
On Mon, Sep 30, 2013 at 2:47 AM, Ian Smith smi...@nimnet.asn.au wrote:
In freebsd-questions Digest, Vol 486, Issue 7, Message: 5
On Sat, 28 Sep 2013 16:25:33 +0200 Roland Smith rsm...@xs4all.nl wrote:
On Fri, Sep 27, 2013 at 05:37:55PM
9207 HBAs
1 LSI 9206 HBA
I've a 13-vdev RAIDZ setup.
Not a super system, but not a shabby one either.
I used local (non-network) IOzone and dd tests for some simple preliminary testing, just to gauge where I'm at with this bad boy.
My ZFS tunables are the same, BTW, as I did some tweaks.
My CentOS 6.4 box
On Sat, Oct 12, 2013, at 10:53, aurfalien wrote:
Hi,
I would like to first say that by no means is this a hey, why is my Mac faster than my PC kind of email.
I'm really hoping it's an LSI driver issue.
It may very well be an LSI firmware issue. What are the firmwares for those HBAs?
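(In case it helps anyone checking: the mps(4) driver prints the controller firmware revision when it attaches, and LSI's sas2flash utility, a vendor download, can list it too. A minimal sketch, not something from the original thread:)

# mps(4) logs the firmware version at attach time:
dmesg | grep -E 'mps[0-9].*Firmware'
# If LSI's sas2flash tool is installed:
sas2flash -listall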
On Mon, Sep 30, 2013 at 2:47 AM, Ian Smith smi...@nimnet.asn.au wrote:
In freebsd-questions Digest, Vol 486, Issue 7, Message: 5
On Sat, 28 Sep 2013 16:25:33 +0200 Roland Smith rsm...@xs4all.nl wrote:
On Fri, Sep 27, 2013 at 05:37:55PM +1000, yudi v wrote:
Hi all,
Is it possible
On Oct 9, 2013, at 6:43 AM, yudi v yudi@gmail.com wrote:
Generally, it's recommended to let ZFS manage the whole disk if possible, so I was wondering if the second option is better.
I will be using a couple of 3TB HDDs mirrored for data and want to encrypt them.
IIRC, there is/was a major
There are a few different ways to set up geli with ZFS. I just want to get some opinions (benefits and disadvantages) about the two options below.
*First option*: (most commonly encountered set-up)
Have geli on the block device and ZFS on top of the geli
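(A minimal sketch of that first option; the disk names ada1/ada2 are hypothetical and the AES-XTS flags are just the common choice, not something specified in this thread:)

# Encrypt each whole disk, then build the pool on the .eli devices:
geli init -e AES-XTS -l 256 /dev/ada1
geli attach /dev/ada1
geli init -e AES-XTS -l 256 /dev/ada2
geli attach /dev/ada2
zpool create tank mirror ada1.eli ada2.eli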
On 10/02/2013 08:13 PM, Matthew Seaman wrote:
On 02/10/2013 16:34, Nikos Vassiliadis wrote:
Is there a way to know if a zfs pool had an unclean shutdown?
An attribute or maybe something during mount time similar to what ufs
does (WARNING: / was not properly dismounted)?
Other than looking
On 03/10/2013 17:20, Nikos Vassiliadis wrote:
I am after a really specific use-case and the last-minute transactions are important. I'm using a zpool over geli over a zvol. I'd like to know if during shutdown the kernel flushes all ZFS file caches in order, so these last-minute transactions won't
Hi,
Is there a way to know if a zfs pool had an unclean shutdown?
An attribute or maybe something during mount time similar to what ufs
does (WARNING: / was not properly dismounted)?
Thanks, Nikos
On 02/10/2013 16:34, Nikos Vassiliadis wrote:
Is there a way to know if a zfs pool had an unclean shutdown?
An attribute or maybe something during mount time similar to what ufs
does (WARNING: / was not properly dismounted)?
Other than looking at the system logs for evidence of an abnormal
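(There is no dirty flag like UFS's; a couple of places that can offer evidence, as a sketch with an assumed pool name tank:)

# Internal transaction log, including pool open/import events:
zpool history -i tank | tail
# And the system logs:
grep -i zfs /var/log/messages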
In freebsd-questions Digest, Vol 486, Issue 7, Message: 5
On Sat, 28 Sep 2013 16:25:33 +0200 Roland Smith rsm...@xs4all.nl wrote:
On Fri, Sep 27, 2013 at 05:37:55PM +1000, yudi v wrote:
Hi all,
Is it possible to suspend to disk (hibernate) when using geli for full disk encryption?
container and ZFS on top. There are two options for the swap with this set-up: either use a swap file on the ZFS pool, or use a separate partition for swap and encrypt that. What I want to know is whether either of these will work with suspend to disk.
FreeBSD does not support suspend to disk (ACPI
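(Aside: encrypted swap itself is straightforward even though hibernation isn't. With the .eli suffix in fstab, FreeBSD attaches swap through geli with a one-time key at boot; the partition name below is an assumption:)

# /etc/fstab - the .eli suffix gives one-time-key encrypted swap:
/dev/ada0p3.eli   none   swap   sw   0   0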
On Fri, Sep 27, 2013 at 05:37:55PM +1000, yudi v wrote:
Hi all,
Is it possible to suspend to disk (hibernate) when using geli for full disk encryption?
As far as I can tell, FreeBSD doesn't support suspend to disk on all
architectures. On amd64 the necessary infrastructure doesn't exist,
Hi all,
Is it possible to suspend to disk (hibernate) when using geli for full disk encryption? My set-up is listed below. So I am going to have an encrypted
container and ZFS on top. There are two options for the swap with this
set-up, either use a swap file on the ZFS pool or use a separate
Hi,
I managed to install with a geli + root-on-ZFS setup, but have a few questions. Most of the instructions just list commands but offer very little explanation.
I adapted the instructions in https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE to suit my needs.
Here's the process I used
for zfs datasets. Just because
Not so clear; if you are using a mixture of filesystems you may very sensibly opt to keep all your export controls in one place; similarly, if you have servers running multiple OSes, then not having to remember that the FreeBSD/ZFS box manages its exports
September 2013 19:39, Steve O'Hara-Smith st...@sohara.org wrote:
On Tue, 10 Sep 2013 12:10:13 +0100
krad kra...@gmail.com wrote:
which is why you shouldn't use /etc/exports for zfs datasets. Just because
Not so clear; if you are using a mixture of filesystems you may very
you shouldn't use /etc/exports for zfs datasets. Just because
Not so clear; if you are using a mixture of filesystems you may very sensibly opt to keep all your export controls in one place; similarly, if you have servers running multiple OSes then not having to remember
which is why you shouldn't use /etc/exports for zfs datasets. Just because you can do something doesn't mean you should; e.g. dancing down the motorway at night in dark clothing is never a good idea, no matter how confident you are in your skills.
On 9 September 2013 15:22, Steve O'Hara-Smith st
On Tue, 10 Sep 2013 12:10:13 +0100, krad wrote:
which is why you shouldn't use /etc/exports for zfs datasets. Just because you can do something doesn't mean you should; e.g. dancing down the motorway at night in dark clothing is never a good idea, no matter how confident you are in your skills
On Tue, 10 Sep 2013 12:10:13 +0100
krad kra...@gmail.com wrote:
which is why you shouldn't use /etc/exports for zfs datasets. Just because
Not so clear; if you are using a mixture of filesystems you may very sensibly opt to keep all your export controls in one place; similarly, if you
always the zfs commands for zfs filesystems, otherwise why else would they be there? Do it manually and you could get conflicts later down the line.
On 6 September 2013 19:43, aurfalien aurfal...@gmail.com wrote:
Hi,
Wondering what's the correct way to share ZFS, /etc/exports or via zfs
On 08/16/2013 8:49 am, dweimer wrote:
On 08/15/2013 10:00 am, dweimer wrote:
On 08/14/2013 9:43 pm, Shane Ambler wrote:
On 14/08/2013 22:57, dweimer wrote:
I have a few systems running on ZFS with a backup script that
creates
snapshots, then backs up the .zfs/snapshot/name directory to make
On Fri, 6 Sep 2013 11:43:03 -0700
aurfalien aurfal...@gmail.com wrote:
Hi,
Wondering what's the correct way to share ZFS, /etc/exports or via zfs commands which alter /etc/zfs/exports?
As far as I can see both work just fine. The first has the benefit that it puts your ZFS exports
On 09/09/2013 22:38, dweimer wrote:
A quick update on this, in case anyone else runs into it, I did
finally try on the 2nd of this month to delete my UFS volume, and
create a new ZFS volume to replace it. I recreated the Squid cache
directories and let squid start over building up cache. So
Hi,
Wondering what's the correct way to share ZFS, /etc/exports or via zfs commands which alter /etc/zfs/exports?
I see a lot of both online.
Thanks in advance,
- aurf
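(A minimal sketch of the zfs-managed route; the dataset name and export options are assumptions, not from the thread. Setting sharenfs writes /etc/zfs/exports, which mountd reads alongside /etc/exports:)

# Let ZFS manage the export for one dataset:
zfs set sharenfs="-maproot=root -network 192.168.1.0/24" tank/data
zfs get sharenfs tank/data    # verify what will be exported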
Hi,
I want to encrypt some disks on my server with the ZFS encryption property but it is not available.
Does anybody have experience with this?
http://docs.oracle.com/cd/E23824_01/html/821-1448/gkkih.html#scrolltoc
http://www.oracle.com/technetwork/articles/servers
On 03/09/2013 14:14, Emre Çamalan wrote:
Hi,
I want to encrypt some disks on my server with the ZFS encryption property but it is not available.
Does anybody have experience with this?
It can't happen because Oracle has stopped open-sourcing ZFS.
http://forums.freebsd.org
On 08/15/2013 10:00 am, dweimer wrote:
On 08/14/2013 9:43 pm, Shane Ambler wrote:
On 14/08/2013 22:57, dweimer wrote:
I have a few systems running on ZFS with a backup script that creates
snapshots, then backs up the .zfs/snapshot/name directory to make
sure
open files are not missed
On 08/14/2013 9:43 pm, Shane Ambler wrote:
On 14/08/2013 22:57, dweimer wrote:
I have a few systems running on ZFS with a backup script that creates
snapshots, then backs up the .zfs/snapshot/name directory to make
sure
open files are not missed. This has been working great but all
I have a few systems running on ZFS with a backup script that creates snapshots, then backs up the .zfs/snapshot/name directory to make sure open files are not missed. This has been working great, but all of a sudden one of my systems has stopped working. It takes the snapshots fine, zfs
On 14/08/2013 22:57, dweimer wrote:
I have a few systems running on ZFS with a backup script that creates snapshots, then backs up the .zfs/snapshot/name directory to make sure open files are not missed. This has been working great, but all of a sudden one of my systems has stopped working
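(The approach being described, as a minimal sh sketch; the dataset name, snapshot naming, and rsync target are all assumptions:)

# Snapshot, back up the frozen view, then clean up:
SNAP=backup-$(date +%Y%m%d)
zfs snapshot tank/data@$SNAP
rsync -a /tank/data/.zfs/snapshot/$SNAP/ backuphost:/backups/data/
zfs destroy tank/data@$SNAP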
Okay, I've been down this road before but seem to have lost my notes, and can't seem to find the original Google doc on the process.
How does one configure a QLogic 8Gb Fibre Channel card as a target and attach the zpool?
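(If it helps reconstruct those notes: on recent FreeBSD the usual path is the CTL subsystem with the isp(4) driver in target role. A sketch only; the zvol name is assumed and details vary by release:)

# /boot/loader.conf: put the first isp(4) port in target role
# hint.isp.0.role="1"
# Then export a zvol as a LUN and enable the target ports:
ctladm create -b block -o file=/dev/zvol/tank/lun0
ctladm port -o on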
Hi.
I read that FreeBSD 9.2 will bring TRIM to ZFS. Does anyone know if this
works even if the zpool is a mirror?
John
In the last episode (Aug 02), John Andreasson said:
I read that FreeBSD 9.2 will bring TRIM to ZFS. Does anyone know if this works even if the zpool is a mirror?
The vdev type doesn't matter. It'll work on plain disks, mirrors, and raidz vdevs. If for some reason you have a pool built on geom
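(For checking it on a running 9.2 system, a sketch; I believe these are the relevant OIDs, but verify on your release:)

# Is TRIM enabled, and is it doing anything?
sysctl vfs.zfs.trim.enabled
sysctl kstat.zfs.misc.zio_trim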
Confirmed in beta 1.
- aurf
On Aug 2, 2013, at 2:12 AM, John Andreasson wrote:
Hi.
I read that FreeBSD 9.2 will bring TRIM to ZFS. Does anyone know if this
works even if the zpool is a mirror?
John
But then ZFS doesn't access every block on the disk, does it? Only the allocated ones.
On 20 July 2013 21:07, Daniel Feenberg feenb...@nber.org wrote:
On Sat, 20 Jul 2013, Steve O'Hara-Smith wrote:
On Sat, 20 Jul 2013 18:14:20 +0100
Frank Leonhardt fra...@fjl.co.uk wrote:
It's worth
On 21/07/2013 17:31, Steve O'Hara-Smith wrote:
On Sun, 21 Jul 2013 14:13:39 +0930
Shane Ambler free...@shaneware.biz wrote:
On 21/07/2013 04:42, Steve O'Hara-Smith wrote:
It's a pity there are now only two manufacturers of spinning rust.
I thought there were three left - Seagate, WD and
On Sun, 21 Jul 2013 14:13:39 +0930
Shane Ambler free...@shaneware.biz wrote:
On 21/07/2013 04:42, Steve O'Hara-Smith wrote:
It's a pity there are now only two manufacturers of spinning rust.
I thought there were three left - Seagate, WD and Toshiba.
I assumed Toshiba were out of the
Steve O'Hara-Smith st...@sohara.org wrote:
It's a pity there are now only two manufacturers of spinning rust.
I didn't think there were _any_! Haven't oxide-coated platters gone
the way of the dodo bird?
On Sun, 21 Jul 2013 00:27:01 -0700
per...@pluto.rain.com (Perry Hutchison) wrote:
Steve O'Hara-Smith st...@sohara.org wrote:
It's a pity there are now only two manufacturers of spinning rust.
I didn't think there were _any_! Haven't oxide-coated platters gone
the way of the dodo bird?
On 2013-07-20 07:25, aurfalien wrote:
Hi,
Is this;
http://lists.freebsd.org/pipermail/freebsd-current/2012-September/036777.html
... available in the form of a patch for stable rels?
It's ZFS TRIM support.
According to /usr/src/UPDATING, yes:
20130605:
Added ZFS TRIM
to explicitly physically swap out a failed mirror
component,
in which case one can make sure the system is OK before the replacement drive
goes in.
Agreed. Blaming gmirror for this kind of thing overlooks the overall
design and operating procedures of the system, and assuming ZFS would
have
On Sat, 20 Jul 2013 18:14:20 +0100
Frank Leonhardt fra...@fjl.co.uk wrote:
It's worth noting, as a warning for anyone who hasn't been there, that
the number of times a second drive in a RAID system fails during a
rebuild is higher than would be expected. During a rebuild the remaining
and writes every sector on multiple disks, including unused sectors, it can detect latent problems that may have existed since the drive was new in sectors which haven't been used for data yet, or which have gone bad since the last write but haven't been read since.
The ZFS scrub processes only sectors
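(The standard commands for this, for reference; the pool name is assumed:)

# Periodic scrubs catch latent errors while redundancy still exists:
zpool scrub tank
zpool status -v tank    # progress, plus any repaired or failed blocks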
On 21/07/2013 04:42, Steve O'Hara-Smith wrote:
It's a pity there are now only two manufacturers of spinning rust.
I thought there were three left - Seagate, WD and Toshiba.
On Jul 16, 2013, at 11:42 AM, Warren Block wrote:
On Tue, 16 Jul 2013, aurfalien wrote:
On Jul 16, 2013, at 2:41 AM, Shane Ambler wrote:
I doubt that you would save any RAM having the OS on a non-ZFS drive, as you will already be using ZFS; chances are that non-ZFS drives would only
Hi,
Is this;
http://lists.freebsd.org/pipermail/freebsd-current/2012-September/036777.html
... available in the form of a patch for stable rels?
It's ZFS TRIM support.
- aurf
you could just give the raw disks to zfs
On 15 July 2013 17:23, Scott Ballantyne s...@ssr.com wrote:
Hi,
I have the current situation:
sdb@gigawattmomma$ zpool status zroot
NAME STATE READ WRITE CKSUM
zroot ONLINE 0 0 0
You would in theory; from what I remember every ZFS filesystem takes up 64 kB of RAM, so the savings could be massive 8)
On 16 July 2013 10:41, Shane Ambler free...@shaneware.biz wrote:
On 16/07/2013 14:41, aurfalien wrote:
On Jul 15, 2013, at 9:23 PM, Warren Block wrote:
On Mon, 15
AM, Johan Hendriks joh.hendr...@gmail.com wrote:
[ ... ]
I would use ZFS for the OS.
I have a couple of servers that did not survive a power failure with gmirror.
The problem I had was that when the power failed one disk was in a rebuilding state, and then when the background
On 16/07/2013 14:41, aurfalien wrote:
On Jul 15, 2013, at 9:23 PM, Warren Block wrote:
On Mon, 15 Jul 2013, aurfalien wrote:
... that's the question :)
At any rate, I'm building a rather large 100+TB NAS using ZFS.
However for my OS, should I also use ZFS, or simply gmirror, as I've a dedicated
On 16/07/2013 10:41, Shane Ambler wrote:
On 16/07/2013 14:41, aurfalien wrote:
On Jul 15, 2013, at 9:23 PM, Warren Block wrote:
On Mon, 15 Jul 2013, aurfalien wrote:
... that's the question :)
At any rate, I'm building a rather large 100+TB NAS using ZFS.
However for my OS, should I also
rate, I'm building a rather large 100+TB NAS using ZFS.
However for my OS, should I also use ZFS, or simply gmirror, as I've a dedicated pair of 256GB SSD drives for it. I didn't ask for SSD sys drives, this system just came with 'em.
This is more of a best-practices q.
ZFS has data integrity
On Jul 16, 2013, at 2:41 AM, Shane Ambler wrote:
On 16/07/2013 14:41, aurfalien wrote:
On Jul 15, 2013, at 9:23 PM, Warren Block wrote:
On Mon, 15 Jul 2013, aurfalien wrote:
... that's the question :)
At any rate, I'm building a rather large 100+TB NAS using ZFS.
However for my OS
On Tuesday 16 July 2013, Charles Swiger (cswi...@mac.com) wrote the following:
Hi--
On Jul 16, 2013, at 10:33 AM, Johan Hendriks joh.hendr...@gmail.com wrote:
[ ... ]
I would use ZFS for the OS.
I have a couple of servers that did not survive a power failure with gmirror
On Tue, 16 Jul 2013, aurfalien wrote:
On Jul 16, 2013, at 2:41 AM, Shane Ambler wrote:
I doubt that you would save any RAM having the OS on a non-ZFS drive, as you will already be using ZFS; chances are that non-ZFS drives would only increase RAM usage by adding a second cache. ZFS uses its own
Hi--
On Jul 16, 2013, at 10:33 AM, Johan Hendriks joh.hendr...@gmail.com wrote:
[ ... ]
I would use ZFS for the OS.
I have a couple of servers that did not survive a power failure with gmirror.
The problem I had was that when the power failed one disk was in a rebuilding state, and then when
Hi--
On Jul 16, 2013, at 11:27 AM, Johan Hendriks joh.hendr...@gmail.com wrote:
Well, don't do that. :-)
When the server reboots because of a power failure at night, then it boots.
Then it starts to rebuild the mirror on its own, and later the fsck kicks in.
Not much I can do about it.
On 07/16/13 21:27, Johan Hendriks wrote:
On Tuesday 16 July 2013, Charles Swiger (cswi...@mac.com) wrote the following:
Hi--
On Jul 16, 2013, at 10:33 AM, Johan Hendriks joh.hendr...@gmail.com wrote:
[ ... ]
I would use ZFS for the OS.
I have a couple of servers that did
-zfs), with boot code installed, obviously. Do
I need to do that with the second mirror, or can I just use the whole
thing for a freebsd-zfs filesystem?
Sorry this was a bit long. Thanks in advance for any help.
Best,
Scott
--
s...@ssr.com
... that's the question :)
At any rate, I'm building a rather large 100+TB NAS using ZFS.
However for my OS, should I also use ZFS, or simply gmirror, as I've a dedicated pair of 256GB SSD drives for it. I didn't ask for SSD sys drives, this system just came with 'em.
This is more of a best
On Mon, 15 Jul 2013, aurfalien wrote:
... that's the question :)
At any rate, I'm building a rather large 100+TB NAS using ZFS.
However for my OS, should I also use ZFS, or simply gmirror, as I've a dedicated pair of 256GB SSD drives for it. I didn't ask for SSD sys drives, this system just came
On Jul 15, 2013, at 9:23 PM, Warren Block wrote:
On Mon, 15 Jul 2013, aurfalien wrote:
... that's the question :)
At any rate, I'm building a rather large 100+TB NAS using ZFS.
However for my OS, should I also use ZFS, or simply gmirror, as I've a dedicated pair of 256GB SSD drives
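(For the ZFS-root-on-the-SSDs option, a minimal sketch; the partition names are assumptions, and the gpart/bootcode steps from the RootOnZFS wiki still apply:)

# Mirror the OS pool across the two SSDs:
zpool create -O compression=lz4 zroot mirror ada0p3 ada1p3
zpool status zroot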
A FreeBSD box doesn't boot normally since upgrading from 9.1-RC3 to 9.1-RELEASE. It boots to the point that /usr is mounted, then errors:
mount: /usr: unknown special file or file system
If I boot to single user and run zfs mount -a manually, then it comes up fine.
What function key do I gotta press
On 7/10/13 1:50 PM, Michael Sierchio ku...@tenebras.com wrote:
On Wed, Jul 10, 2013 at 12:34 PM, Tom Worster f...@thefsb.org wrote:
# mount -p /etc/fstab
Thanks for answering, Michael.
I have now spotted the problem: the zfs_enable line in rc.conf was fubar.
I must have done some bad vi on it
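(For reference, the two knobs involved; a mangled zfs_enable line produces exactly this boot-time symptom:)

# /etc/rc.conf
zfs_enable="YES"
# /boot/loader.conf
zfs_load="YES"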
Question:
How do the ZFS option 'copies=n' and RAID relate to and interact with each other, specifically for recovery in the event of a failure? For example, is having three disks in a raid-1 configuration with copies=1 effectively the same as having three disks in a raid-0 with copies=3
07.06.2013 18:52, Quartz:
Question:
How do the ZFS option 'copies=n' and RAID relate to and interact with each other, specifically for recovery in the event of a failure? For example, is having three disks in a raid-1 configuration with copies=1 effectively the same as having three disks
In the last episode (Jun 07), Quartz said:
How do the ZFS option 'copies=n' and RAID relate to and interact with each other, specifically for recovery in the event of a failure? For example, is having three disks in a raid-1 configuration with copies=1 effectively the same as having three
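(For experimenting with the setting itself; the dataset name is assumed. Note that copies=n only duplicates blocks within whatever redundancy the pool already has; it is not a substitute for mirroring:)

# Per-dataset, and it affects newly written blocks only:
zfs set copies=2 tank/important
zfs get copies tank/important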
On May 23, 2013, at 11:09 AM, Michael Sierchio ku...@tenebras.com wrote:
On Thu, May 23, 2013 at 5:33 AM, Warren Block wbl...@wonkity.com wrote:
..
One thing mentioned earlier is that ZFS wants lots of memory. 4G-8G
minimum, some might say as much as the server will hold
On Thu, 23 May 2013 11:00:21 +0200
Albert Shih albert.s...@obspm.fr wrote:
Before installing my server under 9.0 + ZFS I did some benchmarks with ionice to compare
FreeBSD 9.0 + ZFS + 12-disk SATA 7200 rpm vs CentOS + H700 + 12-disk SAS 15k rpm
(both are the same Dell PowerEdge
On 17/05/2013 at 20:03:30 -0400, Paul Kraus wrote:
ZFS is stable; it is NOT as tuned as UFS, just due to age. UFS in all of its various incarnations has been tuned far more than any filesystem has any right to be. I spent many years managing Solaris systems and I was truly amazed at how
On 18/05/2013 at 09:02:15 -0400, Paul Kraus wrote:
On May 18, 2013, at 3:21 AM, Ivailo Tanusheff
ivailo.tanush...@skrill.com wrote:
If you use HBA/JBOD then you will rely on the software RAID of the
ZFS system. Yes, this RAID is good, but unless you use SSD disks to
boost performance
soft updates or migrate from UFS to ZFS?
I heard so much about soft updates - added in FreeBSD 9.1 - which can fix file corruption in an acceptable way at low cost, but I don't know how reliable and efficient it is.
On the other hand, I think migration from UFS to ZFS can be another solution
is:
Is it better to upgrade my FreeBSD to 9.1 and use soft updates, or migrate from UFS to ZFS?
That's a judgement call, which means it depends.
I heard so much about soft updates - added in FreeBSD 9.1 - which can fix file corruption in an acceptable way at low cost, but I don't know how much
is more suitable for my server: using soft updates or ZFS. Please help me to select the best one.
Thank you so much
On Thu, May 23, 2013 at 4:28 PM, Warren Block wbl...@wonkity.com wrote:
On Thu, 23 May 2013, saeedeh motlagh wrote:
Hello everybody,
I have a question about fixing file corruption
also invest in a decent UPS.
I do not have any problem in RAM and hardware.
I don't know which approach is more suitable for my server: using soft updates or ZFS. Please help me to select the best one.
Thank you so much.
On Thu, May 23, 2013 at 4:28 PM, Warren Block wbl...@wonkity.com wrote:
The lack of a UPS can be considered a hardware problem.
I don't know which approach is more suitable for my server: using soft updates or ZFS. Please help me to select the best one.
Please don't top-post, as it makes responding to your message more difficult. One thing mentioned earlier is that ZFS
On May 23, 2013, at 4:53 AM, Albert Shih albert.s...@obspm.fr wrote:
Have you ever tried to update a ZFS pool from 9.0 to 9.1?
I recently upgraded my home server from 9.0 to 9.1; actually, I exported my data zpool (RAIDZ2), did a clean installation of 9.1, then imported my data zpool
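(The sequence being described, for reference; the pool name is an assumption:)

zpool export tank
# ...clean install of the new release...
zpool import tank
zpool upgrade tank   # optional, and one-way: older releases can't read the upgraded pool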
On Thu, May 23, 2013 at 5:33 AM, Warren Block wbl...@wonkity.com wrote:
..
One thing mentioned earlier is that ZFS wants lots of memory. 4G-8G
minimum, some might say as much as the server will hold.
Not necessarily so - deduplication places great demands on memory, but that
can
Hi all,
I've a server under FreeBSD 9.0 with a large ZFS pool (~150 TB).
This server is used mainly for backup and as an NFS server; it also has 4 Gb/s interfaces bonded with LACP.
If the NFS client is close to the server (physical distance) everything is fine.
When the client is far away (NFS over
into it.
I do not have any problem in RAM and hardware.
I don't know which approach is more suitable for my server: using soft updates or ZFS. Please help me to select the best one.
If power failure is an issue, you have no guarantee of data-loss protection unless you use networked storage
I've upgraded a machine with freebsd-update from 8.3 to 9.1.
After the first restart I edited /etc/fstab in single-user mode because the names of the disks had changed. But the zpool I have seems to have a problem and I'm not sure how to recover it.
May 22 12:00:39 kernel: ZFS WARNING
On May 18, 2013, at 10:16 PM, kpn...@pobox.com wrote:
On Sat, May 18, 2013 at 01:29:58PM +0000, Ivailo Tanusheff wrote:
Not sure about your calculations, hope you trust them, but in my previous company we had a 3-4 month period when a disk failed almost every day on 2-year-old servers, so
Hi,
The overhead depends on the quantity of changes you have made since the oldest snapshot and the current data on the ZFS pool.
The snapshots keep only the differences between the live system and each other, so if you have made 10GB of changes over the last 7 days and your oldest snapshot is 7
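(To see that accounting directly; the dataset name is assumed:)

# USED is space unique to each snapshot; REFER is the data it references:
zfs list -t snapshot -o name,used,referenced tank/data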
Hi,
If you use HBA/JBOD then you will rely on the software RAID of the ZFS system. Yes, this RAID is good, but unless you use SSD disks to boost performance and a lot of RAM, the hardware RAID should be more reliable and much faster.
I didn't get whether you want to use the system to dual-boot Linux
named mypool, issue:
zfs set copies=2 mypool
Best regards,
Ivailo Tanusheff
-Original Message-
From: b...@todoo.biz [mailto:b...@todoo.biz]
Sent: Saturday, May 18, 2013 10:46 AM
To: Ivailo Tanusheff
Subject: Re: ZFS install on a partition
On 18 May 2013 at 09:21, Ivailo Tanusheff
On May 18, 2013, at 3:21 AM, Ivailo Tanusheff ivailo.tanush...@skrill.com
wrote:
If you use HBA/JBOD then you will rely on the software RAID of the ZFS
system. Yes, this RAID is good, but unless you use SSD disks to boost
performance and a lot of RAM the hardware raid should be more
On May 18, 2013, at 12:49 AM, kpn...@pobox.com wrote:
On Fri, May 17, 2013 at 08:03:30PM -0400, Paul Kraus wrote:
On May 17, 2013, at 6:24 PM, b...@todoo.biz b...@todoo.biz wrote:
3. Should I avoid using ZFS since my system is not well tuned, and it would be asking for trouble to use ZFS
-Original Message-
From: owner-freebsd-questi...@freebsd.org
[mailto:owner-freebsd-questi...@freebsd.org] On Behalf Of Paul Kraus
Sent: Saturday, May 18, 2013 4:02 PM
To: Ivailo Tanusheff
Cc: Liste FreeBSD
Subject: Re: ZFS install on a partition
On May 18, 2013, at 3:21 AM, Ivailo Tanusheff
Hi,
I have a question regarding a ZFS install on a system set up using an Intel Modular.
This system runs various flavors of FreeBSD and Linux using a shared pool (LUNs).
These LUNs have been configured in RAID 6 using the internal controller (LSI Logic).
So from the OS point of view
Your hardware raid should be faster than ZFS raid. Don't use zfs raid
because there will be no benefit. You'll get the performance of
software raid using CPU time, along with lost space for already backed
up data.
ZFS should work fine. A lot of the tuning on the wiki page isn't needed
On May 17, 2013, at 6:24 PM, b...@todoo.biz b...@todoo.biz wrote:
I know I should install a system using HBA and JBOD configuration - but
unfortunately this is not an option for this server.
I ran many ZFS pools on top of hardware raid units, because that is what we
had. It works fine
On 18 May 2013, at 01:15, Joshua Isom jri...@gmail.com wrote:
Your hardware RAID should be faster than ZFS RAID. Don't use ZFS RAID because there will be no benefit.
Self-healing, much?
I wouldn't dream of dropping it for a 20 MB/s performance increase from a HW controller.
What
this is not an option for this server.
I ran many ZFS pools on top of hardware raid units, because that is what we
had. It works fine and the NVRAM write cache of the better hardware raid
systems gives you a performance boost.
What would you advise?
1. Can I use an existing partition
On 18 May 2013 at 06:49, kpn...@pobox.com wrote:
On Fri, May 17, 2013 at 08:03:30PM -0400, Paul Kraus wrote:
On May 17, 2013, at 6:24 PM, b...@todoo.biz b...@todoo.biz wrote:
3. Should I avoid using ZFS since my system is not well tuned, and it would be asking for trouble to use ZFS