Can I ask what the prevailing opinion is on ZFS and encryption on CentOS 7? Is
encryption in ZFS stable? If so, I would use it on a new naked drive I just
added. If not, I might stick with LUKS and XFS for now. The drive is a 2 TB SSD.
Thanks.
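For context, native encryption arrived with ZFS on Linux 0.8 and is enabled per dataset at creation time. A minimal sketch, assuming a pool named tank on the new drive (pool, dataset, and device names are hypothetical):

```shell
# create a pool on the new drive, then an encrypted dataset inside it
zpool create tank /dev/sdX                 # device name hypothetical
zfs create -o encryption=aes-256-gcm \
           -o keyformat=passphrase \
           tank/secure                     # passphrase is prompted for interactively

# after a reboot the key must be loaded before the dataset can be mounted
zfs load-key tank/secure && zfs mount tank/secure
```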
On 8/10/20 1:43 AM, Robert G (Doc) Savage via CentOS wrote:
> As if last weekend's UEFI debacle wasn't bad enough, it now seems the
> latest C8 kernel (4.18.0-193.14.2) is incompatible with the current
> ZFSOnLinux packages (0.8.4-1). When booted to the latest kernel, ZFS is
> inaccessible on my C8 storage server. When I back off to the prior
> kernel
mark wrote:
> mark wrote:
>>
>> testing zfs. I'd created a zpoolz2, ran a large backup onto it. Then I
>> pulled one drive (11-drive, one hot spare pool), and it resilvered with
>> the hot spare. zpool status -x shows me state: DEGRADED status: One or
>> more devices could not be used because
Try: zpool replace export1 sdb sdl
but it says the spare is already in use, so I'm not sure why the resilver
isn't already in progress. You might have to remove sdl from the spares
list before you can use it in a replace.
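That sequence can be sketched as follows; the pool and device names (export1, sdb, sdl) are taken from the thread, and whether detach or remove is needed depends on the spare's current state:

```shell
zpool status export1            # confirm the hot spare (sdl) is currently in use
zpool detach export1 sdl        # return an in-use spare to the spares list
zpool remove export1 sdl        # drop it from the spares list entirely
zpool replace export1 sdb sdl   # reuse it as a permanent replacement for sdb
```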
Hi, folks,
testing zfs. I'd created a zpoolz2, ran a large backup onto it. Then I
pulled one drive (11-drive, one hot spare pool), and it resilvered with
the hot spare. zpool status -x shows me
state: DEGRADED
status: One or more devices could not be used because the label is missing or
Folks
I have two USB connected drives, configured as a mirrored-pair in
ZFS. It's been working fine UNTIL I updated Centos
from 3.10.0-862.14.4.el7.x86_64
to 3.10.0-957.1.3.el7.x86_64
The import of the pools didn't happen at boot. When I tried executing:
zpool list
I got the diagnostic
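A kernel update breaking the pool import usually means the zfs kernel module wasn't rebuilt for the new kernel. One possible recovery path, assuming a DKMS-based ZoL install (these commands are a sketch, not from the thread):

```shell
dkms status                     # check whether the zfs module was built for the running kernel
dkms autoinstall                # rebuild it if it wasn't
modprobe zfs
zpool import -a                 # import all pools found on attached devices
systemctl enable zfs-import-cache zfs-mount   # re-enable import and mount at boot
```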
On 03-07-2018 15:39, Nux! wrote:
Watch out, ZFS on Linux is not as good as on FreeBSD/Solaris.
Just recently there was an issue with data loss.
https://github.com/zfsonlinux/zfs/issues/7401
I know; I was on the very same github issue you linked.
While the bug was very unfortunate, ZFS
> - Original Message -
> From: "Gionatan Danti"
> To: "CentOS mailing list"
> Sent: Monday, 25 June, 2018 22:19:22
> Subject: [CentOS] ZFS on Linux repository
I've been using ZFS on Linux with CentOS 6 for almost two years.
http://zfsonlinux.org/
I'm not using it to boot from it or for any vm stuff, but just for storage
disks.
Recently updated to ZOL version 0.7.9.
To install, I simply follow the instructions from this page:
On 25-06-2018 23:59, Yves Bellefeuille wrote:
I think the simplest solution would be to add this (which I haven't
tried):
http://zfsonlinux.org/
https://github.com/zfsonlinux/zfs/wiki/RHEL-and-CentOS
to the Available Repositories for CentOS wiki page:
Gionatan Danti wrote:
> I searched the list but I did not found anything regarding native ZFS.
> Any feedback on the matter is welcomed. Thanks.
I think the simplest solution would be to add this (which I haven't
tried):
http://zfsonlinux.org/
Hi list,
we all know why ZFS is not included in RHEL/CentOS distributions: its
CDDL license is/seems not compatible with GPL license.
I'm not a lawyer, and I do not have any strong opinion on the matter.
However, as a sysadmin, I found ZFS to be extremely useful, especially
considering BTRFS
Folks
I have a ZFS file system. It seems to be scrubbing too often. As I
type, it's 5 hours into the process with 36 hours to go, and seems to
be doing it several times a week on a slow drive.
I cannot find an option to control the frequency; crontab has no
references. Any clues?
David
OK, I found it. Is this option documented somewhere? Are there
other frequency settings? like once-a-month?
On 10/30/2014 8:46 PM, david wrote:
OK, I found it. Is this option documented somewhere? Are there other
frequency settings? like once-a-month?
i've only used ZFS on solaris, where there are no automatic scrubs
unless you script your own via cron, and freeNAS where they are done at
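Scripting your own scrub via cron works the same way on ZFS on Linux; a minimal sketch for a monthly scrub, assuming a pool named tank (pool name and schedule are illustrative):

```shell
# /etc/cron.d/zfs-scrub: scrub "tank" at 03:00 on the 1st of each month
0 3 1 * * root /sbin/zpool scrub tank
```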
On Mon, September 15, 2014 18:54, Paul Heinlein wrote:
On Mon, 15 Sep 2014, Valeri Galtsev wrote:
Am I the only one who is tempted to say: people, could you kindly
start deciphering your abbreviations. I know, I know, computer
science has used _that_ abbreviation for years. But we definitely
On 2014-09-15 , kkel...@wombat.san-francisco.ca.us wrote:
So the ZoL folks want one more feature before calling it 1.0; otherwise
they believe it's production ready. Only your own testing can convince
you that it's truly production ready.
--keith
That's encouraging news, something I've
On 9/15/2014 16:54, Paul Heinlein wrote:
On Mon, 15 Sep 2014, Valeri Galtsev wrote:
1. a throw-away line meant as a joke,
I didn't take it as a joke so much as a comment that where he works,
high-end hardware is available for the asking. SLC is the most
expensive sort of SSD; if it's so
Given the upcoming elections,
Referendum.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
Andrew Holway wrote:
Given the upcoming elections,
Referendum.
I sit, and type, corrected.
mark
Referendum.
I sit, and type, corrected.
:)
I would strongly suggest anyone interested in ZFS on Linux join the
zfs-discuss list. http://zfsonlinux.org/lists.html There is a fairly good
signal to noise ratio.
On 16 September 2014 20:17, Andrew Holway andrew.hol...@gmail.com wrote:
Referendum.
I sit, and type, corrected.
:)
Valeri Galtsev wrote:
snip
that
adding an SSD to a ZFS pool to accelerate it isn't free. Where he
works, it effectively *is*
On Tue, 2014-09-16 at 13:47 -0400, m.r...@5-cent.us wrote:
Given the upcoming elections, I like Scottish Law Commission,
I don't think they will get independence from the clowns in London,
England, this time, but I do wish them every possible success !
--
Regards,
Paul.
England, EU.
Maybe you can tune ZFS further, but I tried it in userspace (with FUSE) and
reading was almost 5 times slower than MDADM.
That alone is meaningless. MDADM with which filesystem?
Zfsonlinux does not work in user space, it is a kernel module. Just try it.
On 2014-09-15, Chris chris2...@postbox.xyz wrote:
On 09/08/2014 09:00 PM, Andrew Holway wrote:
Try ZFS http://zfsonlinux.org/
Maybe you can tune ZFS further, but I tried it in userspace (with FUSE)
and reading was almost 5 times slower than MDADM.
Just running ZFS in the kernel is going to
Maybe you can tune ZFS further, but I tried it in userspace (with FUSE)
and reading was almost 5 times slower than MDADM.
ZFS on Linux is backed by the US government as ZFS will be used as the
primary filesystem to back the parallel distributed filesystem 'Lustre'.
Lustre is used in the
On 9/15/2014 01:37, Andrew Holway wrote:
To set expectations: the most recent release (0.6.3) of ZFS on
Linux is not that quick.
Compared to what? To ZFS on Solaris, ZFS on FreeBSD, or ext4 on Linux?
Any comparison between ZFS and non-ZFS probably overlooks things like
On Mon, Sep 15, 2014 at 4:37 AM, Andrew Holway andrew.hol...@gmail.com
wrote:
ZFS on Linux is backed by the US government as ZFS will be used as the
primary filesystem to back the parallel distributed filesystem 'Lustre'.
wow, the US government!!. *sarcasm implied*
FC
--
During times of
On Mon, Sep 15, 2014 at 1:16 PM, Fernando Cassia fcas...@gmail.com wrote:
On Mon, Sep 15, 2014 at 4:37 AM, Andrew Holway andrew.hol...@gmail.com
wrote:
ZFS on Linux is backed by the US government as ZFS will be used as the
primary filesystem to back the parallel distributed filesystem
On 09/15/2014 08:18 AM, Miguel Medalha wrote:
That alone is meaningless. MDADM with which filesystem?
ext4
Zfsonlinux does not work in user space, it is a kernel module. Just try it.
Isn't fuse / zfs (partly?) in userspace?
--
Regards,
Christian
Les Mikesell wrote:
On Mon, Sep 15, 2014 at 1:16 PM, Fernando Cassia fcas...@gmail.com
wrote:
On Mon, Sep 15, 2014 at 4:37 AM, Andrew Holway andrew.hol...@gmail.com
wrote:
ZFS on Linux is backed by the US government as ZFS will be used as the
primary filesystem to back the parallel
On Mon, Sep 15, 2014 at 3:20 PM, Les Mikesell lesmikes...@gmail.com wrote:
Ummm, like you've walked on the moon
LOL. I will begin saying that the US government backs JavaFX then, just
because NASA uses it in some projects.
On Mon, Sep 15, 2014 at 3:27 PM, Chris chris2...@postbox.xyz wrote:
Isn't fuse / zfs (partly?) in userspace?
I believe there are two separate efforts to run ZFS on Linux. One uses FUSE;
the other reimplemented ZFS as a loadable kernel module.
FC
On Mon, Sep 15, 2014 at 3:18 AM, Miguel Medalha miguelmeda...@sapo.pt
wrote:
Zfsonlinux does not work in user space, it is a kernel module. Just try
it.
There's a copy-on-write file system in the GPL Linux kernel, merged into
the mainline Linux kernel in January 2009.
On Mon, Sep 15, 2014 at 03:29:31PM -0300, Fernando Cassia wrote:
On Mon, Sep 15, 2014 at 3:27 PM, Chris chris2...@postbox.xyz wrote:
Isn't fuse / zfs (partly?) in userspace?
I believe there are two separate efforts to run ZFS on Linux. One uses FUSE;
the other reimplemented ZFS as a
On Mon, 15 Sep 2014, Fernando Cassia wrote:
It's called BTRFS.
It's supported by SUSE, Fujitsu, Oracle, among others.
Yeah, but is it supported by the *US Government* ???
Steve
2014-09-15 22:51 GMT+03:00 Steve Thompson s...@vgersoft.com:
On Mon, 15 Sep 2014, Fernando Cassia wrote:
It's called BTRFS.
It's supported by SUSE, Fujitsu, Oracle, among others.
Yeah, but is it supported by the *US Government* ???
zfs release zero dot something does not sound like
On 9/15/2014 13:58, Eero Volotinen wrote:
zfs release zero dot something does not sound like production ready ?
https://clusterhq.com/blog/state-zfs-on-linux/
Steve Thompson wrote:
On Mon, 15 Sep 2014, Fernando Cassia wrote:
It's called BTRFS.
It's supported by SUSE, Fujitsu, Oracle, among others.
Yeah, but is it supported by the *US Government* ???
Hey, selinux is
Any comparison between ZFS and non-ZFS probably overlooks things like
fully-checksummed data (not just metadata) and redundant copies. ZFS will
always be slower than filesystems without these features. TANSTAAFL.
Not really true. It hugely depends on your workload. For example, if you
The SSD and second CPU core are not free.
Where I come from, the streets are paved with SLC
On Mon, September 15, 2014 4:49 pm, Andrew Holway wrote:
The SSD and second CPU core are not free.
Where I come from, the streets are paved with SLC
Is it Salt Lake City that you are from? (That is so rich with Second
Level Cache... that is what you actually meant, I figure.)
Am I the only
On 2014-09-15, Warren Young war...@etr-usa.com wrote:
On 9/15/2014 13:58, Eero Volotinen wrote:
zfs release zero dot something does not sound like production ready ?
https://clusterhq.com/blog/state-zfs-on-linux/
That's a super (and timely!) post on ZFS. I saw in particular this
section.
On 09/08/2014 09:00 PM, Andrew Holway wrote:
Try ZFS http://zfsonlinux.org/
Maybe you can tune ZFS further, but I tried it in userspace (with FUSE)
and reading was almost 5 times slower than MDADM.
--
Regards,
Christian
I had promised to weigh in on my experiences using ZFS in a production
environment. We've been testing it for a few months now, and confidence
is building. We started using it in production about a month ago,
after months of non-production testing.
I'll append my thoughts in a cross-post
On 11/30/2013 06:20 AM, Andrew Holway wrote:
Hey,
http://zfsonlinux.org/epel.html
If you have a little time and resource please install and report back
any problems you see.
Andrew,
I want to run /var on zfs, but when I try to move /var over it won't
boot thereafter, with errors about
Grub only needs to know about the filesystems that it uses to boot the
system. Mounting of the other file systems including /var is the
responsibility of the system that has been booted. I suspect that you have
something else wrong if you can't boot with /var/ on ZFS.
I may be wrong, but I don't
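One common way to sidestep mount-ordering problems for system paths like /var is a legacy mountpoint managed through fstab, so the ordinary boot machinery mounts it. A sketch, assuming a pool named tank (pool and dataset names hypothetical):

```shell
zfs create -o mountpoint=legacy tank/var          # ZFS no longer auto-mounts it
echo 'tank/var  /var  zfs  defaults  0 0' >> /etc/fstab
mount /var                                        # mounted via fstab from now on
```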
On 12/19/2013, 04:00 , li...@benjamindsmith.com wrote:
BackupPC is a great product, and if I knew of it and/or it was available
when I started, I would likely have used it instead of cutting code. Now
that we've got BackupBuddy working and integrated, we aren't going to be
switching as it
On 12/18/2013, 04:00 , li...@benjamindsmith.com wrote:
I may be being presumptuous, and if so, I apologize in advance...
It sounds to me like you might consider a disk-to-disk backup solution.
I could suggest dirvish, BackupPC, or our own home-rolled rsync-based
solution that works rather
On Wed, Dec 18, 2013 at 9:13 AM, Chuck Munro chu...@seafoam.net wrote:
Not presumptuous at all! I have not heard of backupbuddy (or dirvish),
so I should investigate. Your description makes it sound somewhat like
OS-X Time Machine, which I like a lot. I did try backuppc but it got a
bit
On 12/18/2013 07:50 AM, Les Mikesell wrote:
I've always considered backuppc to be one of those rare things that
you set up once and it takes care of itself for years. If you have
problems with it, someone on the backuppc mail list might be able to
help. It does tend to be slower than native
On Wed, Dec 18, 2013 at 3:13 PM, Lists li...@benjamindsmith.com wrote:
I would differentiate BackupBuddy in that there is no incremental and
full distinction. All backups are full in the truest sense of the
word,
For the people who don't know, backuppc builds a directory tree for
each backup
On 12/18/2013 03:04 PM, Les Mikesell wrote:
For the people who don't know, backuppc builds a directory tree for
each backup run where the full runs are complete and the incrementals
normally only contain the changed files. However, when you access the
incremental backups through the web
On 12/18/2013 3:41 PM, Lists wrote:
Should I read this as BackupPC now has its own filesystem driver? If
so, wow. Or do you mean that there are command line tools to read/copy
BackupPC save points?
web interface, primarily. you can restore any portion of any version of
any backup to the
On Wed, Dec 18, 2013 at 5:41 PM, Lists li...@benjamindsmith.com wrote:
On 12/18/2013 03:04 PM, Les Mikesell wrote:
For the people who don't know, backuppc builds a directory tree for
each backup run where the full runs are complete and the incrementals
normally only contain the changed files.
On 12/14/2013 08:50 AM, Chuck Munro wrote:
Hi Ben,
Yes, the initial replication of a large filesystem is *very* time
consuming! But it makes sleeping at night much easier. I did have to
crank up the inotify kernel parameters by a significant amount.
I did the initial replication using
On 12/14/2013, 04:00 , li...@benjamindsmith.com wrote:
We checked lsyncd out and it's most certainly a very interesting tool.
I *will* be using it in the future!
However, we found that it has some issues scaling up to really big file
stores that we haven't seen (yet) with ZFS.
For
On 12/04/2013 06:05 AM, John Doe wrote:
Not sure if I already mentioned it but maybe have a look at:
http://code.google.com/p/lsyncd/
We checked lsyncd out and it's most certainly a very interesting tool.
I *will* be using it in the future!
However, we found that it has some issues scaling
From: Lists li...@benjamindsmith.com
Our next big test is to try out ZFS filesystem send/receive in lieu of
our current backup processes based on rsync. Rsync is a fabulous tool,
but is beginning to show performance/scalability issues dealing with the
many millions of files being backed
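The send/receive workflow being tested can be sketched as follows; the pool, dataset, snapshot, and host names are all hypothetical:

```shell
# initial full replication to another machine
zfs snapshot tank/data@monday
zfs send tank/data@monday | ssh backuphost zfs receive backup/data

# afterwards, send only the delta between two snapshots
zfs snapshot tank/data@tuesday
zfs send -i tank/data@monday tank/data@tuesday | ssh backuphost zfs receive backup/data
```

Unlike rsync, the incremental send never walks the file tree; it streams exactly the blocks that changed between the two snapshots, which is why it scales past millions of files.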
Andrew,
We've been testing ZFS since about 10/24; see my original post (and
replies) asking about the suitability of ZFS on Linux in production on
this list. So far, it's been rather impressive. Enabling compression
better than halved the disk space utilization in a low/medium bandwidth
(mainly
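The compression result mentioned above corresponds to a one-line setting; a sketch, assuming a dataset tank/data (name hypothetical):

```shell
zfs set compression=lz4 tank/data   # applies to newly written blocks only
zfs get compressratio tank/data     # inspect the achieved ratio
```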
Hey,
http://zfsonlinux.org/epel.html
If you have a little time and resource please install and report back
any problems you see.
A filesystem or Volume sits within a zpool
a zpool is made up of vdevs
vdevs are made up of block devices.
zpool is similar to LVM volume
vdev is similar to raid set
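That hierarchy looks like this in practice; the device names are hypothetical:

```shell
zpool create tank raidz2 sda sdb sdc sdd sde sdf   # six disks form one raidz2 vdev inside pool "tank"
zfs create tank/home                               # a filesystem carved from the pool
zpool add tank raidz2 sdg sdh sdi sdj sdk sdl      # grow the pool by adding a second vdev
```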
On 24.Okt.2013, at 22:59, John R Pierce wrote:
On 10/24/2013 1:41 PM, Lists wrote:
Was wondering if anybody here could weigh in with real-life experience?
Performance/scalability?
I've only used ZFS on Solaris and FreeBSD. Some general observations...
...
3) NEVER let a zpool fill up
On Mon, Nov 4, 2013 at 12:15 PM, Markus Falb wne...@gmail.com wrote:
3) NEVER let a zpool fill up above about 70% full, or the performance
really goes downhill.
Why is it? It sounds cost intensive, if not ridiculous.
Disk space not to be used, forbidden land...
Is the remaining 30% used by
On 11/4/2013 3:21 PM, Nicolas Thierry-Mieg wrote:
but why would this be much worse with ZFS than eg ext4?
because ZFS works considerably differently than extfs... it's a
copy-on-write system to start with.
--
john r pierce 37N 122W
somewhere on the middle
Greetings,
On Fri, Oct 25, 2013 at 3:57 AM, Keith Keller
kkel...@wombat.san-francisco.ca.us wrote:
I don't have my own, but I have heard of other shops which have had lots
of success with ZFS on OpenSolaris and their variants.
And I know of a shop which could not recover a huge ZFS on freebsd
On 10/25/2013 11:14 AM, Chuck Munro wrote:
To keep the two servers in sync I use 'lsyncd' which is essentially a
front-end for rsync that cuts down thrashing and overhead dramatically
by excluding the full filesystem scan and using inotify to figure out
what to sync. This allows
To be honest, isn't it easier to install FreeBSD or Solaris on the server, where
ZFS is natively supported? I moved my own server to FreeBSD and I didn't notice a
huge difference between Linux distros and FreeBSD; I have no idea about Solaris,
but it might still be a similar environment.
Sent from
On Thu, Oct 24, 2013 at 01:41:17PM -0700, Lists wrote:
We are a CentOS shop, and have the lucky, fortunate problem of having
ever-increasing amounts of data to manage. EXT3/4 becomes tough to
manage when you start climbing, especially when you have to upgrade, so
we're contemplating
On Thu, Oct 24, 2013 at 01:59:15PM -0700, John R Pierce wrote:
On 10/24/2013 1:41 PM, Lists wrote:
Was wondering if anybody here could weigh in with real-life experience?
Performance/scalability?
I've only used ZFS on Solaris and FreeBSD. Some general observations...
1) you need a
On Oct 24, 2013, at 8:01 PM, Lists li...@benjamindsmith.com wrote:
Not sure enough of the vernacular
Yes, ZFS is complicated enough to have a specialized vocabulary.
I used two of these terms in my previous post:
- vdev, which is a virtual device, something like a software RAID. It is one
On 10/24/2013 11:18 PM, Warren Young wrote:
All of the ZFSes out there are crippled relative to what's shipping in
Solaris now, because Oracle has stopped releasing code. There are nontrivial
features in zpool v29+, which simply aren't in the free forks of older
versions of the Sun code.
On 10/24/2013 11:18 PM, Warren Young wrote:
- vdev, which is a virtual device, something like a software RAID. It is one
or more disks, configured together, typically with some form of redundancy.
- pool, which is one or more vdevs, which has a capacity equal to all of its
vdevs added
On 10/25/2013 10:33 AM, Lists wrote:
LVM2 complicates administration terribly.
huh? it hugely simplifies it for me, when I have lots of drives. I just
wish mdraid and lvm were better integrated. to see how it should have
been done, see IBM AIX's version of lvm. You grow a jfs file system,