On 03-07-2018 at 15:39, Nux! wrote:
Watch out, ZFS on Linux is not as good as on FreeBSD/Solaris.
Just recently there was an issue with data loss.
https://github.com/zfsonlinux/zfs/issues/7401
I know; I was on the very same github issue you linked.
While the bug was very unfortunate, ZFS remai
But besides this issue, it has been rock solid in my experience.
On Tue, Jul 3, 2018 at 3:40 PM Nux! wrote:
> Watch out, ZFS on Linux is not as good as on FreeBSD/Solaris.
> Just recently there was an issue with data loss.
> https://github.com/zfsonlinux/zfs/issues/7401
>
> hth
> Lucian
>
Watch out, ZFS on Linux is not as good as on FreeBSD/Solaris.
Just recently there was an issue with data loss.
https://github.com/zfsonlinux/zfs/issues/7401
hth
Lucian
--
Sent from the Delta quadrant using Borg technology!
Nux!
www.nux.ro
- Original Message -
> From: "Gionatan Danti"
>
I've been using ZFS on Linux with CentOS 6 for almost two years.
http://zfsonlinux.org/
I'm not booting from it or using it for any VM stuff, just for storage
disks.
Recently updated to ZOL version 0.7.9.
To install, I simply follow the instructions from this page:
https://github.com/zfsonlinux/zf
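The install steps referenced there amount to roughly the following; the repo URL and package names follow the ZoL wiki for EL6/EL7 and may differ for your distro release, so treat this as a sketch rather than a recipe:

```shell
# Add the ZFS on Linux release repo (EL6 shown; adjust for your release)
sudo yum install -y http://download.zfsonlinux.org/epel/zfs-release.el6.noarch.rpm

# Install ZFS; the default DKMS packages rebuild the kernel module on kernel updates
sudo yum install -y kernel-devel zfs

# Load the module and confirm it is available
sudo modprobe zfs
lsmod | grep zfs
```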
On 25-06-2018 at 23:59, Yves Bellefeuille wrote:
I think the simplest solution would be to add this (which I haven't
tried):
http://zfsonlinux.org/
https://github.com/zfsonlinux/zfs/wiki/RHEL-and-CentOS
to the Available Repositories for CentOS wiki page:
https://wiki.centos.org/fr/AdditionalR
Gionatan Danti wrote:
> I searched the list but I did not find anything regarding native ZFS.
> Any feedback on the matter is welcomed. Thanks.
I think the simplest solution would be to add this (which I haven't
tried):
http://zfsonlinux.org/
https://github.com/zfsonlinux/zfs/wiki/RHEL-and-Cent
I had promised to weigh in on my experiences using ZFS in a production
environment. We've been testing it for a few months now, and confidence
is building. We've started using it in production about a month ago
after months of non production testing.
I'll append my thoughts in a cross-post from
On 1/6/2014 3:26 PM, Cliff Pratt wrote:
> Grub only needs to know about the filesystems that it uses to boot the
> system. Mounting of the other file systems including /var is the
> responsibility of the system that has been booted. I suspect that you have
> something else wrong if you can't boot w
Grub only needs to know about the filesystems that it uses to boot the
system. Mounting of the other file systems including /var is the
responsibility of the system that has been booted. I suspect that you have
something else wrong if you can't boot with /var/ on ZFS.
I may be wrong, but I don't t
On 11/30/2013 06:20 AM, Andrew Holway wrote:
> Hey,
>
> http://zfsonlinux.org/epel.html
>
> If you have a little time and resource please install and report back
> any problems you see.
>
Andrew,
I want to run /var on zfs, but when I try to move /var over it won't
boot thereafter, with errors ab
On 12/19/2013, 04:00 , li...@benjamindsmith.com wrote:
> BackupPC is a great product, and if I knew of it and/or it was available
> when I started, I would likely have used it instead of cutting code. Now
> that we've got BackupBuddy working and integrated, we aren't going to be
> switching as it
On Wed, Dec 18, 2013 at 5:41 PM, Lists wrote:
> On 12/18/2013 03:04 PM, Les Mikesell wrote:
>> For the people who don't know, backuppc builds a directory tree for
>> each backup run where the full runs are complete and the incrementals
>> normally only contain the changed files. However, when you
On 12/18/2013 3:41 PM, Lists wrote:
> Should I read this as "BackupPC now has its own filesystem driver"? If
> so, wow. Or do you mean that there are command line tools to read/copy
> BackupPC save points?
web interface, primarily. you can restore any portion of any version of
any backup to the
On 12/18/2013 03:04 PM, Les Mikesell wrote:
> For the people who don't know, backuppc builds a directory tree for
> each backup run where the full runs are complete and the incrementals
> normally only contain the changed files. However, when you access the
> incremental backups through the web int
On Wed, Dec 18, 2013 at 3:13 PM, Lists wrote:
> >
> I would differentiate BackupBuddy in that there is no "incremental" and
> "full" distinction. All backups are "full" in the truest sense of the
> word,
For the people who don't know, backuppc builds a directory tree for
each backup run where the
On 12/18/2013 07:50 AM, Les Mikesell wrote:
> I've always considered backuppc to be one of those rare things that
> you set up once and it takes care of itself for years. If you have
> problems with it, someone on the backuppc mail list might be able to
> help. It does tend to be slower than nat
On Wed, Dec 18, 2013 at 9:13 AM, Chuck Munro wrote:
>
> Not presumptuous at all! I have not heard of backupbuddy (or dirvish),
> so I should investigate. Your description makes it sound somewhat like
> OS-X Time Machine, which I like a lot. I did try backuppc but it got a
> bit complex to manag
On 12/18/2013, 04:00 , li...@benjamindsmith.com wrote:
> I may be being presumptuous, and if so, I apologize in advance...
>
> It sounds to me like you might consider a disk-to-disk backup solution.
> I could suggest dirvish, BackupPC, or our own home-rolled rsync-based
> solution that works rath
On 12/14/2013 08:50 AM, Chuck Munro wrote:
> Hi Ben,
>
> Yes, the initial replication of a large filesystem is *very* time
> consuming! But it makes sleeping at night much easier. I did have to
> crank up the inotify kernel parameters by a significant amount.
>
> I did the initial replication usi
On 12/14/2013, 04:00 , li...@benjamindsmith.com wrote:
> We checked lsyncd out and it's most certainly a very interesting tool.
> I *will* be using it in the future!
>
> However, we found that it has some issues scaling up to really big file
> stores that we haven't seen (yet) with ZFS.
>
> For
On 12/04/2013 06:05 AM, John Doe wrote:
> Not sure if I already mentioned it but maybe have a look at:
> http://code.google.com/p/lsyncd/
We checked lsyncd out and it's most certainly a very interesting tool.
I *will* be using it in the future!
However, we found that it has some issues scalin
On 04.12.2013 14:05, n...@li.nux.ro wrote:
> On 04.12.2013 14:05, John Doe wrote:
>> From: Lists
>>
>>> Our next big test is to try out ZFS filesystem send/receive in
>>> lieu of
>>> our current backup processes based on rsync. Rsync i
On 05.12.2013 22:46, Chuck Munro wrote:
> On 04.12.2013 14:05, John Doe wrote:
>> From: Lists
>>
>>> Our next big test is to try out ZFS filesystem send/receive in
>>> lieu of
>>> our current backup processes based on rsync. Rsync is a fabulous
>>> tool,
>>> but is begi
> On 04.12.2013 14:05, John Doe wrote:
>> From: Lists
>>
>>> Our next big test is to try out ZFS filesystem send/receive in lieu of
>>> our current backup processes based on rsync. Rsync is a fabulous
>>> tool,
>>> but is beginning to show performance/scalability issues dealing wi
On 04.12.2013 14:05, John Doe wrote:
> From: Lists
>
>> Our next big test is to try out ZFS filesystem send/receive in lieu of
>> our current backup processes based on rsync. Rsync is a fabulous tool,
>> but is beginning to show performance/scalability issues dealing with the
>> many
From: Lists
> Our next big test is to try out ZFS filesystem send/receive in lieu of
> our current backup processes based on rsync. Rsync is a fabulous tool,
> but is beginning to show performance/scalability issues dealing with the
> many millions of files being backed up, and we're hoping th
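The send/receive workflow being tested replaces rsync's per-file tree walk with block-level snapshot deltas; a minimal sketch, where the pool, dataset, and host names are placeholders:

```shell
# One-time full replication of a snapshot to the backup host
zfs snapshot tank/data@base
zfs send tank/data@base | ssh backuphost zfs receive backup/data

# Subsequent runs send only the incremental delta between two snapshots,
# with no scan over the millions of individual files
zfs snapshot tank/data@2013-12-04
zfs send -i @base tank/data@2013-12-04 | ssh backuphost zfs receive backup/data
```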
On Sat, Nov 30, 2013 at 9:20 AM, Andrew Holway wrote:
> Hey,
>
> http://zfsonlinux.org/epel.html
>
> If you have a little time and resource please install and report back
> any problems you see.
>
> A filesystem or Volume sits within a zpool
> a zpool is made up of vdevs
> vdevs are made up of blo
Andrew,
We've been testing ZFS since about 10/24, see my original post (and
replies) asking about its suitability "ZFS on Linux in production" on
this list. So far, it's been rather impressive. Enabling compression
better than halved the disk space utilization in a low/medium bandwidth
(mainly
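The compression win mentioned above is a one-line property change; a sketch assuming a pool named 'tank' (lz4 requires a pool recent enough to support the feature; `compression=on` falls back to lzjb on older pools):

```shell
zfs set compression=lz4 tank              # property is inherited by child datasets
zfs get compression,compressratio tank    # compressratio shows the achieved savings
```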
On 11/4/2013 3:21 PM, Nicolas Thierry-Mieg wrote:
> but why would this be much worse with ZFS than eg ext4?
because ZFS works considerably differently than extfs... it's a
copy-on-write system to start with.
--
john r pierce 37N 122W
somewhere on the middle
On 11/04/2013 08:01 PM, John R Pierce wrote:
> On 11/4/2013 10:43 AM, Les Mikesell wrote:
>> On Mon, Nov 4, 2013 at 12:15 PM, Markus Falb wrote:
>>
>>> 3) NEVER let a zpool fill up above about 70% full, or the performance
>>> really goes downhill.
>>>
Why is it? It sounds cost inte
On 11/4/2013 10:43 AM, Les Mikesell wrote:
> On Mon, Nov 4, 2013 at 12:15 PM, Markus Falb wrote:
>
>>> 3) NEVER let a zpool fill up above about 70% full, or the performance
>>> really goes downhill.
>>
>> Why is it? It sounds cost intensive, if not ridiculous.
>> Disk space not to used,
On Mon, Nov 4, 2013 at 12:15 PM, Markus Falb wrote:
>
>> 3) NEVER let a zpool fill up above about 70% full, or the performance
>> really goes downhill.
>
> Why is it? It sounds cost intensive, if not ridiculous.
> Disk space not to used, forbidden land...
> Is the remaining 30% used by some ZFS in
On 24.Okt.2013, at 22:59, John R Pierce wrote:
> On 10/24/2013 1:41 PM, Lists wrote:
>> Was wondering if anybody here could weigh in with real-life experience?
>> Performance/scalability?
>
> I've only used ZFS on Solaris and FreeBSD. Some general observations...
...
> 3) NEVER let a zpool
Greetings,
On Fri, Oct 25, 2013 at 3:57 AM, Keith Keller
wrote:
>
> I don't have my own, but I have heard of other shops which have had lots
> of success with ZFS on OpenSolaris and their variants.
And I know of a shop which could not recover a huge ZFS on FreeBSD and
had to opt for something li
To be honest, isn't it easier to install FreeBSD or Solaris on the server,
where ZFS is natively supported? I moved my own server to FreeBSD and didn't
notice a huge difference between Linux distros and FreeBSD; I have no idea
about Solaris, but it is probably a similar environment.
On 10/25/2013 11:14 AM, Chuck Munro wrote:
> To keep the two servers in sync I use 'lsyncd' which is essentially a
> front-end for rsync that cuts down thrashing and overhead dramatically
> by excluding the full filesystem scan and using inotify to figure out
> what to sync. This allows almost-rea
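For reference, lsyncd can be driven either from a Lua config file or, for a simple one-directory case, straight from the command line; the host and path below are placeholders:

```shell
# Watch /srv/data with inotify and push changes to the standby via rsync over ssh;
# only changed paths are synced, so there is no full filesystem scan per run
lsyncd -rsync /srv/data standby.example.com:/srv/data
```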
On Sat, Oct 26, 2013 at 4:36 PM, Ray Van Dolson wrote:
> On Thu, Oct 24, 2013 at 01:59:15PM -0700, John R Pierce wrote:
> > On 10/24/2013 1:41 PM, Lists wrote:
> > > Was wondering if anybody here could weigh in with real-life experience?
> > > Performance/scalability?
> >
> > I've only used ZFS o
On Thu, Oct 24, 2013 at 01:59:15PM -0700, John R Pierce wrote:
> On 10/24/2013 1:41 PM, Lists wrote:
> > Was wondering if anybody here could weigh in with real-life experience?
> > Performance/scalability?
>
> I've only used ZFS on Solaris and FreeBSD. Some general observations...
>
> 1) you n
On Thu, Oct 24, 2013 at 01:41:17PM -0700, Lists wrote:
> We are a CentOS shop, and have the lucky, fortunate problem of having
> ever-increasing amounts of data to manage. EXT3/4 becomes tough to
> manage when you start climbing, especially when you have to upgrade, so
> we're contemplating swit
On 10/26/2013 06:40 AM, John R Pierce wrote:
>
> to see how it should have
> been done, see IBM AIX's version of lvm. You grow a jfs file system,
> it automatically grows the underlying LV (logical volume), online,
> live.
lvm can do this with the --resizefs flag for lvextend, one command t
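A sketch of that one-command grow, assuming an LV named data in a VG named vg0:

```shell
# Extend the logical volume by 10G and resize the filesystem in the same step
lvextend --resizefs --size +10G /dev/vg0/data
```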
On re-reading, I realized I didn't complete some of my thoughts:
On 10/25/2013 00:18, Warren Young wrote:
> ZFS is nicer in this regard, in that it lets you schedule the scrub
> operation. You can obviously schedule one for btrfs,
...with cron...
> but that doesn't take into account scrub time.
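Scheduling the scrub is indeed a one-line cron entry; a sketch assuming a pool named 'tank' (and note cron only gives you a start time, not a bound on how long the scrub runs):

```shell
# /etc/cron.d/zfs-scrub: kick off a scrub every Sunday at 02:00
0 2 * * 0 root /sbin/zpool scrub tank
```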
On 10/25/2013 1:26 PM, Warren Young wrote:
> On 10/25/2013 00:44, John R Pierce wrote:
>> current version of OpenZFS no longer relies on 'version numbers',
>> instead it has 'feature flags' for all post v28 features.
> This must be the zpool v5000 thing I saw while researching my previous
> post.
On 10/25/2013 11:33, Lists wrote:
>
> I'm just trying to find the best tool for the job.
Try everything. Seriously.
You won't know what you like, and what works *for you* until you have
some experience. Buy a Drobo for the home, replace one of your old file
servers with a FreeBSD ZFS box, ena
On 10/25/2013 00:44, John R Pierce wrote:
> current version of OpenZFS no longer relies on 'version numbers',
> instead it has 'feature flags' for all post v28 features.
This must be the zpool v5000 thing I saw while researching my previous
post. Apparently ZFSonLinux is doing the same thing, or
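The v5000 sentinel can be inspected directly; a sketch assuming a pool named 'tank':

```shell
zpool get version tank       # feature-flag pools report '-' (shown by tools as v5000)
zpool upgrade -v             # lists legacy versions plus supported feature flags
zpool get all tank | grep 'feature@'   # per-feature state: disabled/enabled/active
```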
On Fri, Oct 25, 2013 at 1:40 PM, John R Pierce wrote:
> On 10/25/2013 10:33 AM, Lists wrote:
>> LVM2 complicates administration terribly.
>
> huh? it hugely simplifies it for me, when I have lots of drives. I just
> wish mdraid and lvm were better integrated. to see how it should have
> been don
On 10/25/2013, 05:00 , centos-requ...@centos.org wrote:
> We are a CentOS shop, and have the lucky, fortunate problem of having
> ever-increasing amounts of data to manage. EXT3/4 becomes tough to
> manage when you start climbing, especially when you have to upgrade, so
> we're contemplating switc
On 10/25/2013 10:33 AM, Lists wrote:
> LVM2 complicates administration terribly.
huh? it hugely simplifies it for me, when I have lots of drives. I just
wish mdraid and lvm were better integrated. to see how it should have
been done, see IBM AIX's version of lvm.you grow a jfs file system,
On 10/24/2013 11:18 PM, Warren Young wrote:
> - vdev, which is a virtual device, something like a software RAID. It is one
> or more disks, configured together, typically with some form of redundancy.
>
> - pool, which is one or more vdevs, which has a capacity equal to all of its
> vdevs added
On 10/24/2013 11:18 PM, Warren Young wrote:
> All of the ZFSes out there are crippled relative to what's shipping in
> Solaris now, because Oracle has stopped releasing code. There are nontrivial
> features in zpool v29+, which simply aren't in the free forks of older
> versions of the Sun code.
On Oct 24, 2013, at 8:01 PM, Lists wrote:
> Not sure enough of the vernacular
Yes, ZFS is complicated enough to have a specialized vocabulary.
I used two of these terms in my previous post:
- vdev, which is a virtual device, something like a software RAID. It is one
or more disks, configured
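Those two terms map directly onto the commands; a sketch with placeholder device names:

```shell
# Two mirror vdevs aggregated into one pool; pool capacity is the sum of the vdevs
zpool create tank mirror sda sdb mirror sdc sdd
zpool status tank    # prints the pool -> vdev -> disk hierarchy
```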
On 10/24/2013 05:29 PM, Warren Young wrote:
> On 10/24/2013 17:12, Lists wrote:
>> 2) The ability to make the partition bigger by adding drives with very
>> minimal/no downtime.
> Be careful: you may have been reading some ZFS hype that turns out not
> as rosy in reality. Ideally, ZFS would work like
On 10/24/2013 5:29 PM, Warren Young wrote:
> The least complicated*safe* way to add 1 TB to a pool is add*two* 1 TB
> disks to the system, create a ZFS mirror out of them, and add*that*
> vdev to the pool. That gets you 1 TB of redundant space, which is what
> you actually wanted. Just realiz
On 10/24/2013 5:31 PM, Warren Young wrote:
> To be fair, you want to treat XFS the same way.
>
> And it, too, is "unstable" on 32-bit systems with anything but smallish
> filesystems, due to lack of RAM.
I thought it had stack requirements that 32 bit couldn't meet, and it
would simply crash, so i
On 10/24/2013 14:59, John R Pierce wrote:
> On 10/24/2013 1:41 PM, Lists wrote:
>
> 1) you need a LOT of ram for decent performance on large zpools. 1GB ram
> above your basic system/application requirements per terabyte of zpool
> is not unreasonable.
To be fair, you want to treat XFS the same wa
On 10/24/2013 17:12, Lists wrote:
>
> 2) The ability to make the partition bigger by adding drives with very
> minimal/no downtime.
Be careful: you may have been reading some ZFS hype that turns out not
as rosy in reality.
Ideally, ZFS would work like a Drobo with an infinite number of drive
b
On 10/24/2013 4:12 PM, Lists wrote:
> On 10/24/2013 02:47 PM, SilverTip257 wrote:
>> You didn't mention XFS.
>> Just curious if you considered it or not.
> Most definitely. There are a few features that I'm looking for:
>
> 1) MOST IMPORTANT: STABLE!
XFS is quite stable in CentOS 6.4 64bit.
ther
We tested ZFS on CentOS 6.4 a few months ago using a decent Supermicro
server with 16GB RAM and 11 drives in RAIDZ3. Same specs as a mid-range
storage server that we build mainly using FreeBSD.
Performance was not bad, but eventually we ran into a situation where we
could not import a pool anymore
On 25.10.2013 at 00:47, John R Pierce wrote:
> On 10/24/2013 2:59 PM, Lists wrote:
(*) ran into a guy who had 100s of zfs 'file systems' (mount points),
per user home directories, and was doing nightly snapshots going back
several years, and his zfs commands were taking a long lo
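The per-user layout described there is typically driven by a nightly job along these lines (dataset names are placeholders); the slowdown comes from the sheer number of accumulated snapshots, so expiry matters as much as creation:

```shell
# Recursive snapshot of every per-user filesystem under tank/home
zfs snapshot -r "tank/home@nightly-$(date +%F)"

# Listing and pruning; with years of nightlies, these lists get very long
zfs list -t snapshot -r tank/home
zfs destroy -r tank/home@nightly-2010-01-01   # expire one night's snapshots recursively
```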
On 10/24/2013 02:47 PM, SilverTip257 wrote:
> You didn't mention XFS.
> Just curious if you considered it or not.
Most definitely. There are a few features that I'm looking for:
1) MOST IMPORTANT: STABLE!
2) The ability to make the partition bigger by adding drives with very
minimal/no downtim
On 10/24/2013 2:59 PM, Lists wrote:
>> (*) ran into a guy who had 100s of zfs 'file systems' (mount points),
>> per user home directories, and was doing nightly snapshots going back
>> several years, and his zfs commands were taking a long long time to do
>> anything, and he couldn't figure out
On 2013-10-24, SilverTip257 wrote:
> On Thu, Oct 24, 2013 at 4:41 PM, Lists wrote:
>
>> We are a CentOS shop, and have the lucky, fortunate problem of having
>> ever-increasing amounts of data to manage. EXT3/4 becomes tough to
>> manage when you start climbing, especially when you have to upgrad
On 10/24/2013 01:59 PM, John R Pierce wrote:
> 1) you need a LOT of ram for decent performance on large zpools. 1GB ram
> above your basic system/application requirements per terabyte of zpool
> is not unreasonable.
That seems quite reasonable to me. Our existing equipment has far more
than eno
On Thu, Oct 24, 2013 at 4:41 PM, Lists wrote:
> We are a CentOS shop, and have the lucky, fortunate problem of having
> ever-increasing amounts of data to manage. EXT3/4 becomes tough to
> manage when you start climbing, especially when you have to upgrade, so
> we're contemplating switching to Z
On 10/24/2013 1:41 PM, Lists wrote:
> Was wondering if anybody here could weigh in with real-life experience?
> Performance/scalability?
I've only used ZFS on Solaris and FreeBSD. Some general observations...
1) you need a LOT of ram for decent performance on large zpools. 1GB ram
above your
ext4 isn't going to help too much. Our biggest concerns are
compression and unlimited inodes.
On Mon, Jan 5, 2009 at 8:36 AM, Matej Cepl wrote:
> On 2008-12-30, 15:32 GMT, Tony Placilla wrote:
>> The root answer is that if he wants to use ZFS (which is
>> a *good* choice) he should use some f
On 2008-12-30, 15:32 GMT, Tony Placilla wrote:
> The root answer is that if he wants to use ZFS (which is
> a *good* choice) he should use some flavor of Solaris
I would just add that RHEL 5.3 (and thus CentOS 5.3) when it
happens, will have ext4fs as a technology preview, which may
fulfill OT
On Mon, Dec 29, 2008 at 7:24 PM, in message
<49596a48.4000...@bradbury.edu.hk>, Christopher Chan
wrote:
>> I agree in general with most every opinion. Especially Davide's comment
> above. Very good analogy
>> Open Solaris may be your best choice.
>> I would suggest you do pay attention t
> I agree in general with most every opinion. Especially Davide's comment
> above. Very good analogy
> Open Solaris may be your best choice.
> I would suggest you do pay attention to Solaris itself. It's free (as in
> beer) from Sun & it works.
Except for patches unless you want to browse Sun's
On Mon, Dec 29, 2008 at 2:54 AM, in message
<8e388b67-1d39-4095-95c5-132b02e4f...@ifom-ieo-campus.it>, Davide Cittaro
wrote:
> On Dec 29, 2008, at 7:09 AM, John R Pierce wrote:
>
>> Bill Campbell wrote:
>>> I would go with Opensolaris.
>>
>>
>> for a dedicated production storage server,
Mag Gam wrote:
> I am planning to use ZFS on my Centos 5.2 systems. The data I am
> storing is very large text files where each file can range from 10M to
> 20G. I am very interested on the compression feature of ZFS, and it
> seems no other native Linux FS supports it.
>
> My question are: Is ZFS
On Dec 29, 2008, at 7:09 AM, John R Pierce wrote:
> Bill Campbell wrote:
>> I would go with Opensolaris.
>
>
> for a dedicated production storage server, I would go with Solaris 10.
> unless there's some specific feature/capability you need thats only in
> OpenSolaris.
Totally agree. Solaris 10
Bill Campbell wrote:
> I would go with Opensolaris.
for a dedicated production storage server, I would go with Solaris 10.
unless there's some specific feature/capability you need thats only in
OpenSolaris.
On Sun, Dec 28, 2008, Davide Cittaro wrote:
>
>On Dec 28, 2008, at 7:16 PM, Mag Gam wrote:
>
>> I am planning to use ZFS on my Centos 5.2 systems. The data I am
>> storing is very large text files where each file can range from 10M to
>> 20G. I am very interested on the compression feature of ZFS,
Mag Gam wrote:
> I am planning to use ZFS on my Centos 5.2 systems. The data I am
> storing is very large text files where each file can range from 10M to
> 20G. I am very interested on the compression feature of ZFS, and it
> seems no other native Linux FS supports it.
>
> My question are: Is ZFS
On Dec 28, 2008, at 9:17 PM, Jure Pečar wrote:
> Nexenta is the next
> best thing if you want "kinda linux like" environment, altough linux
> zones
> on solaris might also be worth checking.
Definitely, even if support for kernel 2.6 is not completely working
(at least not 2 months ago...).
On Sun, 28 Dec 2008 20:27:10 +0100
Vnpenguin wrote:
> On Sun, Dec 28, 2008 at 8:02 PM, Les Mikesell
> wrote:
> > Mag Gam wrote:
> >> Any thoughts or ideas?
> >
> > I'd be surprised if anyone is using zfs/fuse/linux combinations
> > seriously. Why not just run your archive server on opensolaris
thanks everyone for your fair and balanced opinions and experiences!
On Sun, Dec 28, 2008 at 2:38 PM, Davide Cittaro
wrote:
>
> On Dec 28, 2008, at 7:16 PM, Mag Gam wrote:
>
>> I am planning to use ZFS on my Centos 5.2 systems. The data I am
>> storing is very large text files where each file c
On Dec 28, 2008, at 7:16 PM, Mag Gam wrote:
> I am planning to use ZFS on my Centos 5.2 systems. The data I am
> storing is very large text files where each file can range from 10M to
> 20G. I am very interested on the compression feature of ZFS, and it
> seems no other native Linux FS supports i
On 28.12.2008 at 20:02, Les Mikesell wrote:
> Mag Gam wrote:
>> I am planning to use ZFS on my Centos 5.2 systems. The data I am
>> storing is very large text files where each file can range from 10M
>> to
>> 20G. I am very interested on the compression feature of ZFS, and it
>> seems no other
On Sun, Dec 28, 2008 at 8:02 PM, Les Mikesell wrote:
> Mag Gam wrote:
>> I am planning to use ZFS on my Centos 5.2 systems. The data I am
>> storing is very large text files where each file can range from 10M to
>> 20G. I am very interested on the compression feature of ZFS, and it
>> seems no oth
Mag Gam wrote:
> I am planning to use ZFS on my Centos 5.2 systems. The data I am
> storing is very large text files where each file can range from 10M to
> 20G. I am very interested on the compression feature of ZFS, and it
> seems no other native Linux FS supports it.
>
> My question are: Is ZFS