Yep, and thank you to SUSE, Fujitsu, and all the contributors.
I suppose we can all be charitable when reading this from the Red Hat
Whitepaper at:
https://www.redhat.com/whitepapers/rha/gfs/GFS_INS0032US.pdf:
<<
Red Hat GFS is the world’s leading cluster file system for Linux.
>>
If that is
On Thu, Aug 17, 2017 at 5:47 AM, Austin S. Hemmelgarn
wrote:
> Also, I don't think I've ever seen any patches posted from a Red Hat address
> on the ML, so I don't think they were really all that involved in
> development to begin with.
Unfortunately the email domain
On 2017-08-17 02:25, GWB wrote:
<<
Or else it could be an argument that they
expect Btrfs to do their job while they watch cat videos from the
intertubes. :-)
>>
My favourite quote from the list this week, and, well, obviously, that
is the main selling point of file systems like btrfs, zfs, and various
other lvm and raid set
On Wed, Aug 16, 2017 at 8:01 AM, Qu Wenruo wrote:
> BTW, when Fujitsu tested the postgresql workload on btrfs, the result is
> quite interesting.
>
> For HDD, when number of clients is low, btrfs shows obvious performance
> drop.
> And the problem seems to be mandatory
On Wed, Aug 16, 2017 at 09:53:57AM -0400, Austin S. Hemmelgarn wrote:
> > So apart from some central DBs for the storage management system
> > itself, CoW is mostly no issue for us.
> > But I've talked to some friend at the local super computing centre and
> > they have rather general issues with
On Thu, Aug 03, 2017 at 08:08:59PM +0200, waxhead wrote:
> BTRFS biggest problem is not that there are some bits and pieces that
> are thoroughly screwed up (raid5/6 (which just got some fixes by the
> way)), but the fact that the documentation is rather dated.
>
> There is a simple status
[ ... ]
>>> Snapshots work fine with nodatacow, each block gets CoW'ed
>>> once when it's first written to, and then goes back to being
>>> NOCOW.
>>> The only caveat is that you probably want to defrag either
>>> once everything has been rewritten, or right after the
>>> snapshot.
>> I thought
[ ... ]
> But I've talked to some friend at the local super computing
> centre and they have rather general issues with CoW at their
> virtualisation cluster.
Amazing news! :-)
> Like SUSE's snapper making many snapshots leading the storage
> images of VMs apparently to explode (in terms of
> We use the crcs to catch storage gone wrong, [ ... ]
And that's a perfectly feasible idea, given that current
CPUs can do that in real time.
> [ ... ] It's possible to protect against all three without COW,
> but all solutions have their own tradeoffs and this is the setup
> we chose.
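For what it's worth, the "CPUs can do that in real-time" point is easy to sanity-check even from userspace. A rough sketch using plain CRC32 from Python's zlib (the kernel's hardware-assisted crc32c is faster still, so this is a conservative lower bound, not a btrfs benchmark):

```python
import time
import zlib

# Checksum 256 MiB in 64 KiB blocks, chaining the running CRC, and
# time it -- even interpreted Python keeps up with typical disk speeds.
buf = bytes(64 * 1024)            # one 64 KiB block of zeros
start = time.perf_counter()
crc = 0
for _ in range(4096):             # 4096 * 64 KiB = 256 MiB
    crc = zlib.crc32(buf, crc)    # second arg continues the running CRC
elapsed = time.perf_counter() - start
print(f"256 MiB checksummed in {elapsed:.2f}s")
```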
On 2017-08-16 10:11, Christoph Anton Mitterer wrote:
On Wed, 2017-08-16 at 09:53 -0400, Austin S. Hemmelgarn wrote:
> Go try BTRFS on top of dm-integrity, or on a
> system with T10-DIF or T13-EPP support
When dm-integrity is used... would that be enough for btrfs to do a
proper repair in the RAID+nodatacow case? I assume it can't do repairs
now
On 2017-08-16 21:12, Chris Mason wrote:
On Mon, Aug 14, 2017 at 09:54:48PM +0200, Christoph Anton Mitterer wrote:
On Mon, 2017-08-14 at 11:53 -0400, Austin S. Hemmelgarn wrote:
Quite a few applications actually _do_ have some degree of secondary
verification or protection from a crash. Go
On 2017-08-16 09:12, Chris Mason wrote:
My real goal is to make COW fast enough that we can leave it on for the
database applications too. Obviously I haven't quite finished that one
yet ;) But I'd rather keep the building block of all the other btrfs
features in place than try to do crcs
On 2017-08-16 09:31, Christoph Anton Mitterer wrote:
Just out of curiosity:
On Wed, 2017-08-16 at 09:12 -0400, Chris Mason wrote:
> Btrfs couples the crcs with COW because
this (which sounds like you want it to stay coupled that way)...
plus
> It's possible to protect against all three without COW, but all
> solutions have their own tradeoffs
On Mon, Aug 14, 2017 at 09:54:48PM +0200, Christoph Anton Mitterer wrote:
On Mon, 2017-08-14 at 11:53 -0400, Austin S. Hemmelgarn wrote:
Quite a few applications actually _do_ have some degree of secondary
verification or protection from a crash. Go look at almost any
database
software.
On 2017-08-15 10:41, Christoph Anton Mitterer wrote:
On Tue, 2017-08-15 at 07:37 -0400, Austin S. Hemmelgarn wrote:
> Go look at Chrome, or Firefox, or Opera, or any other major web
> browser.
> At minimum, they will safely bail out if they detect corruption in
> the
> user profile and can trivially resync the profile from another system
> if
>
On 2017-08-14 15:54, Christoph Anton Mitterer wrote:
On Mon, 2017-08-14 at 11:53 -0400, Austin S. Hemmelgarn wrote:
Quite a few applications actually _do_ have some degree of secondary
verification or protection from a crash. Go look at almost any
database
software.
Then please give proper
On 08/14/2017 09:08 PM, Chris Murphy wrote:
> On Mon, Aug 14, 2017 at 8:23 AM, Goffredo Baroncelli
> wrote:
>
>> From a theoretical point of view, if you have a "PURE" COW file-system, you
>> don't need a journal. Unfortunately a RAID5/6 stripe update is a RMW cycle,
>> so
On Mon, 2017-08-14 at 11:53 -0400, Austin S. Hemmelgarn wrote:
> Quite a few applications actually _do_ have some degree of secondary
> verification or protection from a crash. Go look at almost any
> database
> software.
Then please give proper references for this!
This is from 2015, where
On Mon, 2017-08-14 at 10:23 -0400, Austin S. Hemmelgarn wrote:
> Assume you have higher level verification. Would you rather not be
> able
> to read the data regardless of if it's correct or not, or be able to
> read it and determine yourself if it's correct or not?
What would be the
On Mon, Aug 14, 2017 at 8:23 AM, Goffredo Baroncelli wrote:
> From a theoretical point of view, if you have a "PURE" COW file-system, you
> don't need a journal. Unfortunately a RAID5/6 stripe update is a RMW cycle,
> so you need a journal to keep it in sync. The same is
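Goffredo's RMW point can be sketched in a few lines of Python: a toy model of XOR parity (not btrfs or md code) showing why updating one data block in a RAID5 stripe is a read-modify-write cycle, and where the "write hole" that the journal closes comes from:

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

# A 3-disk stripe: two data blocks plus their XOR parity.
d0 = bytes([0xAA] * 4)
d1 = bytes([0x0F] * 4)
parity = xor_blocks(d0, d1)

# In-place update of d0: read old d0 and old parity, compute new
# parity, then write d0 and parity back -- two separate writes.
new_d0 = bytes([0x55] * 4)
new_parity = xor_blocks(xor_blocks(parity, d0), new_d0)

# If a crash lands between the two writes, data and parity disagree
# (the "write hole"); the journal exists to make the pair atomic.
assert new_parity == xor_blocks(new_d0, d1)
```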
On 14/08/17 16:53, Austin S. Hemmelgarn wrote:
> Quite a few applications actually _do_ have some degree of secondary
> verification or protection from a crash.
I am glad your applications do and you have no need of this feature.
You are welcome not to use it. I, on the other hand, definitely
On 2017-08-14 11:13, Graham Cobb wrote:
On 14/08/17 15:23, Austin S. Hemmelgarn wrote:
Assume you have higher level verification.
But almost no applications do. In real life, the decision
making/correction process will be manual and labour-intensive (for
example, running fsck on a virtual
On 14/08/17 15:23, Austin S. Hemmelgarn wrote:
> Assume you have higher level verification.
But almost no applications do. In real life, the decision
making/correction process will be manual and labour-intensive (for
example, running fsck on a virtual disk or restoring a file from backup).
>
On 2017-08-14 08:24, Christoph Anton Mitterer wrote:
On Mon, 2017-08-14 at 14:36 +0800, Qu Wenruo wrote:
And how are you going to write your data and checksum atomically
when
doing in-place updates?
Exactly, that's the main reason I can figure out why btrfs disables
checksum for nodatacow.
On 08/14/2017 09:08 AM, Qu Wenruo wrote:
>
>>
>> Supposing to log for each transaction BTRFS which "data NOCOW blocks" will
>> be updated and their checksum, in case a transaction is interrupted you know
>> which blocks have to be checked and are able to verify if the checksum
>> matches and
On 2017-08-14 20:32, Christoph Anton Mitterer wrote:
On Mon, 2017-08-14 at 15:46 +0800, Qu Wenruo wrote:
> The problem here is, if you enable csum and even data is updated
> correctly, only metadata is trashed, then you can't even read out
> the
> correct data.
So what?
This problem occurs anyway *only* in case of a crash... and *only* if
On Mon, 2017-08-14 at 14:36 +0800, Qu Wenruo wrote:
> > And how are you going to write your data and checksum atomically
> > when
> > doing in-place updates?
>
> Exactly, that's the main reason I can figure out why btrfs disables
> checksum for nodatacow.
Still, I don't get the problem here...
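The atomicity problem Qu and Christoph Hellwig are describing can be illustrated with a toy model (an assumed separate-data-and-csum layout, not the btrfs on-disk format): with in-place (nodatacow) writes, the block and its checksum live in different places, so a crash between the two writes leaves a stored csum that mismatches perfectly good data:

```python
import zlib

# Toy model: data blocks and their checksums stored separately and
# updated in place (hypothetical layout, not btrfs's actual format).
blocks = {0: b"old contents"}
csums = {0: zlib.crc32(blocks[0])}

def write_in_place(idx: int, data: bytes, crash_between: bool = False):
    blocks[idx] = data             # first write: the data block
    if crash_between:
        return                     # simulated crash before the csum write
    csums[idx] = zlib.crc32(data)  # second write: the checksum

write_in_place(0, b"new contents", crash_between=True)

# The data itself is intact, but verification now fails -- and without
# CoW or a journal this is indistinguishable from real corruption.
assert zlib.crc32(blocks[0]) != csums[0]
```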
<cales...@scientia.net>
Cc: Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: RedHat 7.4 Release Notes: "Btrfs has been deprecated" - wut?
On 2017-08-12 15:42, Christoph Hellwig wrote:
On Sat, Aug 12, 2017 at 02:10:18AM +0200, Christoph Anton Mitterer wrote:
Qu Wenruo wrote:
ntia.net>
> Cc: Btrfs BTRFS <linux-btrfs@vger.kernel.org>
> Subject: Re: RedHat 7.4 Release Notes: "Btrfs has been deprecated" - wut?
>
>
>
> On 2017-08-12 15:42, Christoph Hellwig wrote:
> > On Sat, Aug 12, 2017 at 02:10:18AM +0200, Christoph Anton Mitte
On 2017-08-13 22:08, Goffredo Baroncelli wrote:
On 08/12/2017 02:12 PM, Hugo Mills wrote:
On Sat, Aug 12, 2017 at 01:51:46PM +0200, Christoph Anton Mitterer wrote:
On Sat, 2017-08-12 at 00:42 -0700, Christoph Hellwig wrote:
[...]
good, but csum is not
I don't think
On 2017-08-12 15:42, Christoph Hellwig wrote:
On Sat, Aug 12, 2017 at 02:10:18AM +0200, Christoph Anton Mitterer wrote:
Qu Wenruo wrote:
Although Btrfs can disable data CoW, nodatacow also disables data
checksum, which is another main feature for btrfs.
Then decoupling of the two should
On 08/12/2017 02:12 PM, Hugo Mills wrote:
> On Sat, Aug 12, 2017 at 01:51:46PM +0200, Christoph Anton Mitterer wrote:
>> On Sat, 2017-08-12 at 00:42 -0700, Christoph Hellwig wrote:
[...]
>> good, but csum is not
>
>I don't think this is a particularly good description of
On Sat, Aug 12, 2017 at 01:51:46PM +0200, Christoph Anton Mitterer wrote:
> On Sat, 2017-08-12 at 00:42 -0700, Christoph Hellwig wrote:
> > And how are you going to write your data and checksum atomically when
> > doing in-place updates?
>
> Maybe I misunderstand something, but what's the big
On Sat, 2017-08-12 at 00:42 -0700, Christoph Hellwig wrote:
> And how are you going to write your data and checksum atomically when
> doing in-place updates?
Maybe I misunderstand something, but what's the big deal with not doing
it atomically (I assume you mean in terms of actually writing to
On Sat, Aug 12, 2017 at 02:10:18AM +0200, Christoph Anton Mitterer wrote:
> Qu Wenruo wrote:
> >Although Btrfs can disable data CoW, nodatacow also disables data
> >checksum, which is another main feature for btrfs.
>
> Then the two should probably be decoupled and support for
>
Qu Wenruo wrote:
>Although Btrfs can disable data CoW, nodatacow also disables data
>checksum, which is another main feature for btrfs.
Then the two should probably be decoupled and support for
nodatacow+checksumming be implemented?!
I'm not an expert, but I wouldn't see why this
On 2017-08-07 23:27, Chris Murphy wrote:
On Fri, Aug 4, 2017 at 8:05 AM, Qu Wenruo wrote:
For example, if one day there is some dm-csum to support verify csum of
given ranges (and skip unrelated ones specified by higher levels), btrfs
support for data csum is
On Fri, Aug 4, 2017 at 8:05 AM, Qu Wenruo wrote:
>
> For example, if one day there is some dm-csum to support verify csum of
> given ranges (and skip unrelated ones specified by higher levels), btrfs
> support for data csum is no longer an exclusive feature.
How would
Hi Qu,
On Fri, Aug 4, 2017 at 10:05 PM, Qu Wenruo wrote:
>
>
> On 2017-08-02 16:38, Brendan Hide wrote:
>>
>> The title seems alarmist to me - and I suspect it is going to be
>> misconstrued. :-/
>>
>> From the release notes at
>>
On 2017-08-02 16:38, Brendan Hide wrote:
The title seems alarmist to me - and I suspect it is going to be
misconstrued. :-/
From the release notes at
On 2017-08-03 16:45, Brendan Hide wrote:
On 08/03/2017 09:22 PM, Austin S. Hemmelgarn wrote:
On 2017-08-03 14:29, Christoph Anton Mitterer wrote:
On Thu, 2017-08-03 at 20:08 +0200, waxhead wrote:
There are no higher-level management tools (e.g. RAID
management/monitoring, etc.)...
[snip]
Austin S. Hemmelgarn posted on Thu, 03 Aug 2017 15:03:53 -0400 as
excerpted:
>> Same thing with the trim feature that is marked OK . It clearly says
>> that is has performance implications. It is marked OK so one would
>> expect it to not cause the filesystem to fail, but if the performance
>>
On Thu, Aug 3, 2017 at 2:45 PM, Brendan Hide wrote:
>
> To counter, I think this is a big problem with btrfs, especially in terms of
> user attrition. We don't need "GUI" tools. At all. But we do need that btrfs
> is self-sufficient enough that regular users don't get
On 08/03/2017 09:22 PM, Austin S. Hemmelgarn wrote:
On 2017-08-03 14:29, Christoph Anton Mitterer wrote:
On Thu, 2017-08-03 at 20:08 +0200, waxhead wrote:
There are no higher-level management tools (e.g. RAID
management/monitoring, etc.)...
[snip]
As far as 'higher-level' management tools,
On 2017-08-03 14:29, Christoph Anton Mitterer wrote:
On Thu, 2017-08-03 at 20:08 +0200, waxhead wrote:
Brendan Hide wrote:
The title seems alarmist to me - and I suspect it is going to be
misconstrued. :-/
From the release notes at
On Wed, Aug 2, 2017 at 3:11 AM, Wang Shilong wrote:
> I haven't seen active btrfs developers for some time; Red Hat looks to have
> put most of their effort into XFS. It is time to switch to SLES/openSUSE!
I disagree. We need one or more Btrfs developers involved in Fedora.
On 2017-08-03 14:08, waxhead wrote:
Brendan Hide wrote:
The title seems alarmist to me - and I suspect it is going to be
misconstrued. :-/
From the release notes at
On Thu, 2017-08-03 at 20:08 +0200, waxhead wrote:
> Brendan Hide wrote:
> > The title seems alarmist to me - and I suspect it is going to be
> > misconstrued. :-/
> >
> > From the release notes at
> > https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Li
> >
Brendan Hide wrote:
The title seems alarmist to me - and I suspect it is going to be
misconstrued. :-/
From the release notes at
On 08/03/2017 12:22 AM, Chris Murphy wrote:
Also more interesting is this Stratis project that started up a few months ago:
https://github.com/stratis-storage/stratisd
Which also includes this design document:
https://stratis-storage.github.io/StratisSoftwareDesign.pdf
This concept, if
On Wed, Aug 2, 2017 at 2:38 AM, Brendan Hide wrote:
> The title seems alarmist to me - and I suspect it is going to be
> misconstrued. :-/
Josef pushed back on the HN thread with very sound reasoning about why
this is totally unsurprising. RHEL runs old kernels, and
On Thu, Aug 3, 2017 at 1:44 AM, Chris Mason wrote:
>
> On 08/02/2017 04:38 AM, Brendan Hide wrote:
>>
>> The title seems alarmist to me - and I suspect it is going to be
>> misconstrued. :-/
>
>
> Supporting any filesystem is a huge amount of work. I don't have a problem
> with
On 08/02/2017 04:38 AM, Brendan Hide wrote:
The title seems alarmist to me - and I suspect it is going to be
misconstrued. :-/
Supporting any filesystem is a huge amount of work. I don't have a
problem with Redhat or any distro picking and choosing the projects they
want to support.
At
On 2017-08-02 08:55, Lutz Vieweg wrote:
On 08/02/2017 01:25 PM, Austin S. Hemmelgarn wrote:
And this is a worst-case result of the fact that most
distros added BTRFS support long before it was ready.
RedHat still advertises "Ceph", and given Ceph initially recommended
btrfs as
the
On 08/02/2017 01:25 PM, Austin S. Hemmelgarn wrote:
And this is a worst-case result of the fact that most
distros added BTRFS support long before it was ready.
RedHat still advertises "Ceph", and given Ceph initially recommended btrfs as
the filesystem to use for its nodes, it is interesting
On 2017-08-02 04:38, Brendan Hide wrote:
The title seems alarmist to me - and I suspect it is going to be
misconstrued. :-/
From the release notes at
I haven't seen active btrfs developers for some time; Red Hat looks to have
put most of their effort into XFS. It is time to switch to SLES/openSUSE!
On Wed, Aug 2, 2017 at 4:38 PM, Brendan Hide wrote:
> The title seems alarmist to me - and I suspect it is going to be
>