On 08/03/2017 12:22 AM, Chris Murphy wrote:
Also interesting is the Stratis project that started up a few months ago:
https://github.com/stratis-storage/stratisd
Which also includes this design document:
https://stratis-storage.github.io/StratisSoftwareDesign.pdf
This concept, if success[...] the write performance [...] big time.
(That backup topic is the one reason we use btrfs for a lot of
/home/ directories.)
I understand that XFS is expected to get some COW features in the future
as well - but it remains to be seen what performance and robustness
implications that will have on XFS.
Regards,
Lutz Vieweg
On 08/05/2016 10:03 PM, Gabriel C wrote:
On 04.08.2016 18:53, Lutz Vieweg wrote:
Today I was hit by what I think is probably the same bug:
A btrfs on a close-to-4TB block device, only half filled
to almost exactly 2 TB, suddenly says "no space left on device"
upon any attempt to write.
/lkml/2016/3/28/230
It also looks similar to the subject of the lengthy thread titled
"6TB partition, Data only 2TB - aka When you haven't hit the "usual" problem"
that started with:
http://www.spinics.net/lists/linux-btrfs/msg50599.html
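For what it's worth, the usual first check for this class of ENOSPC is whether
all raw space is tied up in (mostly empty) chunks - a sketch, with an example
mount point; note that the second thread's title suggests this "usual" cause
did not apply there:

  # compare raw allocation with actual usage:
  btrfs filesystem show /mnt/vol
  btrfs filesystem df /mnt/vol
  # if everything is allocated but data chunks are mostly empty,
  # a filtered balance usually frees space again:
  btrfs balance start -dusage=5 /mnt/vol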
Regards,
Lutz Vieweg
> the compression and filesystem shrinking may not be needed in
> your use case, the data integrity features are almost certainly an advantage.
Btrfs sure has some nifty features, and I understand that for some
use cases, things like "subvolumes" or "deduplication" are important.
But a hundr[...]g more
performant than block-device-based snapshot" may fade away
with the replacement of magnetic disks by SSDs in the long run.
Regards,
Lutz Vieweg
with ugly block-device-based snapshots
for backup) or try my luck with OpenZFS :-(
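For reference, the "ugly" block-device route under XFS would be something
like an LVM snapshot; all names here are made-up examples:

  # create a read-only point-in-time snapshot of the logical volume;
  # LVM freezes the filesystem briefly while the snapshot is taken:
  lvcreate --snapshot --name home-snap --size 20G /dev/vg0/home
  # an XFS snapshot carries the same UUID as its origin,
  # so it has to be mounted with -o nouuid:
  mount -o ro,nouuid /dev/vg0/home-snap /mnt/backup-snap

Workable, but nowhere near as convenient as filesystem-level snapshots.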
Regards,
Lutz Vieweg
On 01/11/2016 02:45 PM, cheater00 . wrote:
After remounting, the bug doesn't occur anymore, and Data gets resized.
It is my experience that this bug will go untriggered for weeks at a
time.
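The workaround, spelled out as commands (device and mount point are
examples; a plain remount may already suffice):

  umount /mnt/vol && mount /dev/sdX /mnt/vol
  # or, without a full unmount:
  mount -o remount /mnt/vol

Obviously no fix, but it unblocks writes until the bug strikes again.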
Regards,
Lutz Vieweg
copy.
Chances are this will also take hours.
Regards,
Lutz Vieweg
files like
VM images results in excessively fragmented files.
And taking snapshots kind of counteracts "nodatacow".
What does "filefrag" tell you about your VM images on btrfs?
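For example (the image path is made up):

  # extent count; tens of thousands of extents indicate heavy fragmentation:
  filefrag /var/lib/libvirt/images/guest.img
  # per-extent detail, if you want to see where it fragments:
  filefrag -v /var/lib/libvirt/images/guest.img
  # a one-time defragmentation is possible, but note that it
  # un-shares any data still referenced by snapshots:
  btrfs filesystem defragment /var/lib/libvirt/images/guest.img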
(As much as I like btrfs for other purposes, I currently stay
with XFS for VM images, database files a
Testing the patch took much longer than I anticipated, due to pre-4.1 kernels
being "too risky" for use on our servers, but now it's done and I can say:
This patch, as integrated in linux-4.1, has successfully removed the lags.
Thanks!
Regards,
Lutz Vieweg
On 04/22/2015 06:09
finished ok, albeit after a long time.)
Regards,
Lutz Vieweg
The same on /some/xfs/testdir works fine.
This bug is not present in upstream "psmisc" sources as of current master in
git://git.code.sf.net/p/psmisc/code
A fuser executable compiled from the current upstream sources also works
fine for btrfs.
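In case someone wants to verify this without waiting for a distro update,
building it from upstream is quick (a sketch, assuming the usual autotools
bootstrap; the test path is an example):

  git clone git://git.code.sf.net/p/psmisc/code psmisc
  cd psmisc
  ./autogen.sh && ./configure && make
  # compare the distro fuser with the freshly built one:
  fuser -v /some/btrfs/testdir        # distro version: misbehaves on btrfs
  ./src/fuser -v /some/btrfs/testdir  # upstream version: works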
Regards,
Lutz Vieweg
--
To u
seconds per commit, but see peak times much higher.
Since we see this problem very frequently on some shared development servers,
I will try to install this ASAP.
Meanwhile, can anybody already share success stories about this patch
removing the lags?
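Until then, a crude way to watch for the lags (an ad-hoc sketch, assuming
sync latency roughly tracks transaction commit time on an otherwise idle
filesystem):

  # print how long each sync takes; spikes correspond to the lags:
  while true; do
      /usr/bin/time -f "sync took %e s" sync
      sleep 10
  done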
Regards,
Lutz Vieweg
On 02/06/2015 06:20 AM, Qu Wenruo wrote:
From: Lutz Vieweg
Use case: You have two huge files on a btrfs; you assume they contain the
same bytes, but you do not know for sure.
Is there a way to get a checksum of both files from btrfs with less effort
than reading the whole of both files and comparing them?
(I thought the btrfs-internal CRCs might be of use, here...)
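Lacking such an interface, the best I know of is a cheap pre-check on the
extent maps (reflinked or deduplicated copies share physical extents),
falling back to reading both files (paths are examples):

  # if both files list the same physical extents, they are reflinked copies:
  filefrag -v /path/to/fileA
  filefrag -v /path/to/fileB
  # otherwise, read and hash both...
  sha256sum /path/to/fileA /path/to/fileB
  # ...or stop at the first differing byte:
  cmp /path/to/fileA /path/to/fileB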
Regards,
Lutz Vieweg
Running "chattr +C someexistingfile" succeeds silently, but "lsattr someexistingfile"
will show the C flag as "not set". It takes some
reading to realize that btrfs cannot change the non-COW
flag on files bigger than 0 bytes.
Maybe "chattr +C" could print a warning if the file
whose attributes are to be changed is > 0 bytes long?
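Until then, the working pattern is to set the flag on an empty file or on
the containing directory, so new files inherit it at creation time (paths
are examples):

  mkdir /mnt/vol/vm-images
  chattr +C /mnt/vol/vm-images       # new files in here inherit no-COW
  lsattr -d /mnt/vol/vm-images       # now shows the C flag as set
  # for existing data, rewrite it into a file that had the flag from the start:
  touch /mnt/vol/vm-images/disk.img  # 0 bytes, flag inherited
  cat old-disk.img > /mnt/vol/vm-images/disk.img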
Regards,
Lutz Vieweg
Does btrfs try to write _two_ copies of
everything to _one_ remaining device of a degraded two-disk raid1?
(If yes, then this means a raid1 would have to be planned with
twice the capacity just to be sure that one failing disk will
not lead to an out-of-diskspace situation. Not good.)
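One way to answer this empirically (a sketch; device and mount point are
examples) would be to mount the surviving device degraded, write some data,
and look at the profile of the newly allocated chunks:

  mount -o degraded /dev/sdb /mnt/vol
  dd if=/dev/zero of=/mnt/vol/testfile bs=1M count=1024
  sync
  # if the new data shows up in "single" chunks rather than "RAID1" ones,
  # only one copy is being written:
  btrfs filesystem df /mnt/vol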
Regards,
Lutz Vieweg
/linux/kernel/git/mason/btrfs-progs.git
without manual intervention, but was easy to fix.)
Regards,
Lutz Vieweg
PS: Will now proceed with some less basic resilience tests... ;-)
those awful hardware RAID controllers, which caused
us additional downtime more often than they prevented downtime.
Regards,
Lutz Vieweg
On 11/14/2013 03:02 AM, Lutz Vieweg wrote:
Hi,
on a server that so far uses an MD RAID1 with XFS on it, we wanted
to try btrfs instead.
But even the most
The testfile is not readable anymore. (At this point, no messages
are to be found in dmesg/syslog - I would expect such on an
input/output error.)
So the bottom line is: all the double writing that comes with RAID1
mode did not provide any useful resilience.
I am kind of sure this is not as
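For reference, the kind of test meant above, as a sketch with example device
names (not the literal commands used):

  mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
  mount /dev/sdb /mnt/vol
  dd if=/dev/urandom of=/mnt/vol/testfile bs=1M count=512
  sync
  # simulate a failure of the second disk, e.g. by detaching it, then:
  md5sum /mnt/vol/testfile    # expectation: still readable from the good disk
  btrfs scrub start /mnt/vol  # expectation: damaged copies repaired/reported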