The original csum error message only outputs the inode number, offset, checksum
and expected checksum.
However, no root objectid is output, which sometimes makes debugging
quite painful in multi-subvolume cases (including relocation).
Also the checksum output is decimal, which seldom makes
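For illustration, a minimal userspace mock-up of the before/after message formats described above (the values, and the use of printf rather than the kernel's btrfs_warn()-style helpers, are made up for the example):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* made-up example values; real messages come from the kernel */
	uint64_t root = 257, ino = 12345, off = 1048576;
	uint32_t csum = 0x4d2c9e1f, expected = 0x2a81b3c7;

	/* old style: no root objectid, checksums printed in decimal */
	printf("csum failed ino %llu off %llu csum %u expected csum %u\n",
	       (unsigned long long)ino, (unsigned long long)off,
	       csum, expected);

	/* proposed style: root objectid included, checksums in hex */
	printf("csum failed root %llu ino %llu off %llu csum 0x%08x expected csum 0x%08x\n",
	       (unsigned long long)root, (unsigned long long)ino,
	       (unsigned long long)off, csum, expected);
	return 0;
}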
At 02/08/2017 05:55 PM, Vasco Visser wrote:
Thank you for the explanation. What I would still like to know is how
to relate the chunk level abstraction to the file level abstraction.
According to the btrfs output there is 2G of data space available
and 24G of data space is being used. Does
I had a file read fail repeatably; in syslog there were lines like this:
kernel: BTRFS warning (device dm-5): csum failed ino 2241616 off
51580928 csum 4redacted expected csum 2redacted
I rmed the file.
Another error more recently, 5 instances which look like this:
kernel: BTRFS warning (device dm-5):
At 02/08/2017 10:09 PM, Filipe Manana wrote:
On Wed, Feb 8, 2017 at 1:56 AM, Qu Wenruo wrote:
Just as Filipe pointed out, the most time consuming part of qgroup is
btrfs_qgroup_account_extents() and
btrfs_qgroup_prepare_account_extents().
there's an "and" so the
At 02/08/2017 09:56 PM, Filipe Manana wrote:
On Wed, Feb 8, 2017 at 12:39 AM, Qu Wenruo wrote:
At 02/07/2017 11:55 PM, Filipe Manana wrote:
On Tue, Feb 7, 2017 at 12:22 AM, Qu Wenruo
wrote:
At 02/07/2017 12:09 AM, Goldwyn Rodrigues
On Tue, Feb 07, 2017 at 05:02:53PM +, fdman...@kernel.org wrote:
> From: Filipe Manana
>
> Before we destroy all work queues (and wait for their tasks to complete)
> we were destroying the work queues used for metadata I/O operations, which
> can result in a use-after-free
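As a generic illustration of the ordering problem described above (hypothetical queue names, not the btrfs code): if work items running on one queue hand their completions to a second queue, the second queue must be destroyed last.

#include <linux/workqueue.h>

static struct workqueue_struct *io_wq;   /* hypothetical: submits completions */
static struct workqueue_struct *meta_wq; /* hypothetical: runs the completions */

static void shutdown_queues(void)
{
	/*
	 * Wrong order: in-flight io_wq items may still queue_work() on
	 * meta_wq after it has been destroyed -- a use-after-free.
	 *
	 *	destroy_workqueue(meta_wq);
	 *	destroy_workqueue(io_wq);
	 *
	 * Right order: drain the producer first, then the queue it feeds.
	 */
	destroy_workqueue(io_wq);   /* waits for pending io_wq work */
	destroy_workqueue(meta_wq); /* nothing can queue on it any more */
}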
On Wed, Feb 08, 2017 at 05:51:28PM +0100, David Sterba wrote:
Hi,
could you please merge this single-patch pull request, for 4.10 still? There
are quite a few patches on top of v4.10-rc7 so this IMHO does not look
too bad even late in the release cycle. Though it's a fix for an
On 08/02/17 18:38, Libor Klepáč wrote:
> I'm interested in using:
...
> - send/receive for offsite backup
I don't particularly recommend that. I do use send/receive for onsite
backups (I actually use btrbk). But for offsite I use a traditional
backup tool (I use dar). For three main reasons:
[ ... ]
> The issue isn't total size, it's the difference between total
> size and the amount of data you want to store on it, and how
> well you manage chunk usage. If you're balancing regularly to
> compact chunks that are less than 50% full, [ ... ] BTRFS on
> 16GB disk images before with
On 2017-02-08 09:46, Peter Grandi wrote:
My system is or seems to be running out of disk space but I
can't find out how or why. [ ... ]
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        28G   26G  2.1G  93% /
[ ... ]
So at the chunk level, your fs is already full.
On Wed, 08 Feb 2017 19:38:06 +0100, Libor Klepáč wrote:
> Hello,
> inspired by the recent discussion on BTRFS vs. databases I wanted to ask
> about the suitability of BTRFS for hosting a Cyrus IMAP server spool. I
> haven't found any recent article on this topic.
>
> I'm preparing
On 2017-02-08 13:38, Libor Klepáč wrote:
Hello,
inspired by the recent discussion on BTRFS vs. databases I wanted to ask about
the suitability of BTRFS for hosting a Cyrus IMAP server spool. I haven't found
any recent article on this topic.
I'm preparing migration of our mailserver to Debian Stretch, i.e.
On 2017-02-07 22:35, Kai Krakow wrote:
[...]
>>
>> Atomicity can be a relative term. If the snapshot atomicity is
>> relative to barriers but not relative to individual writes between
>> barriers then AFAICT it's fine because the filesystem doesn't make
>> any promise it won't keep even in the
On 2017-02-08 08:46, Tomasz Torcz wrote:
On Wed, Feb 08, 2017 at 07:50:22AM -0500, Austin S. Hemmelgarn wrote:
It is exponentially safer in BTRFS
to run single data single metadata than half raid1 data half raid1 metadata.
Why?
To convert to profiles _designed_ for a single device and
On Wed, Feb 08, 2017 at 02:46:32PM +, Peter Grandi wrote:
> >> My system is or seems to be running out of disk space but I
> >> can't find out how or why. [ ... ]
> >> Filesystem      Size  Used Avail Use% Mounted on
> >> /dev/sda3        28G   26G  2.1G  93% /
> [ ... ]
> > So
On Tue, Jan 31, 2017 at 07:50:22AM -0800, Liu Bo wrote:
> We have similar code to create and insert extent mappings around the IO path;
> this merges it into a single helper.
Looks good, comments below.
> +static struct extent_map *create_io_em(struct inode *inode, u64 start, u64 len,
> +
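A rough sketch of what such a helper might look like, pieced together from the truncated signature above (the remaining arguments, flags and locking are simplified guesses for illustration, not the actual patch):

static struct extent_map *create_io_em(struct inode *inode, u64 start, u64 len,
				       u64 block_start, u64 block_len)
{
	struct extent_map_tree *em_tree = &BTRFS_I(inode)->extent_tree;
	struct extent_map *em;
	int ret;

	em = alloc_extent_map();
	if (!em)
		return ERR_PTR(-ENOMEM);

	em->start = start;
	em->len = len;
	em->block_start = block_start;
	em->block_len = block_len;

	/* drop any cached mapping in the range, then insert the new one */
	do {
		btrfs_drop_extent_cache(inode, start, start + len - 1, 0);
		write_lock(&em_tree->lock);
		ret = add_extent_mapping(em_tree, em, 1);
		write_unlock(&em_tree->lock);
	} while (ret == -EEXIST);

	if (ret) {
		free_extent_map(em);
		return ERR_PTR(ret);
	}
	return em;
}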
Hello,
inspired by the recent discussion on BTRFS vs. databases I wanted to ask about
the suitability of BTRFS for hosting a Cyrus IMAP server spool. I haven't found
any recent article on this topic.
I'm preparing migration of our mailserver to Debian Stretch, i.e. kernel 4.9
for now. We are using XFS for
On Wed, Feb 08, 2017 at 07:50:22AM -0500, Austin S. Hemmelgarn wrote:
> It is exponentially safer in BTRFS
> to run single data single metadata than half raid1 data half raid1 metadata.
Why?
> To convert to profiles _designed_ for a single device and then convert back
> to raid1 when I got
On Tue, Feb 07, 2017 at 12:14:51PM -0800, Liu Bo wrote:
> > + end_page_writeback(page);
> > + }
> >
> > cur = cur + iosize;
> > pg_offset += iosize;
> > @@ -3767,7 +3770,8 @@ static noinline_for_stack int write_one_eb(struct
> >
On Tue, Feb 07, 2017 at 02:57:17PM +0800, Qu Wenruo wrote:
> The original csum error message only outputs the inode number, offset,
> checksum and expected checksum.
>
> However, no root objectid is output, which sometimes makes debugging
> quite painful in multi-subvolume cases (including
On Thu, Feb 02, 2017 at 06:34:06PM +0100, Jan Kara wrote:
> Allocate struct backing_dev_info separately instead of embedding it
> inside superblock. This unifies handling of bdi among users.
>
> CC: Chris Mason
> CC: Josef Bacik
> CC: David Sterba
On Fri, Feb 03, 2017 at 10:15:32AM -0600, Goldwyn Rodrigues wrote:
> From: Goldwyn Rodrigues
>
> new_len is not used in delete_extent_records().
>
> Signed-off-by: Goldwyn Rodrigues
Applied, thanks.
On Mon, Feb 06, 2017 at 07:39:09PM -0500, Jeff Mahoney wrote:
> Commit 4c63c2454ef incorrectly assumed that returning -ENOIOCTLCMD would
> cause the native ioctl to be called. The ->compat_ioctl callback is
> expected to handle all ioctls, not just compat variants. As a result,
> when using
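A sketch of the pattern the fix implies (illustrative; the exact list of translated commands is an assumption): translate the compat-only commands, and forward everything else to the native handler instead of returning -ENOIOCTLCMD.

static long btrfs_compat_ioctl(struct file *file, unsigned int cmd,
			       unsigned long arg)
{
	switch (cmd) {
	case FS_IOC32_GETFLAGS:
		cmd = FS_IOC_GETFLAGS;
		break;
	case FS_IOC32_SETFLAGS:
		cmd = FS_IOC_SETFLAGS;
		break;
	case FS_IOC32_GETVERSION:
		cmd = FS_IOC_GETVERSION;
		break;
	default:
		/* do NOT return -ENOIOCTLCMD; let the native handler decide */
		break;
	}
	return btrfs_ioctl(file, cmd, (unsigned long)compat_ptr(arg));
}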
Hi,
could you please merge this single-patch pull request, for 4.10 still? There
are quite a few patches on top of v4.10-rc7 so this IMHO does not look
too bad even late in the release cycle. Though it's a fix for an uncommon
use case of 32bit userspace on a 64bit kernel, it fixes
>> My system is or seems to be running out of disk space but I
>> can't find out how or why. [ ... ]
>> Filesystem      Size  Used Avail Use% Mounted on
>> /dev/sda3        28G   26G  2.1G  93% /
[ ... ]
> So at the chunk level, your fs is already full. And balance
> won't succeed since
I'm trying to use qgroups to keep track of storage occupied by
snapshots. I noticed that:
a) no two rescans can run in parallel, and there's no way to schedule
another rescan while one is running;
b) seems like it's a whole-disk operation regardless of path specified
in CLI.
I only
On 2017-02-08 at 13:14, Martin Raiber wrote:
> Hi,
>
> On 08.02.2017 03:11 Peter Zaitsev wrote:
>> Out of curiosity, I see one problem here:
>> If you're doing snapshots of the live database, each snapshot leaves
>> the database files like killing the database in-flight. Like shutting
>> the
Hi,
When it comes to MySQL I'm not really sure what you're trying to
achieve. Because MySQL manages its own cache, flushing the OS cache to
disk and "freezing" the FS does not really do much - it will still need to
do crash recovery when such a snapshot is restored.
The reason people would use
On 2017-02-08 at 14:32, Austin S. Hemmelgarn wrote:
> On 2017-02-08 08:26, Martin Raiber wrote:
>> On 08.02.2017 14:08 Austin S. Hemmelgarn wrote:
>>> On 2017-02-08 07:14, Martin Raiber wrote:
Hi,
On 08.02.2017 03:11 Peter Zaitsev wrote:
> Out of curiosity, I see one
On Wed, Feb 8, 2017 at 1:56 AM, Qu Wenruo wrote:
> Just as Filipe pointed out, the most time consuming part of qgroup is
> btrfs_qgroup_account_extents() and
> btrfs_qgroup_prepare_account_extents().
there's an "and" so the "is" should be "are" and "part" should be
On Wed, Feb 8, 2017 at 12:39 AM, Qu Wenruo wrote:
>
>
> At 02/07/2017 11:55 PM, Filipe Manana wrote:
>>
>> On Tue, Feb 7, 2017 at 12:22 AM, Qu Wenruo
>> wrote:
>>>
>>>
>>>
>>> At 02/07/2017 12:09 AM, Goldwyn Rodrigues wrote:
On 2017-02-08 08:26, Martin Raiber wrote:
On 08.02.2017 14:08 Austin S. Hemmelgarn wrote:
On 2017-02-08 07:14, Martin Raiber wrote:
Hi,
On 08.02.2017 03:11 Peter Zaitsev wrote:
Out of curiosity, I see one problem here:
If you're doing snapshots of the live database, each snapshot leaves
the
On 08.02.2017 14:08 Austin S. Hemmelgarn wrote:
> On 2017-02-08 07:14, Martin Raiber wrote:
>> Hi,
>>
>> On 08.02.2017 03:11 Peter Zaitsev wrote:
>>> Out of curiosity, I see one problem here:
>>> If you're doing snapshots of the live database, each snapshot leaves
>>> the database files like
On 2017-02-07 20:49, Nicholas D Steeves wrote:
Dear btrfs community,
Please accept my apologies in advance if I missed something in recent
btrfs development; my MUA tells me I'm ~1500 unread messages
out-of-date. :/
I recently read about "mount -t btrfs -o user_subvol_rm_allowed" while
doing
On 2017-02-08 07:14, Martin Raiber wrote:
Hi,
On 08.02.2017 03:11 Peter Zaitsev wrote:
Out of curiosity, I see one problem here:
If you're doing snapshots of the live database, each snapshot leaves
the database files like killing the database in-flight. Like shutting
the system down in the
On Wed, Feb 08, 2017 at 07:29:22AM -0500, Austin S. Hemmelgarn wrote:
> On 2017-02-07 13:27, David Sterba wrote:
> > On Fri, Feb 03, 2017 at 08:48:58AM -0500, Austin S. Hemmelgarn wrote:
> >> This adds some extra documentation to the btrfs-receive manpage that
> >> explains some of the security
On 2017-02-07 17:28, Kai Krakow wrote:
On Thu, 19 Jan 2017 15:02:14 -0500, "Austin S. Hemmelgarn" wrote:
On 2017-01-19 13:23, Roman Mamedov wrote:
On Thu, 19 Jan 2017 17:39:37 +0100
"Alejandro R. Mosteo" wrote:
I was wondering, from a point of
On 2017-02-07 22:21, Hans Deragon wrote:
Greetings,
On 2017-02-02 10:06, Austin S. Hemmelgarn wrote:
On 2017-02-02 09:25, Adam Borowski wrote:
On Thu, Feb 02, 2017 at 07:49:50AM -0500, Austin S. Hemmelgarn wrote:
This is a severe bug that makes a not all that uncommon (albeit bad) use
case
On 2017-02-07 13:27, David Sterba wrote:
On Fri, Feb 03, 2017 at 08:48:58AM -0500, Austin S. Hemmelgarn wrote:
This adds some extra documentation to the btrfs-receive manpage that
explains some of the security related aspects of btrfs-receive. The
first part covers the fact that the subvolume
Hi,
On 08.02.2017 03:11 Peter Zaitsev wrote:
> Out of curiosity, I see one problem here:
> If you're doing snapshots of the live database, each snapshot leaves
> the database files like killing the database in-flight. Like shutting
> the system down in the middle of writing data.
>
> This is
On 2017-02-07 15:54, Kai Krakow wrote:
On Tue, 7 Feb 2017 15:27:34 -0500, "Austin S. Hemmelgarn" wrote:
I'm not sure about this one. I would assume based on the fact that
many other things don't work with nodatacow and that regular defrag
doesn't work on files which
Thank you for the explanation. What I would still like to know is how
to relate the chunk level abstraction to the file level abstraction.
According to the btrfs output there is 2G of data space available
and 24G of data space is being used. Does this mean 24G of data used
in files? How do I
On 07/02/17 23:28, Kai Krakow wrote:
To be realistic: I wouldn't trade space usage for duplicate data on an
already failing disk, no matter if it's DUP or RAID1. HDD disk space is
cheap, and using such a scenario is just a waste of performance AND
space - no matter what. I don't understand the