...and can it be related to the Samsung 840 SSD's not supporting NCQ
Trim? (Although I can't tell which device this trace is from -- it
could be a mechanical Western Digital.)
On Sun, Sep 10, 2017 at 10:16 PM, Rich Rauenzahn wrote:
> Is this something to be concerned about?
Marc reported that "btrfs check --repair" runs much faster than "btrfs
check", which is quite weird.
This patch adds the time elapsed for each major tree checked, in both
original mode and lowmem mode, so we can get a clue about what's going
wrong.
Reported-by: Marc MERLIN
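To illustrate the kind of instrumentation this describes, here is a minimal
user-space sketch; the helper name, the dummy check function, and the output
format are assumptions for illustration, not the actual patch:

/* Illustrative sketch only (not the actual btrfs-progs change): wrap a
 * tree-check phase and print the elapsed wall-clock time. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Hypothetical helper: run check_fn and report how long it took. */
static int timed_check(const char *tree_name, int (*check_fn)(void))
{
	struct timespec start, end;
	int ret;

	clock_gettime(CLOCK_MONOTONIC, &start);
	ret = check_fn();
	clock_gettime(CLOCK_MONOTONIC, &end);

	printf("checking %s took %.2f seconds\n", tree_name,
	       (end.tv_sec - start.tv_sec) +
	       (end.tv_nsec - start.tv_nsec) / 1e9);
	return ret;
}

static int dummy_extent_tree_check(void)
{
	sleep(1);	/* stands in for the real checking work */
	return 0;
}

int main(void)
{
	return timed_check("extent tree", dummy_extent_tree_check);
}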
On 2017-09-10 22:34, Martin Raiber wrote:
Hi,
On 10.09.2017 08:45 Qu Wenruo wrote:
On 2017-09-10 14:41, Qu Wenruo wrote:
On 2017-09-10 07:50, Rohan Kadekodi wrote:
Hello,
I was trying to understand how file renames are handled in Btrfs. I
read the code documentation, but had a
This patch updates btrfs-completion:
- add "filesystem du" and "rescure zero-log"
- restrict _btrfs_mnts to show btrfs type only
- add more completions in the last case statements
(This file contains a mix of spaces and tabs and may need cleanup.)
Signed-off-by: Tomohiro Misono
It doesn't need the replaced disk to be readable, right? Then what prevents the
same procedure from working without a spare bay?
--
With Best Regards,
Marat Khalili
On September 9, 2017 1:29:08 PM GMT+03:00, Patrik Lundquist
wrote:
>On 9 September 2017 at 12:05, Marat Khalili
On 2017-09-10 14:41, Qu Wenruo wrote:
On 2017-09-10 07:50, Rohan Kadekodi wrote:
Hello,
I was trying to understand how file renames are handled in Btrfs. I
read the code documentation, but had a problem understanding a few
things.
During a file rename, btrfs_commit_transaction() is
On 2017-09-10 01:44, Marc MERLIN wrote:
So, should I assume that btrfs-progs git has some issue, since there is
no plausible way that a check --repair should be faster than a regular
check?
Yes, the assumption that repair should be no faster than an RO check is
correct.
Especially for clean
On 2017-09-10 07:50, Rohan Kadekodi wrote:
Hello,
I was trying to understand how file renames are handled in Btrfs. I
read the code documentation, but had a problem understanding a few
things.
During a file rename, btrfs_commit_transaction() is called because
Btrfs has to commit
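To give a sense of where the resulting disk writes come from, here is a
stand-alone user-space sketch (the paths are made up): rename(2) itself only
changes metadata in memory, and the activity visible in blktrace appears when
the filesystem persists that change, either at the next periodic transaction
commit or when something like fsync() forces it out:

/* Illustration only: rename a file, then fsync the parent directory
 * to force the metadata change to stable storage. On btrfs the fsync
 * roughly corresponds to a tree-log write or a transaction commit. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	if (rename("/tmp/oldname", "/tmp/newname") != 0) {
		perror("rename");
		return 1;
	}

	int dirfd = open("/tmp", O_RDONLY | O_DIRECTORY);
	if (dirfd < 0) {
		perror("open");
		return 1;
	}
	if (fsync(dirfd) != 0)	/* persist the directory update */
		perror("fsync");
	close(dirfd);
	return 0;
}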
Perhaps NetApp is using a VFS overlay. There is really only one snapshot, but it
is shown in the overlay on every folder. It's kind of the same with Samba Shadow
Copies.
From: Ulli Horlacher -- Sent: 2017-09-09 21:52
> On Sat 2017-09-09 (22:43), Andrei
Is this something to be concerned about?
I'm running the latest mainline kernel on CentOS 7.
[ 1338.882288] ------------[ cut here ]------------
[ 1338.883058] WARNING: CPU: 2 PID: 790 at fs/btrfs/ctree.h:1559
btrfs_update_device+0x1c5/0x1d0 [btrfs]
[ 1338.883809] Modules linked in: xt_nat veth
>Actually based on http://carfax.org.uk/btrfs-usage/index.html I
>would've expected 6 TB of usable space. Here I get 6.4 which is odd,
>but that only 1.5 TB is available is even stranger.
>
>Could anyone explain what I did wrong or why my expectations are wrong?
>
>Thank you in advance
I'd say df
@Kai and Dmitrii
thank you for your explanations. If I understand you correctly, you're
saying that btrfs makes no attempt to "optimally" use the physical
devices it has in the FS; once a new RAID1 block group needs to be
allocated, it will semi-randomly pick two devices with enough space and
>The problem is that each raid1 block group contains two chunks on two
>separate devices, so it can't fully utilize three devices no matter what.
>If that doesn't suit you, then you need to add a 4th disk. After
>that the FS will be able to use all unallocated space on all disks in the
>raid1 profile. But even then
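For what it's worth, the allocator is commonly modeled (for instance by the
carfax calculator linked in this thread) as picking the two devices with the
most unallocated space for each new raid1 block group, rather than picking
randomly. A toy simulation of that policy, with made-up device sizes:

/* Toy simulation of raid1 chunk allocation as commonly modeled: each
 * 1 GiB block group needs one chunk on each of two different devices,
 * and the two devices with the most unallocated space are chosen.
 * The sizes below (in GiB) are examples, not the poster's disks. */
#include <stdio.h>

#define NDEV 3

int main(void)
{
	long free_gib[NDEV] = {4000, 2000, 2000};
	long usable = 0;

	for (;;) {
		int a = -1, b = -1;	/* two devices with most free space */
		for (int i = 0; i < NDEV; i++) {
			if (a < 0 || free_gib[i] > free_gib[a]) {
				b = a;
				a = i;
			} else if (b < 0 || free_gib[i] > free_gib[b]) {
				b = i;
			}
		}
		if (free_gib[b] < 1)	/* raid1 needs space on two devices */
			break;
		free_gib[a]--;		/* one chunk on each device */
		free_gib[b]--;
		usable++;		/* 1 GiB of usable raid1 data */
	}
	printf("usable raid1 capacity: %ld GiB\n", usable);
	return 0;
}

With the sizes above this prints 4000 GiB, half the raw total; if one device
were larger than the other two combined, the excess would stay unusable in
raid1, and existing data only spreads onto a newly added device after a
balance.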
> @Kai and Dmitrii
> thank you for your explanations. If I understand you correctly, you're
> saying that btrfs makes no attempt to "optimally" use the physical
> devices it has in the FS; once a new RAID1 block group needs to be
> allocated, it will semi-randomly pick two devices with enough space
Hi,
On 10.09.2017 08:45 Qu Wenruo wrote:
>
>
> On 2017-09-10 14:41, Qu Wenruo wrote:
>>
>>
>> On 2017-09-10 07:50, Rohan Kadekodi wrote:
>>> Hello,
>>>
>>> I was trying to understand how file renames are handled in Btrfs. I
>>> read the code documentation, but had a problem understanding a few
Thank you for the prompt and elaborate answers! However, I think I was
unclear in my questions, and I apologize for the confusion.
What I meant was that for a file rename, when I check the blktrace
output, there are 2 writes of 256KB each starting from byte number:
13373440
When I check
On Sat, Sep 09, 2017 at 10:43:16PM +0300, Andrei Borzenkov wrote:
> On 09.09.2017 16:44, Ulli Horlacher wrote:
> >
> > Your tool does not create .snapshot subdirectories in EVERY directory like
>
> Neither does NetApp. Those "directories" are magic handles that do not
> really exist.
Correct,
On Sun, 10 Sep 2017 15:45:42 +0200, FLJ wrote:
> Hello all,
>
> I have a BTRFS RAID1 volume running for the past year. I avoided all
> pitfalls known to me that would mess up this volume. I never
> experimented with quotas, no-COW, snapshots, defrag, nothing really.
> The
Hello all,
I have a BTRFS RAID1 volume running for the past year. I avoided all
pitfalls known to me that would mess up this volume. I never
experimented with quotas, no-COW, snapshots, defrag, nothing really.
The volume has been a RAID1 from day 1 and has worked reliably until now.
Until yesterday it
> As I am writing some documentation about creating snapshots:
> Is there a generic name for both volume and subvolume root?
Yes: from the UNIX side it is the 'root directory', and from the
Btrfs side it is a 'subvolume'. Like some other things in Btrfs, the
terminology is often inconsistent, but "volume"
If 'btrfs_alloc_path()' fails, we must free the resources already
allocated, as done in the other error handling paths in this function.
Signed-off-by: Christophe JAILLET
---
fs/btrfs/tests/free-space-tree-tests.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
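The fix follows the usual goto-based cleanup idiom. A stand-alone illustration
of the pattern with generic placeholder allocations (not the actual
free-space-tree test code):

/* Illustration of the cleanup pattern: on a mid-function allocation
 * failure, free everything already allocated before returning. */
#include <stdio.h>
#include <stdlib.h>

static int do_work(void)
{
	int ret = 0;
	char *earlier = malloc(64);	/* stands in for the earlier allocation */
	char *path;			/* stands in for btrfs_alloc_path() */

	if (!earlier)
		return -1;

	path = malloc(64);
	if (!path) {
		ret = -1;
		goto out;		/* must not leak 'earlier' */
	}

	/* ... use both allocations ... */
	free(path);
out:
	free(earlier);
	return ret;
}

int main(void)
{
	return do_work() ? EXIT_FAILURE : EXIT_SUCCESS;
}

The point is simply that every exit path taken after the first successful
allocation must release it.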
On Sun, Sep 10, 2017 at 02:01:58PM +0800, Qu Wenruo wrote:
>
>
> On 2017-09-10 01:44, Marc MERLIN wrote:
> > So, should I assume that btrfs-progs git has some issue, since there is
> > no plausible way that a check --repair should be faster than a regular
> > check?
>
> Yes, the assumption that
Great, if the free space cache is fucked again after the next go around then I
need to expand the verifier to watch entries being added to the cache as well.
Thanks,
Josef
Sent from my iPhone
> On Sep 10, 2017, at 9:14 AM, Marc MERLIN wrote:
>
>> On Sun, Sep 10, 2017 at
On 10 September 2017 at 08:33, Marat Khalili wrote:
> It doesn't need the replaced disk to be readable, right?
Only enough to be mountable, which it already is, so your read errors
on /dev/sdb aren't a problem.
> Then what prevents the same procedure from working without a spare bay?
It is
On 2017-09-10 19:19, Christophe JAILLET wrote:
If 'btrfs_alloc_path()' fails, we must free the resources already
allocated, as done in the other error handling paths in this function.
Signed-off-by: Christophe JAILLET
Reviewed-by: Qu Wenruo
On Sun, Sep 10, 2017 at 03:12:16AM +, Josef Bacik wrote:
> Ok, mount -o clear_cache, umount, and run fsck again just to make sure. Then,
> if it comes out clean, mount with ref_verify again and wait for it to blow up
> again. Thanks,
Ok, just did the 2nd fsck; it came back clean after mount -o
On 10.09.2017 18:47, Kai Krakow wrote:
> On Sun, 10 Sep 2017 15:45:42 +0200, FLJ wrote:
>
>> Hello all,
>>
>> I have a BTRFS RAID1 volume running for the past year. I avoided all
>> pitfalls known to me that would mess up this volume. I never
>> experimented with quotas,
> > Drive1    Drive2    Drive3
> >   X         X
> >   X                   X
> >             X         X
> >
> > Where X is a chunk of a raid1 block group.
>
> But this table clearly shows that adding a third drive increases free
> space by 50%. You need to reallocate data to actually
On 10.09.2017 19:11, Dmitrii Tcvetkov wrote:
>> Actually based on http://carfax.org.uk/btrfs-usage/index.html I
>> would've expected 6 TB of usable space. Here I get 6.4 which is odd,
>> but that only 1.5 TB is available is even stranger.
>>
>> Could anyone explain what I did wrong or why my
On 2017-09-10 22:32, Rohan Kadekodi wrote:
Thank you for the prompt and elaborate answers! However, I think I was
unclear in my questions, and I apologize for the confusion.
What I meant was that for a file rename, when I check the blktrace
output, there are 2 writes of 256KB each starting
On Sun, Sep 10, 2017 at 01:16:26PM +, Josef Bacik wrote:
> Great, if the free space cache is fucked again after the next go
> around then I need to expand the verifier to watch entries being added
> to the cache as well. Thanks,
Well, I copied about 1TB of data, and nothing happened.
So it
On Sun, 10 Sep 2017 20:15:52 +0200, Ferenc-Levente Juhos wrote:
> >The problem is that each raid1 block group contains two chunks on two
> >separate devices, so it can't fully utilize three devices no matter
> >what. If that doesn't suit you, then you need to add a 4th disk. After
>
FLJ posted on Sun, 10 Sep 2017 15:45:42 +0200 as excerpted:
> I have a BTRFS RAID1 volume running for the past year. I avoided all
> pitfalls known to me that would mess up this volume. I never
> experimented with quotas, no-COW, snapshots, defrag, nothing really.
> The volume is a RAID1 from day
On 10.09.2017 23:17, Dmitrii Tcvetkov wrote:
>>> Drive1    Drive2    Drive3
>>>   X         X
>>>   X                   X
>>>             X         X
>>>
>>> Where X is a chunk of a raid1 block group.
>>
>> But this table clearly shows that adding a third drive increases free
>> space by 50%.