On 13.06.2016 01:49, Henk Slager wrote:
> On Sun, Jun 12, 2016 at 11:22 PM, Maximilian Böhm wrote:
>> Hi there, I did something terribly wrong, all blame on me. I wanted to
>> write to a USB stick, but /dev/sdc wasn't the stick in this case but
>> an attached HDD with GPT and an 8 TB btrfs partition…
Henk Slager posted on Sun, 12 Jun 2016 21:03:22 +0200 as excerpted:
> But now that you anyhow have all data on 3x 6TB drives, you could save
> balancing time by just doing a btrfs replace from 6TB to 8TB three times,
> and then for the 4th 8TB just add it and let btrfs do the
> spreading/balancing over time by its
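For the archives, a minimal sketch of what that sequence could look like (device paths and the devid are placeholders, not from this thread):

$ sudo btrfs replace start /dev/sda /dev/sde /mnt    # repeat per 6TB drive
$ sudo btrfs replace status /mnt
$ sudo btrfs filesystem resize 1:max /mnt            # grow the replaced devid to the full 8TB
$ sudo btrfs device add /dev/sdh /mnt                # then add the 4th 8TB drive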
This fixes a problem introduced in commit
2f3165ecf103599f82bf0ea254039db335fb5005
"btrfs: don't force mounts to wait for cleaner_kthread to delete one or more
subvolumes".
open_ctree eventually calls btrfs_replay_log, which in turn calls
btrfs_commit_super, which tries to lock the cleaner_mutex,
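To illustrate the ordering problem (a userspace analogue I sketched, not the actual kernel code): the mount path blocks on a mutex that a long-running background thread may already hold.

/* Userspace analogue; the names mirror the kernel ones but this is
 * not kernel code. Build with: cc -pthread demo.c */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t cleaner_mutex = PTHREAD_MUTEX_INITIALIZER;

static void *cleaner_kthread(void *arg)
{
	pthread_mutex_lock(&cleaner_mutex);
	sleep(10);                      /* stands in for deleting subvolumes */
	pthread_mutex_unlock(&cleaner_mutex);
	return NULL;
}

int main(void)
{
	pthread_t cleaner;

	pthread_create(&cleaner, NULL, cleaner_kthread, NULL);
	sleep(1);                       /* let the cleaner take the mutex first */

	/* open_ctree -> btrfs_replay_log -> btrfs_commit_super: */
	pthread_mutex_lock(&cleaner_mutex);     /* the mount stalls here */
	puts("log replay can continue");
	pthread_mutex_unlock(&cleaner_mutex);
	pthread_join(cleaner, NULL);
	return 0;
}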
On Fri, 10 Jun 2016, Chris Murphy wrote:
> > Are those issues something that was fixed since 4.6.0-rc4+, or should I
> > be on the lookout for them to come back? What other information should I
> > provide if I run into them again, to help you troubleshoot/fix it?
> > P.S. Please CC me the replies
On Mon, Jun 13, 2016 at 10:10:50AM +0800, Lu Fengqi wrote:
> Test if qgroup can handle extent de-reference during reallocation.
> "Extent de-reference" means reducing an extent's reference count or
> freeing an extent.
> Although the current qgroup implementation handles this correctly, we
> still need to guard against any regression which may break it.
Test if qgroup can handle extent de-reference during reallocation.
"Extent de-reference" means reducing an extent's reference count or
freeing an extent.
Although the current qgroup implementation handles this correctly, we
still need to guard against any regression which may break it.
Signed-off-by: Lu Fengqi
--
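The shape of such a test, roughly (my own sketch of the idea, with placeholder device/paths, not the actual fstests case):

mkfs.btrfs -f /dev/sdb1                    # scratch device, example path
mount /dev/sdb1 /mnt/scratch
btrfs quota enable /mnt/scratch
dd if=/dev/zero of=/mnt/scratch/file bs=1M count=512
sync
btrfs balance start --full-balance /mnt/scratch   # relocation drops and re-adds refs
umount /mnt/scratch
btrfs check /dev/sdb1                      # flags qgroup accounting mismatches, if any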
At 06/12/2016 12:38 AM, Eryu Guan wrote:
On Wed, Jun 01, 2016 at 02:40:11PM +0800, Lu Fengqi wrote:
Test if qgroup can handle extent de-reference during reallocation.
"Extent de-reference" means reducing an extent's reference count or
freeing an extent.
Although the current qgroup implementation handles this correctly, we
still need to guard against any regression which may break it.
On 06/03/2016 09:50 AM, Andrew Armenia wrote:
This patch adds a mount option 'chunk_width_limit=X' which, when set, forces
the chunk allocator to use at most X devices when allocating a chunk.
This may help reduce the seek penalties seen in filesystems with large
numbers of devices.
Have you
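If I read the patch right, usage would be along these lines (the option exists only with this patch applied, so this is hypothetical):

$ sudo mount -o chunk_width_limit=4 /dev/sdb /mnt
# new chunks would then stripe across at most 4 devices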
check_shared only identified an extent as shared when the root_id or
object_id differed. However, if an extent was referenced at different
offsets of the same file, it should also be identified as shared.
In addition, check_shared's loop scales as at least n^3, so if an extent
has too many
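The same-inode case is easy to produce by hand, e.g. by cloning a range of a file onto a different offset of the same file (paths are examples):

$ xfs_io -f -c "pwrite -S 0xab 0 128k" /mnt/file
$ xfs_io -c "reflink /mnt/file 0 128k 128k" /mnt/file
$ sync
$ xfs_io -c "fiemap -v" /mnt/file
# the extent now has two references from one inode (offsets 0 and
# 128k), so fiemap should report it as shared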
At 06/09/2016 05:15 PM, David Sterba wrote:
On Wed, Jun 08, 2016 at 08:53:00AM -0700, Mark Fasheh wrote:
On Wed, Jun 08, 2016 at 01:13:03PM +0800, Lu Fengqi wrote:
check_shared only identified an extent as shared when the root_id or
object_id differed. However, if an extent was referenced at different
offsets of the same file, it should also be identified as shared.
I don't think it's memory corruption, as my memory modules test out fine,
and the problem began when I ran btrfs check --repair. Someone responded
that they thought the missing files that are playable by the media player
were still in memory, but they still play after a reboot and they're not in a
On Sun, Jun 12, 2016 at 3:22 PM, Maximilian Böhm wrote:
> Hi there, I did something terribly wrong, all blame on me. I wanted to
> write to a USB stick, but /dev/sdc wasn't the stick in this case but
> an attached HDD with GPT and an 8 TB btrfs partition…
>
> $ sudo dd bs=4M if=manjaro-kde-16.06.1
On Sun, Jun 12, 2016 at 11:22 PM, Maximilian Böhm wrote:
> Hi there, I did something terribly wrong, all blame on me. I wanted to
> write to a USB stick, but /dev/sdc wasn't the stick in this case but
> an attached HDD with GPT and an 8 TB btrfs partition…
GPT has a secondary copy at the end of the disk.
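If only the start of the disk was overwritten, that backup GPT should still be intact, and gdisk can rebuild the main header from it. From memory (double-check before writing anything):

$ sudo gdisk /dev/sdc
Command (? for help): r     # recovery and transformation menu
Recovery/transformation command (? for help): b     # use backup GPT header
Recovery/transformation command (? for help): w     # write table and exit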
Hi Maximilian,
On Sonntag, 12. Juni 2016 23:22:11 CEST Maximilian Böhm wrote:
> Hi there, I did something terribly wrong, all blame on me. I wanted to
> write to a USB stick, but /dev/sdc wasn't the stick in this case but
> an attached HDD with GPT and an 8 TB btrfs partition…
>
> $ sudo dd bs=4M
On Fri, Jun 10, 2016 at 9:00 PM, Henk Slager wrote:
> I have seldom seen an fs so full, very regular numbers :)
>
> But can you provide the output of this script:
> https://github.com/knorrie/btrfs-heatmap/blob/master/show_usage.py
>
> It gives better info w.r.t. devices and it is then easier to s
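(I assume it is simply pointed at the mountpoint, something like the line below; check the script's source for the exact invocation.)

$ sudo python show_usage.py /mnt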
Hi there, I did something terribly wrong, all blame on me. I wanted to
write to a USB stick, but /dev/sdc wasn't the stick in this case but
an attached HDD with GPT and an 8 TB btrfs partition…
$ sudo dd bs=4M if=manjaro-kde-16.06.1-x86_64.iso of=/dev/sdc
483+1 records in
483+1 records out
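Back-of-the-envelope: 484 records at 4 MiB each is roughly 1.9 GiB overwritten from the start of the disk. btrfs keeps superblock copies at 64 KiB, 64 MiB and 256 GiB into the device, so only the third copy can have survived here. Once the partition table is restored, something like this (device name assumed) can restore the supers from a good copy:

$ sudo btrfs rescue super-recover -v /dev/sdc1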
On Sun, Jun 12, 2016 at 7:03 PM, boli wrote:
>>> It's done now, and took close to 99 hours to rebalance 8.1 TB of data from
>>> a 4x6TB raid1 (12 TB capacity) with 1 drive missing onto the remaining
>>> 3x6TB raid1 (9 TB capacity).
>>
>> Indeed, it's not clear why it takes 4 days for such an action
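For scale, 8.1 TB in 99 hours is only about 23 MB/s on average:

$ echo 'scale=1; 8.1 * 10^12 / (99 * 3600) / 10^6' | bc
22.7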
Hi!
On 06/12/2016 08:41 PM, Goffredo Baroncelli wrote:
Hi All,
On 2016-06-10 22:47, Hans van Kranenburg wrote:
+    if (sk->min_objectid < sk->max_objectid)
+        sk->min_objectid += 1;
...and now it's (289406977 168 19193856), which means you're
continuing your search *after* the block group item!
Hi All,
On 2016-06-10 22:47, Hans van Kranenburg wrote:
>> +    if (sk->min_objectid < sk->max_objectid)
>> +        sk->min_objectid += 1;
>
> ...and now it's (289406977 168 19193856), which means you're
> continuing your search *after* the block group item!
>
> (289406976 168 19193856
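For reference, one way to resume without skipping anything is to advance the whole (objectid, type, offset) key by one instead of bumping min_objectid; a minimal C sketch against the search ioctl structs (untested):

#include <linux/btrfs.h>

/* Advance sk to the key immediately after the last item returned,
 * carrying offset -> type -> objectid, so no item is skipped. */
static void advance_search_key(struct btrfs_ioctl_search_key *sk)
{
	if (sk->min_offset < (__u64)-1) {
		sk->min_offset++;
	} else if (sk->min_type < 255) {
		sk->min_offset = 0;
		sk->min_type++;
	} else if (sk->min_objectid < (__u64)-1) {
		sk->min_offset = 0;
		sk->min_type = 0;
		sk->min_objectid++;
	}
}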
>> It's done now, and took close to 99 hours to rebalance 8.1 TB of data from a
>> 4x6TB raid1 (12 TB capacity) with 1 drive missing onto the remaining 3x6TB
>> raid1 (9 TB capacity).
>
> Indeed, it's not clear why it takes 4 days for such an action. You
> indicated that you cannot add an online 5
Bearcat Şándor writes:
> Is there a fix for the bad tree block error, which seems to be the
> root (pun intended) of all this?
I think the root cause is some memory corruption. It might be a known case;
maybe someone else recognizes something.
Anyhow, if you can't and won't reproduce
On Sun, Jun 12, 2016 at 12:35 PM, boli wrote:
>> It has now been doing "btrfs device delete missing /mnt" for about 90 hours.
>>
>> These 90 hours seem like a rather long time, given that a rebalance/convert
>> from 4-disk-raid5 to 4-disk-raid1 took about 20 hours months ago, and a
>> scrub takes about 7 hours (4-disk-raid1).
> It has now been doing "btrfs device delete missing /mnt" for about 90 hours.
>
> These 90 hours seem like a rather long time, given that a rebalance/convert
> from 4-disk-raid5 to 4-disk-raid1 took about 20 hours months ago, and a scrub
> takes about 7 hours (4-disk-raid1).
>
> OTOH the files