On Sun, Mar 20, 2016 at 10:34 PM, Ryan Erato wrote:
>
> Sending "home.snap" to "/mnt/ssd" results in the -2 error. What is
> peculiar, or possibly a red herring, is that it seems to fail at the
> same point each time, at 4.39 GB into the transfer.
That's pretty suspicious.
Hi,
Thanks for the quick response.
> There are a number of things missing from multiple device support,
> including any concept of a device becoming faulty (i.e. persistent
> failures rather than transient which Btrfs seems to handle OK for the
> most part), and then also getting it to go
Here's an example of what I've been trying:
# mount new ssd
root / # mount /dev/sdb6 /mnt/ssd/
# snapshot ROOT sub-volume mounted at /
root / # btrfs subvol snapshot -r / /ROOT.snap
Create a readonly snapshot of '/' in '//ROOT.snap'
root / # btrfs filesystem sync /
FSSync '/'
root / # btrfs
Hi folks,
So I just ran into this:
https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID#Making_the_harddisks_read-only_using_an_overlay_file
This is a device mapper overlay file - not overlayfs.
For repairs where it's uncertain what comes next, maybe this
is a viable
David Sterba wrote on 2016/03/18 19:18 +0100:
On Tue, Mar 08, 2016 at 04:46:41PM +0800, Qu Wenruo wrote:
1. Could you tell me what you'd like to do?
a) Provide exactly the same functionality as the current
implementation, but by another, more efficient method.
Same function, but less
Dave Hansen wrote on 2016/03/18 09:33 -0700:
On 03/17/2016 06:02 PM, Qu Wenruo wrote:
Dave Hansen wrote on 2016/03/17 09:36 -0700:
On 03/16/2016 06:36 PM, Qu Wenruo wrote:
Dave Hansen wrote on 2016/03/16 13:53 -0700:
I have a medium-sized multi-device btrfs filesystem (4 disks, 16TB
total)
Austin S. Hemmelgarn wrote on 2016/03/18 07:17 -0400:
On 2016-03-17 20:38, Qu Wenruo wrote:
Austin S. Hemmelgarn wrote on 2016/03/17 07:22 -0400:
On 2016-03-17 05:04, Qu Wenruo wrote:
Austin S. Hemmelgarn wrote on 2016/03/16 11:26 -0400:
Currently, open_ctree_fs_info will open whatever
There are a number of things missing from multiple device support,
including any concept of a device becoming faulty (i.e. persistent
failures rather than transient which Btrfs seems to handle OK for the
most part), and then also getting it to go degraded automatically, and
finally hot spare
Hi,
btrfs-progs 4.5 has been released.
There's a new command 'btrfs filesystem du' that mimics the 'du' utility but
also reports the extents shared among the files and groups selected by the
command-line arguments.
The standalone tools are starting the deprecation period, but they'll
stay for a
Hi,
I'm testing a btrfs configuration in VirtualBox before I put it on real
hardware and I'm running into a problem where the kernel dies from a BUG_ON
assertion when I test hot-removing a mirror drive in a RAID-1. Since this
apparently defeats the whole point of having RAID-1, this is rather
CLIENT DATABASES FOR SELLING YOUR GOODS AND SERVICES!
We compile made-to-order databases of potential
business clients from the internet!
Using the database you can call, write, send faxes
and email, and run any kind of direct, active sales
of your goods and services!
As soon as tomorrow, a huge client database is yours,
On Sun, Mar 20, 2016 at 1:31 PM, Patrick Tschackert wrote:
> My RAID is done with the scrub now; this is what I get:
>
> $ cat /sys/block/md0/md/mismatch_cnt
> 311936608
I think this is an assembly problem. Read errors don't result in
mismatch counts. An md mismatch count
I'm not an expert by any means, but I did a migration like this a few weeks ago.
The most consistent recommendation on this mailing list is to use the
newest kernels and btrfs-progs feasible. I did my migration using
Fedora 24 live media, which at the time was kernel ~4.3. I see your
btrfs-progs
On Sat, Mar 19, 2016 at 4:58 PM, Ryan Erato wrote:
> I'm having quite the time trying to move my current Gentoo install to
> an SSD. I first attempted Clonezilla, but that failed while cloning
> the btrfs partition. I then realized I could use btrfs send/receive.
>
> The
On Sun, Mar 20, 2016 at 6:19 AM, Martin Steigerwald wrote:
> On Sonntag, 20. März 2016 10:18:26 CET Patrick Tschackert wrote:
>> > I think in retrospect the safe way to do these kinds of Virtual Box
>> > updates, which require kernel module updates, would have been to
>> >
On Sun, Mar 20, 2016 at 3:18 AM, Patrick Tschackert wrote:
> Thanks for answering again!
> So, first of all I installed a newer kernel from the backports, as per
> Nicholas D Steeves's suggestion:
>
> $ apt-get install -t jessie-backports linux-image-4.3.0-0.bpo.1-amd64
>
>
Hello my dear upstream heroes,
We are using progs v4.4.1 and kernel 4.4.5 in Rockstor and sometimes
this warning is displayed while assigning a qgroup. Here's a sample
output.
/sbin/btrfs qgroup assign 0/408 2015/6 /mnt2/
WARNING: quotas may be inconsistent, rescan needed
2015/6 is the qgroup
I do plan on physically replacing the current drive with the new one,
and my fstab/boot commands use device names. I never could get UUIDs or
labels to work, but that's another project.
However, this still leaves me unable to take advantage of btrfs
features for implementing an incremental backup solution
For nocow/prealloc files, we try our best not to allocate space; however,
this ends up as a huge performance regression, since it's expensive to check
whether data is shared.
Let's go back to checking for shared data only once we're unable to allocate
space.
The test was made against a tmpfs backed loop
"inspect-internal subvolid-resolve" has been broken since the following commit:
commit 176aeca9a148 ("btrfs-progs: add getopt stubs where needed")
This is because the 1st argument, the subvolid, is also used as the pathname
of the filesystem; the 2nd argument should be used for that purpose instead.
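The argument mix-up can be illustrated with a small sketch (hypothetical
names; this is not the actual btrfs-progs code):

```python
# After option parsing, the remaining argv for
# "inspect-internal subvolid-resolve <subvolid> <path>" holds two items.
# The regression used args[0] (the subvolid) as the filesystem path;
# the fix takes the path from args[1].

def parse_subvolid_resolve(args):
    subvolid = int(args[0])
    path = args[1]  # fixed: the path is the 2nd argument, not the 1st
    return subvolid, path

print(parse_subvolid_resolve(["256", "/mnt"]))  # (256, '/mnt')
```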
* actual result
"qgroup assign" has been treated as taking no options since the following
commit:
commit 176aeca9a148 ("btrfs-progs: add getopt stubs where needed")
However, options can in fact be passed to this command.
* actual result
==
# ./btrfs qgroup
On Sonntag, 20. März 2016 10:18:26 CET Patrick Tschackert wrote:
> > I think in retrospect the safe way to do these kinds of Virtual Box
> > updates, which require kernel module updates, would have been to
> > shutdown the VM and stop the array. *shrug*
>
>
> After this, I think I'll just do
Thanks for answering. I already upgraded to a backports kernel, as mentioned
here:
https://mail-archive.com/linux-btrfs@vger.kernel.org/msg51748.html
I now have
$ uname -a
Linux vmhost 4.3.0-0.bpo.1-amd64 #1 SMP Debian 4.3.5-1~bpo8+1 (2016-02-23)
x86_64 GNU/Linux
As I wrote here
On Samstag, 19. März 2016 19:34:55 CET Chris Murphy wrote:
> >>> $ uname -a
> >>> Linux vmhost 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt20-1+deb8u4
> >>> (2016-02-29) x86_64 GNU/Linux
> >>
> >>This is old. You should upgrade to something newer, ideally 4.5 but
> >>4.4.6 is good also, and then oldest
On Mittwoch, 2. März 2016 09:06:57 CET Qu Wenruo wrote:
> And maybe I just missed something, but the filenames seem to be left
> untouched, meaning they will leak a lot of information.
> Just like default eCryptfs behavior.
>
> I understand that's an easy design and it's not a high priority thing,
> but I
On Sonntag, 13. Dezember 2015 23:35:08 CET Martin Steigerwald wrote:
> Hi!
>
> For me it is still not production ready. Again I ran into:
>
> btrfs kworker thread uses up 100% of a Sandybridge core for minutes on
> random write into big file
> https://bugzilla.kernel.org/show_bug.cgi?id=90401
I
Thanks for answering again!
So, first of all I installed a newer kernel from the backports, as per
Nicholas D Steeves's suggestion:
$ apt-get install -t jessie-backports linux-image-4.3.0-0.bpo.1-amd64
After rebooting:
$ uname -a
Linux vmhost 4.3.0-0.bpo.1-amd64 #1 SMP Debian 4.3.5-1~bpo8+1
(sorry for any duplicates, vger.org hates gmail)
On Thu, Mar 17, 2016 at 11:16 PM, Liu Bo wrote:
> For nocow/prealloc files, we try our best not to allocate space; however,
> this ends up as a huge performance regression, since it's expensive to
> check whether data is shared.
>
>
From: Flex Liu
In fs/btrfs/volumes.c:2328:

    if (seeding_dev) {
        sb->s_flags &= ~MS_RDONLY;
        ret = btrfs_prepare_sprout(root);
        BUG_ON(ret); /* -ENOMEM */
    }

the error code would be returned from:
fs_devs =
On Sat, 12 Mar 2016 20:48:47 +0500
Roman Mamedov wrote:
> The system was seemingly running just fine for days or weeks, then I
> routinely deleted a bunch of old snapshots, and suddenly got hit with:
>
> [Sat Mar 12 20:17:10 2016] BTRFS error (device dm-0): parent transid