Duncan 1i5t5.dun...@cox.net wrote:
It's probably just a flaw that
btrfs device composition comes up later and the kernel tries too early to
mount root. rootwait probably won't help here either. But rootdelay
may help that case, though I myself don't have the ambition to experiment
with it. My
Duncan 1i5t5.dun...@cox.net wrote:
While in theory btrfs has the device= mount option, and the kernel has
rootflags= to tell it what mount options to use, at least last I checked
a few kernel cycles ago (I'd say last summer, so 3-5 kernel cycles ago),
for some reason rootflags=device=
P. Remek p.rem...@googlemail.com wrote:
Yes, it was implemented for the purpose of allowing an application to
implement its own caching - probably for the sole purpose of doing it
better or more efficiently. But it simply does not work out that well, at
least with COW fs. The original idea
Austin S Hemmelgarn ahferro...@gmail.com wrote:
On 2015-02-11 23:33, Kai Krakow wrote:
Duncan 1i5t5.dun...@cox.net wrote:
P. Remek posted on Tue, 10 Feb 2015 18:44:33 +0100 as excerpted:
In the test, I use the --direct=1 parameter for fio, which basically does
O_DIRECT on the target file
Ed Tomlinson e...@aei.ca wrote:
On Tuesday, February 10, 2015 2:17:43 AM EST, Kai Krakow wrote:
Tobias Holst to...@tobby.eu wrote:
and btrfs scrub status /[device] gives me the following output:
scrub status for [UUID]
scrub started at Mon Feb 9 18:16:38 2015 and was aborted after 2008
Duncan 1i5t5.dun...@cox.net wrote:
P. Remek posted on Tue, 10 Feb 2015 18:44:33 +0100 as excerpted:
In the test, I use the --direct=1 parameter for fio, which basically does
O_DIRECT on the target file. O_DIRECT should guarantee that the
filesystem cache is bypassed and IO is sent directly to
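[As a sketch, a minimal fio run of the kind discussed might look like the
following; the file name and sizes are illustrative, not taken from the
thread:

  fio --name=randwrite --filename=/mnt/btrfs/testfile \
      --rw=randwrite --bs=4k --size=1G \
      --direct=1 --ioengine=libaio --iodepth=32
]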
P. Remek p.rem...@googlemail.com wrote:
Hello,
I am benchmarking Btrfs and when benchmarking random writes with the fio
utility, I noticed the following two things:
1) On the first run, when the target file doesn't exist yet, performance is
about 8000 IOPS. On the second, and every other run, performance
Brendan Hide bren...@swiftspirit.co.za wrote:
I have the following two lines in
/etc/udev/rules.d/61-persistent-storage.rules for two old 250GB
spindles. It sets the timeout to 120 seconds because these two disks
don't support SCT ERC. This may very well apply without modification to
other
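[A rule of the kind described might look roughly like this; the model string
is a placeholder, and the quoted rules themselves are truncated above:

  ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", \
      ATTRS{model}=="SOME DISK MODEL", \
      RUN+="/bin/sh -c 'echo 120 > /sys/block/%k/device/timeout'"
]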
Chris Murphy li...@colorremedies.com wrote:
On Fri, Feb 6, 2015 at 1:01 PM, Brian B canis8...@gmail.com wrote:
My laptop has two disks, an SSD and a traditional magnetic disk. I plan
to make a partition on the mag disk equal in size to the SSD and set up
BTRFS RAID1. This I know how to do.
Lucas Smith vedal...@lksmith.net wrote:
Hey folks!
I have a confusing question brought up by my Debian Testing
installer and am unsure what the implications are. I want to run
btrfs in RAID10 on 4x1TB WD RE3 drives. Apparently, my system is set up to
boot using EFI. Are
Duncan 1i5t5.dun...@cox.net wrote:
Back to the extents counts: What I did next was to implement a defrag
job that regularly defrags the journal (actually, the complete log
directory, as other log files suffer the same problem):
$ cat /usr/local/sbin/defrag-logs.rb
#!/bin/sh
exec btrfs
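[The quoted script is cut off; a plausible completion matching its stated
purpose, with the log path assumed, would be:

  #!/bin/sh
  exec btrfs filesystem defragment -r /var/log
]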
Duncan 1i5t5.dun...@cox.net wrote:
As they say, Whoosh!
At least here, I interpreted that remark as primarily sarcastic
commentary on the systemd devs' apparent attitude, which can be
(controversially) summarized as: Systemd doesn't have problems because
it's perfect. Therefore, any
Пламен Петров pla...@petrovi.no-ip.info wrote:
I'm going with the module suggestion from Marc, too.
/dev/sda2 / btrfs relatime,compress=zlib 0 0
This line looks kinda useless to me. The compress=zlib option won't be
applied at boot and cannot be changed at
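[For the root filesystem, mount options generally have to be passed on the
kernel command line instead; a sketch, with the device name assumed:

  root=/dev/sda2 rootfstype=btrfs rootflags=compress=zlib
]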
GEO 1g2e...@gmail.com wrote:
First of all, I am sorry that I screwed up the whole structure of the
discussion (I have not subscribed to the mailing list, and as Kai replied
to the mailing list only, I could not reply to his answer.)
Umm... Try an NNTP gateway like gmane to follow the list in
Duncan 1i5t5.dun...@cox.net wrote:
Duncan had a nice example in this list of how to migrate
directories to subvolumes by using shallow copies:
mv dir dir.old
btrfs sub create dir
cp -a --reflink=always dir.old/. dir/.
rm -Rf dir.old
FWIW, that was someone else. I remember seeing it
GEO 1g2e...@gmail.com wrote:
@Kai Krakow: I accept your opinion and thank you for your answer.
However I have special reasons for doing so. I could name you a few use cases.
For example, I do not need to back up search indexes as they mess up over
time, so I simply recreate the cache in case
Chris Murphy li...@colorremedies.com wrote:
Snapshotting, deleting a bunch of directories in that snapshot, then
backing up the snapshot, then deleting the snapshot will work. But it
sounds more involved. But if you're scripting it, it probably doesn't
matter either way.
Will it work as
Kai Krakow hurikhan77+bt...@gmail.com wrote:
Most of these directories aren't
changing anyways most of the time and thus won't occupy disk space only
once in the backup.
Of course "won't" should've read "would"... ;-)
Hi!
Is it technically possible to wait for a snapshot to be completely purged from
disk? I imagine an option like --wait for btrfs subvolume delete.
This would fit some purposes I'm planning to implement:
* In a backup scenario have a subprocess which deletes snapshots one by one,
starting with
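[For what it's worth, later btrfs-progs releases grew a command that performs
exactly this wait; the paths here are illustrative:

  btrfs subvolume delete /mnt/snapshots/2014-02-13
  btrfs subvolume sync /mnt/snapshots
]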
Garry T. Williams gtwilli...@gmail.com wrote:
On 2-13-14 20:02:43 Kai Krakow wrote:
Is it technically possible to wait for a snapshot to be completely purged
from disk? I imagine an option like --wait for btrfs subvolume
delete.
This may be what you're looking for:
http
Holger Hoffstätte holger.hoffstae...@googlemail.com wrote:
On Thu, 13 Feb 2014 20:02:43 +0100, Kai Krakow wrote:
Is it technically possible to wait for a snapshot to be completely purged from
disk? I imagine an option like --wait for btrfs subvolume delete.
That would indeed be sweet (see
Brendan Hide bren...@swiftspirit.co.za wrote:
Is it technically possible to wait for a snapshot to be completely purged from
disk? I imagine an option like --wait for btrfs subvolume delete.
This would fit some purposes I'm planning to implement:
* In a backup scenario
I have a similar
Duncan 1i5t5.dun...@cox.net wrote:
Roman Mamedov posted on Sun, 09 Feb 2014 04:10:50 +0600 as excerpted:
If you need to perform a btrfs-specific operation, you can easily use
the btrfs-specific tools to prepare for it, specifically btrfs fi
df, which could provide every imaginable
Roman Mamedov r...@romanrm.net wrote:
When I started to use unix, df returned blocks, not bytes. Without your
proposed patch, it does that right. With your patch, it does it wrong.
It returns total/used/available space that is usable/used/available by/for
user data.
No, it does not. It
Duncan 1i5t5.dun...@cox.net wrote:
[...]
Difficult to twist your mind around that, but well explained. ;-)
A snapshot thus looks much like a crash in terms of NOCOW file integrity
since the blocks of a NOCOW file are simply snapshotted in-place, and
there's already no checksumming or file
Hugo Mills h...@carfax.org.uk wrote:
On Sat, Feb 08, 2014 at 05:33:10PM +0600, Roman Mamedov wrote:
On Fri, 07 Feb 2014 21:32:42 +0100
Kai Krakow hurikhan77+bt...@gmail.com wrote:
It should show the raw space available. Btrfs also supports compression
and doesn't try to be smart about
Chris Murphy li...@colorremedies.com wrote:
On Feb 6, 2014, at 11:08 PM, Roman Mamedov r...@romanrm.net wrote:
And what
if I am accessing that partition on a server via a network CIFS/NFS share
and don't even *have a way to find out* any of that?
That's the strongest argument. And
Martin Steigerwald mar...@lichtvoll.de wrote:
While I understand that there is *never* a guarantee that a given free
space can really be allocated by a process because other processes can
allocate space as well in the meantime, and while I understand that it's
difficult to provide an accurate
Roman Mamedov r...@romanrm.net wrote:
It should show the raw space available. Btrfs also supports compression
and doesn't try to be smart about how much compressed data would fit in
the free space of the drive. If one is using RAID1, it's supposed to fill
up at a rate of 2:1. If one is
cwillu cwi...@cwillu.com wrote:
Everyone who has actually looked at what the statfs syscall returns
and how df (and everyone else) uses it, keep talking. Everyone else,
go read that source code first.
There is _no_ combination of values you can return in statfs which
will not be grossly
Roman Mamedov r...@romanrm.net wrote:
UNIX 'df' and the 'statfs' call, on the other hand, should keep the behavior
people have been accustomed to relying on since the 1970s.
When I started to use unix, df returned blocks, not bytes. Without your
proposed patch, it does that right. With your patch, it
Josef Bacik jba...@fb.com wrote:
On 02/05/2014 03:15 PM, Roman Mamedov wrote:
Hello,
On a freshly-created RAID1 filesystem of two 1TB disks:
# df -h /mnt/p2/
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 1.8T 1.1M 1.8T 1% /mnt/p2
I cannot write 2TB of user
Chris Murphy li...@colorremedies.com wrote:
If the database/virtual machine/whatever is crash safe, then the
atomic state that a snapshot grabs will be useful.
How fast is this state fixed on disk from the time of the snapshot
command? Loosely speaking. I'm curious if this is 1 second; a
Duncan 1i5t5.dun...@cox.net wrote:
The question here is: Does it really make sense to create such snapshots
of disk images currently online and running a system? They will probably
be broken anyway after rollback - or at least I'd not fully trust the
contents.
VM images should not be
Chris Murphy li...@colorremedies.com wrote:
On Feb 7, 2014, at 2:07 PM, Kai Krakow hurikhan77+bt...@gmail.com wrote:
Chris Murphy li...@colorremedies.com wrote:
If the database/virtual machine/whatever is crash safe, then the
atomic state that a snapshot grabs will be useful.
How
Duncan 1i5t5.dun...@cox.net wrote:
Ah okay, that makes it clear. So, actually, in the snapshot the file is
still nocow - just with the exception that blocks being written to become
unshared and relocated. This may introduce a lot of fragmentation but it
won't become worse when rewriting the
David Sterba dste...@suse.cz wrote:
On Tue, Feb 04, 2014 at 08:22:05PM -0500, Josef Bacik wrote:
On 02/04/2014 03:52 PM, Kai Krakow wrote:
Hi!
I'm curious... The whole snapshot thing on btrfs is based on its COW
design. But you can make individual files and directory contents nocow
Hi!
I'm curious... The whole snapshot thing on btrfs is based on its COW design.
But you can make individual files and directory contents nocow by applying
the C attribute to them using chattr. This is usually recommended for database
files and VM images. So far, so good...
But what happens to
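[The usual way to apply the attribute, as a sketch with an assumed path; the
C flag only takes reliable effect on new or empty files, so it is typically
set on the containing directory so that new files inherit it:

  mkdir /var/lib/vm-images
  chattr +C /var/lib/vm-images
]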
Martin Steigerwald mar...@lichtvoll.de wrote:
Okay, I have seen 260 MB/s. But frankly I am pretty sure that Virtuoso
isn't doing this kind of large-scale I/O on a highly fragmented file. It's
a database. It's random access. My opinion is that Virtuoso couldn't care
less about the
KC conrad.francois.ar...@googlemail.com wrote:
I was wondering whether to use options like autodefrag and
inode_cache on SSDs.
On one hand, one always hears that defragmentation of SSDs is a no-no;
does that apply to BTRFS's autodefrag?
Also, just recently, I heard something similar
KC conrad.francois.ar...@googlemail.com wrote:
I followed your advice on NOCOW for virtualbox images and torrents like
so: chattr -v /home/juha/VirtualBox\ VMs/
chattr -RC /home/juha/Downloads/torrent/#unfinished
As you can see, I used the recursive flag. However, I do not know
whether
Holger Hoffstaette holger.hoffstae...@googlemail.com wrote:
On Sun, 24 Nov 2013 22:45:59 -0500, Jim Salter wrote:
TL;DR scrub's ioprio argument isn't really helpful - a scrub murders
system performance until it's done.
My system:
3.11 kernel (from Ubuntu Saucy)
I don't run Ubuntu,
Hello list!
What is the status of btrfs' self-healing capabilities?
On my backup btrfs device I am currently facing back-reference errors that
btrfs can neither deal with online nor btrfsck repair (it
bails out with an assertion). I'm going to post this as a separate post.
So
Hello list!
My backup btrfs device fails to delete snapshots with the following
backtrace:
[87332.200212] WARNING: CPU: 0 PID: 5406 at fs/btrfs/extent-tree.c:5723
__btrfs_free_extent+0x692/0xb20()
[87332.200213] Modules linked in: rfcomm af_packet bnep vmnet(O) vmblock(O)
vsock vmmon(O)
Tim Cuthbertson ratch...@gmail.com wrote:
I am a bit confused and I have probably managed to outsmart myself.
For about 15 months, I have been running my system on a single, large
btrfs volume. It is RAID-0 on two SATA-III HDD's for a total of 1.9
TB. This is a home system running Siduction
Duncan 1i5t5.dun...@cox.net wrote:
But because a full balance rewrites everything anyway, it'll effectively
defrag too.
Is that really true? I thought it just rewrites each distinct extent and
shuffles chunks around... This would mean it does not merge extents
together.
Regards,
Kai
Marc MERLIN m...@merlins.org wrote:
On Mon, Dec 30, 2013 at 02:22:55AM +0100, Kai Krakow wrote:
These thoughts are actually quite interesting. So you are saying that data
may not be fully written to the SSD although the kernel thinks so? This is
That, and worse.
Incidentally, I have just
Duncan 1i5t5.dun...@cox.net wrote:
[ spoiler: tldr ;-) ]
* How stable is it? I've read about some csum errors lately...
FWIW, both bcache and btrfs are new and still developing technology.
While I'm using btrfs here, I have tested usable (which for root
either means directly
Aastha Mehta aasth...@gmail.com wrote:
Rather than a local disk, I have a remote device to which my IO
requests are sent and from which the data is fetched. I need certain
data to be fetched from the remote device after a remount. But somehow
I do not see any request appearing at the
Martin Steigerwald mar...@lichtvoll.de wrote:
- btrfs dedup disable
Delete the dedup tree; after this we're not able to use dedup any more
unless you enable it again.
So if deduplication has been switched on for a while, btrfs dedup disable
will cause BTRFS to undo the deduplication (and
Hello list!
I'm planning to buy a small SSD (around 60GB) and use it for bcache in front
of my 3x 1TB HDD btrfs setup (mraid1+draid0) using write-back caching. Btrfs
is my root device, thus the system must be able to boot from bcache using
init ramdisk. My /boot is a separate filesystem
Chris Murphy li...@colorremedies.com wrote:
I think most of these questions are better suited for the bcache list.
Ah yes, you are right. I will repost the non-btrfs related questions to the
bcache list. But actually I am most interested in using bcache together with
btrfs, so getting a general
Duncan 1i5t5.dun...@cox.net wrote:
But the above documentation should also suggest trying this to see if it
addresses that remaining single-mode system chunk stub:
btrfs balance start -fsconvert=raid1 /home
Cool man, that fixed it for me. :-)
Regards,
Kai
Marc MERLIN m...@merlins.org wrote:
I'm one of those people who uses cp -al and rsync to do backups. Indeed
I should likely rework the flow to use subvolumes and snapshots.
You also mentioned reflinks, and it sounds like I can use
cp -a --reflink instead of cp -al.
Also, would the dedupe
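[An illustrative invocation of the reflink variant, with assumed paths:

  cp -a --reflink=always /data /backups/data-$(date +%F)
]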
Hi!
Andrea Gelmini andrea.gelm...@gmail.com wrote:
and thanks a lot for your work.
I have a USB drive with BTRFS, on which I write with different
kernel releases.
Anyway, today I made a copy of one big file, and then powered off
the computer with a clean shutdown (Ubuntu 13.10 -
Duncan 1i5t5.dun...@cox.net wrote:
100 is I_ERR_FILE_EXTENT_DISCOUNT. I'm not sure what kind of problem
this indicates but btrfsck does not seem to fix this currently - it just
detects it.
Interesting...
I wish it were documented what it technically means and what implications
each
Hendrik Friedel hend...@friedels.name wrote:
I re-post this:
[...]
root 256 inode 9579 errors 100
root 256 inode 9580 errors 100
root 256 inode 14258 errors 100
root 256 inode 14259 errors 100
root inode 9579 errors 100
root inode 9580 errors 100
root inode 14258 errors
Hi!
What leafsize was used when making the filesystem? The default is now
(as of yesterday) 16KB to avoid metadata fragmentation.
Since my btrfs is about 2 years old I suppose I'm still using 4kB leafsize.
Is there a way to change it without recreating the whole filesystem from
scratch?
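[There is no in-place conversion; the leaf/node size is fixed at mkfs time,
so changing it means recreating the filesystem, e.g. (device assumed; with
current btrfs-progs, -n sets the node/leaf size):

  mkfs.btrfs -n 16384 /dev/sdX
]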
y...@wp.pl y...@wp.pl wrote:
I recently noticed that my boot has become slower - it took around 29s,
while at the beginning it was ~6s. I thought it was an issue with systemd,
because it failed to properly indicate at which stage the slowdown
occurred and how long it took. I rolled back to
Jérôme Poulin jeromepou...@gmail.com wrote:
If I close all my desktop programs, my system stays at 12+ GB RAM usage
while after a fresh boot it has 12+ GB free (of 16 GB). Cache stays low
at 2 GB while after a few minutes uptime cache is about 5 GB.
I probably have the same problem over
Tomasz Chmielewski t...@virtall.com wrote:
I probably have the same problem over here, after about 2 weeks of
random read/write it seems my memory and swap get almost full and even
after killing all processes and getting into single user mode, memory
won't free up. Would you happen to have
Hello!
Is this output in slabtop normal after a few days uptime with a daily rsync
(btrfs to btrfs) and some compiling (gentoo emerge):
Active / Total Objects (% used): 1120836 / 1810907 (61,9%)
Active / Total Slabs (% used) : 42492 / 42492 (100,0%)
Active / Total Caches (% used)
Sandy McArthur sandy...@gmail.com wrote:
I have a 4 disk RAID1 setup that fails to {mount,btrfsck} when disk 4
is connected.
With disk 4 attached btrfsck errors with:
btrfsck: root-tree.c:46: btrfs_find_last_root: Assertion
`!(path->slots[0] == 0)' failed
(I'd have to reboot in a
Duncan 1i5t5.dun...@cox.net wrote:
It is a RAID-1 so why bother with the faulty drive? Just wipe it, put it
back in, then run a btrfs balance... There should be no data loss
because all data is stored twice (two-way mirroring).
The caveat would be if it didn't start as btrfs raid1, and
Martin m_bt...@ml1.co.uk wrote:
Which is 'best' or 'faster'?
Take a snapshot of an existing backup and then rsync --delete into
that to make a backup of some other filesystem?
Or use rsync --link-dest to link a new backup tree against a previous
backup tree for some other filesystem?
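[Sketches of the two variants, with assumed paths:

  # snapshot the previous backup, then rsync into the snapshot
  btrfs subvolume snapshot /backup/last /backup/today
  rsync -a --delete /source/ /backup/today/

  # or hard-link against the previous tree via --link-dest
  rsync -a --delete --link-dest=/backup/last /source/ /backup/today/
]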
Mike Audia mike...@gmx.com wrote:
I believe 30 sec is the default for the checkpoint interval. Is this
adjustable?
Torbjørn li...@skagestad.org wrote:
Just curious: What would be the benefit of increasing the checkpoint
interval?
Laptops typically spin down disks to save power. If btrfs forces a write
every 30 seconds, you have to spin it back up.
I'd expect btrfs not to write to the disk when a
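[Later kernels expose this knob as the commit= mount option (added around
3.12), e.g.:

  mount -o remount,commit=300 /
]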
George Mitchell geo...@chinilu.com wrote:
I am seeing a huge improvement in boot performance since doing a system
wide file-by-file defragmentation of metadata. In fact, in the four
sequential boots since completing this process, I have not seen one
open_ctree failure so far. This leads
Hey list!
Here's another backtrace observed while deduplicating my snapshotted btrfs
backup volume...
https://gist.github.com/kakra/26b377cccfc66ab870e4
[58198.314804] btrfs: error -5 while searching for dev_stats item for device
/dev/sdd1!
[58198.314807] [ cut here ]
[58198.311938] zcache: destroyed local pool id=2
[58198.314804] btrfs: error -5 while searching for dev_stats item for device
/dev/sdd1!
Kai Krakow hurikhan77+bt...@gmail.com wrote:
Hey list!
Here's another backtrace observed while deduplicating my snapshotted btrfs
backup volume...
https
Hi!
I think such a solution as part of the filesystem could do much better than
something outside of it (like bcache). But I'm not sure: What makes data
hot? I think the most benefit is in detecting random read access and marking
only that data as hot; also, writes should go to the SSD first and
Jan Schmidt list.bt...@jan-o-sch.net wrote:
Apparently, it's not fixed. The system does not freeze now but it threw
multiple backtraces right in front of my Xorg session. The backtraces
look a little bit different now. Here's what I got:
https://gist.github.com/kakra/8a340f006d01e146865d
Jan Schmidt list.bt...@jan-o-sch.net wrote:
We can try to debug that further, you can send me / upload the output of
btrfs-image -c9 /dev/whatever blah.img
built from Josef's repository
git://github.com/josefbacik/btrfs-progs.git
It contains all your metadata (like file
Helmut Hullen hul...@t-online.de wrote:
If I want to manage a complete disk with btrfs, what's the Best
Practice? Would it be best to create the btrfs filesystem on
/dev/sdb, or would it be better to create just one partition from
start to end and then do mkfs.btrfs /dev/sdb1?
Would the
Jan Schmidt list.bt...@jan-o-sch.net wrote:
I can reliably reproduce it from two different approaches. I'd like to
only apply the commits fixing it. Can you name them here?
In git log order:
6ced2666 Btrfs: separate sequence numbers for delayed ref tracking and
tree mod log
ef9120b1
Gabriel de Perthuis g2p.c...@gmail.com wrote:
How will it compare to bcache? I'm currently thinking about buying an
SSD but bcache requires some effort in migrating the storage to use it.
And after all those hassles I am not even sure if it would work easily
with a dracut-generated initramfs.
Gabriel de Perthuis g2p.c...@gmail.com wrote:
It sounds simple, and was sort-of prompted by the new syscall taking
short ranges, but it is tricky figuring out a sane heuristic (when to
hash, when to bail, when to submit without comparing, what should be the
source in the last case), and
Kai Krakow hurikhan77+bt...@gmail.com wrote:
Gabriel de Perthuis g2p.c...@gmail.com wrote:
It sounds simple, and was sort-of prompted by the new syscall taking
short ranges, but it is tricky figuring out a sane heuristic (when to
hash, when to bail, when to submit without comparing
Kai Krakow hurikhan77+bt...@gmail.com wrote:
I can reliably reproduce it from two different approaches. I'd like to
only apply the commits fixing it. Can you name them here?
In git log order:
6ced2666 Btrfs: separate sequence numbers for delayed ref tracking and
tree mod log
ef9120b1
Jan Schmidt list.bt...@jan-o-sch.net wrote:
I'm using a bash/rsync script[1] to back up my whole system on a nightly
basis to an attached USB3 drive into a scratch area, then take a snapshot
of this area. I'd like to have these snapshots immutable, so they should
be read-only.
Have you
Josef Bacik jba...@fusionio.com wrote:
I've upgraded to 3.9.0 mainly for the snapshot-aware defragging patches.
I'm running bedup[1] on a regular basis and it is now the third time that
I got back to my PC just to find it hard-frozen and I needed to use the
reset button.
It looks like
Jan Schmidt list.bt...@jan-o-sch.net wrote:
That one should be fixed in btrfs-next. If you can reliably reproduce the
bug I'd be glad to get a confirmation - you can probably even save putting
it on bugzilla then ;-)
I can reliably reproduce it from two different approaches. I'd like to
zwu.ker...@gmail.com zwu.ker...@gmail.com wrote:
The patchset is trying to introduce hot relocation support
for BTRFS. In a hybrid storage environment, when the data on the
HDD gets hot, it can be relocated to the SSD by BTRFS
hot relocation support automatically; also, if the SSD disk ratio
james northrup northrup.ja...@gmail.com wrote:
tried a git-based backup? sounds spot-on as a compromise prior to
applying btrfs tweaks. snapshotting the git binaries would have the
dedupe characteristics.
Git is efficient with space, yes. But if you have a lot of binary files, and
a lot
Hey list,
Although this is about the preload daemon, my intended audience for this
matter is the btrfs community. So I'm posting this here.
I've created a small script here[1] to read the preload daemon state file
and use this to run the btrfs defragmenter/compressor on these files. The
idea
Hey list,
I wonder if it is possible to deduplicate read-only snapshots.
Background:
I'm using a bash/rsync script[1] to back up my whole system on a nightly
basis to an attached USB3 drive into a scratch area, then take a snapshot of
this area. I'd like to have these snapshots immutable, so
Hello list,
I've upgraded to 3.9.0 mainly for the snapshot-aware defragging patches. I'm
running bedup[1] on a regular basis and it is now the third time that I got
back to my PC just to find it hard-frozen and I needed to use the reset
button.
It looks like this happens only while running
Alexander Skwar alexanders.mailinglists+nos...@gmail.com wrote:
Where I'm hanging right now is that I can't seem to figure out a
bulletproof way to find all the subvolumes of the filesystems I
might have.
What about this:
# btrfs sub list -a /
ID 256 gen 1487089 top level 5 path
Alexander Skwar alexanders.mailinglists+nos...@gmail.com wrote:
So I guess I'd still need to mount the root volume temporarily
somewhere to do the translation.
That brings to mind how bedup seems to handle this. Maybe you want to
take one or the other idea from there, as it also has to
Hello list,
Kai Krakow hurikhan77+bt...@gmail.com wrote:
I've upgraded to 3.9.0 mainly for the snapshot-aware defragging patches.
I'm running bedup[1] on a regular basis and it is now the third time that
I got back to my PC just to find it hard-frozen and I needed to use the
reset button
Tomasz Torcz to...@pipebreaker.pl wrote:
Although this is about the preload daemon, my intended audience for this
matter is the btrfs community. So I'm posting this here.
I've created a small script here[1] to read the preload daemon state file
and use this to run the btrfs
Gabriel de Perthuis g2p.c...@gmail.com wrote:
There's no deep reason read-only snapshots should keep their storage
immutable; they can be affected by RAID rebalancing, for example.
Sounds logical, and good...
The current bedup restriction comes from the clone call; Mark Fasheh's
dedup
Alexander Skwar alexanders.mailinglists+nos...@gmail.com wrote:
FWIW, I've also written a script which creates and manages
(ie. deletes old) snapshots.
It figures out all the available filesystems and creates snaps
for all the available (sub)volumes.
It's also on
George Mitchell geo...@chinilu.com wrote:
1) The system fails to boot intermittently due to dracut/initrd issues
(btrfs: open_ctree failed). This is being worked on upstream and I am
seeing a continual flow of patches addressing it, but so far no fix.
This will take time to fix and it
Hi!
Why don't you just use du -B 4096 -sh /path/to/fs vs.
du -B 16384 -sh ...?
Subtracting the two results gives the overhead of the one vs. the other.
But to answer your request for the formula, it's:
blocks = (long)((file_size + block_size - 1) / block_size)
occupied_size = blocks * block_size
But
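[A quick worked example of the formula: a 5000-byte file with
block_size=4096 gives blocks = (5000 + 4095) / 4096 = 2, so occupied_size =
8192 bytes; with block_size=16384 it gives blocks = 1 and occupied_size =
16384 bytes.]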
Hi!
This is what happened while rsyncing my system disk to my btrfs backup
device after I enabled space caching for the latter (the first time using it
after 3.3.1; the last time I synced it was with 3.2.x):
# mount options
# LABEL=usb-backup /mnt/usb-backup btrfs \
#
Hello!
Is there any documentation about btrfs mount flags wrt:
1. which flags are one-time options and remain permanent,
2. which flags are global per btrfs partition,
3. which flags are local per subvolume mount?
I'm asking because while googling I found very confusing info about
autodefrag.
Just in case it is interesting, here's the blocked state (note that I
currently have other fs actions running on my btrfs root fs, copying a lot
of files from a remote server):
Kai Krakow hurikhan77+bt...@gmail.com wrote:
This is what happened while rsyncing my system disk to my btrfs backup
Mitch Harder mitch.har...@sabayonlinux.org wrote:
On Sat, Feb 4, 2012 at 5:40 AM, Kai Krakow hurikhan77+bt...@gmail.com
wrote:
It's actually the case for me that rsync writes to the device using mount
options compress-force=zlib and that rsync probably truncates files
sometimes when using
Fahrzin Hemmati fahh...@gmail.com wrote:
I recently re-installed Ubuntu, and somewhere along the way the
installer decided to clear out /var, which happens to be a separate
btrfs device from /. When I do btrfs filesystem df /var it outputs this:
Data: total=134.01GB, used=485.78
System,