Helmut Hullen hul...@t-online.de wrote:
If I want to manage a complete disk with btrfs, what's the Best
Practice? Would it be best to create the btrfs filesystem on
/dev/sdb, or would it be better to create just one partition from
start to end and then do mkfs.btrfs /dev/sdb1?
Would the
Jan Schmidt list.bt...@jan-o-sch.net wrote:
We can try to debug that further, you can send me / upload the output of
btrfs-image -c9 /dev/whatever blah.img
built from Josef's repository
git://github.com/josefbacik/btrfs-progs.git
It contains all your metadata (like file
Hi!
I think such a solution as part of the filesystem could do much better than
something outside of it (like bcache). But I'm not sure: What makes data
hot? I think the most benefit is detecting random read access and marking only
those data as hot; also writes should go to the SSD first and
Jan Schmidt list.bt...@jan-o-sch.net wrote:
Apparently, it's not fixed. The system does not freeze now but it threw
multiple backtraces right in front of my Xorg session. The backtraces
look a little bit different now. Here's what I got:
https://gist.github.com/kakra/8a340f006d01e146865d
Hey list!
Here's another backtrace observed while deduplicating my snapshotted btrfs
backup volume...
https://gist.github.com/kakra/26b377cccfc66ab870e4
[58198.311938] zcache: destroyed local pool id=2
[58198.314804] btrfs: error -5 while searching for dev_stats item for device
/dev/sdd1!
[58198.314807] ------------[ cut here ]------------
Kai Krakow hurikhan77+bt...@gmail.com wrote:
Hey list!
Here's another backtrace observed while deduplicating my snapshotted btrfs
backup volume...
https
George Mitchell geo...@chinilu.com wrote:
I am seeing a huge improvement in boot performance since doing a
system-wide, file-by-file defragmentation of metadata. In fact, in the four
sequential boots since completing this process, I have not seen one
open_ctree failure so far. This leads
Martin m_bt...@ml1.co.uk wrote:
Which is 'best' or 'faster'?
Take a snapshot of an existing backup and then rsync --delete into
that to make a backup of some other filesystem?
Or use rsync --link-dest to link a new backup tree against a previous
backup tree for some other filesystem?
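The two approaches being compared could be sketched roughly like this (all paths and the rsync flags are my own assumptions, not from the post):

```shell
#!/bin/sh
# Variant 1: snapshot the previous backup, then rsync --delete into it.
# Variant 2: rsync --link-dest against the previous backup tree.
# Paths and rsync options are illustrative assumptions.
BACKUP=/mnt/backup
SRC=/srv/otherfs
DATE=$(date +%Y-%m-%d)

variant_snapshot() {
    # keep history as a btrfs snapshot, then update the live copy in place
    btrfs subvolume snapshot "$BACKUP/current" "$BACKUP/snap-$DATE"
    rsync -aHAX --delete "$SRC/" "$BACKUP/current/"
}

variant_linkdest() {
    # build a fresh tree, hardlinking unchanged files to the previous run
    rsync -aHAX --delete --link-dest="$BACKUP/last" \
        "$SRC/" "$BACKUP/$DATE/"
    ln -sfn "$BACKUP/$DATE" "$BACKUP/last"
}
```

On btrfs the snapshot variant is usually cheaper: a snapshot is a constant-time operation, while --link-dest has to create one hardlink per unchanged file.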
Mike Audia mike...@gmx.com wrote:
I believe 30 sec is the default for the checkpoint interval. Is this
adjustable? --
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at
Torbjørn li...@skagestad.org wrote:
Just curious: What would be the benefit of increasing the checkpoint
interval?
Laptops typically spin down disks to save power. If btrfs forces a write
every 30 seconds, you have to spin it back up.
I'd expect btrfs not to write to the disk when a
Sandy McArthur sandy...@gmail.com wrote:
I have a 4 disk RAID1 setup that fails to {mount,btrfsck} when disk 4
is connected.
With disk 4 attached btrfsck errors with:
btrfsck: root-tree.c:46: btrfs_find_last_root: Assertion
`!(path->slots[0] == 0)' failed
(I'd have to reboot in a
Duncan 1i5t5.dun...@cox.net wrote:
It is a RAID-1 so why bother with the faulty drive? Just wipe it, put it
back in, then run a btrfs balance... There should be no data loss
because all data is stored twice (two-way mirroring).
The caveat would be if it didn't start as btrfs raid1, and
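One possible sequence for the suggested recovery, assuming the array is still mountable and the faulty member is /dev/sdd1 (both assumptions):

```shell
#!/bin/sh
# Sketch only: drop the faulty member, wipe it, re-add it, then balance
# so every block group has two copies again. Device/mount are assumptions.
FAULTY=/dev/sdd1
MNT=/mnt/array

recover_raid1() {
    btrfs device delete "$FAULTY" "$MNT"   # remove the bad member
    wipefs -a "$FAULTY"                    # clear stale btrfs signatures
    btrfs device add "$FAULTY" "$MNT"      # re-add as a fresh device
    btrfs balance start "$MNT"             # restore full redundancy
}
```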
Hi!
Is it technically possible to wait for a snapshot completely purged from
disk? I imagine an option like --wait for btrfs delete subvolume.
This would fit some purposes I'm planning to implement:
* In a backup scenario have a subprocess which deletes snapshots one by one,
starting with
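At the time there was no such --wait option; one workaround would be to poll the kernel's queue of not-yet-cleaned subvolumes (mount point assumed; newer btrfs-progs later grew "btrfs subvolume sync" for exactly this):

```shell
#!/bin/sh
# Workaround sketch: 'btrfs subvolume list -d' shows subvolumes that are
# deleted but not yet cleaned by the background cleaner; wait until the
# list is empty. The mount point is an assumption.
MNT=/mnt/backup

wait_for_cleaner() {
    while [ -n "$(btrfs subvolume list -d "$MNT")" ]; do
        sleep 5
    done
}
```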
Garry T. Williams gtwilli...@gmail.com wrote:
On 2-13-14 20:02:43 Kai Krakow wrote:
Is it technically possible to wait for a snapshot completely purged
from disk? I imagine an option like --wait for btrfs delete
subvolume.
This may be what you're looking for:
http
Holger Hoffstätte holger.hoffstae...@googlemail.com wrote:
On Thu, 13 Feb 2014 20:02:43 +0100, Kai Krakow wrote:
Is it technically possible to wait for a snapshot completely purged from
disk? I imagine an option like --wait for btrfs delete subvolume.
That would indeed be sweet (see
Brendan Hide bren...@swiftspirit.co.za wrote:
Is it technically possible to wait for a snapshot completely purged from
disk? I imagine an option like --wait for btrfs delete subvolume.
This would fit some purposes I'm planning to implement:
* In a backup scenario
I have a similar
Chris Murphy li...@colorremedies.com wrote:
Snapshotting, deleting a bunch of directories in that snapshot, then
backing up the snapshot, then deleting the snapshot will work, but it
sounds more involved. If you're scripting it, though, it probably doesn't
matter either way.
Will it work as
Kai Krakow hurikhan77+bt...@gmail.com wrote:
Most of these directories aren't
changing anyways most of the time and thus won't occupy disk space only
once in the backup.
Of course 'won't' should've read 'would'... ;-)
--
Replies to list only preferred.
Duncan 1i5t5.dun...@cox.net wrote:
Duncan had a nice example on this list of how to migrate
directories to subvolumes by using shallow copies: mv dir dir.old;
btrfs sub create dir; cp -a --reflink=always dir.old/. dir/.;
rm -Rf dir.old.
FWIW, that was someone else. I remember seeing it
FWIW, that was someone else. I remember seeing it
GEO 1g2e...@gmail.com wrote:
@Kai Krakow: I accept your opinion and thank you for your answer.
However I have special reasons doing so. I could name you a few use cases.
For example I do not need to backup search indexes as they mess up over
time, so I simply recreate the cache in case
GEO 1g2e...@gmail.com wrote:
First of all, I am sorry that I screwed up the whole structure of the
discussion (I have not subscribed to the mailing list, and as Kai replied
to the mailing list only, I could not reply to his answer.)
Umm... Try an NNTP gateway like gmane to follow the list in
Пламен Петров pla...@petrovi.no-ip.info wrote:
I'm going with the module suggestion from Marc, too.
/dev/sda2 / btrfs relatime,compress=zlib 0 0
This line looks kinda useless to me. The compress=zlib option won't be
applied at boot and cannot be changed at
Duncan 1i5t5.dun...@cox.net wrote:
As they say, Whoosh!
At least here, I interpreted that remark as primarily sarcastic
commentary on the systemd devs' apparent attitude, which can be
(controversially) summarized as: Systemd doesn't have problems because
it's perfect. Therefore, any
Duncan 1i5t5.dun...@cox.net wrote:
Back to the extents counts: What I did next was implementing a defrag
job that regularly defrags the journal (actually, the complete log
directory as other log files suffer the same problem):
$ cat /usr/local/sbin/defrag-logs.rb
#!/bin/sh
exec btrfs
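Filled out, such a job could be as small as this (the compression choice and the log path are assumptions, since the original script is truncated here):

```shell
#!/bin/sh
# Defragment (and recompress) the log directory recursively so journal
# files do not accumulate thousands of tiny extents.
defrag_logs() {
    btrfs filesystem defragment -v -czlib -r /var/log
}
```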
Hello!
I tried to cp --reflink a huge file (about 80G, a VMware disk
image). It took maybe about 1 minute when my PC started thrashing the
hard disk, some minutes later the command returned with an out of
memory message. I could no longer open terminals in my KDE Konsole to
investigate dmesg. I
Hello again!
2011/10/8 Kai Krakow hurikhan77+bt...@gmail.com:
I tried to cp --reflink a huge file (about 80G, a VMware disk
image). It took maybe about 1 minute when my PC started thrashing the
hard disk, some minutes later the command returned with an out of
memory message.
[...]
So I'd
David Sterba wrote:
Then I could mount the /home subvolume.
I also found the corrupted file
? -? ? ? ??? 13.4.4.40.js
Chromium cache? Somebody recently reported a problem there. I wonder
what this browser does to the filesystem ... :)
If you meant me by
Hello list!
I'm trying to rm some files, this is what I get in dmesg:
[30975.249519] ------------[ cut here ]------------
[30975.249529] WARNING: at fs/btrfs/extent-tree.c:4588
__btrfs_free_extent+0x3b7/0x7ed()
[30975.249532] Hardware name:
[30975.249535] Modules linked in: af_packet lm90
Hello!
2011/10/26 dima dole...@parallels.com:
I'm trying to rm some files, this is what I get in dmesg:
[snip]
Can you ls the directory where the problem files are located? What would
be the output? I had a very similar problem but on 3.0.x kernel when several
files suddenly got corrupted.
Hello btrfs!
Recently I upgraded to 3.2.0-rc4 due to instabilities with my btrfs
filesystem in 3.1.1. While with 3.1.1 my system completely froze, with
3.2.0-rc4 it stays at least somehow usable (for some strange reason my Xorg
screen turns black as soon as this happens, only ssh is working
Hello!
2011/12/8 Jan Schmidt list.bt...@jan-o-sch.net:
On 07.12.2011 21:40, Kai Krakow wrote:
[...]
The problematic file seems to be in /usr/portage but scrubbing doesn't tell
me the filename (I was under the impression 3.2.x adds a patch which should
report filenames).
It should. Did you
Hello,
I managed to mount my broken btrfs partition in read-only mode and clone my
rootfs subvolume to an ext4 partition and boot from that - so I now have the
original system bootable.
Jan Schmidt wrote:
On 07.12.2011 21:40, Kai Krakow wrote:
[...]
The problematic file seems to be in /usr
Hello btrfs!
As already posted in another thread my btrfs oopsed when I tried to delete a
subvolume which probably had an error. I've just upgraded to 3.2-rc5 and now
it oopses on unmount.
Here's what I get on unmount:
[ 89.907762] zcache: destroyed pool id=2, cli_id=65535
[ 89.908762]
Hello btrfs...
I tried to delete a subvolume which probably has some transid errors. After
this, the subvolume is gone but I cannot reboot - it hangs. After reisub,
the deleted subvolume is right back there (this is different from previous
kernel version before 3.2.0-rc4 (afair) where the
As long as you create your data and metadata with a mirror policy, you can
use btrfs scrubbing to find and correct broken data blocks. I think the latest
kernels also do this repairing online.
It works by finding a mirrored block with correct checksum if the block in
question has a bad checksum.
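Triggering that repair by hand is just a scrub run (the mount point is an example):

```shell
#!/bin/sh
# Foreground scrub: verifies every checksum and, on mirrored profiles,
# rewrites bad blocks from the good copy. Mount point is an assumption.
MNT=/mnt/data

run_scrub() {
    btrfs scrub start -B -d "$MNT"   # -B: wait until done, -d: per-device stats
    btrfs scrub status "$MNT"        # show totals and corrected errors
}
```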
Hi!
Michael Andreen h...@ruin.nu wrote:
The find-root program seems to think there is a root (and potentially some
older roots?), but not sure how to use that information.
[...]
Anything else I can do to debug this or potentially recover a few bits
before reformatting?
You could try the
Hello!
bt...@spiritvideo.com bt...@spiritvideo.com wrote:
The plan that occurs to me is to make a snapshot of the system in the
state that I want to always boot. Then, I would rewrite the init
script in the initrd to (a) delete any old tmp copy of the snapshot;
(b) copy the static
Just happened while writing a huge avi file to my usb3 backup disk:
[356036.596292] ------------[ cut here ]------------
[356036.596300] kernel BUG at fs/btrfs/inode.c:1588!
[356036.596304] invalid opcode: [#1] SMP
[356036.596307] CPU 2
[356036.596309] Modules linked in: vmnet(O)
/ \
--exclude /media/ \
--exclude /mnt/ \
/ ${BASEDIR}/current/
btrfs subvolume snapshot \
${BASEDIR}/current \
${BASEDIR}/snapshots/system-${DATE}
btrfs filesystem sync ${BASEDIR}
)
umount /mnt/usb-backup
)
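Reassembled, the fragment above suggests a flow like the following (the rsync flags are my assumption; the paths follow the fragment):

```shell
#!/bin/sh
# Hypothetical reconstruction of the backup flow in the fragment:
# rsync into a scratch subvolume, snapshot it, sync, unmount.
BASEDIR=/mnt/usb-backup
DATE=$(date +%Y-%m-%d)

backup() {
    rsync -aHAX \
        --exclude /media/ \
        --exclude /mnt/ \
        / "${BASEDIR}/current/"
    btrfs subvolume snapshot \
        "${BASEDIR}/current" \
        "${BASEDIR}/snapshots/system-${DATE}"
    btrfs filesystem sync "${BASEDIR}"
    umount "${BASEDIR}"
}
```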
Kai Krakow hurikhan77+bt...@gmail.com wrote:
Mitch Harder mitch.har...@sabayonlinux.org wrote:
On Sat, Feb 4, 2012 at 5:40 AM, Kai Krakow hurikhan77+bt...@gmail.com
wrote:
It's actually the case for me that rsync writes to the device using mount
options compress-force=zlib and that rsync probably truncates files
sometimes when using
Fahrzin Hemmati fahh...@gmail.com wrote:
I recently re-installed Ubuntu, and somewhere along the way the
installer decided to clear out /var, which happens to be a separate
btrfs device from /. When I do btrfs filesystem df /var it outputs this:
Data: total=134.01GB, used=485.78
System,
Hi!
This is what happened while rsyncing my system disk to my btrfs backup
device after I enabled space caching for the latter (and first time using it
after 3.3.1, last time I sync'ed it was with 3.2.x):
# mount options
# LABEL=usb-backup /mnt/usb-backup btrfs \
#
Hello!
Is there any documentation about btrfs mount flags wrt:
1. which flags are one-time options and are permanent,
2. which flags are global per btrfs partition,
3. which flags are local per subvolume mount?
I'm asking because while googling I found very confusing info about
autodefrag.
Just in case it is interesting, here's the blocked state (take note I
currently have other fs actions running on my btrfs root fs copying a lot of
files from a remote server):
Kai Krakow hurikhan77+bt...@gmail.com wrote:
This is what happened while rsyncing my system disk to my btrfs backup
Hi!
Why don't you just use du -B 4096 -sh /path/to/fs vs.
du -B 16384 -sh ...?
Subtracting both results is the overhead of the one vs. the other.
But to answer your request for the formula, it's:
blocks = (long)((file_size + block_size - 1) / block_size)
occupied_size = blocks * block_size
But
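The formula translates directly into shell arithmetic:

```shell
#!/bin/sh
# Round a file size up to whole blocks, exactly as in the formula above.
occupied_size() {
    # $1 = file size in bytes, $2 = block size in bytes
    blocks=$(( ($1 + $2 - 1) / $2 ))
    echo $(( blocks * $2 ))
}

occupied_size 5000 4096    # prints 8192: a 5000-byte file needs two blocks
occupied_size 5000 16384   # prints 16384: one larger block suffices
```

Running it for both block sizes and subtracting the two results gives the per-file overhead of the larger leafsize.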
George Mitchell geo...@chinilu.com wrote:
1) The system fails to boot intermittently due to dracut/initrd issues
(btrfs: open_ctree failed). This is being worked on upstream and I am
seeing a continual flow of patches addressing it, but so far no fix.
This will take time to fix and it
Hey list,
Although this is about the preload daemon, my intended audience for this
matter is the btrfs community. So I'm posting this here.
I've created a small script here[1] to read the preload daemon state file
and use this to run the btrfs defragmenter/compressor on these files. The
idea
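The idea boils down to something like this (the state file location and its format are assumptions; the real script linked above parses it properly):

```shell
#!/bin/sh
# Sketch: feed the files preload knows about into the btrfs
# defragmenter/compressor. State file path and format are assumptions.
STATE=/var/lib/preload/preload.state

defrag_preloaded() {
    # pull absolute paths out of the state file, defragment each one
    grep -o '/[^ ]*' "$STATE" | while read -r f; do
        [ -f "$f" ] && btrfs filesystem defragment -czlib "$f"
    done
}
```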
Hey list,
I wonder if it is possible to deduplicate read-only snapshots.
Background:
I'm using a bash/rsync script[1] to backup my whole system on a nightly
basis to an attached USB3 drive into a scratch area, then take a snapshot of
this area. I'd like to have these snapshots immutable, so
Hello list,
I've upgraded to 3.9.0 mainly for the snapshot-aware defragging patches. I'm
running bedup[1] on a regular basis and it is now the third time that I got
back to my PC just to find it hard-frozen and I needed to use the reset
button.
It looks like this happens only while running
Alexander Skwar alexanders.mailinglists+nos...@gmail.com wrote:
Where I'm hanging right now, is that I can't seem to figure out a
bullet proof way to find all the subvolumes of the filesystems I
might have.
What about this:
# btrfs sub list -a /
ID 256 gen 1487089 top level 5 path
Alexander Skwar alexanders.mailinglists+nos...@gmail.com wrote:
So I guess I'd still need to mount the root volume temporarily
somewhere to do the translation.
That brings in the idea how bedup seems to handle this. Maybe you want to
take one or the other idea from there as it also has to
Hello list,
Kai Krakow hurikhan77+bt...@gmail.com wrote:
I've upgraded to 3.9.0 mainly for the snapshot-aware defragging patches.
I'm running bedup[1] on a regular basis and it is now the third time that
I got back to my PC just to find it hard-frozen and I needed to use the
reset button
Tomasz Torcz to...@pipebreaker.pl wrote:
Although this is about the preload daemon, my intended audience for this
matter is the btrfs community. So I'm posting this here.
I've created a small script here[1] to read the preload daemon state file
and use this to run the btrfs
Gabriel de Perthuis g2p.c...@gmail.com wrote:
There's no deep reason read-only snapshots should keep their storage
immutable, they can be affected by raid rebalancing for example.
Sounds logical, and good...
The current bedup restriction comes from the clone call; Mark Fasheh's
dedup
Alexander Skwar alexanders.mailinglists+nos...@gmail.com wrote:
FWIW, I've also written a script which creates and manages
(ie. deletes old) snapshots.
It figures out all the available filesystems and creates snaps
for all the available (sub)volumes.
It's also on
Jan Schmidt list.bt...@jan-o-sch.net wrote:
I'm using a bash/rsync script[1] to backup my whole system on a nightly
basis to an attached USB3 drive into a scratch area, then take a snapshot
of this area. I'd like to have these snapshots immutable, so they should
be read-only.
Have you
Josef Bacik jba...@fusionio.com wrote:
I've upgraded to 3.9.0 mainly for the snapshot-aware defragging patches.
I'm running bedup[1] on a regular basis and it is now the third time that
I got back to my PC just to find it hard-frozen and I needed to use the
reset button.
It looks like
Jan Schmidt list.bt...@jan-o-sch.net wrote:
That one should be fixed in btrfs-next. If you can reliably reproduce the
bug I'd be glad to get a confirmation - you can probably even save putting
it on bugzilla then ;-)
I can reliably reproduce it from two different approaches. I'd like to
zwu.ker...@gmail.com zwu.ker...@gmail.com wrote:
The patchset is trying to introduce hot relocation support
for BTRFS. In hybrid storage environment, when the data in
HDD disk get hot, it can be relocated to SSD disk by BTRFS
hot relocation support automatically; also, if SSD disk ratio
james northrup northrup.ja...@gmail.com wrote:
tried a git based backup? sounds spot-on as a compromise prior to
applying btrfs tweaks. snapshotting the git binaries would have the
dedupe characteristics.
Git is efficient with space, yes. But if you have a lot of binary files, and
a lot
Jan Schmidt list.bt...@jan-o-sch.net wrote:
I can reliably reproduce it from two different approaches. I'd like to
only apply the commits fixing it. Can you name them here?
In git log order:
6ced2666 Btrfs: separate sequence numbers for delayed ref tracking and
tree mod log ef9120b1
Gabriel de Perthuis g2p.c...@gmail.com wrote:
How will it compare to bcache? I'm currently thinking about buying an
SSD but bcache requires some efforts in migrating the storage to use.
And after all those hassles I am even not sure if it would work easily
with a dracut generated initramfs.
Gabriel de Perthuis g2p.c...@gmail.com wrote:
It sounds simple, and was sort-of prompted by the new syscall taking
short ranges, but it is tricky figuring out a sane heuristic (when to
hash, when to bail, when to submit without comparing, what should be the
source in the last case), and
Kai Krakow hurikhan77+bt...@gmail.com wrote:
Gabriel de Perthuis g2p.c...@gmail.com wrote:
It sounds simple, and was sort-of prompted by the new syscall taking
short ranges, but it is tricky figuring out a sane heuristic (when to
hash, when to bail, when to submit without comparing
Kai Krakow hurikhan77+bt...@gmail.com wrote:
I can reliably reproduce it from two different approaches. I'd like to
only apply the commits fixing it. Can you name them here?
In git log order:
6ced2666 Btrfs: separate sequence numbers for delayed ref tracking and
tree mod log ef9120b1
Hello!
Is this output in slabtop normal after a few days uptime with a daily rsync
(btrfs to btrfs) and some compiling (gentoo emerge):
Active / Total Objects (% used): 1120836 / 1810907 (61,9%)
Active / Total Slabs (% used) : 42492 / 42492 (100,0%)
Active / Total Caches (% used)
Jérôme Poulin jeromepou...@gmail.com wrote:
If I close all my desktop programs, my system stays at +12 GB RAM usage
while after a fresh boot it has 12+ GB free (of 16 GB). Cache stays low
at 2 GB while after a few minutes uptime cache is about 5 GB.
I probably have the same problem over
Tomasz Chmielewski t...@virtall.com wrote:
I probably have the same problem over here, after about 2 weeks of
random read/write it seems my memory and swap get almost full and even
after killing all process and getting in single user mode, memory
won't free up. Would you happen to have
y...@wp.pl y...@wp.pl wrote:
I recently noticed that my boot has become slower - it took around 29s,
while at the beginning it was ~6s. I thought it was an issue with systemd,
because it failed to properly indicate at which stage the slowdown
occurred and how long it took. I rolled back to
Hi!
What leafsize was used when making the file system? The default is now
(as of yesterday) 16KB to avoid metadata fragmentation.
Since my btrfs is about 2 years old I suppose I'm still using 4kB leafsize.
Is there a way to change it without recreating the whole filesystem from
scratch?
Hendrik Friedel hend...@friedels.name wrote:
I re-post this:
[...]
root 256 inode 9579 errors 100
root 256 inode 9580 errors 100
root 256 inode 14258 errors 100
root 256 inode 14259 errors 100
Duncan 1i5t5.dun...@cox.net wrote:
100 is I_ERR_FILE_EXTENT_DISCOUNT. I'm not sure what kind of problem
this indicates but btrfsck does not seem to fix this currently - it just
detects it.
Interesting...
I wish it were documented what it technically means and what implications
each
Hi!
Andrea Gelmini andrea.gelm...@gmail.com wrote:
and thanks a lot for your work.
I have a USB drive with BTRFS, on which I write with different
kernel release.
Anyway, today I made a copy of one big file, and then powered off
the computer with a clean shutdown (Ubuntu 13.10 -
Marc MERLIN m...@merlins.org wrote:
I'm one of those people who uses cp -al and rsync to do backups. Indeed
I should likely rework the flow to use subvolumes and snapshots.
You also mentioned reflinks, and it sounds like I can use
cp -a --reflink instead of cp -al.
Also, would the dedupe
Duncan 1i5t5.dun...@cox.net wrote:
But the above documentation should also suggest trying this to see if it
addresses that remaining single-mode system chunk stub:
btrfs balance start -f -sconvert=raid1 /home
Cool man, that fixed it for me. :-)
Regards,
Kai
Hello list!
I'm planning to buy a small SSD (around 60GB) and use it for bcache in front
of my 3x 1TB HDD btrfs setup (mraid1+draid0) using write-back caching. Btrfs
is my root device, thus the system must be able to boot from bcache using
init ramdisk. My /boot is a separate filesystem
Chris Murphy li...@colorremedies.com wrote:
I think most of these questions are better suited for the bcache list.
Ah yes, you are right. I will repost the non-btrfs related questions to the
bcache list. But actually I am most interested in using bcache together with
btrfs, so getting a general
Marc MERLIN m...@merlins.org wrote:
On Mon, Dec 30, 2013 at 02:22:55AM +0100, Kai Krakow wrote:
These thoughts are actually quite interesting. So you are saying that data
may not be fully written to SSD although the kernel thinks so? This is
That, and worse.
Incidently, I have just
Duncan 1i5t5.dun...@cox.net wrote:
[ spoiler: tldr ;-) ]
* How stable is it? I've read about some csum errors lately...
FWIW, both bcache and btrfs are new and still developing technology.
While I'm using btrfs here, I have tested usable (which for root means
either directly
Aastha Mehta aasth...@gmail.com wrote:
Rather than a local disk, I have a remote device to which my IO
requests are sent and from which the data is fetched. I need certain
data to be fetched from the remote device after a remount. But somehow
I do not see any request appearing at the
Martin Steigerwald mar...@lichtvoll.de wrote:
- btrfs dedup disable
Delete the dedup tree, after this we're not able to use dedup any more
unless you enable it again.
So if deduplication has been switched on for a while, btrfs dedup disable
will cause BTRFS to undo the deduplication (and
Tim Cuthbertson ratch...@gmail.com wrote:
I am a bit confused and I have probably managed to outsmart myself.
For about 15 months, I have been running my system on a single, large
btrfs volume. It is RAID-0 on two SATA-III HDD's for a total of 1.9
TB. This is a home system running Siduction
Duncan 1i5t5.dun...@cox.net wrote:
But because a full balance rewrites everything anyway, it'll effectively
defrag too.
Is that really true? I thought it just rewrites each distinct extent and
shuffles chunks around... This would mean it does not merge extents
together.
Regards,
Kai
Hello list!
What is the status of btrfs' self-healing capabilities?
On my backup btrfs device I am currently facing back-reference errors that
neither btrfs can deal with online, nor btrfsck is able to repair it (it
bails out with an assertion). I'm going to post this as a separate post.
So
Hello list!
My backup btrfs device fails to delete snapshots with the following
backtrace:
[87332.200212] WARNING: CPU: 0 PID: 5406 at fs/btrfs/extent-tree.c:5723
__btrfs_free_extent+0x692/0xb20()
[87332.200213] Modules linked in: rfcomm af_packet bnep vmnet(O) vmblock(O)
vsock vmmon(O)
Holger Hoffstaette holger.hoffstae...@googlemail.com wrote:
On Sun, 24 Nov 2013 22:45:59 -0500, Jim Salter wrote:
TL;DR scrub's ioprio argument isn't really helpful - a scrub murders
system performance til it's done.
My system:
3.11 kernel (from Ubuntu Saucy)
I don't run Ubuntu,
KC conrad.francois.ar...@googlemail.com wrote:
I was wondering about using options like autodefrag and
inode_cache on SSDs.
On one hand, one always hears that defragmentation of SSD is a no-no,
does that apply to BTRFS's autodefrag?
Also, just recently, I heard something similar
KC conrad.francois.ar...@googlemail.com wrote:
I followed your advice on NOCOW for virtualbox images and torrents like
so: chattr -v /home/juha/VirtualBox\ VMs/
chattr -RC /home/juha/Downloads/torrent/#unfinished
As you can see, I used the recursive flag. However, I do not know
whether
Martin Steigerwald mar...@lichtvoll.de wrote:
Okay, I have seen 260 MB/s. But frankly I am pretty sure that Virtuoso
isn't doing this kind of large scale I/O on a highly fragmented file. It's
a database. It's random access. My opinion is that Virtuoso couldn't care
less about the
Hi!
I'm curious... The whole snapshot thing on btrfs is based on its COW design.
But you can make individual files and directory contents nocow by applying
the C attribute on it using chattr. This is usually recommended for database
files and VM images. So far, so good...
But what happens to
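For reference, the usual way to apply the C attribute is on an empty directory so new files inherit it, since NOCOW only takes effect on empty files (paths are examples):

```shell
#!/bin/sh
# Set +C on a directory before creating databases or VM images in it;
# files created afterwards inherit NOCOW. Path is an example.
mark_nocow() {
    mkdir -p "$1"
    chattr +C "$1"    # new files created here inherit NOCOW
    lsattr -d "$1"    # verify: a 'C' shows up in the flags
}
```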
David Sterba dste...@suse.cz wrote:
On Tue, Feb 04, 2014 at 08:22:05PM -0500, Josef Bacik wrote:
On 02/04/2014 03:52 PM, Kai Krakow wrote:
Hi!
I'm curious... The whole snapshot thing on btrfs is based on its COW
design. But you can make individual files and directory contents nocow
Duncan 1i5t5.dun...@cox.net wrote:
Ah okay, that makes it clear. So, actually, in the snapshot the file is
still nocow - just for the exception that blocks being written to become
unshared and relocated. This may introduce a lot of fragmentation but it
won't become worse when rewriting the
Josef Bacik jba...@fb.com wrote:
On 02/05/2014 03:15 PM, Roman Mamedov wrote:
Hello,
On a freshly-created RAID1 filesystem of two 1TB disks:
# df -h /mnt/p2/
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 1.8T 1.1M 1.8T 1% /mnt/p2
I cannot write 2TB of user
Chris Murphy li...@colorremedies.com wrote:
If the database/virtual machine/whatever is crash safe, then the
atomic state that a snapshot grabs will be useful.
How fast is this state fixed on disk from the time of the snapshot
command? Loosely speaking. I'm curious if this is 1 second; a
Duncan 1i5t5.dun...@cox.net wrote:
The question here is: Does it really make sense to create such snapshots
of disk images currently online and running a system. They will probably
be broken anyway after rollback - or at least I'd not fully trust the
contents.
VM images should not be
Chris Murphy li...@colorremedies.com wrote:
On Feb 7, 2014, at 2:07 PM, Kai Krakow hurikhan77+bt...@gmail.com wrote:
Chris Murphy li...@colorremedies.com wrote:
If the database/virtual machine/whatever is crash safe, then the
atomic state that a snapshot grabs will be useful.
How
Duncan 1i5t5.dun...@cox.net wrote:
[...]
Difficult to wrap your mind around, but well explained. ;-)
A snapshot thus looks much like a crash in terms of NOCOW file integrity
since the blocks of a NOCOW file are simply snapshotted in-place, and
there's already no checksumming or file
Hugo Mills h...@carfax.org.uk wrote:
On Sat, Feb 08, 2014 at 05:33:10PM +0600, Roman Mamedov wrote:
On Fri, 07 Feb 2014 21:32:42 +0100
Kai Krakow hurikhan77+bt...@gmail.com wrote:
It should show the raw space available. Btrfs also supports compression
and doesn't try to be smart about
Chris Murphy li...@colorremedies.com wrote:
On Feb 6, 2014, at 11:08 PM, Roman Mamedov r...@romanrm.net wrote:
And what
if I am accessing that partition on a server via a network CIFS/NFS share
and don't even *have a way to find out* any of that.
That's the strongest argument. And