Hello,
> You have mentioned two issues when balance and fi show are running
> concurrently
My mail was a bit chaotic, but I get the stalls even on an idle system.
Today I got
parent transid verify failed on 1559973888000 wanted 1819 found 1821
parent transid verify failed on 1559973888000 wanted 18
On 10/22/2014 09:30 PM, Chris Murphy wrote:
Sure. So if Btrfs is meant to address scalability, then perhaps at the moment
it's falling short. As it's easy to add large drives and get very large
multiple-device volumes, snapshotting needs to scale too.
I'd say per user, it's reasonable to
On Oct 22, 2014, at 4:08 PM, Zygo Blaxell wrote:
>
> If you have one subvolume per user and 1000 user directories on a server,
> it's only 5 snapshots per user (last hour, last day, last week, last
> month, and last year).
Sure. So if Btrfs is meant to address scalability, then perhaps at the
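For concreteness, Zygo's scheme comes to 1000 users x 5 retained snapshots
= 5000 snapshots on the volume. A minimal sketch of one user's rotation,
assuming one subvolume per user under /home and an invented /snapshots
area (none of these paths are from the thread):

# btrfs subvolume delete /snapshots/alice/hourly
# btrfs subvolume snapshot -r /home/alice /snapshots/alice/hourly

Run the same delete/snapshot pair from cron for the daily, weekly, monthly
and yearly slots, and each user always holds exactly five read-only
snapshots.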
Robert White posted on Wed, 22 Oct 2014 12:41:10 -0700 as excerpted:
> So I've been considering some NOCOW files (for VM disk images), but some
> questions arose. Is there a "1COW" (copy on write only once) flag, or are
> the following operations dangerous or undefined?
>
> (1) The page https://bt
On Wed, Oct 22, 2014 at 01:37:15PM -0700, Robert White wrote:
> On 10/22/2014 01:08 PM, Zygo Blaxell wrote:
> >I have datasets where I record 14000+ snapshots of filesystem directory
> >trees scraped from test machines and aggregated onto a single server
> >for deduplication...but I store each snap
Chris Murphy posted on Wed, 22 Oct 2014 12:15:25 -0400 as excerpted:
> Granted I'm ignoring the fact there are 5000+ snapshots[.]
> In the short term, maybe even the medium term, it's "doctor, it hurts
> when I do this!" and the doctor says, "well then don't do that!"
LOL! Nicely said! =:^)
--
Duncan
Dave posted on Wed, 22 Oct 2014 08:49:46 -0400 as excerpted:
> On Tue, Oct 21, 2014 at 10:08 PM, Duncan <1i5t5.dun...@cox.net> wrote:
>> As for the mounted filesystem question, since all it does is flip a
>> switch so that new metadata writes use the skinny-metadata code path,
>> it shouldn't be a
When btrfs allocates a chunk, it tries to allocate up to 1G for data and
256M for metadata, or 10% of all the writeable space, whichever is smaller,
provided there is enough space for the stripe on the device.
However, when we run out of space, this policy may cause unbalanced
chunk allocation.
For example, there are only 1G u
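To make the cap concrete with an invented example (a 5 GiB writeable
device, not a figure from the patch):

# echo $(( 5 * 2**30 / 10 ))
536870912

i.e. 10% of 5 GiB is 512 MiB, which is below the 1G data default, so data
chunks on such a device would be allocated 512 MiB at a time.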
Steps to reproduce:
# mkfs.btrfs -f /dev/sdb7
# mount /dev/sdb7 /mnt
# btrfs dev stats /dev/sdb7
output:
[/dev/sdb7].write_io_errs 0
[/dev/sdb7].read_io_errs 0
[/dev/sdb7].flush_io_errs 0
[/dev/sdb7].corruption_errs 0
[/dev/sdb7].generation_errs 0
On Wed, Oct 22, 2014 at 09:07:31PM -0400, Dave Jones wrote:
> Just hit this while running trinity.
>
> WARNING: CPU: 3 PID: 9612 at fs/btrfs/extent-tree.c:3799
> btrfs_free_reserved_data_space+0x1d1/0x280 [btrfs]()
> Modules linked in: rfcomm hidp bnep af_key llc2 scsi_transport_iscsi
> nf
Just hit this while running trinity.
WARNING: CPU: 3 PID: 9612 at fs/btrfs/extent-tree.c:3799
btrfs_free_reserved_data_space+0x1d1/0x280 [btrfs]()
Modules linked in: rfcomm hidp bnep af_key llc2 scsi_transport_iscsi nfnetlink
sctp libcrc32c can_raw can_bcm nfc caif_socket caif af_802154 ieee8021
On 10/22/2014 01:42 PM, Hugo Mills wrote:
swap-on-NFS is still, I think, in a set of out-of-tree patches, and
it's not gone anywhere near btrfs yet. It's just that once it does
land in mainline, it would form the appropriate infrastructure to
develop swapfile capability for btrfs.
I just lo
On Wed, Oct 22, 2014 at 01:39:58PM -0700, Robert White wrote:
> On 10/22/2014 01:25 PM, Hugo Mills wrote:
> >The new code is the swap-on-NFS infrastructure, which indirects
> >swapfile accesses through the filesystem code. The reason you have to
> >do that with NFS is because NFS doesn't expose
On 10/22/2014 01:25 PM, Hugo Mills wrote:
The new code is the swap-on-NFS infrastructure, which indirects
swapfile accesses through the filesystem code. The reason you have to
do that with NFS is because NFS doesn't expose a block device at all,
so you can't get a list of blocks on an underly
On 10/22/2014 01:08 PM, Zygo Blaxell wrote:
I have datasets where I record 14000+ snapshots of filesystem directory
trees scraped from test machines and aggregated onto a single server
for deduplication...but I store each snapshot as a git commit, not as
a btrfs snapshot or even subvolume.
We do
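A rough sketch of that kind of pipeline (host names and paths are invented
here), where git's content-addressed storage deduplicates identical files
across the 14000+ commits:

# mkdir -p /srv/scrapes/testhost1 && cd /srv/scrapes/testhost1
# git init
# rsync -a --delete root@testhost1:/etc/ tree/
# git add -A
# git commit -m "testhost1 $(date -u +%FT%TZ)"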
On Wed, Oct 22, 2014 at 01:08:48PM -0700, Robert White wrote:
> So the documentation is clear that you can't mount a swap file
> through BTRFS (unless you use a loop device).
>
> Why is a NOCOW file that has been fully pre-allocated -- as with
> fallocate(1) -- not suitable for swapping?
>
> I
So the documentation is clear that you can't mount a swap file through
BTRFS (unless you use a loop device).
Why is a NOCOW file that has been fully pre-allocated -- as with
fallocate(1) -- not suitable for swapping?
I found one reference to an unimplemented feature necessary for swap,
bu
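The loop-device workaround referenced above looks roughly like this (the
path and loop device are examples); the loop device hands the kernel a
real block device to swap to:

# touch /mnt/swapfile
# chattr +C /mnt/swapfile
# fallocate -l 2G /mnt/swapfile
# mkswap /mnt/swapfile
# losetup /dev/loop0 /mnt/swapfile
# swapon /dev/loop0

Note that +C must be set while the file is still empty, hence the touch
before the fallocate.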
On Wed, Oct 22, 2014 at 07:41:32AM +0000, Duncan wrote:
> Tomasz Chmielewski posted on Wed, 22 Oct 2014 09:14:14 +0200 as excerpted:
> >> Tho that is of course per subvolume. If you have multiple subvolumes
> >> on the same filesystem, that can still end up being a thousand or two
> >> snapshots p
On Wed, Oct 22, 2014 at 12:41:10PM -0700, Robert White wrote:
> So I've been considering some NOCOW files (for VM disk images), but
> some questions arose. Is there a "1COW" (copy on write only once)
> flag, or are the following operations dangerous or undefined?
>
> (1) The page https://btrfs.wiki
So I've been considering some NOCOW files (for VM disk images), but some
questions arose. Is there a "1COW" (copy on write only once) flag, or are
the following operations dangerous or undefined?
(1) The page https://btrfs.wiki.kernel.org/index.php/FAQ (section "Can
copy-on-write be turned off
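For what it's worth, the usual recipe for NOCOW VM images is to set the
attribute on an empty directory so new files inherit it (the directory
name here is just an example):

# mkdir /mnt/images
# chattr +C /mnt/images
# fallocate -l 20G /mnt/images/vm.img
# lsattr /mnt/images/vm.img

lsattr should show the 'C' flag on the new image, confirming it inherited
NOCOW from the directory.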
On 10/22/2014 03:10 AM, Robert White wrote:
> Each snapshot is effectively stapling down one version of your
> entire metadata tree, right?
To the best of my knowledge, I cannot confirm that.
I understood (please feel free to correct me if I am wrong) that each
snapshot creates a copy of the chang
On 22/10/2014 14:40, Piotr Pawłow wrote:
On 22.10.2014 03:43, Chris Murphy wrote:
On Oct 21, 2014, at 4:14 PM, Piotr Pawłow wrote:
Looks normal to me. Last time I started a balance after adding a 6th
device to my FS, it took 4 days to move 25 GB of data.
It's long term untenable. At some point i
On Oct 21, 2014, at 9:43 PM, Chris Murphy wrote:
>
> On Oct 21, 2014, at 4:14 PM, Piotr Pawłow wrote:
>
>> On 21.10.2014 20:59, Tomasz Chmielewski wrote:
>>> FYI - after a failed disk and replacing it, I ran a balance; it took
>>> almost 3 weeks to complete, for 120 GB of data:
>>
>> Loo
On Tue, Oct 21, 2014 at 10:08 PM, Duncan <1i5t5.dun...@cox.net> wrote:
> As for the mounted filesystem question, since all it does is flip a
> switch so that new metadata writes use the skinny-metadata code path, it
> shouldn't be a problem.
Nope. Just tried it here:
# btrfs --version
Btrfs v3.1
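(For reference, and assuming a btrfs-progs recent enough to carry it, the
switch under discussion is btrfstune's -x flag, which enables skinny
metadata extent refs; it is normally run against the unmounted device, the
device name here being an example:

# umount /mnt
# btrfstune -x /dev/sdb7
# mount /dev/sdb7 /mnt

After that, only newly written metadata uses the skinny format; existing
metadata gets rewritten gradually, e.g. by a metadata balance.)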
On 22.10.2014 03:43, Chris Murphy wrote:
On Oct 21, 2014, at 4:14 PM, Piotr Pawłow wrote:
Looks normal to me. Last time I started a balance after adding a 6th device to my
FS, it took 4 days to move 25 GB of data.
It's untenable long-term. At some point it must be fixed. It's way, way slower
t
On 2014-10-21 21:10, Robert White wrote:
I don't think balance will _ever_ move the contents of a read-only
snapshot. I could be wrong. I think you just end up with an endlessly
fragmented storage space and balance has to take each chunk and search
for someplace else it might better fit. Which e
On 2014-10-21 16:44, Arnaud Kapp wrote:
Hello,
I would like to ask whether the balance time is related to the number of
snapshots, or only to the amount of data (or both).
I currently have about 4TB of data and around 5k snapshots. I'm thinking
of going raid1 instead of single. From the numbers I
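The conversion itself, once you decide to do it, is a rebalance with
convert filters (the mount point is an example):

# btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt

and, as the rest of this thread suggests, the number of snapshots will
heavily influence how long that balance runs.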
This is just a cleanup patch which it looks like you missed in 3.17.
Sorry if it confused you.
On 10/07/14 08:08, Anand Jain wrote:
After Patch:
remove BTRFS_SCAN_PROC scan method
There isn't any consumer for btrfs_scan_block_devices() so delete it.
Signed-off-by: Anand Jain
---
Hi,
You have mentioned two issues when balance and fi show are running
concurrently:
- stalling, and
- errors.
First of all:
3.17 replaced our own system-wide disk scan method with libblkid
scanning. libblkid, with its richer feature set, is slower, as reported
here:
https://www.mail-archive.com/linux
Tomasz Chmielewski posted on Wed, 22 Oct 2014 09:14:14 +0200 as excerpted:
> Remember a given btrfs filesystem is not necessarily a backup
> destination for data from one source.
>
> It can be, say, 30 or 60 daily snapshots, plus several monthly, for each
> data source * number of data sources.
>
But 5000 snapshots?
Why? Are you *TRYING* to test btrfs until it breaks, or TRYING to
demonstrate a balance taking an entire year?
Remember a given btrfs filesystem is not necessarily a backup
destination for data from one source.
It can be, say, 30 or 60 daily snapshots, plus several month
Goffredo Baroncelli posted on Tue, 21 Oct 2014 18:40:19 +0200 as
excerpted:
> On 10/21/2014 11:50 AM, Duncan wrote:
>> Goffredo Baroncelli posted on Mon, 20 Oct 2014 22:21:04 +0200 as
>> excerpted:
>>
> [...]
>>> >
>>> > Could this be related to the inode overflow in 32 bit system (see
>>> > ino