Hi Lucas, we mostly use Java to run our own programs. None of them do anything
special or interesting with the disk; it is simply where we deploy our .jar
files, plus scratch space for e.g. logs.
The fragmentation idea is interesting, but it seems unlikely that the disk
would be fatally fragmented at
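If fragmentation needs ruling out, a quick check is possible with standard tools; the paths below are only placeholders for wherever the .jar files and logs actually live:
filefrag /srv/app/*.jar /srv/app/logs/*.log   # reports the number of extents per file
btrfs filesystem defragment -r /srv/app       # recursive defragment, should it turn out to matter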
Austin S Hemmelgarn ahferro...@gmail.com schrieb:
On 2015-02-11 23:33, Kai Krakow wrote:
Duncan 1i5t5.dun...@cox.net schrieb:
P. Remek posted on Tue, 10 Feb 2015 18:44:33 +0100 as excerpted:
In the test, I use the --direct=1 parameter for fio, which basically does
O_DIRECT on the target file. The
Also,
Another thought came to me. It seems that the system only has issues when a
sync operation happens. As to why, I don't know, but maybe someone else on the
list can shed some light on this.
Steven,
The only reason I brought up swap space is that it seems the system may be
trying to use it due to low physical memory. How much RAM is in the machine
running Docker? The main thing that makes me want to believe it's RAM is this:
[146280.252150] [81180257]
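To confirm whether the machine is actually dipping into swap, something like the following should be enough; nothing here is specific to this setup:
free -m          # physical memory and swap usage in MiB
cat /proc/swaps  # configured swap devices and how much of each is in use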
On Thu, Feb 12, 2015 at 10:53:14AM +0100, David Sterba wrote:
Adding Greg to CC.
On Thu, Feb 12, 2015 at 07:03:37AM +0800, Anand Jain wrote:
drivers/cpufreq/cpufreq.c is already using this function, and now btrfs
needs it as well. Export the symbol kobject_move().
Signed-off-by: Anand
On Wed, Feb 11, 2015 at 11:12:39AM +, Filipe Manana wrote:
We try to lock a mutex while the current task state is not TASK_RUNNING,
which results in the following warning when CONFIG_DEBUG_LOCK_ALLOC=y:
[30736.772501] [ cut here ]
[30736.774545] WARNING: CPU: 9
[ Please CC me on replies, I'm not on the list ]
[ This is a followup to http://www.spinics.net/lists/linux-btrfs/msg41496.html ]
Hello linux-btrfs,
I've been having trouble keeping my Apache Mesos / Docker slave nodes stable.
After some period of load, tasks begin to hang. Once this happens
On 2015-02-11 23:33, Kai Krakow wrote:
Duncan 1i5t5.dun...@cox.net schrieb:
P. Remek posted on Tue, 10 Feb 2015 18:44:33 +0100 as excerpted:
In the test, I use the --direct=1 parameter for fio, which basically does
O_DIRECT on the target file. The O_DIRECT should guarantee that the
filesystem cache
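For reference, a minimal fio invocation along those lines might look like the sketch below; the file name, size and I/O pattern are illustrative assumptions, and only --direct=1 is taken from the report:
fio --name=directwrite --filename=/mnt/btrfs/testfile \
    --direct=1 --rw=randwrite --bs=4k --size=1G \
    --ioengine=libaio --iodepth=32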
Hi,
I am new to btrfs and trying to learn it.
I have downloaded the btrfs-progs code from the git repository but am not able
to compile it.
Can someone help me with the steps to compile these user space programs?
Thanks,
Nishant
We have a scenario where after the fsync log replay we can lose file data
that had been previously fsync'ed if we added a hard link for our inode
and after that we sync'ed the fsync log (for example by fsync'ing some
other file or directory).
This is because when adding a hard link we updated
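A rough sketch of the reported scenario using xfs_io; the mount point and file names are assumptions, and the crash step is of course simulated in the real test:
xfs_io -f -c "pwrite -S 0xaa 0 64K" -c "fsync" /mnt/btrfs/foo   # data is written and fsync'ed
ln /mnt/btrfs/foo /mnt/btrfs/foo_link                           # add a hard link to the inode
xfs_io -f -c "pwrite 0 4K" -c "fsync" /mnt/btrfs/bar            # persists the fsync log via another inode
# power failure / crash here, then mount again and let log replay run:
# on unfixed kernels foo can come back without the data written above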
On Thu, Feb 12, 2015 at 6:26 AM, Swâmi Petaramesh sw...@petaramesh.org wrote:
It also contains *lots* of subvols and snapshots.
About how many is lots?
1/ Could I first pull a disk out of the current RAID-1 config, losing
redundancy, without breaking anything else?
2/ Then reset the
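One way to put a number on "lots"; the mount point is an assumption:
btrfs subvolume list /mnt | wc -l      # all subvolumes, including snapshots
btrfs subvolume list -s /mnt | wc -l   # snapshots only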
This test is motivated by an fsync issue discovered in btrfs.
The issue was that we could lose file data that was previously
fsync'ed successfully if we end up adding a hard link to our
inode and then persist the fsync log later via an fsync of another
inode, for example. This is similar to my
On Thu, Feb 12, 2015 at 04:54:47PM -0500, Nishant Agrawal wrote:
Hi,
I am new to btrfs and trying to learn it.
I have downloaded the btrfs-progs code from the git repository but am not
able to compile it.
Can someone help me with the steps to compile these user space programs?
There's a bunch
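A build sketch that usually works for btrfs-progs from git; the dependency package names are Debian/Ubuntu assumptions, and the INSTALL file in the source tree is the authoritative list:
sudo apt-get install build-essential uuid-dev zlib1g-dev liblzo2-dev \
    libblkid-dev libattr1-dev libacl1-dev e2fslibs-dev
git clone git://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git
cd btrfs-progs
make                 # newer releases may want ./autogen.sh && ./configure first
sudo make install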
We have a scenario where after the fsync log replay we can lose file data
that had been previously fsync'ed if we added a hard link for our inode
and after that we sync'ed the fsync log (for example by fsync'ing some
other file or directory).
This is because when adding a hard link we updated
On Thu, Feb 12, 2015 at 11:16:51AM -0500, Josef Bacik wrote:
On our gluster boxes we stream large tarballs of backups onto our fses. With
160GB of RAM this means we get really large contiguous ranges of dirty data,
but the way our ENOSPC stuff works is that as long as it's contiguous we
Original Message
Subject: Re: [PATCH v3 00/10] Enhance btrfs-find-root and open_ctree()
to provide better chance on damaged btrfs.
From: David Sterba dste...@suse.cz
To: Qu Wenruo quwen...@cn.fujitsu.com
Date: 2015-02-12 21:16
On Thu, Feb 12, 2015 at 09:36:01AM +0800, Qu
Hi
I don't remember the exact mkfs.btrfs options anymore but
ls /sys/fs/btrfs/[UUID]/features/
shows the following output:
big_metadata compress_lzo extended_iref mixed_backref raid56
I also tested my device with a short
hdparm -tT /dev/dm5
and got
/dev/mapper/sdc_crypt:
Timing cached
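To read the node/leaf size alongside those feature flags straight off the disk, something like this should work; the device name is taken from the hdparm output above:
btrfs-show-super /dev/mapper/sdc_crypt | grep -i -e nodesize -e leafsize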
FYI, still seeing this with 3.19:
[196992.429463] [ cut here ]
[196992.429526] WARNING: CPU: 1 PID: 26328 at fs/btrfs/qgroup.c:1414
btrfs_delayed_qgroup_accounting+0x9f3/0xa0d [btrfs]()
[196992.429617] Modules linked in: xt_nat xt_tcpudp ipt_MASQUERADE
Ed Tomlinson e...@aei.ca schrieb:
On Tuesday, February 10, 2015 2:17:43 AM EST, Kai Krakow wrote:
Tobias Holst to...@tobby.eu schrieb:
and btrfs scrub status /[device] gives me the following output:
scrub status for [UUID]
scrub started at Mon Feb 9 18:16:38 2015 and was aborted after 2008
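If the scrub was aborted rather than finished, it can usually be picked up where it left off; the mount point is an assumption:
btrfs scrub resume /mnt      # continue an aborted/cancelled scrub
btrfs scrub status /mnt      # check progress afterwards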
On Wed, Feb 11, 2015 at 09:17:22PM -0500, Kevin Mulvey wrote:
This is a patch to inode.c that fixes some spacing errors found by
checkpatch.pl
https://btrfs.wiki.kernel.org/index.php/Project_ideas#Cleanup_projects
Please note that pure whitespace and style reformatting changes are not
really
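For reference, the kind of run that surfaces such issues, executed from the top of a kernel tree; --file checks an existing file rather than a patch:
./scripts/checkpatch.pl --file fs/btrfs/inode.c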
Adding Greg to CC.
On Thu, Feb 12, 2015 at 07:03:37AM +0800, Anand Jain wrote:
drivers/cpufreq/cpufreq.c is already using this function, and now btrfs
needs it as well. Export the symbol kobject_move().
Signed-off-by: Anand Jain anand.j...@oracle.com
---
v1-v2: Didn't notice there wasn't my
On Thu, Feb 12, 2015 at 05:33:41AM +0100, Kai Krakow wrote:
Duncan 1i5t5.dun...@cox.net schrieb:
P. Remek posted on Tue, 10 Feb 2015 18:44:33 +0100 as excerpted:
In the test, I use the --direct=1 parameter for fio, which basically does
O_DIRECT on the target file. The O_DIRECT should guarantee
Swâmi Petaramesh posted on Thu, 12 Feb 2015 14:26:09 +0100 as excerpted:
I have a BTRFS RAID-1 FS made from 2x 2TB SATA mechanical drives.
It was created a while ago, with the defaults of the time, i.e. 4K leaf size.
It also contains *lots* of subvols and snapshots.
It has become very slow
Hello guys,
On Thu, Feb 12, 2015 at 05:33:41AM +0100, Kai Krakow wrote:
Duncan 1i5t5.dun...@cox.net schrieb:
P. Remek posted on Tue, 10 Feb 2015 18:44:33 +0100 as excerpted:
In the test, I use the --direct=1 parameter for fio, which basically does
O_DIRECT on the target file. The O_DIRECT
On Fri, Feb 13, 2015 at 10:20:25AM +0900, Tomasz Chmielewski wrote:
FYI, still seeing this with 3.19:
I also got this warning (can be reproduced by a loop of running
xfstests/btrfs/057) and
tried to fix it but I failed.
I think you also have several snapshots, and this warning may occur
after
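A loop of that reproducer might look like this; it assumes an already configured xfstests checkout (TEST_DEV/SCRATCH_DEV set up in local.config):
cd xfstests
while ./check btrfs/057; do :; done   # stops on the first failing run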
I'm going to amend what I wrote earlier. The problem with the seed
device method is that it won't let you change the leaf size. That means
you'll need to go with a new volume with mkfs, and migrate the data with
btrfs send/receive instead.
And to clarify, you don't need to thin out subvolumes to start out
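A sketch of that migration; device names and paths are assumptions, and every subvolume to be carried over needs its own read-only snapshot:
mkfs.btrfs -n 16384 /dev/sdY                                      # new volume with 16K node/leaf size
mount /dev/sdY /mnt/new
btrfs subvolume snapshot -r /mnt/old/rootvol /mnt/old/rootvol.ro  # read-only snapshot required for send
btrfs send /mnt/old/rootvol.ro | btrfs receive /mnt/new/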
Hello,
Sometimes my system hangs for a few seconds.
When I start top, I see this:
%cpu: 80.7 command: btrfs-transacti
Is it normal that btrfs-transaction takes such high CPU?
uname -a:
Linux sanos1 3.13.11-ckt13 #1 SMP Tue Feb 3 12:06:18 CET 2015 x86_64 x86_64 x86_64 GNU/Linux
On Fri, Feb 13, 2015 at 12:19 AM, Roel Niesen roel.nie...@1stsolutions.be wrote:
Hello,
Sometimes my system hangs for a few seconds.
When I start top, I see this:
%cpu: 80.7 command: btrfs-transacti
Is it normal that btrfs-transaction takes such high CPU?
Approximately how many
Our gluster boxes were hitting a problem where they'd run out of space when
updating the block group cache and therefore wouldn't be able to update the free
space inode. This is a problem because this is how we invalidate the cache and
protect ourselves from errors further down the stack, so if
For newly restored metadumps we can actually mount the fs and use it properly,
except that the data obviously doesn't match. To get around this, make
us skip csum validation if the metadump_v2 flag is set on the fs; this will
allow us to reproduce balance issues with metadumps. Thanks,
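For context, creating and restoring such a metadump goes roughly like this; the device and file names are assumptions:
btrfs-image -c9 -t4 /dev/sdX metadump.img   # dump the metadata, compressed, 4 threads
btrfs-image -r metadump.img /dev/sdY        # restore onto a scratch device for debugging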
Hi,
I have a BTRFS RAID-1 FS made from 2x 2TB SATA mechanical drives.
It was created a while ago, with the defaults of the time, i.e. 4K leaf size.
It also contains *lots* of subvols and snapshots.
It has become very slow over time, and I know that BTRFS performs better with
the new 16K leaf size.
On Thu, Feb 12, 2015 at 09:36:01AM +0800, Qu Wenruo wrote:
Subject: Re: [PATCH v3 00/10] Enhance btrfs-find-root and open_ctree()
to provide better chance on damaged btrfs.
From: David Sterba dste...@suse.cz
To: Qu Wenruo quwen...@cn.fujitsu.com
Date: 2015-02-12 01:52
On Wed, Feb 11, 2015
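For anyone following along, the tools in question are used roughly like this on a damaged filesystem; the device, the tree root byte number and the output directory are placeholders:
btrfs-find-root /dev/sdX                         # list candidate tree roots found on the device
btrfs restore -t 123456789 /dev/sdX /mnt/rescue  # attempt file recovery using one of them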
On 02/11/2015 11:36 PM, Liu Bo wrote:
On Wed, Feb 11, 2015 at 03:08:59PM -0500, Josef Bacik wrote:
On our gluster boxes we stream large tarballs of backups onto our fses. With
160GB of RAM this means we get really large contiguous ranges of dirty data, but
the way our ENOSPC stuff works is
On our gluster boxes we stream large tarballs of backups onto our fses. With
160GB of RAM this means we get really large contiguous ranges of dirty data, but
the way our ENOSPC stuff works is that as long as it's contiguous we only hold
a metadata reservation for one extent. The problem is we
We have a scenario where after the fsync log replay we can lose file data
that had been previously fsync'ed if we added a hard link for our inode
and after that we sync'ed the fsync log (for example by fsync'ing some
other file or directory).
This is because when adding a hard link we updated
This test is motivated by an fsync issue discovered in btrfs.
The issue was that we could lose file data that was previously
fsync'ed successfully if we end up adding a hard link to our
inode and then persist the fsync log later via an fsync of another
inode, for example.
The btrfs issue was fixed