> -Original Message-
> From: Austin S. Hemmelgarn [mailto:ahferro...@gmail.com]
> Sent: Thursday, 6 July 2017 9:52 PM
> To: Paul Jones ; linux-btrfs@vger.kernel.org
> Subject: Re: Btrfs Compression
>
> On 2017-07-05 23:19, Paul Jones wrote:
> > While reading t
On 06/07/2017 at 13:51, Austin S. Hemmelgarn wrote:
>
> Additionally, when you're referring to extent size, I assume you mean
> the huge number of 128k extents that the FIEMAP ioctl (and at least
> older versions of `filefrag`) shows for compressed files? If that's
> the case, then it's importan
On 2017-07-05 23:19, Paul Jones wrote:
While reading the thread about adding zstd compression, it occurred
to me that there is potentially another thing affecting performance -
Compressed extent size. (correct my terminology if it's incorrect). I
have two near identical RAID1 filesystems (used fo
While reading the thread about adding zstd compression, it occurred to me that
there is potentially another thing affecting performance: compressed extent
size (correct my terminology if it's wrong).
I have two near identical RAID1 filesystems (used for backups) on near
identical discs (HG
On Tue, Jun 06, 2017 at 02:41:15PM +0300, Timofey Titovets wrote:
> Btrfs already skip store of data where compression didn't
> free at least one byte. Let's make logic better and make check
> that compression free at least one sector size
> because in another case it useless to store this data com
Btrfs already skips storing data when compression didn't
free at least one byte. Let's improve the logic and check
that compression frees at least one sector,
because otherwise it is useless to store this data compressed
Signed-off-by: Timofey Titovets
Cc: David Sterba
---
fs/btrfs/
On Mon, Jun 05, 2017 at 09:56:14PM +0300, Timofey Titovets wrote:
> 2017-06-05 19:10 GMT+03:00 David Sterba :
> > On Tue, May 30, 2017 at 02:18:05AM +0300, Timofey Titovets wrote:
> >> Btrfs already skip store of data where compression didn't
> >> free at least one byte. Let's make logic better and
2017-06-05 19:10 GMT+03:00 David Sterba :
> On Tue, May 30, 2017 at 02:18:05AM +0300, Timofey Titovets wrote:
>> Btrfs already skip store of data where compression didn't
>> free at least one byte. Let's make logic better and make check
>> that compression free at least one sector size
>> because i
On Tue, May 30, 2017 at 02:18:05AM +0300, Timofey Titovets wrote:
> Btrfs already skip store of data where compression didn't
> free at least one byte. Let's make logic better and make check
> that compression free at least one sector size
> because in another case it useless to store this data com
Btrfs already skips storing data when compression didn't
free at least one byte. Let's improve the logic and check
that compression frees at least one sector,
because otherwise it is useless to store this data compressed
Signed-off-by: Timofey Titovets
---
fs/btrfs/inode.c | 3 ++-
1
ight place compress_file_range()
Thanks to David
Timofey Titovets (2):
Btrfs: lzo.c compressed data size must be less than input size
Btrfs: compression must free at least one sector size
fs/btrfs/inode.c | 3 ++-
fs/btrfs/lzo.c | 4 +++-
2 files changed, 5 insertions(+), 2 deletions(-)
2017-05-29 17:23 GMT+03:00 David Sterba :
> On Thu, May 25, 2017 at 09:12:20PM +0300, Timofey Titovets wrote:
>> Btrfs already skip store of data where compression didn't
>> free at least one byte. Let's make logic better and make check
>> that compression free at least one sector size
>> because i
On Thu, May 25, 2017 at 09:12:20PM +0300, Timofey Titovets wrote:
> Btrfs already skip store of data where compression didn't
> free at least one byte. Let's make logic better and make check
> that compression free at least one sector size
> because in another case it useless to store this data com
Btrfs already skips storing data when compression didn't
free at least one byte. Let's improve the logic and check
that compression frees at least one sector,
because otherwise it is useless to store this data compressed
Signed-off-by: Timofey Titovets
---
fs/btrfs/lzo.c | 9 +++
data size
Changes since v3:
- Use the btrfs sector size directly instead of assuming that PAGE_SIZE == sectorsize
Timofey Titovets (2):
Btrfs: lzo.c pr_debug() deflate->lzo
Btrfs: compression must free at least one sector size
fs/btrfs/lzo.c | 11 +--
fs/btrfs/zlib.c | 7 ++-
2
2017-05-25 15:51 GMT+03:00 Chandan Rajendra :
...
> Apologies for the delayed response.
>
> I am not really sure if the compression code must save at least one sectorsize
> worth of space. But if other developers agree to it, then the above
> 'if' condition can be replaced with,
>
> u32 sectorsize = btr
On Sunday, May 21, 2017 12:10:39 AM IST Timofey Titovets wrote:
> Btrfs already skip store of data where compression didn't free at least one
> byte.
> So make logic better and make check that compression free at least one
> PAGE_SIZE,
> because in another case it useless to store this data compr
Hi Timofey,
[auto build test ERROR on v4.9-rc8]
[also build test ERROR on next-20170522]
[cannot apply to btrfs/next]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system]
url:
https://github.com/0day-ci/linux/commits/Timofey-Titovets/Btrfs-lzo-c-
Timofey Titovets posted on Mon, 22 May 2017 01:32:21 +0300 as excerpted:
> 2017-05-21 20:30 GMT+03:00 Roman Mamedov :
>> On Sun, 21 May 2017 19:54:05 +0300 Timofey Titovets
>> wrote:
>>
>>> Sorry, but i know about subpagesize-blocksize patch set, but i don't
>>> understand where you see conflict?
2017-05-21 20:30 GMT+03:00 Roman Mamedov :
> On Sun, 21 May 2017 19:54:05 +0300
> Timofey Titovets wrote:
>
>> Sorry, but i know about subpagesize-blocksize patch set, but i don't
>> understand where you see conflict?
>>
>> Can you explain what you mean?
>>
>> By PAGE_SIZE i mean fs cluster size i
On Sun, 21 May 2017 19:54:05 +0300
Timofey Titovets wrote:
> Sorry, but I know about the subpagesize-blocksize patch set, but I don't
> understand where you see a conflict?
>
> Can you explain what you mean?
>
> By PAGE_SIZE I mean the fs cluster size in my patch set.
This appears to be exactly the conf
2017-05-21 4:38 GMT+03:00 Duncan <1i5t5.dun...@cox.net>:
> Timofey Titovets posted on Sat, 20 May 2017 21:30:47 +0300 as excerpted:
>
>> 2017-05-20 20:14 GMT+03:00 Kai Krakow :
>>
>>> BTW: What's the smallest block size that btrfs stores? Is it always
>>> PAGE_SIZE? I'm not familiar with btrfs inte
Timofey Titovets posted on Sat, 20 May 2017 21:30:47 +0300 as excerpted:
> 2017-05-20 20:14 GMT+03:00 Kai Krakow :
>
>> BTW: What's the smallest block size that btrfs stores? Is it always
>> PAGE_SIZE? I'm not familiar with btrfs internals...
Thanks for asking the question. =:^) I hadn't made t
Btrfs already skips storing data when compression didn't free at least one
byte.
So let's improve the logic and check that compression frees at least one
PAGE_SIZE,
because otherwise it is useless to store this data compressed
Signed-off-by: Timofey Titovets
---
fs/btrfs/lzo.c | 5 -
fs
e
Timofey Titovets (2):
Btrfs: lzo.c pr_debug() deflate->lzo
Btrfs: compression must free at least PAGE_SIZE
fs/btrfs/lzo.c | 7 +--
fs/btrfs/zlib.c | 3 ++-
2 files changed, 7 insertions(+), 3 deletions(-)
--
2.13.0
--
To unsubscribe from this list: send the line "unsubscribe linux-btr
2017-05-20 20:14 GMT+03:00 Kai Krakow :
> Am Sat, 20 May 2017 19:49:53 +0300
> schrieb Timofey Titovets :
>
>> Btrfs already skip store of data where compression didn't free at
>> least one byte. So make logic better and make check that compression
>> free at least one PAGE_SIZE, because in another
Am Sat, 20 May 2017 19:49:53 +0300
schrieb Timofey Titovets :
> Btrfs already skip store of data where compression didn't free at
> least one byte. So make logic better and make check that compression
> free at least one PAGE_SIZE, because in another case it useless to
> store this data compressed
ffers
Changes since v1:
- Merge patches for zlib and lzo in one
- Sync check logic for zlib and lzo
- Check profit after all data are compressed (not while compressing)
Timofey Titovets (2):
Btrfs: lzo.c pr_debug() deflate->lzo
Btrfs: compression must free at least PAGE_SIZE
fs/btrfs/lz
Btrfs already skips storing data when compression didn't free at least one
byte.
So let's improve the logic and check that compression frees at least one
PAGE_SIZE,
because otherwise it is useless to store this data compressed
Signed-off-by: Timofey Titovets
---
fs/btrfs/lzo.c | 5 -
fs
The first patch fixes a copy-paste typo in a debug message in lzo.c (lzo is not deflate).
The second and third patches force btrfs not to compress data if compression will not
free at least one PAGE_SIZE,
because that is useless in terms of storage and of reading the data back from disk;
as a result, performance suffers
Timofey Tito
When btrfs compression is enabled, the original code cannot fill the fs
correctly; here we introduce _fill_fs() in common/rc, which keeps
creating and writing files until an ENOSPC error occurs. Note that _fill_fs
is copied from tests/generic/256, with some minor modifications.
Signed-off-by:
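The shape of such a helper can be sketched roughly as below. This is an assumed, simplified model of a _fill_fs-style function, not the actual xfstests code; the optional file-count cap is added here only to make the sketch testable:

```shell
# Keep creating and writing files in $1 until a write fails (e.g. ENOSPC)
# or an optional file-count cap $2 is reached; prints how many files were
# written. Illustrative only -- the real helper differs.
fill_fs_sketch() {
    local dir=$1 cap=${2:-0} i=0
    mkdir -p "$dir" || return 1
    while dd if=/dev/zero of="$dir/fill.$i" bs=4k count=4 >/dev/null 2>&1; do
        i=$((i + 1))
        [ "$cap" -gt 0 ] && [ "$i" -ge "$cap" ] && break
    done
    echo "$i"
}
```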
On Tue, Nov 01, 2016 at 04:49:34PM +0800, Wang Xiaoguang wrote:
> hi Darrick,
>
> Common/populate needs xfs_io supports falloc and fpunch,
> so I didn't put _fill_fs() in common/populate.
Tests will include common/rc first, and so pick up the functionality
_fill_fs requires before it's included f
hi Darrick,
common/populate needs xfs_io to support falloc and fpunch,
so I didn't put _fill_fs() in common/populate.
Regards,
Xiaoguang Wang
On 11/01/2016 04:45 PM, Wang Xiaoguang wrote:
When enabling btrfs compression, original codes can not fill fs
correctly, here we introduce _fill_fs
When btrfs compression is enabled, the original code cannot fill the fs
correctly; here we introduce _fill_fs() in common/rc, which keeps
creating and writing files until an ENOSPC error occurs. Note that _fill_fs
is copied from tests/generic/256, with some minor modifications.
Signed-off-by:
On Fri, Oct 28, 2016 at 03:05:55PM +0800, Wang Xiaoguang wrote:
> hi,
>
> On 10/27/2016 07:25 PM, Eryu Guan wrote:
> > On Wed, Oct 26, 2016 at 05:52:11PM +0800, Wang Xiaoguang wrote:
> > > When enabling btrfs compression, original codes can not fill fs
> > > corr
On Fri, Oct 28, 2016 at 03:00:29PM +0800, Wang Xiaoguang wrote:
> hi,
>
> On 10/28/2016 01:13 AM, Darrick J. Wong wrote:
> > On Wed, Oct 26, 2016 at 05:52:11PM +0800, Wang Xiaoguang wrote:
> > > When enabling btrfs compression, original codes can not fill fs
> >
hi,
On 10/27/2016 07:25 PM, Eryu Guan wrote:
On Wed, Oct 26, 2016 at 05:52:11PM +0800, Wang Xiaoguang wrote:
When enabling btrfs compression, original codes can not fill fs
correctly, here we introduce _fill_fs() in common/rc, which'll keep
creating and writing files until enospc error o
hi,
On 10/28/2016 01:13 AM, Darrick J. Wong wrote:
On Wed, Oct 26, 2016 at 05:52:11PM +0800, Wang Xiaoguang wrote:
When enabling btrfs compression, original codes can not fill fs
correctly, here we introduce _fill_fs() in common/rc, which'll keep
creating and writing files until enospc
On Wed, Oct 26, 2016 at 05:52:11PM +0800, Wang Xiaoguang wrote:
> When enabling btrfs compression, original codes can not fill fs
> correctly, here we introduce _fill_fs() in common/rc, which'll keep
> creating and writing files until enospc error occurs. Note _fill_fs
> is
On Wed, Oct 26, 2016 at 05:52:11PM +0800, Wang Xiaoguang wrote:
> When enabling btrfs compression, original codes can not fill fs
> correctly, here we introduce _fill_fs() in common/rc, which'll keep
> creating and writing files until enospc error occurs. Note _fill_fs
> is
When btrfs compression is enabled, the original code cannot fill the fs
correctly; here we introduce _fill_fs() in common/rc, which keeps
creating and writing files until an ENOSPC error occurs. Note that _fill_fs
is copied from tests/generic/256, with some minor modifications.
Signed-off-by:
On Mon, Oct 10, 2016 at 04:06:17PM +0800, Wang Xiaoguang wrote:
> When enabling btrfs compression, original codes can not fill fs
> correctly, fix this.
>
> Signed-off-by: Wang Xiaoguang
> ---
> V2: In common/, I did't find an existing function suitable for
> the
When btrfs compression is enabled, the original code cannot fill the fs
correctly; fix this.
Signed-off-by: Wang Xiaoguang
---
V2: In common/, I didn't find an existing function suitable for
these 4 test cases to fill the fs, so I still use _pwrite_byte() with
a big enough file length fo fi
On Mon, Oct 10, 2016 at 11:49:03AM +0800, Wang Xiaoguang wrote:
> hi,
>
> On 10/10/2016 05:04 AM, Darrick J. Wong wrote:
> >On Sat, Oct 08, 2016 at 01:36:10AM +1100, Dave Chinner wrote:
> >>On Fri, Oct 07, 2016 at 03:00:42PM +0800, Wang Xiaoguang wrote:
> >>
hi,
On 10/10/2016 05:04 AM, Darrick J. Wong wrote:
On Sat, Oct 08, 2016 at 01:36:10AM +1100, Dave Chinner wrote:
On Fri, Oct 07, 2016 at 03:00:42PM +0800, Wang Xiaoguang wrote:
When enabling btrfs compression, original codes can not fill fs
correctly, fix this.
Signed-off-by: Wang Xiaoguang
On Sat, Oct 08, 2016 at 01:36:10AM +1100, Dave Chinner wrote:
> On Fri, Oct 07, 2016 at 03:00:42PM +0800, Wang Xiaoguang wrote:
> > When enabling btrfs compression, original codes can not fill fs
> > correctly, fix this.
> >
> > Signed-off-by: Wang Xiaoguang
> &g
On Fri, Oct 07, 2016 at 03:00:42PM +0800, Wang Xiaoguang wrote:
> When enabling btrfs compression, original codes can not fill fs
> correctly, fix this.
>
> Signed-off-by: Wang Xiaoguang
> ---
> tests/generic/171 | 4 +---
> tests/generic/172 | 2 +-
> tests/gener
When btrfs compression is enabled, the original code cannot fill the fs
correctly; fix this.
Signed-off-by: Wang Xiaoguang
---
tests/generic/171 | 4 +---
tests/generic/172 | 2 +-
tests/generic/173 | 4 +---
tests/generic/174 | 4 +---
4 files changed, 4 insertions(+), 10 deletions(-)
diff --git a
From: Wang Xiaoguang
btrfs/059.out should not be hardcoded to zlib; if the compression method
is lzo, this case will fail wrongly, so add a filter here.
Signed-off-by: Wang Xiaoguang
---
common/filter.btrfs | 4
tests/btrfs/059 | 16 +++-
tests/btrfs/059.out | 6 +++---
3 fi
Oh, I see now :)
I'll try your changes and tell you if they work or not :)
--
Philippe Loctaux
p...@philippeloctaux.com
On Sun, Feb 21, 2016 at 04:37:54PM -0800, Joe Perches wrote:
> On Mon, 2016-02-22 at 00:26 +0100, Philippe Loctaux wrote:
> > Added line after variable declaration, fixing checkpat
On Mon, 2016-02-22 at 00:26 +0100, Philippe Loctaux wrote:
> Added line after variable declaration, fixing checkpatch warning.
[]
> diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
[]
> @@ -522,6 +522,7 @@ static noinline int add_ra_bio_pages(struct inode *inode,
>
>
Added a blank line after a variable declaration, fixing a checkpatch warning.
Signed-off-by: Philippe Loctaux
---
fs/btrfs/compression.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 3346cd8..5194b6f 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btr
Christoph Anton Mitterer posted on Sat, 19 Dec 2015 01:00:55 +0100 as
excerpted:
> How, exactly, do you propose that filefrag understand this? It is
> getting the information from the fiemap ioctl:
>
> https://www.kernel.org/doc/Documentation/filesystems/fiemap.txt
>
> and the problem is that f
To: Christoph Anton Mitterer , 808...@bugs.debian.org
Cc: Debian Bug Tracking System
Subject: Re: Bug#808265: e2fsprogs: support btrfs compression in filefrag
Date: Fri, 18 Dec 2015 18:25:21 -0500
On Fri, Dec 18, 2015 at 01:16:14AM +0100, Christoph Anton Mitterer
wrote:
>
> It woul
On 07/29/2014 11:54 PM, Nick Krause wrote:
> Hey Guys ,
> I am new to reading and writing kernel code.I got interested in
> writing code for btrfs as it seems to
> need more work then other file systems and this seems other then
> drivers, a good use of time on my part.
> I interested in helpin
On Wed, Jul 30, 2014 at 10:36:57AM -0400, Peter Hurley wrote:
>
> Where is that git tree? I've been planning to set up a unit test and
> regression suite for tty/serial, and wouldn't mind cribbing the
> infrastructure from someone's existing work.
https://git.kernel.org/cgit/fs/ext2/xfstests-bld.
On Wed, Jul 30, 2014 at 2:31 PM, wrote:
> Nick,
>
>> On Wed, Jul 30, 2014 at 11:36 AM, wrote:
On Tue, Jul 29, 2014 at 11:54:20PM -0400, Nick Krause wrote:
> Hey Guys ,
> I interested in helping improving the compression of btrfs by using a
> set of threads using work queues li
Nick,
> On Wed, Jul 30, 2014 at 11:36 AM, wrote:
>>> On Tue, Jul 29, 2014 at 11:54:20PM -0400, Nick Krause wrote:
Hey Guys ,
I interested in helping improving the compression of btrfs by using a
set of threads using work queues like XFS
or reads and keeping the page cache af
On Wed, Jul 30, 2014 at 11:36 AM, wrote:
>> On Tue, Jul 29, 2014 at 11:54:20PM -0400, Nick Krause wrote:
>>> Hey Guys ,
>>> I interested in helping improving the compression of btrfs by using a
>>> set of threads using work queues like XFS
>>> or reads and keeping the page cache after reading co
> On Tue, Jul 29, 2014 at 11:54:20PM -0400, Nick Krause wrote:
>> Hey Guys ,
>> I interested in helping improving the compression of btrfs by using a
>> set of threads using work queues like XFS
>> or reads and keeping the page cache after reading compressed blocks as
>> these seem to be a great w
On 07/30/2014 10:13 AM, Theodore Ts'o wrote:
> On Wed, Jul 30, 2014 at 10:38:21AM +0100, Hugo Mills wrote:
>> qemu/kvm is good for this, because it has a mode
>> that bypasses the BIOS and bootloader emulation, and just directly
>> runs a kernel from a file on the host machine. This is fast. You ca
On Wed, Jul 30, 2014 at 10:38:21AM +0100, Hugo Mills wrote:
> qemu/kvm is good for this, because it has a mode
> that bypasses the BIOS and bootloader emulation, and just directly
> runs a kernel from a file on the host machine. This is fast. You can
> pass large sparse files to the VM to act as sc
On Tue, Jul 29, 2014 at 11:54:20PM -0400, Nick Krause wrote:
> Hey Guys ,
> I am new to reading and writing kernel code.I got interested in
> writing code for btrfs as it seems to
> need more work then other file systems and this seems other then
> drivers, a good use of time on my part.
> I in
Hey guys,
I am new to reading and writing kernel code. I got interested in
writing code for btrfs as it seems to
need more work than other file systems, and this seems, other than
drivers, a good use of my time.
I am interested in helping improve the compression of btrfs by using a
set of
Hello,
This patch reduces zlib compression memory usage by `merging' inflate
and deflate streams into a single stream.
-- v2: rebased-on linux-next rc4 20140707
Sergey Senozhatsky (1):
btrfs compression: merge inflate and deflate z_streams
fs/btrfs/zlib.c
`struct workspace' used for zlib compression contains two zlib
z_stream-s: `def_strm' used in zlib_compress_pages(), and `inf_strm'
used in zlib_decompress/zlib_decompress_biovec(). None of these
functions uses `inf_strm' and `def_strm' simultaneously, meaning that
for every compress/decompress oper
On (07/01/14 16:44), David Sterba wrote:
> On Tue, Jul 01, 2014 at 12:32:10AM +0900, Sergey Senozhatsky wrote:
> > `struct workspace' used for zlib compression contains two zlib
> > z_stream-s: `def_strm' used in zlib_compress_pages(), and `inf_strm'
> > used in zlib_decompress/zlib_decompress_biov
On Tue, Jul 01, 2014 at 12:32:10AM +0900, Sergey Senozhatsky wrote:
> `struct workspace' used for zlib compression contains two zlib
> z_stream-s: `def_strm' used in zlib_compress_pages(), and `inf_strm'
> used in zlib_decompress/zlib_decompress_biovec(). None of these
> functions use `inf_strm' an
`struct workspace' used for zlib compression contains two zlib
z_stream-s: `def_strm' used in zlib_compress_pages(), and `inf_strm'
used in zlib_decompress/zlib_decompress_biovec(). None of these
functions uses `inf_strm' and `def_strm' simultaneously, meaning that
for every compress/decompress oper
On Wed, Jun 25, 2014 at 12:00:44AM +0900, Sergey Senozhatsky wrote:
> Add compression `workspace' in free_workspace() to
> `idle_workspace' list head, instead of tail. So we have
> better chances to reuse most recently used `workspace'.
>
> Signed-off-by: Sergey Senozhatsky
Makes sense to me,
Re
f address translations.
P.S. This patch is theoretical; no testing has been performed to
support it.
Sergey Senozhatsky (1):
btrfs compression: reuse recently used workspace
fs/btrfs/compression.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--
2.0.0.548.ge727dec
In free_workspace(), add the compression `workspace' to the head of the
`idle_workspace' list instead of the tail, so that we have
a better chance of reusing the most recently used `workspace'.
Signed-off-by: Sergey Senozhatsky
---
fs/btrfs/compression.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/b
On 23/08/13 01:20, Mark Ridley wrote:
> The main reason I started using strict allocate = yes on samba was out
> of desperation/exasperation with BTRFS.
The most effective performance option is to turn oplocks on.
Opportunistic locks are granted to a
Hi,
> The speed improvement for dumping large databases through samba with
> strict allocate = yes to BTRFS was amazing. It reduced a 1 hour dump down
> to 20 minutes.
What you want btrfs to do is to allocate a file of fixed-size on disk
in advance, without knowing how large the file will be aft
That would be fine, but nodatacow (according to the btrfs wiki) stops
compression, so I might as well get the speed benefits of 'strict allocate
= yes' which also disables compression.
If you want to use BTRFS to store backups, then compression has to be turned
on.
Database files like MSSQL usually c
The speed improvement for dumping large databases through samba with
strict allocate = yes to BTRFS was amazing. It reduced a 1 hour dump down
to 20 minutes.
On 23/08/2013 09:01, "Roger Binns" wrote:
>On 22/08/13 07:07, Josef Bacik wrote:
>> Not
Mark Ridley posted on Fri, 23 Aug 2013 09:20:04 +0100 as excerpted:
> I don't want to try nodatacow (which would probably fix the issue), but
> you lose compression on the whole filesystem, autodefrag doesn't fix it
> either.
I don't do servantware (in the context of my sig) and thus don't do sam
I tried defrag -c and it does nothing to files that have come in with
strict allocate = yes.
On 22/08/2013 19:29, "Kai Krakow" wrote:
>Josef Bacik schrieb:
>
>> Not sure what strict allocate = yes does, but I assume it probably does
>> fallocate() in which case yeah we aren't going to compress
The main reason I started using strict allocate = yes on samba was out of
desperation/exasperation with BTRFS.
BTRFS stalls from time to time causing SAMBA and/or MSSQL to give up on
the dump of a database.
From what I have noticed, if for example you dump a 50GB database to samba
without strict
On 22/08/13 07:07, Josef Bacik wrote:
> Not sure what strict allocate = yes does,
I've worked on SMB servers before and can answer that. Historically the
way Windows apps (right back into the 16 bit days) have made sure there is
space for a file abou
On Thu, Aug 22, 2013 at 08:29:26PM +0200, Kai Krakow wrote:
> Josef Bacik schrieb:
>
> > Not sure what strict allocate = yes does, but I assume it probably does
> > fallocate() in which case yeah we aren't going to compress, we'll just
> > write
> > into the preallocated space. We don't support
Josef Bacik schrieb:
> Not sure what strict allocate = yes does, but I assume it probably does
> fallocate() in which case yeah we aren't going to compress, we'll just
> write
> into the preallocated space. We don't support compressed writes into
> preallocated space ATM, and I'm not sure we eve
On Thu, Aug 22, 2013 at 09:57:24AM +, Mark Ridley wrote:
> Hi,
>
> If i set strict allocate = yes in samba to speed up the transfer
> of a mssql database dump,
> then btrfs does not compress the file.
> I have tried it also by just copying a small file in Windows to the
> samba share and the
Hi,
If I set strict allocate = yes in Samba to speed up the transfer
of an MSSQL database dump,
then btrfs does not compress the file.
I have also tried just copying a small file from Windows to the
Samba share, with the same result.
I have tried the btrfs mount option autodefrag, and then
btrfs fi defra
Signed-off-by: Andrew Mahone
---
fs/btrfs/lz4_wrapper.c | 419 +
1 file changed, 419 insertions(+)
create mode 100644 fs/btrfs/lz4_wrapper.c
diff --git a/fs/btrfs/lz4_wrapper.c b/fs/btrfs/lz4_wrapper.c
new file mode 100644
index 000..60854de
On Tue, Feb 15, 2011 at 11:30:38AM +, Pádraig Brady wrote:
> On 14/02/11 17:58, Marti Raudsepp wrote:
> > On Mon, Feb 14, 2011 at 17:01, Chris Mason wrote:
> >> Or, it could just be delalloc ;)
> >
> > I suspect delalloc. After creating the file, filefrag reports "1
> > extent found", but for
On 14/02/11 17:58, Marti Raudsepp wrote:
> On Mon, Feb 14, 2011 at 17:01, Chris Mason wrote:
>> Or, it could just be delalloc ;)
>
> I suspect delalloc. After creating the file, filefrag reports "1
> extent found", but for some reason it doesn't actually print out
> details of the extent.
That's
Excerpts from Marti Raudsepp's message of 2011-02-14 12:58:17 -0500:
> On Mon, Feb 14, 2011 at 17:01, Chris Mason wrote:
> > Or, it could just be delalloc ;)
>
> I suspect delalloc. After creating the file, filefrag reports "1
> extent found", but for some reason it doesn't actually print out
> d
On Mon, Feb 14, 2011 at 17:01, Chris Mason wrote:
> Or, it could just be delalloc ;)
I suspect delalloc. After creating the file, filefrag reports "1
extent found", but for some reason it doesn't actually print out
details of the extent.
After a "sync" call, the extent appears and "cp" starts wo
Excerpts from Josef Bacik's message of 2011-02-13 11:13:30 -0500:
> On Sun, Feb 13, 2011 at 06:07:36PM +0200, Marti Raudsepp wrote:
> > On Sun, Feb 13, 2011 at 17:57, Josef Bacik wrote:
> > > Does the same problem happen when you use cp --sparse=never?
> >
> > You are right. cp --sparse=never doe
On Sun, Feb 13, 2011 at 05:49:42PM +0200, Marti Raudsepp wrote:
> Hi list!
>
> It seems I have found a serious regression in compressed btrfs in
> kernel 2.6.37. When creating a small file (less than the block size)
> and then cp/mv it to *another* file system, an appropriate number of
> zeroes ge
On Sun, Feb 13, 2011 at 06:07:36PM +0200, Marti Raudsepp wrote:
> On Sun, Feb 13, 2011 at 17:57, Josef Bacik wrote:
> > Does the same problem happen when you use cp --sparse=never?
>
> You are right. cp --sparse=never does not cause data loss.
>
So fiemap probably isn't doing the right thing whe
On Sun, Feb 13, 2011 at 17:57, Josef Bacik wrote:
> Does the same problem happen when you use cp --sparse=never?
You are right. cp --sparse=never does not cause data loss.
Regards,
Marti
On Sun, Feb 13, 2011 at 05:49:42PM +0200, Marti Raudsepp wrote:
> Hi list!
>
> It seems I have found a serious regression in compressed btrfs in
> kernel 2.6.37. When creating a small file (less than the block size)
> and then cp/mv it to *another* file system, an appropriate number of
> zeroes ge
Hi list!
It seems I have found a serious regression in compressed btrfs in
kernel 2.6.37. When creating a small file (less than the block size)
and then cp/mv it to *another* file system, an appropriate number of
zeroes gets written to the destination file. Case in point:
% echo foobar > foobar
%
On Mon, 2009-04-06 at 18:32 +1200, mp3geek wrote:
> Just wondering how do I measure the compression used in btrfs?
I'm afraid the best way right now is to compare the storage reported in
the FS by df with the sizes of the files reported by du.
We need to add an ioctl that reports on the actual si
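The df/du comparison can be scripted directly with GNU du alone: compare allocated bytes against apparent (logical) bytes. Illustrative sketch; on a compressed btrfs the first number would come out smaller than the second:

```shell
# Compare on-disk usage with logical (apparent) size for a directory.
# On a compressing filesystem the ratio approximates the compression gain.
d=/tmp/compratio.$$
mkdir -p "$d"
dd if=/dev/zero of="$d/zeros" bs=1M count=1 2>/dev/null

disk=$(du -sB1 "$d" | cut -f1)                      # bytes actually allocated
logical=$(du -sB1 --apparent-size "$d" | cut -f1)   # bytes of file data
echo "disk=$disk logical=$logical"
rm -rf "$d"
```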
Just wondering how do I measure the compression used in btrfs?