Re: [f2fs-dev] [GIT PULL] f2fs fix for 5.13-rc1

2021-05-14 Thread pr-tracker-bot
The pull request you sent on Fri, 14 May 2021 03:14:12 -0700:

> git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git 
> tags/f2fs-5.13-rc1-fix

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/ac524ece210e0689f037e2d80bee49bb39791792

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html




Re: [f2fs-dev] [PATCH 03/11] mm: Protect operations adding pages to page cache with invalidate_lock

2021-05-14 Thread Darrick J. Wong
On Fri, May 14, 2021 at 09:19:45AM +1000, Dave Chinner wrote:
> On Thu, May 13, 2021 at 11:52:52AM -0700, Darrick J. Wong wrote:
> > On Thu, May 13, 2021 at 07:44:59PM +0200, Jan Kara wrote:
> > > On Wed 12-05-21 08:23:45, Darrick J. Wong wrote:
> > > > On Wed, May 12, 2021 at 03:46:11PM +0200, Jan Kara wrote:
> > > > > +->fallocate implementation must be really careful to maintain page 
> > > > > cache
> > > > > +consistency when punching holes or performing other operations that 
> > > > > invalidate
> > > > > +page cache contents. Usually the filesystem needs to call
> > > > > +truncate_inode_pages_range() to invalidate relevant range of the 
> > > > > page cache.
> > > > > +However the filesystem usually also needs to update its internal 
> > > > > (and on disk)
> > > > > +view of file offset -> disk block mapping. Until this update is 
> > > > > finished, the
> > > > > +filesystem needs to block page faults and reads from reloading 
> > > > > now-stale page
> > > > > +cache contents from the disk. VFS provides mapping->invalidate_lock 
> > > > > for this
> > > > > +and acquires it in shared mode in paths loading pages from disk
> > > > > +(filemap_fault(), filemap_read(), readahead paths). The filesystem is
> > > > > +responsible for taking this lock in its fallocate implementation and 
> > > > > generally
> > > > > +whenever the page cache contents needs to be invalidated because a 
> > > > > block is
> > > > > +moving from under a page.
> > > > > +
> > > > > +->copy_file_range and ->remap_file_range implementations need to 
> > > > > serialize
> > > > > +against modifications of file data while the operation is running. 
> > > > > For blocking
> > > > > +changes through write(2) and similar operations inode->i_rwsem can 
> > > > > be used. For
> > > > > +blocking changes through memory mapping, the filesystem can use
> > > > > +mapping->invalidate_lock provided it also acquires it in its 
> > > > > ->page_mkwrite
> > > > > +implementation.
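To make the quoted rule concrete before the questions below: a hole-punching
->fallocate that follows it could look roughly like this sketch (illustrative
only; example_fs_remove_blocks() is a hypothetical stand-in for the fs-specific
block-mapping update, and invalidate_lock is assumed to be the plain
rw_semaphore added by this series):

	static long example_punch_hole(struct inode *inode, loff_t offset, loff_t len)
	{
		struct address_space *mapping = inode->i_mapping;
		long err;

		/* Block page faults and reads from reloading now-stale data. */
		down_write(&mapping->invalidate_lock);
		truncate_inode_pages_range(mapping, offset, offset + len - 1);
		/* Update the file offset -> disk block mapping (hypothetical helper). */
		err = example_fs_remove_blocks(inode, offset, len);
		up_write(&mapping->invalidate_lock);	/* readers may repopulate the cache now */
		return err;
	}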
> > > > 
> > > > Question: What is the locking order when acquiring the invalidate_lock
> > > > of two different files?  Is it the same as i_rwsem (increasing order of
> > > > the struct inode pointer) or is it the same as the XFS MMAPLOCK that is
> > > > being hoisted here (increasing order of i_ino)?
> > > > 
> > > > The reason I ask is that remap_file_range has to do that, but I don't
> > > > see any conversions for the xfs_lock_two_inodes(..., MMAPLOCK_EXCL)
> > > > calls in xfs_ilock2_io_mmap in this series.
> > > 
> > > Good question. Technically, I don't think there's real need to establish a
> > > single ordering because locks among different filesystems are never going
> > > to be acquired together (effectively each lock type is local per sb and we
> > > are free to define an ordering for each lock type differently). But to
> > > maintain some sanity I guess having the same locking order for doublelock
> > > of i_rwsem and invalidate_lock makes sense. Is there a reason why XFS uses
> > > by-ino ordering? So that we don't have to consider two different orders in
> > > xfs_lock_two_inodes()...
> > 
> > I imagine Dave will chime in on this, but I suspect the reason is
> > hysterical raisins^Wreasons.
> 
> It's the locking rules that XFS has used pretty much forever.
> Locking by inode number always guarantees the same locking order of
> two inodes in the same filesystem, regardless of the specific
> in-memory instances of the two inodes.
> 
> e.g. if we lock based on the inode structure address, in one
> instance, we could get A -> B, then B gets recycled and
> reallocated, then we get B -> A as the locking order for the same
> two inodes.
> 
> That, IMNSHO, is utterly crazy because with non-deterministic inode
> lock ordering like this you can't make consistent locking rules for
> locking the physical inode cluster buffers underlying the inodes in
> the situation where they also need to be locked.

 That's protected by the ILOCK, correct?

> We've been down this path before more than a decade ago when the
> powers that be decreed that inode locking order is to be "by
> structure address" rather than inode number, because "inode number
> is not unique across multiple superblocks".
> 
> I'm not sure that there is anywhere that locks multiple inodes
> across different superblocks, but here we are again

Hm.  Are there situations where one would want to lock multiple
/mappings/ across different superblocks?  The remapping code doesn't
allow cross-super operations, so ... pipes and splice, maybe?  I don't
remember that code well enough to say for sure.

I've been operating under the assumption that as long as one takes all
the same class of lock at the same time (e.g. all the IOLOCKs, then all
the MMAPLOCKs, then all the ILOCKs, like reflink does) that the
incongruency in locking order rules within a class shouldn't be a
problem.
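
FWIW, a doublelock helper keyed by inode number, in the spirit of
xfs_lock_two_inodes(), could look like the sketch below (made-up names, and it
assumes invalidate_lock stays a plain rw_semaphore):

	static void example_lock_two_mappings(struct inode *a, struct inode *b)
	{
		if (a == b) {
			down_write(&a->i_mapping->invalidate_lock);
			return;
		}
		/* Always take the lock of the lower-numbered inode first. */
		if (a->i_ino > b->i_ino)
			swap(a, b);
		down_write(&a->i_mapping->invalidate_lock);
		down_write_nested(&b->i_mapping->invalidate_lock, SINGLE_DEPTH_NESTING);
	}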

> > It might simply be time to convert all
> > three XFS inode locks to use the same ordering rules.
> 
> Careful, there lie 

Re: [f2fs-dev] [PATCH v2 00/40] Use ASCII subset instead of UTF-8 alternate symbols

2021-05-14 Thread Mauro Carvalho Chehab
On Fri, 14 May 2021 12:08:36 +0100, Edward Cree wrote:

> For anyone who doesn't know about it: X has this wonderful thing called
>  the Compose key[1].  For instance, type ⎄--- to get —, or ⎄<" for “.
> Much more mnemonic than Unicode codepoints; and you can extend it with
>  user-defined sequences in your ~/.XCompose file.

Good tip. I haven't used Compose for years, as US-intl with dead keys is
enough for 99.999% of my needs.

Btw, at least on Fedora with Mate, the Compose key is disabled by default. It has
to be enabled first using the same tool that allows changing the keyboard
layout[1].

Yet, typing an EN DASH, for example, would be "--." after the Compose key,
which is 4 keystrokes instead of just two ('--'). It means twice the effort ;-)

[1] KDE, GNOME, Mate, ... have different ways to enable it and to
select which key acts as the Compose key:

https://dry.sailingissues.com/us-international-keyboard-layout.html
https://help.ubuntu.com/community/ComposeKey

Thanks,
Mauro




Re: [f2fs-dev] [PATCH v2 00/40] Use ASCII subset instead of UTF-8 alternate symbols

2021-05-14 Thread Edward Cree
> On Fri, 2021-05-14 at 10:21 +0200, Mauro Carvalho Chehab wrote:
>> I do use a lot of UTF-8 here, as I type texts in Portuguese, but I rely
>> on the US-intl keyboard settings, that allow me to type as "'a" for á.
>> However, there's no shortcut for non-Latin UTF-codes, as far as I know.
>>
>> So, if I would need to type a curly comma on the text editors I normally
>> use for development (vim, nano, kate), I would need to cut-and-paste
>> it from somewhere

For anyone who doesn't know about it: X has this wonderful thing called
 the Compose key[1].  For instance, type ⎄--- to get —, or ⎄<" for “.
Much more mnemonic than Unicode codepoints; and you can extend it with
 user-defined sequences in your ~/.XCompose file.
(I assume Wayland supports all this too, but don't know the details.)
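
For example, a couple of user-defined entries in ~/.XCompose could look like
this (the sequences are arbitrary illustrations, not defaults):

	include "%L"				# keep the system compose table
	<Multi_key> <l> <l> : "ℓ"		# script small l
	<Multi_key> <minus> <greater> : "→"	# rightwards arrow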

On 14/05/2021 10:06, David Woodhouse wrote:
> Again, if you want to make specific fixes like removing non-breaking
> spaces and byte order marks, with specific reasons, then those make
> sense. But it's got very little to do with UTF-8 and how easy it is to
> type them. And the excuse you've put in the commit comment for your
> patches is utterly bogus.

+1

-ed

[1] https://en.wikipedia.org/wiki/Compose_key




Re: [f2fs-dev] [PATCH 03/11] mm: Protect operations adding pages to page cache with invalidate_lock

2021-05-14 Thread Jan Kara
On Thu 13-05-21 20:38:47, Matthew Wilcox wrote:
> On Thu, May 13, 2021 at 09:01:14PM +0200, Jan Kara wrote:
> > On Wed 12-05-21 15:40:21, Matthew Wilcox wrote:
> > > Remind me (or, rather, add to the documentation) why we have to hold the
> > > invalidate_lock during the call to readpage / readahead, and we don't just
> > > hold it around the call to add_to_page_cache / add_to_page_cache_locked
> > > / add_to_page_cache_lru ?  I appreciate that ->readpages is still going
> > > to suck, but we're down to just three implementations of ->readpages now
> > > (9p, cifs & nfs).
> > 
> > There's a comment in filemap_create_page() trying to explain this. We need
> > to protect against cases like: Filesystem with 1k blocksize, file F has
> > page at index 0 with uptodate buffer at 0-1k, rest not uptodate. All blocks
> > underlying page are allocated. Now let read at offset 1k race with hole
> > punch at offset 1k, length 1k.
> > 
> > read()                                hole punch
> > ...
> >   filemap_read()
> >     filemap_get_pages()
> >       - page found in the page cache but !Uptodate
> >       filemap_update_page()
> >                                          locks everything
> >                                          truncate_inode_pages_range()
> >                                            lock_page(page)
> >                                            do_invalidatepage()
> >                                            unlock_page(page)
> >         locks page
> >         filemap_read_page()
> 
> Ah, this is the partial_start case, which means that page->mapping
> is still valid.  But that means that do_invalidatepage() was called
> with (offset 1024, length 1024), immediately after we called
> zero_user_segment().  So isn't this a bug in the fs do_invalidatepage()?
> The range from 1k-2k _is_ uptodate.  It's been zeroed in memory,
> and if we were to run after the "free block" below, we'd get that
> memory zeroed again.

Well, yes, do_invalidatepage() could mark the zeroed region as uptodate. But I
don't think we want to rely on 'uptodate' not getting spuriously cleared
(which would reopen the problem). Generally the assumption is that there's
no problem clearing (or not setting) the uptodate flag of a clean buffer,
because the fs can always provide the data again. Similarly, the fs is free to
refetch data into a clean & uptodate page if it thinks it's worth it. If we
relied on the zeroing, all of these would become correctness issues. So IMHO
the fragility is not worth the shorter lock hold times. That's why I went for
the rule that read-IO submission is still protected by invalidate_lock, to keep
things simple.
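
In other words, the reader side that this rule implies is roughly the sketch
below (illustrative names, not the actual mm/filemap.c code):

	static int example_read_page(struct file *file, struct page *page)
	{
		struct address_space *mapping = page->mapping;
		int err;

		/*
		 * Shared lock held across read-IO submission, so the submission
		 * cannot slip in between a hole punch's page-cache invalidation
		 * and its block-mapping update.
		 */
		down_read(&mapping->invalidate_lock);
		err = mapping->a_ops->readpage(file, page);
		up_read(&mapping->invalidate_lock);
		return err;
	}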

Honza
-- 
Jan Kara 
SUSE Labs, CR




[f2fs-dev] [GIT PULL] f2fs fix for 5.13-rc1

2021-05-14 Thread Jaegeuk Kim
Hi Linus,

Could you please consider this pull request?

Thanks,

The following changes since commit 6efb943b8616ec53a5e444193dccf1af9ad627b5:

  Linux 5.13-rc1 (2021-05-09 14:17:44 -0700)

are available in the Git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git 
tags/f2fs-5.13-rc1-fix

for you to fetch changes up to f395183f9544ba2f56b25938d6ea7042bd873521:

  f2fs: return EINVAL for hole cases in swap file (2021-05-12 07:38:00 -0700)


f2fs-5.13-rc1-fix

This series of patches fixes some critical bugs, such as a memory leak in the compression
flows, a kernel panic when handling errors, and a swapon failure due to a newly added
condition check.


Chao Yu (3):
  f2fs: compress: fix to free compress page correctly
  f2fs: compress: fix race condition of overwrite vs truncate
  f2fs: compress: fix to assign cc.cluster_idx correctly

Jaegeuk Kim (4):
  f2fs: avoid null pointer access when handling IPU error
  f2fs: support iflag change given the mask
  f2fs: avoid swapon failure by giving a warning first
  f2fs: return EINVAL for hole cases in swap file

 fs/f2fs/compress.c | 55 +++---
 fs/f2fs/data.c | 39 +++---
 fs/f2fs/f2fs.h |  2 +-
 fs/f2fs/file.c |  3 ++-
 fs/f2fs/segment.c  |  4 ++--
 5 files changed, 56 insertions(+), 47 deletions(-)




[f2fs-dev] Re: [PATCH v3] f2fs: compress: add nocompress extensions support

2021-05-14 Thread changfengnan
I think the explanation below is clear enough, what do you think?

" After adding nocompress_extension, the priority should be:
dir_flag < comp_extension, nocompress_extension < comp_file_flag,
no_comp_file_flag.

For example:
1. If the dir is set to compress, default files and compress_extension-specified
files will be compressed, and nocompress_extension-specified files will not be compressed.
2. If the dir is set to not compress, default files and nocompress_extension-specified
files will not be compressed, but compress_extension-specified files will be compressed.
3. We can change the compress attribute of a file regardless of which type of
dir it is in and whether its extension is specified or not."
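
A usage sketch of cases 1 and 2 above (illustrative only; nocompress_extension
is the option proposed by this patch, and chattr +c is the usual way to mark a
directory as compressed):

	# compress everything created under compressed dirs, except *.log files
	mount -t f2fs -o 'compress_algorithm=lz4,compress_extension=*,nocompress_extension=log' /dev/sdb1 /mnt
	chattr +c /mnt/data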

-----Original Message-----
From: Chao Yu
Sent: 2021-05-14 15:50
To: Fengnan Chang
Cc: jaeg...@kernel.org; linux-f2fs-devel@lists.sourceforge.net
Subject: Re: [f2fs-dev] [PATCH v3] f2fs: compress: add nocompress extensions support

On 2021/5/7 11:05, Fengnan Chang wrote:
> When we create a directory with compression enabled, all files written
> into the directory will try to be compressed. But sometimes we may know that
> a new file cannot meet the compression ratio requirements.
> We need a nocompress extension to skip those files and avoid the
> unnecessary compression attempt.

Could you please elaborate on the priority of comp_ext, no_comp_ext, dir_flag,
comp_file_flag and no_comp_file_flag here and in f2fs.rst as well?

> 
> Signed-off-by: Fengnan Chang 
> ---
>   Documentation/filesystems/f2fs.rst |  8 +++
>   fs/f2fs/f2fs.h |  2 +
>   fs/f2fs/namei.c| 18 +--
>   fs/f2fs/super.c| 79 +-
>   4 files changed, 103 insertions(+), 4 deletions(-)
> 
> diff --git a/Documentation/filesystems/f2fs.rst b/Documentation/filesystems/f2fs.rst
> index 63c0c49b726d..f9248a36cd53 100644
> --- a/Documentation/filesystems/f2fs.rst
> +++ b/Documentation/filesystems/f2fs.rst
> @@ -281,6 +281,14 @@ compress_extension=%s Support adding specified extension, so that f2fs can enab
>                           For other files, we can still enable compression via ioctl.
>                           Note that, there is one reserved special extension '*', it
>                           can be set to enable compression for all files.
> +nocompress_extension=%s  Support adding specified extension, so that f2fs can disable
> +                         compression on those corresponding files, just contrary to compression extension.
> +                         If you know exactly which files cannot be compressed, you can use this.
> +                         The same extension name can't appear in both compress and nocompress
> +                         extension at the same time.
> +                         If the compress extension specifies all files, the types specified by the
> +                         nocompress extension will be treated as special cases and will not be compressed.
> +                         Don't allow use '*' to specifie all file in nocompress extension.
>  compress_chksum          Support verifying chksum of raw data in compressed cluster.
>  compress_mode=%s         Control file compression mode. This supports "fs" and "user"
>                           modes. In "fs" mode (default), f2fs does automatic compression
> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> index 87d734f5589d..3d5d28a2568f 100644
> --- a/fs/f2fs/f2fs.h
> +++ b/fs/f2fs/f2fs.h
> @@ -150,8 +150,10 @@ struct f2fs_mount_info {
>  	unsigned char compress_level;		/* compress level */
>  	bool compress_chksum;			/* compressed data chksum */
>  	unsigned char compress_ext_cnt;		/* extension count */
> +	unsigned char nocompress_ext_cnt;	/* nocompress extension count */
>  	int compress_mode;			/* compression mode */
>  	unsigned char extensions[COMPRESS_EXT_NUM][F2FS_EXTENSION_LEN];	/* extensions */
> +	unsigned char noextensions[COMPRESS_EXT_NUM][F2FS_EXTENSION_LEN];	/* extensions */
>  };
> 
>  #define F2FS_FEATURE_ENCRYPT		0x0001
> diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
> index 405d85dbf9f1..84ca322a22ee 100644
> --- a/fs/f2fs/namei.c
> +++ b/fs/f2fs/namei.c
> @@ -279,14 +279,16 @@ static void set_compress_inode(struct f2fs_sb_info *sbi, struct inode *inode,
>  				const unsigned char *name)
>  {
>  	__u8 (*extlist)[F2FS_EXTENSION_LEN] = sbi->raw_super->extension_list;
> +	unsigned char (*noext)[F2FS_EXTENSION_LEN] = F2FS_OPTION(sbi).noextensions;
>  	unsigned char (*ext)[F2FS_EXTENSION_LEN];
> -	unsigned int ext_cnt = F2FS_OPTION(sbi).compress_ext_cnt;
> +	unsigned char ext_cnt = F2FS_OPTION(sbi).compress_ext_cnt;
> +	unsigned char noext_cnt = F2FS_OPTION(sbi).nocompress_ext_cnt;
>  	int i, cold_count, hot_count;
> 
>  	if (!f2fs_sb_has_compression(sbi) ||
> -		is_inode_flag_set(inode, FI_COMPRESSED_FILE) ||
>  		F2FS_I(inode)->i_flags & F2FS_NOCOMP_FL ||
> 

Re: [f2fs-dev] [PATCH v2 00/40] Use ASCII subset instead of UTF-8 alternate symbols

2021-05-14 Thread Mauro Carvalho Chehab
On Wed, 12 May 2021 18:07:04 +0100, David Woodhouse wrote:

> On Wed, 2021-05-12 at 14:50 +0200, Mauro Carvalho Chehab wrote:
> > Such conversion tools - plus some text editor like LibreOffice  or similar  
> > - have
> > a set of rules that turns some typed ASCII characters into UTF-8 
> > alternatives,
> > for instance converting commas into curly commas and adding non-breakable
> > spaces. All of those are meant to produce better results when the text is
> > displayed in HTML or PDF formats.  
> 
> And don't we render our documentation into HTML or PDF formats? 

Yes.

> Are
> some of those non-breaking spaces not actually *useful* for their
> intended purpose?

No.

The thing is: non-breaking space can cause a lot of problems.

We even had to disable Sphinx's use of non-breaking spaces for
PDF output, as it was causing bad LaTeX/PDF output.

See commit 3b4c963243b1 ("docs: conf.py: adjust the LaTeX document output").

The aforementioned patch disables Sphinx's default behavior of
using NON-BREAKABLE SPACE on literal blocks and strings, via this
special setting: "parsedliteralwraps=true".
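
In conf.py terms, the kind of tweak involved looks roughly like the fragment
below (illustrative only; the actual commit sets a few more sphinxsetup options):

	# Sphinx LaTeX output: avoid NON-BREAKABLE SPACE inside parsed-literal blocks
	latex_elements = {
	    'sphinxsetup': 'parsedliteralwraps=true',
	}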

When NON-BREAKABLE SPACEs were used in the PDF output, several parts of
the media uAPI docs violated the document margins by far,
causing text to be truncated.

So, please **don't add NON-BREAKABLE SPACE**, unless you test
(and keep testing from time to time) that the output in all
formats properly supports it across Sphinx versions.

-

Also, most of those came from conversion tools, together with other
eccentricities, like the usage of the U+FEFF (BOM) character at the
start of some documents. The remaining ones seem to have come from
cut-and-paste.

For instance, bibliographic references (there are a couple of
those in the media docs) sometimes have NON-BREAKABLE SPACEs. I'm pretty
sure that those came from cut-and-pasting the document titles
from the original PDF documents or web pages that are
referenced.

> > While it is perfectly fine to use UTF-8 characters in Linux, and especially in
> > the documentation, it is better to stick to the ASCII subset in this
> > particular case, due to a couple of reasons:
> > 
> > 1. it makes life easier for tools like grep;  
> 
> Barely, as noted, because of things like line feeds.

You can use grep with "-z" to search for multi-line strings(*), like:

	$ grep -Pzl 'grace period started,\s*then' $(find Documentation/ -type f)
Documentation/RCU/Design/Data-Structures/Data-Structures.rst

(*) Unfortunately, while "git grep" also has a "-z" flag, it
seems that this is (currently?) broken with regard to handling multi-line matches:

$ git grep -Pzl 'grace period started,\s*then'
$

> > 2. they are easier to edit with some commonly used text/source
> > code editors.
> 
> That is nonsense. Any but the most broken and/or anachronistic
> environments and editors will be just fine.

Not really.

I do use a lot of UTF-8 here, as I type texts in Portuguese, but I rely
on the US-intl keyboard settings, which allow me to type "'a" for á.
However, there's no shortcut for non-Latin UTF-8 codes, as far as I know.

So, if I would need to type a curly comma on the text editors I normally
use for development (vim, nano, kate), I would need to cut-and-paste
it from somewhere[1].

[1] If I have a table with UTF-8 codes handy, I could type the UTF-8
number manually... However, it seems that this is currently broken
at least on Fedora 33 (with Mate Desktop and US intl keyboard with
dead keys).

Here, Ctrl+Shift+U is not working. No idea why. I haven't
tested it for *years*, as I didn't see any reason why I would
need to type UTF-8 characters by number until we started
this thread.
 
In practice, in the very rare cases where I needed to write
non-Latin UTF-8 chars (maybe once a year or so, like when I
would need to use a Greek letter or some weird symbol), the chances
are high that I wouldn't remember its UTF-8 code.

So, if I need to spend time searching for a specific symbol, after
finding it, I just cut-and-paste it.

But even in the best-case scenario, where I know the UTF-8 code and
Ctrl+Shift+U works, if I wanted to use, for instance, a curly
comma, the keystroke sequence would be:

	<Ctrl+Shift+U>201csome string<Ctrl+Shift+U>201d

That's a lot harder to type, and has a higher chance of
mistakenly adding a wrong symbol, than just typing:

	"some string"

Knowing that both will produce *exactly* the same output, why
should I bother doing it the hard way?

-

Now, I'm not arguing that you can't use whatever UTF-8 symbol you
want on your docs. I'm just saying that, now that the conversion
is over and a lot of documents ended up getting some UTF-8 characters
by accident, it is time for a cleanup.
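
For the cleanup itself, listing the files that still contain any non-ASCII
byte is a one-liner (GNU grep with -P support assumed); the legitimate uses
then need manual review:

	$ grep -rPl '[^\x00-\x7F]' Documentation/process/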

Thanks,
Mauro




Re: [f2fs-dev] [PATCH v3] f2fs: compress: add nocompress extensions support

2021-05-14 Thread Chao Yu

On 2021/5/7 11:05, Fengnan Chang wrote:

When we create a directory with compression enabled, all files written into
the directory will try to be compressed. But sometimes we may know that a new
file cannot meet the compression ratio requirements.
We need a nocompress extension to skip those files and avoid the unnecessary
compression attempt.


Could you please elaborate on the priority of comp_ext, no_comp_ext, dir_flag,
comp_file_flag and no_comp_file_flag here and in f2fs.rst as well?



Signed-off-by: Fengnan Chang 
---
  Documentation/filesystems/f2fs.rst |  8 +++
  fs/f2fs/f2fs.h |  2 +
  fs/f2fs/namei.c| 18 +--
  fs/f2fs/super.c| 79 +-
  4 files changed, 103 insertions(+), 4 deletions(-)

diff --git a/Documentation/filesystems/f2fs.rst b/Documentation/filesystems/f2fs.rst
index 63c0c49b726d..f9248a36cd53 100644
--- a/Documentation/filesystems/f2fs.rst
+++ b/Documentation/filesystems/f2fs.rst
@@ -281,6 +281,14 @@ compress_extension=%s   Support adding specified extension, so that f2fs can enab
                         For other files, we can still enable compression via ioctl.
                         Note that, there is one reserved special extension '*', it
                         can be set to enable compression for all files.
+nocompress_extension=%s  Support adding specified extension, so that f2fs can disable
+                        compression on those corresponding files, just contrary to compression extension.
+                        If you know exactly which files cannot be compressed, you can use this.
+                        The same extension name can't appear in both compress and nocompress
+                        extension at the same time.
+                        If the compress extension specifies all files, the types specified by the
+                        nocompress extension will be treated as special cases and will not be compressed.
+                        Don't allow use '*' to specifie all file in nocompress extension.
 compress_chksum         Support verifying chksum of raw data in compressed cluster.
 compress_mode=%s        Control file compression mode. This supports "fs" and "user"
                         modes. In "fs" mode (default), f2fs does automatic compression
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 87d734f5589d..3d5d28a2568f 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -150,8 +150,10 @@ struct f2fs_mount_info {
 	unsigned char compress_level;		/* compress level */
 	bool compress_chksum;			/* compressed data chksum */
 	unsigned char compress_ext_cnt;		/* extension count */
+	unsigned char nocompress_ext_cnt;	/* nocompress extension count */
 	int compress_mode;			/* compression mode */
 	unsigned char extensions[COMPRESS_EXT_NUM][F2FS_EXTENSION_LEN];	/* extensions */
+	unsigned char noextensions[COMPRESS_EXT_NUM][F2FS_EXTENSION_LEN];	/* extensions */
 };
 
 #define F2FS_FEATURE_ENCRYPT		0x0001
diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
index 405d85dbf9f1..84ca322a22ee 100644
--- a/fs/f2fs/namei.c
+++ b/fs/f2fs/namei.c
@@ -279,14 +279,16 @@ static void set_compress_inode(struct f2fs_sb_info *sbi, struct inode *inode,
 				const unsigned char *name)
 {
 	__u8 (*extlist)[F2FS_EXTENSION_LEN] = sbi->raw_super->extension_list;
+	unsigned char (*noext)[F2FS_EXTENSION_LEN] = F2FS_OPTION(sbi).noextensions;
 	unsigned char (*ext)[F2FS_EXTENSION_LEN];
-	unsigned int ext_cnt = F2FS_OPTION(sbi).compress_ext_cnt;
+	unsigned char ext_cnt = F2FS_OPTION(sbi).compress_ext_cnt;
+	unsigned char noext_cnt = F2FS_OPTION(sbi).nocompress_ext_cnt;
 	int i, cold_count, hot_count;
 
 	if (!f2fs_sb_has_compression(sbi) ||
-		is_inode_flag_set(inode, FI_COMPRESSED_FILE) ||
 		F2FS_I(inode)->i_flags & F2FS_NOCOMP_FL ||
-		!f2fs_may_compress(inode))
+		!f2fs_may_compress(inode) ||
+		(!ext_cnt && !noext_cnt))
 		return;
 
 	down_read(&sbi->sb_lock);
@@ -303,6 +305,16 @@ static void set_compress_inode(struct f2fs_sb_info *sbi, struct inode *inode,
 
 	up_read(&sbi->sb_lock);
 
+	for (i = 0; i < noext_cnt; i++) {
+		if (is_extension_exist(name, noext[i])) {
+			f2fs_disable_compressed_file(inode);
+			return;
+		}
+	}
+
+	if (is_inode_flag_set(inode, FI_COMPRESSED_FILE))
+		return;
+
 	ext = F2FS_OPTION(sbi).extensions;
 
 	for (i = 0; i < ext_cnt; i++) {
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index 5020152aa8fc..865191339625 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -148,6 +148,7 @@ enum {
Opt_compress_algorithm,