Re: Mass-Hardlinking Oops

2009-10-12 Thread Tomasz Chmielewski

Yan, Zheng wrote:


What is the reason for the limit, and is there any chance of increasing
it to something more reasonable as Mikhail suggested?


The limit is imposed by the format of inode back references. We can
get rid of the limit, but it requires a disk format change.


Please do get rid of this limit, it's ridiculously small.

Of course, not necessarily right now, but when you introduce some other 
changes needing a disk format change, please think of removing the hard 
link limit as well.



--
Tomasz Chmielewski
http://wpkg.org



soft lockup when leaf and node bigger than 4KB

2009-10-12 Thread Zhang Jingwang
I made a btrfs filesystem with 'mkfs.btrfs -m single -l 16384 -n 16384 /dev/xxx'.
After mounting it, I ran a test script to create lots of files, and then a soft
lockup occurred.

After digging into the source code, I think there is a problem with
bio->bi_end_io.

When a bio is done, its end_io function is called; it does the following things:
1: Put the bio onto an async thread's queue to wait for checksumming. (end_workqueue_bio)
2: Check whether the bio can be checksummed. If not, put it back on the
wait queue. (end_workqueue_fn)
3: If it can be checksummed, call end_bio_extent_readpage().
4: Checksum the extent, call set_extent_uptodate(), and set the uptodate flag of
the pages belonging to the bio. (end_bio_extent_readpage)

But when checking whether the bio can be checksummed in step 2, it calls
extent_range_uptodate() and examines the uptodate flags of these pages,
which are only set in step 4. So I think there is an endless loop here.

When we say a page is uptodate, we mean its checksum is correct, but we
must ensure that all of the pages belonging to a btree node are uptodate
in order to calculate its checksum. So it's a logical paradox.
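
To make the circular dependency easier to see, here is a minimal user-space
model of the pattern described above. It is only an illustration: the
structures and the PAGES_PER_NODE value are made up, and the function names
merely echo the ones mentioned above; this is not btrfs code.

/* Minimal user-space model of the dependency described above.
 * This illustrates the pattern only; it is NOT btrfs code. */
#include <stdbool.h>
#include <stdio.h>

#define PAGES_PER_NODE 4              /* e.g. a 16 KiB node on 4 KiB pages */

static bool page_uptodate[PAGES_PER_NODE];

/* Step 2: the "can we checksum yet?" test, modelled on the
 * extent_range_uptodate() check: true only if every page of the node
 * is already marked uptodate. */
static bool range_uptodate(void)
{
    for (int i = 0; i < PAGES_PER_NODE; i++)
        if (!page_uptodate[i])
            return false;
    return true;
}

/* Step 4: the only place the uptodate flags are ever set is after the
 * checksum has been verified. */
static void checksum_and_mark_uptodate(void)
{
    for (int i = 0; i < PAGES_PER_NODE; i++)
        page_uptodate[i] = true;
}

int main(void)
{
    long requeues = 0;

    /* Steps 2-3: keep requeueing until the range is uptodate.  That
     * never happens, because step 4 is what would make it so. */
    while (!range_uptodate()) {
        if (++requeues > 10 * 1000 * 1000) {  /* stand-in for the soft lockup */
            printf("still not uptodate after %ld requeues\n", requeues);
            return 1;
        }
    }
    checksum_and_mark_uptodate();
    printf("checksummed\n");
    return 0;
}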

I am new to btrfs, so I may be mistaken about this problem.
Any comments are welcome, and thanks for your time!

-- 
Zhang Jingwang
National Research Centre for High Performance Computers
Institute of Computing Technology, Chinese Academy of Sciences
No. 6, South Kexueyuan Road, Haidian District
Beijing, China


Re: Mass-Hardlinking Oops

2009-10-12 Thread Chris Mason
On Mon, Oct 12, 2009 at 10:07:43AM +0200, Tomasz Chmielewski wrote:
 Yan, Zheng wrote:
 
 What is the reason for the limit, and is there any chance of increasing
 it to something more reasonable as Mikhail suggested?
 
 The limit is imposed by the format of inode back references. We can
 get rid of the limit, but it requires a disk format change.
 
 Please do get rid of this limit, it's ridiculously small.
 
 Of course, not necessarily right now, but when you introduce some
 other changes needing disk format change, please think of removing
 the hard link limit as well.

Please keep in mind this is only a limit on the number of links to a
single file where the links and the file are all in the same directory.
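
A minimal user-space sketch of what that means (to be run inside a btrfs
mount; the directory names and the loop count of 1000 are made up for the
example): links created in the same directory as the file eventually fail
with EMLINK, while a link to the same file from a sibling directory is
still accepted.

/* Sketch of the point above: run it inside a btrfs mount.  Links created
 * in the same directory as the file stop with EMLINK, but a link to the
 * same file from a sibling directory is still accepted.  Paths and the
 * loop count are made up for the example. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    char name[64];
    int i;

    mkdir("samedir", 0755);
    mkdir("otherdir", 0755);
    close(open("samedir/file", O_CREAT | O_WRONLY, 0644));

    for (i = 1; i <= 1000; i++) {
        snprintf(name, sizeof(name), "samedir/link%d", i);
        if (link("samedir/file", name) != 0) {
            printf("same-directory link %d failed: %s\n",
                   i, strerror(errno));          /* EMLINK expected here */
            break;
        }
    }

    if (link("samedir/file", "otherdir/link") == 0)
        printf("link from a different directory still works\n");
    else
        printf("cross-directory link failed: %s\n", strerror(errno));

    return 0;
}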

-chris


Re: ENOSPC at 94% full -- and causing BUGs elsewhere?

2009-10-12 Thread Hugo Mills
On Sun, Oct 04, 2009 at 08:06:30AM -0400, Chris Mason wrote:
 On Sat, Oct 03, 2009 at 05:55:32PM -0400, Josef Bacik wrote:
  On Sat, Oct 03, 2009 at 01:21:09PM +0100, Hugo Mills wrote:
  I've just had the following on my home server. I believe that it's
   btrfs that's responsible, as the machine wasn't doing much other than
   reading/writing on a btrfs filesystem. The process that was doing so
   is now stuck in D+ state, and can't be killed. The timing of the oops
   at the end is also suggestive of being involved in the same incident.
   This is the only btrfs filesystem on the machine.
  
  Patches have gone to Linus to fix the enospc problems.  You can try
  running the enospc branch of Chris's git tree and it should behave
  better for you.  Thanks,
 
 The right tree for this is the master branch of btrfs-unstable for
 2.6.31.

   Thanks, Josef and Chris. I've now found the time to check out and
build the btrfs-unstable tree, and it is indeed handling the ENOSPC
condition much more cleanly.

   However, it seems to have got into a position where df reports lots of
free space (over 10% of the size of the volume -- 185 GiB free of 1474 GiB
total), but the filesystem still refuses to accept any writes. Do you have
any suggestions for what I could try?

   The original ENOSPC error I reported above happened at
approximately 85/1370 GiB free; I then added 100 GiB more space
online, had another failure (same kernel: 2.6.31 mainline), and then
rebooted into master from btrfs-unstable.

   Just for the record, I'm now using this kernel:

Linux vlad 2.6.31-47417-gac6889c #1 Sun Oct 11 14:27:06 BST 2009 x86_64 
GNU/Linux

   Hugo.

-- 
=== Hugo Mills: h...@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
  --- I'll take your bet, but make it ten thousand francs. I'm only ---  
   a _poor_ corrupt official.




Re: Mass-Hardlinking Oops

2009-10-12 Thread Goffredo Baroncelli
On Monday 12 October 2009, jim owens wrote:
 Pär Andersson wrote:
  I just ran into the max hard link per directory limit, and remembered
  this thread. I get EMLINK when trying to create more than 311 (not 272)
  links in a directory, so at least the BUG() is fixed.
  
  What is the reason for the limit, and is there any chance of increasing
  it to something more reasonable as Mikhail suggested?
  
  For comparison I tried to create 200k hardlinks to the same file in
  the same directory on btrfs, ext4, reiserfs and xfs:
 
 what real-world application uses and needs this many hard links?
 
 jim

For me, a limit of 311 hard links to the same file in the same directory is
not that high. I don't know of any software which needs so many hard links,
but it is easy to find some similar cases.

For example under my /usr/bin I have 478 _soft links_ to _different_ 
files.

$ find /usr/bin/ -type l | wc -l
478

When a directory is created, its .. entry is a hard link to the parent 
directory. For example the /usr/share/doc directory has 2828 hard links 
because it has 2826 child directories.

$ ls -ld /usr/share/doc
drwxr-xr-x 2828 root root 12288 2009-08-20 19:03 /usr/share/doc
$ ls -ld /usr/share/doc/* | egrep ^d | wc -l
2826

These are different cases. But the limit of 311 hard links to the same file
in the same directory may be too strict. Not now, but in the next format
change I think it would be useful to remove this limit.

BR
Goffredo

-- 
gpg key@ keyserver.linux.it: Goffredo Baroncelli (ghigo) kreijackATinwind.it
Key fingerprint = 4769 7E51 5293 D36C 814E  C054 BF04 F161 3DC5 0512


Re: Mass-Hardlinking Oops

2009-10-12 Thread John Dong


On Oct 12, 2009, at 12:16 PM, jim owens wrote:


Pär Andersson wrote:

I just ran into the max hard link per directory limit, and remembered
this thread. I get EMLINK when trying to create more than 311 (not 272)
links in a directory, so at least the BUG() is fixed.

What is the reason for the limit, and is there any chance of increasing
it to something more reasonable as Mikhail suggested?

For comparison I tried to create 200k hardlinks to the same file in
the same directory on btrfs, ext4, reiserfs and xfs:


what real-world application uses and needs this many hard links?

jim


I don't think that's a good counterargument for why this is not a bug.

I can't think of any off the top of my head for Linux, but on OS X, Time
Machine can definitely create 200+ hardlinks.



Re: Mass-Hardlinking Oops

2009-10-12 Thread jim owens

Goffredo Baroncelli wrote:
I don't know of any software which needs so many hard links, but it is easy
to find some similar cases.


For example under my /usr/bin I have 478 _soft links_ to _different_ 
files.


Hard links are not used in place of soft links... soft links are
a different and preferred addition to POSIX-style systems.

So don't think we need more hard links just because you find
apps using soft links.

When a directory is created, its .. entry is a hard link to the parent 
directory. For example the /usr/share/doc directory has 2828 hard links 
because it has 2826 child directories.


Max subdirectories per directory is again a different feature.

btrfs does not use hard link count for subdirectories.

That association of (hard link count - 2) == max subdirs is only a legacy
of the design of some filesystems such as UFS.
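
One quick way to see the difference (a small sketch, nothing btrfs-specific
in the code; pass it any directory path): print a directory's st_nlink next
to the number of subdirectories it actually contains. On UFS-style
filesystems the two are tied together as nlink - 2; on btrfs, as far as I
can tell, the directory link count simply stays at 1 and does not track
subdirectories.

/* Print a directory's link count next to its number of subdirectories.
 * On UFS-style filesystems st_nlink == 2 + subdirs; on btrfs the link
 * count is not used this way (it stays at 1, as far as I can tell). */
#include <dirent.h>
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : ".";
    struct stat st;
    struct dirent *de;
    DIR *dir;
    long subdirs = 0;

    if (stat(path, &st) != 0 || !(dir = opendir(path))) {
        perror(path);
        return 1;
    }
    while ((de = readdir(dir)) != NULL) {
        char child[4096];
        struct stat cst;
        snprintf(child, sizeof(child), "%s/%s", path, de->d_name);
        if (stat(child, &cst) == 0 && S_ISDIR(cst.st_mode))
            subdirs++;
    }
    closedir(dir);
    subdirs -= 2;   /* don't count the "." and ".." entries themselves */

    printf("%s: st_nlink=%lu, subdirectories=%ld\n",
           path, (unsigned long)st.st_nlink, subdirs);
    return 0;
}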

These are different cases. But the limit of 311 hard links to the same file
in the same directory may be too strict. Not now, but in the next format
change I think it would be useful to remove this limit.


I would agree if the cost were zero, but it increases a field size, so it
would be nice to have a justified need.  But it is Chris's call.

jim


Re: Mass-Hardlinking Oops

2009-10-12 Thread Tomasz Chmielewski

jim owens wrote:

Pär Andersson wrote:

I just ran into the max hard link per directory limit, and remembered
this thread. I get EMLINK when trying to create more than 311 (not 272)
links in a directory, so at least the BUG() is fixed.

What is the reason for the limit, and is there any chance of increasing
it to something more reasonable as Mikhail suggested?

For comparison I tried to create 200k hardlinks to the same file in
the same directory on btrfs, ext4, reiserfs and xfs:


what real-world application uses and needs this many hard links?


The number of links depends on the length of a filename.

Is _13_ (yes, thirteen) hardlinks in a directory a big number? I don't think so.

On systems storing user data, I regularly see user files with maximum-length
names: mostly files and/or directories saved by users from a web browser - the
files take their names from the website title, and these titles can be really
long. Consider that many parts of the world use multi-byte UTF-8 characters,
so the maximum file name length is easily reached.
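
To make that concrete, here is a back-of-the-envelope estimate. It is only a
sketch: it assumes (my reading of this thread, not the actual code) that all
back references for one (inode, parent directory) pair have to fit into a
single tree item of at most roughly one leaf, and that each reference stores
a small fixed header plus the link name. The constants below are guesses,
not values taken from the format.

/* Back-of-the-envelope estimate of the per-directory hard link limit.
 * ASSUMPTIONS (not taken from the code): all back references for one
 * (inode, parent directory) pair share a single tree item, each
 * reference is a small fixed header plus the link name, and one item
 * cannot be larger than the free space of an otherwise empty 4 KiB leaf. */
#include <stdio.h>

#define LEAF_SIZE        4096   /* default btrfs leaf size */
#define ASSUMED_OVERHEAD  128   /* leaf header + item header, rough guess */
#define ASSUMED_REF_HDR    10   /* assumed per-reference fixed bytes */

static int estimate_max_links(int name_len)
{
    int usable = LEAF_SIZE - ASSUMED_OVERHEAD;
    return usable / (ASSUMED_REF_HDR + name_len);
}

int main(void)
{
    printf("short (3-byte) names: ~%d links per directory\n",
           estimate_max_links(3));
    printf("255-byte names:       ~%d links per directory\n",
           estimate_max_links(255));
    return 0;
}

With those guesses the estimate comes out near 300 links for very short names
and around 14 for 255-byte names, which is in the same ballpark as the 311
and 13 reported in this thread.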


Below, we hit this limit with just 13 hardlinks - it's not for me to decide
whether 13 already counts as this many hard links.


cd /tmp
dd if=/dev/zero of=btrfs.img bs=1M count=400
mkfs.btrfs btrfs.img
mount -o loop btrfs.img /mnt/btrfs/
touch a
i=1 ; while [ $i -ne 40 ] ; do
ln $LNFILE $i$LNFILE
echo $i
i=$((i+1))
done
1
2
3
4
5

6
7
8
9
10
11
12
13
ln: creating hard link „14a“ ⇒ „a“: No such file or directory
14
ln: creating hard link „15a“ ⇒ „a“: No such file or directory
15
ln: creating hard link „16a“ ⇒ „a“: No such file or directory
16

Message from sysl...@dom at Mon Oct 12 22:31:49 2009 ...
dom klogd: [ 9657.948456] Oops:  [#1] SMP
Killed
17

Message from sysl...@dom at Mon Oct 12 22:31:49 2009 ...
dom klogd: [ 9657.948459] last sysfs file: 
/sys/devices/system/cpu/cpu7/cache/index2/shared_cpu_map

Message from sysl...@dom at Mon Oct 12 22:31:49 2009 ...
dom klogd: [ 9657.948574] Stack:

Message from sysl...@dom at Mon Oct 12 22:31:49 2009 ...
dom klogd: [ 9657.948589] Call Trace:



--
Tomasz Chmielewski
http://wpkg.org


Re: Mass-Hardlinking Oops

2009-10-12 Thread berk walker

I believe one hard-link should be the maximum.
berk
