Re: memory leak in 2.6.11-rc2

2005-01-25 Thread Andrea Arcangeli
On Tue, Jan 25, 2005 at 02:31:03PM +0100, Andreas Gruenbacher wrote:
> On Tue, 2005-01-25 at 13:51, Andrea Arcangeli wrote:
> > If somebody could fix the kernel CVS I could have a look at the
> > interesting changesets between 2.6.11-rc1-bk8 and 2.6.11-rc2.
> 
> What's not okay?

I already prepared a separate, detailed bug report. I'm reproducing one
last time before sending it, after a "su - nobody" to be sure it's not a
local problem in my environment that might have changed, but I'm 99%
sure it'll reproduce just fine in a fresh account too.

Are you using cvsps at all?


Re: memory leak in 2.6.11-rc2

2005-01-25 Thread Andreas Gruenbacher
On Tue, 2005-01-25 at 13:51, Andrea Arcangeli wrote:
> If somebody could fix the kernel CVS I could have a look at the
> interesting changesets between 2.6.11-rc1-bk8 and 2.6.11-rc2.

What's not okay?

-- 
Andreas Gruenbacher <[EMAIL PROTECTED]>
SUSE Labs, SUSE LINUX GMBH



Re: memory leak in 2.6.11-rc2

2005-01-25 Thread Andrea Arcangeli
On Mon, Jan 24, 2005 at 10:45:47PM -0500, Dave Jones wrote:
> On Tue, Jan 25, 2005 at 02:19:24PM +1100, Andrew Tridgell wrote:
>  > The problem I've hit now is a severe memory leak. I have applied the
>  > patch from Linus for the leak in free_pipe_info(), and still I'm
>  > leaking memory at the rate of about 100Mbyte/minute.
>  > I've tested with both 2.6.11-rc2 and with 2.6.11-rc1-mm2, both with
>  > the pipe leak fix. The setup is:
> 
> That's a little more extreme than what I'm seeing, so it may be
> something else, but my firewall box needs rebooting every
> few days. It leaks around 50MB a day for some reason.
> Given it's not got a lot of ram, after 4-5 days or so, it's
> completely exhausted its swap too.
> 
> It's currently on a 2.6.10-ac kernel, so it's entirely possible that
> we're not looking at the same issue, though it could be something
> that's been there for a while if your workload makes it appear
> quicker than a firewall/ipsec gateway would.
> Do you see the same leaks with an earlier kernel?
> 
> post OOM (when there was about 2K free after named got oom-killed)
> this is what slabinfo looked like..
> 
> dentry_cache      1502   3775    160   25    1 : tunables  120   60    0 : slabdata    151    151      0
> vm_area_struct    1599   2021     84   47    1 : tunables  120   60    0 : slabdata     43     43      0
> size-128          3431   6262    128   31    1 : tunables  120   60    0 : slabdata    202    202      0
> size-64           4352   4575     64   61    1 : tunables  120   60    0 : slabdata     75     75      0
> avtab_node        7073   7140     32  119    1 : tunables  120   60    0 : slabdata     60     60      0
> size-32           7256   7616     32  119    1 : tunables  120   60    0 : slabdata     64     64      0

What is avtab_node? There's no such thing in my kernel. But the above
can be OK. Can you show meminfo too, after the oom kill?

Just another datapoint: my firewall runs a kernel based on 2.6.11-rc1-bk8 with
all the needed oom fixes, and I've had no problems on it yet. I ran it oom
and this is what I get after the oom:

athlon:/home/andrea # free
             total       used       free     shared    buffers     cached
Mem:        511136      50852     460284          0        572      15764
-/+ buffers/cache:      34516     476620
Swap:      1052248          0    1052248
athlon:/home/andrea # 

The above is sane, 34M is very reasonable for what's loaded there
(there's the X server running, named too, and various other non-standard
daemons; one even has a virtual size of >100M, so it's not a tiny thing),
so I'm quite sure I'm not hitting a memleak, at least not on the
firewall. No ipsec on it, btw, and it's a pure IDE box without anything
special, just quite a few NICs and USB usermode running all the time.

athlon:/home/andrea # uptime
  1:34pm  up 2 days 12:08,  1 user,  load average: 0.98, 1.13, 0.54
athlon:/home/andrea # iptables -L -v |grep -A2 FORWARD
Chain FORWARD (policy ACCEPT 65 packets, 9264 bytes)
 pkts bytes target     prot opt in     out     source       destination
3690K 2321M block      all  --  any    any     anywhere     anywhere

athlon:/home/andrea # 

So if there's a memleak in rc1-bk8, it's probably not in the core of the
kernel, but in some driver or things like ipsec. Either that or it broke
after 2.6.11-rc1-bk8. The kernel I'm running is quite heavily patched
too, but I'm not aware of any memleak fix in the additional patches.

Anyway I'll try again in a few days to verify it goes back down again to
exactly 34M of anonymous/random and 15M of cache.

No apparent problem on my desktop system either, it's running the same
kernel with different config.

If somebody could fix the kernel CVS I could have a look at the
interesting changesets between 2.6.11-rc1-bk8 and 2.6.11-rc2.


Re: memory leak in 2.6.11-rc2

2005-01-25 Thread Nick Piggin
Andrew Tridgell wrote:
> Andrew,
>
>  > So what you should do before generating the leak tool output is to put
>  > heavy memory pressure on the machine to try to get it to free up as much of
>  > that pagecache as possible.  bzero(malloc(lots)) will do it - create a real
>  > swapstorm, then do swapoff to kill remaining swapcache as well.
>
> As you saw when you logged into the machine earlier tonight, when you
> suspend the dbench processes and run a memory filler the memory is
> reclaimed.
>
> I still think it's a bug though, as the oom killer is being triggered
> when it shouldn't be. I have 4G of ram in this machine, and I'm only
> running a couple of hundred processes that should be using maybe 500M
> in total, so for the oom killer to kick in might mean that the memory
> isn't being reclaimed under normal memory pressure. Certainly a ps
> shows no process using more than a few MB.
>
> The oom killer report is below. This is with 2.6.11-rc2, with the pipe
> leak fix, and the pgown monitoring patch. It was running one nbench of
> size 50 and one dbench of size 40 at the time.

There are various OOM killer improvements and fixes that have gone
into Andrew's kernel tree which should be included for 2.6.11.
I don't think the OOM killer was ever perfect in 2.6, but recent
tinkering in mm/ probably aggravated it. *blush*
Here is another small OOM killer improvement. Previously we needed
to reclaim SWAP_CLUSTER_MAX pages in a single pass. That should be
changed so that we need only reclaim that many pages during the
entire try_to_free_pages run, without going OOM.
Andrea? Andrew? Look OK?



---

 linux-2.6-npiggin/mm/vmscan.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff -puN mm/vmscan.c~oom-helper mm/vmscan.c
--- linux-2.6/mm/vmscan.c~oom-helper	2005-01-25 23:04:28.000000000 +1100
+++ linux-2.6-npiggin/mm/vmscan.c	2005-01-25 23:05:06.000000000 +1100
@@ -914,12 +914,12 @@ int try_to_free_pages(struct zone **zone
 			sc.nr_reclaimed += reclaim_state->reclaimed_slab;
 			reclaim_state->reclaimed_slab = 0;
 		}
-		if (sc.nr_reclaimed >= SWAP_CLUSTER_MAX) {
+		total_scanned += sc.nr_scanned;
+		total_reclaimed += sc.nr_reclaimed;
+		if (total_reclaimed >= SWAP_CLUSTER_MAX) {
 			ret = 1;
 			goto out;
 		}
-		total_scanned += sc.nr_scanned;
-		total_reclaimed += sc.nr_reclaimed;
 
 		/*
 		 * Try to write back as many pages as we just scanned.  This



Re: memory leak in 2.6.11-rc2

2005-01-25 Thread Andrew Tridgell
Andrew,

 > So what you should do before generating the leak tool output is to put
 > heavy memory pressure on the machine to try to get it to free up as much of
 > that pagecache as possible.  bzero(malloc(lots)) will do it - create a real
 > swapstorm, then do swapoff to kill remaining swapcache as well.

As you saw when you logged into the machine earlier tonight, when you
suspend the dbench processes and run a memory filler the memory is
reclaimed.

I still think it's a bug though, as the oom killer is being triggered
when it shouldn't be. I have 4G of ram in this machine, and I'm only
running a couple of hundred processes that should be using maybe 500M
in total, so for the oom killer to kick in might mean that the memory
isn't being reclaimed under normal memory pressure. Certainly a ps
shows no process using more than a few MB.

The oom killer report is below. This is with 2.6.11-rc2, with the pipe
leak fix, and the pgown monitoring patch. It was running one nbench of
size 50 and one dbench of size 40 at the time.

If this isn't a leak, then it would also be good to fix /usr/bin/free
so the -/+ buffers line becomes meaningful again. With the machine
completely idle (just a sshd running) I see:

             total       used       free     shared    buffers     cached
Mem:       3701184    3483188     217996          0     130092    1889440
-/+ buffers/cache:    1463656    2237528

which looks like a leak. This persists even after the disks are
unmounted. After filling/freeing memory I see:

             total       used       free     shared    buffers     cached
Mem:       3701184      28176    3673008          0        520       3764
-/+ buffers/cache:      23892    3677292

so it can recover it, but under normal usage it doesn't before the oom
killer kicks in.
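
As an aside, the "-/+ buffers/cache" line is pure arithmetic over
/proc/meminfo, so it's easy to sanity-check what free is claiming. A
minimal sketch of the computation, assuming the 2.6-era
MemTotal/MemFree/Buffers/Cached fields (values in kB):

/* Minimal sketch of how free(1) derives the "-/+ buffers/cache" line
 * from /proc/meminfo; field names are the 2.6-era ones, values in kB. */
#include <stdio.h>

int main(void)
{
	unsigned long total = 0, memfree = 0, buffers = 0, cached = 0;
	char line[128];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f)) {
		sscanf(line, "MemTotal: %lu", &total);
		sscanf(line, "MemFree: %lu", &memfree);
		sscanf(line, "Buffers: %lu", &buffers);
		sscanf(line, "Cached: %lu", &cached);
	}
	fclose(f);

	/* "used" with reclaimable buffers/cache subtracted, and
	 * "free" with them added back */
	printf("-/+ buffers/cache: %10lu %10lu\n",
	       total - memfree - buffers - cached,
	       memfree + buffers + cached);
	return 0;
}

If that adjusted "used" number stays high even after the pagecache has
been squeezed out, that's the figure pointing at a real leak.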

Cheers, Tridge


Jan 25 00:49:14 dev4-003 kernel: Out of Memory: Killed process 18910 (smbd).
Jan 25 00:49:14 dev4-003 kernel: oom-killer: gfp_mask=0xd0
Jan 25 00:49:14 dev4-003 kernel: DMA per-cpu:
Jan 25 00:49:14 dev4-003 kernel: cpu 0 hot: low 2, high 6, batch 1
Jan 25 00:49:14 dev4-003 kernel: cpu 0 cold: low 0, high 2, batch 1
Jan 25 00:49:14 dev4-003 kernel: cpu 1 hot: low 2, high 6, batch 1
Jan 25 00:49:14 dev4-003 kernel: cpu 1 cold: low 0, high 2, batch 1
Jan 25 00:49:14 dev4-003 kernel: cpu 2 hot: low 2, high 6, batch 1
Jan 25 00:49:14 dev4-003 kernel: cpu 2 cold: low 0, high 2, batch 1
Jan 25 00:49:14 dev4-003 kernel: cpu 3 hot: low 2, high 6, batch 1
Jan 25 00:49:14 dev4-003 kernel: cpu 3 cold: low 0, high 2, batch 1
Jan 25 00:49:14 dev4-003 kernel: Normal per-cpu:
Jan 25 00:49:14 dev4-003 kernel: cpu 0 hot: low 32, high 96, batch 16
Jan 25 00:49:14 dev4-003 kernel: cpu 0 cold: low 0, high 32, batch 16
Jan 25 00:49:14 dev4-003 kernel: cpu 1 hot: low 32, high 96, batch 16
Jan 25 00:49:14 dev4-003 kernel: cpu 1 cold: low 0, high 32, batch 16
Jan 25 00:49:14 dev4-003 kernel: cpu 2 hot: low 32, high 96, batch 16
Jan 25 00:49:14 dev4-003 kernel: cpu 2 cold: low 0, high 32, batch 16
Jan 25 00:49:14 dev4-003 kernel: cpu 3 hot: low 32, high 96, batch 16
Jan 25 00:49:14 dev4-003 kernel: cpu 3 cold: low 0, high 32, batch 16
Jan 25 00:49:14 dev4-003 kernel: HighMem per-cpu:
Jan 25 00:49:14 dev4-003 kernel: cpu 0 hot: low 32, high 96, batch 16
Jan 25 00:49:14 dev4-003 kernel: cpu 0 cold: low 0, high 32, batch 16
Jan 25 00:49:14 dev4-003 kernel: cpu 1 hot: low 32, high 96, batch 16
Jan 25 00:49:14 dev4-003 kernel: cpu 1 cold: low 0, high 32, batch 16
Jan 25 00:49:14 dev4-003 kernel: cpu 2 hot: low 32, high 96, batch 16
Jan 25 00:49:14 dev4-003 kernel: cpu 2 cold: low 0, high 32, batch 16
Jan 25 00:49:14 dev4-003 kernel: cpu 3 hot: low 32, high 96, batch 16
Jan 25 00:49:14 dev4-003 kernel: cpu 3 cold: low 0, high 32, batch 16
Jan 25 00:49:14 dev4-003 kernel:
Jan 25 00:49:14 dev4-003 kernel: Free pages:        5884kB (640kB HighMem)
Jan 25 00:49:14 dev4-003 kernel: Active:84764 inactive:806173 dirty:338229 
writeback:36 unstable:0 free:1471 slab:23814 mapped:31610 pagetables:1616
Jan 25 00:49:14 dev4-003 kernel: DMA free:284kB min:68kB low:84kB high:100kB 
active:440kB inactive:10048kB present:16384kB pages_scanned:140 
all_unreclaimable? no
Jan 25 00:49:14 dev4-003 kernel: protections[]: 0 0 0
Jan 25 00:49:14 dev4-003 kernel: Normal free:4960kB min:3756kB low:4692kB 
high:5632kB active:72072kB inactive:632096kB present:901120kB 
pages_scanned:58999 all_unreclaimable? no
Jan 25 00:49:14 dev4-003 kernel: protections[]: 0 0 0
Jan 25 00:49:14 dev4-003 kernel: HighMem free:640kB min:512kB low:640kB 
high:768kB active:266948kB inactive:2581960kB present:2850752kB pages_scanned:0 
all_unreclaimable? no
Jan 25 00:49:14 dev4-003 kernel: protections[]: 0 0 0
Jan 25 00:49:14 dev4-003 kernel: DMA: 53*4kB 3*8kB 1*16kB 1*32kB 0*64kB 0*128kB 
0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 284kB
Jan 25 00:49:14 dev4-003 kernel: Normal: 400*4kB 16*8kB 0*16kB 1*32kB 0*64kB 
1*128kB 0*256kB 2*512kB 0*1024kB 1*2048kB 0*4096kB = 4960kB
Jan 25 00:49:14 dev4-003 kernel: HighMem: 6*4kB 11*8kB 1*16kB 0*32kB 0*64kB 
2*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 640kB
Jan 25 00:49:14 dev4-003 kernel: Swap cac

Re: memory leak in 2.6.11-rc2

2005-01-24 Thread Andrew Morton
Andrew Tridgell <[EMAIL PROTECTED]> wrote:
>
>  > I'm trying the little leak tracking patch from Alexander Nyberg now.
> 
>  Here are the results (only backtraces with more than 10k counts
>  included). The leak was at 1G of memory at the time I ran this, so it's
>  safe to say 10k page allocations ain't enough to explain it :-)
> 
>  I also attach a hacked version of the pgown sort program that sorts
>  the output by count, and isn't O(n^2). It took 10 minutes to run the
>  old version :-)
> 
>  I'm guessing the leak is in the new xattr code given that is what
>  dbench and nbench were beating on. Andreas, can you look at the
>  following and see if you can spot anything?
> 
>  This was on 2.6.11-rc2 with the pipe leak patch from Linus. The
>  machine had leaked 1G of ram in 10 minutes, and was idle (only thing
>  running was sshd).
> 
>  175485 times:
>  Page allocated via order 0
>  [0xc0132258] generic_file_buffered_write+280
>  [0xc011b6a9] current_fs_time+77
>  [0xc0132a1e] __generic_file_aio_write_nolock+642
>  [0xc0132e70] generic_file_aio_write+100
>  [0xc017e586] ext3_file_write+38
>  [0xc014b7f5] do_sync_write+169
>  [0xc015f6de] fcntl_setlk64+286
>  [0xc01295a8] autoremove_wake_function+0

It would be pretty strange for plain old pagecache pages to leak in this
manner.  A few things come to mind.

- The above trace is indistinguishable from the normal situation of
  having a lot of pagecache floating about.  IOW: we don't know if the
  above pages have really leaked or not.

- It's sometimes possible for ext3 pages to remain alive (on the page
  LRU) after a truncate, but with no other references to them.  These pages
  are trivially reclaimable.  So even if you've deleted the files which the
  benchmark created, there _could_ be pages left over.  Although it would
  be unusual.

So what you should do before generating the leak tool output is to put
heavy memory pressure on the machine to try to get it to free up as much of
that pagecache as possible.  bzero(malloc(lots)) will do it - create a real
swapstorm, then do swapoff to kill remaining swapcache as well.
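
A minimal sketch of such a memory filler, along the lines of
bzero(malloc(lots)); the chunk size and loop count below are only
assumptions (on a 32-bit box a single giant malloc won't fit the
address space, so this faults the memory in pieces):

/* Swapstorm sketch: fault in more anonymous memory than the box has
 * RAM, so the VM is forced to evict pagecache and push pages out to
 * swap.  Afterwards run "swapoff -a" as root to kill the remaining
 * swapcache as well. */
#include <stdlib.h>
#include <strings.h>

#define CHUNK	(256UL * 1024 * 1024)	/* 256MB per allocation */

int main(void)
{
	int i;

	for (i = 0; i < 16; i++) {	/* 16 x 256MB = 4GB of pressure */
		char *p = malloc(CHUNK);

		if (!p)
			break;		/* address space exhausted */
		bzero(p, CHUNK);	/* touch every page */
	}
	return 0;
}

Run it until it exits (or gets oom-killed), then swapoff; anything the
leak tracker still reports after that is much more interesting.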

If we still see a lot of pages with the above trace then something really
broke.  Where does one get the appropriate dbench version?  How are you
mkfsing the filesystem?  Was mke2fs patched?  How is the benchmark being
invoked?

Thanks.


Re: memory leak in 2.6.11-rc2

2005-01-24 Thread Andrew Tridgell
 > I'm trying the little leak tracking patch from Alexander Nyberg now.

Here are the results (only backtraces with more than 10k counts
included). The leak was at 1G of memory at the time I ran this, so it's
safe to say 10k page allocations ain't enough to explain it :-)

I also attach a hacked version of the pgown sort program that sorts
the output by count, and isn't O(n^2). It took 10 minutes to run the
old version :-)
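
For illustration only (the attached pgown-fast.c is the real thing,
and the record layout here is an assumption), the whole trick is to
collect the (count, backtrace) pairs and let qsort order them, instead
of repeatedly scanning for the current maximum:

/* Hypothetical sketch of the O(n log n) approach: gather the records,
 * then qsort by count, rather than an O(n^2) selection scan. */
#include <stdio.h>
#include <stdlib.h>

struct rec {
	unsigned long count;	/* pages sharing this allocation site */
	const char *trace;	/* backtrace text for the site */
};

static int by_count_desc(const void *a, const void *b)
{
	const struct rec *ra = a, *rb = b;

	return (rb->count > ra->count) - (rb->count < ra->count);
}

int main(void)
{
	struct rec recs[] = {
		{ 19610,  "generic_file_write_nolock+151 ..." },
		{ 175485, "generic_file_buffered_write+280 ..." },
		{ 67641,  "ext3_xattr_user_get+108 ..." },
	};
	size_t i, n = sizeof(recs) / sizeof(recs[0]);

	qsort(recs, n, sizeof(recs[0]), by_count_desc);
	for (i = 0; i < n; i++)
		printf("%lu times: %s\n", recs[i].count, recs[i].trace);
	return 0;
}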

I'm guessing the leak is in the new xattr code given that is what
dbench and nbench were beating on. Andreas, can you look at the
following and see if you can spot anything?

This was on 2.6.11-rc2 with the pipe leak patch from Linus. The
machine had leaked 1G of ram in 10 minutes, and was idle (only thing
running was sshd).

Cheers, Tridge


175485 times:
Page allocated via order 0
[0xc0132258] generic_file_buffered_write+280
[0xc011b6a9] current_fs_time+77
[0xc0132a1e] __generic_file_aio_write_nolock+642
[0xc0132e70] generic_file_aio_write+100
[0xc017e586] ext3_file_write+38
[0xc014b7f5] do_sync_write+169
[0xc015f6de] fcntl_setlk64+286
[0xc01295a8] autoremove_wake_function+0

141512 times:
Page allocated via order 0
[0xc0132258] generic_file_buffered_write+280
[0xc0132a1e] __generic_file_aio_write_nolock+642
[0xc0132e70] generic_file_aio_write+100
[0xc017e586] ext3_file_write+38
[0xc014b7f5] do_sync_write+169
[0xc015f6de] fcntl_setlk64+286
[0xc01295a8] autoremove_wake_function+0
[0xc014b8d5] vfs_write+157

67641 times:
Page allocated via order 0
[0xc0132258] generic_file_buffered_write+280
[0xc014dc69] __getblk+29
[0xc011b6a9] current_fs_time+77
[0xc0132a1e] __generic_file_aio_write_nolock+642
[0xc018d368] ext3_xattr_user_get+108
[0xc0132e70] generic_file_aio_write+100
[0xc017e586] ext3_file_write+38
[0xc014b7f5] do_sync_write+169

52758 times:
Page allocated via order 0
[0xc0132258] generic_file_buffered_write+280
[0xc014dc69] __getblk+29
[0xc0132a1e] __generic_file_aio_write_nolock+642
[0xc018d368] ext3_xattr_user_get+108
[0xc0132e70] generic_file_aio_write+100
[0xc017e586] ext3_file_write+38
[0xc014b7f5] do_sync_write+169
[0xc0120c0b] kill_proc_info+47

19610 times:
Page allocated via order 0
[0xc0132258] generic_file_buffered_write+280
[0xc011b6a9] current_fs_time+77
[0xc0132a1e] __generic_file_aio_write_nolock+642
[0xc0132c3a] generic_file_aio_write_nolock+54
[0xc0132def] generic_file_write_nolock+151
[0xc011b7cb] __do_softirq+95
[0xc01295a8] autoremove_wake_function+0
[0xc01295a8] autoremove_wake_function+0

16874 times:
Page allocated via order 0
[0xc0132258] generic_file_buffered_write+280
[0xc011b6a9] current_fs_time+77
[0xc0132a1e] __generic_file_aio_write_nolock+642
[0xc0132c3a] generic_file_aio_write_nolock+54
[0xc0132def] generic_file_write_nolock+151
[0xc01295a8] autoremove_wake_function+0
[0xc01295a8] autoremove_wake_function+0
[0xc0152d4a] blkdev_file_write+38



pgown-fast.c
Description: Binary data


Re: memory leak in 2.6.11-rc2

2005-01-24 Thread Dave Jones
On Tue, Jan 25, 2005 at 02:19:24PM +1100, Andrew Tridgell wrote:
 > The problem I've hit now is a severe memory leak. I have applied the
 > patch from Linus for the leak in free_pipe_info(), and still I'm
 > leaking memory at the rate of about 100Mbyte/minute.
 > I've tested with both 2.6.11-rc2 and with 2.6.11-rc1-mm2, both with
 > the pipe leak fix. The setup is:

That's a little more extreme than what I'm seeing, so it may be
something else, but my firewall box needs rebooting every
few days. It leaks around 50MB a day for some reason.
Given it's not got a lot of ram, after 4-5 days or so, it's
completely exhausted its swap too.

It's currently on a 2.6.10-ac kernel, so it's entirely possible that
we're not looking at the same issue, though it could be something
that's been there for a while if your workload makes it appear
quicker than a firewall/ipsec gateway would.
Do you see the same leaks with an earlier kernel?

post OOM (when there was about 2K free after named got oom-killed)
this is what slabinfo looked like..

dentry_cache      1502   3775    160   25    1 : tunables  120   60    0 : slabdata    151    151      0
vm_area_struct    1599   2021     84   47    1 : tunables  120   60    0 : slabdata     43     43      0
size-128          3431   6262    128   31    1 : tunables  120   60    0 : slabdata    202    202      0
size-64           4352   4575     64   61    1 : tunables  120   60    0 : slabdata     75     75      0
avtab_node        7073   7140     32  119    1 : tunables  120   60    0 : slabdata     60     60      0
size-32           7256   7616     32  119    1 : tunables  120   60    0 : slabdata     64     64      0

Dave



Re: memory leak in 2.6.11-rc2

2005-01-24 Thread Andrew Tridgell
Randy,

 > I have applied the patch from Linus for the leak in
 > free_pipe_info()
...
 > Do you have today's memleak patch applied?  (cut-n-paste below).

yes :-)

I'm trying the little leak tracking patch from Alexander Nyberg now.

Cheers, Tridge


Re: memory leak in 2.6.11-rc2

2005-01-24 Thread Randy.Dunlap
Andrew Tridgell wrote:
> I've fixed up the problems I had with raid, and am now testing the
> recent xattr changes with dbench and nbench.
>
> The problem I've hit now is a severe memory leak. I have applied the
> patch from Linus for the leak in free_pipe_info(), and still I'm
> leaking memory at the rate of about 100Mbyte/minute.
>
> I've tested with both 2.6.11-rc2 and with 2.6.11-rc1-mm2, both with
> the pipe leak fix. The setup is:
>
>  - 4 way PIII with 4G ram
>  - qla2200 adapter with ibm fastt200 disk array
>  - running dbench -x and nbench on separate disks, in a loop
>
> The oom killer kicks in after about 30 minutes. Naturally the oom
> killer decided to kill my sshd, which was running vmstat :-)

Do you have today's memleak patch applied?  (cut-n-paste below).
--
~Randy

--- 1.40/fs/pipe.c	2005-01-15 12:01:16 -08:00
+++ edited/fs/pipe.c	2005-01-24 14:35:09 -08:00
@@ -630,13 +630,13 @@
 	struct pipe_inode_info *info = inode->i_pipe;
 	inode->i_pipe = NULL;
-	if (info->tmp_page)
-		__free_page(info->tmp_page);
 	for (i = 0; i < PIPE_BUFFERS; i++) {
 		struct pipe_buffer *buf = info->bufs + i;
 		if (buf->ops)
 			buf->ops->release(info, buf);
 	}
+	if (info->tmp_page)
+		__free_page(info->tmp_page);
 	kfree(info);
 }