particular connection.
iptables -A OUTPUT -m multiport --dports ... --sports ... -j DROP
Given the suggested keepalive settings, you should only have to wait 75
seconds after creating the iptables rule before the connection is
broken.
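The 75-second figure follows from the keepalive arithmetic; a minimal sketch, assuming illustrative sysctl values (substitute whatever your site actually uses):

```shell
# Linux declares a keepalive-monitored peer dead after roughly
# tcp_keepalive_time (idle) + tcp_keepalive_probes * tcp_keepalive_intvl.
# With the assumed settings below that is 15 + 4*15 = 75 seconds.
idle=15
probes=4
intvl=15
echo "worst-case detection time: $(( idle + probes * intvl )) seconds"
```

The real values can be read from `/proc/sys/net/ipv4/tcp_keepalive_*` on the node in question.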
NeilBrown
On Thu, Feb 20 2020, Degremont, Aurelien wrote:
> Th
without a connection (returned by
ksocknal_find_connectable_route_locked()) it calls
ksocknal_launch_connection_locked() which adds the connection request to
ksnd_connd_routes, and wakes up the connd. The connd thread will then
make the connection.
Hope that helps.
NeilBrown
On Wed, Feb 19 2020, Degremont, Aurelien wrote:
if it already has a TCP connection. If it
does, it will use it. If not, it will create one.
So yes, it is exactly possible that the server in this case opens the
connection itself without waiting for the client to reconnect.
NeilBrown
On Tue, Feb 18 2020, Aurelien Degremont wrote:
> Th
between 512 and 1023
(LNET_ACCEPTOR_MIN_PORT to LNET_ACCEPTOR_MAX_PORT).
NeilBrown
On Mon, Feb 17 2020, Degremont, Aurelien wrote:
> Hi all,
>
> From what I've understood so far, LNET listens on port 988 by default and
> peers connect to it using 1021-1023 TCP ports as source ports.
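The port layout above translates directly into firewall terms; a sketch (the rule is only printed here, since applying it needs root, and the exact source-port range is the LNET_ACCEPTOR_MIN_PORT..LNET_ACCEPTOR_MAX_PORT span of 512-1023 mentioned in the reply):

```shell
# LNET's acceptor listens on TCP 988; peers connect from privileged
# source ports in the 512-1023 range, so an accept rule looks like:
echo "iptables -A INPUT -p tcp --dport 988 --sport 512:1023 -j ACCEPT"
```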
765
which affects autofs 5.1.3.
Maybe check with your vendor that you have the latest kernel and autofs
packages.
NeilBrown
>
> Youssef Eldakar
> Bibliotheca Alexandrina
> _______________________________________________
> lustre-discuss mailing list
> lustre-discuss@lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
, f2fs, fat, freevxfs, fuse, gfs2, hpfs, isofs, jffs2, jfs, minix, nfs,
nilfs, ntfs, ocfs2, openpromfs, overlayfs, procfs, qnx4, qnx6,
reiserfs, romfs, squashfs, sysvfs, ubifs, udf, ufs, xfs
all set SLAB_RECLAIM_ACCOUNT on their inode caches. So to answer your
question: your understanding *is* incorrect.
any
reclaimable pagecache pages), vmscan can decide not to bother. This is
probably a fairly small risk but it is possible that the missing
SLAB_RECLAIM_ACCOUNT flag can result in memory not being reclaimed when
it could be.
Thanks,
NeilBrown
> So I do not think there is a memory leak per se.
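The effect of SLAB_RECLAIM_ACCOUNT is visible from userspace without any Lustre-specific tooling: /proc/meminfo splits slab memory into reclaimable and unreclaimable counters, and caches carrying the flag are accounted under the former. A quick check:

```shell
# Caches flagged SLAB_RECLAIM_ACCOUNT are counted under SReclaimable;
# everything else lands in SUnreclaim. /proc/meminfo is world-readable,
# so this needs no special privileges.
grep -E '^(SReclaimable|SUnreclaim):' /proc/meminfo
```

A large, growing SUnreclaim alongside memory pressure is consistent with the scenario described above, where vmscan declines to reclaim memory it cannot see as reclaimable.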
", that might mean that lustre isn't letting go of cache
pages for some reason.
NeilBrown
On Mon, Apr 29 2019, Jacek Tomaka wrote:
> Wow, Thanks Nathan and NeilBrown.
> It is great to learn about slub merging. It is awesome to have a
> reproducer.
> I am yet to trigger my
similar object-size to signal_cache that
is, by default, being merged with signal_cache.
Thanks,
NeilBrown
On Wed, Apr 24 2019, Nathan Dauchy - NOAA Affiliate wrote:
> On Mon, Apr 15, 2019 at 9:18 PM Jacek Tomaka wrote:
>
>>
>> >signal_cache should have one entry for each process
then they will
both use the same slab cache and the name it has will be whichever name
was created first.
So maybe some other modules want a slab cache about that size.
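On SLUB kernels the merging can be inspected directly; a sketch, assuming /sys/kernel/slab is present (it is absent on non-SLUB kernels or without sysfs):

```shell
# Merged caches appear in /sys/kernel/slab as symlinks to a shared
# target, so resolving the link shows whether signal_cache was merged.
# Booting with the slub_nomerge kernel parameter keeps caches separate,
# which makes per-cache /proc/slabinfo statistics meaningful again.
if [ -e /sys/kernel/slab/signal_cache ]; then
  readlink -f /sys/kernel/slab/signal_cache
else
  echo "/sys/kernel/slab not available (non-SLUB kernel or no sysfs)"
fi
```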
The other possible explanation is that the cache wasn't empty when the
lustre module was unloaded. In that case you would have
the task_struct slab were particularly big, I suspect you
would have included it in the list of large slabs - but you didn't.
If signal_cache has more active entries than task_struct, then something
has gone seriously wrong somewhere.
I doubt this problem is related to lustre.
NeilBrown
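The sanity check above (signal_cache should never have more active entries than task_struct) is easy to script. The slabinfo lines below are made-up sample values for illustration; on a live system feed the script /proc/slabinfo instead, which requires root:

```shell
# Compare the active_objs column (field 2) of the two caches and flag
# the pathological case where signal_cache exceeds task_struct.
awk '$1 == "signal_cache" { sig = $2 }
     $1 == "task_struct"  { task = $2 }
     END { print (sig > task ? "suspicious: signal_cache > task_struct" : "ok") }' <<'EOF'
signal_cache   1200  1400  1152  28  8 : tunables 0 0 0 : slabdata 50 50 0
task_struct    1300  1500  7744   4  8 : tunables 0 0 0 : slabdata 375 375 0
EOF
```

Note that if signal_cache has been merged with another cache (see the merging discussion above), its slabinfo counts include the other cache's objects, so run this on a kernel booted with slub_nomerge for trustworthy numbers.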