Short answer: no.

The hash size has nothing to do with speed and everything to do with the #
of active hosts.

The more distinct machines (on both sides of the connection), the more memory
you'll use.  If somebody launched a scan from inside, that would chew
through a lot of entries...

Once the hash grows, it cannot shrink, even as those hosts time out;
you'll need to restart ntop to reclaim the memory.

You can get an eyeful of the purged hosts by looking at the .db file.  I
once posted a simple gdbm database dump routine (it dumps the first 10
characters of each record) - it would be easy to add the struct from ntop.h
and create an intelligent dump.  I'll list it below...

-----Burton


/* Simple gdbm database dump -
     Copyright (c) 2002 - Burton M. Strauss III ([EMAIL PROTECTED])
     Released under GPL v2
 */

#include <stdio.h>
#include <stdlib.h>     /* exit(), free() */
#include <string.h>
#include <gdbm.h>
int main(int argc, char *argv[]) {
   GDBM_FILE dbfile;
   datum key, data;
   int recordCount = 0;

   if ( (argc < 2) || (argc > 3) ) {
      fprintf (stderr, "Usage: dumpgdbm file [key]\n\n");
      exit (1);
   }
   dbfile = gdbm_open (argv[1], 0, GDBM_READER, 0666, NULL);
   if (!dbfile) {
      fprintf (stderr, "Cannot open gdbm file %s: error %d (%s).\n", argv[1],
                       gdbm_errno, gdbm_strerror(gdbm_errno));
      exit (2);
   }

   if (argv[2] == NULL) {
      key = gdbm_firstkey ( dbfile );
      while (key.dptr) {
         datum nextkey;

         data = gdbm_fetch ( dbfile, key );
         recordCount++;
         printf ("%10s: '%s'\n", key.dptr, data.dptr);
         free (data.dptr);
         nextkey = gdbm_nextkey ( dbfile, key );
         free (key.dptr);    /* gdbm malloc()s each key - don't leak them */
         key = nextkey;
      }
      printf ("Records read: %d\n", recordCount);
   } else {
      /* ntop stores keys with their trailing NUL, so include it here too */
      key.dsize = strlen (argv[2]) + 1;
      key.dptr = argv[2];
      data = gdbm_fetch (dbfile, key);
      if (data.dptr) {
         printf ("%10s: '%s'\n", key.dptr, data.dptr);
         free (data.dptr);
      } else {
         printf ("Key %s not found.\n", argv[2]);
      }
   }
   gdbm_close (dbfile);
   return 0;
}


Compile with:

gcc -g -O2 -I/usr/include -I/usr/include/gdbm dumpgdbm.c -L/usr/lib -lgdbm -o dumpgdbm
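As a sketch of the "intelligent dump" idea mentioned above, the fetch loop
could decode each record through the host struct instead of printing raw
bytes.  The struct below is a hypothetical stand-in - the real definition is
in ntop.h and must be copied from the exact ntop version that wrote the .db
file, or the fields will not line up.  (The datum typedef here just mirrors
gdbm's; drop it if you include gdbm.h.)

```c
#include <stdio.h>
#include <string.h>

/* Mirror of gdbm's key/value pair type, so this sketch compiles
   standalone; with gdbm.h included, use the library's own datum. */
typedef struct { char *dptr; int dsize; } datum;

/* HYPOTHETICAL stand-in for the host struct from ntop.h -- the real
   field names, types, and layout must come from your ntop source. */
typedef struct {
   char symName[32];          /* assumed: symbolic host name  */
   unsigned long pktsSent;    /* assumed: packet counter      */
   unsigned long pktsRcvd;    /* assumed: packet counter      */
} HostRecord;

/* Decode one record into a printable line.  Returns 0 on success,
   -1 if the record is too short to hold the struct. */
int formatRecord(datum key, datum data, char *out, size_t outlen) {
   HostRecord h;

   if (data.dsize < (int) sizeof(HostRecord))
      return -1;
   memcpy(&h, data.dptr, sizeof(h));   /* copy out: avoids alignment traps */
   snprintf(out, outlen, "%s: %.32s sent=%lu rcvd=%lu",
            key.dptr, h.symName, h.pktsSent, h.pktsRcvd);
   return 0;
}
```

Inside the dump loop you would call formatRecord(key, data, line,
sizeof(line)) in place of the raw printf, falling back to the raw dump for
records that don't match the struct size.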


-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On Behalf Of Rob
Trout
Sent: Tuesday, October 01, 2002 9:29 AM
To: [EMAIL PROTECTED]
Subject: [Ntop] Large hash


After running for a few days, I get:

Sep 30 22:45:43 stats ntop[1682]: Extending hash size
[newSize=65536][deviceId=0]

Which seems like a pretty large hash compared to others I have seen on
the list.
We're running ntop 2.1.51 on Mandrake 8.2. It's a p3-900 w/ 256mb ram.
This host is only used for ntop. I'm starting with the command line of:

/usr/local/bin/ntop -P /server/ntop-current/ntop/database -p
/server/ntop-current/ntop/utils/protocol.list -d -u ntop -o -m x.x.x.x/x

Is there a way to limit the hash-size, and if so what would be the
trade-off of doing so?
This is monitoring an internet connection that averages about 4 mb, but
can peak at 9-10mb


Thanks in advance.


_______________________________________________
Ntop mailing list
[EMAIL PROTECTED]
http://lists.ntop.org/mailman/listinfo/ntop
