Dear all,
If you do "cvs co ntop-ht", you can download the new, improved ntop.
Please send your feedback.

Cheers, Luca

Luca Deri wrote:

>Burton,
>
>Burton Strauss wrote:
>
>>Luca:
>>
>>Interesting start.  I'd still like to hold off on this until after 3.2.
>
>I agree (that's why I didn't commit it) but we need to release 3.2 soon.
>Can we move forward?
>
>>(1) This is almost certainly going to make things worse w/ SMP and probably
>>generally:
>>
>>1819 int lockHostsHashMutex(HostTraffic *host, char *where) {
>>1820   if(host) {
>>1821     if(myGlobals.hostsHashMutexNumLocks[host->hostTrafficBucket] == 0) {
>>1822       myGlobals.hostsHashMutexNumLocks[host->hostTrafficBucket]++;
>>1823       return(accessMutex(&myGlobals.hostsHashMutex[host->hostTrafficBucket], where));
>>1824     } else {
>>1825       myGlobals.hostsHashMutexNumLocks[host->hostTrafficBucket]++;
>>1826       return(0); /* Already locked */
>>1827     }
>>1828   } else {
>>1829     return(-1);
>>1830   }
>>1831 }
>>
>>It's just like the code I had to fix w/ the self-LOCK mutex problem, only
>>this time it's going to be serious.
>>
>>If the context switches after 1821 but before 1825 - which, albeit rare, is
>>clearly possible - then you'll get mislocking.  Same, only worse, in unlock.
>>
>>After stomping the self-LOCK, I've learned that nothing attracts context
>>switches like saying "it can't happen here".
>>
>ok I'll add a mutex here
>
>>(2) This counter should be declared volatile or perhaps should be type
>>sig_atomic_t.
>>
>> u_short hostsHashMutexNumLocks[CONST_HASH_INITIAL_SIZE];
>>
>>Otherwise you risk problems from near-simultaneous updates by multiple threads.
>>
>The mutex I plan to add (see above) should be enough. However, you're right.
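The suggested declaration, as a sketch (the array-size value here is assumed for illustration, not ntop's real constant):

```c
#include <signal.h>

#define CONST_HASH_INITIAL_SIZE 4096  /* value assumed for illustration */

/* 'volatile' stops the compiler caching the counter in a register across
 * the lock/unlock calls; sig_atomic_t guarantees loads and stores are not
 * torn. Note that neither makes the ++ (a read-modify-write) atomic
 * between threads, so a mutex is still required around the update. */
volatile sig_atomic_t hostsHashMutexNumLocks[CONST_HASH_INITIAL_SIZE];
```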
>
>>(3) This appears to treat all users, readers and writers equivalently.  I
>>haven't worked it out in my head, but I was thinking more of POSIX
>>reader/writer locks using cond variables - see Nichols et al, "pthread
>>programming", page 84ff, but altering this to provide writers with priority
>>(perhaps by adding a writers_waiting count).
>>
>ok let me know if something better comes to your mind
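For reference, a minimal sketch of the writer-priority reader/writer lock being described - a mutex plus two condition variables, with the writers_waiting count keeping new readers out once a writer is queued. Not ntop code; names are invented for illustration:

```c
#include <pthread.h>

typedef struct {
  pthread_mutex_t lock;
  pthread_cond_t  read_ok, write_ok;
  int readers_active;
  int writer_active;     /* 0 or 1 */
  int writers_waiting;   /* gives writers priority over new readers */
} rwlock_t;

void rwlock_init(rwlock_t *rw) {
  pthread_mutex_init(&rw->lock, NULL);
  pthread_cond_init(&rw->read_ok, NULL);
  pthread_cond_init(&rw->write_ok, NULL);
  rw->readers_active = rw->writer_active = rw->writers_waiting = 0;
}

void rwlock_rdlock(rwlock_t *rw) {
  pthread_mutex_lock(&rw->lock);
  /* Block while a writer holds the lock OR one is waiting (priority). */
  while(rw->writer_active || rw->writers_waiting > 0)
    pthread_cond_wait(&rw->read_ok, &rw->lock);
  rw->readers_active++;
  pthread_mutex_unlock(&rw->lock);
}

void rwlock_rdunlock(rwlock_t *rw) {
  pthread_mutex_lock(&rw->lock);
  if(--rw->readers_active == 0)
    pthread_cond_signal(&rw->write_ok);  /* last reader admits a writer */
  pthread_mutex_unlock(&rw->lock);
}

void rwlock_wrlock(rwlock_t *rw) {
  pthread_mutex_lock(&rw->lock);
  rw->writers_waiting++;
  while(rw->writer_active || rw->readers_active > 0)
    pthread_cond_wait(&rw->write_ok, &rw->lock);
  rw->writers_waiting--;
  rw->writer_active = 1;
  pthread_mutex_unlock(&rw->lock);
}

void rwlock_wrunlock(rwlock_t *rw) {
  pthread_mutex_lock(&rw->lock);
  rw->writer_active = 0;
  if(rw->writers_waiting > 0)
    pthread_cond_signal(&rw->write_ok);   /* hand off to the next writer */
  else
    pthread_cond_broadcast(&rw->read_ok); /* no writers: wake all readers */
  pthread_mutex_unlock(&rw->lock);
}
```

The design choice is in rwlock_rdlock: readers also wait on writers_waiting, so a steady stream of readers cannot starve a queued writer.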
>
>>(4) FWIW, none of my testing has shown that packet processing is the culprit
>>re packet losses.  In fact, queuing to handle multiples is what takes the
>>time.  It's very often the pcap_stats() call that causes the losses.  So I'd
>>be interested in the stats on a heavily loaded machine.
>
>With my patch there's no packet queueing anymore.
>
>>(5) This code, run when a thread dies:
>>
>> for(j=0; j<NUM_DISPATCH_THREADS; j++) {
>>   myGlobals.device[i].pcapDispatchThreadId[j] = 0;
>> }
>>
>>should issue the pthread_kill() call too, and should really run only once
>>(not once per thread). Although a dropped NIC will kill all of them, we
>>don't want to depend upon that.
>
>ok
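A hedged sketch of that cleanup (NUM_DISPATCH_THREADS and the thread-id array come from the snippet above; the constant's value and the choice of SIGTERM are assumptions, and comparing pthread_t to 0 assumes an integer pthread_t as on Linux):

```c
#include <pthread.h>
#include <signal.h>

#define NUM_DISPATCH_THREADS 2  /* value assumed for illustration */

/* Sketch: signal each still-registered dispatch thread before zeroing
 * its id, and run this pass once for the device, not once per dying
 * thread. */
void killDispatchThreads(pthread_t ids[]) {
  int j;
  for(j = 0; j < NUM_DISPATCH_THREADS; j++) {
    if(ids[j] != (pthread_t)0) {       /* skip already-cleared slots */
      pthread_kill(ids[j], SIGTERM);   /* assumed signal choice */
      ids[j] = (pthread_t)0;
    }
  }
}
```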
>
>>(6) Need some measurement of idle time in the pcap_loop processors.  Try the
>>attached patch.  Note: DO NOT move the 'static' definition of
>>idleStartOfProcessing - I'm pretty sure it needs to be there so it's
>>thread specific.  If this doesn't work, then we'll have to go to POSIX
>>thread-specific data (keys).  We probably want enough per-device processors
>>that the reported "pcap_loop() idle (min/avg/max)" gets big.
>>
>ok. I can test it on a FreeBSD box I have access to, on a busy Internet link.
>
>Cheers, Luca
>
>>-----Burton
>>
>>-----Original Message-----
>>From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
>>Of Luca Deri
>>Sent: Thursday, May 26, 2005 8:33 AM
>>To: ntop-dev Mailing List
>>Subject: [Ntop-dev] ntop packet loss & multithreading
>>
>>Dear all,
>>some of you reported that the latest ntop is slower (in terms of packet
>>processing) with respect to previous versions. I have to admit that
>>unfortunately this is true as I now sometimes experience packet losses too.
>>We (me, Burton and Dinesh) decided to postpone multi-threading changes to
>>3.2.x, but I believe this problem needs to be tackled now.
>>
>>In the past days I have created a new experimental ntop
>>(http://luca.ntop.org/ntop-ht.tgz) that:
>>- uses fine-grained hash locks (compared to ntop's current global lock)
>>- spawns multiple (default 2) threads that fetch packets
>>
>>According to my tests, the new ntop is faster on Linux but more or less the
>>same on FreeBSD. This could be explained by the time ntop spends on the
>>small locks (mutex lock/unlock), which on BSD may take relatively more time
>>than on Linux.
>>
>>I kindly ask all of you to test it (you can simply overwrite the .c/.h files
>>with those in the ntop-ht.tgz archive) and send your feedback. In fact, I
>>don't want to commit any code before your feedback.
>>
>>Cheers, Luca
>>
>>_______________________________________________
>>Ntop-dev mailing list
>>[email protected]
>>http://listgateway.unipi.it/mailman/listinfo/ntop-dev
>


-- 
Luca Deri <[EMAIL PROTECTED]>   http://luca.ntop.org/
Hacker: someone who loves to program and enjoys being
clever about it - Richard Stallman
