Is this really the shortest test case you can make for this problem?

- Does it crash if you allocate blocks of size 1024 rather than random size?
 Does for me. Strip it out.

- Does it crash if you run 2 threads instead of 4?
 Does for me. Strip it out.

Sometimes it crashes, sometimes it doesn't, so it's clearly timing
related. Injecting a whole bunch of randomness is not going to help
identify the root cause.

Make this program shorter.


On 12/19/06, Vlad Seryakov <[EMAIL PROTECTED]> wrote:
I tried nedmalloc with LD_PRELOAD for my little test and it crashed even
before the start.

Zoran, can you test it on Solaris and OS X so we know whether it is a
Linux-specific problem?


#include <tcl.h>

#include <stdlib.h>

#define MemAlloc malloc
#define MemFree free

static int nbuffer = 16384;
static int nloops = 50000;
static int nthreads = 4;

static void *gPtr = NULL;
static Tcl_Mutex gLock;

static Tcl_ThreadCreateType MemThread(ClientData arg)
{
      int   i, n;
      void *ptr = NULL;

      for (i = 0; i < nloops; ++i) {
          n = 1 + (int) (nbuffer * (rand() / (RAND_MAX + 1.0)));
          if (ptr != NULL) {
              MemFree(ptr);
          }
          ptr = MemAlloc(n);
          if (n % 50 == 0) {
              /* Occasionally pass a block between threads via the shared pointer. */
              Tcl_MutexLock(&gLock);
              if (gPtr != NULL) {
                  MemFree(gPtr);
                  gPtr = NULL;
              } else {
                  gPtr = MemAlloc(n);
              }
              Tcl_MutexUnlock(&gLock);
          }
      }
      if (ptr != NULL) {
          MemFree(ptr);
      }
      TCL_THREAD_CREATE_RETURN;
}

int main(int argc, char **argv)
{
      int i;
      Tcl_ThreadId *tids;

      tids = (Tcl_ThreadId *)malloc(sizeof(Tcl_ThreadId) * nthreads);

      for (i = 0; i < nthreads; ++i) {
          Tcl_CreateThread(&tids[i], MemThread, NULL,
                           TCL_THREAD_STACK_DEFAULT, TCL_THREAD_JOINABLE);
      }
      for (i = 0; i < nthreads; ++i) {
          Tcl_JoinThread(tids[i], NULL);
      }
      free(tids);
      return 0;
}
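For reference, building and running this with nedmalloc interposed via LD_PRELOAD might look like the following; the source file name, Tcl link flags, and library path are illustrative assumptions to adjust for your system:

```shell
# Build the test program (tcl.h and libtcl locations are assumptions)
cc -O2 -o memtest memtest.c -ltcl -lpthread

# Run with the stock allocator
./memtest

# Run with nedmalloc interposed (library path is an assumption)
LD_PRELOAD=/usr/local/lib/libnedmalloc.so ./memtest
```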




Zoran Vasiljevic wrote:
> On 19.12.2006, at 01:10, Stephen Deasey wrote:
>
>> This program allocates memory in a worker thread and frees it in the
>> main thread. If all free()'s put memory into a thread-local cache then
>> you would expect this program to bloat, but it doesn't, so I guess
>> it's not a problem (at least not on Fedora Core 5).
>
> It is also not the case with nedmalloc, as it specifically
> tracks that usage pattern. The block being freed "knows"
> which so-called mspace it belongs to, regardless of which
> thread frees it.
>
> So, I'd say nedmalloc is OK in this respect.
> I have given it a Purify run and it runs cleanly.
> Our application is noticeably faster on Mac and
> bloats less. But this is only the tip of the iceberg.
> We have yet to give it a real stress test in the
> field, but I'm reluctant to do this now and will
> have to wait for a major release somewhere in spring
> next year.
>
>
>
>
> -------------------------------------------------------------------------
> Take Surveys. Earn Cash. Influence the Future of IT
> Join SourceForge.net's Techsay panel and you'll get the chance to share your
> opinions on IT & business topics through brief surveys - and earn cash
> http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
> _______________________________________________
> naviserver-devel mailing list
> naviserver-devel@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/naviserver-devel
>

--
Vlad Seryakov
571 262-8608 office
[EMAIL PROTECTED]
http://www.crystalballinc.com/vlad/



