Hi, I am using Fedora Core 3 with Python 2.4 (kernel 2.6.9-1.667smp). Something strange happens with Python dictionaries (hashes) when I run a generated script. My Python file is basically:
    myhash = {}
    def summa():
        global myhash
        myhash[0] = 0
        myhash[1] = 1
        myhash[2] = 2
        myhash[3] = 3

I generate it with this C program (reproduced here with the missing includes and return type added):

    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int i = atoi(argv[1]), j;
        printf("myhash = {}\n");
        printf("def summa():\n");
        printf("    global myhash\n");
        for (j = 0; j < i; j++)
            printf("    myhash[%d] = %d\n", j, j);
        printf("\nsumma()\n");
        return 0;
    }

The output of the C program is redirected to a .py file. The steps I follow are:

    1. cc -o s s.c
    2. ./s N > test.py    (N is the input count)
    3. python test.py

When I run Python on this .py file, the process eats a huge amount of virtual memory. Here are detailed figures from top for various inputs to the C program (VIRT = virtual memory, m = MB):

    input      VIRT
    100000     119m
    300000     470m
    700000     1098m
    1000000    1598m

These results are alarming: each myhash[i] entry appears to cost roughly 1 KB. I would like to know why this happens and how to solve it. I also tried changing the .c file so that it emits multiple functions that divide the work of building the hash structure, but the results were the same.

Please assist me with this problem!

-- 
http://mail.python.org/mailman/listinfo/python-list
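For comparison, here is a loop-based sketch of the same dictionary build (my own variation, not from the generator above; summa taking the count n as a parameter is an assumption I am introducing). The module then compiles to a handful of bytecodes no matter how many keys are inserted:

```python
# Loop-based equivalent of the generated file: one loop instead of
# one assignment statement per key, so the compiled module stays tiny.
myhash = {}

def summa(n):
    global myhash
    for j in range(n):   # on Python 2.4, xrange(n) avoids a temporary list
        myhash[j] = j

summa(1000000)
```

If memory stays flat with this version, the cost in the original lies in the generated source itself (parsing and compiling a million statements), not in the dictionary.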
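One way to check where the memory goes (a diagnostic sketch of my own, under the assumption that the compile step is the suspect): build the same kind of source as an in-memory string, then compile and execute it as two separate steps, watching top during each phase to see which one grows.

```python
# Generate source of the same shape as test.py, then compile and exec
# it separately so the two costs can be observed independently.
n = 10000  # kept small here; the original runs went up to 1000000
lines = ["myhash = {}"]
for j in range(n):
    lines.append("myhash[%d] = %d" % (j, j))
src = "\n".join(lines)

code = compile(src, "<generated>", "exec")  # step 1: parse + compile
ns = {}
exec(code, ns)                              # step 2: run the bytecode
```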