Hello all,

In some situations I have to work with very large matrices. My Windows machine has 3 GB of RAM, so I would expect to be able to use most of my process's address space for my matrix.
Unfortunately, with matrices much larger than 700 or 800 MB, one starts running into heap fragmentation problems: even though there are 2 GB available to your process, they aren't available in one contiguous block. To see this, you can try the following code, which tries to allocate a ~1792 MB 2-D array, or a list of 1-D arrays that add up to the same size:

    import numpy as N

    fdtype = N.dtype('<f8')
    bufsize = 1792*1024*1024
    n = bufsize / fdtype.itemsize
    m = int(N.sqrt(n))
    if 0:
        # this doesn't work on Windows: needs one contiguous ~1792 MB block
        x = N.zeros((m, m), dtype=fdtype)
    else:
        # this works: m separate 1-D allocations of the same total size
        x = [N.zeros(m, dtype=fdtype) for i in range(m)]
    print len(x)

    import time
    time.sleep(100000)

How does one go about allocating a discontiguous array so that I can work around this problem?

Thanks!

Regards,
Albert

_______________________________________________
Numpy-discussion mailing list
Numpy-discussion@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/numpy-discussion
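Since the list-of-rows variant does fit in memory, one possible workaround (a minimal sketch, not anything from the NumPy API; the `ChunkedRows` class name and its methods are made up for illustration, and it is written for a current Python 3 / NumPy rather than the Python 2 style above) is to wrap such a list in a small class that offers 2-D-style element access while keeping each row as an independent allocation:

```python
import numpy as np

class ChunkedRows:
    """2-D array stored as independent 1-D row allocations.

    Avoids needing one contiguous virtual-address block, at the
    cost of element-by-element (rather than vectorized 2-D) access.
    """
    def __init__(self, nrows, ncols, dtype='<f8'):
        # each row is its own allocation, so the allocator only
        # needs ncols * itemsize contiguous bytes at a time
        self._rows = [np.zeros(ncols, dtype=dtype) for _ in range(nrows)]
        self.shape = (nrows, ncols)

    def __getitem__(self, idx):
        i, j = idx
        return self._rows[i][j]

    def __setitem__(self, idx, value):
        i, j = idx
        self._rows[i][j] = value

    def row(self, i):
        # expose whole rows so vectorized NumPy ops still work per-row
        return self._rows[i]

a = ChunkedRows(4, 3)
a[2, 1] = 5.0
```

Whole-row operations (`a.row(i) *= 2.0`, dot products against a row, and so on) stay vectorized; only operations that need the full 2-D block at once have to be rewritten as a loop over rows.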