On 09.09.2016 at 23:32, Adam Jackson wrote:
> On Fri, 2016-09-09 at 12:09 -0700, Ian Romanick wrote:
>
>> 16 *gigabytes*? Why? That has to indicate a bug somewhere...
>> possibly a leak in the test?
>
> I suspect more likely a leak in llvmpipe, or in how it's driving llvm.
> You're going to end up jitting a fragment shader for every combination
> of 16 src/dst blend factors and five operators, which comes out to
> around 1500 specializations. You'd hope that wouldn't hold on to 10M
> apiece, I admit.
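[As a quick sanity check of ajax's estimate above - a minimal sketch, assuming his breakdown of 16 source factors, 16 destination factors, and five blend equations; the actual piglit test enumerates more states, as noted below:]

```python
# Rough count of blend-state specializations llvmpipe would JIT,
# per the estimate in the quoted mail (hypothetical breakdown;
# the real test covers more combinations).
src_factors = 16   # e.g. GL_ONE, GL_SRC_ALPHA, ...
dst_factors = 16
blend_ops = 5      # e.g. ADD, SUBTRACT, REVERSE_SUBTRACT, MIN, MAX

specializations = src_factors * dst_factors * blend_ops
print(specializations)  # 1280, i.e. "around 1500"
```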
More precisely, the test runs 27552 variations. That said, I've run it here and didn't see high memory consumption - it stayed below 55M the entire time (though yes, it takes ages to run). The LRU cache size is limited, obviously, and old variations are freed.

There are two possibilities for leaks:

1) We're doing something wrong when interfacing with llvm (e.g. not freeing something we should).

2) There's a leak in llvm itself. We're internally still using a very old llvm version, and the biggest reason for that is that we'd need to verify there are no new memory leaks in llvm. Those are difficult to track down, since the memory vanishes into generic pools - maybe it's better now, but leaks that weren't noticed in off-line compilation tended to go unnoticed for quite a while.

So if there is a memory leak, it's going to depend on the llvm version - note this doesn't exclude 1), since we have to do some things differently depending on the version.

Roland
_______________________________________________
Piglit mailing list
Piglit@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/piglit
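[Editor's note: the bounded variant cache Roland describes - a size-limited LRU where evicted variants are freed, keeping memory flat even across tens of thousands of variations - can be sketched like this. This is an illustrative model, not llvmpipe's actual code; `VariantCache` and `compile_fn` are hypothetical names.]

```python
from collections import OrderedDict

# Minimal sketch (NOT llvmpipe's real implementation) of a
# size-limited LRU cache for JITted shader variants: once the cap
# is hit, the least-recently-used variant is evicted, which is
# where its JITted code would be disposed. Memory stays bounded
# no matter how many variations the test enumerates.
class VariantCache:
    def __init__(self, max_variants):
        self.max_variants = max_variants
        self._cache = OrderedDict()  # key -> compiled variant

    def get(self, key, compile_fn):
        if key in self._cache:
            self._cache.move_to_end(key)  # mark as recently used
            return self._cache[key]
        variant = compile_fn(key)         # JIT a new specialization
        self._cache[key] = variant
        if len(self._cache) > self.max_variants:
            # evict oldest; llvmpipe would free the JITted code here
            self._cache.popitem(last=False)
        return variant

cache = VariantCache(max_variants=4)
for i in range(10):
    cache.get(i % 6, lambda k: f"variant-{k}")
print(len(cache._cache))  # 4 - never exceeds the cap
```

Note this is also why a leak, if present, would show up as growth beyond the cache bound: either an evicted variant's resources aren't fully released (possibility 1), or llvm retains memory internally after disposal (possibility 2).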