Hi,

I'm afraid the vectorizer remarks feature got broken with the latest LLVM
versions and no one has had spare time to fix it. It should not be too
difficult to fix, though, if you want to give it a try.

https://github.com/pocl/pocl/issues/613

BR,
Pekka

Pekka Jääskeläinen

________________________________
From: Noah Reddell <[email protected]>
Sent: Friday, December 28, 2018 7:30:57 PM
To: Portable Computing Language development discussion
Subject: Re: [pocl-devel] intermittent clang ComputeLineNumbers SegFault



> Having it on /tmp on many systems makes the cache non-persistent, which
> kind of defeats the purpose of having a cache in the first place...
> perhaps there is a more suitable place, but I'm not aware of it.

There's a complex set of factors to balance, for sure. Since the default
behavior is to remove build products, I don't think the default
POCL_CACHE_DIR needs to be persistent storage. $HOME is generally going to
be slower and farther away than $TMPDIR.

Most importantly, the behavior is already customizable through the
POCL_CACHE_DIR variable. I have a work-around, but a general user wouldn't
know to adjust the variable upon encountering this SEGFAULT unless they
discovered a record of this discussion.
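
(Purely as an illustration of that kind of work-around, and not necessarily
what I actually do: a minimal sketch of pinning the cache from inside the
application itself, assuming pocl only consults POCL_CACHE_DIR when its
cache is first initialized. The path is just a placeholder.)

    #define _POSIX_C_SOURCE 200112L
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Hypothetical sketch: point pocl's kernel cache at node-local
         * storage before any OpenCL calls are made.  The path below is
         * only an example, not a recommendation. */
        if (setenv("POCL_CACHE_DIR", "/tmp/pocl-cache", 1) != 0) {
            perror("setenv");
            return 1;
        }
        /* ... create the OpenCL context and build kernels as usual ... */
        return 0;
    }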

The lingering problem is that we don't understand what is driving the clang
SEGFAULT, but it seems most likely related to a false success of
open(O_CREAT | O_EXCL) on this DVS filesystem (speculating that it hits the
same issue as older NFS filesystems). In addition to the working local /tmp
for POCL_CACHE_DIR, I tried a Lustre parallel filesystem path (common to all
compute nodes). This works as well, presumably because that more
sophisticated filesystem supports O_EXCL correctly.
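
In case it helps to reproduce this outside of pocl: below is a rough
stand-alone probe (a sketch of my own, independent of pocl's actual
cache-locking code). Launched concurrently from several processes or nodes
against the same path, at most one instance should win the exclusive create
on a filesystem with correct O_EXCL semantics; more than one "winner" would
point at the false-success case suspected above. The default path is just a
placeholder.

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        /* Placeholder default; pass the real shared-filesystem path as argv[1]. */
        const char *path = (argc > 1) ? argv[1] : "/path/on/shared/fs/excl-probe";

        /* Exclusive create: on a correct filesystem at most one concurrent
         * caller may succeed; all the others must fail with EEXIST. */
        int fd = open(path, O_CREAT | O_EXCL | O_WRONLY, 0644);
        if (fd >= 0) {
            printf("pid %ld: won the exclusive create\n", (long)getpid());
            close(fd);
        } else if (errno == EEXIST) {
            printf("pid %ld: lost (EEXIST)\n", (long)getpid());
        } else {
            fprintf(stderr, "pid %ld: open failed: %s\n",
                    (long)getpid(), strerror(errno));
            return 1;
        }
        return 0;
    }

(Remember to delete the probe file between runs; a leftover file from an
earlier run would make every instance report EEXIST.)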


Side question: when I export POCL_VECTORIZER_REMARKS=1, where should the
output go? I'm not seeing anything in the stdout/stderr streams or in
${POCL_CACHE_DIR}/*/*/build.log.


_______________________________________________
pocl-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/pocl-devel
