On Fri, 2013-11-29 at 11:20 +1300, David Koontz wrote:
> On 29 Nov 2013, at 8:39 am, Brian Drummond <[email protected]> wrote:
> 
> > Now if the mcode version compiles fsm.vhd just fine, this is pointing to
> > the gcc middle and back end...
> > 
> > Searching for the messages shows:
> >    gcc/ggc-page.c:      perror ("virtual memory exhausted");
> >    libiberty/xmalloc.c: "\n%s%sout of memory allocating %lu bytes after a total of %lu bytes\n",
> > And reading around, the gcc middle and back ends are notoriously memory
> > hungry, as they run many (typically hundreds of) optimisation passes.
> > 
> > I tried unsuccessfully to pass -O0 flags on the command line, to see if
> > it used less memory with optimisation off. 
> 
> Nick was kind enough to email me saying his tool does accept (pass) -O0,
> purportedly shortening what is now 3:41:56.93 of cpu time to around 90
> secs. He just made the change for this. The issues Adrien has are serving
> his tool development well too.

Oh, sorry, I was unclear: it passed the flags to ghdl1 successfully;
however, the hoped-for reduction in memory footprint didn't appear. I
suspect this is a gcc issue, and mcode may be the fallback plan for
pathological cases.

Glad -O0 was useful for Nick's experiments.

> I have some directions on how to generate an mcode version for OS X that
> are easily adaptable for Linux (using an i32 Ada/gcc), for anyone wanting
> more immediate gratification.

Those would be useful; apart from anything else, I do want to see if one
specific change has broken mcode builds.

- Brian



_______________________________________________
Ghdl-discuss mailing list
[email protected]
https://mail.gna.org/listinfo/ghdl-discuss
