On Fri, 2013-11-29 at 19:47 +0100, Adrien Prost-Boucle wrote:
> Hello,
>
> As promised, attached is a re-formatted version of tb.vhd (25k lines now,
> you asked for it :P), with the "others" thing that makes ghdl crash on
> my machines.
> When the "others" thing is removed from both vector arrays, the analysis
> pass of ghdl finishes just fine.

Much better, thanks. Will look at this one.
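For the archive, and purely as a guess until I have actually looked at tb.vhd (every name and size below is invented, not taken from your file), I take the construct in question to be a nested "others" aggregate on an array of vectors, something along these lines:

  library ieee;
  use ieee.std_logic_1164.all;

  package tb_types is
    -- a large array of vectors, as a testbench might hold stimulus data
    type word_array is array (0 to 1023) of std_logic_vector(31 downto 0);

    -- the nested "others" aggregate: every bit of every element set to '0'
    constant ZERO_WORDS : word_array := (others => (others => '0'));

    -- the reported workaround: spell the inner value out explicitly, e.g.
    -- constant ZERO_WORDS : word_array := (others => x"00000000");
  end package tb_types;

If the crash really does come down to an aggregate of that shape, a cut-down package like this might make a far smaller test case than the 25k-line file.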
> > Could you use an mcode version of ghdl if someone made a Linux version?
>
> Sorry I don't know what could be an mcode version of anything... as far
> as google told me it's related to Mac OS X?

I believe David's answered that.

The other thing you could try for the FSM is a 64-bit build of ghdl, on a machine with at least 8 GB of physical RAM. "Serious" gcc users regard 16 GB as not too large for some purposes. I would start with the source and build process from https://gna.org/bugs/?21305. I ran GHDL up to a 4.8 GB footprint here; with only 4 GB of RAM it was swapping badly at that stage, but it did prove that 4 GB is not an upper limit for ghdl.

(I think you will need another approach to code generation if you are going to synthesize this monster: I left XST running for hours to no result! ... see below.)

On the subject of high level synthesis: have you seen these projects?

http://www.nkavvadias.com/hercules/

What's interesting about this one, to me, is that it uses GIMPLE as an intermediate language, with the C front end based on gcc. That opens up the hypothetical possibility of adding --enable-languages=ada to the configure stage, and offering high level synthesis from Ada (perhaps Fortran would appeal in some circles).

If you've never used Ada, you may be wondering why. I could suggest many reasons, but here's one that matters for HLS: fixed point types are fully supported by the language, and you get to choose the width...

Or the York Hardware Ada Compiler: for example
ftp://ftp.cs.york.ac.uk/papers/rtspapers/R%3AWard%3A2001.ps
or in more detail
http://www.cs.york.ac.uk/ftpdir/reports/2005/YCST/09/YCST-2005-09.pdf

A practical detail that undermines the paper a little is that the language subset he uses for his "sequential Ada" example (p.176 of the latter paper) is ... synthesisable VHDL. Seriously. Substitute " to " for " .. ", prepend "variable " to each variable declaration, wrap the example in a process, and XST swallows it whole. It spits out a lump of hardware, using about 3x as many CLBs as his resource estimates (more if you factor in that I targeted a newer FPGA), to implement the task in a single (very slow!) cycle. Sound familiar? (There is a sketch of the transformation at the end of this mail.)

For me, the important step in the York Hardware Ada Compiler is that it reveals techniques for extracting sequentiality from an inherently parallel problem! In other words, automatic resource sharing, to reduce the hardware size. (Ironically, the exact opposite of the GPU programmers' Grand Challenge turns out to be important!) At which point it *might* interest you. It *may* have cracked a different but important part of the puzzle.

My opinion is that he takes it too far, extracting all the sequentiality he can find, hundreds of cycles, as if he were compiling for a single-stream CPU. And the result is - to me - disappointing; the hardware isn't orders of magnitude smaller. So I wonder if he has missed a sweet spot: looking at his example I see a 10-iteration loop, and I would want to see results targeting the order of 10 cycles, with a decent cycle time, and options to either minimise hardware for O(10)-cycle throughput, or pipeline 10 deep for single-cycle throughput. Or tools to explore the alternatives and let an expert system - or a human - decide which is optimal. (Again, a sketch of what I mean follows below.)

I will also read your papers with interest, though I don't have access to these:
> http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6628278
> http://www.sciencedirect.com/science/article/pii/S1383762113001938
> For those very interested I can send the submitted drafts.
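To make the "substitute and wrap" recipe concrete, here is an invented stand-in (not the actual code from p.176) for a 10-iteration loop, written directly as the VHDL that falls out of those edits; the Ada-ish original is shown in the comments, and the entity and signal names are mine:

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  -- Invented stand-in for the paper's example. The Ada-ish original was
  -- roughly:
  --   for I in 0 .. 9 loop
  --     Acc := Acc + A(I) * B(I);
  --   end loop;
  -- Swap ".." for "to", prepend "variable" to the declarations, wrap it
  -- in a process, and you get:

  entity dot10 is
    port (clk    : in  std_logic;
          result : out signed(35 downto 0));
  end entity dot10;

  architecture single_cycle of dot10 is
    type vec10 is array (0 to 9) of signed(15 downto 0);
    signal a, b : vec10 := (others => (others => '0'));  -- fed from elsewhere
  begin
    process (clk)
      variable acc : signed(35 downto 0);
    begin
      if rising_edge(clk) then
        acc := (others => '0');
        for i in 0 to 9 loop           -- fully unrolled by synthesis:
          acc := acc + a(i) * b(i);    -- ten multipliers, one long slow cycle
        end loop;
        result <= acc;
      end if;
    end process;
  end architecture single_cycle;

All the parallelism in the loop turns straight into hardware, which is exactly the 3x-the-estimate, single-slow-cycle behaviour I saw.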
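And, still with made-up code, roughly the sweet spot I would like the tools to find for that loop: the same arithmetic spread over ten clock cycles, sharing a single multiplier, written as a second architecture of the same (invented) entity:

  architecture ten_cycle of dot10 is
    type vec10 is array (0 to 9) of signed(15 downto 0);
    signal a, b : vec10 := (others => (others => '0'));  -- fed from elsewhere
    signal acc  : signed(35 downto 0) := (others => '0');
    signal i    : integer range 0 to 9 := 0;
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        if i = 9 then
          result <= acc + a(i) * b(i);   -- last term: publish and restart
          acc    <= (others => '0');
          i      <= 0;
        else
          acc <= acc + a(i) * b(i);      -- one shared multiplier, reused
          i   <= i + 1;
        end if;
      end if;
    end process;
  end architecture ten_cycle;

One multiplier instead of ten, a result every ten cycles, and a much shorter critical path; pipelining ten deep would be the other end of the trade. That is the kind of exploration I would want an HLS tool to expose.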
> And remember my tool is still deeply in a development state and features
> much more bugs than functionalities, so it's really not time yet for
> advertisement about it.

Understood! But if we can help ghdl to support the effort, that would be good for both parties.

- Brian

_______________________________________________
Ghdl-discuss mailing list
[email protected]
https://mail.gna.org/listinfo/ghdl-discuss
