Hi Tristan,

Thanks for the new release, and for making your repo public :-)

I've been quickly trying out 0.20, and have the following remarks:

It compiles fine for me on i386 Fedora Core 4 with current updates.
To compile my testbench, I had to add -lm
to /usr/lib/gcc/x86_64-unknown-linux-gnu/4.0.2/vhdl/lib/grt.lst, because
my testbench uses transcendental functions (sin) from ieee.math_real.
The testbench I tried seems to run fine, although I had to modify it
somewhat. At one point it tried to compute real(2 ** 31), which makes
the ghdl-compiled binary abort with an integer overflow; VHDL only
guarantees integer up to 2 ** 31 - 1, so the overflow happens before
the conversion to real is ever reached. ghdl is likely right here, and
since ModelSim doesn't complain, I had in fact found a bug in my own
testbench ;-)
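
In case it's useful, here is a minimal sketch of that failure mode,
distilled from my testbench (the entity name is made up). Doing the
exponentiation in real arithmetic from the start avoids the overflow:

entity overflow_demo is
end overflow_demo;

architecture behav of overflow_demo is
begin
  process
    variable t : real;
  begin
    -- t := real(2 ** 31);  -- 2 ** 31 exceeds integer'high on a 32-bit
    --                      -- integer, so the binary aborts here
    t := 2.0 ** 31;         -- evaluated in real arithmetic, no overflow
    report "t = " & real'image(t);
    wait;
  end process;
end behav;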

ghdl 0.20 also compiles on x86_64. Here I additionally had to add -ldl
to grt.lst, since grt references dlopen and friends there. The
testbench also compiles, but the resulting binary segfaults early. The
backtrace looks as follows:

#0  0x0000000000471b48 in grt__files__file_open ()
#1  0x0000000000471e89 in __ghdl_text_file_open ()
#2  0x000000000045ce7a in std__textio__ELAB_SPEC ()
#3  0x000000000046a7c3 in std__textio__ELAB_BODY ()
#4  0x0000000000470a81 in work__tbsin__ELAB (INSTANCE=0x5b7180) at tbsin.vhd:10
#5  0x0000000000471157 in work__tbsin__ARCH__behav__ELAB (INSTANCE=0x5b7180)
    at tbsin.vhd:36
#6  0x0000000000401838 in __ghdl_ELABORATE ()
#7  0x0000000000482658 in grt__main__run ()
#8  0x000000000047f7a7 in ghdl_main ()
#9  0x000000000047ca25 in main ()


The statement that seems to go wrong is this one, in File_Open in
grt-files.adb:

--  Allocates Name on the stack; its length is taken from the bounds
--  record that the second word of the fat pointer Str points at.
Name : String (1 .. Integer (Str.Bounds.Dim_1.Length) + 1);


The pointer Str points to this pair of little-endian words, i.e. the
addresses 0x4993b0 (the string data) and 0x4993c0 (the bounds record):
0x7fffff9ba310: 0xb0    0x93    0x49    0x00    0x00    0x00    0x00    0x00
0x7fffff9ba318: 0xc0    0x93    0x49    0x00    0x00    0x00    0x00    0x00

The memory pointed to by these two addresses looks like this, first as
raw bytes, then decoded as characters:
0x4993b0 <_UI00000001.1382>:            0x53    0x54    0x44    0x5f    0x49    0x4e    0x50    0x55
0x4993b8 <_UI00000001.1382+8>:          0x54    0x00    0x00    0x00    0x00    0x00    0x00    0x00
0x4993c0 <std__textio__U1__STB.1383>:   0x01    0x00    0x00    0x00    0x09    0x00    0x00    0x00
0x4993c8 <std__textio__U1__STB.1383+8>: 0x00    0x00    0x00    0x00    0x09    0x00    0x00    0x00

0x4993b0 <_UI00000001.1382>:            83 'S'  84 'T'  68 'D'  95 '_'  73 'I'  78 'N'  80 'P'  85 'U'
0x4993b8 <_UI00000001.1382+8>:          84 'T'  0 '\0'  0 '\0'  0 '\0'  0 '\0'  0 '\0'  0 '\0'  0 '\0'
0x4993c0 <std__textio__U1__STB.1383>:   1 '\001'  0 '\0'  0 '\0'  0 '\0'  9 '\t'  0 '\0'  0 '\0'  0 '\0'
0x4993c8 <std__textio__U1__STB.1383+8>: 0 '\0'  0 '\0'  0 '\0'  0 '\0'  9 '\t'  0 '\0'  0 '\0'  0 '\0'

The code then tries to fetch Bounds.Dim_1.Length, which it believes
sits at offset 0x10 from the second pointer, whereas in memory the
sought-for data is at offset 0x0c. The Length that gets fetched is
therefore bogus, and this bogus (huge) value is then used to grow the
stack (an alloca-style operation), which segfaults as soon as the new
stack top is touched, i.e. during the next subroutine call.

It looks to me like the ghdl compiler, which generated the table, and
grt disagree about the size and alignment of Ghdl_Index_Type (grt being
correct, IMO).
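
To illustrate what I think is going on (the field names and record
declarations below are my own guesses, not grt's actual code): if the
bounds record ends in a word-sized length field, the 8-byte alignment
on x86_64 pushes it to offset 0x10, whereas the table the compiler
emitted packs everything into 32-bit slots and puts the length at 0x0c:

with Ada.Text_IO; use Ada.Text_IO;

procedure Bounds_Layout is
   type U32 is mod 2 ** 32;
   type U64 is mod 2 ** 64;

   --  What the compiler emitted: all fields 32 bit, Length at 0x0c.
   type Range_32 is record
      Left, Right, Dir : Integer;
      Length           : U32;
   end record;

   --  What grt seems to assume on x86_64: a 64-bit Length, which GNAT
   --  aligns to 8 bytes, landing it at 0x10.
   type Range_64 is record
      Left, Right, Dir : Integer;
      Length           : U64;
   end record;

   R32 : Range_32;
   R64 : Range_64;
begin
   Put_Line ("32-bit index: Length at offset" & Natural'Image (R32.Length'Position));
   Put_Line ("64-bit index: Length at offset" & Natural'Image (R64.Length'Position));
end Bounds_Layout;

Printing the two offsets gives 12 and 16, matching the 0x0c vs 0x10
discrepancy above.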

After a couple of hours of trying to understand ghdl's internals and
failing, I came up with the following patch (by pure guessing):

--- gcc/vhdl/translation.adb.32 2005-10-29 23:55:01.000000000 +0200
+++ gcc/vhdl/translation.adb    2005-10-29 23:57:33.000000000 +0200
@@ -25748,7 +25748,7 @@
       New_Type_Decl (Get_Identifier ("__ghdl_size_type"), Sizetype);
 
       --  Create __ghdl_index_type, which is the type for *all* array index.
-      Ghdl_Index_Type := New_Unsigned_Type (32);
+      Ghdl_Index_Type := New_Unsigned_Type (System.Word_Size);
       New_Type_Decl (Get_Identifier ("__ghdl_index_type"), Ghdl_Index_Type);
 
       Ghdl_I32_Type := New_Signed_Type (32);

This apparently fixes the segfault, and the following test program now
at least runs. The output on value, however, is incorrect: it stays at
zero, and the simulation spews the following messages:

../../../src/ieee/numeric_std-body.v93:2179:7:@2550ns:(assertion warning): NUMERIC_STD.TO_SIGNED: vector truncated
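
As far as I understand numeric_std, that warning fires when the integer
argument doesn't fit into the requested vector width. With a correct
sin, integer(127.0 * SIN(...)) stays within -127 .. 127 and fits into 8
bits, so the truncation itself already points at a bogus intermediate
value. A minimal way to provoke the same warning (my own example,
unrelated to the bug):

library ieee;
use ieee.numeric_std.all;

entity trunc_demo is
end trunc_demo;

architecture behav of trunc_demo is
begin
  process
  begin
    -- 200 needs 9 bits as a signed number, so the upper bits are cut
    -- off and numeric_std warns "TO_SIGNED: vector truncated".
    report integer'image(to_integer(to_signed(200, 8)));
    wait;
  end process;
end behav;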

If I uncomment the rng_lib stuff, the simulation even hangs (i.e. no
progress is made, but CPU time is burned).

rng_lib is from:
http://www.opencores.org/projects.cgi/web/rng_lib/overview

library IEEE;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
use std.textio.all;
use ieee.math_real.all;

library work;
use work.rng_lib.all;

entity tbsin is
end tbsin;

architecture behav of tbsin is

  signal phase : std_ulogic_vector(7 downto 0);
  signal value : std_ulogic_vector(7 downto 0);
  signal ts : integer;
  
begin

  process
    variable r_gauss : rand_var;
    variable rnd : integer;
    variable t : real;
  begin
    --r_gauss := init_gaussian(0, 0, 0, 0.0, 1.0);  -- mean=0, stdev=1
    for i in 0 to 255 loop
      phase <= std_ulogic_vector(to_unsigned(i, phase'length));
      value <= std_ulogic_vector(to_signed(
                 integer(127.0 * SIN(real(i) * (2.0 * MATH_PI / real(2 ** 8)))),
                 value'length));
      --r_gauss := rand(r_gauss);
      --t := real(2 ** 31);
      --ts <= 2 ** 31;
      --rnd := integer(32.0 * r_gauss.rnd);
      wait for 10 ns;
    end loop;
    wait;
  end process;

end behav;

Now I guess I need some help...

Tom

