RE: incremental linking?
> [Simon] might be worth adding to the docs (title: linking times on Suns, Sun's ld vs GNU ld; explanation: see previous mails)? See http://www.haskell.org/ghc/docs/latest/html/users_guide/faq.html specifically the last question. Cheers, Simon ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: incremental linking?
Hi Hal, > How did you get ghc to use gld when doing --make instead of standard > ld? I'm having the exact same problem you were and I'd love to make it > work faster. [Simon] might be worth adding to the docs (title: linking times on Suns, Sun's ld vs GNU ld; explanation: see previous mails)? ghc calls ld via gcc, so everything from the gcc man page about how gcc finds its tools seems to apply. It could be as easy as setting PATH so that GNU's ld is found before Sun's, but as the message you quote said, the gcc installation might find the path to Sun's ld first by other routes.. > > (*) a little stumbling block here: gcc refers to PATH only *after* > > perusing its preconfigured search paths. In our case, those included > > /usr/ccs/bin, so we had to set GCC_EXEC_PREFIX instead (which > > is used before the preconfigured paths). So we had to do something like the following (assuming sh, and GNU's binutils in /usr/local/packages): GCC_EXEC_PREFIX=/usr/local/packages/binutils/bin/ ghc-5.04 --make .. The handling of prefixes is explained for gcc's -B option in the gcc man page (here the version for our Suns): -Bprefix This option specifies where to find the executables, libraries, include files, and data files of the compiler itself. The compiler driver program runs one or more of the subprograms cpp, cc1, as and ld. It tries prefix as a prefix for each program it tries to run, both with and without machine/version/. For each subprogram to be run, the compiler driver first tries the -B prefix, if any. If that name is not found, or if -B was not specified, the driver tries two standard prefixes, which are /usr/lib/gcc/ and /usr/local/lib/gcc-lib/. If neither of those results in a file name that is found, the unmodified program name is searched for using the directories specified in your PATH environment variable.
The compiler will check to see if the path provided by the -B refers to a directory, and if necessary it will add a directory separator character at the end of the path. -B prefixes that effectively specify directory names also apply to libraries in the linker, because the compiler translates these options into -L options for the linker. They also apply to include files in the preprocessor, because the compiler translates these options into -isystem options for the preprocessor. In this case, the compiler appends include to the prefix. The run-time support file libgcc.a can also be searched for using the -B prefix, if needed. If it is not found there, the two standard prefixes above are tried, and that is all. The file is left out of the link if it is not found by those means. Another way to specify a prefix much like the -B prefix is to use the environment variable GCC_EXEC_PREFIX. As a special kludge, if the path provided by -B is [dir/]stageN/, where N is a number in the range 0 to 9, then it will be replaced by [dir/]include. This is to help with boot-strapping the compiler. To see where gcc is looking, check the "programs" entry in the output of gcc -print-search-dirs and make sure your favourite ld is found first. Cheers, Claus
Re: incremental linking?
Claus, How did you get ghc to use gld when doing --make instead of standard ld? I'm having the exact same problem you were and I'd love to make it work faster. Thanks in advance, Hal -- Hal Daume III "Computer science is no more about computers| [EMAIL PROTECTED] than astronomy is about telescopes." -Dijkstra | www.isi.edu/~hdaume On Sat, 30 Nov 2002, Claus Reinke wrote: > > It seems that Sun's ld was indeed the weak link. Switching to > Gnu's ld (*) brought the linking time down to just under 1 minute, > both on the machine that used to take 6 minutes and on the > older one that used to take 20 minutes! > > I don't know whether Sun's lds themselves are to blame or whether > ghc/gcc generate output that suits Gnu's ld better than it does Sun's > ld, but as long as ghc remains as it is, that doesn't really make a > difference for our purposes. > > (slow) ld -V > ld: Software Generation Utilities - Solaris Link Editors: 5.8-1.276 > > (better) ld -V > ld: Software Generation Utilities - Solaris Link Editors: 5.9-1.344 > > (winner) ld -V > GNU ld version 2.13.1 > Supported emulations: >elf32_sparc >elf64_sparc > > We haven't done extensive testing yet, and earlier versions > of binutils (up to and including 2.13) are reported to have > problems on Solaris, so don't throw away Sun's tools, but it > looks as if our case is now closed (and incremental linking > isn't an issue anymore with these new link times!-) > > Thanks for the helpful feedback, and Good Luck with the > other suspiciously slow systems! > > Claus > > (*) a little stumbling block here: gcc refers to PATH only *after* > perusing its preconfigured search paths. In our case, those included > /usr/ccs/bin, so we had to set GCC_EXEC_PREFIX instead (which > is used before the preconfigured paths). Check with: > gcc -print-search-dirs > > - Original Message - > From: "Claus Reinke" <[EMAIL PROTECTED]> > To: <[EMAIL PROTECTED]> > Sent: Friday, November 29, 2002 12:02 PM > Subject: Re: incremental linking? 
> > > > >I haven't been able to discern any pattern among those experiencing long > > >link times so far, except that -export-dynamic flag used by the dynamic > > >loader stuff seems to cause the linker to go off into space for a while. > > > > We're still investigating here, but just a quick summary for our own > > (large) project: > > > > - nfs doesn't seem to have too drastic effects, even in-memory disks > > don't speed things up, time seems to be spent in computation > > - on our (admittedly overloaded and dated) main Sun Server, linking > >could take some 20 minutes! > > - we've found a more modern (and not yet well-utilized;-) Sun server, > > bringing the time down to 6 minutes..:-( > > > > (from that, I thought linking might have to be expensive - how naive!-) > > > > - the same program on my rather old 366Mhz PII notebook links in > >about 1 minute (I didn't notice that at first, because overall compile > >time is longer on my notebook - but that turns out to be caused by > >a single generated file, for which the assembler almost chokes; after > >all, the notebook "only" has 192Mb memory, and the disk is crammed) > > - with the laptop as reference, I'd guess the problem is not ghc's fault > >(unless it does things drastically different on cygwin vs solaris?) > > - on our Suns, gcc (and hence ghc) seem to use the native linker > > - sunsolve lists several linker patches to address problems like > > "linker orders of magnitude slower than Gnu's". We seem to have > > those patches, but we're checking again.. > > > > moral so far: if compilation of big projects takes a long time, it is worth > > checking where that time is spent.
for the same project, on different > > systems, we've got different bottlenecks: > > > > - large (generated) files [all systems]: assembler needs an awful lot > > of space (not enough space->compile takes forever) > > - network disks: import chasing takes a lot of time > > - Suns (?): linking takes too long > > > > will report again if we get better news.. > > > > Claus > > > > PS. if we get linking times down to what seems possible, incremental > >linking would no longer be urgent - we'll see..
Re: incremental linking?
It seems that Sun's ld was indeed the weak link. Switching to Gnu's ld (*) brought the linking time down to just under 1 minute, both on the machine that used to take 6 minutes and on the older one that used to take 20 minutes! I don't know whether Sun's lds themselves are to blame or whether ghc/gcc generate output that suits Gnu's ld better than it does Sun's ld, but as long as ghc remains as it is, that doesn't really make a difference for our purposes. (slow) ld -V ld: Software Generation Utilities - Solaris Link Editors: 5.8-1.276 (better) ld -V ld: Software Generation Utilities - Solaris Link Editors: 5.9-1.344 (winner) ld -V GNU ld version 2.13.1 Supported emulations: elf32_sparc elf64_sparc We haven't done extensive testing yet, and earlier versions of binutils (up to and including 2.13) are reported to have problems on Solaris, so don't throw away Sun's tools, but it looks as if our case is now closed (and incremental linking isn't an issue anymore with these new link times!-) Thanks for the helpful feedback, and Good Luck with the other suspiciously slow systems! Claus (*) a little stumbling block here: gcc refers to PATH only *after* perusing its preconfigured search paths. In our case, those included /usr/ccs/bin, so we had to set GCC_EXEC_PREFIX instead (which is used before the preconfigured paths). Check with: gcc -print-search-dirs - Original Message - From: "Claus Reinke" <[EMAIL PROTECTED]> To: <[EMAIL PROTECTED]> Sent: Friday, November 29, 2002 12:02 PM Subject: Re: incremental linking? > >I haven't been able to discern any pattern among those experiencing long > >link times so far, except that -export-dynamic flag used by the dynamic > >loader stuff seems to cause the linker to go off into space for a while. 
> > We're still investigating here, but just a quick summary for our own > (large) project: > > - nfs doesn't seem to have too drastic effects, even in-memory disks > don't speed things up, time seems to be spent in computation > - on our (admittedly overloaded and dated) main Sun Server, linking >could take some 20 minutes! > - we've found a more modern (and not yet well-utilized;-) Sun server, > bringing the time down to 6 minutes..:-( > > (from that, I thought linking might have to be expensive - how naive!-) > > - the same program on my rather old 366Mhz PII notebook links in >about 1 minute (I didn't notice that at first, because overall compile >time is longer on my notebook - but that turns out to be caused by >a single generated file, for which the assembler almost chokes; after >all, the notebook "only" has 192Mb memory, and the disk is crammed) > - with the laptop as reference, I'd guess the problem is not ghc's fault >(unless it does things drastically different on cygwin vs solaris?) > - on our Suns, gcc (and hence ghc) seem to use the native linker > - sunsolve lists several linker patches to address problems like > "linker orders of magnitude slower than Gnu's". We seem to have > those patches, but we're checking again.. > > moral so far: if compilation of big projects takes a long time, it is worth > checking where that time is spent. for the same project, on different > systems, we've got different bottlenecks: > > - large (generated) files [all systems]: assembler needs an awful lot > of space (not enough space->compile takes forever) > - network disks: import chasing takes a lot of time > - Suns (?): linking takes too long > > will report again if we get better news.. > > Claus > > PS. if we get linking times down to what seems possible, incremental >linking would no longer be urgent - we'll see..
Re: incremental linking?
>I haven't been able to discern any pattern among those experiencing long >link times so far, except that -export-dynamic flag used by the dynamic >loader stuff seems to cause the linker to go off into space for a while. We're still investigating here, but just a quick summary for our own (large) project: - nfs doesn't seem to have too drastic effects, even in-memory disks don't speed things up, time seems to be spent in computation - on our (admittedly overloaded and dated) main Sun Server, linking could take some 20 minutes! - we've found a more modern (and not yet well-utilized;-) Sun server, bringing the time down to 6 minutes..:-( (from that, I thought linking might have to be expensive - how naive!-) - the same program on my rather old 366Mhz PII notebook links in about 1 minute (I didn't notice that at first, because overall compile time is longer on my notebook - but that turns out to be caused by a single generated file, for which the assembler almost chokes; after all, the notebook "only" has 192Mb memory, and the disk is crammed) - with the laptop as reference, I'd guess the problem is not ghc's fault (unless it does things drastically different on cygwin vs solaris?) - on our Suns, gcc (and hence ghc) seem to use the native linker - sunsolve lists several linker patches to address problems like "linker orders of magnitude slower than Gnu's". We seem to have those patches, but we're checking again.. moral so far: if compilation of big projects takes a long time, it is worth checking where that time is spent. for the same project, on different systems, we've got different bottlenecks: - large (generated) files [all systems]: assembler needs an awful lot of space (not enough space->compile takes forever) - network disks: import chasing takes a lot of time - Suns (?): linking takes too long will report again if we get better news.. Claus PS. if we get linking times down to what seems possible, incremental linking would no longer be urgent - we'll see..
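Claus's moral - check where the time is actually spent - can be applied per compilation stage: the gcc driver's standard -time flag reports the time each subprocess (cpp, cc1, as, the link step) takes. A sketch on a trivial C file; for a Haskell build the same idea applies, timing ghc's link step separately from compilation:

```shell
# Create a trivial program to compile (file name is illustrative):
cat > tiny.c <<'EOF'
int main(void) { return 0; }
EOF

# -time makes the driver print the time spent in each subprocess on stderr,
# so a slow assembler or linker stage shows up immediately:
gcc -time tiny.c -o tiny 2> stages.txt
cat stages.txt
./tiny
```

On a project like the one described here, a disproportionate assembler or linker line in that output points at the bottleneck without any guesswork.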
RE: incremental linking?
> I too am getting link times in the several minutes range for > my modestly > sized project, I am on a standalone dual-cpu redhat linux box with > 5.04.1 (no nfs, no nuttin') > > the project is available at > http://repetae.net/john/computer/ginsu/ > > I think there is definitely something fishy going on. I don't remember > linking always taking this long, I just assumed I added a > bunch of code > to my project or something but linking takes longer than all the > other compilation stages combined. That's bizarre. I just compiled your program on my laptop (Gentoo Linux, gcc 3.2) and linking took 6-7 seconds. John, what version of gcc/binutils is on your RedHat box? I haven't been able to discern any pattern among those experiencing long link times so far, except that -export-dynamic flag used by the dynamic loader stuff seems to cause the linker to go off into space for a while. Cheers, Simon
Re: incremental linking?
On Wed, Nov 27, 2002 at 03:55:54PM -, Simon Marlow wrote: > Those who experience long link times (longer than a few seconds), please > reply with your > > - platform / OS version > - versions of relevent things (GHC, GCC, binutils). > - time to link 'main = print "hello"'. Platform: Debian unstable, with the Debian GHC package (maintained by Michael Weber, iirc) GHC version: 5.04 GCC version: 2.95.4 binutils: GNU ld version 2.13.90.0.10 20021010 Debian GNU/Linux For the runtime loader example, this is the thing which takes ages to link: ghc -package lang -optl-export-dynamic -fglasgow-exts -O2 -o TextFilter Main.o TextFilter.o TextFilterPluginAPI.o -ldl -lHSrts -lHSlang /usr/lib/ghc-5.04/HSbase.o /usr/lib/ghc-5.04/HSlang.o ../runtime_loader/libRuntimeLoader.a make 169.92s user 0.86s system 98% cpu 2:52.81 total Linking in a 'main = print "hello"' is normal: 15:13 exodus:~/bar % ghc Main.hs ghc Main.hs 3.44s user 0.23s system 78% cpu 4.677 total Here's the fun part: 15:18 exodus:~/bar % time ghc -package lang /usr/lib/ghc-5.04/HSbase.o /usr/lib/ghc-5.04/HSlang.o Main.hs ghc -package lang /usr/lib/ghc-5.04/HSbase.o /usr/lib/ghc-5.04/HSlang.o 1.42s user 0.34s system 105% cpu 1.666 total 15:30 exodus:~/bar % time ghc -optl-export-dynamic -package lang Main.hs ghc -optl-export-dynamic -package lang Main.hs 3.68s user 0.21s system 102% cpu 3.793 total 15:22 exodus:~/bar % time ghc -optl-export-dynamic -package lang /usr/lib/ghc-5.04/HSbase.o /usr/lib/ghc-5.04/HSlang.o Main.hs ghc -optl-export-dynamic -package lang /usr/lib/ghc-5.04/HSbase.o Main.hs 169.05s user 0.87s system 100% cpu 2:49.85 total I guess it looks like -optl-export-dynamic in conjunction with linking in the .o's is the killer. strace seems to indicate that ld isn't doing any syscalls when it's doing the -export-dynamic work; it just mmaps a huge wad of memory before starting, then it doesn't make a single syscall until it goes to write the final output. 
Presumably it's mmapping the .o's into memory to do the -export-dynamic magic. > Does starting up GHCi take a long time? GHCi startup appears to be normal (a few seconds). At least for me the culprit is -optl-export-dynamic ... I'm not sure that this is in GHC's problem domain any more. -- #ozone/algorithm <[EMAIL PROTECTED]> - trust.in.love.to.save
Re: incremental linking?
I too am getting link times in the several minutes range for my modestly sized project, I am on a standalone dual-cpu redhat linux box with 5.04.1 (no nfs, no nuttin') the project is available at http://repetae.net/john/computer/ginsu/ I think there is definitely something fishy going on. I don't remember linking always taking this long, I just assumed I added a bunch of code to my project or something but linking takes longer than all the other compilation stages combined. John On Wed, Nov 27, 2002 at 03:20:44PM -, Simon Marlow wrote: > > On Wed, Nov 27, 2002 at 09:50:56AM -, Simon Marlow wrote: > > > > > > More fun with Haskell-in-the-large: linking time has become the > > > > main bottleneck in our development cycle. The standard solution > > > > would be to use an incremental linker, but it seems that gnu does > > > > not yet support this:-| > > > > > > Hmm, I've never heard of linking being a bottleneck. > > > > The runtime loader stuff I'm working on[1] takes around 10 > > seconds to compile ... and 3 minutes to link it with libHSbase > > and libHSrts. (This is on a 500MHz PIII). Linking is a huge > > bottleneck once you start linking in the Haskell libraries; ld > > takes up enormous amounts of CPU time resolving symbols, > > I think. > > > > 1. > > http://www.algorithm.com.au/wiki/hacking/haskell.ghc_runtime_loading > > 3 minutes???!! > > I just downloaded your example code, did './configure && make' and the > link step took about 3 seconds. This is also on a 500MHz PIII. > > Are you sure you're not getting libHSbase over NFS? There may be > something that ld is doing that causes a lot of NFS traffic. > > Cheers, > Simon -- --- John Meacham - California Institute of Technology, Alum. - [EMAIL PROTECTED] ---
Re: incremental linking?
On Wed, 27 Nov 2002 15:20:44 - "Simon Marlow" <[EMAIL PROTECTED]> wrote: > > The runtime loader stuff I'm working on[1] takes around 10 > > seconds to compile ... and 3 minutes to link it with libHSbase > > and libHSrts. (This is on a 500MHz PIII). Linking is a huge > > bottleneck once you start linking in the Haskell libraries; ld > > takes up enormous amounts of CPU time resolving symbols, > > I think. > 3 minutes???!! > > I just downloaded your example code, did './configure && make' and the > link step took about 3 seconds. This is also on a 500MHz PIII. > > Are you sure you're not getting libHSbase over NFS? There may be > something that ld is doing that causes a lot of NFS traffic. I was getting the same thing when doing the runtime loader stuff. I'm definitely not using NFS, it's a stand-alone linux machine. Mind you the runtime loader stuff is not exactly normal use of the linker and ghc's library system. It links .o files in directly, bypassing packages and such. Duncan
RE: incremental linking?
> > > More fun with Haskell-in-the-large: linking time has become the > > > main bottleneck in our development cycle. The standard solution > > > would be to use an incremental linker, but it seems that gnu does > > > not yet support this:-| > > > > Hmm, I've never heard of linking being a bottleneck. Even > GHC itself > > links in about 3-4 seconds here. One common problem is > that linking on > > a network filesystem takes a *lot* longer than linking > objects from a > > local disk. It's always a good idea to keep the build tree > on the local > > disk, even if the sources are NFS-mounted. > > I also have this problem, and while being on a local disk > rather than NFS > helps, it doesn't help all that much. For large projects, I > usually have > time to get a cup of coffee while linking (admittedly only four doors > away, but...). When on NFS, I have time to go to the local > coffeehouse... Ok, it looks like we need to investigate this. NFS isn't the problem in itself: I realised that our GHC installation is NFS-mounted on the machine I tried the experiment on, and it makes very little difference (although Linux's NFS implementation is a bit fast & loose when it comes to caching, I seem to recall). Those who experience long link times (longer than a few seconds), please reply with your - platform / OS version - versions of relevant things (GHC, GCC, binutils). - time to link 'main = print "hello"'. Does starting up GHCi take a long time? Would someone like to strace (or truss, or ktrace or whatever) the ld process and see what it is doing for all that time. Is it CPU or I/O bound? Cheers, Simon
RE: incremental linking?
> > More fun with Haskell-in-the-large: linking time has become the > > main bottleneck in our development cycle. The standard solution > > would be to use an incremental linker, but it seems that gnu does > > not yet support this:-| > > Hmm, I've never heard of linking being a bottleneck. Even GHC itself > links in about 3-4 seconds here. One common problem is that linking on > a network filesystem takes a *lot* longer than linking objects from a > local disk. It's always a good idea to keep the build tree on the local > disk, even if the sources are NFS-mounted. I also have this problem, and while being on a local disk rather than NFS helps, it doesn't help all that much. For large projects, I usually have time to get a cup of coffee while linking (admittedly only four doors away, but...). When on NFS, I have time to go to the local coffeehouse... - Hal
RE: incremental linking?
> On Wed, Nov 27, 2002 at 09:50:56AM -, Simon Marlow wrote: > > > > More fun with Haskell-in-the-large: linking time has become the > > > main bottleneck in our development cycle. The standard solution > > > would be to use an incremental linker, but it seems that gnu does > > > not yet support this:-| > > > > Hmm, I've never heard of linking being a bottleneck. > > The runtime loader stuff I'm working on[1] takes around 10 > seconds to compile ... and 3 minutes to link it with libHSbase > and libHSrts. (This is on a 500MHz PIII). Linking is a huge > bottleneck once you start linking in the Haskell libraries; ld > takes up enormous amounts of CPU time resolving symbols, > I think. > > 1. > http://www.algorithm.com.au/wiki/hacking/haskell.ghc_runtime_loading 3 minutes???!! I just downloaded your example code, did './configure && make' and the link step took about 3 seconds. This is also on a 500MHz PIII. Are you sure you're not getting libHSbase over NFS? There may be something that ld is doing that causes a lot of NFS traffic. Cheers, Simon
Re: incremental linking?
On Wed, Nov 27, 2002 at 09:50:56AM -, Simon Marlow wrote: > > More fun with Haskell-in-the-large: linking time has become the > > main bottleneck in our development cycle. The standard solution > > would be to use an incremental linker, but it seems that gnu does > > not yet support this:-| > > Hmm, I've never heard of linking being a bottleneck. The runtime loader stuff I'm working on[1] takes around 10 seconds to compile ... and 3 minutes to link it with libHSbase and libHSrts. (This is on a 500MHz PIII). Linking is a huge bottleneck once you start linking in the Haskell libraries; ld takes up enormous amounts of CPU time resolving symbols, I think. 1. http://www.algorithm.com.au/wiki/hacking/haskell.ghc_runtime_loading -- #ozone/algorithm <[EMAIL PROTECTED]> - trust.in.love.to.save
RE: incremental linking?
> Unfortunately, we're not talking seconds, but coffee-breaks of > linking times on our Sun (yes, the stuff is in the range of a large > compiler - we're fortunate enough to be able to build on rather > substantial third-party packages, think haskell-in-haskell frontend > distributed over unusually many modules + strategic traversal > support + our own code). > > And yes, I was worried about NFS-mounting first, especially since > linking on our Sun takes even longer than on our PCs (long breaks > instead of short ones;-), but moving .hi and .o to local tmp-space > didn't speed things up (then again, it's a large machine, and our > disk setup is likely to be more complex than I know - I'll have to > check with our admins). It does sound like it's taking rather too long, I'd investigate further. > > > Alternative a: use someone else's incremental linker, e.g., Sun's > > > ild (ghc's -pgml option appears to have its own idea about option > > > formatting, btw) - this doesn't seem to work - should it? > > > > You'd probably want to call the incremental linker directly > rather than > > using GHC - what exactly does it do, BTW? What files does > it generate? > > Calling it via GHC seemed the best way to ensure that it gets > everything it needs (what else would be the purpose of -pgml?). Ah I see - it's just like a normal linker, except it takes a previous version of the executable and attempts to just re-link the bits it needs? GHC has some built-in assumptions about the linker: it must accept gcc-style command line options. This probably isn't true of the incremental linker, so you could either (a) write a wrapper around it, or (b) arrange to call the incremental linker through gcc. I can't remember if (b) is possible, but a quick scan of the docs suggested that gcc doesn't have the equivalent of -pgml so (b) might be out. BTW, -pgml is usually just used to select a different version of gcc (as with -pgmc). 
> > > Alternative b: convince ghc to link objects in stages, e.g., on a > > > per-directory basis - gnu's ld seems to support at least this kind > > > of partial linking (-i/-r). Not quite as nice as a fully incremental > > > linker, but would probably save our day.. > > Yes, this works fine. We use it to build the libraries for GHCi. > Presumably directed via Makefiles? > Could this please be automated for ghc --make? It's easy: $ ld -r -o All.o A.o B.o C.o ... Cheers, Simon
Re: incremental linking?
> Hmm, I've never heard of linking being a bottleneck. Even GHC itself > links in about 3-4 seconds here. One common problem is that linking on > a network filesystem takes a *lot* longer than linking objects from a > local disk. It's always a good idea to keep the build tree on the local > disk, even if the sources are NFS-mounted. Unfortunately, we're not talking seconds, but coffee-breaks of linking times on our Sun (yes, the stuff is in the range of a large compiler - we're fortunate enough to be able to build on rather substantial third-party packages, think haskell-in-haskell frontend distributed over unusually many modules + strategic traversal support + our own code). And yes, I was worried about NFS-mounting first, especially since linking on our Sun takes even longer than on our PCs (long breaks instead of short ones;-), but moving .hi and .o to local tmp-space didn't speed things up (then again, it's a large machine, and our disk setup is likely to be more complex than I know - I'll have to check with our admins). > > Alternative a: use someone else's incremental linker, e.g., Sun's > > ild (ghc's -pgml option appears to have its own idea about option > > formatting, btw) - this doesn't seem to work - should it? > > You'd probably want to call the incremental linker directly rather than > using GHC - what exactly does it do, BTW? What files does it generate? Calling it via GHC seemed the best way to ensure that it gets everything it needs (what else would be the purpose of -pgml?). According to docs, ild just keeps more information and space in the linked object, so that on re-linking, it can (a) check for file-modification times and (b) replace and partially relink only those contributing objects that have changed. http://docs.sun.com/db/doc/802-5693/6i9edqka5?l=zh&a=view > > Alternative b: convince ghc to link objects in stages, e.g., on a > > per-directory basis - gnu's ld seems to support at least this kind > > of partial linking (-i/-r). 
> > Not quite as nice as a fully incremental linker, but would probably save our day.. > Yes, this works fine. We use it to build the libraries for GHCi. Presumably directed via Makefiles? Could this please be automated for ghc --make? Thanks, Claus
RE: incremental linking?
> More fun with Haskell-in-the-large: linking time has become the > main bottleneck in our development cycle. The standard solution > would be to use an incremental linker, but it seems that gnu does > not yet support this:-| Hmm, I've never heard of linking being a bottleneck. Even GHC itself links in about 3-4 seconds here. One common problem is that linking on a network filesystem takes a *lot* longer than linking objects from a local disk. It's always a good idea to keep the build tree on the local disk, even if the sources are NFS-mounted. > Alternative a: use someone else's incremental linker, e.g., Sun's > ild (ghc's -pgml option appears to have its own idea about option > formatting, btw) - this doesn't seem to work - should it? You'd probably want to call the incremental linker directly rather than using GHC - what exactly does it do, BTW? What files does it generate? > Alternative b: convince ghc to link objects in stages, e.g., on a > per-directory basis - gnu's ld seems to support at least this kind > of partial linking (-i/-r). Not quite as nice as a fully incremental > linker, but would probably save our day.. Yes, this works fine. We use it to build the libraries for GHCi. Cheers, Simon