Re: configure adds -std=gnu++11 to CXX variable

2024-05-29 Thread Tom Tromey
> "Jason" == Jason Merrill  writes:

Jason> Thanks, though I don't think all that code needs to go;
Jason> AC_PROG_CXX_STDCXX_EDITION_TRY still looks useful for a project that
Jason> relies on features from a particular standard.  We just don't want
Jason> AC_PROG_CXX to invoke it.

I didn't read the macro but there's also config/ax_cxx_compile_stdcxx.m4.
gdb uses this to make sure C++17 is available.

Tom


Re: Updated Sourceware infrastructure plans

2024-05-01 Thread Tom Tromey
> Do you (or others) have any thoughts about GitLab FOSS?

Dunno about the FOSS edition specifically, but I've used many review
tools in anger in the last 5 years: github, gitlab, gerrit, phabricator,
and a couple that ran in bugzilla ("MozReview", not sure if it had some
other name; and a second one that I think was nameless).

For the most part they are pretty similar, IMO, and just the gloss
differs.  I didn't like some aspects of phabricator, but I no longer fully
recall what.  Maybe its support for patch series was weaker.

For github/gitlab, I think it's probably nicer if you're also using
their bug tracker.  Those also seem totally fine by and large.  Nicer
than bugzilla in some ways (nicer / more responsive UI), worse in others
(searching is not as good).

Tom


Re: Updated Sourceware infrastructure plans

2024-05-01 Thread Tom Tromey
Mark> See also https://sourceware.org/bugzilla/show_bug.cgi?id=30997
Mark> We really should automate this. There are several people running
Mark> scripts by hand. The easiest would be to simply run it from a git
Mark> hook.  patchwork comes with a simple script that just calculates the
Mark> hash and pings patchwork, which can then mark the patch associated
Mark> with that hash as committed. If people really believe calculating a
Mark> hash is too much work from a git hook then we can also simply run it
Mark> from builder.sourceware.org. We already run a builder for each commit
Mark> anyway. It would just be one extra build step checking the commit
Mark> against patchwork.

There's just no possibility this approach will work for gdb.  It can't
reliably recognize when a series is re-sent, or when patches land that
are slightly different from what was submitted.  Both of these are
commonplace events in gdb.

Tom


Re: Updated Sourceware infrastructure plans

2024-04-23 Thread Tom Tromey
> Indeed.  Though Patchwork is another option for patch tracking, that
> glibc seem to be having success with.

We tried this in gdb as well.  It was completely unworkable -- you have
to manually clear out the patch queue, meaning it's normally full of
patches that already landed.  I know glibc has success with it, but I
wouldn't consider it for gdb unless it gained some new abilities.

Tom


Re: Updated Sourceware infrastructure plans

2024-04-22 Thread Tom Tromey
Jason> Someone mentioned earlier that gerrit was previously tried
Jason> unsuccessfully.

We tried it in gdb and then abandoned it.  We tried to integrate it
into the traditional gdb development style, having it send email to
gdb-patches.  I found those emails somewhat hard to read, and in the end
we agreed not to use it.

I've come around again to thinking we should probably abandon email
instead.  For me the main benefit is that gerrit has patch tracking,
unlike our current system, where losing patches is fairly routine.

Jason> I think this is a common pattern in GCC at least: someone has an
Jason> idea for a workflow improvement, and gets it working, but it
Jason> isn't widely adopted.

It essentially has to be mandated, IMO.

For GCC this seems somewhat harder since the community is larger, so
there's more people to convince.

Tom


Re: Updated Sourceware infrastructure plans

2024-04-22 Thread Tom Tromey
> "Frank" == Frank Ch Eigler  writes:

>> [...]  I suggest that a basic principle for such a system is that it
>> should be *easy* to obtain and maintain a local copy of the history
>> of all pull requests.  That includes all versions of a pull request,
>> if it gets rebased, and all versions of comments, if the system
>> allows editing comments.  A system that uses git as the source of
>> truth for all the pull request data and has refs [...]

Frank> Do you know of a system with these characteristics?

Based on:

https://gerrit-review.googlesource.com/Documentation/dev-design.html#_notedb

... it sounds like this is what gerrit does.

Tom


Re: Sourceware mitigating and preventing the next xz-backdoor

2024-04-03 Thread Tom Tromey
> "Florian" == Florian Weimer  writes:

Florian> Everyone still pushes their own patches, and there are no
Florian> technical countermeasures in place to ensure that the pushed version is
Florian> the reviewed version.

This is a problem for gdb as well.

Probably we should switch to some kind of pull-request model, where
patches can only be landed via the UI, after sufficient review; and
where all generated files are regenerated by the robot before checkin.
(Or alternatively some CI runs and rejects patches where they don't
match.)

Tom


Re: [RFC] add regenerate Makefile target

2024-03-19 Thread Tom Tromey
> not sure if the current autoregen.py is in sync with that?

I'm curious why "autoreconf -f" is insufficient.
It seems to me that this should work.

> Also... I discovered the existence of an automake rule:
> am--refresh which IIUC is intended to automate the update of Makefile
> and its dependencies.

Don't use that rule directly.  It's an implementation detail and
shouldn't be relied on.

thanks,
Tom


Re: [RFC] add regenerate Makefile target

2024-03-15 Thread Tom Tromey
> "Eric" == Eric Gallager  writes:

Eric> Also there are the files generated by cgen, too, which no one seems to
Eric> know how to regenerate, either.

I thought I sent out some info on this a while ago.

Anyway what I do is make a symlink to the cgen source tree in the
binutils-gdb source tree, then configure with --enable-cgen-maint.
Then I make sure to build with 'make GUILE=guile3.0'.

It could be better but that would require someone to actually work on
cgen.

Eric> And then in bfd there's that chew
Eric> program in the doc subdir. And then in the binutils subdirectory
Eric> proper there's that sysinfo tool for generating sysroff.[ch].

gdb used to use a mish-mash of different approaches, some quite strange,
but over the last few years we standardized on Python scripts that
generate files.  They're written to be seamless -- just invoke in the
source dir; the output is then just part of your patch.  No special
configure options are needed.  On the whole this has been a big
improvement.

Tom


Re: Using std types and functions within GCC

2024-03-15 Thread Tom Tromey
> "David" == David Malcolm via Gcc  writes:

David> For example, there's at
David> least one place where I'd have used std::optional, but that's C++14 and
David> so unavailable.

FWIW, gdb had its own gdb::optional (which was really just a
stripped-down copy of the one from libstdc++) to fill exactly this need,
at least until we moved to C++17 this year.  If you need it you could
easily lift it from the gdb repository.

Tom


Re: lambda coding style

2024-01-11 Thread Tom Tromey
> "Jason" == Jason Merrill via Gcc  writes:

Jason> I think we probably want the same formatting for lambdas in function
Jason> argument lists, e.g.

Jason> algorithm ([] (parms)
Jason>   {
Jason> return foo;
Jason>   });

Jason> Any other opinions?

FWIW gdb did pretty much this same thing.  Our rules are documented
here:

https://sourceware.org/gdb/wiki/Internals%20GDB-C-Coding-Standards#Indentation_of_lambdas_as_parameters

There's a special case in here where a call takes a single lambda as the
last argument -- gdb indents that in a more block-like way.
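
Roughly, and purely as an illustration of the two styles (the exact
rules are on the wiki page above; run_deferred here is a made-up
helper), it looks like:

#include <algorithm>
#include <vector>

/* Hypothetical helper whose only argument is a callable.  */
template<typename F>
static void
run_deferred (F func)
{
  func ();
}

static void
example ()
{
  std::vector<int> v = { 1, -2, 3 };

  /* General rule: the lambda body lines up under the lambda itself.  */
  auto n = std::count_if (v.begin (), v.end (),
                          [] (int x)
                          {
                            return x > 0;
                          });

  /* Special case: a single lambda as the last (here only) argument is
     indented in a more block-like way.  */
  run_deferred ([&] ()
    {
      ++n;
    });
}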

Tom


Re: [PATCH v2 3/3] p1689r5: initial support

2022-11-01 Thread Tom Tromey
> "Ben" == Ben Boeckel via Gcc-patches  writes:

Ben> - `-fdeps-file=` specifies the path to the file to write the format to.

I don't know how this output is intended to be used, but one mistake
made with the other dependency-tracking options was that the output file
isn't created atomically.  As a consequence, Makefiles normally have to
work around this to be robust.  If that's a possible issue here then it
would be best to handle it in this patch.
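
For reference, the usual fix is the write-to-a-temporary-then-rename
idiom; a minimal sketch (the function name is made up, and this assumes
POSIX rename() semantics) would be something like:

#include <stdio.h>

/* Write DEPS to PATH via a temporary file, so an interrupted compiler
   never leaves a truncated dependency file behind for make to read.  */
static int
write_deps_atomically (const char *path, const char *deps)
{
  char tmp[4096];
  if (snprintf (tmp, sizeof tmp, "%s.tmp", path) >= (int) sizeof tmp)
    return -1;
  FILE *f = fopen (tmp, "w");
  if (f == NULL)
    return -1;
  if (fputs (deps, f) == EOF || fclose (f) == EOF)
    {
      remove (tmp);
      return -1;
    }
  /* rename () atomically replaces PATH on POSIX systems.  */
  return rename (tmp, path);
}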

Tom


Re: Issue with pointer types marked with scalar_storage_order

2021-05-06 Thread Tom Tromey
> "Ulrich" == Ulrich Weigand via Gcc  writes:

Ulrich> If we do want to byte-swap pointer types, then I guess we need
Ulrich> to still fix the debug info problem, which I guess would at a
Ulrich> minimum require extending DWARF to allow DW_AT_endianity as an
Ulrich> attribute to DW_TAG_pointer_type (and then probably also
Ulrich> DW_TAG_reference_type, DW_TAG_rvalue_reference_type,
Ulrich> DW_TAG_ptr_to_member_type and possibly others).  Also, the
Ulrich> implementation in GDB would need to be changed accordingly.

Ulrich> Any comments or suggestions on what to do here?

This kind of extension is pretty normal in DWARF, so I think it isn't a
big deal to emit it.  Consumers are ordinarily expected to simply ignore
things they don't understand.

Tom


Re: dejagnu version update?

2020-05-14 Thread Tom Tromey
> "Rob" == Rob Savoye  writes:

Rob>   Not that team, the folks I talked to thought I was crazy for wanting
Rob> to refactor it. :-)

I don't think refactoring dejagnu is crazy, but I think it's pretty hard
to imagine rewriting the gdb test suite in Python.  It's 260 KLOC.

Tom


Re: Not usable email content encoding

2020-04-23 Thread Tom Tromey
> "Segher" == Segher Boessenkool  writes:

Segher> My point was that this should *never* be part of patches, already.

FWIW, I use a few scripts so that I can keep ChangeLogs as files.
That's what I do when working on gdb.

https://github.com/tromey/git-gnu-changelog

This is easier on the whole, IME, because it means there is no extra
manual step before pushing.

Of course, better would be to remove ChangeLogs entirely (including not
putting anything like them into a commit message), because they are
largely not useful and are just make-work.  Again IMNSHO -- I know there
are some folks who read them, but I basically never have since switching
to git.

Tom


Re: Not usable email content encoding

2020-03-19 Thread Tom Tromey
> "Jonathan" == Jonathan Wakely via Gcc  writes:

[gerrit]
Jonathan> I think it also only very recently gained the ability to group a
Jonathan> series of patches together, as it wants a single commit per review.

We tried gerrit for gdb for a while, and in the end decided to drop it.

The main issue for us is that gerrit's support for patch series is poor.
In particular, it doesn't have any way to provide a cover letter (like
git send-email --compose), but in gdb we rely on these to provide an
introduction to the series -- to help justify the series overall and
orient the reviewers.

Here's the gerrit bug:

https://bugs.chromium.org/p/gerrit/issues/detail?id=924

Based on this I think we all assumed that the situation wouldn't
improve.

Also, gerrit was pretty bad about threading messages, so it became quite
hard to follow progress in email (but following all patches in the web
interface is very difficult, a problem shared by all these web UIs).

Phabricator, IME, is even worse.  Last I used it, it had extremely bad
support for patch series, to the extent that Mozilla had to write a tool
wrapping Phabricator to make it workable.

In gdb we've also considered using an updated patchworks -- with a
gerrit-like commit hook it would be possible to automatically close
patches when they land, which is patchworks' biggest weakness.  (In gdb
land we're more concerned with tracking unreviewed patches than with
online patch review.)  However, this probably would not be a good match
for the new From munging, because it would mean extra (forgettable)
steps when trying to apply patches from the patchworks repository.

TL;DR we're doomed,
Tom


Re: Git ChangeLog policy for GCC Testsuite inquiry

2020-02-07 Thread Tom Tromey
> "Jason" == Jason Merrill  writes:

Jason> I omit ChangeLogs by adding ':!*/ChangeLog' to the end of the git
Jason> send-email command.  I don't remember where I found that incantation.

Cool, I did not know about this.

FWIW if you have the ChangeLog merger available, it's actually more
convenient if the patch includes the ChangeLog, because then applying it
with "git am" does the right thing.  Without this you have to edit the
ChangeLogs by hand instead.

Tom


Re: Git ChangeLog policy for GCC Testsuite inquiry

2020-02-07 Thread Tom Tromey
> "Jonathan" == Jonathan Wakely  writes:

Jonathan> I have a script that does the opposite, which I've been using for
Jonathan> years. I edit the ChangeLog files as before, and a Git
Jonathan> prepare-commit-msg hook extracts the top changelog entry from each
Jonathan> file in the commit and pre-populates my commit msg with those entries.

Jonathan> To use it just put the two prepare-commit-msg* files from
Jonathan> https://gitlab.com/miscripts/miscripts/-/tree/master/gcc into your
Jonathan> $GCC_SRC/.git/hooks directory.

I do this too, combined with scripts to handle merge ChangeLogs
specially during rebase; scripts to update the dates on ChangeLog files
before pushing; and a wrapper for "git send-email" that strips the
ChangeLogs from the email (part of gdb patch submission rules).

You can find it all here

https://github.com/tromey/git-gnu-changelog

Tom


Re: Moving to C++11

2019-09-26 Thread Tom Tromey
> "Jason" == Jason Merrill  writes:

Jason> Note that std::move is from C++11.

>> I'm not too worried about requiring even a C++14 compiler, for the
>> set of products we still release latest compilers we have newer
>> GCCs available we can use for building them (even if those are
>> not our primary supported compilers which would limit us to
>> GCC 4.8).

Jason> I wouldn't object to C++14, but there's nothing in there I
Jason> particularly want to use, so it seems unnecessary.

>> Note I'd still not like to see more C++ feature creep into general
>> non-container/infrastructure code, C++ is complex enough as-is.

Jason> I agree for rvalue references.  I want to start using C++11 'auto' in
Jason> local variable declarations.

FWIW in gdb we went with C++11, because it was the version that offered
the most useful upgrades -- for me those were mainly move and foreach,
but 'auto' is sometimes nice as well.

Tom


Re: Adding -Wshadow=local to gcc build rules

2019-09-18 Thread Tom Tromey
> "Bernd" == Bernd Edlinger  writes:

Bernd> I'm currently trying to add -Wshadow=local to the gcc build rules.
Bernd> I started with -Wshadow, but gave up that idea immediately.

Bernd> As you could expect the current code base has plenty of shadowed
Bernd> local variables.  Most are trivial to resolve, some are less trivial.
Bernd> I am not finished yet, but it is clear that it will be a rather big
Bernd> patch.

Bernd> I would like to ask you if you agree that would be a desirable step,
Bernd> in improving code quality in the gcc tree.

We did this in gdb and it was worthwhile.  According to my notes, it
found 3 real bugs (though one was by chance).  You can see what else we
tried here: https://tromey.com/blog/?p=1035

Tom


Re: [PATCH] Do not warn with warn_unused_result for alloca(0).

2019-06-13 Thread Tom Tromey
> "Jeff" == Jeff Law  writes:

Jeff> I'd like to move C-alloca support to the ash heap of history.  But I'm
Jeff> not sure we can realistically do that.

Are there still platforms or compilers in use where it's needed?

For gdb I was planning to just remove these calls.

Tom


Re: Indicating function exit points in debug data

2019-03-20 Thread Tom Tromey
> "Segher" == Segher Boessenkool  writes:

>> Section 6.2.5.2 outlines the line number information state machine's
>> opcodes. One of them is "DW_LNS_set_epilogue_begin". Its definition
>> is:

Segher> How should this work with shrink-wrapping?  The whole point of that is
Segher> you do not tear down the frame after all other code, etc.  I don't see
Segher> how we can do better than putting this DW_LNS_set_epilogue_begin right
Segher> before the actual return -- and that is after all the tear down etc.

I think it's fine if the epilogue marker is inexact or missing from
optimized code, because (1) that's the current state, and (2) it doesn't
really make sense to talk about an epilogue in some cases.

Similarly, IMO it is fine not to worry about non-local exits.  You can
already catch exceptions and examine them in gdb -- the epilogue marker
feature is mostly to address the unmet need of wanting to set a
breakpoint at the end of a function.

Ideally, in -O0 / -Og code, the marker would be reliable where it
appears.

It would be great if there was a way to indicate the location of the
value-to-be-returned in the DWARF.  That way gdb could extract it at the
epilogue point.  AFAIK this would require a DWARF extension.

thanks,
Tom


Re: ChangeLog's: do we have to?

2018-07-05 Thread Tom Tromey
> "Florian" == Florian Weimer  writes:

Florian> To some degree, it's a bit of a chicken-and-egg problem because
Florian> “git am” tends to choke on ChangeLog patches (so we can't
Florian> really use it today)

FWIW, installing a ChangeLog merge driver fixes this.
I use git-merge-changelog from gnulib.  If you want to use git am and
avoid manually copying ChangeLog text from the commit message back into
the appropriate files, then it's much better to install the driver and
include the ChangeLog diffs in the patch submission.

Tom


Re: gdb 8.x - g++ 7.x compatibility

2018-02-07 Thread Tom Tromey
> "Dan" == Daniel Berlin  writes:

Dan> If there are multiple types named Foo<2u>, DWARF needs to be extended to
Dan> allow a pointer from the vtable debug info to the class type debug info
Dan> (unless they already added one).

This is what we did for Rust.

Rust doesn't have a stable ABI yet, so using gdb's current approach --
having the debugger use details of the ABI in addition to the debug info
-- wasn't an option.

So, instead, the Rust compiler emits DWARF for the vtable and associates
the vtable with the concrete type for which it was emitted.  This
required a minor DWARF extension.

I think C++ could probably do something along these lines as well.

The current gdb approach hasn't been really solid since function-local
classes were added to C++.  IIRC there are bugs in gdb bugzilla about
this.  These kinds of problems are, I think, completely avoided by a
DWARF-based approach.

Tom


Re: GCC-Bridge: A Gimple Compiler targeting the JVM

2016-02-03 Thread Tom Tromey
Manuel> Everything is possible! Not sure how hard it would be, though. As
Manuel> said, GJC

"gcj".

Manuel> the Java FE, was doing something similar sometime ago, but
Manuel> it has perhaps bit-rotted now.

It used to, but when we moved to using ecj for parsing java source, we
removed (IIRC) the bytecode generator.

Note that it only ever accepted trees generated by the java front end.
You couldn't compile trees from other front ends to java byte code.
If you wanted that you had to resort to a more extreme measure like
mips2java.

Tom


Re: ivopts vs. garbage collection

2016-01-11 Thread Tom Tromey
> "Michael" == Michael Matz  writes:

Michael> Well, that's a hack.  A solution is to design something that
Michael> works generally for garbage collected languages with such
Michael> requirements instead of arbitrarily limiting transformations
Michael> here and there.  It could be something like the notion of
Michael> derived pointers, where the base pointer needs to stay alive as
Michael> long as the derived pointers are.

This was done once in GCC, for the Modula 3 compiler.
There was a paper about it, but I can't find it any more.

The basic idea was to emit a description of the stack frame that their
GC could read.  They had a moving GC that could use this information to
rewrite the frame when moving objects.

Tom


Re: building gcc with macro support for gdb?

2015-12-05 Thread Tom Tromey
Martin> The one that's more difficult is 18881 where the debugger cannot
Martin> resolve calls to functions overloaded on the constness of the
Martin> argument.  Do you happen to have a trick for dealing with that
Martin> one?

Nothing really convenient to use.  Sometimes you can get it to do the
right thing by using special gdb syntax to pick the correct overload;
but of course that's a pain.

Tom


Re: building gcc with macro support for gdb?

2015-12-04 Thread Tom Tromey
> "Martin" == Martin Sebor  writes:

Martin> To get around these, I end up using info macro to print the
Martin> macro definition and using whatever it expands to instead.  I
Martin> wonder if someone has found a more convenient workaround.

For some of these, like the __builtin_offsetof and __null problems, you
can add a new "macro define" to gcc's gdbinit.in.

In fact I already see __null there, so maybe you don't have the correct
add-auto-load-safe-path setting in your ~/.gdbinit.

Tom


Re: incremental compiler project

2015-09-04 Thread Tom Tromey
Manuel> The overall goal of the project is worthwhile, however, it is unclear
Manuel> whether the approach envisioned in the wiki page will lead to the
Manuel> desired benefits. See http://tromey.com/blog/?p=420 which is the last
Manuel> status report that I am aware of.

Yeah.  I stopped working on that project when my manager at the time
asked me to work on gdb instead.

I think the goal of that project is still relevant, in that C++
compilation is still just too darn slow.  Projects today (e.g., firefox)
still do the "include the .cc files" trick to get a compilation
performance boost.

On the other hand, I'm not sure the incremental compiler is the way to
go.  It is a complicated approach.

Perhaps better would be to tackle things head on; that is, push harder
for modules in C and C++ and fix the problem at its root.

Tom


Re: 33 unknowns left

2015-08-27 Thread Tom Tromey
>> mkoch = mkoch 
Jeff> Michael Koch?  konque...@gmx.de/

Yes; and he has an entry in /etc/passwd, so maybe the conversion script
has a bug?

Tom


Re: Offer of help with move to git

2015-08-24 Thread Tom Tromey
Eric> In the mean time, I'm enclosing a contributor map that will need to be
Eric> filled in whoever does the conversion.  The right sides should become
Eric> full names and preferred email addresses.

It's probably worth starting with the map I used when converting gdb.
There is a lot of overlap between the sets of contributors.

See the file "Total-merged-user-map" here:

https://github.com/tromey/gdb-git-migration

Tom


Re: How to implement '@' GDB-like operator for libcc1

2015-03-16 Thread Tom Tromey
> "Jan" == Jan Kratochvil  writes:

Jan> I have problems implementing '@' into GCC, could you suggest at which place
Jan> should I call build_array_type_nelts()?  Or is it the right way at all?

Jan> +case ATSIGN_EXPR:
Jan> +  orig_op0 = op0 = TREE_OPERAND (expr, 0);
Jan> +  orig_op1 = op1 = TREE_OPERAND (expr, 1);
[...]
Jan> +  ret = build_array_type_nelts (TREE_TYPE (op0), tree_to_uhwi (op1));

It seems like there should be some kind of reference to &op0, that is,
lowering ATSIGN_EXPR to *(typeof(OP0)[OP1]*)(&OP0).

Also, I think this has to consider the case where OP1 is not a constant.
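
For what it's worth, here is a small illustration of what that lowering
means in plain C terms (the variable names are just for the example):

#include <stdio.h>

int
main (void)
{
  int buf[5] = { 1, 2, 3, 4, 5 };
  int *op0 = buf;

  /* In gdb, "*op0@3" views the target of op0 as an array of 3 elements,
     i.e. roughly *(int (*)[3]) &op0[0] -- an array type built from the
     type of OP0 and the (here constant) count OP1.  */
  int (*as_array)[3] = (int (*)[3]) &op0[0];

  for (int i = 0; i < 3; i++)
    printf ("%d\n", (*as_array)[i]);
  return 0;
}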

Tom


Re: [PATCH] gcc parallel make check

2014-09-11 Thread Tom Tromey
> "Jakub" == Jakub Jelinek  writes:

Jakub> I fear that is going to be too expensive, because e.g. all the
Jakub> caching that dejagnu and our tcl stuff does would be gone, all
Jakub> the tests for lp64 etc.  would need to be repeated for each test.

In gdb I arranged to have this stuff saved in a special cache directory.
See gdb/testsuite/lib/cache.exp for the mechanism.

Tom


Re: Fwd: Macros taking a function as argument - and evaluating it at least twice

2013-11-13 Thread Tom Tromey
> "Steven" == Steven Bosscher  writes:

Steven> Here is a non-comprehensive list of macros that are used with a
Steven> function passed to the macro's argument, and the macro evaluates that
Steven> argument at least twice:
[...]
Steven> Not sure what to do about them (if anything) but I don't think this is
Steven> intended...

If someone implemented PR 57612 then we could detect these automatically :-)

Tom


Re: [RFC] Replace Java with Go in default languages

2013-11-13 Thread Tom Tromey
> "Jeff" == Jeff Law  writes:

Jeff> Given the problems Ian outlined around adding Go to the default
Jeff> languages and the build time issues with using Ada instead of Java,
Jeff> I'm unsure how best to proceed.

IIRC from upthread the main reason to keep one of these languages is
-fnon-call-exceptions testing.

How about just writing some tests for that, in C++?  It may not be quite
as good but it seems like it could be a reasonable first pass; with more
obscure issues caught by subsequent testing, much as is the case for
non-core targets.

Tom


Re: [RFC] Replace Java with Go in default languages

2013-11-13 Thread Tom Tromey
> "Richard" == Richard Biener  writes:

Richard> Whatever the "core language runtime" would be - I'm somewhat a
Richard> Java ignorant.

The core is quite large, unless you are also willing to track through
all the code and chop out the bits you don't want for testing.  This
would mean having a second class library just for testing purposes.

For example, the core includes all of the networking support, because
(from memory...)  Object refers to ClassLoader which refers to URL.

Tom


Re: [Patch: libcpp, c-family, Fortran] Re: Warning about __DATE__ and __TIME__

2013-11-06 Thread Tom Tromey
> "Tobias" == Tobias Burnus  writes:

Tobias> Updated version attached – after bootstrapping and regtesting on
Tobias> x86-64-gnu-linux
Tobias> OK?

Sorry, I didn't notice this until today.

Tobias> @@ -925,7 +928,8 @@ enum {
Tobias>CPP_W_NORMALIZE,
Tobias>CPP_W_INVALID_PCH,
Tobias>CPP_W_WARNING_DIRECTIVE,
Tobias> -  CPP_W_LITERAL_SUFFIX
Tobias> +  CPP_W_LITERAL_SUFFIX,
Tobias> +  CPP_W_DATE_TIME
Tobias>  };

I think this change requires a parallel change to c-family/c-common.c.

Tobias> + cpp_warning (pfile, CPP_W_DATE_TIME, "Macro \"%s\" might prevent "
Tobias> +  "reproduce builds", NODE_NAME (node));

Tobias> +   cpp_warning (pfile, CPP_W_DATE_TIME, "Macro \"%s\" might prevent "
Tobias> +"reproduce builds", NODE_NAME (node));

I think "reproduce" should be changed to "reproducible" in these warnings.

Tom


Re: make install broken on the trunk?

2013-10-21 Thread Tom Tromey
> "Matthias" == Matthias Klose  writes:

Matthias> A make install from trunk 20131020 seems to be broken, at
Matthias> least when building with Go (last time I successfully
Matthias> installed was 20130917).  However, even without Go enabled,
Matthias> dfa.c is rebuilt and and then the depending binaries are
Matthias> rebuilt. Rebuilding go1 ends with

If you are using bootstrap-lean then this is PR 58572.

Tom


Re: automatic dependencies

2013-10-02 Thread Tom Tromey
> "Eric" == Eric Botcazou  writes:

>> Sorry, I think it requires a review.
>> I'll send it to gcc-patches.

Eric> IMO it clearly falls into the obvious category.

I wasn't so sure; but in any case Jakub quickly approved it and I have
checked it in.

Tom


Re: automatic dependencies

2013-10-02 Thread Tom Tromey
> "Eric" == Eric Botcazou  writes:

Eric> Ping?

Sorry, I think it requires a review.
I'll send it to gcc-patches.

Tom


Re: automatic dependencies, adding new files in branch...

2013-10-01 Thread Tom Tromey
> "Basile" == Basile Starynkevitch  writes:

Basile> I want to merge the current trunk into the MELT branch, and I
Basile> have some trouble understanding how one should add new files
Basile> into GCC (i.e. into a branch)

Nothing much has changed there.  You just don't need to list any
discoverable dependencies.

Basile> MELT_H= $(srcdir)/melt/generated/meltrunsup.h \
Basile> $(srcdir)/melt-runtime.h \
Basile> melt-predef.h 

You can drop this.

Basile> melt-runtime.args:  $(MELT_RUNTIME_C) melt-run-md5.h melt-runtime-params-inc.c $(C_COMMON_H) \
Basile> $(CONFIG_H) $(SYSTEM_H) $(TIMEVAR_H) $(TM_H) $(TREE_H) \
Basile> $(GGC_H) $(BASIC_BLOCK_H) $(GIMPLE_H) $(CFGLOOP_H) \
Basile> tree-pass.h $(MELT_H) \
Basile> $(srcdir)/melt/generated/meltrunsup.h \
Basile> $(srcdir)/melt/generated/meltrunsup-inc.cc \
Basile> gt-melt-runtime.h $(PLUGIN_H) $(TOPLEV_H) $(VERSION_H) \
Basile>  Makefile

I don't understand why this .args file has all these dependencies.

Surely they are dependencies for melt-runtime.o, not for this file.
That is, if tree-pass.h changes, there is no need for melt-runtime.args
to change.

I think you can drop most of these dependencies, which is good because
probably (I didn't look) some of those *_H variables have been deleted.

Basile> melt-runtime.o:  $(MELT_RUNTIME_C) melt-run-md5.h melt-runtime-params-inc.c  $(C_COMMON_H) \
Basile> $(CONFIG_H) $(SYSTEM_H) $(TIMEVAR_H) $(TM_H) $(TREE_H) \
Basile> $(GGC_H) $(BASIC_BLOCK_H) $(GIMPLE_H) $(CFGLOOP_H) \
Basile> tree-pass.h $(MELT_H) \
Basile> $(srcdir)/melt/generated/meltrunsup.h \
Basile> $(srcdir)/melt/generated/meltrunsup-inc.cc \
Basile> gt-melt-runtime.h $(PLUGIN_H) $(TOPLEV_H) $(VERSION_H) \
Basile> | melt-runtime.args
Basile> ls -l melt-runtime.args
Basile> $(COMPILER) -c $(shell cat melt-runtime.args)  $(OUTPUT_OPTION)

I think this can be reduced to:

melt-runtime.o: $(MELT_RUNTIME_C) melt-runtime-params-inc.c | melt-runtime.args

CFLAGS-melt-runtime.o = $(shell cat melt-runtime.args)

Maybe you need more explicit dependencies in there; it depends on which
files are generated.

You have to make sure your new .o is in ALL_HOST_OBJS.
That is how the dependencies are found by make; see the final paragraph
in gcc/Makefile.in.

Basile> Do you have any insights, in particular hints generic enough to
Basile> be valuable for other branches?

Not really.  The change really just affects dependencies that can be
discovered by the C or C++ compiler.  It doesn't affect most other
things.

Basile> Perhaps adding a comment in the trunk's Makefile.in might be
Basile> helpful too

I am happy to add comments, but I'm probably the wrong person to come up
with which specific ones are useful.  That is, I already added all the
comments I thought were necessary.

Tom


Re: automatic dependencies

2013-09-30 Thread Tom Tromey
Tom> 2013-09-30  Tom Tromey  
Tom>* Makefile.in (-DTOOLDIR_BASE_PREFIX): Use $(if), not $(and).

I didn't look at this until later and saw that Emacs guessed wrong.
Here's the corrected ChangeLog entry.

2013-09-30  Tom Tromey  

* Makefile.in (DRIVER_DEFINES): Use $(if), not $(and).

Tom


Re: automatic dependencies

2013-09-30 Thread Tom Tromey
Eric> Are there any additional prerequisites on the GNU make version?
Eric> On a machine with GNU make 3.80 installed, the bootstrap
Eric> consistently fails with:

Sorry about this.

Eric>   $(and $(SHLIB),$(filter yes,yes),-DENABLE_SHARED_LIBGCC) \

I looked in the GNU make NEWS file and found that $(and ..) was added in
3.81.

In this particular case it looked easy to reimplement using $(if).

Could you please try this patch with make 3.80?

thanks,
Tom

2013-09-30  Tom Tromey  

* Makefile.in (-DTOOLDIR_BASE_PREFIX): Use $(if), not $(and).

Index: Makefile.in
===
--- Makefile.in (revision 202912)
+++ Makefile.in (working copy)
@@ -1924,7 +1924,7 @@
   -DTOOLDIR_BASE_PREFIX=\"$(libsubdir_to_prefix)$(prefix_to_exec_prefix)\" \
   @TARGET_SYSTEM_ROOT_DEFINE@ \
   $(VALGRIND_DRIVER_DEFINES) \
-  $(and $(SHLIB),$(filter yes,@enable_shared@),-DENABLE_SHARED_LIBGCC) \
+  $(if $(SHLIB),$(if $(filter yes,@enable_shared@),-DENABLE_SHARED_LIBGCC)) \
   -DCONFIGURE_SPECS="\"@CONFIGURE_SPECS@\""
 
 CFLAGS-gcc.o += $(DRIVER_DEFINES)


Re: automatic dependencies

2013-09-25 Thread Tom Tromey
Diego> Thank you, thank you, thank you!  A long time in the making, but
Diego> I'm glad you persevered.

FWIW, I had given up on this patch way back when; but then was somewhat
reinvigorated by the Cauldron and happened to notice a few "missing
dependency" bug fixes on the list...

Tom


Re: automatic dependencies

2013-09-25 Thread Tom Tromey
>>>>> "Tom" == Tom Tromey  writes:

Tom> I wanted to mention this explicitly, since last time around this series
Tom> tripped across a GNU make bug.  If you see any problems, please report
Tom> them and CC me.  I will look into them as soon as I am able.

Oops, I meant to write a bit more here before sending.


You will need to do a clean build in order for the dependencies to
actually take effect.  In this approach, dependencies are computed as a
side effect of the build -- but obviously that was not true before the
patch series.  So, after you update, you effectively have no
dependencies.


If you look in the tree you will see there are still dependencies for
host objects and in various t-* files in config/.

The latter are easy to convert; see the "t-i386" and "t-glibc" patches.
The key is to ensure that (1) dependencies are in fact created (search
the .deps directory), (2) dependencies are used (see the DEPFILES code
at the end of gcc/Makefile.in), and (3) any generated files are either
in generated_files or have an explicit dependency (the t-i386 change has
an example of the latter).

I don't plan to convert the host objects.  However I think it isn't
extremely hard, following the existing model.


thanks,
Tom


automatic dependencies

2013-09-25 Thread Tom Tromey
Hi all.

I've checked in the automatic dependency tracking patch series.

I wanted to mention this explicitly, since last time around this series
tripped across a GNU make bug.  If you see any problems, please report
them and CC me.  I will look into them as soon as I am able.

thanks,
Tom


Re: resurrecting automatic dependencies

2013-07-23 Thread Tom Tromey
> "Ian" == Ian Lance Taylor  writes:

Ian> So you should be good to go for Go.

Thanks.  I confirmed it works here.  I've merged this and pushed the
needed go/Make-lang.in change to my branch and built with a large -j on
gcc110 with success.

Tom


Re: resurrecting automatic dependencies

2013-07-23 Thread Tom Tromey
Tom> There may be more missing dependencies.  Please try out this branch if
Tom> you would.  You can report bugs to me, just send the build log.

I tried -j33 on a bigger machine and found a problem with Go.

The dependency patch uses the language Makefile conventions to add some
order-only dependencies to ensure that generated files are made early
enough (this code is actually already in gcc, but it has some latent
bugs).  In particular it uses the $(lang)_OBJS variable (via
ALL_HOST_OBJS).

However, Go does not set go_OBJS, so a sufficiently large -j setting
will cause a build failure, as a Go file is compiled before a generated
header.

Fixing this is simple enough in go/Make-lang.in; but this also has the
side effect of defining IN_GCC_FRONTEND for the various Go
compilations:

$(foreach file,$(ALL_HOST_FRONTEND_OBJS),$(eval CFLAGS-$(file) += -DIN_GCC_FRONTEND))

... which causes build failures for go-backend.c (uses rtl.h) and
go-lang.c (uses except.h), since with this defined, certain headers are
prohibited.


A short term solution is to keep Go using explicit dependencies.

For a long term solution ... well, I'm CCing Ian.

The except.h include doesn't seem to be needed.  At least, I removed it
and go-lang.c still compiled.  The go-backend.c problem looks harder,
though.  Thoughts?

Tom


Re: resurrecting automatic dependencies

2013-07-22 Thread Tom Tromey
> "Diego" == Diego Novillo  writes:

Diego> Have you any plans for other build system work?

Nope, no other plans.
This was just an unfinished item from long ago that Cauldron inspired me
to try to complete.

Tom


resurrecting automatic dependencies

2013-07-18 Thread Tom Tromey
Today I started resurrecting my old automatic dependency patch.

I decided, this time, to take a more incremental approach.  Thanks to
git, I made a patch series, rather than one monster patch.  Now we can
easily test various parts of the change to more easily notice if, or
when, we trip across the GNU make bug again.

I pushed the series to gcc.git tromey/auto-dependency-checking (which
was supposed to be 'tracking', but which I mistyped completely
unconsciously: the fingers have reasons which reason cannot know).
Anyway..

The branch for now only implements automatic dependency tracking for
host objects.  This is the most useful case.

The series on the branch is based on the observation that it is safe to
leave explicit dependencies in the Makefile during the conversion.
Conversions of various bits are done in separate patches.

I've tested this a bit and it works ok.  I tried clean builds with
various -jN options to try to provoke missing dependencies, and caught a
few bugs that way.  Usually bugs are just a missing pre-dependency and
are easy to fix.

There may be more missing dependencies.  Please try out this branch if
you would.  You can report bugs to me, just send the build log.


There are still a few things that I haven't done yet:

* Update the dependencies in files in config/
* Make auto dependency tracking work for build/*.o
* Remove all the *_H macros from Makefile.in
* Ada

I suppose at least removing the dead macros would be good to have.
The rest is nice but optional -- leaving them undone won't break anything.

Tom


Re: Should -Wmaybe-uninitialized be included in -Wall?

2013-07-09 Thread Tom Tromey
Andrew> I would question the appropriateness of using -Wall -Werror in
Andrew> production code.

Andreas> What matters is whether *some* stages of production code
Andreas> development use this combination of options.  It could
Andreas> certainly be argued whether it should also be a project's
Andreas> "configure" default, like currently the case for gdb.

gdb only enables it for the development branch, not for releases.  If
you're building from CVS you're expected to know how to either fix these
problems or disable -Werror.  Typically the fix is trivial; if you look
through the archives you'll see fixes along these lines.

Tom


Re: Way to tell in libcpp if something is a macro...

2013-06-26 Thread Tom Tromey
Jakub> Though, for all such changes consider what will happen if people
Jakub> compile with -save-temps, or preprocess separately from
Jakub> compilation (ccache etc.).

Yes, good point.
It is useful as a hack but doubtful in other ways.

Tom


Re: Way to tell in libcpp if something is a macro...

2013-06-26 Thread Tom Tromey
> "Ed" == Ed Smith-Rowland <3dw...@verizon.net> writes:

Ed> I have a situation where I would like to detect if a string is a
Ed> currently defined macro.

Ed> Something like a
Ed>   bool cpp_is_macro(const unsigned char *);
Ed> would be great.

Ed> Or perhaps I could construct something from the string and test that.

Ed> If something like this doesn't exist does anyone have some pointers on
Ed> how to make one for libcpp.

Call ht_lookup and convert to a cpp hash node, e.g., from grepping:

  return CPP_HASHNODE (ht_lookup (pfile->hash_table, 
  buf, bufp - buf, HT_ALLOC));

Then see if the node's 'type' field is NT_MACRO.

I think that should work.

See directives.c:do_ifdef for some bits.
E.g., you may consider marking the macro as "used".
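
Untested, but a minimal sketch of such a predicate (following the lookup
above, except using HT_NO_INSERT since this is only a query; the name
cpp_is_macro is of course yours to pick) might be:

#include "config.h"
#include "system.h"
#include "cpplib.h"
#include "internal.h"

/* Return true if the LEN bytes at STR name a currently defined macro.  */
static bool
cpp_is_macro (cpp_reader *pfile, const unsigned char *str, size_t len)
{
  hashnode node = ht_lookup (pfile->hash_table, str, len, HT_NO_INSERT);
  if (node == NULL)
    return false;
  return CPP_HASHNODE (node)->type == NT_MACRO;
}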

Tom


Re: If you had a month to improve gcc build parallelization, where would you begin?

2013-04-10 Thread Tom Tromey
> "Joern" == Joern Rennecke  writes:

Joern> The likelihood that a test depends on the outcome of the last few
Joern> n tests is rather low.  So you could run tests speculatively with an
Joern> incomplete set of defines, and then re-run them when you have
Joern> gathered all the results from the preceding tests to verify that
Joern> you have computed the right result.

I think there are things that can be parallelized without needing to do
any speculation.  For example, AC_CHECK_HEADERS is often invoked with
many header files.  These could in most cases be checked for in
parallel.  Similarly for AC_CHECK_FUNCS.

Beyond that you could define a general way to run checks in parallel and
then just change the gcc configure script to use it, using knowledge of
the code to decide what dependencies there are.

Whether or not this would yield a big enough benefit, though ...

I had the vague impression that this was being looked at in upstream
autoconf, but I don't actually follow it any more.

Tom


Re: Debugging C++ Function Calls

2013-03-27 Thread Tom Tromey
> "Lawrence" == Lawrence Crowl  writes:

Lawrence> Are the symbol searches specific to the scope context, or does it
Lawrence> search all globally defined symbols?

I am not totally certain in this case, but in gdb many searches are
global, so that "print something" works even if "something" is not
locally visible.

Lawrence> There is a weakness in the patch, in that the following is legal.
[...]

Thanks.

Tom


Re: Debugging C++ Function Calls

2013-03-26 Thread Tom Tromey
Richard> Did you consider using clang?
Richard> 

We may look at it after re-examining g++.
I think there are some reasons to prefer gcc.

Tom


Re: Debugging C++ Function Calls

2013-03-25 Thread Tom Tromey
> "Lawrence" == Lawrence Crowl  writes:

Tom> Sure, but maybe for a critique of the approach.  But only if you are
Tom> interested.

Lawrence> Sure, send it.

I think the intro text of this message provides the best summary of the
approach:

http://sourceware.org/ml/gdb-patches/2010-07/msg00284.html

Tom


Re: Debugging C++ Function Calls

2013-03-25 Thread Tom Tromey
> "Lawrence" == Lawrence Crowl  writes:

Lawrence> Hm.  I haven't thought about this deeply, but I think SFINAE may
Lawrence> not be less of an issue because it serves to remove candidates
Lawrence> from potential instantiation, and gdb won't be instantiating.
Lawrence> The critical distinction is that I'm not trying to call arbitrary
Lawrence> expressions (which would have a SFINAE problem) but call expressions
Lawrence> that already appear in the source.

Thanks.
I will think about it.

Lawrence> I agree that the best long-term solution is an integrated compiler,
Lawrence> interpreter, and debugger.  That's not likely to happen soon.  :-)

Sergio is re-opening our look into reusing GCC.
Keith Seitz wrote a GCC plugin to try to let us farm out
expression-parsing to the compiler.  This has various issues, some
because gdb allows various C++ extensions that are useful when
debugging; and also g++ was too slow.
Even if g++ can't be used we at least hope this time to identify some of
the things that make it slow and file a few bug reports...

Lawrence> I don't know anything about gdb internals, so it may not be helpful
Lawrence> for me to look at it.

Sure, but maybe for a critique of the approach.  But only if you are
interested.

Tom


Re: stabs support in binutils, gcc, and gdb

2013-01-03 Thread Tom Tromey
> "David" == David Taylor  writes:

David> It appears that STABS is largely in maintenance mode.  Are there any
David> plans to deprecate STABS support?  If STABS enhancements were made and
David> posted would they be frowned upon?  Or would they be reviewed for
David> possible inclusion in a future release?

In gdb things are rarely pre-emptively deprecated like this.
If someone wants to maintain the stabs code, then it will stay alive.
The most important thing is having a reasonably responsive maintainer --
it is the un-maintained code that tends to slowly rot and then
eventually be deleted.

Tom


Re: Unifying the GCC Debugging Interface

2012-11-27 Thread Tom Tromey
> "Gaby" == Gabriel Dos Reis  writes:

Richard> Just to add another case which seems to be not covered in the thread.
Richard> When dumping from inside a gdb session in many cases I cut&paste
Richard> addresses literally.  For overloading to work I'd need to write casts
Richard> in front of the inferior call argument.  That sounds ugly - so at least
Richard> keep the old interfaces as well.  Or rather for debugging purposes
Richard> provide python helpers rather than new inferior overloads.

Gaby> this means that we need an improvement from GDB.  This
Gaby> is not useful only to the small GCC community.  It is very useful to
Gaby> the wider GDB/C++ user community.

There is no way for gdb to do anything about this generically.
Richard is talking about a situation like:

print overload(0xf)

gdb can't know what the user meant here.

Maybe it is possible with some application-specific knowledge, for
example if you could tell the type of an object from its address.
In this case it can be done by gcc, via Python scripts for gdb.

Tom


Re: C++ and gather-detailed-mem-stats

2012-08-24 Thread Tom Tromey
> "Diego" == Diego Novillo  writes:

Diego> The compiler will then always add 'file', 'function' and 'line' to the
Diego> argument list, which can then be retrieved via builtins or special
Diego> argument names (I think builtins may be safer).

Diego> This would allow us to handle operators.  I don't think it would be a
Diego> big deal if this introduces ABI incompatibilities.

This will also require gdb changes, if you want to be able to call these
functions from gdb.  I guess it would need a DWARF extension as well.
(I am not sure about other debuginfo formats.)

Tom


Re: [RFH] Uses of output.h in the front ends

2012-06-04 Thread Tom Tromey
> "Steven" == Steven Bosscher  writes:

[...]

Steven> java/class.c: switch_to_section (get_section (buf, flags, NULL));
Steven> java/class.c: switch_to_section (get_section (buf, flags, NULL));

Steven> I am not sure how to fix this. I think it could be fixed by having a
Steven> version of build_constant_desc that puts the data in a specific
Steven> section while wrapping up global variables in varasm.

In this particular case I'm not sure why switch_to_section is needed.
The code is also setting DECL_SECTION_NAME -- is that not enough?
It seems to be enough elsewhere in the same file ... see
emit_register_classes_in_jcr_section further down.

Tom


Re: Switching to C++ by default in 4.8

2012-04-12 Thread Tom Tromey
> "Diego" == Diego Novillo  writes:

Diego> Nice!  What version of gdb has this support?

7.4.

Tom


Re: Switching to C++ by default in 4.8

2012-04-12 Thread Tom Tromey
> "Diego" == Diego Novillo  writes:

Diego> Tom, I'm thinking of that patch on black listing functions.  There was
Diego> also the idea of a command that would only step in the outermost
Diego> function call of an expression.

That patch went in.  The new command is called "skip".

I don't think anybody has worked on stepping into just the outermost
function call of an expression.

Tom


Re: Switching to C++ by default in 4.8

2012-04-04 Thread Tom Tromey
> "Richard" == Richard Guenther  writes:

Richard> Oh, and did we address all the annoyances of debugging gcc when it's
Richard> compiled by a C++ compiler? ...

If you mean gdb problems, please file bugs.

Tom


Re: bug#11034: Binutils, GDB, GCC and Automake's 'cygnus' option

2012-04-03 Thread Tom Tromey
> "Stefano" == Stefano Lattarini  writes:

Stefano> On a second though, by double-checking the existing code, I
Stefano> couldn't see how the 'cygnus' option could possibly influence
Stefano> the location of the generated info files -- and it turned out
Stefano> it didn't!  Despite what was documented in the manual, the
Stefano> 'cygnus' option did *not* cause the generated '.info' files to
Stefano> be placed in the builddir (see attached test case).

It certainly does for me:

barimba. pwd
/home/tromey/gnu/baseline-gdb/build/binutils
barimba. grep '^srcdir = ' Makefile
srcdir = ../../src/binutils
barimba. find . -name 'binutils.info'
./doc/binutils.info
barimba. find ../../src/binutils -name 'binutils.info'
barimba.

How did you test it?
If you built from a distribution tar, then it is expected that the info
file would be in srcdir.

Tom


Re: bug#11034: Binutils, GDB, GCC and Automake's 'cygnus' option

2012-04-02 Thread Tom Tromey
> "Stefano" == Stefano Lattarini  writes:

Stefano> It should still be possible, with the right hack (which is
Stefano> tested in the testsuite, and required by other packages
Stefano> anyway).  The baseline is: if you don't want your '.info' files
Stefano> to be distributed, then it should be easily possible to have
Stefano> them built in the builddir; but if you want them distributed,
Stefano> they will be built in the srcdir.

Now I am confused.  Is it possible to continue to work the way it does
today, or not?

If so, then great.

If not, then we need some other plan.

Tom> But, I see that, according to the Automake manual, I am wrong about that.

Stefano> Weird, I didn't expect that hack to be documented in the
Stefano> manual...  And in fact I cannot find it.  Could you please
Stefano> point me to it?  Thanks.

http://www.gnu.org/software/automake/manual/automake.html#Cygnus

"Info files are always created in the build directory, and not in the
source directory. Packages that don't use the cygnus option can emulate
this effect by using the no-installinfo option and listing the generated
info files in the CLEANFILES variable. "

Re-reading the suggestion here, I can't understand how it would work.

Anyway the real use in the src tree is different, IIUC.
Info files are built in the build tree by developers, but put in the
source tree for distribution.

Tom


Re: bug#11034: Binutils, GDB, GCC and Automake's 'cygnus' option

2012-04-02 Thread Tom Tromey
> "Stefano" == Stefano Lattarini  writes:

Stefano> Sorry if I sound dense, but what exactly is the feature you are
Stefano> talking about here?

I was under the impression that it would no longer be possible to build
info files in the build tree.  But, I see that, according to the
Automake manual, I am wrong about that.  So, sorry for the noise.

Tom


Re: bug#11034: Binutils, GDB, GCC and Automake's 'cygnus' option

2012-04-02 Thread Tom Tromey
> "Stefano" == Stefano Lattarini  writes:

Stefano> True, and that was even stated in the manual; the whole point
Stefano> of ditching support for cygnus trees is that by now those two
Stefano> big users are basically not making any real use of the 'cygnus'
Stefano> option anymore.  To quote my previous report:

Stefano>   ./bfd/doc/Makefile.in:AUTOMAKE_OPTIONS = 1.9 cygnus
Stefano>   ./bfd/doc/Makefile.in:# cygnus option.
Stefano>   ./bfd/doc/Makefile.am:AUTOMAKE_OPTIONS = 1.9 cygnus
Stefano>   ./bfd/doc/Makefile.am:# cygnus option.

But this is a reason not to remove it; or at least to restore the
previous handling of info files.

I don't care about the cygnus option per se.  It was always a grab bag
of hacks.  The issue is removing a feature that an important user relies
on.  So far the suggested replacements haven't seemed that good to me.

Tom


Re: bug#11034: Binutils, GDB, GCC and Automake's 'cygnus' option

2012-04-02 Thread Tom Tromey
> "Stefano" == Stefano Lattarini  writes:

Stefano> Note there's nothing I'm planning to do, nor I should do, in
Stefano> this regard: the two setups described above are both already
Stefano> supported by the current automake implementation (but the last
Stefano> one is not encouraged, even though it makes perfect sense in
Stefano> some *rare* situations).  I was just pointing out that you have
Stefano> to choose one of these setups -- so, if you want to distribute
Stefano> info files, you must accept to have them build in the srcdir.

Or we can just stick with an older version of automake.
It seems to me that this is the sensible approach.

Or move to some other build system; either autogen-based or just
requiring GNU make features.  The latter is fine for GCC but I'm not
sure whether all the src projects are on board.

I'm pretty disappointed that automake would make this change.  I realize
these choices may (arguably) make the most sense for most projects, but
the gcc and src trees are not like most projects; and really the whole
'cygnus' feature is there just to support these two big users.

Tom


Re: Help with cfi markup for MIPS16 hard-float stubs

2012-02-16 Thread Tom Tromey
> "rth" == Richard Henderson  writes:

rth> Oh, of course.  GDB is seeing the sequence of CFA's:
rth>X   foo
rth>X-4 __mips16_call_stub_df_0
rth>X   caller
rth> and it is sanity checking the stack is monotonic.
rth> Which seems like a fairly reasonable thing to do...

Removing this check has been discussed more than once, most recently to
support Go.  I'm not sure why it hasn't been deleted yet.

Tom


Re: Access to source code from an analyser

2012-01-20 Thread Tom Tromey
> "Manuel" == Manuel López-Ibáñez  writes:

Manuel> However, to be honest, even if you implement your own source-location
Manuel> manager in your own plugin code, I don't think it will be very precise
Manuel> because the internal representation of GCC C/C++ FE is not very close
Manuel> to the actual code, and the locations are sometimes quite imprecise.

Please file bugs if you run across these.

Tom


Re: Suspicion of regression in uninitialized value detection

2011-12-07 Thread Tom Tromey
> "Robert" == Robert Dewar  writes:

Robert> Now the debugging at -O1 is hopeless (even parameters routinely
Robert> disappear), and so I am forced to do everything at -O0.

There's been a lot of work on gcc in this area.
Please file bugs for cases you find.

Tom


Re: Working with frontend-specific aspects of GCC from a GCC plugin

2011-11-30 Thread Tom Tromey
> "David" == David Malcolm  writes:

David> I maintain gcc-python-plugin [1].  I'm hoping to expose the function
David> decl_as_string() from the C++ frontend from within my plugin.

I think this problem was discussed before, either here or on
gcc-patches, I forget.

David> (b) somehow set things up within the ELF metadata or linkage flags so
David> that the symbols aren't immediately needed at dynamic-link time, and
David> make sure that they only ever get called from frontends that provide
David> them (and cross our fingers and hope that the missing functions are
David> never actually called).  Not yet sure if this is feasible.  Again, this
David> raises the question of how to determine what frontend we're a plugin
David> for.

One idea that came up was to redeclare the FE-specific functions as
'weak', then check to see if they are available at runtime before
calling them.  It seems like a pain to me, since you have to rewrite the
declarations, but I guess it could work.  You could maybe write a plugin
to write out the declarations :)

Tom


Re: RFC: DWARF Extensions for Separate Debug Info Files ("Fission")

2011-10-20 Thread Tom Tromey
> "Cary" == Cary Coutant  writes:

Cary> At Google, we've found that the cost of linking applications with
Cary> debug info is much too high.
[...]

Cary> * .debug_macinfo - Macro information, unaffected by this design.

There is also the new .debug_macro section.  This section refers to
.debug_str, so it will need some updates for your changes there.

Tom


Re: Merging gdc (GNU D Compiler) into gcc

2011-10-04 Thread Tom Tromey
> "Iain" == Iain Buclaw  writes:

Ian> There is a directory gcc/d/zlib, but gcc already has a top-level zlib
Ian> directory.

Iain> Zlib there is the version released with the D Phobos library, it is
Iain> slightly newer. But is harmless to remove.

You could alternatively update the version in gcc.

Tom


Re: Linemap and pph

2011-07-22 Thread Tom Tromey
Gabriel> We are tying to keep pph as "pluginable" as possible (Diego correct me
Gabriel> if I'm wrong), so changing the actual implementation of the linemap
Gabriel> would be our very last resort I think.

Gabriel> However since source_location aren't pointers per se, this wouldn't
Gabriel> work (at least with our current cache implementation, and changing
Gabriel> that is also last resort in my opinion)

I think something's got to give :)

Tom> Can you not just serialize the line map when creating the PPH?

Gabriel> We were wondering about that, the problem we thought is that a pph can
Gabriel> be loaded from anywhere in many different C files (which would
Gabriel> generate a different linemap entry in each case I think?). If there
Gabriel> was a way to serialize the linemap entries from the LC_ENTER entry for
Gabriel> the header file to the LC_LEAVE (i.e. ignoring builtins, command line,
Gabriel> etc.), and then directly insert those entries in a given C file
Gabriel> compilation's linemaps entries, that would be great!

Sure, I think you could probably serialize just some subset of the
linemap.  It is hard to be positive, as nobody has ever tried it, and I
don't know enough about your needs to suggest what subsetting might make
sense.

The question then is whether you are certain that the other objects you
write out are guaranteed not to reference locations outside this set.

Tom


Re: Linemap and pph

2011-07-22 Thread Tom Tromey
> "Gabriel" == Gabriel Charette  writes:

Gabriel> @tromey: We have a question for you: the problem is detailed
Gabriel> here and our question to you is at the bottom. Thanks!

Sure.  I have not followed PPH progress very closely and after reading
your message I am not sure I have much to offer.  I'll give it a stab
anyhow.

Gabriel> This is a problem in the linemap case because when we read the last
Gabriel> name, we also read its input_location (line and column) and try to
Gabriel> replay that location by calling linemap_line_start (so far so
Gabriel> good). The issue is that linemap_line_start EXPECTS to be called
Gabriel> with lines in increasing order (but since the bindings are backwards
Gabriel> in the chain we actually call it with lines in decreasing order);
Gabriel> the logic in linemap_line_start is such that if the line# is less than
Gabriel> the previous line# it was called with, it assumes this must be a new
Gabriel> file and creates a new file entry in the line_table by calling
Gabriel> linemap_add (which in our case is WRONG!!).

My belief is that linemap was designed to support the typical case for a
C and C++ compiler.  However, this is not sacrosanct; you could perhaps
change the implementation to work better for your case.  I suspect,
though, that this won't be easy -- and also be aware of Dodji's patch
series, which complicates linemap.

Gabriel> In the past we have solved similar issues (caused by the backwards
Gabriel> ordering), by replaying whatever it was in the correct order (before
Gabriel> doing anything else), and then simply loading references using the
Gabriel> pph_cache when the real ones were needed. BUT we can't do this with
Gabriel> source_locations as they are a simple typedef for unsigned int, not actual
Gabriel> structs which are passed by pointer value...

A source_location is a reference to a particular line_map.  It is sort
of like a compressed pointer.

Gabriel> @tromey: I hear you are the person in the know when it comes down to
Gabriel> linemaps, do you have any hint on how we could rebuild a valid
Gabriel> line_table given we are obtaining the bindings from the last one in
Gabriel> the file to the first one (that is, using pph (pre-parsed headers)
Gabriel> where we are trying to restore a cached version of the parser state
Gabriel> for a header)?

Can you not just serialize the line map when creating the PPH?

Then, when using the PPH, read the serialized line map, in order, into
the current global line map.  This will preserve the include chains and
all the rest.  Then rewrite source locations coming from the PPH to new
locations from the new line map.  This rewriting could perhaps even be
done efficiently, say just by adding an offset to all locations --
though I am not sure, this would require some research.  (It seems
doable to me, though; perhaps, but not definitely, requiring some
additional linemap API.)
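
As a very rough sketch of the offset idea (all names invented; whether
the real line_table tolerates this kind of wholesale shifting is exactly
the part needing research):

/* Rough sketch only; builtin and command-line locations are waved
   away, and multiple included maps are ignored.  */
typedef unsigned int source_location;

struct pph_loc_remap
{
  source_location old_base;   /* first location recorded in the PPH */
  source_location new_base;   /* where the replayed maps begin in the
                                 current global line_table */
};

static source_location
pph_remap_location (const pph_loc_remap &m, source_location loc)
{
  /* Locations below the recorded base are kept as-is; everything
     recorded by the PPH is shifted by a constant.  */
  if (loc < m.old_base)
    return loc;
  return loc - m.old_base + m.new_base;
}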

I have no idea if this is practical for you.  I guess it depends on how
a PPH is read in.

Tom


Re: A visualization of GCC's passes, as a subway map

2011-07-12 Thread Tom Tromey
> "David" == David Malcolm  writes:

David> This would be good.  However, looking at, say, 
David> http://gcc.gnu.org/onlinedocs/gccint/Tree-SSA-passes.html#Tree-SSA-passes
David> I don't see meaningful per-pass anchors there.  I'm not familiar with
David> gcc's documentation toolchain; is there a way to add the necessary
David> anchors to the generated HTML?

Yes, you can use @anchor in Texinfo.

Tom


Re: C++ mangling, function name to mangled name (or tree)

2011-07-06 Thread Tom Tromey
> "Kevin" == Kevin André  writes:

Pierre> I would like the user of the plugin to give as arguments the names of
Pierre> the functions on which he would like a test to be run. That
Pierre> means that I must convert the string containing a function name
Pierre> (like "myclass::init") and get either the mangled name or the
Pierre> tree corresponding to the function. I know that there might be
Pierre> several results (functions with the same name and different
Pierre> arguments); a good policy for me would be to recover every
Pierre> function concerned (at least for the moment).

Pierre> I guess what I want to do is possible, because there are already
Pierre> some tools doing it (like gdb).

Kevin> Are you absolutely sure about gdb? It could be doing it the other way
Kevin> around, i.e. start from the mangled names in the object file and
Kevin> demangle all of them. Then it would search for a function name in its
Kevin> list of demangled names.
Kevin> Just guessing, though :)

GDB has to be able to canonicalize a name in order to look it up in the
symbol table.  At least, that is true given the current implementation
of GDB's symbol tables.

So, GDB has a parser for C++ names that breaks the names up and then
reconstructs them in a canonical form.  See cp-name-parser.y.

GDB does not perform mangling.  As far as I know, nothing outside of g++
does.

The demangle-and-compare approach is more complicated than you might
think.  E.g., should it work if the user uses the name of a typedef?
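
For concreteness, the crudest form of demangle-and-compare, built on the
standard __cxa_demangle entry point -- an illustration only, and prefix
matching here already glosses over overloads and typedefs:

#include <cxxabi.h>
#include <cstdlib>
#include <cstring>
#include <string>

/* Does MANGLED demangle to something beginning with the user-supplied
   qualified name, e.g. "myclass::init"?  */
static bool
matches_user_name (const char *mangled, const std::string &wanted)
{
  int status = 0;
  char *demangled = abi::__cxa_demangle (mangled, 0, 0, &status);
  if (status != 0 || demangled == 0)
    return false;
  bool ok = std::strncmp (demangled, wanted.c_str (), wanted.size ()) == 0;
  std::free (demangled);
  return ok;
}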

Tom


Re: RFC: DWARF debug tags for gfortran's OOP implementation

2011-06-28 Thread Tom Tromey
> "Tobias" == Tobias Burnus  writes:

Tobias> The DWARF spec does not really tell the implications of the
Tobias> accessibility tags, which makes it a tad more difficult to
Tobias> understand what should be done.

That is ok -- the DWARF consumer will see that the CU is Fortran, and
will know to apply Fortran semantics.

Tom


Re: Debugging information in C macros

2011-05-10 Thread Tom Tromey
> "Michael" == Michael T  writes:

Michael> I was wondering if it is possible to improve the debugging
Michael> information generated by gcc when resolving C macros?

It could be done, but nobody has tried.

Michael> I wonder whether this couldn't be done by the gcc preprocessor?
Michael> Or does standards compliance forbid this?

A DWARF extension might be needed, but that is not a big deal.  The bulk
of the work will be defining exactly what you want in various cases, and
then of course the implementation in GCC.

It isn't clear to me that you always want to emit this information.
E.g., there are some hairy macros in libc that, presumably, only libc
developers would ever want to step through.
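
Even a trivial macro shows the problem (nothing libc-specific about it):

/* Without extra location data, every statement the macro expands to is
   attributed to the single line that invokes it, so "step" and
   breakpoints behave surprisingly there.  */
#define SWAP(a, b) do { int tmp_ = (a); (a) = (b); (b) = tmp_; } while (0)

int
use (int x, int y)
{
  SWAP (x, y);        /* several statements, one source line */
  return x - y;
}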

Look for a 6 part series from Dodji Seketeli in December 2010 for some
work that would be a good starting point for this.  The first message is
titled "[PATCH 0/6] Tracking locations of tokens resulting from macro
expansion".  This series changes GCC to more accurately track source
locations through macro expansion.

Tom


Re: GCC Optimisation, Part 0: Introduction

2011-04-29 Thread Tom Tromey
> "Paolo" == Paolo Bonzini  writes:

Paolo> * Put the string at the end of the IDENTIFIER_NODE using the trailing
Paolo> array hack (or possibly in the ht_identifier, see
Paolo> libcpp/include/symtab.h and libcpp/symtab.c)

I implemented this once:

http://gcc.gnu.org/ml/gcc-patches/2008-03/msg01293.html

It did not go in because a different approach (the one in the code now)
was deemed clearer.
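
For reference, the generic shape of the trailing-array hack looks like
this (not the actual IDENTIFIER_NODE or ht_identifier layout, and with
error checking omitted):

#include <cstddef>
#include <cstdlib>
#include <cstring>

/* Header and string live in a single allocation; the string is stored
   past the declared end of the struct.  */
struct ident
{
  unsigned int len;
  char str[1];                  /* really LEN + 1 bytes long */
};

static ident *
make_ident (const char *name)
{
  std::size_t len = std::strlen (name);
  ident *id = (ident *) std::malloc (offsetof (ident, str) + len + 1);
  id->len = (unsigned int) len;
  std::memcpy (id->str, name, len + 1);
  return id;
}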

Tom


Re: [PATCH v3] Re: avoid useless if-before-free tests

2011-04-15 Thread Tom Tromey
> "Janne" == Janne Blomqvist  writes:

Jim> Can someone add me to the gcc group?  That would help.
Jim> I already have ssh access to sourceware.org.

Janne> I'm not sure if I'm considered to be well-established
Janne> enough, so could someone help Jim out here, please?

I added Jim to the gcc group.

Tom


Re: On the toplevel configure and build system

2011-03-30 Thread Tom Tromey
> "Joseph" == Joseph S Myers  writes:

Joseph> Additional tools for the build (not host) system may be built
Joseph> (not installed) when present in the source tree, if of direct
Joseph> use in building and testing the components in those
Joseph> repositories, and likewise additional libraries used by build or
Joseph> host tools or target libraries in those repositories may also be
Joseph> built.  But support for tools and libraries that do not meet
Joseph> that criterion should be considered obsolete and removed.

Joseph> Specifically, I propose removal of all support for building:
Joseph> ... libiconv ...

libiconv is used by some people building gdb, to work around limitations
in the host platform's iconv.

Tom


Re: hints on debugging memory corruption...

2011-02-04 Thread Tom Tromey
> "Basile" == Basile Starynkevitch  writes:

Basile> So I need to understand who is writing the 0x101 in that field.

valgrind can sometimes catch this, assuming that the write is an invalid
one.

Basile> An obvious strategy is to use the hardware watchpoint feature of GDB.
Basile> However, one cannot nicely put a watchpoint on an address which is not
Basile> mmap-ed yet.

I think a new-enough gdb should handle this ok.

rth> I typically find the location at which the object containing the address
rth> is allocated.  E.g. in alloc_block on the return statement.  Make this
rth> bp conditional on the object you're looking for.

I do this, too.

One thing to watch out for is that the memory can be recycled.  I've
been very confused whenever I've forgotten this.  I have a hack for the
GC (appended -- ancient enough that it probably won't apply) that makes
it easy to notice when an object you are interested in is collected.
IIRC I apply this before the first run, call ggc_watch_object for the
thing I am interested in, and then see in what GC cycle the real one is
allocated.

Tom

Index: ggc-page.c
===
--- ggc-page.c  (revision 127650)
+++ ggc-page.c  (working copy)
@@ -430,6 +430,13 @@
   } *free_object_list;
 #endif
 
+  /* Watched objects.  */
+  struct watched_object
+  {
+void *object;
+struct watched_object *next;
+  } *watched_object_list;
+
 #ifdef GATHER_STATISTICS
   struct
   {
@@ -481,7 +488,7 @@
 /* Initial guess as to how many page table entries we might need.  */
 #define INITIAL_PTE_COUNT 128
 
-static int ggc_allocated_p (const void *);
+int ggc_allocated_p (const void *);
 static page_entry *lookup_page_table_entry (const void *);
 static void set_page_table_entry (void *, page_entry *);
 #ifdef USING_MMAP
@@ -549,7 +556,7 @@
 
 /* Returns nonzero if P was allocated in GC'able memory.  */
 
-static inline int
+int
 ggc_allocated_p (const void *p)
 {
   page_entry ***base;
@@ -1264,9 +1271,36 @@
 (unsigned long) size, (unsigned long) object_size, result,
 (void *) entry);
 
+  {
+struct watched_object *w;
+for (w = G.watched_object_list; w; w = w->next)
+  {
+   if (result == w->object)
+ {
+   fprintf (stderr, "re-returning watched object %p\n", w->object);
+   break;
+ }
+  }
+  }
+
   return result;
 }
 
+int
+ggc_check_watch (void *p, char *what)
+{
+  struct watched_object *w;
+  for (w = G.watched_object_list; w; w = w->next)
+{
+  if (p == w->object)
+   {
+ fprintf (stderr, "got it: %s\n", what);
+ return 1;
+   }
+}
+  return 0;
+}
+
 /* If P is not marked, marks it and return false.  Otherwise return true.
P must have been allocated by the GC allocator; it mustn't point to
static objects, stack variables, or memory allocated with malloc.  */
@@ -1293,6 +1327,19 @@
   if (entry->in_use_p[word] & mask)
 return 1;
 
+  {
+struct watched_object *w;
+for (w = G.watched_object_list; w; w = w->next)
+  {
+   if (p == w->object)
+ {
+   fprintf (stderr, "marking object %p; was %d\n", p,
+(int) (entry->in_use_p[word] & mask));
+   break;
+ }
+  }
+  }
+
   /* Otherwise set it, and decrement the free object count.  */
   entry->in_use_p[word] |= mask;
   entry->num_free_objects -= 1;
@@ -1337,6 +1384,15 @@
   return OBJECT_SIZE (pe->order);
 }
 
+void
+ggc_watch_object (void *p)
+{
+  struct watched_object *w = XNEW (struct watched_object);
+  w->object = p;
+  w->next = G.watched_object_list;
+  G.watched_object_list = w;
+}
+
 /* Release the memory for object P.  */
 
 void
@@ -1345,11 +1401,21 @@
   page_entry *pe = lookup_page_table_entry (p);
   size_t order = pe->order;
   size_t size = OBJECT_SIZE (order);
+  struct watched_object *w;
 
 #ifdef GATHER_STATISTICS
   ggc_free_overhead (p);
 #endif
 
+  for (w = G.watched_object_list; w; w = w->next)
+{
+  if (w->object == p)
+   {
+ fprintf (stderr, "freeing watched object %p\n", p);
+ break;
+   }
+}
+
   if (GGC_DEBUG_LEVEL >= 3)
 fprintf (G.debug_file,
 "Freeing object, actual size=%lu, at %p on %p\n",
@@ -1868,6 +1934,10 @@
 #define validate_free_objects()
 #endif
 
+int ggc_nc = 0;
+
+
+
 /* Top level mark-and-sweep routine.  */
 
 void
@@ -1903,6 +1973,21 @@
 
   clear_marks ();
   ggc_mark_roots ();
+
+  if (G.watched_object_list)
+{
+  struct watched_object *w;
+  fprintf (stderr, "== starting collection %d\n", ggc_nc);
+  ++ggc_nc;
+  for (w = G.watched_object_list; w; w = w->next)
+   {
+ if (!ggc_marked_p (w->object))
+   {
+ fprintf (stderr, "object %p is free\n", w->object);
+   }
+   }
+}
+
 #ifdef GATHER_STATISTICS
   ggc_prune_overhead_list ();
 #endif


Re: Plugin that parse tree

2011-01-27 Thread Tom Tromey
> "Ian" == Ian Lance Taylor  writes:

Ian> The problem with warnings for this kind of code in C/C++ is that it
Ian> often arises in macro expansions.  I think it would be necessary to
Ian> first develop a scheme which lets us determine whether code resulted
Ian> from a macro expansion or not, which I think would be quite useful in a
Ian> number of different cases.

There is a patch series pending for this.

See the thread "Tracking locations of tokens resulting from macro
expansion".

Tom


Re: PATCH RFA: Do not build java by default

2010-11-02 Thread Tom Tromey
> "Laurent" == Laurent GUERBY  writes:

Laurent> Let's imagine we have a reliable tool on a distributed build
Laurent> farm that accepts sets of patches (via mail and web with some
Laurent> authentication) and does automatic regression testing and
Laurent> reporting on selected platforms.

Can we have it for gdb as well?

Tom


Re: PATCH RFA: Do not build java by default

2010-11-02 Thread Tom Tromey
> "Jeff" == Jeff Law  writes:

Jeff> Building libjava (at least for me) is primarily painful due to 2 files
Jeff> (the names escape me) and the rather poor coarse level parallelism
Jeff> (can't build the 32bit and 64bit multilibs in parallel for example).

Jeff> Has anyone looked at fixing the build machinery for libjava to make it
Jeff> more sensible?

Nope.  AFAIK it is already as parallelized as possible, but it has been
a while since I looked at it.

I thought the really bad file (HTML_401F.java, IIRC) had some functions
split up so that it wasn't so evil any more.

The multilib thing sounds like a top-level problem of some kind.
At least, I don't recall that libjava does anything special here.

Tom


Re: PATCH RFA: Do not build java by default

2010-11-01 Thread Tom Tromey
> "Steven" == Steven Bosscher  writes:

Steven> The argument against disabling java as a default language always was
Steven> that there should be at least one default language that requires
Steven> non-call exceptions. I recall testing many patches without trouble if
Steven> I did experimental builds with just C, C++, and Fortran, only to find
Steven> lots of java test suite failures in a complete bootstrap+test cycle.
Steven> So the second point is, IMVHO, not really true.

Is it possible to convert all failures of this form into a C++ test case
with -fnon-call-exceptions?  If so then at least there is a way to add
regression tests.
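
Such a reduced test might look something like the following, compiled
with -fnon-call-exceptions; whether a plain C++ case like this would
really catch the failures previously seen via libjava is the open
question:

struct S { int field; };

int
read_field (S *p)
{
  try
    {
      /* With -fnon-call-exceptions this load is treated as a possible
         throw point and must stay inside the EH region.  */
      return p->field;
    }
  catch (...)
    {
      return -1;
    }
}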

Steven> Is it possible to build and test java without all of libjava?

As far as I'm aware, not at present.  I think even the minimal possible
subset of libjava is pretty big, on the order of hundreds of classes,
IIRC.

Tom


Re: Discussion about merging Go frontend

2010-11-01 Thread Tom Tromey
> "Ian" == Ian Lance Taylor  writes:

Ian> This patch puts the code in libiberty, but it could equally well go in
Ian> gcc.  Anybody want to make an argument one way or another?

Ian> +extern const char *
Ian> +objfile_attributes_compare (objfile_attributes *attrs1,

GDB already uses the name "objfile" for one of its modules.
I don't think we have any name clashes with this patch right now, but I
would prefer to avoid the eventual confusion.
So, if this is in libiberty, could it please have a different name?

thanks,
Tom




Re: Remove "asssertions" support from libcpp

2010-08-10 Thread Tom Tromey
>>>>> "Steven" == Steven Bosscher  writes:

Steven> Assertions in libcpp have been deprecated since r135264:
Steven> 2008-05-13  Tom Tromey  
Steven> PR preprocessor/22168:
Steven> * expr.c (eval_token): Warn for use of assertions.
Steven> Can this feature be removed for GCC 4.6?

It would be fine by me, but I would rather have someone more actively
involved in GCC make the decision.

Tom


Re: Bizarre GCC problem - how do I debug it?

2010-08-06 Thread Tom Tromey
> "Bruce" == Bruce Korb  writes:

Bruce> That seems to work.  There are one or two or three bugs then.
Bruce> Either gdb needs to recognize out-of-sync object code, or else
Bruce> gcc needs to produce object code that forces gdb to object in a way
Bruce> more obvious than just deciding upon the wrong file and line --
Bruce> or both.

Nothing can be done about old versions of gdb.  They are fixed.

I think the situation is better in newer versions of GDB.  We've fixed a
lot of bugs, anyway.  (I'm not sure exactly what problem you hit, so I
don't know if gdb is in fact any more future-proof in that area.)

I don't think things can ever be perfect.  GDB checks the various DWARF
version numbers, but that doesn't exclude extensions.

Bruce> I simply installed the latest openSuSE and got whatever was
Bruce> supplied.  It isn't reasonable to expect folks to go traipsing
Bruce> through upstream web sites looking for "changes.html" files 

In a situation like this, I suggest complaining to your vendor.  We've
done a lot of work in GDB to catch up with GCC's changing output.  The
development process here is actually reasonably well synchronized.

Tom


Re: How to get attual method in GCC AST

2010-08-05 Thread Tom Tromey
> "Kien" == Kien Nguyen Trung  writes:

Kien> obj_type_ref
Kien>   indirect_ref (test.cpp:21-17)

Kien> The problem is that method read() of class B is obtained from a virtual
Kien> method of base class A, and I cannot get the real name of this method.
Kien> Do you have any idea how to solve this?  Thanks.

You may be able to extract it from the OBJ_TYPE_REF node.
I would suggest looking at how the devirtualization pass works to see if
this helps.

Maybe the information is completely lost in some cases.  (I don't really
know.)  If so I would suggest adding a bit more info to OBJ_TYPE_REF to
assist you.

Tom


Re: Edit-and-continue

2010-07-19 Thread Tom Tromey
> "Dave" == Dave Korn  writes:

Dave> I think you're probably assuming too much.  Tom T. is working on an
Dave> incremental compiler, isn't he?

I was, but I was asked to work on gdb a couple of years ago, so that
work is suspended.

Dave>   But yes, OP, it's a long-term project.

Apple implemented fix-and-continue in their toolchain.  They spoke about
it a little bit on the gdb list, it is in the archives.  My take-away
was that the feature is a lot of work for not much benefit, but YMMV,
and of course we'd be happy to review any gdb patches in this direction
:-)

Tom


Re: gengtype & many GTY tags for same union component?

2010-07-06 Thread Tom Tromey
> "Basile" == Basile Starynkevitch  writes:

Basile> My understanding of the description of the tag GTY option in
Basile> http://gcc.gnu.org/onlinedocs/gccint/GTY-Options.html#GTY-Options
Basile> is that a given discriminated union case can have several
Basile> tags.

It seems like a reasonable feature, but I didn't see any text there
indicating that this is already supported.

Basile> struct meltspecial_st
Basile>   GTY ((tag ("OBMAG_SPEC_FILE"),
Basile> tag ("OBMAG_SPEC_RAWFILE"),
Basile> tag ("OBMAG_SPEC_MPFR"),
Basile> tag ("OBMAG_SPECPPL_COEFFICIENT"),
Basile> tag ("OBMAG_SPECPPL_LINEAR_EXPRESSION"),
Basile> tag ("OBMAG_SPECPPL_CONSTRAINT"),
Basile> tag ("OBMAG_SPECPPL_CONSTRAINT_SYSTEM"),
Basile> tag ("OBMAG_SPECPPL_GENERATOR"),
Basile> tag ("OBMAG_SPECPPL_GENERATOR_SYSTEM"),
Basile> tag ("OBMAG_SPECPPL_POLYHEDRON"))
Basile>   ) u_special;

One thing you can do here is provide a "desc" tag for the union that
collapses all these tags to a single tag.

Instead of:

  GTY ((desc ("%0.u_discr->object_magic"))) 

You would have:

int
magic_for_ggc (int magic)
{
  if (magic == OBMAG_SPEC_FILE
  || magic == OBMAG_SPEC_RAWFILE
  || ...)
return OBMAG_SPEC_RAWFILE;
  return magic;
}

.. and

GTY ((desc ("magic_for_ggc (%0.u_discr->object_magic)")))

Tom


Re: Source for current ECJ not available on sourceware.org

2010-06-29 Thread Tom Tromey
> "Brett" == Brett Neumeier  writes:

Brett> What is still not clear is: what version of the ecj CVS project
Brett> corresponds to "ecj 4.5"? It doesn't look like there are branches or
Brett> tags in the CVS repository.

Yeah, oops.  We've been remiss in doing that.

I believe 4.5 was made from CVS head.  IIRC, Matthias made it and I
uploaded it.  So maybe he knows for sure.  We can make a tag once we
know for sure.

Brett> Also -- the ECJ source contains a single Java file (for the class
Brett> org.eclipse.jdt.internal.compiler.batch.GCCMain), in addition to
Brett> downloading the rest of the compiler source from eclipse.org. GCCMain
Brett> doesn't have any license statement in its header, and there is no
Brett> license in the ECJ source repository at sources.redhat.com. What
Brett> license applies to o.e.j.i.c.b.GCCMain? Is it under the EPL, or the
Brett> GPL, or something else?

I think it has to be EPL.  It was derived substantially from the Eclipse
compiler driver.

Tom


Re: Source for current ECJ not available on sourceware.org

2010-06-29 Thread Tom Tromey
> "Brett" == Brett Neumeier  writes:

Brett> Are there any plans to publish the source code along with the binary
Brett> jar file? In the meantime, where can I find the source code for the
Brett> current ecj, as needed by gcc? Is there a source repository I can get
Brett> to?

Yes, check out the eclipse-gcj module from rhug CVS
(sourceware.org:/cvs/rhug).  This module holds some changes to ecj,
plus a script to check out the proper upstream version.  You can get the
sources with:

make login
make checkout

When we prepare a new version of ecj for use with gcj, we update the tag
in this module, check out a new version from upstream, then hack on
GCCMain.java until it works.

Tom


Re: gengtype needs for C++?

2010-06-28 Thread Tom Tromey
Ian> In Tom's interesting idea, we would write the mark function by hand for
Ian> each C++ type that we use GTY with.

I think we should be clear that the need to write a mark function for a
new type is a drawback of this approach.  Perhaps gengtype could still
write the functions for ordinary types in GCC, just not (templatized)
containers.

Also, perhaps we actually need 2 such functions, one for the GC and one
for PCH.  I don't remember.

Tom


Re: Using C++ in GCC is OK

2010-06-02 Thread Tom Tromey
> "Basile" == Basile Starynkevitch  writes:

Basile> Still, my concerns on C++ are mostly gengtype related. I believe we need
Basile> to keep a garbage collector even with C++, and I believe that changing
Basile> gengtype to follow C++ could be quite painful if we follow the usual
Basile> route of parsing our headers. Making a gengtype able to parse almost any
Basile> C++ header file would be painful.

It seems to me that C++ can actually make gengtype's job simpler.

For example, rather than generating code that knows about the layout of
container types, we can just instantiate template functions that walk a
container using the standard iterator API.

So if you see:

static GTY(()) std::vector<tree> some_global;

gengtype can just emit

template void mark<tree> (const std::vector<tree> &);

...
  mark (some_global);


Mark would be a template function, with specializations for gcc data
types and various STL things (hopefully I got the C++ right here :-):

template<typename T>
void mark (const std::vector<T> &c)
{
  typename std::vector<T>::const_iterator i = c.begin (), e = c.end ();
  for (; i != e; ++i)
    mark (*i);
}


In this sort of setup, unlike with C, gengtype needs to know very little
about the structure of std::vector.  Instead most of the work is
deferred to g++.  With this approach, maybe gengtype only needs to know
about roots; each data type could supply its own mark specialization.

Tom

