> Index: gcc/cfgloop.c
> ===================================================================
> --- gcc/cfgloop.c     (revision 215826)
> +++ gcc/cfgloop.c     (working copy)
> @@ -1802,7 +1802,7 @@ record_niter_bound (struct loop *loop, const wides
>      }
>    if (realistic
>        && (!loop->any_estimate
> -       || wi::ltu_p (i_bound, loop->nb_iterations_estimate)))
> +       || (!flag_auto_profile && wi::ltu_p (i_bound, 
> loop->nb_iterations_estimate))))

I do not remember this hunk from previous patches.  So you make autofdo drop
the condition that a new estimate must be smaller than the previously derived
one.  I suppose it is because you want autofdo estimates to be overwritten by
anything determined later.

Let's drop this hunk now, too.  I think we really want a way to track whether
the estimate comes from autofdo and is thus "not too reliable but better than
nothing", and we may want to introduce some kind of reliability metric on the
way estimates are derived.

Let's handle this incrementally and drop this change from this patch.
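For concreteness, such a reliability metric could look roughly like this.
Everything here (the enum, the field and function names) is a hypothetical
sketch, not existing GCC code:

```cpp
#include <cassert>
#include <cstdint>

/* Hypothetical reliability levels for niter estimates, ordered from
   least to most trustworthy.  */
enum estimate_reliability
{
  ESTIMATE_AUTOFDO,   /* sampled; better than nothing */
  ESTIMATE_MEASURED   /* exact gcov counts or static analysis */
};

struct loop_estimate
{
  bool any_estimate;
  uint64_t nb_iterations_estimate;
  estimate_reliability reliability;
};

/* Record a new estimate BOUND with reliability REL: a more reliable
   source may overwrite freely; an equally reliable one may only lower
   the bound (the usual record_niter_bound rule).  */
static void
record_estimate (struct loop_estimate *loop, uint64_t bound,
                 estimate_reliability rel)
{
  if (!loop->any_estimate
      || rel > loop->reliability
      || (rel == loop->reliability
          && bound < loop->nb_iterations_estimate))
    {
      loop->any_estimate = true;
      loop->nb_iterations_estimate = bound;
      loop->reliability = rel;
    }
}
```

That way an autofdo estimate gets overwritten by any later measured one, which
is what the dropped hunk seems to want, without losing the monotonicity rule
for equally reliable sources.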

> Index: gcc/doc/invoke.texi
> ===================================================================
> --- gcc/doc/invoke.texi       (revision 215826)
> +++ gcc/doc/invoke.texi       (working copy)
> @@ -365,7 +365,8 @@ Objective-C and Objective-C++ Dialects}.
>  @gccoptlist{-faggressive-loop-optimizations -falign-functions[=@var{n}] @gol
>  -falign-jumps[=@var{n}] @gol
>  -falign-labels[=@var{n}] -falign-loops[=@var{n}] @gol
> --fassociative-math -fauto-inc-dec -fbranch-probabilities @gol
> +-fassociative-math -fauto-profile -fauto-profile[=@var{path}] @gol
> +-fauto-inc-dec -fbranch-probabilities @gol
>  -fbranch-target-load-optimize -fbranch-target-load-optimize2 @gol
>  -fbtr-bb-exclusive -fcaller-saves @gol
>  -fcheck-data-deps -fcombine-stack-adjustments -fconserve-stack @gol
> @@ -9164,6 +9165,19 @@ code.
>  
>  If @var{path} is specified, GCC looks at the @var{path} to find
>  the profile feedback data files. See @option{-fprofile-dir}.
> +
> +@item -fauto-profile
> +@itemx -fauto-profile=@var{path}
> +@opindex fauto-profile
> +Enable sampling based feedback directed optimizations, and optimizations
> +generally profitable only with profile feedback available.
> +
> +The following options are enabled: @code{-fbranch-probabilities}, 
> @code{-fvpt},
> +@code{-funroll-loops}, @code{-fpeel-loops}, @code{-ftracer}, 
> @code{-ftree-vectorize},
> +@code{ftree-loop-distribute-patterns}
> +
> +If @var{path} is specified, GCC looks at the @var{path} to find
> +the profile feedback data files.

I agree with Andi that you need to write more here - basically it should be 
clear how to
profile the application and use the data.

We do use external tools here (oprofile/gperf/non-Linux tools I suppose) plus a
conversion tool.  So you need to provide some references to those.

Actually I think it would be really nice to add a tutorial-like chapter to the
GCC manual about using FDO/autoFDO to improve your code, or even a more
generally aimed one covering advanced topics like correct LTO use, profile
feedback and other fun.
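As a starting point for such documentation, the usual AutoFDO workflow is
roughly the following transcript (illustrative only: create_gcov comes from the
external autofdo package, and its exact flags may differ between versions):

```shell
# Build with debug info so samples can be mapped back to source lines.
gcc -O2 -g app.c -o app

# Collect branch samples with Linux perf (LBR-based sampling).
perf record -b ./app

# Convert perf.data into GCC's AutoFDO format using create_gcov
# from the external autofdo conversion tool.
create_gcov --binary=./app --profile=perf.data --gcov=app.afdo

# Recompile, feeding the converted profile back to GCC.
gcc -O2 -fauto-profile=app.afdo app.c -o app
```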
> Index: gcc/value-prof.c
> ===================================================================
> --- gcc/value-prof.c  (revision 215826)
> +++ gcc/value-prof.c  (working copy)
> @@ -134,11 +134,10 @@ static bool gimple_divmod_fixed_value_transform (g
>  static bool gimple_mod_pow2_value_transform (gimple_stmt_iterator *);
>  static bool gimple_mod_subtract_transform (gimple_stmt_iterator *);
>  static bool gimple_stringops_transform (gimple_stmt_iterator *);
> -static bool gimple_ic_transform (gimple_stmt_iterator *);
>  
>  /* Allocate histogram value.  */
>  
> -static histogram_value
> +histogram_value
>  gimple_alloc_histogram_value (struct function *fun ATTRIBUTE_UNUSED,
>                             enum hist_type type, gimple stmt, tree value)

Do we still need to export those?

>  {
> @@ -1307,6 +1306,9 @@ del_node_map (void)
>  struct cgraph_node*
>  find_func_by_profile_id (int profile_id)
>  {
> +  if (flag_auto_profile)
> +    return cgraph_node::get_for_asmname (
> +     get_identifier ((const char *)(size_t)profile_id));

This definitely needs a comment.  Why do you need to use
find_func_by_profile_id on both the gcov profiling and auto profiling paths?
Perhaps you can use your own function here?

Or perhaps this is not necessary if you implement the indirect call
optimization directly?

We may try to head towards having a way to read both coverage and auto profile
into the compiler eventually.  It may be useful to get extra precision from the
gcov profile and to measure data like cache misses with the auto profile.
>    cgraph_node **val = cgraph_node_map->get (profile_id);
>    if (val)
>      return *val;
> @@ -1320,7 +1322,7 @@ find_func_by_profile_id (int profile_id)
>     may ICE. Here we only do very minimal sanity check just to make compiler 
> happy.
>     Returns true if TARGET is considered ok for call CALL_STMT.  */
>  
> -static bool
> +bool
>  check_ic_target (gimple call_stmt, struct cgraph_node *target)
>  {
>     location_t locus;
> @@ -1483,7 +1485,7 @@ gimple_ic (gimple icall_stmt, struct cgraph_node *
>    this call to:
>   */
>  
> -static bool
> +bool
>  gimple_ic_transform (gimple_stmt_iterator *gsi)

I do not think those need to be exported.
>  {
>    gimple stmt = gsi_stmt (*gsi);
> +
> +/* The following routines implements AutoFDO optimization.
> +
> +   This optimization uses sampling profiles to annotate basic block counts
> +   and uses heuristics to estimate branch probabilities.
> +
> +   There are three phases in AutoFDO:
> +
> +   Phase 1: Read profile from the profile data file.
> +     The following info is read from the profile datafile:
> +        * string_table: a map between function name and its index.
> +        * autofdo_source_profile: a map from function_instance name to
> +          function_instance. This is represented as a forest of
> +          function_instances.

This description does not seem to make it very clear what
autofdo_source_profile is.
Perhaps expand a bit more on what a function instance is?
> +        * WorkingSet: a histogram of how many instructions are covered for a
> +        given percentage of total cycles.

Here I think a bit of extra explanation would be nice.  Are working sets
per-line information?
> +
> +   Phase 2: Early inline + VPT.

Probably expand VPT - it requires one to be quite familiar with GCC sources to
follow three-letter acronyms ;)
> +     Early inline uses autofdo_source_profile to find if a callsite is:
> +        * inlined in the profiled binary.
> +        * callee body is hot in the profiling run.
> +     If both condition satisfies, early inline will inline the callsite
> +     regardless of the code growth.
> +     Phase 2 is an iterative process. During each iteration, we also check
> +     if an indirect callsite is promoted and inlined in the profiling run.
> +     If yes, vpt will happen to force promote it and in the next iteration,
> +     einline will inline the promoted callsite in the next iteration.
Perhaps "If yes, direct speculative call is introduced"
given that speculative edges are now kind of first class players here.
> +
> +   Phase 3: Annotate control flow graph.
> +     AutoFDO uses a separate pass to:
> +        * Annotate basic block count

           * Annotate basic blocks with count?

> +        * Estimate branch probability

           * Propagate counts across callgraph to fill in information
             missing in the profile.

You probably could expand a bit how profile is driven by source lines and thus
may completely miss some basic blocks.
> +
> +   After the above 3 phases, all profile is readily annotated on the GCC IR.
> +   AutoFDO tries to reuse all FDO infrastructure as much as possible to make
> +   use of the profile. E.g. it uses existing mechanism to calculate the basic
> +   block/edge frequency, as well as the cgraph node/edge count.
> +*/
> +
> +#define DEFAULT_AUTO_PROFILE_FILE "fbdata.afdo"
> +
> +namespace autofdo
> +{
> +
> +/* Represent a source location: (function_decl, lineno).  */
> +typedef std::pair<tree, unsigned> decl_lineno;
> +
> +/* Represent an inline stack. vector[0] is the leaf node.  */
> +typedef std::vector<decl_lineno> inline_stack;
> +
> +/* String array that stores function names.  */
> +typedef std::vector<const char *> string_vector;

I am not sure about use of std:: stuff in GCC. Basically I do not see anything
really bad about it, just in case we have our own equivalents, we may stick
of using those.  So perhaps using our vec would make more sense.
> +
> +/* Profile of a function instance:
> +     1. total_count of the function.
> +     2. head_count of the function (only valid when function is a top-level
> +        function_instance, i.e. it is the original copy instead of the
> +        inlined copy).

Perhaps explain what head count is?

> +     3. map from source location (decl_lineno) of the inlined callsite to
> +        profile (count_info).

Of the inlined callsite?  I would expect this to be the actual line profile of
the function.

> +/* Helper functions.  */
> +
> +/* Return the original name of NAME: strip the suffix that starts
> +   with '.'  */
> +
> +static const char *
> +get_original_name (const char *name)
> +{
> +  char *ret = xstrdup (name);
> +  char *find = strchr (ret, '.');
> +  if (find != NULL)
> +    *find = 0;
> +  return ret;
> +}

The caller is actually responsible for freeing RET, right?  Perhaps it should
be mentioned in the comment and actually done in the callers.
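Roughly, with the ownership spelled out (a simplified sketch using plain
strdup in place of GCC's xstrdup):

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

/* Return the original name of NAME: strip the suffix that starts
   with '.'.  The returned string is heap-allocated; the CALLER is
   responsible for freeing it.  */
static char *
get_original_name (const char *name)
{
  char *ret = strdup (name);   /* xstrdup in GCC proper.  */
  char *find = strchr (ret, '.');
  if (find != NULL)
    *find = 0;
  return ret;
}
```

Then each use site does `free (name)` (or keeps the result in a suitable RAII
wrapper) once it is done with the string.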
> +
> +/* Return the combined location, which is a 32bit integer in which
> +   higher 16 bits stores the line offset of LOC to the start lineno
> +   of DECL, The lower 16 bits stores the discrimnator.  */
> +
> +static unsigned
> +get_combined_location (location_t loc, tree decl)
> +{
> +  /* TODO: allow more bits for line and less bits for discriminator.  */
> +  return ((LOCATION_LINE (loc) - DECL_SOURCE_LINE (decl)) << 16);
> +}

We may consider dropping a message into the dump file making clear that an
overflow happened.
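A sketch of what the packing plus an overflow diagnostic could look like
(fprintf to stderr stands in for the dump file here; the names are made up):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>

/* Pack LINE_OFFSET (line number relative to the function start) and
   DISCRIMINATOR into one 32-bit combined location: offset in the upper
   16 bits, discriminator in the lower 16.  Report when either field
   does not fit into 16 bits, since the profile lookup will then be
   silently wrong.  */
static uint32_t
combine_location (uint32_t line_offset, uint32_t discriminator)
{
  if (line_offset > 0xffff || discriminator > 0xffff)
    fprintf (stderr, "combined location overflow: offset=%u disc=%u\n",
             line_offset, discriminator);
  return ((line_offset & 0xffff) << 16) | (discriminator & 0xffff);
}
```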
> +
> +/* Return the function decl of a given lexical BLOCK.  */
> +
> +static tree
> +get_function_decl_from_block (tree block)
> +{
> +  tree decl;
> +
> +  if (LOCATION_LOCUS (BLOCK_SOURCE_LOCATION (block) == UNKNOWN_LOCATION))
> +    return NULL_TREE;
> +
> +  for (decl = BLOCK_ABSTRACT_ORIGIN (block);
> +       decl && (TREE_CODE (decl) == BLOCK);
> +       decl = BLOCK_ABSTRACT_ORIGIN (decl))
> +    if (TREE_CODE (decl) == FUNCTION_DECL)
> +      break;
> +  return decl;
> +}
> +
> +/* Store inline stack for STMT in STACK.  */
> +
> +static void
> +get_inline_stack (location_t locus, inline_stack *stack)
> +{
> +  if (LOCATION_LOCUS (locus) == UNKNOWN_LOCATION)
> +    return;
> +
> +  tree block = LOCATION_BLOCK (locus);
> +  if (block && TREE_CODE (block) == BLOCK)
> +    {
> +      int level = 0;
> +      for (block = BLOCK_SUPERCONTEXT (block);
> +           block && (TREE_CODE (block) == BLOCK);
> +           block = BLOCK_SUPERCONTEXT (block))
> +        {
> +          location_t tmp_locus = BLOCK_SOURCE_LOCATION (block);
> +          if (LOCATION_LOCUS (tmp_locus) == UNKNOWN_LOCATION)
> +            continue;
> +
> +          tree decl = get_function_decl_from_block (block);
> +          stack->push_back (
> +              std::make_pair (decl, get_combined_location (locus, decl)));

OK, you walk the block tree up and for each block you look up the function
context it originates from and push it to the stack.
Won't you get extra NULL entries for inner blocks within the functions?

> +
> +/* Member functions for string_table.  */
> +
> +string_table *
> +string_table::create ()

Why is this not a constructor?
> +{
> +  string_table *map = new string_table ();
> +  if (map->read ())
> +    return map;
> +  delete map;
> +  return NULL;
> +}
> +
> +/* Return the index of a given function NAME.  */
Perhaps document that the function returns -1 when the index is not found.
> +
> +int
> +string_table::get_index (const char *name) const
> +{
> +  if (name == NULL)
> +    return -1;
> +  string_index_map::const_iterator iter = map_.find (name);
> +  if (iter == map_.end ())
> +    return -1;
> +  else
> +    return iter->second;
> +}
> +
> +/* Return the index of a given function DECL.  */
Similarly here.
> +
> +/* Traverse callsites of the current function_instance to find one at the
> +   location of LINENO and callee name represented in DECL.  */
> +
> +function_instance *
> +function_instance::get_function_instance_by_decl (unsigned lineno,
> +                                                  tree decl) const
> +{
> +  int func_name_idx = afdo_string_table->get_index_by_decl (decl);
> +  if (func_name_idx != -1)
> +    {
> +      callsite_map::const_iterator ret
> +          = callsites.find (std::make_pair (lineno, func_name_idx));
> +      if (ret != callsites.end ())
> +        return ret->second;
> +    }
> +  func_name_idx
> +      = afdo_string_table->get_index (lang_hooks.dwarf_name (decl, 0));
> +  if (func_name_idx != -1)
> +    {
> +      callsite_map::const_iterator ret
> +          = callsites.find (std::make_pair (lineno, func_name_idx));
> +      if (ret != callsites.end ())
> +        return ret->second;
> +    }

The two hunks probably can be unified.
Why does get_index_by_decl not already use the dwarf name, and why do you need
to consider both?
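The unified lookup could be a single loop over candidate name indices, along
these lines (heavily simplified: a plain std::map stands in for callsite_map,
and the names are made up):

```cpp
#include <cassert>
#include <map>
#include <utility>

/* Maps (lineno, name index) to an instance id; stand-in for the
   real callsite_map keyed the same way.  */
typedef std::map<std::pair<unsigned, int>, int> callsite_map;

/* Try each candidate name index in turn, skipping the "not found"
   value -1; this folds the two identical lookup hunks of
   get_function_instance_by_decl into one loop.  */
static int
find_callsite (const callsite_map &callsites, unsigned lineno,
               const int *candidates, int n_candidates)
{
  for (int i = 0; i < n_candidates; i++)
    {
      if (candidates[i] == -1)
        continue;
      callsite_map::const_iterator it
          = callsites.find (std::make_pair (lineno, candidates[i]));
      if (it != callsites.end ())
        return it->second;
    }
  return -1;
}
```

In the real function the candidate list would be the index from
get_index_by_decl followed by the index of the dwarf name.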

> +  if (DECL_ABSTRACT_ORIGIN (decl))
> +    return get_function_instance_by_decl (lineno, DECL_ABSTRACT_ORIGIN 
> (decl));

What really happens for ipa-cp function clones and for split functions?

> +  else
> +    return NULL;
> +}
> +
> +/* Recursively traverse STACK starting from LEVEL to find the corresponding
> +   function_instance.  */
> +
> +function_instance *
> +function_instance::get_function_instance (const inline_stack &stack,
> +                                          unsigned level)
> +{
> +  if (level == 0)
> +    return this;
> +  function_instance *s = get_function_instance_by_decl (
> +      stack[level].second, stack[level - 1].first);
> +  if (s)
> +    return s->get_function_instance (stack, level - 1);
> +  else
> +    return NULL;

Something that probably could be written by iteration ;)
> +
> +/* Read the profile and create a function_instance with head count as
> +   HEAD_COUNT. Recursively read callsites to create nested function_instances
> +   too. STACK is used to track the recursive creation process.  */
> +
> +/* function instance profile format:
> +
> +   ENTRY_COUNT: 8 bytes
> +   NAME_INDEX: 4 bytes
> +   NUM_POS_COUNTS: 4 bytes
> +   NUM_CALLSITES: 4 byte
> +   POS_COUNT_1:
> +     POS_1_OFFSET: 4 bytes
> +     NUM_TARGETS: 4 bytes
> +     COUNT: 8 bytes
> +     TARGET_1:
> +       VALUE_PROFILE_TYPE: 4 bytes
> +       TARGET_IDX: 8 bytes
> +       COUNT: 8 bytes

So basically you have a list of statements identified by offsets that have
counts with them.
If a statement is an indirect call, then you have NUM_TARGETS targets and the
value profiles associated with them, right?

I am not sure it is a win to have these represented as VPT histograms when you
are shooting for a quite special-purpose problem.  But let's deal with this
incrementally, too.
(In general I do not see why you try to share so much with the gcov-based VPT -
it seems easier to do the speculation by hand.)
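For reference, decoding the fixed part of one such record could look like the
following (purely illustrative: host byte order and memcpy stand in for the
real gcov_read_unsigned/gcov_read_counter routines):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

/* Fixed header of one function_instance record, matching the layout
   quoted above: ENTRY_COUNT (8 bytes), NAME_INDEX (4), NUM_POS_COUNTS
   (4), NUM_CALLSITES (4).  */
struct fi_header
{
  uint64_t entry_count;
  uint32_t name_index;
  uint32_t num_pos_counts;
  uint32_t num_callsites;
};

/* Decode the 20-byte fixed header from BUF.  The variable-length
   POS_COUNT and CALLSITE records would follow it.  */
static fi_header
read_fi_header (const unsigned char *buf)
{
  fi_header h;
  memcpy (&h.entry_count, buf, 8);
  memcpy (&h.name_index, buf + 8, 4);
  memcpy (&h.num_pos_counts, buf + 12, 4);
  memcpy (&h.num_callsites, buf + 16, 4);
  return h;
}
```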

> +
> +static void
> +read_profile (void)

Block comment here, even though it is quite obvious ;)
> +{
> +  if (gcov_open (auto_profile_file, 1) == 0)
> +    error ("Cannot open profile file %s.", auto_profile_file);
> +
> +  if (gcov_read_unsigned () != GCOV_DATA_MAGIC)
> +    error ("AutoFDO profile magic number does not mathch.");
> +
> +  /* Skip the version number.  */
> +  gcov_read_unsigned ();
> +
> +  /* Skip the empty integer.  */
> +  gcov_read_unsigned ();

Perhaps we can just check the values?
> +
> +  /* string_table.  */
> +  afdo_string_table = string_table::create ();
> +  if (afdo_string_table == NULL)
> +    error ("Cannot read string table from %s.", auto_profile_file);
> +
> +  /* autofdo_source_profile.  */
> +  afdo_source_profile = autofdo_source_profile::create ();
> +  if (afdo_source_profile == NULL)
> +    error ("Cannot read function profile from %s.", auto_profile_file);
> +
> +  /* autofdo_module_profile.  */
> +  fake_read_autofdo_module_profile ();
> +
> +  /* Read in the working set.  */
> +  if (gcov_read_unsigned () != GCOV_TAG_AFDO_WORKING_SET)
> +    error ("Cannot read working set from %s.", auto_profile_file);
> +
> +  /* Skip the length of the section.  */
> +  gcov_read_unsigned ();
> +  gcov_working_set_t set[128];
> +  for (unsigned i = 0; i < 128; i++)
> +    {
> +      set[i].num_counters = gcov_read_unsigned ();
> +      set[i].min_counter = gcov_read_counter ();
> +    }
> +  add_working_set (set);
> +}
> +
> +/* From AutoFDO profiles, find values inside STMT for that we want to measure
> +   histograms for indirect-call optimization.  */

        This function actually serves two purposes:

        * before annotation, we need to mark the histogram, promote and inline
        * after annotation, we just need to mark, and let follow-up logic
        decide if it needs to promote and inline.

Perhaps add it to the block comment?

        And you are right, for the "before annotation" case, we can simply
        call "mark speculative" and "inline".  But we still need the logic to
        fake the histogram for the "after annotation" case.  As a result, I
        unified the two cases into one function to reuse code as much as
        possible.  Shall I separate it into two functions instead?

Who is using the histogram later?  Is it the normal ipa-profile code?
In that case it probably makes sense to produce the histogram indeed.

You can still probably avoid using gimple_ic_transform.  Because you fake the
histogram, the early checks in that function will not lead to anything useful.
You probably want to call check_ic_target yourself.

Does this have a chance to work inter-module?

> Index: gcc/opts.c
> ===================================================================
> --- gcc/opts.c        (revision 215826)
> +++ gcc/opts.c        (working copy)
> @@ -1279,6 +1279,55 @@ print_specific_help (unsigned int include_flags,
>                      opts->x_help_columns, opts, lang_mask);
>  }
>  
> +static void
> +enable_fdo_optimizations (struct gcc_options *opts,
> +                       struct gcc_options *opts_set,
> +                       int value)

Block comment here.
> +{
> +  if (!opts_set->x_flag_branch_probabilities)
> +    opts->x_flag_branch_probabilities = value;
> +  if (!opts_set->x_flag_profile_values)
> +    opts->x_flag_profile_values = value;
> +  if (!opts_set->x_flag_unroll_loops)
> +    opts->x_flag_unroll_loops = value;
> +  if (!opts_set->x_flag_peel_loops)
> +    opts->x_flag_peel_loops = value;
> +  if (!opts_set->x_flag_tracer)
> +    opts->x_flag_tracer = value;
> +  if (!opts_set->x_flag_value_profile_transformations)
> +    opts->x_flag_value_profile_transformations = value;
> +  if (!opts_set->x_flag_inline_functions)
> +    opts->x_flag_inline_functions = value;
> +  if (!opts_set->x_flag_ipa_cp)
> +    opts->x_flag_ipa_cp = value;
> +  if (!opts_set->x_flag_ipa_cp_clone
> +      && value && opts->x_flag_ipa_cp)
> +    opts->x_flag_ipa_cp_clone = value;
> +  if (!opts_set->x_flag_predictive_commoning)
> +    opts->x_flag_predictive_commoning = value;
> +  if (!opts_set->x_flag_unswitch_loops)
> +    opts->x_flag_unswitch_loops = value;
> +  if (!opts_set->x_flag_gcse_after_reload)
> +    opts->x_flag_gcse_after_reload = value;
> +  if (!opts_set->x_flag_tree_loop_vectorize
> +      && !opts_set->x_flag_tree_vectorize)
> +    opts->x_flag_tree_loop_vectorize = value;
> +  if (!opts_set->x_flag_tree_slp_vectorize
> +      && !opts_set->x_flag_tree_vectorize)
> +    opts->x_flag_tree_slp_vectorize = value;
> +  if (!opts_set->x_flag_vect_cost_model)
> +    opts->x_flag_vect_cost_model = VECT_COST_MODEL_DYNAMIC;
> +  if (!opts_set->x_flag_tree_loop_distribute_patterns)
> +    opts->x_flag_tree_loop_distribute_patterns = value;
> +  if (!opts_set->x_flag_profile_reorder_functions)
> +    opts->x_flag_profile_reorder_functions = value;
> +  /* Indirect call profiling should do all useful transformations
> +     speculative devirtualization does.  */
> +  if (!opts_set->x_flag_devirtualize_speculatively
> +      && opts->x_flag_value_profile_transformations)
> +    opts->x_flag_devirtualize_speculatively = false;
> +}
> +
>  /* Handle target- and language-independent options.  Return zero to
>     generate an "unknown option" message.  Only options that need
>     extra handling need to be listed here; if you simply want
> @@ -1746,50 +1795,23 @@ common_handle_option (struct gcc_options *opts,
>        value = true;
>        /* No break here - do -fprofile-use processing. */
>      case OPT_fprofile_use:
> -      if (!opts_set->x_flag_branch_probabilities)
> -     opts->x_flag_branch_probabilities = value;
> -      if (!opts_set->x_flag_profile_values)
> -     opts->x_flag_profile_values = value;
> -      if (!opts_set->x_flag_unroll_loops)
> -     opts->x_flag_unroll_loops = value;
> -      if (!opts_set->x_flag_peel_loops)
> -     opts->x_flag_peel_loops = value;
> -      if (!opts_set->x_flag_tracer)
> -     opts->x_flag_tracer = value;
> -      if (!opts_set->x_flag_value_profile_transformations)
> -     opts->x_flag_value_profile_transformations = value;
> -      if (!opts_set->x_flag_inline_functions)
> -     opts->x_flag_inline_functions = value;
> -      if (!opts_set->x_flag_ipa_cp)
> -     opts->x_flag_ipa_cp = value;
> -      if (!opts_set->x_flag_ipa_cp_clone
> -       && value && opts->x_flag_ipa_cp)
> -     opts->x_flag_ipa_cp_clone = value;
> -      if (!opts_set->x_flag_predictive_commoning)
> -     opts->x_flag_predictive_commoning = value;
> -      if (!opts_set->x_flag_unswitch_loops)
> -     opts->x_flag_unswitch_loops = value;
> -      if (!opts_set->x_flag_gcse_after_reload)
> -     opts->x_flag_gcse_after_reload = value;
> -      if (!opts_set->x_flag_tree_loop_vectorize
> -          && !opts_set->x_flag_tree_vectorize)
> -     opts->x_flag_tree_loop_vectorize = value;
> -      if (!opts_set->x_flag_tree_slp_vectorize
> -          && !opts_set->x_flag_tree_vectorize)
> -     opts->x_flag_tree_slp_vectorize = value;
> -      if (!opts_set->x_flag_vect_cost_model)
> -     opts->x_flag_vect_cost_model = VECT_COST_MODEL_DYNAMIC;
> -      if (!opts_set->x_flag_tree_loop_distribute_patterns)
> -     opts->x_flag_tree_loop_distribute_patterns = value;
> -      if (!opts_set->x_flag_profile_reorder_functions)
> -     opts->x_flag_profile_reorder_functions = value;
> -      /* Indirect call profiling should do all useful transformations
> -      speculative devirtualization does.  */
> -      if (!opts_set->x_flag_devirtualize_speculatively
> -       && opts->x_flag_value_profile_transformations)
> -     opts->x_flag_devirtualize_speculatively = false;
> +      enable_fdo_optimizations (opts, opts_set, value);
>        break;
>  
> +    case OPT_fauto_profile_:
> +      opts->x_auto_profile_file = xstrdup (arg);
> +      opts->x_flag_auto_profile = true;
> +      value = true;
> +      /* No break here - do -fauto-profile processing. */
> +    case OPT_fauto_profile:
> +      enable_fdo_optimizations (opts, opts_set, value);
> +      if (!opts_set->x_flag_profile_correction)
> +     opts->x_flag_profile_correction = value;
> +      maybe_set_param_value (
> +     PARAM_EARLY_INLINER_MAX_ITERATIONS, 10,
> +     opts->x_param_values, opts_set->x_param_values);
> +      break;
> +
>      case OPT_fprofile_generate_:
>        opts->x_profile_data_prefix = xstrdup (arg);
>        value = true;
> Index: gcc/ipa-inline.c
> ===================================================================
> --- gcc/ipa-inline.c  (revision 215826)
> +++ gcc/ipa-inline.c  (working copy)
> @@ -121,6 +121,7 @@ along with GCC; see the file COPYING3.  If not see
>  #include "ipa-inline.h"
>  #include "ipa-utils.h"
>  #include "sreal.h"
> +#include "auto-profile.h"
>  #include "cilk.h"
>  #include "builtins.h"
>  
> @@ -449,6 +450,14 @@ want_early_inline_function_p (struct cgraph_edge *
>  
>    if (DECL_DISREGARD_INLINE_LIMITS (callee->decl))
>      ;
> +  /* For AutoFDO, we need to make sure that before profile annotation, all
> +     hot paths' IR look exactly the same as profiled binary. As a result,
> +     in einliner, we will disregard size limit and inline those callsites
> +     that are:
> +       * inlined in the profiled binary, and
> +       * the cloned callee has enough samples to be considered "hot".  */
> +  else if (flag_auto_profile && afdo_callsite_hot_enough_for_early_inline 
> (e))
> +    ;

I wonder if it won't make more sense to have a special autofdo inliner instead
of sharing the outer loop with the early inliner, but we can clean this up
incrementally.

I think the patch looks quite good now.  Please do the changes above and send
an updated version.
I think it may be the last iteration over it.

Honza
