Hi,

On 2019-01-26 11:44:58 +0100, Adrien NAYRAT wrote:
> +     <varlistentry id="guc-transaction-sample-rate" xreflabel="log_transaction_sample_rate">
> +      <term><varname>log_transaction_sample_rate</varname> (<type>real</type>)
> +      <indexterm>
> +       <primary><varname>log_transaction_sample_rate</varname> configuration parameter</primary>
> +      </indexterm>
> +      </term>
> +       <listitem>
> +        <para>
> +         Set the fraction of transactions whose statements are logged. It applies
> +         to each new transaction regardless of their duration. The default is
> +         <literal>0</literal>, meaning do not log statements from this transaction.
> +         Setting this to <literal>1</literal> logs all statements for all transactions.
> +         Logging can be disabled inside a transaction by setting this to 0.
> +         <varname>log_transaction_sample_rate</varname> is helpful to track a
> +         sample of transaction.
> +        </para>
> +       </listitem>
> +      </varlistentry>
> +
>       </variablelist>

I wonder if this doesn't need a warning explaining that enabling this when
there are large transactions could lead to slowdowns.


>      <para>
> diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
> index 7c3a9c1e89..1a52c10c39 100644
> --- a/src/backend/access/transam/xact.c
> +++ b/src/backend/access/transam/xact.c
> @@ -1821,6 +1821,9 @@ StartTransaction(void)
>       s->state = TRANS_START;
>       s->transactionId = InvalidTransactionId;        /* until assigned */
>  
> +     /* Determine if we log statements in this transaction */
> +     xact_is_sampled = random() <= log_xact_sample_rate * MAX_RANDOM_VALUE;
> +
>       /*
>        * initialize current transaction state fields
>        *
> diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
> index e773f20d9f..a11667834e 100644
> --- a/src/backend/tcop/postgres.c
> +++ b/src/backend/tcop/postgres.c
> @@ -100,7 +100,8 @@ int                       max_stack_depth = 100;
>  /* wait N seconds to allow attach from a debugger */
>  int                  PostAuthDelay = 0;
>  
> -
> +/* flag for logging statements in a transaction */
> +bool         xact_is_sampled = false;

It seems pretty weird to have this in postgres.c, given you declared it
in xact.h?


> @@ -2218,7 +2221,8 @@ check_log_statement(List *stmt_list)
>  int
>  check_log_duration(char *msec_str, bool was_logged)
>  {
> -     if (log_duration || log_min_duration_statement >= 0)
> +     if (log_duration || log_min_duration_statement >= 0 ||
> +             (xact_is_sampled && log_xact_sample_rate > 0))

Why are both of these checked? ISTM once xact_is_sampled is set, we
ought not to respect a different value of log_xact_sample_rate for the
rest of that transaction.


As far as I can tell xact_is_sampled is not properly reset in case of
errors?


Greetings,

Andres Freund
