Your message dated Thu, 19 Mar 2015 11:44:00 +0100
with message-id <20150319104400.GT18070@localhost>
and subject line Re: Bug#780664: [Pkg-puppet-devel] Bug#780664: puppetmaster 
fills up the database connection pool.
has caused the Debian Bug report #780664,
regarding puppetmaster fills up the database connection pool.
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact [email protected]
immediately.)


-- 
780664: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=780664
Debian Bug Tracking System
Contact [email protected] with problems
--- Begin Message ---
Package: puppetmaster
Version: 3.7.2-3
Severity: important
User: [email protected]
Usertags: infra

Hi,

When enabling storeconfigs in puppetmaster, the database connection pool gets
filled up by inactive connections opened by previous puppet agent runs. Once
the maximum size of the pool has been reached (it defaults to 5 in
ActiveRecord), the puppet agents can't retrieve the catalog anymore and exit
with this error:

  err: Could not retrieve catalog from remote server: Error 400 on SERVER:
  could not obtain a database connection within 5.000 seconds (waited
  5.000 seconds)
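
For reference, the pool being exhausted here is the standard ActiveRecord
connection pool. A minimal sketch of how such a pool is sized (the connection
parameters below are illustrative only, not Puppet's actual configuration):

  # Sketch only: illustrative parameters, not Puppet's real setup. Both the
  # pool size of 5 and the 5-second checkout timeout seen in the error above
  # are ActiveRecord defaults.
  require 'active_record'

  ActiveRecord::Base.establish_connection(
    :adapter  => 'sqlite3',
    :database => '/var/lib/puppet/state/clientconfigs.sqlite3',
    :pool     => 5   # once 5 connections are checked out, further callers block
  )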

With the sqlite3 database backend, the puppetmaster opens 5 FDs to the
sqlite database:

  root@puppet-git:~/# lsof /var/lib/puppet/state/clientconfigs.sqlite3
  COMMAND   PID   USER   FD   TYPE DEVICE SIZE/OFF   NODE NAME
  puppet  17031 puppet   12u   REG  253,1 14777344 131800 /var/lib/puppet/state/clientconfigs.sqlite3
  puppet  17031 puppet   13u   REG  253,1 14777344 131800 /var/lib/puppet/state/clientconfigs.sqlite3
  puppet  17031 puppet   14u   REG  253,1 14777344 131800 /var/lib/puppet/state/clientconfigs.sqlite3
  puppet  17031 puppet   15u   REG  253,1 14777344 131800 /var/lib/puppet/state/clientconfigs.sqlite3
  puppet  17031 puppet   16u   REG  253,1 14777344 131800 /var/lib/puppet/state/clientconfigs.sqlite3

The same seems to happen with the MySQL database backend:

  mysql> show processlist\g
  +----+--------+-----------+--------+---------+------+-------+------------------+
  | Id | User   | Host      | db     | Command | Time | State | Info             |
  +----+--------+-----------+--------+---------+------+-------+------------------+
  | 44 | puppet | localhost | puppet | Sleep   |  117 |       | NULL             |
  | 45 | root   | localhost | NULL   | Query   |    0 | NULL  | show processlist |
  | 46 | puppet | localhost | puppet | Sleep   |   98 |       | NULL             |
  | 47 | puppet | localhost | puppet | Sleep   |   79 |       | NULL             |
  | 48 | puppet | localhost | puppet | Sleep   |   59 |       | NULL             |
  | 49 | puppet | localhost | puppet | Sleep   |   40 |       | NULL             |
  +----+--------+-----------+--------+---------+------+-------+------------------+
  6 rows in set (0.00 sec)


This sounds a lot like a bug reported upstream a while ago [1], but with
upstream's move to a new tracker it's not easy to tell whether anything ever
came of it.

I've tried several changes in lib/puppet/rails.rb (see the sketch below):

* replacing ActiveRecord::Base.clear_active_connections! with
  ActiveRecord::Base.clear_all_connections!
* removing the ActiveRecord::Base.allow_concurrency = true setting, as
  proposed in the upstream ticket

But this didn't fix anything. Each time an agent runs, the puppetmaster
initiates a new connection to the database rather than reusing the active one,
even if it is the same agent as in the previous run.
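
Roughly, the first change looks like this (a sketch with simplified
surrounding code, not the actual contents of lib/puppet/rails.rb):

  # Sketch only: the enclosing method is simplified for illustration.
  def self.teardown
    # Original call, which only releases the connections owned by the
    # current thread back to the pool:
    #ActiveRecord::Base.clear_active_connections!

    # Replacement I tried, which disconnects every pooled connection:
    ActiveRecord::Base.clear_all_connections!
  end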

bert.

[1] http://projects.puppetlabs.com/issues/3238 

--- End Message ---
--- Begin Message ---
On Wed, Mar 18, 2015 at 06:36:55PM +0200, Apollon Oikonomopoulos wrote:
> On 16:24 Wed 18 Mar     , bertagaz wrote:
> > So anyone willing to use storeconfigs in Jessie will have to use
> > puppetmaster-passenger. That's an acceptable workaround probably.
> 
> I agree. At this point, where upstream has deprecated the feature, I too 
> think this is a reasonable (and now documented) workaround.

Closing this bug then; as you say, it is documented. :)

bert.

--- End Message ---
