Jira (PDB-2002) investigate possible memory bloat in recent releases
Ryan Senior updated PuppetDB / PDB-2002: investigate possible memory bloat in recent releases
Change By: Ryan Senior
Sprint: PuppetDB 2015-11-4 → PuppetDB 2015-11-18
Wyatt Alt commented on PDB-2002, Re: investigate possible memory bloat in recent releases
Nick Walker, my feeling from looking at this last time was that we've just gradually passed the limit, but I haven't gone back to 2015.2 yet. What size heap is sufficient for you now?
Wyatt Alt updated PuppetDB / PDB-2002: investigate possible memory bloat in recent releases
Change By: Wyatt Alt
Story Points: 2
Ryan Senior updated PuppetDB / PDB-2002: investigate possible memory bloat in recent releases
Change By: Ryan Senior
Sprint: PuppetDB 2015-11-4
Wyatt Alt updated PuppetDB / PDB-2002: investigate possible memory bloat in recent releases
Change By: Wyatt Alt
[~nick.walker] and [~charliesharpsteen] have both noticed a heap size increase between 3.1 and 3.2. Looking at the two pairs of heap dumps Nick has uploaded (https://puppetlabs.app.box.com/files/0/f/4699591822/nwalker_dumps), I've seen a measurable increase in both cases, but have drawn no hard conclusions about a single source. Some things I've noticed, based on two heap dumps taken after PDB restarts on PE 2015.2 and 2015.3, with total retained sizes of 48 MB and 42 MB respectively, for a difference of 6 MB:
* 1 MB increase in the retained size of bounded-memoize (essentially from 0 to 1 MB). We only use this function in catalog hashing, so this suggests to me that the 2015.3 instance has processed some catalogs while the 2015.2 instance has not.
* 3 MB increase in the retained size of java.util.concurrent.atomic.AtomicReference: this seems to be primarily due to additions to the sync code.
* 1.2 MB increase in allocation to hash-related objects in bouncycastle/jcajce: this could also be the result of one node having processed some catalogs. There's a chance this overlaps with the bullet about bounded-memoize.
I haven't dug deeper. I suspect some amount of this is just the cost of doing business, but it may also be useful to go back further than 2015.2 and see if we're missing anything obvious. This affects the support team because they need to run a number of PE instances of different versions on their laptops simultaneously, and it also affects potential customers evaluating PE on low-powered VMs. I think Nick and Charlie have both previously been able to run on 60 MB heaps with no issue.
I think the most recent dumps at that Box link may also be corrupted somehow: I see a number of strings valued "Error reading from snapshot" when they are opened in YourKit.
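The bounded-memoize observation above can be illustrated with a small sketch. This is Python rather than PuppetDB's actual Clojure, and `hash_catalog` and the bound of 256 are hypothetical stand-ins: the point is only that a bounded memoization cache retains nothing until the memoized function is first called, and then its footprint grows only up to the configured bound — consistent with a 0 → 1 MB retained-size jump once an instance starts hashing catalogs.

```python
from functools import lru_cache

# Hypothetical stand-in for a bounded-memoize used in catalog hashing.
# The cache is empty (zero retained footprint) until the first call,
# then grows only up to `maxsize` entries regardless of input volume.
@lru_cache(maxsize=256)
def hash_catalog(catalog_id: int) -> int:
    # Placeholder for an expensive hashing computation.
    return hash(("catalog", catalog_id))

for i in range(1000):       # process 1000 "catalogs"
    hash_catalog(i % 400)   # only 400 distinct inputs

info = hash_catalog.cache_info()
print(info.currsize)  # capped at 256 despite 400 distinct inputs
```

This is why the cache's retained size is a reasonable proxy for "has this instance done any catalog hashing yet": it is a one-way jump from zero to (at most) the bound.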
Wyatt Alt created PuppetDB / PDB-2002: investigate possible memory bloat in recent releases
Issue Type: Bug
Assignee: Unassigned
Created: 2015/09/23 5:09 PM
Priority: Normal
Reporter: Wyatt Alt
Nick Walker and Charlie Sharpsteen have both noticed a heap size increase between 3.1 and 3.2. Looking at the two pairs of heap dumps Nick has uploaded (https://puppetlabs.app.box.com/files/0/f/4699591822/nwalker_dumps), I've seen a measurable increase in both cases, but have drawn no hard conclusions about a single source. Some things I've noticed, based on two heap dumps taken after PDB restarts on PE 2015.2 and 2015.3, with total retained sizes of 48 MB and 42 MB respectively, for a difference of 6 MB:
* 1 MB increase in the retained size of bounded-memoize (essentially from 0 to 1 MB). We only use this function in catalog hashing, so this suggests to me that the 2015.3 instance has processed some catalogs while the 2015.2 instance has not.
* 3 MB increase in the retained size of java.util.concurrent.atomic.AtomicReference: this seems to be primarily due to additions to the sync code.
* 1.2 MB increase in allocation to hash-related objects in bouncycastle/jcajce: this could also be the result of one node having processed some catalogs. There's a chance this overlaps with the bullet about bounded-memoize.
I haven't dug deeper. I suspect some amount of this is just the cost of doing business, but it may also be useful to go back further than 2015.2 and see if we're missing anything obvious.