On Wed, Apr 7, 2010 at 1:38 PM, Patrick <kc7...@gmail.com> wrote:
> I second this.  Puppet will load the whole file into RAM, and Puppet never 
> deallocates memory.  It's almost always better to move big files by putting 
> them into a package or by using an "Exec" type with "creates".
>
>
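The package or Exec+creates pattern Patrick describes could look like the
sketch below (the package name, URL, and paths are illustrative, not from
any real setup):

```puppet
# Option 1: ship the big file inside a native package so the package
# manager, not Puppet's fileserver, handles the transfer.
package { 'myapp-data':
  ensure => installed,
}

# Option 2: fetch it with an Exec; 'creates' stops the command from
# re-running once the file already exists on disk.
exec { 'fetch-big-tarball':
  command => '/usr/bin/wget -O /opt/big.tar.gz http://repo.example.com/big.tar.gz',
  creates => '/opt/big.tar.gz',
}
```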

Just to be clear, the failure to deallocate memory beyond a threshold
is largely a limitation of the current version of Ruby.  That isn't to
say there aren't other things we can do to make fileserving better --
such as the streaming improvements in the next release (those should
help a fair amount!).

As I just mentioned on the list, there are a couple of alternatives to
fileserving you can look at now if you want to transfer large content.

One (not so suitable for binary content) is managing the content with
source control, using something like
http://github.com/reductivelabs/puppet-vcsrepo.
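With that module, deploying from source control is a short resource
declaration -- roughly like this sketch (the repository URL and target
path here are placeholders):

```puppet
# Check out an application tree from git instead of shipping it
# through Puppet's fileserver.
vcsrepo { '/opt/myapp':
  ensure   => present,
  provider => git,
  source   => 'git://example.com/myapp.git',
}
```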

Another recommended approach, and really the right thing to do in many
cases, is a read-only NFS mount, with file copies sourced from that
location.  You could also, if you really wanted, use an Exec plus
rsync, though I'd try the NFS (or Samba, etc.) approach first.
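Both of those could be sketched like this, assuming a hypothetical NFS
server and paths (nothing here is from a real site):

```puppet
# Read-only NFS mount; files are then copied from the local mount
# point rather than pulled through the puppetmaster.
mount { '/mnt/fileserver':
  ensure  => mounted,
  device  => 'nfsserver:/export/files',
  fstype  => 'nfs',
  options => 'ro',
}

file { '/opt/myapp/data.tar.gz':
  ensure  => file,
  source  => '/mnt/fileserver/data.tar.gz',
  require => Mount['/mnt/fileserver'],
}

# The Exec+rsync fallback; 'creates' keeps it from re-running once
# the file is in place.
exec { 'rsync-bigfile':
  command => '/usr/bin/rsync -a rsync://fileserver/pub/bigfile.iso /opt/bigfile.iso',
  creates => '/opt/bigfile.iso',
}
```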

Fileserving is definitely something you'd want to continue using for
templates and such, but not so much for app deployment.

Moving forward, I think you'll see more support and features around
alternative ways to deploy files, such as vcsrepo.   If there's
another use case around this that I'm missing, where NFS or source
control won't work, let me know.

--Michael

> On Apr 7, 2010, at 10:21 AM, Daniel Kerwin wrote:
>
>> Not sure about a limit, but Puppet isn't very good at transferring
>> really big files.  This may lead to memory problems, AFAIK.
>>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
