On Thu, Mar 31, 2016 at 8:02 PM, Thomas Gelf <tho...@gelf.net> wrote:

> Your dedication to making Puppet faster is really appreciated. My post
> is absolutely not in favor of XPP, but please don't get me wrong: it is
> meant to be a constructive contribution to the current design process.
>

Thanks for the feedback, Thomas. This is the right forum for it, and I
appreciate the ways in which you're challenging these ideas. +1 for Real
Talk. Some of the things you mentioned involve historical decisions, and
some concern current and future ones... I think we should focus on the
problems you identified that we can actually address going forward.



>
> * Cross-language support: You wrote that the C++ parser needs to provide
> the compiled AST to the Ruby runtime. Makes sense to me. But parsing .pp
> files with C++, serializing them to a custom, not-yet-designed format,
> parsing that custom format with Ruby again, and then re-resolving all
> (most? some?) dependency graphs across the whole catalog with Ruby...
> this doesn't sound like something that could help with getting things
> faster. Sure, it would help the C++ parser to hand over its AST, or to
> store it to disk. But would this speed up the whole process? I have
> serious doubts about that.
>

I actually think it'll make a big difference in parsing speed, if for no
other reason than that slurping in pre-parsed data in nearly any sane
format would easily beat the hand-rolled parsing that's on the Ruby side
right now. We've seen similar improvements in other spots, like dalen's
patches that added msgpack support versus hand-rolled
serialization/deserialization. Maybe the thing to do here is to run some
experiments and post back numbers that could ground the discussion in data?
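As a toy sketch of why this tends to hold (this has nothing to do with
Puppet's actual grammar or any real .xpp format; the mini-parser and the
JSON "artifact" below are invented for illustration): deserializing a
pre-built AST skips all lexing and grammar work, so it usually wins even
against a fast parser.

```ruby
require 'json'
require 'benchmark'

# Deliberately naive hand-rolled parser: turns "name = 42" lines into
# ['assign', name, ['int', value]] nodes. It stands in for "re-parse the
# source on every load"; JSON.parse of the serialized AST stands in for
# "load a pre-compiled artifact".
def parse_stmts(src)
  src.each_line.map do |line|
    name, value = line.chomp.split('=').map(&:strip)
    ['assign', name, ['int', Integer(value)]]
  end
end

src  = (1..5_000).map { |i| "x#{i} = #{i}" }.join("\n")
ast  = parse_stmts(src)
blob = JSON.generate(ast) # stand-in for a pre-parsed on-disk artifact

reparse = Benchmark.realtime { 50.times { parse_stmts(src) } }
reload  = Benchmark.realtime { 50.times { JSON.parse(blob) } }
puts format('re-parse: %.3fs  load pre-parsed: %.3fs', reparse, reload)
```

Running both paths over the same input is the kind of cheap experiment
that could put numbers behind the discussion.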



> [...]
>
> But please don't forget that the extensibility of a tool is one of the
> key features of any open-source software. Ops people didn't choose good
> old Nagios because of its "beautiful" frontend and its "well-designed"
> plugin API. They are using it because everyone from students to
> 60-year-old UNIX veterans is able to write something they like to call
> a "plugin". Mostly awful snippets of Bash or Perl, not worthy of being
> called software. But doing customized crazy shit, running on millions of
> systems, available for nearly 20 years without breaking compatibility.
> Of course there is Icinga right now ;) New core, C++, shiny new web...
> but still running those ugly old plugins. They are awful, they are
> terrible, we all hate them. But lots of people invested a lot of time in
> them, so breaking them is a no-go.
>

Agreed...there's no way we can break compatibility with most existing
puppet modules. That would be some serious, doomsday-level awfulness.
Whatever we come up with in this area has to work with the code that's out
there, and that's definitely the plan.

The vibe I'm getting from this line of feedback is that we should perhaps
better articulate the longer-term plan around the native compiler in
general, instead of focusing on increments (like .xpp) that, absent the
larger context, may seem unhelpful in their own right?


> But let's get back to the next point in your proposal, "requirements":
>
> * Publishing modules as XPP: I guess building an AST for a module would
> take less time than checking out the very same module with r10k from
> your local Git repository, even with "slow Ruby code". So IMO there are
> no real benefits here, but lots of potential pitfalls, security issues,
> and bugs. If you need this to provide obfuscated Enterprise-only
> modules in the future... well, it's your choice.
>

I agree...I'd be inclined to make this a non-goal.



> "Databases are slow"
>
> We had ActiveRecord hammering our databases. The conclusion wasn't
> that someone with SQL knowledge should design a good schema. The
> publicly stated reasoning was "well, databases are slow, so we need more
> cores to hammer the database, Ruby has no threading, Clojure is cool".
> It was still slow by the end, so we added a message queue, a dead
> letter office, and more to the mix.
>

With respect, I think this is a pretty unfair retelling of history. Even in
the Puppetconf talk where I introduced PDB, this was not the story. I'm
comfortable letting all the public video footage of us discussing the
rationale rebut this.

Queueing had nothing to do with the speed of persistence; it provides a
sink for backpressure from the DB. Without that, agent runs would simply
fail when writes timed out, even if those agents did no resource
collection as part of compilation. The dead letter office is likewise
unrelated to performance; it's a place to put data that couldn't be
processed, so we can debug it more thoroughly (something that has been
directly responsible for a number of important bug fixes and robustness
improvements). Without it, debugging storage problems was quite difficult.


[...]
> Storing an average catalog (1400 resources, cached JSON is 0.5-1 MB)
> takes, as far as I remember, less than half a second every time. For
> most environments something similar should be perfectly doable:
> persisting catalogs as soon as they are compiled, even in blocking mode
> with no queue and a directly attached database, in plain Ruby.
>

If writes take 0.5 seconds, then you'd start failing agent runs on any
site with more than 3600 nodes using a 30-minute runinterval. At that
point requests would be coming in faster than your ability to persist
data (and even this is charitable, because in real systems the load isn't
perfectly spread out). Hence the whole point of queueing and optimizing
the storage pipeline. This is a more complex problem than many folks
realize.
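For concreteness, here's the back-of-the-envelope arithmetic behind that
3600 figure. The assumptions are mine, for illustration: one catalog
write per agent run, writes fully serialized, and perfectly even
arrivals; real load is burstier, so real capacity is lower.

```ruby
# Capacity check: how many nodes can a single blocking writer sustain?
write_seconds = 0.5        # observed time to persist one catalog
runinterval   = 30 * 60    # seconds between runs for each agent
max_nodes     = (runinterval / write_seconds).to_i
puts max_nodes             # => 3600; one node more and writes back up
```

At node 3601, catalogs arrive faster than they can be written, the queue
of pending writes grows without bound, and requests start timing out.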

In any case, if you've got your own well-tuned system for persisting
catalogs, you can use it in place of puppetdb if you like. You could
reuse the puppetdb terminus and swap your backend in (the wire formats
are documented at
https://docs.puppetlabs.com/puppetdb/4.0/api/wire_format/catalog_format_v6.html,
and the spec for queries is documented in the same spot). Is your code in
a place where you can open-source it?
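To give a feel for what a swapped-in backend would have to accept, the
sketch below shows the rough shape of a catalog payload. The field names
are from memory and the values are invented; treat the linked wire-format
page as the authoritative spec.

```ruby
require 'json'

# Approximate shape of a puppetdb catalog wire-format payload -- field
# list recalled from memory, values invented for illustration only.
catalog = {
  'certname'           => 'node1.example.com',
  'version'            => '1459810720',
  'environment'        => 'production',
  'transaction_uuid'   => '00000000-0000-0000-0000-000000000000',
  'code_id'            => nil,
  'producer_timestamp' => '2016-04-01T12:00:00.000Z',
  'resources' => [
    { 'type' => 'File', 'title' => '/etc/motd', 'exported' => false,
      'tags' => ['file', 'node'],
      'file' => '/etc/puppetlabs/code/modules/motd/manifests/init.pp',
      'line' => 12,
      'parameters' => { 'ensure' => 'present' } }
  ],
  'edges' => [
    { 'source' => { 'type' => 'Class', 'title' => 'Motd' },
      'target' => { 'type' => 'File', 'title' => '/etc/motd' },
      'relationship' => 'contains' } ]
}

puts JSON.pretty_generate(catalog)
```

A replacement backend only needs to parse this JSON and persist the
resources and edges however it likes; the terminus doesn't care what
happens on the other side of the wire.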



> "Facter is slow"
>
> Reasoning: Ruby is slow, we need C++ for a faster Facter. But Facter
> itself never was the real problem. When loaded from Puppet, you can
> also neglect its loading time. The problem was a few silly and some
> more not-so-good individual fact implementations. cFacter is mostly
> faster because those facts were rewritten when they were reimplemented
> in C++.
>

I think that's partly true; rewriting some of the core facts definitely
sped things up. But I wouldn't underestimate the impact of porting over
the engine that surrounds those facts. Just about every part of Facter
runs much faster and uses much less memory while maintaining
compatibility with custom facts. Also, we don't fork and execute Ruby or
anything... it's embedded. At this point we've got native Facter, and
folks can compare for themselves how fast and lean it is relative to
previous versions.

The point about it being harder for folks to quickly fix facts that
behave weirdly on their systems is worth talking more about. Would you
mind starting a separate thread about the debugging experience, so we
could talk through that independent of the xpp discussion?



> "Puppet-Master is slow"
>
> Once again, we learned that Ruby is slow. We got Puppet Server. I've
> met (and helped) a lot of people who had severe issues with this stack.
> I'm still telling everyone not to migrate unless there is an immediate
> need to do so. Most average admins are perfectly able to manage and
> scale a Ruby-based web application. To them, Puppet Server is a black
> box: hard to manage, hard to scale. For many of them it's the only
> Java-based application server they are running, so they have no clue
> about JVM memory management, JMX, and so on.
>

This is also good feedback, and something that's worth its own thread
around the usability/manageability/scalability problems you see. I'd love
to have more of a conversation about how to improve things in those areas!

I do think it's worth keeping in mind that there are more puppet users
now than ever; it's a very big tent. In my humble opinion,
generalizations about what "most average admins" can do become
increasingly fraught with peril as our user base grows bigger and more
diverse.



> All I'm interested in is running my beloved Puppet hassle-free in
> production, not wasting my time caring about the platform itself. I'd
> prefer to dedicate that time to lots of small, ugly, self-written
> modules breaking all of the latest best practices I can find on the
> web ;-)
>

Very well said. :)

deepak
