On 03/11/11 09:09, Ed Bartosh wrote:
Yeah - but what if someone else's AMQP client would like a JSON msg? Or an
XML msg?
Those are consumer side requirements. The producer should not care about those,
from my point of view.
That doesn't make sense to me. We need to send *some* data message over AMQP -
it can't send a Perl ref, and a raw binary dump of the data is not portable. It
has to be serialised to a format that both consumer and sender agree on. Of
course the producer can dictate that format but IMHO it makes sense to permit
that to be defined by the deployment. Why force another site to convert to json,
then convert from json to xml just because *we* decided json was a better wire
format?
Otherwise we'll end up bending our plugin for all kinds of
possible consumers. That's actually what you're trying to do - bend a generic
solution to satisfy your specific consumer requirements :) Imagine I have my
own workflow engine that consumes events as binary data in some complex
structure. According to your idea the OBS AMQP plugin should be updated to
support that, right?
Yes I am suggesting exactly that. You make it sound like a bad thing.
Please note that my approach would not require you to touch the AMQP plugin.
Simply provide a function to go from the internal event representation to the
on-the-wire format.
I am not suggesting any conditional event filtering or other processing - simply
allowing the callback to handle the serialisation.
What we can do to find a compromise here is to implement a filtering concept.
Before sending a message to AMQP, the plugin should call the registered
filters in a configurable order. It can be something like unix pipes, for
example: Plugin | filter1 | filter2 ... -> AMQP. However, why can't this be
done on the consumer side?
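That pipe-style chain could be sketched like this (Python, purely illustrative; the filter names are made up for the example):

```python
from functools import reduce

# Hypothetical example filters; a deployment would register its own.
def add_timestamp(event):
    return {**event, "timestamp": 1234567890}

def strip_internal_fields(event):
    return {k: v for k, v in event.items() if not k.startswith("_")}

def run_filters(event, filters):
    """Apply the registered filters in configured order: event | f1 | f2 ..."""
    return reduce(lambda ev, f: f(ev), filters, event)

out = run_filters({"pkg": "vim", "_debug": True},
                  [add_timestamp, strip_internal_fields])
```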
There is no need for that.
* We need to support multiple AMQP servers.
* There is no reason a given server cannot be called twice with different
credentials/exchange information.
* For each entry in the list we add a "make_message" function reference (like
your pipeline but cleaner). That is passed the event and, just to be clean, it
defaults to "event2json" - and you have your use case sorted.
* When the plugin converts the waiting event to a msg string it uses either an
internal ref or the configured callback.
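The list-of-servers idea might look like this (a Python sketch; hosts, exchange names, and the BOSS message shape are invented - only `event2json`, `event2boss`, and `make_message` are names from this thread):

```python
import json

def event2json(event):
    """The internal default serialiser."""
    return json.dumps(event)

def event2boss(event):
    # Hypothetical BOSS/ruote-specific serialiser; the real wire format
    # would be whatever ruote expects, not this placeholder wrapper.
    return json.dumps({"ruote_workitem": event})

# One entry per AMQP server/exchange. The same server could appear twice
# with different credentials or exchange info. "make_message" is optional
# and defaults to event2json, so plain deployments configure nothing extra.
servers = [
    {"host": "amqp.example.com", "exchange": "obs_events"},
    {"host": "boss.example.com", "exchange": "boss",
     "make_message": event2boss},
]

def to_wire(event, entry):
    """Convert a waiting event to a msg string via the configured callback
    or the internal default."""
    return entry.get("make_message", event2json)(event)

msgs = [to_wire({"event": "commit"}, s) for s in servers]
```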
Of course this *can* be done on the consumer side - as I explained I could
install a completely new AMQP client to read a json message then filter it and
transform it and re-send it. This would have the unpleasant side effect of
doubling the traffic and introducing an additional
serialisation/deserialisation step. It's yet another place for resilience to
fail and for sysadmins to have to manage. That doesn't sound like a good thing.
And don't forget ... for BOSS, AMQP is the ruote transport; ruote is not a
generic AMQP client :) so I have to send messages that are understood.
Yep, I understand that. Do you understand that I'm trying to have a more
generic solution? I think we need to find a compromise somehow :)
By generic do you mean minimal? My meaning of generic is closer to "flexible".
Is this where we differ?
On the second point I don't see that my proposal is any heavier since
you'd probably just default to using an event2json() msgmaker and my
solution would need to define an event2boss() msgmaker.
I see this as an attempt to bring consumer side requirements to the producer
logic (see above). It doesn't matter how heavy this is. If it's done once, it
would mean that we allow our plugin to be bent to satisfy any consumer
requirement.
Again you make this sound like a bad thing?
David
--
"Don't worry, you'll be fine; I saw it work in a cartoon once..."
_______________________________________________
MeeGo-distribution-tools mailing list
[email protected]
http://lists.meego.com/listinfo/meego-distribution-tools