----- Original Message -----
> From: "Kevin Falcone" <[email protected]>

> This is a known bug we've been discussing how to fix. There's a lot
> of magic that needs to happen when you change a lifecycle midstream.
> You have to leave transitions in place until you migrate away from the
> old statuses and it's a pain. Unfortunately, on a large queue, any
> sort of magic could run for a long time and time out, etc.

That is a problem that crops up in a bunch of different places, at different
sizes and scopes, all the way down to "does $AndroidClient cache status
postings if it can't get them through due to a local connectivity error"
(for Tweetcaster the answer is "yes, visibly"; for Facebook, "maybe,
but you can't tell"), and I believe the indicated design pattern there is
"Job Queue".
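For concreteness, here's a minimal sketch of the pattern I mean -- nothing
RT-specific, just an in-memory queue in Python where failed jobs (say, a
status post during a connectivity outage) are re-queued rather than dropped;
the `post_status` job and `network_up` flag are made-up stand-ins:

```python
from collections import deque

class JobQueue:
    """Minimal job queue: hold work that may fail and retry it
    later instead of dropping it on the floor."""

    def __init__(self):
        self._jobs = deque()

    def enqueue(self, job):
        # job is a zero-argument callable returning True on success
        self._jobs.append(job)

    def run_pending(self):
        # Try every currently-queued job once; failures go back on the queue.
        for _ in range(len(self._jobs)):
            job = self._jobs.popleft()
            try:
                ok = job()
            except Exception:
                ok = False
            if not ok:
                self._jobs.append(job)

# Example: a "post status" job that fails until the network comes back.
network_up = {"ok": False}
results = []

def post_status():
    if not network_up["ok"]:
        return False
    results.append("posted")
    return True

q = JobQueue()
q.enqueue(post_status)
q.run_pending()          # network down: job stays queued
network_up["ok"] = True
q.run_pending()          # network back: queue drains
```

A real implementation would persist the queue and bound retries, of course;
the point is only that "do the magic later, in small retryable chunks" is the
usual answer to the long-running/timeout problem above.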

Is there already something in RT's internals for handling queued jobs?

If not, is this a big enough issue -- and might you gain useful leverage
in the future -- to justify introducing one?

Cheers,
-- jra
-- 
Jay R. Ashworth                  Baylink                       [email protected]
Designer                     The Things I Think                       RFC 2100
Ashworth & Associates     http://baylink.pitas.com         2000 Land Rover DII
St Petersburg FL USA               #natog                      +1 727 647 1274


-- 
Help improve RT by taking our user survey: 
https://www.surveymonkey.com/s/N23JW9T
