Keith Mitchell wrote:
> Hi Sarah,
>>
>> As for a 'rollback', yes, in some cases we need to be able to restart
>> and clean up from a previous checkpoint run. This can be managed by
>> ZFS when ZFS is available; for other checkpoints we need to determine
>> whether any cleanup is required if the user wants to roll back.
>> For example, target discovery has no cleanup of its own, except for
>> data stored in the data cache, and that will be handled by the design
>> of the data cache.
>
> Based on this explanation, it sounds like, for example, target
> instantiation wouldn't necessarily "roll back" either, but could
> potentially be restarted by reading the data cache and re-creating the
> partition/slice/zpool layout. Is that more or less correct?
Yes, that would be correct.
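Just to illustrate the restart path (every name below is hypothetical,
not from the design doc): re-running the checkpoint against the cached
layout is effectively the rollback.

    # Hypothetical sketch: a checkpoint whose "rollback" is simply
    # re-running execute() against the layout that target discovery
    # stored in the data cache.
    class TargetInstantiation:
        def __init__(self, data_cache):
            self.data_cache = data_cache

        def execute(self):
            # Re-creating the layout from the cached description is
            # idempotent, so restart and rollback look the same here.
            for step in self.data_cache["desired_target_layout"]:
                self.apply(step)

        def apply(self, step):
            print("would create:", step)   # partition/slice/zpool, etc.

    cache = {"desired_target_layout": ["zpool rpool on c0t0d0s0"]}
    TargetInstantiation(cache).execute()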
>
>>>
>>> Loggers and "Progress receivers": These two concepts seem very
>>> similar in function (although not in the data they receive). One
>>> thought I had was that, if each logging statement were composed of
>>> the following data, then the progress receiver(s) could be a special
>>> logger that only consumes certain portions of that data.
>>> - Current time
>>> - Current progress %
>>> - Estimated time remaining
>>> - Message string
>>> - Optionally, location in code (execution stack or file name and
>>> line number, for example)
Okay, but only if you stop reading my mind ;-) Ginnie and I have
started working on the logging service and we should have a design
document soon. I should add that our thoughts were eerily similar to
what you have suggested, with current progress and estimated time
being optional.
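Roughly, something along these lines (placeholder names only, not the
final API):

    import time
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LogRecord:                        # placeholder name
        message: str
        timestamp: float                    # current time
        progress: Optional[float] = None    # current progress %
        eta: Optional[float] = None         # est. seconds remaining
        location: Optional[str] = None      # e.g. file name and line

    class ProgressReceiver:
        # A "logger" that consumes only the progress-related fields
        # of each record and ignores the rest.
        def handle(self, record):
            if record.progress is not None:
                print("%3d%% %s" % (record.progress, record.message))

    ProgressReceiver().handle(
        LogRecord("Creating zpool", time.time(), progress=40, eta=90))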
>>>
>>
>> Sure, this could work. A progress receiver could be a logger; we
>> just have to be sure that the data can be used by the progress
>> receiver in a way that can be displayed.
>>
>>> get_loggers() vs. log_message() - For logging, what is the expected
>>> usage: log_message(), or iterate through the loggers from
>>> get_loggers() and log things explicitly to each? (Am I getting too
>>> low level here?)
>>
>> Good question :-). The intent is that logging goes through the
>> engine, so you log to each logger you are interested in logging to.
>> I imagine we will have an 'all' flag or something. We do have a
>> log_message() method on both the ExecutionEngine and the Logging
>> class. This may be incorrect. I will take a look at this.
>
> The logging systems I've dealt with (in Python and Java) have the
> individual loggers manage what data they care about - a logger set to
> print at the "error" level would ignore statements passed in at the
> "debug" level. The proposal in this document seems to be the reverse -
> the engine (or the application) retrieves the loggers and decides
> which ones to log to.
We have started working on the logging service design, and I can say
right now that the approach is more like the one you describe above.
So stay tuned.
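For reference, the paradigm you describe, shown with plain stdlib
logging (just the pattern, not our final API):

    import logging

    eng = logging.getLogger("install.engine")
    eng.setLevel(logging.DEBUG)

    console = logging.StreamHandler()
    console.setLevel(logging.ERROR)     # this receiver wants errors only

    logfile = logging.FileHandler("install.log")
    logfile.setLevel(logging.DEBUG)     # this one wants everything

    eng.addHandler(console)
    eng.addHandler(logfile)

    # The application logs once; each handler filters for itself.
    eng.debug("probing disks")          # file only
    eng.error("no disks found")         # console and file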
>
> I would probably strongly recommend using the former paradigm (and if
> this conversation is better off waiting for the logging service
> proposal, I can do that), because it provides a single interface for
> changing the logging level for one or more loggers (that interface
> being provided by the base logger class). Someone looking for debug
> output can set that on the specific logger, instead of forcing the
> individual application to keep track of what the current logging level
> is and send messages to different loggers based on that.
>
> (If I'm misunderstanding the intent of these API functions, or if this
> conversation is better had when the logging service has a design doc,
> just let me know).
-Sanjay