Sorry for the delay; I was busy with the deltacloud appliance.
On 08/13/2010 03:28 PM, Chris Lalancette wrote:
> <snip>
>> + end
>> + end
>> +end
>> +parser = Nokogiri::XML::SAX::PushParser.new(CondorEventLog.new)
>> +
>> +# XXX bit of a hack, condor event log doesn't seem to have a top level element
>> +# enclosing everything else in the doc (as standards conforming xml must).
>> +# Create one for parsing purposes.
>> +parser<< "<events>"
>> +
>> +# last time the event log was modified
>> +event_log_timestamp = nil
>> +
>> +# last position we've read in the log
>> +event_log_position = 0
> This is a problem for dbomatic restarts. That is, if you get a few events,
> and then restart dbomatic, you'll go back to the beginning of the event log
> and re-add those same events to the table. We are going to have to track this
> on persistent storage somehow.
Added a file to persistently track this across dbomatic restarts. It is
currently located under /var/run/dbomatic/.
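Roughly, the persistence looks something like this (just a sketch; the file
name "event_log_position" and the helper names are my placeholders, not
necessarily what ends up in the patch):

require 'fileutils'

# Hypothetical helpers for persisting the last read offset across restarts.
# The /var/run/dbomatic directory matches the patch; the file name is assumed.
POSITION_FILE = "/var/run/dbomatic/event_log_position"

def load_event_log_position
  File.exist?(POSITION_FILE) ? File.read(POSITION_FILE).to_i : 0
end

def save_event_log_position(pos)
  FileUtils.mkdir_p(File.dirname(POSITION_FILE))
  File.open(POSITION_FILE, "w") { |f| f.write(pos.to_s) }
end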
>> +
>> +# set true to terminate dbomatic
>> +terminate = false
>> +until terminate
>> + log_file = File.open("/var/log/condor/EventLog")
>> +
>> + # Condor seems to open / close the event log for every
>> + # entry. Simply poll for new data in the log
>> + unless log_file.mtime == event_log_timestamp
>> + event_log_timestamp = log_file.mtime
>> + log_file.pos = event_log_position
>> + while c = log_file.getc
>> + parser<< c.chr
>> + end
> This is probably better done by getting line-by-line (instead of character
> by character), no?
>
Changed.
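The read now looks something like the sketch below (reusing the parser,
log_file and event_log_position variables from the patch; the exact code may
differ slightly):

# Feed any newly appended lines to the SAX push parser, line by line.
log_file.pos = event_log_position
log_file.each_line do |line|
  parser << line
end
event_log_position = log_file.pos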
>> + event_log_position = log_file.pos
>> + end
>> +
>> + sleep 1
> The sleep is fine for now, but I would prefer if we used something like
> inotify to get notified of changes to the file. That's a future optimization,
> though.
Ah yes, I was looking for inotify but forgot the name. I updated the
process to track file changes that way.
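For reference, a minimal sketch of what the inotify-based approach might look
like, assuming the rb-inotify gem (whether that's the binding actually used
here is an assumption on my part):

require 'rb-inotify'

# Hypothetical sketch: react to modifications of the event log instead of
# sleeping in a loop. Reuses the parser from the patch above.
event_log_position = 0  # would be loaded from the position file on startup

notifier = INotify::Notifier.new
notifier.watch("/var/log/condor/EventLog", :modify) do |event|
  File.open("/var/log/condor/EventLog") do |log_file|
    log_file.pos = event_log_position
    log_file.each_line { |line| parser << line }
    event_log_position = log_file.pos
  end
end
notifier.run  # blocks, dispatching the watch callback as events arrive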
>> +end
>> +
>> +parser<< "</events>"
>> +parser.finish
> So, this is a really good start. It looks like it's the framework we need
> for doing additional work here. The biggest questions that come to mind have
> to do more with the types of events that condor is generating, and whether
> we can properly calculate our QoS data from it. In that vein, could you
> schedule some actual guests on, say, EC2, and then see if the information we
> are gathering can answer:
>
> 1) How long it took between us submitting the request, and the request to
> be started? (i.e. gone through matchmaking and the deltacloud GAHP).
> 2) How long it took between the request being started and the guest going to
> running? (this should be the time the backend cloud took to get the guest up
> and running).
>
> And other metrics you can think of.
>
I'm still having trouble getting condor fully integrated with the
aggregator and core components, which is limiting the amount of log info
I'm collecting locally. We tried debugging this last week but didn't
fully finish; perhaps when you and Ian are next available we can resolve
that issue.
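Once the events are flowing, the interval calculations themselves should be
straightforward; a rough sketch (the hash keys are placeholders, since I
haven't confirmed what event names and timestamps the condor log actually
gives us):

require 'time'

# Hypothetical: given Time objects pulled out of the parsed events, compute
# the two intervals described above. Keys are placeholders, not confirmed
# condor event names.
def qos_intervals(events)
  {
    :matchmaking_and_gahp => events[:started] - events[:submitted],
    :backend_boot         => events[:running] - events[:started]
  }
end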
Also, as far as the dbomatic process itself goes, how much parameterization
does it need? The only thing I can see as being variable is the log
file location, and even that isn't a major use case. Also, would it be
smart to assume this is going to be started via the init/services
interface and should be daemonized?
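If we do go the init script route, the startup could be something as simple as
the sketch below (an assumption on my part: it relies on Ruby 1.9's
Process.daemon; on 1.8 we'd have to fork/detach manually or pull in the
daemons gem, and the pid file path is a placeholder):

#!/usr/bin/ruby
# Hypothetical daemonized entry point for dbomatic.
PID_FILE = "/var/run/dbomatic/dbomatic.pid"   # placeholder path

Process.daemon                                # detach from the terminal
File.open(PID_FILE, "w") { |f| f.write(Process.pid.to_s) }

# ... existing event log watching / parsing loop goes here ...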
-Mo
_______________________________________________
deltacloud-devel mailing list
[email protected]
https://fedorahosted.org/mailman/listinfo/deltacloud-devel