Hi Richard,

> The current design of rrdtool is based around scripts calling tools
> which do a transaction using a single .rrd file, and then quit.

if you have lots of data I guess you would NOT use the cli but rather
the perl module ... but besides this ...

> Note that I'm not suggesting we all run out and start moving our
> graphing DBs to SQL, but the necessary architecture to scale to
> large data sets is abundantly clear thanks to all those people who
> spend lots of time and energy developing databases.

have you actually run tests with databases on this? are they faster
when you update hundreds of thousands of different 'data sources'?
how would you structure the tables?

 * ds table:   ds-id, name, type, min, max
 * data table: ds-id, timestamp, value

or would you create a different table for each 'data source'? I know
that Oracle has a time-series (extension?) for its product, but I have
not heard that either MySQL, PostgreSQL or SQLite is optimized for
that type of data ...
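to make the question concrete, here is a minimal sketch of that
generic two-table layout, with SQLite standing in as an example
backend; the table and column names are just the ones from above,
nothing rrdtool actually uses:

  /* sketch only: the generic two-table layout from above, done with
     the SQLite C API; none of the names below are real rrdtool code */
  #include <sqlite3.h>

  int main(void)
  {
      sqlite3 *db;
      char *err = NULL;

      if (sqlite3_open("test.db", &db) != SQLITE_OK)
          return 1;

      /* one row per data source ... */
      sqlite3_exec(db,
          "CREATE TABLE ds   (ds_id INTEGER PRIMARY KEY,"
          "                   name TEXT, type TEXT, min REAL, max REAL);"
          /* ... and one row per (data source, timestamp) sample */
          "CREATE TABLE data (ds_id INTEGER, timestamp INTEGER,"
          "                   value REAL,"
          "                   PRIMARY KEY (ds_id, timestamp));",
          NULL, NULL, &err);

      /* every update becomes an INSERT on the data table -- this is
         the path that has to keep up with all those data sources */
      sqlite3_exec(db,
          "INSERT INTO data VALUES (1, 1100000000, 42.0);",
          NULL, NULL, &err);

      sqlite3_close(db);
      return 0;
  }

whether that INSERT path keeps up with hundreds of thousands of data
sources per update round is exactly what I would want to see measured.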
> struct rrd_update_param {
>     struct timeval timestamp;
>     long datasource;
>     rrd_value_t value;
> };
>
> It is the job of the frontend tool to parse all the different forms
> of time you want to support (N: value: value@ etc), to parse any
> templates you want to use, etc.

well the rrd_update example is nice, but how would you go about
something like rrd_create, or rrd_graph?

> Unfortunately I'm involved in about a billion projects right now
> [...]

there you go ... and so it ends ... most of the time

cheers
tobi

--
Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten
http://tobi.oetiker.ch  [EMAIL PROTECTED]  ++41 62 213 9907