For this specific use case, it's understood that we wouldn't have alerting rules, as the metrics would be ingested only after they've been created. The data would be used purely for inspecting historical trends and identifying outliers over weeks, months, etc. I'm leaning towards TimescaleDB, as we already run many Postgres instances internally, so it would be less work to stand up than implementing something new such as VictoriaMetrics.
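For what it's worth, if VictoriaMetrics did end up in the mix, its CSV import endpoint (`/api/v1/import/csv`) takes a `format` query parameter describing each column. A minimal sketch of building that request URL, assuming a local instance on the default port 8428 and a hypothetical CSV layout of timestamp/metric/host columns:

```python
import urllib.parse

def build_csv_import_url(base_url, column_format):
    """Build a VictoriaMetrics /api/v1/import/csv URL.

    column_format maps 1-based CSV column numbers to a "type:name" spec,
    e.g. {1: "time:unix_s", 2: "metric:cpu_usage", 3: "label:host"}.
    The column layout here is an assumption for illustration.
    """
    fmt = ",".join(f"{col}:{spec}" for col, spec in sorted(column_format.items()))
    return f"{base_url}/api/v1/import/csv?" + urllib.parse.urlencode({"format": fmt})

url = build_csv_import_url(
    "http://localhost:8428",  # assumed local VictoriaMetrics instance
    {1: "time:unix_s", 2: "metric:cpu_usage", 3: "label:host"},
)
# The CSV rows themselves would then be sent as the POST body, e.g.
# requests.post(url, data=open("metrics.csv", "rb")) -- not run here.
```

The exact column specs are documented in the VictoriaMetrics README linked below; the metric and label names above are placeholders.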
I was thinking we could simply spin up a Prometheus instance, run the TSDB import, and then have the Thanos sidecar upload the block(s) to object storage. Once that's done, the Prometheus instance could be destroyed, as we would always query the data through Thanos Query. Given that we wouldn't be alerting on this data, would the TSDB import still be suitable?

On Wednesday, October 14, 2020 at 9:17:43 AM UTC-4 b.ca...@pobox.com wrote:
> I don't know if that CSV import supports your use case: there's a
> difference between back-filling an entire timeseries with historical data,
> and pumping new blocks of data in every 24 hours. Doing the latter raises
> various questions - how would alerting rules work, for instance?
>
> Your idea of using TimescaleDB seems reasonable for this use case.
> VictoriaMetrics is another option to look at, as it supports ingestion in a
> whole range of formats:
>
> https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-time-series-data
>
> It can also be queried directly using a superset of PromQL with a
> Prometheus-compatible API (that is, you can point a Grafana dashboard at it
> and pretend it's a Prometheus server).
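On the TSDB import step: recent Prometheus releases (2.24+) can backfill blocks from OpenMetrics text via `promtool tsdb create-blocks-from openmetrics`, which expects samples as exposition lines terminated by a `# EOF` marker. A hedged sketch of converting a CSV export into that format; the CSV column names (`metric`, `host`, `timestamp_s`, `value`) are assumptions about the input, not anything Prometheus mandates:

```python
import csv
import io

def csv_to_openmetrics(csv_text):
    """Convert CSV rows (assumed columns: metric,host,timestamp_s,value)
    into OpenMetrics text suitable for
    `promtool tsdb create-blocks-from openmetrics`.
    The output must end with a `# EOF` line for promtool to accept it."""
    out = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        # One sample line: metric{labels} value timestamp_in_seconds
        out.append(
            f'{row["metric"]}{{host="{row["host"]}"}} '
            f'{row["value"]} {row["timestamp_s"]}'
        )
    out.append("# EOF")
    return "\n".join(out) + "\n"

sample = "metric,host,timestamp_s,value\ncpu_usage,web01,1602676663,0.42\n"
result = csv_to_openmetrics(sample)
# result:
# cpu_usage{host="web01"} 0.42 1602676663
# # EOF
```

The resulting file could then be turned into blocks with something like `promtool tsdb create-blocks-from openmetrics data.om ./data` and dropped into the throwaway Prometheus instance's data directory for the Thanos sidecar to ship; since backfilled blocks may span longer ranges than fresh 2h blocks, the sidecar's handling of compacted blocks is worth checking before destroying the instance.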