I have a use case where I would like Prometheus to write metrics locally while also streaming the same metrics to another type of external database. The goal is to ship these metrics as they're scraped, so queries on the remote database can run against the most up-to-date data; something like receiving a compacted/completed 2h block wouldn't be sufficient for our needs. It would be great if we could use both local storage and remote write at the same time, although from what I can see that's not possible. I'm sure there's a good reason for this, but it would be great if someone could share some lower-level knowledge as to why this isn't possible?
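For reference, the sort of configuration I'd ideally want is just a standard remote_write block sitting alongside the default local TSDB, something like this (the endpoint URL is hypothetical):

```yaml
# prometheus.yml (sketch) -- local storage stays on by default;
# remote_write streams the same samples out as they're scraped.
remote_write:
  - url: "https://example-db.internal/api/v1/write"  # hypothetical receiver endpoint
```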
So far I can't seem to find anything out-of-the-box that would let us ship incoming metrics remotely as they come in, but I suspect we could accomplish this by reading the WAL with the wal package. I'm not completely familiar with all the internals of the WAL and Prometheus storage, so I'm trying to figure out how I could accomplish this efficiently in a reasonable amount of time. I appreciate any help. Thanks