I'll look into it; I'm out on holiday for Tet. I'll fix that error, and if I still get an error I will ask you again. Thank you for helping me. Have a nice day.
On Thu, 27 Jan 2022 at 15:54, Brian Candler <[email protected]> wrote:

> You got your backslashes in the wrong place. If you're going to split a
> line, then every portion must have a backslash at the end, apart from the
> last one.
>
> You have:
>
>     ExecStart=/usr/local/bin/prometheus
>       --config.file=/etc/prometheus/prometheus.yml \
>       --storage.tsdb.path=/var/lib/prometheus/ \
>       --web.console.templates=/etc/prometheus/consoles \
>       --web.console.libraries=/etc/prometheus/console_libraries \
>
> It should be:
>
>     ExecStart=/usr/local/bin/prometheus \
>       --config.file=/etc/prometheus/prometheus.yml \
>       --storage.tsdb.path=/var/lib/prometheus/ \
>       --web.console.templates=/etc/prometheus/consoles \
>       --web.console.libraries=/etc/prometheus/console_libraries
>
> (the first and last lines are different)
>
> Also, in the sample you pasted, you had a large number of spaces *after*
> one of the backslashes (after --config.file=/etc/prometheus/prometheus.yml).
> It might work as-is, but really you want backslash-newline, not
> backslash-lots-of-spaces.
>
> You may find it easier to append everything onto one line and drop the
> backslashes. Another way is to pick up the settings from a config file,
> e.g.
>
>     EnvironmentFile=/etc/default/prometheus
>     ExecStart=/opt/prometheus/prometheus $OPTIONS
>
> then in /etc/default/prometheus you put:
>
>     OPTIONS='--config.file=/etc/prometheus/prometheus.yml
>       --storage.tsdb.path=/var/lib/prometheus/
>       --web.console.templates=/etc/prometheus/consoles
>       --web.console.libraries=/etc/prometheus/console_libraries'
>
> I also recommend these settings under [Service] in your systemd unit file:
>
>     TimeoutStopSec=300
>     ExecReload=/bin/kill -HUP $MAINPID
>
> The first is to give prometheus time to flush its WAL to disk when
> shutting down. The second lets you do "systemctl reload prometheus" to
> change the prometheus settings without a full shutdown and restart. This
> makes changing the prometheus config and alerting rules much less invasive.
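Putting those suggestions together, a complete unit might look like the sketch
below. It keeps the binary at /usr/local/bin/prometheus and the paths from the
original file, and uses the /etc/default/prometheus name from the example above;
treat it as a sketch to adapt, not a definitive configuration.

    # /etc/systemd/system/prometheus.service
    [Unit]
    Description=Prometheus
    Wants=network-online.target
    After=network-online.target

    [Service]
    User=prometheus
    Group=prometheus
    Type=simple
    # Command-line options live in a separate file so ExecStart stays on one line.
    EnvironmentFile=/etc/default/prometheus
    ExecStart=/usr/local/bin/prometheus $OPTIONS
    # Give Prometheus time to flush its WAL on shutdown, and allow
    # "systemctl reload prometheus" to re-read the config via SIGHUP.
    TimeoutStopSec=300
    ExecReload=/bin/kill -HUP $MAINPID

    [Install]
    WantedBy=multi-user.target

    # /etc/default/prometheus
    OPTIONS='--config.file=/etc/prometheus/prometheus.yml
      --storage.tsdb.path=/var/lib/prometheus/
      --web.console.templates=/etc/prometheus/consoles
      --web.console.libraries=/etc/prometheus/console_libraries'

After editing, run "systemctl daemon-reload" and then "systemctl restart
prometheus" so systemd picks up the new unit.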
> On Thursday, 27 January 2022 at 02:33:06 UTC, [email protected] wrote:
>
>> I checked it, and it says:
>>
>>     Jan 27 09:18:09 vps38 systemd[1]: [/etc/systemd/system/prometheus.service:11] Unknown lvalue '--config.file' in section 'Service'
>>     Jan 27 09:18:09 vps38 systemd[1]: [/etc/systemd/system/prometheus.service:12] Unknown lvalue '--storage.tsdb.path' in section 'Service'
>>     Jan 27 09:18:27 vps38 systemd[1]: Started Prometheus.
>>     Jan 27 09:18:27 vps38 prometheus[12562]: level=info ts=2022-01-27T02:18:27.698649951Z caller=main.go:285 msg="no time or size retention was set so using the default time retention" duration=15d
>>     Jan 27 09:18:27 vps38 prometheus[12562]: level=info ts=2022-01-27T02:18:27.69870679Z caller=main.go:321 msg="Starting Prometheus" version="(version=2.8.1, branch=HEAD, revision=4d60eb36dcbed725fcac5b27018574118f12fffb)"
>>     Jan 27 09:18:27 vps38 prometheus[12562]: level=info ts=2022-01-27T02:18:27.698722314Z caller=main.go:322 build_context="(go=go1.11.6, user=root@bfdd6a22a683, date=20190328-18:04:08)"
>>     Jan 27 09:18:27 vps38 prometheus[12562]: level=info ts=2022-01-27T02:18:27.698735882Z caller=main.go:323 host_details="(Linux 3.10.0-1160.el7.x86_64 #1 SMP Mon Oct 19 16:18:59 UTC 2020 x86_64 vps38 (none))"
>>     Jan 27 09:18:27 vps38 prometheus[12562]: level=info ts=2022-01-27T02:18:27.698749252Z caller=main.go:324 fd_limits="(soft=1024, hard=4096)"
>>     Jan 27 09:18:27 vps38 prometheus[12562]: level=info ts=2022-01-27T02:18:27.698760875Z caller=main.go:325 vm_limits="(soft=unlimited, hard=unlimited)"
>>     Jan 27 09:18:27 vps38 prometheus[12562]: level=info ts=2022-01-27T02:18:27.699103841Z caller=main.go:640 msg="Starting TSDB ..."
>>     Jan 27 09:18:27 vps38 prometheus[12562]: level=info ts=2022-01-27T02:18:27.699138444Z caller=main.go:509 msg="Stopping scrape discovery manager..."
>>     Jan 27 09:18:27 vps38 prometheus[12562]: level=info ts=2022-01-27T02:18:27.699146573Z caller=main.go:523 msg="Stopping notify discovery manager..."
>>     Jan 27 09:18:27 vps38 prometheus[12562]: level=info ts=2022-01-27T02:18:27.699140589Z caller=web.go:418 component=web msg="Start listening for connections" address=0.0.0.0:9090
>>     Jan 27 09:18:27 vps38 prometheus[12562]: level=info ts=2022-01-27T02:18:27.69915108Z caller=main.go:545 msg="Stopping scrape manager..."
>>     Jan 27 09:18:27 vps38 systemd[1]: prometheus.service: main process exited, code=exited, status=1/FAILURE
>>     Jan 27 09:18:27 vps38 systemd[1]: Unit prometheus.service entered failed state.
>>     Jan 27 09:18:27 vps38 systemd[1]: prometheus.service failed.
>>
>> I don't understand where I went wrong in the first two lines, and this is
>> my prometheus.service:
>>
>>     [Unit]
>>     Description=Prometheus
>>     Wants=network-online.target
>>     After=network-online.target
>>
>>     [Service]
>>     User=prometheus
>>     Group=prometheus
>>     Type=simple
>>     ExecStart=/usr/local/bin/prometheus
>>       --config.file=/etc/prometheus/prometheus.yml \
>>
>>
>>       --storage.tsdb.path=/var/lib/prometheus/ \
>>       --web.console.templates=/etc/prometheus/consoles \
>>       --web.console.libraries=/etc/prometheus/console_libraries \
>>
>>     [Install]
>>     WantedBy=multi-user.target
>>
>> On Wednesday, 26 January 2022 at 20:53:27 UTC+7, Brian Candler wrote:
>>
>>> Have you checked "journalctl -eu prometheus" - does it say any more than
>>> the logs you quoted?
>>>
>>> Given what you showed, prometheus appears to be crashing without
>>> reporting an error - not even a panic. That seems like a bug. If it's a
>>> configuration issue then it might be something to do with filesystem
>>> permissions. If you're running it as the "prometheus" user, then make sure
>>> that the prometheus data directory and all files under it are owned by
>>> "prometheus" too. Perhaps one became owned by root accidentally? Or it
>>> could be something else. You might be able to find out a bit more by
>>> running it under strace.
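As a concrete sketch of those checks: the paths below are the ones from the
unit file above and the service user is "prometheus" as in the thread, so
adjust them if your layout differs.

    # More detail from the journal for this unit
    journalctl -eu prometheus

    # Is anything under the data directory not owned by the service user?
    ls -ld /var/lib/prometheus
    find /var/lib/prometheus ! -user prometheus -ls

    # If something ended up owned by root, hand it back
    chown -R prometheus:prometheus /var/lib/prometheus

    # Run the binary by hand under strace to see which file operation fails
    # (stop the unit first so the manual run does not fight over port 9090
    # or the data directory lock)
    systemctl stop prometheus
    sudo -u prometheus strace -f -e trace=file \
      /usr/local/bin/prometheus \
      --config.file=/etc/prometheus/prometheus.yml \
      --storage.tsdb.path=/var/lib/prometheus/

If the strace output ends with an EACCES or ENOSPC on something under the data
directory, that points at permissions or disk space respectively.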
>>> But if it is a bug, I'd say it's likely fixed in a newer version, and
>>> upgrading is the best and easiest approach to fixing your issue.
>>>
>>> On Wednesday, 26 January 2022 at 09:52:53 UTC, [email protected] wrote:
>>>
>>>> Yes, I got it.
>>>> I'm using Prometheus version 2.8.1 - is it too old? And I see my drive
>>>> is not full.
>>>> I'm sorry, and I hope you can help me, because I really need to get this
>>>> configuration working.
>>>> Thank you very much!
>>>>
>>>> On Wednesday, 26 January 2022 at 16:08:01 UTC+7, Brian Candler wrote:
>>>>
>>>>> The version of prometheus you're running is ancient - nearly 3 years
>>>>> old. Unless you have a specific reason for this version, then I'd suggest
>>>>> trying something more modern. There have been lots of improvements in the
>>>>> timeseries database internals since then, and if you still have a problem,
>>>>> you're more likely to get help resolving it.
>>>>>
>>>>> Otherwise, the first thing to check is: is your disk full?
>>>>>
>>>>> On Wednesday, 26 January 2022 at 08:24:48 UTC, [email protected] wrote:
>>>>>
>>>>>> I have one issue: when I restart the prometheus service, it says:
>>>>>>
>>>>>>     [root@vps38 ~]# systemctl status prometheus
>>>>>>     ● prometheus.service - Prometheus
>>>>>>        Loaded: loaded (/etc/systemd/system/prometheus.service; enabled; vendor preset: disabled)
>>>>>>        Active: failed (Result: exit-code) since Wed 2022-01-26 15:14:31 +07; 7s ago
>>>>>>       Process: 9532 ExecStart=/usr/local/bin/prometheus (code=exited, status=1/FAILURE)
>>>>>>      Main PID: 9532 (code=exited, status=1/FAILURE)
>>>>>>
>>>>>>     Jan 26 15:14:31 vps38 prometheus[9532]: level=info ts=2022-01-26T08:14:31.56140132Z caller=main.go:285 msg="no time or size retention was set so using the default time retention" duration=15d
>>>>>>     Jan 26 15:14:31 vps38 prometheus[9532]: level=info ts=2022-01-26T08:14:31.56145687Z caller=main.go:321 msg="Starting Prometheus" version="(version=2.8.1, branch=HEAD, revision=4d60eb36dcbed725fcac5b27018574118f12fffb)"
>>>>>>     Jan 26 15:14:31 vps38 prometheus[9532]: level=info ts=2022-01-26T08:14:31.561472639Z caller=main.go:322 build_context="(go=go1.11.6, user=root@bfdd6a22a683, date=20190328-18:04:08)"
>>>>>>     Jan 26 15:14:31 vps38 prometheus[9532]: level=info ts=2022-01-26T08:14:31.561486513Z caller=main.go:323 host_details="(Linux 3.10.0-1160.el7.x86_64 #1 SMP Mon Oct 19 16:18:59 UTC 2020 x86_64 vps38 (none))"
>>>>>>     Jan 26 15:14:31 vps38 prometheus[9532]: level=info ts=2022-01-26T08:14:31.561499697Z caller=main.go:324 fd_limits="(soft=1024, hard=4096)"
>>>>>>     Jan 26 15:14:31 vps38 prometheus[9532]: level=info ts=2022-01-26T08:14:31.561510835Z caller=main.go:325 vm_limits="(soft=unlimited, hard=unlimited)"
>>>>>>     Jan 26 15:14:31 vps38 prometheus[9532]: level=info ts=2022-01-26T08:14:31.56193411Z caller=main.go:640 msg="Starting TSDB ..."
>>>>>>     Jan 26 15:14:31 vps38 systemd[1]: prometheus.service: main process exited, code=exited, status=1/FAILURE
>>>>>>     Jan 26 15:14:31 vps38 systemd[1]: Unit prometheus.service entered failed state.
>>>>>>     Jan 26 15:14:31 vps38 systemd[1]: prometheus.service failed.
>>>>>>
>>>>>> I hope to get help with this case. Thanks, all.
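On the "is your disk full?" question raised earlier in the thread, a quick
check might look like the following; the path is the --storage.tsdb.path from
the unit file above, so adjust it if yours differs.

    # Free space on the filesystem holding the TSDB
    df -h /var/lib/prometheus

    # How much space the TSDB itself is using, per subdirectory
    du -sh /var/lib/prometheus/*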

