I have the same issue with multiline JSON.
I can't switch to influxline either, since that would mean converting all my
other scripts to write influxline instead of JSON!
Any suggestions for working around this?
--
Remember to include the InfluxDB version number with all issue reports
Hi Nathaniel,
Thanks for replying. I forgot to mention that I am using v0.12, since our
prod InfluxDB is still on 0.12.
Join works on the collectd and httpd measurements that I tried in the
_internal database.
Here's the data returned by my queries, copied from kapacitor.log; they
both have exact
The "subscriber" metrics with no associated database are the totals across
all databases. To put it into InfluxQL:
select sum(pointsWritten) from "subscriber" where time > now() - 10s and
"database" != '';
select sum(pointsWritten) from "subscriber" where time > now() - 10s and
"database" = ''
Good question, Patrick. I assume it's measuring some internal metrics
gathering, but I'll ask the devs. We're in the process of fully documenting
all _internal values, but I still don't know all of them.
On Wed, Oct 5, 2016 at 2:46 PM, wrote:
> I was looking at the _internal..subscriber measurem
Hello, I don't completely understand what this error means: "write failed
for shard 143: engine: cache maximum memory size exceeded"
The server has 128 GB of RAM and the config has default settings.
As far as I understand, the "cache-max-memory-size" and
"cache-snapshot-memory-size" parameters are n
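For context on the message above: both cache settings live under the [data] section of influxdb.conf. A sketch with what I believe are the stock defaults (values from memory; check the sample config shipped with your version):

```toml
[data]
  # Snapshot (flush) the in-memory cache to a TSM file once it reaches this size.
  cache-snapshot-memory-size = 26214400    # 25 MB
  # Reject new writes to a shard whose cache grows past this size,
  # producing the "cache maximum memory size exceeded" error.
  cache-max-memory-size = 1048576000       # ~1 GB
```

Note the limit is per shard cache, not a fraction of total system RAM, which is why a 128 GB box can still hit it.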
I was looking at the _internal..subscriber measurement to generate a report
when there are write errors, and I ran across this, which has me confused:
> select * from _internal..subscriber where "database"='' and time > now() - 1m
name: subscriber
time    database
Do you know that the SMTP server is functional? Can you send email from
other programs or from the command line?
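One way to run that check is to talk to the SMTP server directly, bypassing Kapacitor entirely. A minimal sketch using Python's standard library (the addresses are placeholders):

```python
import smtplib
from email.message import EmailMessage

def build_test_message(sender="test@example.com", rcpt="test@example.com"):
    """Build a minimal test email (both addresses are placeholders)."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = rcpt
    msg["Subject"] = "SMTP smoke test"
    msg.set_content("If this arrives, the SMTP server itself is fine.")
    return msg

def send_test_message(host="localhost", port=25):
    """Hand the message straight to the SMTP server, no Kapacitor involved."""
    with smtplib.SMTP(host, port, timeout=10) as smtp:
        smtp.send_message(build_test_message())
```

If send_test_message() fails here too, the problem is the mail server, not the Kapacitor [smtp] config.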
On Wed, Oct 5, 2016 at 2:15 PM, wrote:
> Kapacitor seems to just want to make me upset.
>
> Relevant config:
>
> [smtp]
> enabled = true
> host = "localhost"
> port = 25
> username =
Kapacitor seems to just want to make me upset.
Relevant config:
[smtp]
enabled = true
host = "localhost"
port = 25
username = ""
password = ""
from = "my email"
to = "my email"
no-verify = true (but also tried this with false)
idle-timeout = "30s"
global = false
state-changes-only = true
You could handle this by iteratively requesting reasonably small chunks of
time until you meet or exceed the number of results you're looking for.
When you expect to find very recent data in a measurement, this works well.
Unfortunately, if the newest data is actually fairly old, you'll hit a few
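The chunked approach described above can be sketched as follows; query_fn is a placeholder for whatever client call runs the time-bounded SELECT, and the doubling of the window is one way to keep the round-trip count low when the data is old:

```python
from datetime import datetime, timedelta, timezone

def fetch_recent(query_fn, n, chunk=timedelta(hours=1), max_chunks=48):
    """Walk backwards through time in growing windows until at least n
    points are collected, or give up after max_chunks windows.

    query_fn(start, end) stands in for a client call that runs
    SELECT ... WHERE time >= start AND time < end."""
    end = datetime.now(timezone.utc)
    points = []
    for _ in range(max_chunks):
        start = end - chunk
        points.extend(query_fn(start, end))
        if len(points) >= n:
            break
        end = start      # slide the window further into the past
        chunk *= 2       # widen each step so old data needs fewer round trips
    return points[:n]
```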
Thanks for the follow-up, Kostas.
I did find that the error really just means that the exe is not active right
now. I think logging that as an error is misleading, and maybe it should not
be logged at all.
Alan
On Tuesday, October 4, 2016 at 4:27:54 PM UTC-7, Alan Latteri wrote:
>
> In telegraf procstat
If you don't know when the points are in time, then yes, using a WHERE time
clause is challenging. InfluxDB will happily return results from a SELECT *
query with no time boundaries, it's just going to be RAM-intensive and may
require more resources than the machine has, depending on the number of
First, the union is working as expected: the script has three parents to the
union node: expected_instances, expected_instances, and running_instances.
expected_instances is given twice; as a result, if you do 72985 + 38927 +
38927 = 150839 > 150782, which just means that some messages are still in
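The double count above works out exactly; which of the two reported figures belongs to which node is my reading of the message, so treat the assignment as an assumption:

```python
# expected_instances feeds the union twice, so its count shows up twice.
# Which figure belongs to which node is assumed, not stated in the thread.
running_instances = 72985
expected_instances = 38927

total_seen = running_instances + 2 * expected_instances
assert total_seen == 150839  # slightly above the 150782 the union has emitted
```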
If you set a WHERE time clause, how can you retrieve the N most recent
points? It might be that those points lie outside of the time range
specified in the WHERE clause, so you would need to set the widest time
range possible which would basically lead to a full scan just as if the
WHERE clause
LIMIT does not yet restrict the number of points queried, only the number
of points returned. Add a "WHERE time" clause to prevent the query from
sampling all points in the measurement.
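A minimal sketch of that advice, building the bounded form of the query (the measurement name comes from the thread; the window size is a placeholder to tune):

```python
def bounded_recent_query(measurement, n, window="1h"):
    """Build an InfluxQL query whose WHERE time clause bounds the scan
    before LIMIT trims the rows actually returned."""
    return (
        f'SELECT * FROM "{measurement}" '
        f'WHERE time > now() - {window} '
        f'ORDER BY time DESC LIMIT {n}'
    )
```

Without the time bound, LIMIT 5 still forces the engine to touch every point in the measurement; with it, only the recent window is scanned.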
On Wed, Oct 5, 2016 at 4:04 AM, wrote:
>
> I have stored around 4 crore records in a measurement. when i tried
The panic is not diagnostic. What is in the logs for the ~20 lines before
the panic? What happens on restart?
On Wed, Oct 5, 2016 at 12:56 AM, wrote:
> Hi All,
>
> I am running influxdb(v0.13) server in ubuntu system from more than a
> month. But suddenly the server stops. I checked the logs, di
It sounds like SLIMIT doesn't honor the ORDER BY time DESC clause at all.
I encourage you to file a feature request:
https://github.com/influxdata/influxdb/issues/new
On Tue, Oct 4, 2016 at 9:26 PM, Tanya Unterberger <
tanya.unterber...@gmail.com> wrote:
> Nope, sorry. I need the whole series and
On Wednesday, October 5, 2016 at 12:59:32 PM UTC+2, sreeh...@gmail.com wrote:
> Hi Team,
>
> I am creating the Disk I/O graph using Grafana and InfluxDB as Data Source. I
> am using the below query from Grafana to get both Read_Bytes and Write_Bytes
> for particular host.
>
> SELECT mean("read_bytes")
Hi Team,
I am creating a Disk I/O graph using Grafana with InfluxDB as the data source.
I am using the query below in Grafana to get both Read_Bytes and Write_Bytes
for a particular host.
SELECT mean("read_bytes") AS "Read_Bytes", mean("write_bytes") AS "Write_Bytes"
FROM "diskio" WHERE "host" =~
> With 180 fields, querying them all together might be RAM intensive. I would
> definitely recommend using the stress tool to model that schema and the query
> resource needs.
[snip]
> Option 1 has ~180 series per metaseries. Option 2 has two series per
> metaseries. The performance won't be ide
On Wednesday, October 5, 2016 at 2:27:54 AM UTC+3, Alan Latteri wrote:
> In telegraf procstat:
>
>
>
> [[inputs.procstat]]
> exe = "Nuke"
> [[inputs.procstat]]
> exe = "maya"
>
>
> return in log:
> 2016/10/04 15:51:50 Error: procstat getting process, exe: [maya] pidfile:
> [] pattern
I have stored around 4 crore (40 million) records in a measurement. When I
tried to look at only 4 records using a select query, it took up all the
memory and my system hung.
The query I used is: select * from new_fuelitems limit 5
Can anyone help me with this?
That made it prettier, and actually somewhat faster too. We'll go with that
for now!
Thanks.
On Wed, Oct 5, 2016 at 12:54 AM Sean Beckett wrote:
> Regex is the answer. It will be somewhat slow, but it will work. 300
> concatenated "OR" clauses will likely lead to RAM issues.
>
> On Tue, Oct 4, 2