[influxdb] Re: continuous query backfill

2016-10-25 Thread joblx88
On Tuesday, October 25, 2016 at 8:55:49 PM UTC-5, job...@gmail.com wrote:
> I have this continuous query in version 1.0.2 of InfluxDB:
> 
> CREATE CONTINUOUS QUERY cqDailyTasks ON r1metrics  BEGIN SELECT count(*) INTO 
> metrics..dailyTasks FROM metrics..tasks GROUP BY time(1d), * END
> 
> I have data since the 12th of October.
> 
> This continuous query did not backfill to the 12th as I expected. What did I 
> do wrong?
> 
> I tried adding a RESAMPLE EVERY 1m, and that did add some data, but not back 
> to the 12th.
> 
> Thanks

(For other newbies out there:) 

I just found that I have to backfill it myself with an INTO clause.

I did this

SELECT count(*) INTO r1metrics..dailyTasks FROM r1metrics..tasks WHERE time < 
now() GROUP BY time(1d), *

and it worked. InfluxDB handles the duplicate points by overwriting them.
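
(For reference, a minimal sketch of the combined pattern: the CQ keeps future 
intervals up to date, and a one-time SELECT ... INTO covers history. Two 
assumptions here: the lower time bound is based on the October 12 start 
mentioned above, and the CQ reads from r1metrics.. to match the backfill query 
that worked, whereas the original CQ referenced metrics..)

CREATE CONTINUOUS QUERY cqDailyTasks ON r1metrics BEGIN SELECT count(*) INTO 
r1metrics..dailyTasks FROM r1metrics..tasks GROUP BY time(1d), * END

SELECT count(*) INTO r1metrics..dailyTasks FROM r1metrics..tasks 
WHERE time >= '2016-10-12' AND time < now() GROUP BY time(1d), *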



[influxdb] continuous query backfill

2016-10-25 Thread joblx88
I have this continuous query in version 1.0.2 of InfluxDB:

CREATE CONTINUOUS QUERY cqDailyTasks ON r1metrics  BEGIN SELECT count(*) INTO 
metrics..dailyTasks FROM metrics..tasks GROUP BY time(1d), * END

I have data since the 12th of October.

This continuous query did not backfill to the 12th as I expected. What did I do 
wrong?

I tried adding a RESAMPLE EVERY 1m, and that did add some data, but not back to 
the 12th.

Thanks



[influxdb] Re: InfluxDB Raspberry Pi installation?

2016-10-25 Thread EBRAddict
For anyone coming across this thread: I found the apt-get install method 
stopped working when I tried it on a fresh Raspbian install; it couldn't find 
the package. I'm a Linux newbie, so perhaps I'm missing something obvious. I 
did get this to work:

# Update your Pi; this may take a while...

sudo apt-get update
sudo apt-get upgrade


# Get the Debian package from here. Note the file name and subdirectory
# structure may change, so you might have to search for it.
wget https://repos.influxdata.com/debian/pool/stable/i/influxdb/influxdb_1.0.2-1_armhf.deb


# Install the Debian package
sudo dpkg -i influxdb_1.0.2-1_armhf.deb


# Start the service (the package installs it as "influxdb")
sudo service influxdb start


# Run the CLI
influx
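
(The quoted question below also asks about starting at boot. A hedged sketch, 
assuming a systemd-based Raspbian such as Jessie; the last command is a rough 
SysV fallback for older images:)

# Enable the service at boot (systemd-based images)
sudo systemctl enable influxdb

# Rough equivalent on older SysV-style images
sudo update-rc.d influxdb defaults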


On Thursday, October 13, 2016 at 8:39:27 AM UTC-4, EBRAddict wrote:
>
> Hi,
>
> I'd like to try InfluxDB on a Raspberry Pi 3 for a mobile sensor project. 
> Currently it's logging ~50 data points every 200ms to a USB flash drive 
> text file but I want to ramp that up to 200 every 10ms, or however fast I 
> can push data from the microcontrollers to the Pi.
>
> I downloaded and uncompressed the ARM binaries using the instructions on 
> the InfluxDB download page:
>
> wget 
> https://dl.influxdata.com/influxdb/releases/influxdb-1.0.2_linux_armhf.tar.gz
> tar xvfz influxdb-1.0.2_linux_armhf.tar.gz
>
>
> What are the next steps? I'm not a Linux guy but I can follow directions 
> if someone could point me to them. I'd like to configure the service(s) to 
> run at startup automatically and be accessible for querying by any 
> logged-in user (it's a closed system).
>
> Thanks.
>



[influxdb] Telegraf Logparser plugin - * not working on Windows

2016-10-25 Thread zdw101
Hi All,

I'm having some issues using the Telegraf logparser plugin to parse all log 
files in a directory on Windows. I'm using the latest 1.0.1 Windows binary.

If I use an * in the file path, it doesn't find any files to read. But it works 
if I give the complete path to the file. Am I doing anything wrong in the conf 
file?

telegraf.conf input:

[[inputs.logparser]]
  ## Files to tail.
  files = ["D:\\temp\\*.log"]
  ## Read files from the beginning.
  from_beginning = true
  ## Override the default measurement name, which would be "logparser_grok".
  name_override = "log"
  ## For parsing logstash-style "grok" patterns:
  [inputs.logparser.grok]
    patterns = ["%{CUSTOM_LOG}"]
    custom_patterns = '''
      CUSTOM_LOG %{TIMESTAMP_ISO8601:ts:ts-"2006-01-02 15:04:05"} %{IPORHOST:serverhost} %{WORD:method:tag} %{URIPATH:page:tag} %{NOTSPACE:querystring} %{NUMBER:port:drop} %{NOTSPACE:username:drop} %{IPORHOST:clienthost} %{NOTSPACE:useragent:drop} %{NOTSPACE:cookie:drop} %{NOTSPACE:referer:drop} %{IPORHOST:hostname} %{NUMBER:response:tag} %{NUMBER:subresponse} %{NUMBER:scstatus:drop} %{NUMBER:scbytes:drop} %{NUMBER:csbytes:drop} %{NUMBER:timetaken:int}
    '''

--test output:

* Plugin: logparser, Collection 1

Thank you
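
(A hedged guess for anyone who lands here later: Telegraf's glob matching can 
treat the backslash as an escape character, so forward slashes in the pattern 
may behave better on Windows. An untested sketch:)

[[inputs.logparser]]
  ## Untested: forward slashes sidestep backslash-escaping in the glob pattern.
  files = ["D:/temp/*.log"]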



Re: [influxdb] Re: Kapacitor joins have no fields in

2016-10-25 Thread Peter Farmer
Hi Nathaniel,

That's got it! Thanks for the help!

/me heads off to write lots and lots of new batch queries.

Thanks,

Peter

On 25 October 2016 at 16:25,  wrote:

> Peter,
>
> Your script is nearly correct; there is just a small, subtle issue causing
> it to not quite work. Specifically, the output of a query node is a batch
> edge, meaning that the data is batched into sets. In your case
> that batch of data contains one point: the mean value for the specified
> time range. If you look closely at the two log lines before the join you
> will see that each line has two times: tmax represents the max time for the
> entire batch, and the other is the time of the point itself. Since join joins
> on time, for batches this means the tmax values must match and the points'
> times must also match. In your case the tmax values match, as they are within
> the tolerance of 60s, but the points' times are 2w apart and as a result no
> longer match. The resulting data from the join node is a batch without any
> points, and hence no fields.
>
> So after that, what is the solution? Simply add a `last` operation to the
> query nodes so that you select only the last point from each batch, in your
> case the only point. This transforms the batch into a single "stream" point
> with the time of tmax. Then when the points arrive at the join operation
> they will properly match and the rest of your script should work.
>
> TL;DR do this:
>
> var last_minute = batch
> |query('select mean(latency_avg) FROM "vdc"."default".latency')
> .groupBy('source','destination')
> .period(1m)
> .every(1m)
>  // Use the last operation to extract the single mean point from the
> result
>  |last('mean')
> .as('mean')
>  |log()
>  .prefix('LATENCY_AVG:SHORT')
>
> var last_2weeks = batch
> |query('select mean(latency_avg) FROM "vdc"."default".latency')
> .groupBy('source','destination')
> .period(2w)
> .every(1m)
>  // Use the last operation to extract the single mean point from the
> result
>  |last('mean')
> .as('mean')
>  |log()
>  .prefix('LATENCY_AVG:LONG')
>
> last_2weeks
> |join(last_minute)
> .as('last_2weeks','last_minute')
> .tolerance(60s)
> |log()
> .prefix('LATENCY_AVG:JOINED')
> |eval(lambda: "last_minute.mean" / "last_2weeks.mean")
> .as('ratio')
> |log()
> .prefix('LATENCY_AVG:END')
> |alert().crit(lambda: "ratio" > 1.0)
> .log('/tmp/latency.log')
>
>
>
> On Tuesday, October 25, 2016 at 5:29:57 AM UTC-6, Peter Farmer wrote:
>>
>> Hi,
>>
>> Been using influxdb for quite a while now, and have recently started
>> using kapacitor to analyze data and generate alerts. All my simple alerts
>> work perfectly, but I'm trying to do something slightly more complicated.
>> I'm attempting to compare the average data from the last 60 seconds with the
>> average data from the last 14 days, and then generate an alert if the last
>> 60 seconds is significantly greater than the last 14 days. Having looked at
>> previous discussions on this subject I created the following TICK script:
>>
>> var last_minute = batch
>> |query('select mean(latency_avg) FROM "vdc"."default".latency')
>> .groupBy('source','destination')
>> .period(1m)
>> .every(1m)
>> |log()
>> .prefix('LATENCY_AVG:SHORT')
>>
>> var last_2weeks = batch
>> |query('select mean(latency_avg) FROM "vdc"."default".latency')
>> .groupBy('source','destination')
>> .period(2w)
>> .every(1m)
>> |log()
>> .prefix('LATENCY_AVG:LONG')
>>
>> last_2weeks
>> |join(last_minute)
>> .as('last_2weeks','last_minute')
>> .tolerance(60s)
>> |log()
>> .prefix('LATENCY_AVG:JOINED')
>> |eval(lambda: "last_minute.mean" / "last_2weeks.mean")
>> .as('ratio')
>> |log()
>> .prefix('LATENCY_AVG:END')
>> |alert().crit(lambda: "ratio" > 1.0)
>> .log('/tmp/latency.log')
>>
>>
>> The vars are initially generated correctly:
>>
>>
>> [latency_avg:log2] 2016/10/25 11:16:20 I! LATENCY_AVG:SHORT {"name":"latency","tmax":"2016-10-25T11:16:20.18402423Z","group":"destination=zrh-jos-eu-col-1,source=sto-002-eu-col-1","tags":{"destination":"zrh-jos-eu-col-1","source":"sto-002-eu-col-1"},"points":[{"time":"2016-10-25T11:15:20.18402423Z","fields":{"mean":37.7},"tags":{"destination":"zrh-jos-eu-col-1","source":"sto-002-eu-col-1"}}]}
>> [latency_avg:log4] 2016/10/25 11:16:25 I! LATENCY_AVG:LONG {"name":"latency","tmax":"2016-10-25T11:16:20.184029496Z","group":"destination=zrh-jos-eu-col-1,source=sto-002-eu-col-1","tags":{"destination":"zrh-jos-eu-col-1","source":"sto-002-eu-col-1"},"points":[{"time":"2016-10-11T11:16:20.184029496Z","fields":{"mean":37.731245818821975},"tags":{"destination":"zrh-jos-eu-col-1","source":"sto-002-eu-col-1"}}]}

Re: [influxdb] Re: Inconsistency: kapacitor requires retention policy to be set when writing but influxdb does not

2016-10-25 Thread Sean Beckett
Forest, that's an interesting find. Please open an issue on the Kapacitor repo 
so the developers can take a look.

On Tue, Oct 25, 2016 at 12:09 PM,  wrote:

> Actually, I have discovered a workaround. When creating the task, I can
> use the following JSON:
>
> "dbrps": [
>   {
> "db": "my_db",
> "rp": ""
>   }
> ],
>
> instead of
>
> "dbrps": [
>   {
> "db": "my_db",
> "rp": "autogen"
>   }
> ],
>
> which I was using before.
>
> But I still think this inconsistency should be fixed.
>
> On Tuesday, October 25, 2016 at 12:51:48 PM UTC-5,
> fjoh...@peoplenetonline.com wrote:
> > We are using the TICK stack with the InfluxDb relay according to the
> diagram at https://github.com/influxdata/influxdb-relay#description
> >
> > We decided the cleanest way to configure kapacitor would be to have the
> relays forward data to it directly, rather than having kapacitor subscribe
> to one of the replicas or subscribe to a load balancer representing the
> replicas.
> >
> > We were able to configure that, but we hit a snag. When data is ingested
> by influxdb, if no retention policy is specified, influxdb will
> automatically put it in the "autogen" retention policy. However, it appears
> that when data is forwarded to Kapacitor, it won't show up in a StreamNode
> unless it has a retention policy specified.  This really threw me for a
> loop and I had no idea what was going on until I stumbled across the
> following thread on this mailing list:
> >
> > https://groups.google.com/forum/#!searchin/influxdb/kapacitor$20write$20|sort:relevance/influxdb/mnomTKVUK98/fYnMoP3sCgAJ
> >
> > In my opinion either InfluxDb should require a retention policy to be
> specified like Kapacitor does, or Kapacitor should automatically shovel
> metrics with null retention policies into one called autogen just like
> InfluxDb.
> >
> > For now we will probably have to configure Kapacitor to listen to one of
> the InfluxDb replicas since we can't update every single one of our
> Telegraf instances just to get around this inconsistency.
> >
> > Forest
>
>



-- 
Sean Beckett
Director of Support and Professional Services
InfluxDB



[influxdb] Re: InfluxDB Related Blog Content - Tell Us What You Want to Read About!

2016-10-25 Thread ckirkos
On Friday, August 7, 2015 at 2:43:22 PM UTC-4, Paul Dix wrote:
> Hello,
> 
> We are ramping up to start publishing a lot more technical content on the 
> InfluxDB blog. To make sure we are writing on topics that are relevant and 
> interesting to this group, I am starting this thread so that you can reply 
> with the topics you'd like us to write about. The more specific and technical 
> the better!
> 
> Obviously, blogs aren't substitutes for good documentation, but hopefully the 
> blogs can dive into greater detail on some of InfluxDB's hidden or advanced 
> capabilities, as well as, how it can interact with related technologies like 
> Go, Grafana, Docker, OpenStack etc.
> 
> Let us know what you'd like to read about!
> 
> 
> 
> Thanks,
> Paul

I would love to see more use cases and data structures for modeling diverse 
scenarios. For example, I have a hard time figuring out whether to create new 
measurements vs. just adding more fields to existing measurements. Also, how 
do I log events or errors, i.e., items without intrinsic values?

-Chris
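
(One common pattern for valueless events, sketched here with made-up names: 
write each occurrence as a point whose payload is a string field, plus a 
constant integer field so counting stays cheap. In line protocol:)

events,level=error,service=api message="connection refused",count=1i

(SELECT COUNT("count") FROM events ... GROUP BY time(1h) then gives events per 
interval.)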



[influxdb] Re: Inconsistency: kapacitor requires retention policy to be set when writing but influxdb does not

2016-10-25 Thread fjohnson
Actually, I have discovered a workaround. When creating the task, I can use the 
following JSON:

"dbrps": [
  {
"db": "my_db",
"rp": ""
  }
],

instead of 

"dbrps": [
  {
"db": "my_db",
"rp": "autogen"
  }
],

which I was using before. 

But I still think this inconsistency should be fixed. 
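
(For context, a minimal sketch of how that looks against Kapacitor's task API; 
the task id and script here are made up:)

curl -XPOST 'http://localhost:9092/kapacitor/v1/tasks' -d '{
  "id": "my_task",
  "type": "stream",
  "dbrps": [{"db": "my_db", "rp": ""}],
  "script": "stream|from().measurement(\"cpu\")"
}'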

On Tuesday, October 25, 2016 at 12:51:48 PM UTC-5, fjoh...@peoplenetonline.com 
wrote:
> We are using the TICK stack with the InfluxDb relay according to the diagram 
> at https://github.com/influxdata/influxdb-relay#description
> 
> We decided the cleanest way to configure kapacitor would be to have the 
> relays forward data to it directly, rather than having kapacitor subscribe to 
> one of the replicas or subscribe to a load balancer representing the 
> replicas. 
> 
> We were able to configure that, but we hit a snag. When data is ingested by 
> influxdb, if no retention policy is specified, influxdb will automatically 
> put it in the "autogen" retention policy. However, it appears that when data 
> is forwarded to Kapacitor, it won't show up in a StreamNode unless it has a 
> retention policy specified.  This really threw me for a loop and I had no 
> idea what was going on until I stumbled across the following thread on this 
> mailing list:
> 
> https://groups.google.com/forum/#!searchin/influxdb/kapacitor$20write$20|sort:relevance/influxdb/mnomTKVUK98/fYnMoP3sCgAJ
> 
> In my opinion either InfluxDb should require a retention policy to be 
> specified like Kapacitor does, or Kapacitor should automatically shovel 
> metrics with null retention policies into one called autogen just like 
> InfluxDb. 
> 
> For now we will probably have to configure Kapacitor to listen to one of the 
> InfluxDb replicas since we can't update every single one of our Telegraf 
> instances just to get around this inconsistency. 
> 
> Forest



[influxdb] Inconsistency: kapacitor requires retention policy to be set when writing but influxdb does not

2016-10-25 Thread fjohnson
We are using the TICK stack with the InfluxDb relay according to the diagram at 
https://github.com/influxdata/influxdb-relay#description

We decided the cleanest way to configure kapacitor would be to have the relays 
forward data to it directly, rather than having kapacitor subscribe to one of 
the replicas or subscribe to a load balancer representing the replicas. 

We were able to configure that, but we hit a snag. When data is ingested by 
influxdb, if no retention policy is specified, influxdb will automatically put 
it in the "autogen" retention policy. However, it appears that when data is 
forwarded to Kapacitor, it won't show up in a StreamNode unless it has a 
retention policy specified.  This really threw me for a loop and I had no idea 
what was going on until I stumbled across the following thread on this mailing 
list:

https://groups.google.com/forum/#!searchin/influxdb/kapacitor$20write$20|sort:relevance/influxdb/mnomTKVUK98/fYnMoP3sCgAJ

In my opinion either InfluxDb should require a retention policy to be specified 
like Kapacitor does, or Kapacitor should automatically shovel metrics with null 
retention policies into one called autogen just like InfluxDb. 

For now we will probably have to configure Kapacitor to listen to one of the 
InfluxDb replicas since we can't update every single one of our Telegraf 
instances just to get around this inconsistency. 

Forest 



Re: [influxdb] InfluxDB import

2016-10-25 Thread Sean Beckett
https://github.com/influxdata/influxdb/issues/7516
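
(A hedged guess while that issue is open: with the CLI's default nanosecond 
precision, timestamps like 14640263350 land in early January 1970, which would 
put the db1 points outside rp1's one-hour retention window. If the file's 
timestamps are meant to be seconds, importing with an explicit precision may 
help:)

influx -import -path=first.txt -precision=s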

On Tue, Oct 25, 2016 at 9:32 AM, manish jain  wrote:

> Thanks Sean. I will check it and will get back to you.
> Much appreciated.
>
> Best regards,
> Manish jain
> +917738684730
>
> On 25-Oct-2016 8:59 PM, "Sean Beckett"  wrote:
>
>> I'm unable to import your trivial file, as well. I'm not sure why. I
>> verified this import will work, though.
>>
>> On Tue, Oct 25, 2016 at 9:03 AM, manish jain  wrote:
>>
>>> Hello Sean,
>>> Would it be possible for you to share a data file that can be used? I
>>> just need to demonstrate the InfluxDB import functionality to my team,
>>> hence it's not data specific.
>>>
>>> So if you have a valid data/import file that I can use to check the
>>> functionality, please share.
>>>
>>> Best regards,
>>> Manish jain
>>> +917738684730
>>>
>>> On 25-Oct-2016 8:30 PM, "Sean Beckett"  wrote:
>>>
 The data so far don't make sense. The writes appear to be successful
 but there doesn't appear to be anything in the database.

 Can you share the InfluxDB logs from when the import was processed? The
 previously shared log lines in your first message have nothing to do with
 writes or importing and are not indicative of anything.

 On Tue, Oct 25, 2016 at 4:19 AM, manish jain 
 wrote:

> Hello Sean,
> Please find below results -
>
> SELECT * FROM db0..cpu -
>
> [image: Inline image 3]
>
> SELECt * FROM db1.rp1.cpu
> [image: Inline image 2]
>
>
> Thanks and Regards,
> Manish Jain
> +917738684730
>
> On Mon, Oct 24, 2016 at 9:30 PM, Sean Beckett 
> wrote:
>
>> What is returned by
>>
>> SELECT * FROM db0..cpu
>> SELECt * FROM db1.rp1.cpu
>>
>> On Thu, Oct 20, 2016 at 12:48 AM, manish jain 
>> wrote:
>>
>>> Hello Sean,
>>> Please find inline outputs :-
>>>
>>> *SHOW RETENTION POLICIES ON db0*
>>>
>>> [image: Inline image 1]
>>>
>>> *SHOW RETENTION POLICIES ON db1*
>>>
>>> [image: Inline image 2]
>>>
>>>
>>> Thanks and Regards,
>>> Manish Jain
>>> +917738684730
>>>
>>> On Thu, Oct 20, 2016 at 12:01 AM, Sean Beckett 
>>> wrote:
>>>
 What do the following return?

 SHOW RETENTION POLICIES ON db0
 SHOW RETENTION POLICIES ON db1

 On Wed, Oct 19, 2016 at 2:49 AM, manish jain 
 wrote:

> Hello Sean,
> This is the import file I am using -
> ---Cut--
> ---
>
> # DDL
> CREATE DATABASE db0
> CREATE DATABASE db1
> CREATE RETENTION POLICY rp1 ON db1 DURATION 1h REPLICATION 1
>
> # DML
> # CONTEXT-DATABASE:db0
> # CONTEXT-RETENTION-POLICY:autogen
> cpu,host=server1 value=33.3 14640263350
> cpu,host=server1 value=43.3 14640263950
> cpu,host=server1 value=63.3 14640265750
>
> # CONTEXT-DATABASE:db1
> # CONTEXT-RETENTION-POLICY:rp1
> cpu,host=server1 value=73.3 14640263350
> cpu,host=server1 value=83.3 14640263950
> cpu,host=server1 value=93.3 14640265750
>
> ---Cut--
> ---
> After running this file like this :- [root@integration@DEV
> influxdb]influx -import -path=first.txt
> I get following output - which shows success-
>
> 2016/10/14 09:22:45 Processed 3 commands
> 2016/10/14 09:22:45 Processed 6 inserts
> 2016/10/14 09:22:45 Failed 0 inserts
>
> Attached Screenshot of my query to the database.
>
> Log file -
>
> [retention] 2016/10/19 03:28:26 retention policy shard deletion
> check commencing
> [retention] 2016/10/19 03:58:26 retention policy shard deletion
> check commencing
> [retention] 2016/10/19 04:28:26 retention policy shard deletion
> check commencing
> [retention] 2016/10/19 04:58:26 retention policy shard deletion
> check commencing
> [retention] 2016/10/19 05:28:26 retention policy shard deletion
> check commencing
> [retention] 2016/10/19 05:58:26 retention policy shard deletion
> check commencing
> [retention] 2016/10/19 06:28:26 retention policy shard deletion
> check commencing
> [retention] 2016/10/19 06:58:26 retention policy shard deletion
> check commencing
> [retention] 2016/10/19 07:28:26 retention policy shard deletion
> check commencing
> [retention] 2016/10/19 07:58:26 retention policy shard deletion
> check commencing
> [retention] 2016/10/19 08:28:26 retention policy shard deletion
> check commencing

Re: [influxdb] InfluxDB import

2016-10-25 Thread manish jain
Thanks Sean. I will check it and will get back to you.
Much appreciated.

Best regards,
Manish jain
+917738684730

On 25-Oct-2016 8:59 PM, "Sean Beckett"  wrote:

> I'm unable to import your trivial file, as well. I'm not sure why. I
> verified this import will work, though.
>
> On Tue, Oct 25, 2016 at 9:03 AM, manish jain  wrote:
>
>> Hello Sean,
>> Would it be possible for you to share a data file that can be used? I
>> just need to demonstrate the InfluxDB import functionality to my team,
>> hence it's not data specific.
>>
>> So if you have a valid data/import file that I can use to check the
>> functionality, please share.
>>
>> Best regards,
>> Manish jain
>> +917738684730
>>
>> On 25-Oct-2016 8:30 PM, "Sean Beckett"  wrote:
>>
>>> The data so far don't make sense. The writes appear to be successful but
>>> there doesn't appear to be anything in the database.
>>>
>>> Can you share the InfluxDB logs from when the import was processed? The
>>> previously shared log lines in your first message have nothing to do with
>>> writes or importing and are not indicative of anything.
>>>
>>> On Tue, Oct 25, 2016 at 4:19 AM, manish jain 
>>> wrote:
>>>
 Hello Sean,
 Please find below results -

 SELECT * FROM db0..cpu -

 [image: Inline image 3]

 SELECt * FROM db1.rp1.cpu
 [image: Inline image 2]


 Thanks and Regards,
 Manish Jain
 +917738684730

 On Mon, Oct 24, 2016 at 9:30 PM, Sean Beckett 
 wrote:

> What is returned by
>
> SELECT * FROM db0..cpu
> SELECt * FROM db1.rp1.cpu
>
> On Thu, Oct 20, 2016 at 12:48 AM, manish jain 
> wrote:
>
>> Hello Sean,
>> Please find inline outputs :-
>>
>> *SHOW RETENTION POLICIES ON db0*
>>
>> [image: Inline image 1]
>>
>> *SHOW RETENTION POLICIES ON db1*
>>
>> [image: Inline image 2]
>>
>>
>> Thanks and Regards,
>> Manish Jain
>> +917738684730
>>
>> On Thu, Oct 20, 2016 at 12:01 AM, Sean Beckett 
>> wrote:
>>
>>> What do the following return?
>>>
>>> SHOW RETENTION POLICIES ON db0
>>> SHOW RETENTION POLICIES ON db1
>>>
>>> On Wed, Oct 19, 2016 at 2:49 AM, manish jain 
>>> wrote:
>>>
 Hello Sean,
 This is the import file I am using -
 ---Cut--
 ---

 # DDL
 CREATE DATABASE db0
 CREATE DATABASE db1
 CREATE RETENTION POLICY rp1 ON db1 DURATION 1h REPLICATION 1

 # DML
 # CONTEXT-DATABASE:db0
 # CONTEXT-RETENTION-POLICY:autogen
 cpu,host=server1 value=33.3 14640263350
 cpu,host=server1 value=43.3 14640263950
 cpu,host=server1 value=63.3 14640265750

 # CONTEXT-DATABASE:db1
 # CONTEXT-RETENTION-POLICY:rp1
 cpu,host=server1 value=73.3 14640263350
 cpu,host=server1 value=83.3 14640263950
 cpu,host=server1 value=93.3 14640265750

 ---Cut--
 ---
 After running this file like this :- [root@integration@DEV
 influxdb]influx -import -path=first.txt
 I get following output - which shows success-

 2016/10/14 09:22:45 Processed 3 commands
 2016/10/14 09:22:45 Processed 6 inserts
 2016/10/14 09:22:45 Failed 0 inserts

 Attached Screenshot of my query to the database.

 Log file -

 [retention] 2016/10/19 03:28:26 retention policy shard deletion
 check commencing
 [retention] 2016/10/19 03:58:26 retention policy shard deletion
 check commencing
 [retention] 2016/10/19 04:28:26 retention policy shard deletion
 check commencing
 [retention] 2016/10/19 04:58:26 retention policy shard deletion
 check commencing
 [retention] 2016/10/19 05:28:26 retention policy shard deletion
 check commencing
 [retention] 2016/10/19 05:58:26 retention policy shard deletion
 check commencing
 [retention] 2016/10/19 06:28:26 retention policy shard deletion
 check commencing
 [retention] 2016/10/19 06:58:26 retention policy shard deletion
 check commencing
 [retention] 2016/10/19 07:28:26 retention policy shard deletion
 check commencing
 [retention] 2016/10/19 07:58:26 retention policy shard deletion
 check commencing
 [retention] 2016/10/19 08:28:26 retention policy shard deletion
 check commencing



 Thanks and Regards,
 Manish Jain
 +917738684730

 On Fri, Oct 14, 2016 at 11:30 PM, Sean Beckett 
 wrote:

> Please share the DDL from the top of the import file.

Re: [influxdb] InfluxDB import

2016-10-25 Thread Sean Beckett
I'm unable to import your trivial file, as well. I'm not sure why. I
verified this import will work, though.

On Tue, Oct 25, 2016 at 9:03 AM, manish jain  wrote:

> Hello Sean,
> Would it be possible for you to share a data file that can be used? I just
> need to demonstrate the InfluxDB import functionality to my team, hence
> it's not data specific.
>
> So if you have a valid data/import file that I can use to check the
> functionality, please share.
>
> Best regards,
> Manish jain
> +917738684730
>
> On 25-Oct-2016 8:30 PM, "Sean Beckett"  wrote:
>
>> The data so far don't make sense. The writes appear to be successful but
>> there doesn't appear to be anything in the database.
>>
>> Can you share the InfluxDB logs from when the import was processed? The
>> previously shared log lines in your first message have nothing to do with
>> writes or importing and are not indicative of anything.
>>
>> On Tue, Oct 25, 2016 at 4:19 AM, manish jain  wrote:
>>
>>> Hello Sean,
>>> Please find below results -
>>>
>>> SELECT * FROM db0..cpu -
>>>
>>> [image: Inline image 3]
>>>
>>> SELECt * FROM db1.rp1.cpu
>>> [image: Inline image 2]
>>>
>>>
>>> Thanks and Regards,
>>> Manish Jain
>>> +917738684730
>>>
>>> On Mon, Oct 24, 2016 at 9:30 PM, Sean Beckett  wrote:
>>>
 What is returned by

 SELECT * FROM db0..cpu
 SELECt * FROM db1.rp1.cpu

 On Thu, Oct 20, 2016 at 12:48 AM, manish jain 
 wrote:

> Hello Sean,
> Please find inline outputs :-
>
> *SHOW RETENTION POLICIES ON db0*
>
> [image: Inline image 1]
>
> *SHOW RETENTION POLICIES ON db1*
>
> [image: Inline image 2]
>
>
> Thanks and Regards,
> Manish Jain
> +917738684730
>
> On Thu, Oct 20, 2016 at 12:01 AM, Sean Beckett 
> wrote:
>
>> What do the following return?
>>
>> SHOW RETENTION POLICIES ON db0
>> SHOW RETENTION POLICIES ON db1
>>
>> On Wed, Oct 19, 2016 at 2:49 AM, manish jain 
>> wrote:
>>
>>> Hello Sean,
>>> This is the import file I am using -
>>> ---Cut--
>>> ---
>>>
>>> # DDL
>>> CREATE DATABASE db0
>>> CREATE DATABASE db1
>>> CREATE RETENTION POLICY rp1 ON db1 DURATION 1h REPLICATION 1
>>>
>>> # DML
>>> # CONTEXT-DATABASE:db0
>>> # CONTEXT-RETENTION-POLICY:autogen
>>> cpu,host=server1 value=33.3 14640263350
>>> cpu,host=server1 value=43.3 14640263950
>>> cpu,host=server1 value=63.3 14640265750
>>>
>>> # CONTEXT-DATABASE:db1
>>> # CONTEXT-RETENTION-POLICY:rp1
>>> cpu,host=server1 value=73.3 14640263350
>>> cpu,host=server1 value=83.3 14640263950
>>> cpu,host=server1 value=93.3 14640265750
>>>
>>> ---Cut--
>>> ---
>>> After running this file like this :- [root@integration@DEV
>>> influxdb]influx -import -path=first.txt
>>> I get following output - which shows success-
>>>
>>> 2016/10/14 09:22:45 Processed 3 commands
>>> 2016/10/14 09:22:45 Processed 6 inserts
>>> 2016/10/14 09:22:45 Failed 0 inserts
>>>
>>> Attached Screenshot of my query to the database.
>>>
>>> Log file -
>>>
>>> [retention] 2016/10/19 03:28:26 retention policy shard deletion
>>> check commencing
>>> [retention] 2016/10/19 03:58:26 retention policy shard deletion
>>> check commencing
>>> [retention] 2016/10/19 04:28:26 retention policy shard deletion
>>> check commencing
>>> [retention] 2016/10/19 04:58:26 retention policy shard deletion
>>> check commencing
>>> [retention] 2016/10/19 05:28:26 retention policy shard deletion
>>> check commencing
>>> [retention] 2016/10/19 05:58:26 retention policy shard deletion
>>> check commencing
>>> [retention] 2016/10/19 06:28:26 retention policy shard deletion
>>> check commencing
>>> [retention] 2016/10/19 06:58:26 retention policy shard deletion
>>> check commencing
>>> [retention] 2016/10/19 07:28:26 retention policy shard deletion
>>> check commencing
>>> [retention] 2016/10/19 07:58:26 retention policy shard deletion
>>> check commencing
>>> [retention] 2016/10/19 08:28:26 retention policy shard deletion
>>> check commencing
>>>
>>>
>>>
>>> Thanks and Regards,
>>> Manish Jain
>>> +917738684730
>>>
>>> On Fri, Oct 14, 2016 at 11:30 PM, Sean Beckett 
>>> wrote:
>>>
 Please share the DDL from the top of the import file.

 Did you look in the logs? Are you sure you are querying the right
 database?

 On Fri, Oct 14, 2016 at 3:45 AM,  wrote:

> How can I see the data imported in the database.
> I reached till this step -
>
> [root@integration@DEV influxdb]influx -i

[influxdb] Re: kapacitor: delay alarms

2016-10-25 Thread nathaniel
To make sure I understand correctly: you still want to detect the alert 
changes (so flap detection is not what you want), but you want to delay/throttle 
the emails you send? Is that correct?

Currently this is not possible, but a new alerting system is on the roadmap 
that should enable this kind of behavior; see 
https://github.com/influxdata/kapacitor/pull/884 for details on our plans. 

On Tuesday, October 25, 2016 at 8:04:21 AM UTC-6, Julien Ammous wrote:
>
> Hi,
> is there a way to avoid getting emails for alerts which get resolved 
> quickly?
> What I mean is that if an alarm is raised and then stopped less than 5 
> minutes later, I don't want any email to get sent (but I still want them 
> logged in the database). I currently can't find any nice way to do that :(
>
> The other solution I am investigating is sending a POST request to another 
> process of mine and letting it handle that logic, but I would hate to do that 
> since it means duplicating some of what kapacitor is already doing.
>
> Is there any way to achieve this ?
>
> PS: obviously this means that any alert raised will not send any email 
> before 5 min, and only if it was not resolved after that delay.
>



[influxdb] Re: Kapacitor joins have no fields in

2016-10-25 Thread nathaniel
Peter,

Your script is nearly correct; there is just a small, subtle issue causing it 
to not quite work. Specifically, the output of a query node is a batch edge, 
meaning that the data is batched into sets. In your case that 
batch of data contains one point: the mean value for the specified time 
range. If you look closely at the two log lines before the join you will 
see that each line has two times: tmax represents the max time for the 
entire batch, and the other is the time of the point itself. Since join joins 
on time, for batches this means the tmax values must match and the points' 
times must also match. In your case the tmax values match, as they are within 
the tolerance of 60s, but the points' times are 2w apart and as a result no 
longer match. The resulting data from the join node is a batch without any 
points, and hence no fields. 

So after that, what is the solution? Simply add a `last` operation to the 
query nodes so that you select only the last point from each batch, in your 
case the only point. This transforms the batch into a single "stream" point 
with the time of tmax. Then when the points arrive at the join operation 
they will properly match and the rest of your script should work.

TL;DR do this:

var last_minute = batch
|query('select mean(latency_avg) FROM "vdc"."default".latency')
.groupBy('source','destination')
.period(1m)
.every(1m)
 // Use the last operation to extract the single mean point from the 
result
 |last('mean')
.as('mean')
 |log()
 .prefix('LATENCY_AVG:SHORT')

var last_2weeks = batch
|query('select mean(latency_avg) FROM "vdc"."default".latency')
.groupBy('source','destination')
.period(2w)
.every(1m)
 // Use the last operation to extract the single mean point from the 
result
 |last('mean')
.as('mean')
 |log()
 .prefix('LATENCY_AVG:LONG')

last_2weeks
|join(last_minute)
.as('last_2weeks','last_minute')
.tolerance(60s)
|log()
.prefix('LATENCY_AVG:JOINED')
|eval(lambda: "last_minute.mean" / "last_2weeks.mean")
.as('ratio')
|log()
.prefix('LATENCY_AVG:END')
|alert().crit(lambda: "ratio" > 1.0)
.log('/tmp/latency.log')



On Tuesday, October 25, 2016 at 5:29:57 AM UTC-6, Peter Farmer wrote:
>
> Hi,
>
> Been using influxdb for quite a while now, and have recently started using 
> kapacitor to analyze data and generate alerts. All my simple alerts work 
> perfectly, but I'm trying to do something slightly more complicated. I'm 
> attempting to compare the average data from the last 60 seconds with the 
> average data from the last 14 days, and then generate an alert if the last 
> 60 seconds is significantly greater than the last 14 days. Having looked at 
> previous discussions on this subject I created the following TICK script:
>
> var last_minute = batch
> |query('select mean(latency_avg) FROM "vdc"."default".latency')
> .groupBy('source','destination')
> .period(1m)
> .every(1m)
> |log()
> .prefix('LATENCY_AVG:SHORT')
>
> var last_2weeks = batch
> |query('select mean(latency_avg) FROM "vdc"."default".latency')
> .groupBy('source','destination')
> .period(2w)
> .every(1m)
> |log()
> .prefix('LATENCY_AVG:LONG')
>
> last_2weeks
> |join(last_minute)
> .as('last_2weeks','last_minute')
> .tolerance(60s)
> |log()
> .prefix('LATENCY_AVG:JOINED')
> |eval(lambda: "last_minute.mean" / "last_2weeks.mean")
> .as('ratio')
> |log()
> .prefix('LATENCY_AVG:END')
> |alert().crit(lambda: "ratio" > 1.0)
> .log('/tmp/latency.log')
>
>
> The vars are initially generated correctly:
>
>
> [latency_avg:log2] 2016/10/25 11:16:20 I! LATENCY_AVG:SHORT 
> {"name":"latency","tmax":"2016-10-25T11:16:20.18402423Z","group":"destination=zrh-jos-eu-col-1,source=sto-002-eu-col-1","tags":{"destination":"zrh-jos-eu-col-1","source":"sto-002-eu-col-1"},"points":[{"time":"2016-10-25T11:15:20.18402423Z","fields":{"mean":37.7},"tags":{"destination":"zrh-jos-eu-col-1","source":"sto-002-eu-col-1"}}]}
> [latency_avg:log4] 2016/10/25 11:16:25 I! LATENCY_AVG:LONG 
> {"name":"latency","tmax":"2016-10-25T11:16:20.184029496Z","group":"destination=zrh-jos-eu-col-1,source=sto-002-eu-col-1","tags":{"destination":"zrh-jos-eu-col-1","source":"sto-002-eu-col-1"},"points":[{"time":"2016-10-11T11:16:20.184029496Z","fields":{"mean":37.731245818821975},"tags":{"destination":"zrh-jos-eu-col-1","source":"sto-002-eu-col-1"}}]}
>
>
> But once the join happens, there are no fields in the data:
>
>
> [latency_avg:log7] 2016/10/25 11:16:25 I! LATENCY_AVG:JOINED 
> {"name":"latency","tmax":"2016-10-25T11:16:00Z","group":"destination=zrh-jos-eu-col-1,source=sto-002-eu-col-1","tags":{"destination":"zrh-jos-eu-col-1","source":"sto-002-eu-col-1"}}
>

Re: [influxdb] InfluxDB import

2016-10-25 Thread manish jain
Hello Sean,
Would it be possible for you to share a data file that can be used? I just
need to demonstrate the InfluxDB import functionality to my team, hence
it's not data specific.

So if you have a valid data/import file that I can use to check the
functionality, please share.

Best regards,
Manish jain
+917738684730

On 25-Oct-2016 8:30 PM, "Sean Beckett"  wrote:

> The data so far don't make sense. The writes appear to be successful but
> there doesn't appear to be anything in the database.
>
> Can you share the InfluxDB logs from when the import was processed? The
> previously shared log lines in your first message have nothing to do with
> writes or importing and are not indicative of anything.
>
> On Tue, Oct 25, 2016 at 4:19 AM, manish jain  wrote:
>
>> Hello Sean,
>> Please find below results -
>>
>> SELECT * FROM db0..cpu -
>>
>> [image: Inline image 3]
>>
>> SELECt * FROM db1.rp1.cpu
>> [image: Inline image 2]
>>
>>
>> Thanks and Regards,
>> Manish Jain
>> +917738684730
>>
>> On Mon, Oct 24, 2016 at 9:30 PM, Sean Beckett  wrote:
>>
>>> What is returned by
>>>
>>> SELECT * FROM db0..cpu
>>> SELECt * FROM db1.rp1.cpu
>>>
>>> On Thu, Oct 20, 2016 at 12:48 AM, manish jain 
>>> wrote:
>>>
 Hello Sean,
 Please find inline outputs :-

 *SHOW RETENTION POLICIES ON db0*

 [image: Inline image 1]

 *SHOW RETENTION POLICIES ON db1*

 [image: Inline image 2]


 Thanks and Regards,
 Manish Jain
 +917738684730

 On Thu, Oct 20, 2016 at 12:01 AM, Sean Beckett 
 wrote:

> What do the following return?
>
> SHOW RETENTION POLICIES ON db0
> SHOW RETENTION POLICIES ON db1
>
> On Wed, Oct 19, 2016 at 2:49 AM, manish jain 
> wrote:
>
>> Hello Sean,
>> This is the import file I am using -
>> ---Cut--
>> ---
>>
>> # DDL
>> CREATE DATABASE db0
>> CREATE DATABASE db1
>> CREATE RETENTION POLICY rp1 ON db1 DURATION 1h REPLICATION 1
>>
>> # DML
>> # CONTEXT-DATABASE:db0
>> # CONTEXT-RETENTION-POLICY:autogen
>> cpu,host=server1 value=33.3 14640263350
>> cpu,host=server1 value=43.3 14640263950
>> cpu,host=server1 value=63.3 14640265750
>>
>> # CONTEXT-DATABASE:db1
>> # CONTEXT-RETENTION-POLICY:rp1
>> cpu,host=server1 value=73.3 14640263350
>> cpu,host=server1 value=83.3 14640263950
>> cpu,host=server1 value=93.3 14640265750
>>
>> ---Cut--
>> ---
>> After running this file like this :- [root@integration@DEV
>> influxdb]influx -import -path=first.txt
>> I get following output - which shows success-
>>
>> 2016/10/14 09:22:45 Processed 3 commands
>> 2016/10/14 09:22:45 Processed 6 inserts
>> 2016/10/14 09:22:45 Failed 0 inserts
>>
>> Attached Screenshot of my query to the database.
>>
>> Log file -
>>
>> [retention] 2016/10/19 03:28:26 retention policy shard deletion check
>> commencing
>> [retention] 2016/10/19 03:58:26 retention policy shard deletion check
>> commencing
>> [retention] 2016/10/19 04:28:26 retention policy shard deletion check
>> commencing
>> [retention] 2016/10/19 04:58:26 retention policy shard deletion check
>> commencing
>> [retention] 2016/10/19 05:28:26 retention policy shard deletion check
>> commencing
>> [retention] 2016/10/19 05:58:26 retention policy shard deletion check
>> commencing
>> [retention] 2016/10/19 06:28:26 retention policy shard deletion check
>> commencing
>> [retention] 2016/10/19 06:58:26 retention policy shard deletion check
>> commencing
>> [retention] 2016/10/19 07:28:26 retention policy shard deletion check
>> commencing
>> [retention] 2016/10/19 07:58:26 retention policy shard deletion check
>> commencing
>> [retention] 2016/10/19 08:28:26 retention policy shard deletion check
>> commencing
>>
>>
>>
>> Thanks and Regards,
>> Manish Jain
>> +917738684730
>>
>> On Fri, Oct 14, 2016 at 11:30 PM, Sean Beckett 
>> wrote:
>>
>>> Please share the DDL from the top of the import file.
>>>
>>> Did you look in the logs? Are you sure you are querying the right
>>> database?
>>>
>>> On Fri, Oct 14, 2016 at 3:45 AM,  wrote:
>>>
 How can I see the data imported in the database.
 I reached till this step -

 [root@integration@DEV influxdb]influx -import -path=first.txt
 2016/10/14 09:22:45 Processed 3 commands
 2016/10/14 09:22:45 Processed 6 inserts
 2016/10/14 09:22:45 Failed 0 inserts

 Please help: how can I see the measurements in the database?
 'Show measurements' shows nothing.


Re: [influxdb] InfluxDB import

2016-10-25 Thread Sean Beckett
The data so far don't make sense. The writes appear to be successful but
there doesn't appear to be anything in the database.

Can you share the InfluxDB logs from when the import was processed? The
previously shared log lines in your first message have nothing to do with
writes or importing and are not indicative of anything.

On Tue, Oct 25, 2016 at 4:19 AM, manish jain  wrote:

> Hello Sean,
> Please find below results -
>
> SELECT * FROM db0..cpu -
>
> [image: Inline image 3]
>
> SELECt * FROM db1.rp1.cpu
> [image: Inline image 2]
>
>
> Thanks and Regards,
> Manish Jain
> +917738684730
>
> On Mon, Oct 24, 2016 at 9:30 PM, Sean Beckett  wrote:
>
>> What is returned by
>>
>> SELECT * FROM db0..cpu
>> SELECt * FROM db1.rp1.cpu
>>
>> On Thu, Oct 20, 2016 at 12:48 AM, manish jain 
>> wrote:
>>
>>> Hello Sean,
>>> Please find inline outputs :-
>>>
>>> *SHOW RETENTION POLICIES ON db0*
>>>
>>> [image: Inline image 1]
>>>
>>> *SHOW RETENTION POLICIES ON db1*
>>>
>>> [image: Inline image 2]
>>>
>>>
>>> Thanks and Regards,
>>> Manish Jain
>>> +917738684730
>>>
>>> On Thu, Oct 20, 2016 at 12:01 AM, Sean Beckett 
>>> wrote:
>>>
 What do the following return?

 SHOW RETENTION POLICIES ON db0
 SHOW RETENTION POLICIES ON db1

 On Wed, Oct 19, 2016 at 2:49 AM, manish jain 
 wrote:

> Hello Sean,
> This is the import file I am using -
> ---Cut--
> ---
>
> # DDL
> CREATE DATABASE db0
> CREATE DATABASE db1
> CREATE RETENTION POLICY rp1 ON db1 DURATION 1h REPLICATION 1
>
> # DML
> # CONTEXT-DATABASE:db0
> # CONTEXT-RETENTION-POLICY:autogen
> cpu,host=server1 value=33.3 14640263350
> cpu,host=server1 value=43.3 14640263950
> cpu,host=server1 value=63.3 14640265750
>
> # CONTEXT-DATABASE:db1
> # CONTEXT-RETENTION-POLICY:rp1
> cpu,host=server1 value=73.3 14640263350
> cpu,host=server1 value=83.3 14640263950
> cpu,host=server1 value=93.3 14640265750
>
> ---Cut--
> ---
> After running this file like this :- [root@integration@DEV
> influxdb]influx -import -path=first.txt
> I get following output - which shows success-
>
> 2016/10/14 09:22:45 Processed 3 commands
> 2016/10/14 09:22:45 Processed 6 inserts
> 2016/10/14 09:22:45 Failed 0 inserts
>
> Attached Screenshot of my query to the database.
>
> Log file -
>
> [retention] 2016/10/19 03:28:26 retention policy shard deletion check
> commencing
> [retention] 2016/10/19 03:58:26 retention policy shard deletion check
> commencing
> [retention] 2016/10/19 04:28:26 retention policy shard deletion check
> commencing
> [retention] 2016/10/19 04:58:26 retention policy shard deletion check
> commencing
> [retention] 2016/10/19 05:28:26 retention policy shard deletion check
> commencing
> [retention] 2016/10/19 05:58:26 retention policy shard deletion check
> commencing
> [retention] 2016/10/19 06:28:26 retention policy shard deletion check
> commencing
> [retention] 2016/10/19 06:58:26 retention policy shard deletion check
> commencing
> [retention] 2016/10/19 07:28:26 retention policy shard deletion check
> commencing
> [retention] 2016/10/19 07:58:26 retention policy shard deletion check
> commencing
> [retention] 2016/10/19 08:28:26 retention policy shard deletion check
> commencing
>
>
>
> Thanks and Regards,
> Manish Jain
> +917738684730
>
> On Fri, Oct 14, 2016 at 11:30 PM, Sean Beckett 
> wrote:
>
>> Please share the DDL from the top of the import file.
>>
>> Did you look in the logs? Are you sure you are querying the right
>> database?
>>
>> On Fri, Oct 14, 2016 at 3:45 AM,  wrote:
>>
>>> How can I see the data imported in the database.
>>> I reached till this step -
>>>
>>> [root@integration@DEV influxdb]influx -import -path=first.txt
>>> 2016/10/14 09:22:45 Processed 3 commands
>>> 2016/10/14 09:22:45 Processed 6 inserts
>>> 2016/10/14 09:22:45 Failed 0 inserts
>>>
>>> Please help: how can I see the measurements in the database?
>>> 'Show measurements' shows nothing.
>>>

[influxdb] kapacitor: delay alarms

2016-10-25 Thread Julien Ammous
Hi,
is there a way to avoid getting emails for alerts which get resolved quickly?
What I mean is that if an alarm is raised and then stopped less than 5 minutes 
later, I don't want any email to get sent (but I still want them logged in the 
database). I currently can't find any nice way to do that :(

The other solution I am investigating is sending a POST request to another 
process of mine and letting it handle that logic, but I would hate to do that 
since it means duplicating some of what kapacitor is already doing.

Is there any way to achieve this ?

PS: obviously this means that any alert raised will not send any email 
before 5 min, and only if it was not resolved after that delay.



[influxdb] Kapacitor joins have no fields in

2016-10-25 Thread Peter Farmer
Hi,

Been using influxdb for quite a while now, and have recently started using 
kapacitor to analyze data and generate alerts. All my simple alerts work 
perfectly, but I'm trying to do something slightly more complicated. I'm 
attempting to compare the average data from the last 60 seconds with the 
average data from the last 14 days, and then generate an alert if the last 
60 seconds is significantly greater than the last 14 days. Having looked at 
previous discussions on this subject I created the following TICK script:

var last_minute = batch
|query('select mean(latency_avg) FROM "vdc"."default".latency')
.groupBy('source','destination')
.period(1m)
.every(1m)
|log()
.prefix('LATENCY_AVG:SHORT')

var last_2weeks = batch
|query('select mean(latency_avg) FROM "vdc"."default".latency')
.groupBy('source','destination')
.period(2w)
.every(1m)
|log()
.prefix('LATENCY_AVG:LONG')

last_2weeks
|join(last_minute)
.as('last_2weeks','last_minute')
.tolerance(60s)
|log()
.prefix('LATENCY_AVG:JOINED')
|eval(lambda: "last_minute.mean" / "last_2weeks.mean")
.as('ratio')
|log()
.prefix('LATENCY_AVG:END')
|alert().crit(lambda: "ratio" > 1.0)
.log('/tmp/latency.log')


The vars are initially generated correctly:


[latency_avg:log2] 2016/10/25 11:16:20 I! LATENCY_AVG:SHORT 
{"name":"latency","tmax":"2016-10-25T11:16:20.18402423Z","group":"destination=zrh-jos-eu-col-1,source=sto-002-eu-col-1","tags":{"destination":"zrh-jos-eu-col-1","source":"sto-002-eu-col-1"},"points":[{"time":"2016-10-25T11:15:20.18402423Z","fields":{"mean":37.7},"tags":{"destination":"zrh-jos-eu-col-1","source":"sto-002-eu-col-1"}}]}
[latency_avg:log4] 2016/10/25 11:16:25 I! LATENCY_AVG:LONG 
{"name":"latency","tmax":"2016-10-25T11:16:20.184029496Z","group":"destination=zrh-jos-eu-col-1,source=sto-002-eu-col-1","tags":{"destination":"zrh-jos-eu-col-1","source":"sto-002-eu-col-1"},"points":[{"time":"2016-10-11T11:16:20.184029496Z","fields":{"mean":37.731245818821975},"tags":{"destination":"zrh-jos-eu-col-1","source":"sto-002-eu-col-1"}}]}


But once the join happens, there are no fields in the data:


[latency_avg:log7] 2016/10/25 11:16:25 I! LATENCY_AVG:JOINED 
{"name":"latency","tmax":"2016-10-25T11:16:00Z","group":"destination=zrh-jos-eu-col-1,source=sto-002-eu-col-1","tags":{"destination":"zrh-jos-eu-col-1","source":"sto-002-eu-col-1"}}

[latency_avg:log9] 2016/10/25 11:16:25 I! LATENCY_AVG:END 
{"name":"latency","tmax":"2016-10-25T11:16:00Z","group":"destination=zrh-jos-eu-col-1,source=sto-002-eu-col-1","tags":{"destination":"zrh-jos-eu-col-1","source":"sto-002-eu-col-1"}}


I'm pretty sure I'm doing something wrong here, so any pointers would be 
great.


Thanks,

Peter



Re: [influxdb] InfluxDB import

2016-10-25 Thread manish jain
Hello Sean,
Please find below results -

SELECT * FROM db0..cpu -

[image: Inline image 3]

SELECt * FROM db1.rp1.cpu
[image: Inline image 2]


Thanks and Regards,
Manish Jain
+917738684730

On Mon, Oct 24, 2016 at 9:30 PM, Sean Beckett  wrote:

> What is returned by
>
> SELECT * FROM db0..cpu
> SELECt * FROM db1.rp1.cpu
>
> On Thu, Oct 20, 2016 at 12:48 AM, manish jain  wrote:
>
>> Hello Sean,
>> Please find inline outputs :-
>>
>> *SHOW RETENTION POLICIES ON db0*
>>
>> [image: Inline image 1]
>>
>> *SHOW RETENTION POLICIES ON db1*
>>
>> [image: Inline image 2]
>>
>>
>> Thanks and Regards,
>> Manish Jain
>> +917738684730
>>
>> On Thu, Oct 20, 2016 at 12:01 AM, Sean Beckett  wrote:
>>
>>> What do the following return?
>>>
>>> SHOW RETENTION POLICIES ON db0
>>> SHOW RETENTION POLICIES ON db1
>>>
>>> On Wed, Oct 19, 2016 at 2:49 AM, manish jain 
>>> wrote:
>>>
 Hello Sean,
 This is the import file I am using -
 ---Cut--
 ---

 # DDL
 CREATE DATABASE db0
 CREATE DATABASE db1
 CREATE RETENTION POLICY rp1 ON db1 DURATION 1h REPLICATION 1

 # DML
 # CONTEXT-DATABASE:db0
 # CONTEXT-RETENTION-POLICY:autogen
 cpu,host=server1 value=33.3 14640263350
 cpu,host=server1 value=43.3 14640263950
 cpu,host=server1 value=63.3 14640265750

 # CONTEXT-DATABASE:db1
 # CONTEXT-RETENTION-POLICY:rp1
 cpu,host=server1 value=73.3 14640263350
 cpu,host=server1 value=83.3 14640263950
 cpu,host=server1 value=93.3 14640265750

 ---Cut--
 ---
 After running this file like this :- [root@integration@DEV
 influxdb]influx -import -path=first.txt
 I get following output - which shows success-

 2016/10/14 09:22:45 Processed 3 commands
 2016/10/14 09:22:45 Processed 6 inserts
 2016/10/14 09:22:45 Failed 0 inserts

 Attached Screenshot of my query to the database.

 Log file -

 [retention] 2016/10/19 03:28:26 retention policy shard deletion check
 commencing
 [retention] 2016/10/19 03:58:26 retention policy shard deletion check
 commencing
 [retention] 2016/10/19 04:28:26 retention policy shard deletion check
 commencing
 [retention] 2016/10/19 04:58:26 retention policy shard deletion check
 commencing
 [retention] 2016/10/19 05:28:26 retention policy shard deletion check
 commencing
 [retention] 2016/10/19 05:58:26 retention policy shard deletion check
 commencing
 [retention] 2016/10/19 06:28:26 retention policy shard deletion check
 commencing
 [retention] 2016/10/19 06:58:26 retention policy shard deletion check
 commencing
 [retention] 2016/10/19 07:28:26 retention policy shard deletion check
 commencing
 [retention] 2016/10/19 07:58:26 retention policy shard deletion check
 commencing
 [retention] 2016/10/19 08:28:26 retention policy shard deletion check
 commencing



 Thanks and Regards,
 Manish Jain
 +917738684730

 On Fri, Oct 14, 2016 at 11:30 PM, Sean Beckett 
 wrote:

> Please share the DDL from the top of the import file.
>
> Did you look in the logs? Are you sure you are querying the right
> database?
>
> On Fri, Oct 14, 2016 at 3:45 AM,  wrote:
>
>> How can I see the data imported in the database.
>> I reached till this step -
>>
>> [root@integration@DEV influxdb]influx -import -path=first.txt
>> 2016/10/14 09:22:45 Processed 3 commands
>> 2016/10/14 09:22:45 Processed 6 inserts
>> 2016/10/14 09:22:45 Failed 0 inserts
>>
>> Please help: how can I see the measurements in the database?
>> 'Show measurements' shows nothing.
>>
>>
>
>
>
> --
> Sean Beckett
> Director of Support and Professional Services
> InfluxDB
>

[influxdb] Re: Interpolate values

2016-10-25 Thread livio . rossani
I have the same exact need in my company's IoT project.
Sensors send values at irregular times, only when their readings change.
The last stored value is to be considered "the" value until the next one.
For simplicity, let's say the value is a boolean status ("presence"/"absence" 
in our case) encoded as "1" or "0".
I need to count the number of devices that are in state "1" for fixed time 
intervals. Using 

SELECT COUNT(value) ... GROUP BY TIME (1h) FILL(previous)

the filling is done _after_ the GROUPing, so the count doesn't reflect the 
number of devices that are _simultaneously_ in state "1". Using SUM() instead 
of COUNT() makes no difference.
The fact that Influx doesn't accept arithmetic expressions as arguments of 
aggregate functions makes it difficult to work around this limitation.

Another thing that would help immensely would be a working INTEGRAL function: 
if it were possible to do something like:

SELECT INTEGRAL("value" != 0 ? 1 : 0) ... GROUP BY TIME (1h) FILL(previous)

one could easily extract the fraction of time inside every fixed interval when 
the value satisfied a given condition (value<>0 in my example, but it could be 
anything).

At the moment I am querying all the raw data in the time interval and sending 
it to AWS Lambda for processing, but it would be much better if this kind of 
time-based manipulation were possible directly inside InfluxDB.

Thank you for any suggestion.
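
(A partial workaround sketch, assuming each sensor carries a device tag and 
writes to a measurement named presence: take the last known state per device 
per interval and carry it forward with FILL(previous). The cross-device sum 
then has to happen client-side, since 1.0 has no subqueries.)

SELECT LAST(value) FROM presence WHERE time >= '2016-10-01' AND time < 
'2016-10-02' GROUP BY time(1h), device FILL(previous)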
