Weiru,

Currently there is no way to swarm over CoordinateEncoder parameters.
Swarming attempts to find the best parameters for *predicting* a
certain input field, and currently we cannot make predictions with the
CoordinateEncoder [1].

So at the moment you can only do anomaly detection with coordinates, which
uses a pre-configured set of model params; swarming is therefore unnecessary.

[1] 
http://lists.numenta.org/pipermail/nupic_lists.numenta.org/2015-July/011380.html

Regards,
---------
Matt Taylor
OS Community Flag-Bearer
Numenta


On Mon, Nov 23, 2015 at 6:36 AM, Weiru Zeng <[email protected]> wrote:
> Hello Kentaro:
>
> Thanks for your help; model_params.py helped me a great deal. But I still
> want to know how I can configure the swarming description file
> search_def.json to use a CoordinateEncoder, because I think the model
> generated by the swarming process will be more accurate. Does anyone know
> how I should do this?
>
> thanks everyone!
>
>
> ------------------ Original Message ------------------
> From: "Kentaro Iizuka" <[email protected]>
> Sent: Tuesday, November 17, 2015, 10:29 PM
> To: "Weiru Zeng" <[email protected]>
> Subject: Re: Question about running swarming
>
> Hello Weiru,
>
> I tried to use your `13.csv` data with CoordinateEncoder.
> Here is the sample code.
>
> https://gist.github.com/iizukak/885b11fc33db566c5e3d
>
> It’s not OPF, but I think you will find how to use CoordinateEncoder.
> BTW, your CSV's first column looks incorrect.
>
> Thanks.
>
> 2015-11-17 20:16 GMT+09:00 Weiru Zeng <[email protected]>:
>> Thank you for your reply.
>>
>> I have some data containing coordinates; the data is attached to this
>> mail. I have searched almost all the documentation, but cannot find any
>> information about how to edit the search_def.json file to make the OPF use
>> the CoordinateEncoder to encode the input data. The field I want to
>> predict is the coordinate (x, y, z), and the predictionSteps is 1. How
>> should I edit the search_def.json file?
>>
>> Weiru Zeng
>>
>>
>> ------------------ Original Message ------------------
>> From: "Matthew Taylor" <[email protected]>
>> Sent: Tuesday, November 17, 2015, 6:48 AM
>> To: "Weiru Zeng" <[email protected]>
>> Subject: Re: Question about running swarming
>>
>> There are many reasons you may be getting a bad prediction. One major
>> reason could be that the data is not predictable.
>>
>> By the way, you can run the exact experiment you're trying with a very
>> simple Menorah script:
>> https://gist.github.com/rhyolight/d9342c4f0ada961ee406
>>
>> You just need to "pip install menorah" first.
>>
>> When I ran it, I got the following from the swarm:
>>
>> Field Contributions:
>> {   u'chicago-beach-water-quality Calumet Beach turbidity': 13.665743305632503,
>>     u'chicago-beach-water-quality Calumet Beach water_temperature': 10.618651892890124,
>>     u'chicago-beach-water-quality Calumet Beach wave_height': 4.709141274238225,
>>     u'chicago-beach-water-quality Calumet Beach wave_period': 16.712834718374882,
>>     u'timestamp_dayOfWeek': 15.050784856879035,
>>     u'timestamp_timeOfDay': -11.04578340758864,
>>     u'timestamp_weekend': 0.0}
>>
>> These numbers are rather low, which tells us that wave_period may not
>> be very predictable.
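>>
>> If you want to rank those numbers programmatically, here is a small
>> sketch (plain Python, not a NuPIC call; reading them as percent
>> improvement over the baseline model is my understanding of how
>> swarming reports contributions):

```python
# Sort the swarm's field contributions so the weak fields stand out.
# Values are taken from the swarm output above.
contributions = {
    'chicago-beach-water-quality Calumet Beach turbidity': 13.665743305632503,
    'chicago-beach-water-quality Calumet Beach water_temperature': 10.618651892890124,
    'chicago-beach-water-quality Calumet Beach wave_height': 4.709141274238225,
    'chicago-beach-water-quality Calumet Beach wave_period': 16.712834718374882,
    'timestamp_dayOfWeek': 15.050784856879035,
    'timestamp_timeOfDay': -11.04578340758864,
    'timestamp_weekend': 0.0,
}

# Highest contribution first.
ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)

# Fields that contributed nothing (or hurt) are candidates to drop.
weak = [name for name, score in ranked if score <= 0]
```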
>>
>> ---------
>> Matt Taylor
>> OS Community Flag-Bearer
>> Numenta
>>
>>
>> On Tue, Nov 10, 2015 at 5:39 AM, Weiru Zeng <[email protected]> wrote:
>>> Hi nupic:
>>>
>>> Recently I used NuPIC to make a prediction; my data and the .json file
>>> are attached to this email. If you run swarming with my code and data,
>>> you will see the low contribution of the fields, and it makes a bad
>>> prediction. My questions are below:
>>>
>>> First: Is the bad prediction caused by my .json file?
>>> Second: Does the low contribution of all the fields mean that the
>>> features in the data are not enough to make a good prediction?
>>> Last: This is about the .json file. I found that there is an item named
>>> "aggregation" in some other .json files; the details are below:
>>>
>>> "aggregation": {
>>>       "hours": 1,
>>>       "microseconds": 0,
>>>       "seconds": 0,
>>>       "fields": [
>>>         [
>>>           "consumption",
>>>           "sum"
>>>         ],
>>>         [
>>>           "gym",
>>>           "first"
>>>         ],
>>>         [
>>>           "timestamp",
>>>           "first"
>>>         ]
>>>       ],
>>>       "weeks": 0,
>>>       "months": 0,
>>>       "minutes": 0,
>>>       "days": 0,
>>>       "milliseconds": 0,
>>>       "years": 0
>>>     }
>>>
>>> I attempted to set it, but failed. I want to know what the time entries
>>> and the "fields" entry mean. When should I set one of the time values to
>>> 1, and how should I set "fields"?
>>>
>>> Thanks in advance!
>>>
>>> Weiru Zeng
>
>
>
> --
> Kentaro Iizuka<[email protected]>
>
> Github
> https://github.com/iizukak/
>
> Facebook
> https://www.facebook.com/kentaroiizuka
