Hi Pat,
I don't think HBase TTL is the issue because:
1. I added the data 1 day ago.
2. I have a similar server running with 1.5 million events, each having 6k
features, with data 10 days old, and it is working fine.
Regards,
Abhimanyu
On Thu, Nov 23, 2017 at 10:58 PM, Pat Ferrel wrote:
Use the default. Tuning with a threshold is only for atypical data and unless
you have a harness for cross-validation you would not know if you were making
things worse or better. We have our own tools for this but have never had the
need for threshold tuning.
Yes, llrDownsampled(PtP) is the
May I please get an answer to this question? I have a project that depends
on it.
Using the Recommendation template
(https://github.com/apache/incubator-predictionio-template-recommender)
and the ecom recs template (
But when I run the command "count 'pio_event:events'" in the HBase shell, it
shows me all the rows: 1.5 million.
On Thu, Nov 23, 2017 at 2:53 PM, Александр Лактионов
wrote:
Hi Abhimanyu,
try setting a TTL for the rows in your HBase table.
It can be set in the hbase shell:
alter 'pio_event:events_?', NAME => 'e', TTL =>
and then do the following in the shell:
major_compact 'pio_event:events_?'
You can also configure automatic major compaction; it will physically delete all the expired rows.
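To make those two steps concrete, here is a minimal sketch of the full shell sequence. The table suffix (`_1`) and the TTL value (7 days, in seconds) are assumptions for illustration, not values taken from this thread; substitute your actual event table name and retention period:

```
# In the hbase shell, set a TTL on column family 'e' (value in seconds; 604800 = 7 days)
alter 'pio_event:events_1', NAME => 'e', TTL => 604800

# Expired rows are only hidden until a major compaction physically removes them
major_compact 'pio_event:events_1'
```

Note that until the major compaction runs, `count` may still report the old row total, since TTL expiry alone does not rewrite the store files.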
Hi,
I am stuck at this point. How can I identify the problem?
Regards,
Abhimanyu
On Mon, Nov 20, 2017 at 11:08 AM, Abhimanyu Nagrath <
abhimanyunagr...@gmail.com> wrote:
> Hi, I am new to PredictionIO v0.12.0 (Elasticsearch 5.2.1, HBase
> 1.2.6, Spark 2.6.0). Hardware (244 GB RAM and