Hi,

Here are the PRs for PredictionIO and the Universal Recommender (UR).
https://github.com/apache/predictionio/pull/495
https://github.com/actionml/universal-recommender/pull/62

I tried to leverage the already present async interface of Elasticsearch,
and to wrap the HBase and JDBC calls in blocking constructs, which tell the
standard Scala ExecutionContext to use a separate thread for those calls.
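
For illustration, the wrapping looks roughly like this (a minimal sketch, not
the exact code from the PRs; findEventsBlocking is a made-up stand-in for a
synchronous HBase/JDBC lookup):

  import scala.concurrent.{blocking, Future}
  import scala.concurrent.ExecutionContext.Implicits.global

  // stand-in for a synchronous HBase/JDBC lookup
  def findEventsBlocking(entityId: String): Seq[String] = Seq.empty

  // wrap the call in blocking { ... } so the global ExecutionContext knows
  // it may spawn an extra thread while this one is parked on I/O
  def findEventsAsync(entityId: String): Future[Seq[String]] =
    Future {
      blocking {
        findEventsBlocking(entityId)
      }
    }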

Let me know what you think,
Chris

On Fri, 12 Oct 2018 at 09:06, Chris Wewerka <chris.wewe...@gmail.com> wrote:

> Hi Donald,
>
> thanks for your answer and the hint to base off Naoki's Akka HTTP
> thread. I saw the PR and already had the same idea, as it does not make sense
> to base off the old spray code. I worked with spray a couple of years ago,
> and back then it already had full support for Scala Futures / fully async
> programming. If I get the time I will start with a fork off Naoki's
> Akka HTTP branch.
>
> Please also have a look at my second mail, as the use of the bounded
> "standard" Scala ExecutionContext has a dramatic impact on how the
> machine's resources are used. On our small "all in one" machine we
> didn't see much CPU / load until yesterday, when I set the mentioned params
> to allow much higher thread counts in the standard Scala ExecutionContext.
> We have proven this in our small production environment and it has a huge
> impact. In fact the query server acted like a water dam, not letting enough
> requests into the system to use all of its resources. You might consider
> adding this to the documentation, until I hopefully come up with a PR for a
> fully async engine.
>
> Cheers
> Chris
>
> On Fri, 12 Oct 2018 at 02:18, Donald Szeto <don...@apache.org> wrote:
>
>> Hi Chris,
>>
>> It is indeed a good idea to create asynchronous versions of the engine
>> server! Naoki has recently completed the migration from spray to Akka HTTP
>> so you may want to base your work on that instead. Let us know if we can help
>> in any way.
>>
>> I do not recall the exact reason anymore, but the engine server was created
>> almost 5 years ago, and I don't remember whether spray could take futures
>> natively as responses the way Akka HTTP can now. Nowadays there shouldn't be
>> any reason not to provide asynchronous flavors of these APIs.
>>
>> Regards,
>> Donald
>>
>> On Thu, Oct 11, 2018 at 3:20 PM Naoki Takezoe <take...@gmail.com> wrote:
>>
>>> Hi Chris,
>>>
>>> I think the current LEventStore's blocking methods should take an
>>> ExecutionContext as an implicit parameter, and Future versions of the
>>> methods should be provided. I don't know why they aren't. Does anyone
>>> know the reason behind the current LEventStore API?
>>>
>>> For the moment, you can consider using LEvents directly to access the
>>> Future versions of the methods as a workaround.
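>>>
>>> Roughly what I have in mind is something like this (just a sketch of the
>>> idea; the method name and parameter list here are made up, not the real
>>> API):
>>>
>>>   import scala.concurrent.{ExecutionContext, Future}
>>>
>>>   // stand-in for one of the existing blocking lookups
>>>   def findByEntity(appName: String, entityId: String): Seq[String] = Seq.empty
>>>
>>>   // hypothetical Future-returning counterpart that takes the
>>>   // ExecutionContext implicitly instead of blocking internally
>>>   def findByEntityAsync(appName: String, entityId: String)
>>>                        (implicit ec: ExecutionContext): Future[Seq[String]] =
>>>     Future(findByEntity(appName, entityId))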
>>>
>>> On Thu, 11 Oct 2018 at 23:05, Chris Wewerka <chris.wewe...@gmail.com> wrote:
>>>
>>> >
>>> > Thanks George, good to hear that!
>>> >
>>> > Today I did a test by raising the bar for the max allowed threads in
>>> > the "standard"
>>> >
>>> > scala.concurrent.ExecutionContext.Implicits.global
>>> >
>>> > I did this before calling "pio deploy" by adding
>>> >
>>> > export JAVA_OPTS="$JAVA_OPTS
>>> > -Dscala.concurrent.context.numThreads=1000
>>> > -Dscala.concurrent.context.maxThreads=1000"
>>> >
>>> > Now we do see much more CPU usage by Elasticsearch. So it seems that
>>> > the query server, by using the standard thread pool bounded to the
>>> > number of available processors, acted like a dam.
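>>> >
>>> > (For reference, without those flags the default parallelism of the
>>> > global ExecutionContext is just the number of cores; a quick way to see
>>> > that bound, assuming no -Dscala.concurrent.context.* overrides, is:)
>>> >
>>> >   // number of cores = default parallelism of
>>> >   // scala.concurrent.ExecutionContext.Implicits.global
>>> >   println(Runtime.getRuntime.availableProcessors)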
>>> >
>>> > By setting the above values we now have something like a traditional
>>> > Java EE or Spring application, which blocks threads on synchronous
>>> > calls and creates new threads when there is demand (requests) for them.
>>> >
>>> > So this is far from being a good solution. Going full async/reactive
>>> > is still the way to go in my opinion.
>>> >
>>> > Cheers
>>> > Chris
>>> >
>>> > On Thu, 11 Oct 2018 at 14:07, George Yarish <gyar...@griddynamics.com>
>>> > wrote:
>>> >>
>>> >>
>>> >> Hi Chris,
>>> >>
>>> >> I'm not a contributor to PredictionIO, but I want to mention that we
>>> >> are also quite interested in these changes at my company.
>>> >> We often develop custom pio engines, and it doesn't look right to me
>>> >> to use Await.result with a non-blocking API.
>>> >> I totally agree with your point.
>>> >> Thanks for the question!
>>> >>
>>> >> George
>>>
>>>
>>>
>>> --
>>> Naoki Takezoe
>>>
>>
