That's interesting. After re-reading your earlier email, I think that I 
misunderstood what you were saying.

Since this is a mod_perl listserv, I imagine the advice here will always be to 
use mod_perl rather than Starman? 

Personally, I'd say either option would be fine. In my experience, the key 
advantage of mod_perl or Starman (over, say, CGI) is that you can preload 
libraries into memory at web server startup time, and that worker processes are 
persistent (although they do have limited lifetimes, of course).
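For mod_perl, for example, that preloading is typically done in a startup 
script pulled in once by the Apache parent process. A minimal sketch (module 
names below are placeholders, not from this thread):

```perl
# startup.pl -- loaded once at server start
# (in httpd.conf: PerlRequire /path/to/startup.pl)
use strict;
use warnings;

use DBI ();            # preload the database layer
use JSON::XS ();       # preload the JSON encoder
use My::App::Model (); # hypothetical application module

1;  # a required file must return a true value
```

Child processes forked after startup then share those compiled modules via 
copy-on-write memory. Starman achieves much the same thing with its 
--preload-app option.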

You could use a framework like Catalyst or Mojolicious (Dancer is another such 
framework, though I haven't worked with it), which can run under different web 
servers, and then try the different options to see what suits you best. 
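The reason that works is that these frameworks all speak PSGI, which the 
various servers know how to run. As a sketch of the server-agnostic idea, here 
is a minimal hand-rolled app.psgi, not tied to any framework:

```perl
# app.psgi -- a minimal PSGI application
use strict;
use warnings;

my $app = sub {
    my $env = shift;  # the PSGI environment hashref
    return [
        200,
        [ 'Content-Type' => 'text/plain' ],
        [ "Hello from PSGI\n" ],
    ];
};

$app;  # the file must return the code reference
```

You can then run the same file with `plackup app.psgi` for development, 
`starman --workers 4 --preload-app app.psgi` in production, or under mod_perl 
via Plack::Handler::Apache2, without changing the application code.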

One thing to note is that people usually put a reverse proxy such as Apache or 
Nginx in front of Starman (partly for serving static assets, but for other 
reasons as well). Your stack could be less complicated if you just went the 
mod_perl/Apache route. 
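For reference, a typical Nginx-in-front-of-Starman setup looks roughly like 
this (the port, server name, and paths are illustrative only):

```nginx
# Serve static assets directly; proxy everything else to Starman
server {
    listen 80;
    server_name example.com;

    location /static/ {
        root /var/www/myapp;  # hypothetical asset directory
    }

    location / {
        proxy_pass http://127.0.0.1:5000;  # Starman listens on 5000 by default
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With mod_perl, Apache itself is the Perl-capable server, so this extra proxy 
layer isn't needed.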

That said, what OS are you planning to use? It's worth checking whether 
mod_perl is easily available in your target OS's package repositories. I think 
Red Hat dropped mod_perl starting with RHEL 8, although EPEL 8 now includes it. 
Something to think about.

David Cook

-----Original Message-----
From: Wesley Peng <m...@yonghua.org> 
Sent: Wednesday, 5 August 2020 1:00 PM
To: dc...@prosentient.com.au; modperl@perl.apache.org
Subject: Re: Question about deployment of math computing

Hi

dc...@prosentient.com.au wrote:
> If your app isn't human-facing, then I don't see why a little delay would be 
> a problem?

Our app is not human-facing. Applications from other departments will request 
results from our app via HTTP.

The company has a huge big-data stack deployed (Hadoop, Flink, Storm, Spark, 
etc.); all of these solutions are already in place. The daily data traffic is 
as large as xx PB.

However, those stacks have a complicated privilege-control layer, and most of 
the time they run as backend services, for example offline analysis, feature 
engineering, and some real-time streaming.

We train the models in the backend, using the stacks mentioned above.

But once the models finish training, they are pushed online as a prediction 
service and served as an HTTP API, because third-party apps will only want to 
request the interface via HTTP.

Thanks.
