Hi,

We have a big data solution in trial, a part of which involves an
on-demand API for processing up to ~50 million data points.
Currently the data is aggregated in a Redis cluster, from where it is
filtered, fetched, and processed in Python (a Django app).
But the data processing doesn't scale well once the filtered dataset
from Redis exceeds roughly 85k-100k (1 lakh) records.
We are looking at map-reduce with MongoDB, or any other alternative
that will reduce the querying and processing time for even larger datasets.
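
For illustration, here is a minimal sketch of the direction we are
considering: pushing the filtering and aggregation down into MongoDB's
aggregation pipeline (the usual modern replacement for map-reduce) so
that only summary rows reach the Django process. The connection URI,
collection and field names below are placeholders, not our actual schema.

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # placeholder URI
    points = client["analytics"]["data_points"]        # placeholder collection

    def summarise(sensor_ids, start_ts, end_ts):
        """Per-sensor count/average for a time window, computed server-side."""
        pipeline = [
            # Filter inside MongoDB (can use indexes on sensor_id / ts).
            {"$match": {
                "sensor_id": {"$in": sensor_ids},
                "ts": {"$gte": start_ts, "$lt": end_ts},
            }},
            # Aggregate server-side so only summary rows cross the wire,
            # instead of ~100k raw records being pulled into Python.
            {"$group": {
                "_id": "$sensor_id",
                "count": {"$sum": 1},
                "avg_value": {"$avg": "$value"},
            }},
            {"$sort": {"count": -1}},
        ]
        return list(points.aggregate(pipeline, allowDiskUse=True))

The idea is the same whether we stay on Redis or move to MongoDB: do the
heavy filtering and reduction where the data lives, and ship only the
result set to Django.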

We are short on time and need more hands to help with the code.
We have a release scheduled for this Monday (US time).

Please get in touch if you can be of some help here, or forward this to
someone who can.
Efforts will be properly compensated.

Regards
Apratim Ankur
Primary contact (WhatsApp): +91 8984212389
Secondary contact: +91 9686800032