This is really sad indeed. I did not know Raghav personally, but I got to
know the engineer in him through his work on the scikit-learn project.
Almost every time I looked up an issue or tried to find a PR for a missing
feature, there he was, either working on it himself or starting a
discussion.
Hi Markus,
I find that the current LDA implementation includes
"E[log p(beta | eta) - log q(beta | lambda)]" in the approximate bound function
and uses it to calculate perplexity.
But this term was not included in the likelihood function in Blei's C
implementation.
Maybe this causes some of the difference.
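For context, here is a minimal sketch of how the bound-based perplexity is exposed in scikit-learn. The data is synthetic and illustrative; the point under discussion is which terms the variational bound includes, which affects the number `perplexity()` reports:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Illustrative document-term count matrix: 20 "documents", 10 "words".
rng = np.random.RandomState(0)
X = rng.randint(0, 5, size=(20, 10))

lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(X)

# perplexity() is derived from the approximate (variational) bound.
# If that bound includes the E[log p(beta|eta) - log q(beta|lambda)]
# term while a reference implementation (e.g. Blei's C code) omits it,
# the two perplexity values will differ even for comparable models.
print(lda.perplexity(X))
```

So a mismatch against Blei's numbers need not indicate a bug in the inference itself, only a different definition of the bound used for the perplexity calculation.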
This is truly, truly sad news.
Leaving the home country you grew up in to find your way in a new language
and culture takes considerable effort, and to thrive at it takes even more
effort.
He was to be commended for that.
I think many of us knew of his enthusiasm for the project and benefited
greatly from it.
Raghav was a core contributor to scikit-learn. Venkat Raghav Rajagopalan, or
@raghavrv, as we knew him, appeared out of the blue and started contributing
in early 2015. From Chennai, he was helping us make scikit-learn a better library.
As often in open source, he was working with people that he had never met in person.
On Thu, Oct 5, 2017 at 3:27 PM, wrote:
>
>
> On Thu, Oct 5, 2017 at 2:52 PM, Stuart Reynolds > wrote:
>
>> It turns out sm.Logit does allow setting the tolerance.
>> Some quick-and-dirty time profiling of different methods on a 100k x 30
>> feature dataset, with different solvers and losses:
>