On Thu, Oct 13, 2011 at 11:23:59PM -0400, Skipper Seabold wrote:
> FWIW, scipy.stats defines entropy of p(x) = 0 to be 0, and I think it
> is so by definition.
Yes. I believe that that's the right way to do it.
G
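The convention can be shown in a few lines (a plain-Python sketch, not the scipy source): zero probabilities are simply skipped, since x*log(x) -> 0 as x -> 0+.

```python
from math import log

def entropy(p):
    # By convention 0 * log(0) = 0 (the limit of x*log(x) as x -> 0+),
    # so zero probabilities are skipped entirely; scipy.stats.entropy
    # behaves the same way.
    return -sum(x * log(x) for x in p if x > 0)

# The trailing zero changes nothing:
print(entropy([0.5, 0.5, 0.0]))  # == entropy([0.5, 0.5]) == log(2)
```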
On Thu, Oct 13, 2011 at 11:29 PM, Robert Layton wrote:
> That makes sense. I'll add an optional eps value, and handle the case of 0
> when it comes up.
> Thanks,
> Robert
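One way the optional-eps idea could look (a hypothetical sketch, not the actual patch): clamp the argument of the log to machine epsilon, so a zero cell yields 0 * log(eps) = 0 instead of 0 * log(0) = nan.

```python
import sys
from math import log

# Machine epsilon as the floor for log arguments; the function name and
# signature here are illustrative, not from the pull request.
EPS = sys.float_info.epsilon

def entropy_with_eps(p, eps=EPS):
    # Zero entries contribute 0 * log(eps) = 0 rather than nan.
    return -sum(x * log(max(x, eps)) for x in p)

print(entropy_with_eps([0.5, 0.5, 0.0]))  # log(2); the zero cell adds 0.0
```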
Yes, you are right. I still haven't checked it for correctness (I'm going
to check against a different implementation), so take the code with a grain
of salt.
- Robert
On Thu, Oct 13, 2011 at 11:10 PM, Robert Layton wrote:
I'm working on adding Adjusted Mutual Information, and need to calculate the
Mutual Information.
I think I have the algorithm itself correct, except for the fact that
whenever an entry of the contingency matrix is 0, a nan appears and
propagates through the code.
Sample code on the net [1] uses an eps=np.fin
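The nan can also be avoided without any eps by skipping zero cells outright, since they contribute 0 under the 0 * log(0) = 0 convention. A rough standalone sketch of that approach (not the pull-request code):

```python
from math import log

def mutual_information(contingency):
    """Mutual information (in nats) from a table of co-occurrence counts.

    Zero cells are skipped instead of being patched with an eps: by the
    0 * log(0) = 0 convention they contribute nothing, so no nan arises.
    """
    n = sum(sum(row) for row in contingency)
    a = [sum(row) for row in contingency]        # row marginals
    b = [sum(col) for col in zip(*contingency)]  # column marginals
    mi = 0.0
    for i, row in enumerate(contingency):
        for j, n_ij in enumerate(row):
            if n_ij == 0:
                continue  # zero cell: contributes 0, would be nan otherwise
            # p_ij * log(p_ij / (p_i * p_j)) rewritten in terms of counts
            mi += (n_ij / n) * log(n_ij * n / (a[i] * b[j]))
    return mi

print(mutual_information([[2, 0], [0, 2]]))  # log(2): perfectly matched labels
print(mutual_information([[1, 1], [1, 1]]))  # 0.0: independent labels
```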
Hi everybody,
I'm currently working on a Pull Request for Gradient Boosted
Regression Trees [1] (aka Gradient Boosting, MART, TreeNet) and I'm
looking for collaborators.
GBRTs have been advertised as one of the best off-the-shelf
data-mining procedures; they share many properties with random forests.