On 15/2/2014 10:17, Manoj Kumar wrote:
> Thanks Vlad,
>
> Can you tell me how to take care of that?
What exactly do you mean by "take care of that"?
Like Mathieu said, it probably doesn't matter too much in terms of the
final score.
Vlad
>
> On Sat, Feb 15, 2014 at 2:10 PM, Vlad Niculae wrote:
It is usually thought that the intercept / bias term should not be
penalized since it is only used to shift the decision surface to account
for the data mean. In my experience, penalizing the intercept term or not
makes no difference for high-dimensional problems. It sometimes makes a
difference for
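The point above about penalizing the intercept can be sketched in plain NumPy. This is a hypothetical illustration (made-up data, a ridge-style L2 penalty, closed-form normal equations), not code from the thread: appending a column of ones and penalizing its weight pulls the intercept toward zero, while leaving it unpenalized recovers the data offset.

```python
import numpy as np

rng = np.random.RandomState(0)
n, d = 100, 3
X = rng.rand(n, d)
# Data with a large offset (true intercept = 10), so the bias term matters.
y = X @ np.array([1.0, 2.0, 3.0]) + 10.0 + 0.01 * rng.randn(n)

alpha = 1.0
Xb = np.hstack([X, np.ones((n, 1))])  # append a ones column for w0

# Variant A: ridge penalty on every weight, intercept included.
I = np.eye(d + 1)
w_pen = np.linalg.solve(Xb.T @ Xb + alpha * I, Xb.T @ y)

# Variant B: same penalty, but the intercept's diagonal entry is zeroed,
# i.e. w0 is left unpenalized.
I_free = I.copy()
I_free[-1, -1] = 0.0
w_free = np.linalg.solve(Xb.T @ Xb + alpha * I_free, Xb.T @ y)

# The penalized intercept is shrunk toward zero; the free one stays near 10.
print(w_pen[-1], w_free[-1])
```

Consistent with the comment above, the gap between the two intercepts shrinks as the penalty's share of the problem shrinks, which is why it often doesn't matter much in high dimensions.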
Thanks Vlad,
Can you tell me how to take care of that?
On Sat, Feb 15, 2014 at 2:10 PM, Vlad Niculae wrote:
> Hi Manoj,
>
> In the first example, the intercept is not regularized, hence the
> difference.
>
> Vlad
> On Feb 15, 2014 8:54 AM, "Manoj Kumar" wrote:
>
>> Hello
>>
>> I have a query with fit_intercept parameter in most of the estimators.
Hi Manoj,
In the first example, the intercept is not regularized, hence the
difference.
Vlad
On Feb 15, 2014 8:54 AM, "Manoj Kumar" wrote:
> Hello
>
> I have a query with fit_intercept parameter in most of the estimators.
>
> When we have a linear model like w0 + w1*x1 + w2*x2 + .. I'm assuming that
> clf.intercept_ takes care of the w0 term since the data is centered.
Hello
I have a query with fit_intercept parameter in most of the estimators.
When we have a linear model like w0 + w1*x1 + w2*x2 + .. I'm assuming that
clf.intercept_ takes care of the w0 term since the data is centered.
Then why aren't these two equivalent?
X = np.random.rand(100, 10)
y = np.r
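The non-equivalence Manoj is asking about (and Vlad's answer, that the intercept is not regularized when fit_intercept is used) can be sketched without scikit-learn. This is a hypothetical reconstruction with made-up data and an assumed ridge penalty, since the snippet above is truncated: the centering trick leaves w0 unpenalized, whereas manually appending a ones column penalizes it, so the two fits differ.

```python
import numpy as np

rng = np.random.RandomState(42)
X = rng.rand(100, 10)
y = X @ rng.rand(10) + 5.0  # true intercept = 5

alpha = 1.0

# Route 1: the centering trick. Fit on centered X and y, then recover
# the intercept from the means, so w0 itself is never penalized.
X_mean, y_mean = X.mean(axis=0), y.mean()
Xc = X - X_mean
w = np.linalg.solve(Xc.T @ Xc + alpha * np.eye(10), Xc.T @ (y - y_mean))
b = y_mean - X_mean @ w  # the unpenalized intercept

# Route 2: append a column of ones and solve the penalized system
# directly, so the ones column's weight is regularized like any other.
Xb = np.hstack([X, np.ones((100, 1))])
w_all = np.linalg.solve(Xb.T @ Xb + alpha * np.eye(11), Xb.T @ y)

print(b, w_all[-1])  # the two intercepts differ
```

With no penalty (alpha = 0) the two routes would agree; the difference comes entirely from regularizing w0 in the second one.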