import numpy.linalg as LA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

reg = 0.1
lr = LogisticRegression(C=1/reg, max_iter=100,
                        fit_intercept=True, solver='lbfgs').fit(X_train, y_train)
ytrain_hat = lr.predict_proba(X_train)
loss = log_loss(y_train, ytrain_hat)
print(loss)
# The L2 penalty is 0.5*reg*||w||^2 (squared norm). Note also that
# log_loss returns the *mean* per-sample loss, while sklearn's solver
# minimizes C * (sum of losses) + 0.5*||w||^2, so the two quantities
# are on different scales.
print(loss + 0.5 * reg * LA.norm(lr.coef_)**2)
Maybe I am doing it wrong.
Sorry for the incomplete email.
Hi,
My question is that even after trying several solvers, I don't get
convergence for logistic regression. The loss value, as computed in the
previous email, was lower for max_iter=10 than for max_iter=30. So does
the optimization method diverge, and how do we monitor convergence?
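One way to check this is to evaluate the objective that sklearn's solver actually minimizes (the penalized *sum* of log losses, 0.5*||w||^2 + C * sum_i loss_i, not the mean that log_loss returns by default) at increasing max_iter budgets. Since lbfgs always starts from the same zero initialization, a larger budget should never end at a higher objective. A minimal sketch, assuming nothing beyond scikit-learn itself (the toy data via make_classification stands in for your X_train/y_train):

```python
# Sketch: monitor convergence by evaluating the objective sklearn
# minimizes at several max_iter budgets and checking it does not rise.
import warnings
import numpy as np
from sklearn.datasets import make_classification
from sklearn.exceptions import ConvergenceWarning
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

# Toy stand-in for X_train / y_train from the original email.
X_train, y_train = make_classification(n_samples=200, n_features=20,
                                       random_state=0)
reg = 0.1

def sklearn_objective(model, X, y):
    """0.5*||w||^2 + C * sum_i log-loss_i: the penalized sum of
    per-sample losses, not the mean that log_loss returns by default."""
    p = model.predict_proba(X)
    data_term = log_loss(y, p, normalize=False)  # sum over samples
    penalty = 0.5 * np.sum(model.coef_ ** 2)     # intercept unpenalized
    return penalty + model.C * data_term

objs = []
for max_iter in (10, 30, 100):
    with warnings.catch_warnings():
        # Small budgets will trip ConvergenceWarning; that is expected.
        warnings.simplefilter("ignore", ConvergenceWarning)
        lr = LogisticRegression(C=1/reg, max_iter=max_iter,
                                solver='lbfgs').fit(X_train, y_train)
    objs.append(sklearn_objective(lr, X_train, y_train))
    print(max_iter, lr.n_iter_, objs[-1])

# More iterations from the same starting point should not end higher.
assert objs[-1] <= objs[0] + 1e-6
```

If the objective computed this way still rises with more iterations, that would point to a real divergence; if only the plain (unpenalized, mean) log loss rises, the solver is fine and the comparison is just measuring a different quantity. `lr.n_iter_` also tells you whether the solver stopped at the budget or at its own tolerance.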