Yes, sorry, realised my mistake just after sending.
Regards,
Nigel Legg
07914 740972
http://www.treavnianlegg.co.uk
http://twitter.com/nigellegg
http://uk.linkedin.com/in/nigellegg
On 10 July 2013 06:46, Robert Layton wrote:
Can confirm, it works without the *www*, but doesn't work with it.
On 10 July 2013 15:38, Nigel Legg wrote:
Total content on www.scikit-learn.org
This space is managed by SourceForge.net. You have attempted to access a
URL that either never existed or is no longer active. Please check the
source of your link and/or contact the maintainer of the link to have them
update their records.
Regards,
Nigel Legg
The ParameterGrid object is created before the jobs are run, so it would be
trivial to move this object creation up, calculate the number of jobs and
output it. Happy to take a PR.
On 10 July 2013 08:57, Joel Nothman wrote:
The number of jobs is actually len(ParameterGrid(search.param_grid)) *
len(check_cv(search.cv)), and I think this should be output at the start of
the search if verbose >= 1, and perhaps should also be calculated by some
method, so a user can estimate the time before finalising the grid...
- Joel
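For anyone wanting a back-of-the-envelope number before the search runs, the count Joel describes can be sketched without touching sklearn internals. This is only an illustration (the grid values are made up); it mirrors ParameterGrid's semantics, where a grid may be a single dict or a list of dicts:

```python
def n_grid_points(param_grid):
    # Number of parameter combinations: the Cartesian product of the
    # value lists in each dict, summed over sub-grids when the grid is
    # given as a list of dicts (as ParameterGrid allows).
    if isinstance(param_grid, dict):
        param_grid = [param_grid]
    total = 0
    for grid in param_grid:
        n = 1
        for values in grid.values():
            n *= len(values)
        total += n
    return total

param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1]}
n_folds = 5  # e.g. 5-fold CV, i.e. len(check_cv(search.cv)) == 5

n_fits = n_grid_points(param_grid) * n_folds
print(n_fits)  # 3 * 2 * 5 = 30
```

Multiplying by the number of CV folds gives the total fit count Joel mentions.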
Hi Josh,
This is decided by the param_grid that you give it. The actual internals are
handled by the ParameterGrid class (
http://scikit-learn.org/dev/modules/generated/sklearn.grid_search.ParameterGrid.html
).
The example on that page shows how you could calculate the number of runs
based on your param_grid.
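In case the docs page is unreachable again: ParameterGrid essentially iterates the Cartesian product of the value lists, which itertools can mimic (the parameter values here are illustrative):

```python
from itertools import product

param_grid = {"kernel": ["linear", "rbf"], "C": [1, 10, 100]}

# Expand the grid the way ParameterGrid iterates it: one value per
# parameter, every combination.
keys = sorted(param_grid)
combos = [dict(zip(keys, values))
          for values in product(*(param_grid[k] for k in keys))]

print(len(combos))  # 2 kernels * 3 C values = 6
```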
Thank you all. http://scikit-learn.org was down for a few minutes when I
sent the email, but it's up again.
Josh
Hi there,
Prior to running clf.fit(X, y) with GridSearchCV, is there any easy/direct
way to know how many jobs GridSearchCV will run (i.e. the total number of
parameter combinations in the grid search)?
Josh
/stable and /dev are both up for me at this time (which was two hours since
Josh's email).
Hi Josh,
The website works for me. Maybe you are trying to access
http://www.scikit-learn.org instead of http://scikit-learn.org?
Alexandre.
On Tue, Jul 9, 2013 at 10:29 PM, Josh Wasserstein wrote:
FYI: The website seems to be currently down.
Josh
Thanks for getting that all set up. I'll take another crack at it tonight.
On Jul 9, 2013 2:46 PM, "Olivier Grisel" wrote:
We are almost there. I have reconfigured the Jenkins build for Python
3.3, NumPy 1.7.1 and SciPy 0.12.0:
https://jenkins.shiningpanda-ci.com/scikit-learn/job/python-3.3-numpy-1.7.1-scipy-0.12.0/
Here are the remaining errors / failures:
https://jenkins.shiningpanda-ci.com/scikit-learn/job/python
The README-Py3k.rst was not reflecting the current situation. I just
updated it. We no longer use 2to3, but a single code base with
helpers in sklearn.externals.six.
Please feel free to submit pull requests to fix the remaining test
failures if you wish.
--
Olivier
--
2013/7/9 Josh Wasserstein :
> After running a SVC grid search with linear, rbf and sigmoid kernels, I got
> the following:
>
[snip]
>
> note that the above says that the best estimator is a sigmoid with:
> coef0 = 20.0855369232
> gamma=0.367879441171
> degree=3 ?
>
> I am confused about the above.
After running a SVC grid search with linear, rbf and sigmoid kernels, I got
the following:
Classification report for the best estimator:
> SVC(C=403.428793493, cache_size=600, class_weight=None,
> coef0=20.0855369232, degree=3, gamma=0.367879441171, kernel=sigmoid,
> max_iter=-1, probability=False
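For what it's worth, degree is only consulted by the poly kernel and coef0 only by poly/sigmoid, so unused defaults like degree=3 still show up in the estimator's repr. One common way to avoid searching irrelevant combinations is to pass param_grid as a list of dicts, one per kernel. A sketch with made-up values (plain-python count, mirroring ParameterGrid's per-sub-grid product):

```python
# Each sub-grid is searched independently, so kernel-specific parameters
# are only varied where they actually matter.
param_grid = [
    {"kernel": ["linear"], "C": [1, 10, 100]},
    {"kernel": ["rbf"], "C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]},
    {"kernel": ["sigmoid"], "C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0],
     "coef0": [0.0, 1.0]},
]

# Combinations per sub-grid: 3, 9, 18 -> 30 total, instead of the
# 3 * 3 * 3 * 2 = 54 a single flat dict over all four parameters gives.
sizes = []
for grid in param_grid:
    n = 1
    for values in grid.values():
        n *= len(values)
    sizes.append(n)

print(sizes, sum(sizes))  # [3, 9, 18] 30
```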
This is about the application of OneHotEncoder. I used LabelEncoder
because it would produce a different-looking set of categorical values (just
to demonstrate the functionality of OneHotEncoder).
What I want to know is what feature_indices_ and active_features_
indicate. As I've used the same
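For reference, here is a numpy sketch of the bookkeeping those (legacy) OneHotEncoder attributes describe: feature_indices_ holds the cumulative column offsets of each input feature in the expanded one-hot matrix, and active_features_ lists the expanded columns actually observed in training. The data below is made up:

```python
import numpy as np

# Two categorical integer features; feature 0 takes values {0, 1, 2},
# feature 1 takes values {0, 3} (seeing value 3 implies 4 possible values).
X = np.array([[0, 3],
              [1, 0],
              [2, 0]])

n_values = X.max(axis=0) + 1                       # [3, 4]
# feature_indices_: feature i owns expanded columns
# feature_indices[i] .. feature_indices[i + 1] - 1.
feature_indices = np.concatenate([[0], np.cumsum(n_values)])

# Build the full one-hot matrix, then find which columns were actually
# seen in training: those are active_features_.
one_hot = np.zeros((X.shape[0], feature_indices[-1]), dtype=int)
for i in range(X.shape[1]):
    one_hot[np.arange(X.shape[0]), feature_indices[i] + X[:, i]] = 1
active_features = np.where(one_hot.sum(axis=0) > 0)[0]

print(feature_indices)   # [0 3 7]
print(active_features)   # [0 1 2 3 6]: cols 4 and 5 never occurred
```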
----- Original Message -----
> From: "Skipper Seabold"
> To: scikit-learn-general@lists.sourceforge.net
> Sent: Monday, 8 July 2013 19:40:36
> Subject: Re: [Scikit-learn-general] Defining a Density Estimation Interface
>
> On Mon, Jul 8, 2013 at 1:20 PM, Bertrand Thirion
> wrote:
> >
> > De: "Jacob
2013/7/9 Joel Nothman :
> Sorry, I got confused with binarizer somehow. Thanks, Lars.
So did I because LabelBinarizer does not do a one-hot encoding, but
the general point stands.
--
Lars Buitinck
Scientific programmer, ILPS
University of Amsterdam
--
Sorry, I got confused with binarizer somehow. Thanks, Lars.
On Tue, Jul 9, 2013 at 9:27 PM, Lars Buitinck wrote:
2013/7/9 Joel Nothman :
> * You probably want to use Encoder rather than LabelEncoder in that example
There is no Encoder. But LabelEncoder is indeed the wrong thing to
use, since it encodes *labels*, not *samples*, using a one-hot scheme.
On feature arrays, the result is unspecified.
But even if
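To make the labels-vs-samples point concrete: LabelEncoder's behaviour amounts to mapping sorted unique labels to integer codes, which is an ordinal encoding, not a one-hot one. A numpy sketch of that mapping:

```python
import numpy as np

# LabelEncoder maps each distinct label to an integer code in sorted
# order; np.unique with return_inverse reproduces exactly that.
y = np.array(["paris", "tokyo", "paris", "amsterdam"])
classes, codes = np.unique(y, return_inverse=True)

print(classes)  # ['amsterdam' 'paris' 'tokyo']
print(codes)    # [1 2 1 0]
```

Applying this column-wise to a feature matrix yields arbitrary integer ordinals, which is why the result on feature arrays is unspecified.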
So:
* There may not be an issue with RFE?
* You probably want to use Encoder rather than LabelEncoder in that example
* It seems as if the output of feature_indices_ needs to be understood as
if it is then masked by active_features_, which only registers exactly those
features active in training. So
2013/7/8 Issam :
> On 7/8/2013 12:53 PM, Lars Buitinck wrote:
>> cost = np.sum(np.einsum('ij,ji->i', diff, diff.T)) / (2 * n_samples)
> Thanks for all the remarks!
>
> I found out that the `einsum` can be replaced simply by 'cost =
> np.sum(diff**2) / (2 * n_samples)' which is faster and more readable.
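A quick check that the two expressions agree: einsum('ij,ji->i', diff, diff.T) computes each row's squared norm (sum over j of diff[i, j] * diff[i, j]), so its sum equals np.sum(diff**2):

```python
import numpy as np

rng = np.random.RandomState(0)
diff = rng.randn(5, 3)
n_samples = diff.shape[0]

# Row-wise squared norms via einsum, then summed, versus the direct
# sum of squares; both divided by 2 * n_samples as in the cost above.
cost_einsum = np.sum(np.einsum('ij,ji->i', diff, diff.T)) / (2 * n_samples)
cost_simple = np.sum(diff ** 2) / (2 * n_samples)

assert np.allclose(cost_einsum, cost_simple)
```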
Hi,
I have the same issue with the R2 score for regression:
http://sourceforge.net/mailarchive/message.php?msg_id=31136945
Most scores use averages over a test sample.
Hence, I think the choice between mean(scores) and score(concatenation)
depends on the CV iterator:
- For KFold it makes sense
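A toy example of why mean(scores) and score(concatenation) disagree for R2. The data is hand-made and the R2 implementation is a minimal sketch (not sklearn's r2_score): each fold's R2 is normalised by that fold's own variance, while the pooled score is normalised by the global variance:

```python
import numpy as np

def r2(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot, with SS_tot taken around the mean of
    # the sample being evaluated.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

y_true = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
y_pred = np.array([1.1, 2.1, 2.9, 9.0, 11.5, 12.5])
folds = [slice(0, 3), slice(3, 6)]  # two illustrative "CV folds"

mean_of_scores = np.mean([r2(y_true[f], y_pred[f]) for f in folds])
pooled_score = r2(y_true, y_pred)

# Low within-fold variance makes per-fold R^2 harsh, while the pooled
# score benefits from the large between-fold spread.
print(mean_of_scores, pooled_score)  # 0.6175 vs ~0.9878
```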