> Emory University, Atlanta, GA, USA
>
>
> On Mon, Apr 5, 2021 at 11:28 AM C W wrote:
>
>> Update:
>>
>> It seems I've already installed XGBoost before. But, I get the following
>> error:
>>
>> >>> import xgboost
>>
ers/mike/opt/miniconda3/lib/python3.8/site-packages/xgboost/lib/libxgboost.dylib\n
Reason: image not found']
It seems the OpenMP runtime is the one I am missing. I can install it by running
> brew install libomp
But I don't have brew on this computer. Any workaround?
Thanks!
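(One possible workaround sketch, assuming conda is available on this machine: the OpenMP runtime is also packaged on conda-forge, so Homebrew is not strictly needed.)

```shell
# Assumption: a working conda install. The conda-forge llvm-openmp
# package provides the same OpenMP runtime that "brew install libomp"
# would supply.
conda install -c conda-forge llvm-openmp
```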
On Mon, Ap
ip3 command,
> pip3 install xgboost
I don't have pip.
3) From the XGBoost website, I now see that you can build XGBoost from source (
https://xgboost.readthedocs.io/en/latest/build.html)
Question: will all 3 methods install XGBoost in the same folder?
Thanks a lot!
On Mon, Apr 5, 2021 at 1
Hello all,
I can't install pip on this computer. It has conda installed (probably not
helpful). Is there a workaround to install XGBoost and other packages?
I remember reading on Stack Overflow that there were some simple commands to
do it. I actually used them to install packages without pip.
I can't find
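(A sketch of one conda-only route, assuming the conda-forge channel is reachable: XGBoost is packaged for conda, so neither pip nor brew is required.)

```shell
# Assumption: conda is on PATH. py-xgboost is the conda-forge package
# that ships the xgboost Python module.
conda install -c conda-forge py-xgboost
```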
ht in EuroScipy last year:
>
> https://github.com/lesteve/euroscipy-2019-scikit-learn-tutorial/blob/master/rendered_notebooks/02_basic_preprocessing.ipynb
>
> On Fri, 1 May 2020 at 05:11, C W wrote:
>
>> Hermes,
>>
>> That's an interesting function. Does it work wit
e one-hot-encoding
> for categorical features? Can we have a "factor" data type?
>
> On Thu, Apr 30, 2020 at 03:55:00PM -0400, C W wrote:
> > I've used R and Stata software, none needs such transformation. They
> have a
> > data type called "factors",
Hello everyone,
I am frustrated with the one-hot-encoding requirement for categorical
features. Why?
I've used R and Stata software; neither needs such a transformation. They have
a data type called "factors", which is different from "numeric".
My problem with OHE:
One-hot-encoding results in large nu
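The blow-up is easy to see on a tiny sketch (hypothetical data; pandas' get_dummies does the one-hot encoding):

```python
import pandas as pd

# Hypothetical column with 4 distinct levels across 5 rows.
df = pd.DataFrame({"city": ["NYC", "LA", "SF", "NYC", "Boston"]})

# One-hot encoding creates one new column per distinct level.
dummies = pd.get_dummies(df["city"])
print(dummies.shape)  # (5, 4)
```

A column with thousands of distinct levels would likewise become thousands of columns.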
> Tue:
> https://github.com/rasbt/stat479-machine-learning-fs19/blob/master/06_trees/code/06-trees_demo.ipynb
>
> Best,
> Sebastian
>
>
> > On Oct 4, 2019, at 10:09 PM, C W wrote:
> >
> > On a separate note, what do you use for plotting?
> >
> > I
ore as a computational workaround for achieving the
> same thing more efficiently (although it looks inelegant/weird)-- something
> like that wouldn't be mentioned in textbooks.
>
> Best,
> Sebastian
>
> > On Oct 4, 2019, at 6:33 PM, C W wrote:
> >
> > Than
tead, what it does is
>
> if x >= 0.5 then right child node
> else left child node
>
> These are basically equivalent as you can see when you just plug in values
> 0 and 1 for x.
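A minimal sketch of that equivalence, using a hypothetical helper in plain Python:

```python
def split_direction(x):
    # Decision-tree rule on a 0/1 one-hot feature, thresholded at 0.5.
    return "right" if x >= 0.5 else "left"

# The feature only ever takes the values 0 and 1:
print(split_direction(0))  # left
print(split_direction(1))  # right
```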
>
> Best,
> Sebastian
>
> > On Oct 4, 2019, at 5:34 PM, C W wrote:
> >
variables in sklearn, so everything is treated as
> numerical.
>
> This is why we do one-hot-encoding: so that a set of numerical (one hot
> encoded) features can be treated as if they were just one categorical
> feature.
>
>
> Nicolas
> On 10/4/19 2:01 PM, C W wrote
't. In practice, it is hard to
> guess what is a nominal and what is an ordinal variable, so you have to do
> the one-hot encoding before you give the data to the decision tree.
>
> Best,
> Sebastian
>
> On Oct 4, 2019, at 11:48 AM, C W wrote:
>
> I'm getting some
>
>
>
> On Sat, 14 Sep 2019 at 20:59, C W wrote:
>
>> Thanks, Guillaume.
>> Column transformer looks pretty neat. I've also heard, though, that this
>> pipeline can be tedious to set up? Specifying what you want for every
>> feature is a pain.
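For what it's worth, a sketch of how the per-feature bookkeeping can be avoided, assuming scikit-learn >= 0.22 (make_column_selector picks columns by dtype instead of by name):

```python
import pandas as pd
from sklearn.compose import make_column_transformer, make_column_selector
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical mixed-type frame.
df = pd.DataFrame({
    "age": [30, 35, 40],
    "car": ["BMW", "Audi", "BMW"],
})

# Select columns by dtype instead of listing every feature by hand.
ct = make_column_transformer(
    (StandardScaler(), make_column_selector(dtype_include="number")),
    (OneHotEncoder(), make_column_selector(dtype_include=object)),
)
X = ct.fit_transform(df)
print(X.shape)  # (3, 3): 1 scaled numeric column + 2 one-hot columns
```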
>>
>
>
> at catboost [1], which has an sklearn-compatible API.
>
> J
>
> [1] https://catboost.ai/
>
> On Sat, Sep 14, 2019 at 3:40 AM C W wrote:
>
>> Hello all,
>> I'm very confused. Can the decision tree module handle both continuous
>> and c
t; we discussed via the previous email was referring to feature variables.
> Whether you choose the DT regressor or classifier depends on the format of
> your target variable.
>
> Best,
> Sebastian
>
> > On Sep 13, 2019, at 11:41 PM, C W wrote:
> >
> > Thanks, Se
ables).
>
> In any case, I guess this is what
>
> > "scikit-learn implementation does not support categorical variables for
> now".
>
>
> means ;).
>
> Best,
> Sebastian
>
> > On Sep 13, 2019, at 9:38 PM, C W wrote:
> >
> > Hello all,
>
Hello all,
I'm very confused. Can the decision tree module handle both continuous and
categorical features in the dataset? In this case, it's just CART
(Classification and Regression Trees).
For example,
Gender   Age   Income   Car   Attendance
Male     30    1        BMW   Yes
Female   35    9000
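A sketch of one way to feed a table like the one above to a scikit-learn tree (hypothetical rows completing the example; the categorical columns are one-hot encoded first, since the trees only accept numeric input):

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Hypothetical rows modeled on the example table.
df = pd.DataFrame({
    "Gender": ["Male", "Female"],
    "Age": [30, 35],
    "Income": [1, 9000],
    "Car": ["BMW", "Audi"],
    "Attendance": ["Yes", "No"],
})

# One-hot encode the categorical feature columns; numeric ones pass through.
X = pd.get_dummies(df[["Gender", "Age", "Income", "Car"]])
y = df["Attendance"]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.predict(X))
```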
on, score is R^2.
On Wed, Jun 5, 2019 at 9:11 AM Andreas Mueller wrote:
>
> On 6/4/19 8:44 PM, C W wrote:
> > Thank you all for the replies.
> >
> > I agree that prediction accuracy is great for evaluating black-box ML
> > models. Especially advanced models like n
Thank you all for the replies.
I agree that prediction accuracy is great for evaluating black-box ML
models, especially advanced models like neural networks, or not-so-black
models like LASSO, because they are NP-hard to solve.
Linear regression is not a black-box. I view prediction accuracy as a
alidation.html
>
> Nicolas
>
>
> On 5/31/19 8:54 PM, C W wrote:
>
> Hello everyone,
>
> I'm new to scikit-learn. I see that many tutorials in scikit-learn follow
> a workflow along the lines of
> 1) transform the data
> 2) split the data: train, test
>
Hello everyone,
I'm new to scikit-learn. I see that many tutorials in scikit-learn follow
a workflow along the lines of
1) transform the data
2) split the data: train, test
3) instantiate the sklearn object and fit
4) predict and tune parameters
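A minimal sketch of those four steps on synthetic data (the step order follows the tutorials described above; in practice the transformer would usually be fit on the training split only):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.rand(100, 3)
y = X @ np.array([1.0, 2.0, 3.0]) + 0.1 * rng.randn(100)

# 1) transform the data
X = StandardScaler().fit_transform(X)
# 2) split the data: train, test
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# 3) instantiate the sklearn object and fit
model = LinearRegression().fit(X_train, y_train)
# 4) predict / score (here: R^2 on the held-out split)
print(model.score(X_test, y_test))
```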
But, linear regression is done in least squares, s
mparison-sheets useful in understanding both
> languages' syntaxes and concepts:
>
>
>- https://www.datacamp.com/community/tutorials/r-or-python-for-data-analysis
>- http://pandas.pydata.org/pandas-docs/stable/comparison_with_r.html
>
>
> Gaël,
>
> Le 18
.
>
> Hope that explains,
> N
>
> On 18 June 2017 at 13:18, C W wrote:
> > Hi Sebastian,
> >
> > I looked through your book. I think it is great if you already know
> Python,
> > and looking to learn machine learning.
> >
> > For me, I ha
me good ways and resources to learn Python for data analysis?
>
> I think based on your questions, a good resource would be an introduction
> to programming book or course. I think that sections on object-oriented
> programming would make the rationale/design/API of scikit-learn and Python
Dear Scikit-learn,
What are some good ways and resources to learn Python for data analysis?
I am extremely frustrated using this thing. Everything comes after a dot!
Why would you type the same thing at the beginning of every line? It's not
efficient.
code 1:
y_sin = np.sin(x)
y_cos = np.cos(x)
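For what it's worth, the repeated prefix is just the module namespace; the names can be imported directly if typing np. everywhere is annoying:

```python
# Import the names once instead of repeating the np. prefix.
from numpy import linspace, sin, cos

x = linspace(0.0, 3.14, 5)
y_sin = sin(x)
y_cos = cos(x)
print(y_sin.shape, y_cos.shape)  # (5,) (5,)
```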
iginal message
>
> G
> *From: *C W
> *Sent: *Monday, 5 June 2017 00:31
> *To: *scikit-learn@python.org
> *Reply To: *Scikit-learn user and developer mailing list
> *Subject: *[scikit-learn] How to best understand scikit-learn and know
> its modules and methods?
>
> Dea
Dear scikit learn list,
I am new to scikit-learn. I am getting confused about LinearRegression.
For example,
from sklearn.datasets import load_boston
from sklearn.linear_model import LinearRegression
boston = load_boston()
X = boston.data
y = boston.target
model1 = LinearRegression()
model1.fit(X, y)
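A self-contained sketch of the fit-then-inspect pattern with stand-in data (load_boston was removed in scikit-learn 1.2, so a synthetic substitute is used here); the fitted parameters live on attributes ending in an underscore:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in with 13 features, like the Boston data had.
rng = np.random.RandomState(0)
X = rng.rand(50, 13)
y = X @ rng.rand(13)

model1 = LinearRegression()
model1.fit(X, y)

# Learned parameters are attributes with a trailing underscore.
print(model1.coef_.shape)  # (13,)
print(float(model1.intercept_))
```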