Hi All,

I trust you're staying safe. I am acquainting myself with the Gluon API and still trying to understand what the errors I encounter really mean. I keep getting the error listed below, and after googling it I stumbled upon [this post](https://discuss.mxnet.apache.org/t/positional-arguments-must-have-ndarray-type/4578); however, following the suggestions there did not eliminate the error. It's apparent I'm overlooking something here. In case you're wondering about the data, I am reading it from an HDF5 file.

Thanks in advance! I would greatly appreciate your help on this.

Here are the details of the error:
    AssertionError: Argument a must have NDArray type, but got [[  30.55   66.25 1009.15   63.52]
     [  13.21   41.2  1016.63   74.1 ]
     [  26.99   72.99 1008.     76.1 ]
     ...
     [  18.59   41.1  1001.93   58.16]
     [  14.49   41.16 1000.5    82.17]
     [  26.56   65.59 1012.6    64.25]]

And the code responsible for the error:

    from d2l import mxnet as d2l
    import mxnet as mx
    from mxnet import np, npx

    #@save
    def sgd(params, lr, batch_size):
        """Minibatch stochastic gradient descent update."""
        print("entered sgd: ")
        for p in params:
            p[:] -= lr * p.grad / batch_size

    def linreg(X, w, b):
        """The linear regression model."""
        return np.dot(X, w) + b

    def squared_loss(y_hat, y):
        """Squared loss."""
        return (y_hat - y.reshape(y_hat.shape))**2 / 2

    #@save
    def train_ch11(trainer_fn, lr, batch_size, data_iter, num_epochs=2):
        # Initialization
        print("Entering train_ch11")
        #feature_dim = data.shape[1]
        w = np.random.normal(scale=.01, size=(4, 1))
        b = np.zeros(1)
        w.attach_grad()
        b.attach_grad()
        lr = 0.01
        net = linreg
        loss = squared_loss

        print("setting up net and loss functions")
        n, timer = 0, d2l.Timer()

        for _ in range(num_epochs):
            ctx = mx.gpu() if mx.context.num_gpus() else mx.cpu()
            timer.start()
            for X, y in data_iter:
                Xdata, ydata = X.as_in_context(ctx), y.as_in_context(ctx)
                X, y = np.float64(Xdata), np.float64(ydata)
                with mx.autograd.record():
                    inter = net(*[X], w, b)  # X producing the AssertionError
                    l = loss(inter, y)
                l.backward()
                sgd([w, b], lr, batch_size)
        #train_l = loss(net(features, w, b), labels)
        timer.stop()
        print("finish training model")
        print(f'performance in Gigaflops: block {2 / timer.times[3]:.3f}')
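
For what it's worth, my current guess is that the batches coming out of the HDF5 file are plain NumPy arrays rather than MXNet ndarrays, which is what `np.dot` inside `linreg` seems to be complaining about. Below is a minimal sketch of the conversion I think is needed; the shapes and the `onp.random.rand` data are just stand-ins for my real batches, not the actual loading code:

    import numpy as onp                 # plain NumPy, as returned when reading HDF5
    from mxnet import np, npx
    npx.set_np()                        # switch MXNet to NumPy-compatible behaviour

    # Stand-ins for one minibatch read from the HDF5 file (assumed shapes)
    X_batch = onp.random.rand(10, 4)
    y_batch = onp.random.rand(10, 1)

    # Wrap the plain arrays into MXNet ndarrays so np.dot() will accept them
    X = np.array(X_batch)
    y = np.array(y_batch)
    print(type(X))                      # should now be an mxnet ndarray

Does that look like the right direction, or am I still missing something inside `train_ch11`?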




