Hi Rashmin,

We found a similar issue when we start/stop the vmxnet3 device several times
(more than 3 times): it triggers a kernel panic, and sometimes a core dump.
Let me know if you want to submit a patch to fix it.
Thanks,
Waterman

-----Original Message-----
>From: Patel, Rashmin N
>Sent: Friday, October 10, 2014 6:07 AM
>To: Navakanth M; stephen at networkplumber.org; Cao, Waterman
>Cc: dev at dpdk.org
>Subject: RE: vmxnet3 pmd dev restart
>
>I just quickly looked into the code. Instead of releasing the memory or simply
>setting it to NULL (patch:
>http://thread.gmane.org/gmane.comp.networking.dpdk.devel/4683), you can zero
>it out and it should work; you can give it a quick try:
>
>/* rte_free(ring->buf_info); */
>memset(ring->buf_info, 0x0, ring->size * sizeof(vmxnet3_buf_info_t));
>
>This will not free the memory from the heap but just wipe it to 0x0, provided
>that we have freed all the mbuf(s) pointed to by each buf_info->m pointer.
>Hence you won't need to reallocate it when you start the device after this
>stop.
>
>Thanks,
>Rashmin
>
>-----Original Message-----
>From: Navakanth M [mailto:navakanthdev at gmail.com]
>Sent: Wednesday, October 08, 2014 10:11 PM
>To: stephen at networkplumber.org; Patel, Rashmin N; Cao, Waterman
>Cc: dev at dpdk.org
>Subject: Re: vmxnet3 pmd dev restart
>
>I tried Stephen's patch, but after stop is done and we call start, it crashes
>at vmxnet3_dev_start() -> vmxnet3_dev_rxtx_init() -> vmxnet3_post_rx_bufs(),
>because buf_info was freed and is not allocated again. buf_info is allocated
>in vmxnet3_dev_rx_queue_setup(), which is called only once, at initialization.
>I also tried not freeing buf_info in stop, but then I see a different issue
>after start: packets are not received because of the check
>while (rcd->gen == rxq->comp_ring.gen) { in vmxnet3_recv_pkts().
>
>Waterman, have you had a chance to test stop and start of the vmxnet3 device?
>If so, did you notice any issue similar to this?
>
>Thanks
>Navakanth
>
>On Thu, Oct 9, 2014 at 12:46 AM, Patel, Rashmin N <rashmin.n.patel at
>intel.com> wrote:
>> Yes, I had a local copy working with a couple of lines of fix.
>> But someone else (I think Stephen) added a fix patch for the same, and I
>> assume that if it's been merged it should be working, so I did not follow
>> up later.
>>
>> I don't have a VMware setup handy at the moment, but I think Waterman would
>> have more information about testing that patch if he has found any issue
>> with it.
>>
>> Thanks,
>> Rashmin
>>
>> -----Original Message-----
>> From: Navakanth M [mailto:navakanthdev at gmail.com]
>> Sent: Wednesday, October 08, 2014 4:14 AM
>> To: dev at dpdk.org; Patel, Rashmin N
>> Subject: Re: vmxnet3 pmd dev restart
>>
>> Hi Rashmin,
>>
>> I came across your reply in the following post, saying that you had worked
>> on this problem and would submit a patch for it.
>> Can you please share information on the changes you worked on, or the
>> patch log if you submitted one?
>> http://thread.gmane.org/gmane.comp.networking.dpdk.devel/4683
>>
>> Thanks
>> Navakanth
>>
>> On Tue, Sep 30, 2014 at 1:44 PM, Navakanth M <navakanthdev at gmail.com>
>> wrote:
>>> Hi,
>>>
>>> I am using DPDK v1.7.0 on VMware ESXi 5.1, and I am trying to reset the
>>> port (which uses the vmxnet3 PMD library functions) with the following
>>> calls:
>>> rte_eth_dev_stop()
>>> rte_eth_dev_start()
>>>
>>> Doing this, I hit a panic at rte_free(ring->buf_info) in
>>> vmxnet3_cmd_ring_release().
>>> I have gone through the following thread, but the patch mentioned there
>>> did not help; instead it crashed in the start function while accessing
>>> buf_info in vmxnet3_post_rx_bufs(). I see that buf_info is allocated in
>>> the queue setup functions, which are called only at initialization.
>>> http://thread.gmane.org/gmane.comp.networking.dpdk.devel/4683
>>>
>>> I tried not freeing it, and then rx packets are not received due to a
>>> mismatch in while (rcd->gen == rxq->comp_ring.gen) in
>>> vmxnet3_recv_pkts().
>>>
>>> To reset the device port, is what I am doing the right way?
>>> Or do I have to call vmxnet3_dev_tx_queue_setup() and
>>> vmxnet3_dev_rx_queue_setup() once stop is called?
>>> I have checked recent patches and threads but did not find much
>>> information on this.
>>>
>>> Thanks
>>> Navakanth