* Uwe Sauter [200514 22:23]:
[...]
> More details:
>
> I followed these two instructions:
>
> https://community.mellanox.com/s/article/howto-configure-sr-iov-for-connectx-4-connectx-5-with-kvm--ethernet-x
>
> https://community.mellanox.com/s/article/howto-configure-sr-iov-for-connect-ib-connect
Hi all,
I had to change the hardware of one of my Proxmox installations and now have the problem that I cannot configure a
Mellanox ConnectX-5 card for SR-IOV/passthrough. To be more precise, I can boot the VM and it also recognizes the
InfiniBand device, but I'm unable to assign a Node GUID and Port GUID to the virtual function.
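For anyone hitting the same thing: the Mellanox guides linked above set the VF GUIDs through sysfs once the VFs exist. A rough sketch, assuming the PF shows up as mlx5_0, the first VF is index 0 at PCI address 0000:5e:00.1, and the GUID values are placeholders to replace with your own:

# enable one VF on the PF (mlx5_0 is an assumption; check /sys/class/infiniband/)
echo 1 > /sys/class/infiniband/mlx5_0/device/sriov_numvfs

# assign Node and Port GUIDs to VF 0 (example values only)
echo 11:22:33:44:77:66:77:90 > /sys/class/infiniband/mlx5_0/device/sriov/0/node
echo 11:22:33:44:77:66:77:91 > /sys/class/infiniband/mlx5_0/device/sriov/0/port
echo Follow > /sys/class/infiniband/mlx5_0/device/sriov/0/policy

# rebind the VF so the driver picks up the new GUIDs (PCI address is an assumption)
echo 0000:5e:00.1 > /sys/bus/pci/drivers/mlx5_core/unbind
echo 0000:5e:00.1 > /sys/bus/pci/drivers/mlx5_core/bind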
--- Begin Message ---
The info is in the first line of the log
"kvm: -device vfio-pci,host=:5e:00.0,id=hostpci0,bus=pci.0,addr=0x10:
vfio :5e:00.0: failed to open /dev/vfio/46: Device or resource busy"
This means the device is already passed through to another running VM, or
it is still being held by another process on the host.
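A quick way to confirm which side is holding it, assuming VFIO group 46 as in the log above:

# list the VMs on this node and their status
qm list

# see which process (typically the kvm process of another VM) has the group node open
fuser -v /dev/vfio/46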
Hello,
Thank you for your support.
But I am getting the error message below after removing the 'hugepages = 2' line
in the VM config file.
kvm: -device vfio-pci,host=:5e:00.0,id=hostpci0,bus=pci.0,addr=0x10:
vfio :5e:00.0: failed to open /dev/vfio/46: Device or resource busy
TASK ERROR: start
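If the 'Device or resource busy' error is still there after that change, it is usually unrelated to hugepages; more likely another VM (or a leftover kvm process) still owns the same PCI device. A couple of checks worth trying, assuming the device is 5e:00.0:

# find other VM configs on this node that reference the same device
grep -r "5e:00" /etc/pve/qemu-server/

# look for a kvm process that still has the device on its command line
ps aux | grep "5e:00"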
--- Begin Message ---
Apologies for replying to myself, but I have a little more information.
‐‐‐ Original Message ‐‐‐
On Tuesday, May 12, 2020 1:12 PM, wrote:
> Hi PVE-users,
>
> Lately, on a no-subscription Proxmox (that I update regularly since release
> 6.0), I get the following error
--- Begin Message ---
Remove the hugepages line from your vmid.conf (e.g. 100.conf).
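A short sketch of both ways to do that, assuming VM ID 100:

# option 1: edit the config directly and delete the "hugepages: ..." line
nano /etc/pve/qemu-server/100.conf

# option 2: remove the option with the Proxmox CLI
qm set 100 --delete hugepages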
On Thu, 14 May 2020, 17:24 Sivakumar SARAVANAN, <
sivakumar.saravanan.jv@valeo-siemens.com> wrote:
> Thank you so much.
>
> What are the steps to disable hugepages?
>
>
> Best regards
> SK
>
> On Thu, May 14,
The preferred method to disable Transparent HugePages is to add
"transparent_hugepage=never" to the kernel boot line in the
"/etc/grub.conf" file. The server must be rebooted for this to take effect.
---
Gilberto Nunes Ferreira
On Thu, May 14, 2020 at 1:24 PM Sivakumar SARAVANAN <
siva
Thank you so much.
What are the steps to disable hugepages?
Best regards
SK
On Thu, May 14, 2020 at 6:20 PM Mark Adams via pve-user <
pve-user@pve.proxmox.com> wrote:
>
>
>
> -- Forwarded message --
> From: Mark Adams
> To: PVE User List
> Cc:
> Bcc:
> Date: Thu, 14 May 20
--- Begin Message ---
Do you really need hugepages? If not, disable it.
On Thu, 14 May 2020 at 17:17, Sivakumar SARAVANAN <
sivakumar.saravanan.jv@valeo-siemens.com> wrote:
> Hello Daniel,
>
> Thanks for coming back.
>
> I mean, I am unable to power ON the VM until shutdown the other VM's in t
Hello Daniel,
Thanks for coming back.
I mean, I am unable to power on a VM until I shut down the other VMs on the
same host.
There are 6 VMs running on each host, and sometimes all 6 VMs run
without any issue. But sometimes, if I stop (shutdown) and power on
(start) a VM, I get an error saying
- On 14 May 2020, at 17:38, Sivakumar SARAVANAN
sivakumar.saravanan.jv@valeo-siemens.com wrote:
> Hello,
>
> We have implemented Proxmox VE in our environment.
>
> So each server will have a maximum of 6 VMs, but we are not able to power on a few of the
> VMs until we bring down 1 or 2 VMs
Hello,
We have implemented Proxmox VE in our environment.
Each server will have a maximum of 6 VMs, but we are not able to power on a few of the
VMs until we bring down 1 or 2 VMs on the same host.
What could be the reason?
Kindly suggest.
Thanks in advance.
Best Regards
SK
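If the VMs that refuse to start use hugepages or PCI passthrough, the usual suspects are the host running out of free (huge)pages to back the new guest, or two VMs referencing the same passthrough device. Some quick checks on the host, as a sketch:

# free memory and state of the hugepage pool
free -h
grep Huge /proc/meminfo

# VM configs that share a hostpci device
grep -r "hostpci" /etc/pve/qemu-server/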
Hi Hervé,
Glad to read this :)
Cheers
On 14/5/20 at 16:48, Herve Ballans wrote:
Hi Eneko,
Thanks again for trying to help me.
Now, the problem is solved! We upgraded our entire cluster to PVE 6.2
and now everything is optimal, including the HA status.
We just upgraded each node, didn't change
Hi Mark,
Thanks. Yes we are investigating with network engineers.
We upgraded the entire cluster to PVE 6.2 and the cluster is fully
operational now.
But we do think that something in the network has changed and caused
the problem (switch upgrades?)
Therefore, for example, does activa
Hi Eneko,
Thanks again for trying to help me.
Now, the problem is solved! We upgraded our entire cluster to PVE 6.2
and now everything is optimal, including the HA status.
We just upgraded each node, didn't change anything else (I mean in terms
of configuration files).
Here, I'm just stating a fact, I