--- Begin Message ---
The simplest thing to also set is to make sure you are using writeback
cache on your VMs with Ceph. It makes a huge difference in performance.
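For reference, a minimal sketch of setting this from the CLI - the VM id,
disk slot, and storage/volume names below are placeholders, not taken from
this thread:

```shell
# Assumed example: switch an existing Ceph-backed disk to writeback cache.
# "100", "scsi0" and "ceph-pool:vm-100-disk-0" are placeholders for your setup.
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=writeback
```

The same cache option is also available per-disk in the GUI disk edit dialog.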
On Wed, 10 Jun 2020, 07:31 Eneko Lacunza, wrote:
> Hi Marco,
>
> El 9/6/20 a las 19:46, Marco Bellini escribió:
> > Dear All,
> > I
--- Begin Message ---
Sivakumar - this is a "known issue" as far as I am aware, usually seen when
you are allocating quite a bit of memory (although 16G is not a lot in your
case - maybe the server doesn't have much RAM?) when starting a VM with a
PCI device passed through to it. It also only seems t
--- Begin Message ---
Have you enabled IOMMU in the BIOS? Assuming your server hardware supports
it?
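As a rough sketch of how to check/enable this on the host (the kernel
parameter names are standard, but your bootloader config path may differ):

```shell
# Check whether the kernel already reports an IOMMU
# (Intel logs DMAR lines, AMD logs AMD-Vi lines)
dmesg | grep -e DMAR -e IOMMU -e AMD-Vi

# Besides the BIOS/UEFI setting, the kernel option is usually needed too,
# e.g. in /etc/default/grub (Intel shown; AMD uses amd_iommu=on):
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
# then run update-grub and reboot.
```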
On Fri, 15 May 2020 at 15:03, Sivakumar SARAVANAN <
sivakumar.saravanan.jv@valeo-siemens.com> wrote:
> Hello,
>
> I am unable to add the PCI device to VM's, where I am getting below error
> me
ethrough,format=raw,aio=threads,detect-zeroes=on'
> -device 'virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb'
> -netdev
> 'type=tap,id=net0,ifname=tap145i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown'
> -device
>
> SK
>
> On Thu, May 14, 2020 at 6:20 PM Mark Adams via pve-user <
> pve-user@pve.proxmox.com> wrote:
>
> >
> >
> >
> > -- Forwarded message --
> > From: Mark Adams
> > To: PVE User List
> >
--- Begin Message ---
Do you really need hugepages? If not, disable them.
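If hugepages were enabled per-VM, a sketch of turning them off (the VM id
100 is a placeholder, not from this thread):

```shell
# Remove the hugepages setting from the VM config so the VM
# falls back to normal pages on next start.
qm set 100 --delete hugepages
```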
On Thu, 14 May 2020 at 17:17, Sivakumar SARAVANAN <
sivakumar.saravanan.jv@valeo-siemens.com> wrote:
> Hello Daniel,
>
> Thanks for coming back.
>
> I mean, I am unable to power ON the VM until shutdown the other VM's in t
--- Begin Message ---
As Eneko already said, this really sounds like a network problem - if your
hosts lose connectivity to each other they will reboot themselves, and it
sounds like this is what happened to you.
You are sure there have been no changes to your network around the time this
happened?
--- Begin Message ---
Hi All,
I am having the issue that is detailed in this forum post:
https://forum.proxmox.com/threads/vm-start-timeout-with-pci-gpu.45843/
I thought I would take it to the mailing list to see if anyone here has any
ideas?
VMs boot fine the first time the machine starts up,
--- Begin Message ---
Is the data inside the VMs different? Maybe the data on the bigger one is
not as compressible?
On Wed, 11 Mar 2020, 08:07 Renato Gallo via pve-user, <
pve-user@pve.proxmox.com> wrote:
>
>
>
> -- Forwarded message --
> From: Renato Gallo
> To: g noto
--- Begin Message ---
REF: "Thin provisioning is set on the storage, it is a checkbox and of
course it has to be a storage type than can be thin provisioned (ie
lvmthin, zfs, ceph etc)."
I have to correct myself on this, sorry - it's been a long time since I used
lvmthin. This checkbox option is
--- Begin Message ---
Atila - just to follow up on Giannis' discard notes: depending on which OS
and filesystems you use inside your VMs, you may need to run fstrim, mount
with different options, or run specific commands (e.g. zpool trim for ZFS)
to get it all working correctly.
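A quick sketch of what that looks like inside a Linux guest (the pool name
is a placeholder; pick one approach per filesystem, not all of them):

```shell
# ext4/xfs guests: one-off trim of all mounted filesystems that support it
fstrim -av

# ...or enable periodic trimming instead of mounting with -o discard
systemctl enable --now fstrim.timer

# ZFS guests: trim the pool explicitly ("rpool" is a placeholder name)
zpool trim rpool
```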
Regards,
Mark
On T
creating some VMs and work on them , I dont want to
> > create
> > > in a wrong way and have to destroy later.
> > >
--- Begin Message ---
Thin provisioning is set on the storage; it is a checkbox, and of course it
has to be a storage type that can be thin provisioned (e.g. lvmthin, ZFS,
Ceph, etc.).
Then every virtual disk that is created on that storage type is thin
provisioned.
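As an illustration, an lvmthin storage definition in /etc/pve/storage.cfg
looks roughly like this (the storage, pool, and VG names are the common
defaults, not taken from this thread):

```
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
```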
Regards,
Mark
On Thu, 5 Mar 2020,