Re: [PVE-User] proxmox don't detect more than 26 disks ( /dev/sdXX)

2020-03-30 Thread Dominik Csapak

hi, sorry for the late reply (i was on holiday)

could you post a dmesg output of both kernels?
(the lspci looked the same)

regards

On 3/16/20 3:50 PM, Humberto Jose De Sousa via pve-user wrote:

00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890 Northbridge 
only dual slot (2x16) PCI-e GFX Hydra part (rev 02)
Subsystem: Hewlett-Packard Company RD890 Northbridge only dual slot (2x16) 
PCI-e GFX Hydra part
00:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD/ATI] RD890S/RD990 I/O Memory 
Management Unit (IOMMU)
Subsystem: Hewlett-Packard Company RD890S/RD990 I/O Memory Management Unit 
(IOMMU)
00:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890/RD9x0/RX980 
PCI to PCI bridge (PCI Express GPP Port 0)
Kernel driver in use: pcieport
Kernel modules: shpchp
00:0a.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890/RD9x0/RX980 
PCI to PCI bridge (PCI Express GPP Port 5)
Kernel driver in use: pcieport
Kernel modules: shpchp
00:0b.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890/RD990 PCI to 
PCI bridge (PCI Express GFX2 port 0)
Kernel driver in use: pcieport
Kernel modules: shpchp
00:0c.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890/RD990 PCI to 
PCI bridge (PCI Express GFX2 port 1)
Kernel driver in use: pcieport
Kernel modules: shpchp
00:11.0 SATA controller: Advanced Micro Devices, Inc. [AMD/ATI] 
SB7x0/SB8x0/SB9x0 SATA Controller [IDE mode]
Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 SATA Controller [IDE mode]
Kernel driver in use: ahci
Kernel modules: ahci
00:12.0 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] 
SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
Kernel driver in use: ohci-pci
00:12.1 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0 USB OHCI1 
Controller
Subsystem: Hewlett-Packard Company SB7x0 USB OHCI1 Controller
Kernel driver in use: ohci-pci
00:12.2 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] 
SB7x0/SB8x0/SB9x0 USB EHCI Controller
Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 USB EHCI Controller
Kernel driver in use: ehci-pci
00:13.0 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] 
SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
Kernel driver in use: ohci-pci
00:13.1 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0 USB OHCI1 
Controller
Subsystem: Hewlett-Packard Company SB7x0 USB OHCI1 Controller
Kernel driver in use: ohci-pci
00:13.2 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] 
SB7x0/SB8x0/SB9x0 USB EHCI Controller
Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 USB EHCI Controller
Kernel driver in use: ehci-pci
00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD/ATI] SBx00 SMBus Controller 
(rev 3d)
Subsystem: Hewlett-Packard Company SBx00 SMBus Controller
Kernel driver in use: piix4_smbus
Kernel modules: i2c_piix4, sp5100_tco
00:14.1 IDE interface: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 
IDE Controller
Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 IDE Controller
Kernel driver in use: pata_atiixp
Kernel modules: pata_atiixp, pata_acpi
00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 
LPC host controller
Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 LPC host controller
00:14.4 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] SBx00 PCI to PCI 
Bridge
00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor 
Function 0
00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor 
Function 1
00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor 
Function 2
00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor 
Function 3
Kernel driver in use: k10temp
Kernel modules: k10temp
00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor 
Function 4
Kernel driver in use: fam15h_power
Kernel modules: fam15h_power
00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor 
Function 5
00:19.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor 
Function 0
00:19.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor 
Function 1
00:19.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor 
Function 2
00:19.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor 
Function 3
Kernel driver in use: k10temp
Kernel modules: k10temp
00:19.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor 
Function 4
Kernel modules: fam15h_power
00:19.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor 
Function 5
00:1a.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor 
Function 0
00:1a.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor 
Function 1
00:1a.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor 
Function 2
00:1a.3 Host bri

Re: [PVE-User] proxmox don't detect more than 26 disks ( /dev/sdXX)

2020-03-16 Thread Dominik Csapak

On 3/16/20 3:11 PM, Humberto Jose De Sousa via pve-user wrote:

Hi there.

Up to pve-kernel-4.15.18-23-pve all disks were detected. After this kernel
version, only disks named /dev/sdX (single letter) are detected; disks named
/dev/sdXX (two letters) are not detected.



from the output it seems that your pci devices '03:00' and '04:00' do
not show any disks anymore


what's the output of 'lspci -k' on both kernels?

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Virtual Manager management per user configuration

2020-02-19 Thread Dominik Csapak

On 2/20/20 7:35 AM, Kazim Koybasi wrote:

Hello,

We would like to give a virtual machine service to our users in our campus
so that they can create their own virtual machine and see only their own
virtual machine. I found that it is possible from command line or with root
access from Proxmox interface.  Is it possible to create an environment an
give permission per user with Proxmox so that they can create and only see
their own virtual machine?



Hi,

this is not comfortably doable, for the following reasons:

to create a vm, a user has to have:
* allocate rights on the storage for the vm disks
(which also gives them the rights to see/edit/destroy all other disks
on that storage)

* allocate rights on /vms/{ID}, which you can grant beforehand,
but there is no 'pool', iow the user has to use the assigned ids

additionally, there is no mechanism for limiting resources per user
(e.g. only a certain number of cores)

also, when deleting a vm, the acls on that vm also get removed,
meaning if you give a user rights on /vms/100 and they delete
vm 100, they no longer have the rights to it

finally, there is generally no concept of resource 'ownership' for
users, only privileges and acls

if you can work around/ignore/accept those issues, you should be fine,
otherwise i would suggest either using or creating a separate
interface which handles all of that via the API[0]
(handling ownership, limiting api calls, etc.)

hope this helps
regards

Dominik

0: https://pve.proxmox.com/wiki/Proxmox_VE_API



Re: [PVE-User] question regarding colum in disks view / passtrough disk

2019-11-28 Thread Dominik Csapak

On 11/27/19 3:00 PM, Roland @web.de wrote:

Hello,


Hi,



in Datacenter->Nodename->Disks  there is a column "Usage" which shows
what filesystem is being used for the disk.

I have 2 system disks (ssd) which contain the proxmox system; they
are used as a ZFS mirror, i.e. i can put virtual machines on rpool/data

The other hard disks (ordinary large sata disks) are used as
passthrough devices, i.e. I have added them with a command like

qm set 100 -scsi5 /dev/disk/by-id/ata-HGST_HUH721212ALE600_AAGVE62H

as raw disks to virtual machines, i.e. each is used by a single
virtual machine.

From the disk view in the web GUI, you cannot distinguish between those,
i.e. they simply look "the same" from a host management perspective.

Wouldn't it make sense to make a distinction in the web GUI when a disk
contains a filesystem/data which is (and should) not be accessed at
the host/hypervisor level?


In general i agree with you that this would be nice.
The problem here is that during disk enumeration we do not touch
vm configs (i am not even sure we could do that easily,
because of package dependency chains) and thus have no information
about which disk is used by which vm



I would feel much better if proxmox knew some "the host OS should not
touch this disk at all" flag and if it would have an understanding of
"this is a disk i (can) use" and "this is a disk i can't/should not use"


if a disk is not used by a mountpoint/zfs/lvm/storage definition/etc., it
will not be touched by pve (normally), so this is only a 'cosmetic' issue


passing through a disk to a vm is always a very advanced feature that
can be very dangerous (thus it is not exposed in the web interface)
so the admin should already know what he's doing...



regards
Roland



Re: [PVE-User] VMID clarifying

2019-10-22 Thread Dominik Csapak

there was already a lengthy discussion of this topic on the bugtracker
see https://bugzilla.proxmox.com/show_bug.cgi?id=1822



Re: [PVE-User] Trouble Creating CephFS UPDATE

2019-10-13 Thread Dominik Csapak

hi,

just fyi, i just tested this on pve 6 (with current packages) via the gui:

1) install 3 or more pve hosts
2) cluster them
3) install/init ceph on all of them
4) create 3 mons
5) create at least 1 manager
6) add osds
7) add one mds
8) create ceph fs -> works

so i guess you are either missing something,
or something in your network/setup is not working correctly

kind regards
dominik



Re: [PVE-User] Nested Virtualization and Live Migration

2019-09-10 Thread Dominik Csapak

> https://www.linux-kvm.org/page/Nested_Guests

the page is sadly outdated

there are efforts in kernel and qemu to enable real working live 
migration of nested machines.


currently, qemu has decided to disable migration altogether when nesting is
enabled and the guest has the vmx/svm flag.[0]


you can try a cpu model that does not include that flag, but
you lose nesting for that machine, of course.

kind regards
Dominik

0: 
https://github.com/qemu/qemu/commit/d98f26073bebddcd3da0ba1b86c3a34e840c0fb8




Re: [PVE-User] Ceph server manageability issue in upgraded PVE 6 Ceph Server

2019-08-22 Thread Dominik Csapak

Hi,

On 8/21/19 2:37 PM, Eneko Lacunza wrote:


# pveceph createosd /dev/sdb -db_dev /dev/sdd
device '/dev/sdd' is already in use and has no LVM on it



this sounds like a bug... can you open one on bugzilla.proxmox.com
while i investigate?
we should be able to use a disk as db/wal even if there are only
partitions on it.


thanks
Dominik



Re: [PVE-User] rotating snapshot backups

2019-08-07 Thread Dominik Csapak

On 8/7/19 10:37 AM, Adam Weremczuk wrote:

Hello,

I have an active - cold backup pair of 5.4.6 hosts, and I solely run LXC
containers.


On the active host I've set up daily backups and they have been running 
fine.


The only problem is they don't rotate, i.e. old ones never get deleted.

I've tried setting "maxfiles: 25" in /etc/vzdump.conf but after the last
backup run I can see 56 files under /var/lib/vz/dump (28 .lzo + 28 .log).


I could work around this problem with a bash script but would prefer 
Proxmox to handle it.


Please advise.



the 'maxfiles' parameter is counted per guest, so setting it to 25
means 25 backups will be kept for each guest

there is no setting for such a limit across all backups on a storage
together
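
if you need a hard cap anyway, a small script run from cron after the
nightly job can prune the dump directory. a minimal sketch, assuming the
default /var/lib/vz/dump layout with .lzo archives and matching .log files
(the helper name is made up):

```shell
#!/bin/sh
# prune_backups DIR MAX: keep only the MAX newest .lzo backups (and
# their .log files) per guest in DIR
prune_backups() {
    dir=$1; max=$2
    # collect the distinct vmids from the backup file names
    for vmid in $(ls "$dir" | sed -n 's/^vzdump-[a-z]*-\([0-9]*\)-.*\.lzo$/\1/p' | sort -u); do
        # list newest first; everything past $max gets removed
        ls -1t "$dir"/vzdump-*-"$vmid"-*.lzo | tail -n +$((max + 1)) |
        while read -r f; do
            rm -f -- "$f" "${f%.lzo}.log"
        done
    done
}
```

e.g. `prune_backups /var/lib/vz/dump 25` keeps the newest 25 backups per
guest and removes the rest together with their log files.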




Re: [PVE-User] network interfaces renamed after update to proxmox6

2019-07-23 Thread Dominik Csapak

On 7/23/19 1:30 PM, Dominik Csapak wrote:

Hi,

we changed from the out-of-tree intel driver to the in-tree kernel driver.

maybe there is some bug there

did you change anything regarding network device naming (e.g. with udev)?

can you post an 'lspci --nnk' ?


sorry this is a typo, should be 'lspci -nnk' (only one -)

also dmesg output would be interesting



maybe there is a firmware update for your card?





Re: [PVE-User] network interfaces renamed after update to proxmox6

2019-07-23 Thread Dominik Csapak

Hi,

we changed from the out-of-tree intel driver to the in-tree kernel driver.

maybe there is some bug there

did you change anything regarding network device naming (e.g. with udev)?

can you post an 'lspci --nnk' ?

maybe there is a firmware update for your card?



Re: [PVE-User] Host Rebooting when using PCIe Passthrough

2019-07-04 Thread Dominik Csapak

On 7/4/19 8:45 PM, Craig Jones wrote:

Hello,

I have a VM that I'm passing a GPU through to. The passthrough itself
works great. The issue is that whenever this VM is powered on, the host
will reboot without any interaction from me. The reboot happens anywhere
from 3 - 15 minutes after the VM has been powered on. I have many other
VMs that don't cause this. The only difference between them and this one
is the passthrough GPU. Attached are some potentially helpful outputs.
The syslogs have been truncated from when the VM had been powered on to
the last entry right before the host rebooted.

Thanks,
Craig




one thing you could do is set up kernel crash logging (kdump) to see
if the kernel crashes and why

aside from that the only thing i see is that your gpu is not
in an isolated iommu group:

8<
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.1
/sys/kernel/iommu_groups/1/devices/0000:01:00.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.1
/sys/kernel/iommu_groups/1/devices/0000:02:00.0
>8

01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. 
[AMD/ATI] RV770 [Radeon HD 4870]
01:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] RV770 HDMI 
Audio [Radeon HD 4850/4870]
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. 
RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 07)


it seems it is in a group together with your nic

this can be the cause of the crashes...
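
to check this on a host, the sysfs tree can be walked directly; a small
helper sketch (the overridable base path is only there to make it easy to
try out):

```shell
#!/bin/sh
# list every PCI device together with its IOMMU group, so you can
# see at a glance whether the GPU shares a group with other devices
list_iommu_groups() {
    base=${1:-/sys/kernel/iommu_groups}
    for dev in "$base"/*/devices/*; do
        [ -e "$dev" ] || continue     # no IOMMU groups at all
        group=${dev#"$base"/}         # strip the base path
        group=${group%%/*}            # keep only the group number
        echo "group $group: ${dev##*/}"
    done
}
```

a device is only safe to pass through if everything else in its group is
also passed through (or is a pci bridge).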

hope this helps



Re: [PVE-User] Routing inside LXC container (System V)

2019-06-04 Thread Dominik Csapak

Hi,

for current ubuntu linux containers, we use systemd-networkd to 
configure their network


so you should be able to use systemd-networkd to set up your routes
with overrides of the .network files in /etc/systemd/network

e.g. you can add an override for /etc/systemd/network/foo.network with 
conf files in /etc/systemd/network/foo.network.d/


see the systemd-networkd manpage for more info on this and how
to set custom routes there
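
for example, a drop-in adding a static route could look like this (the
file name, interface and addresses match nothing in this thread, they are
purely illustrative):

```ini
# /etc/systemd/network/eth0.network.d/extra-routes.conf
[Route]
Destination=192.168.50.0/24
Gateway=10.0.0.1
```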



Re: [PVE-User] API users

2019-04-24 Thread Dominik Csapak

On 4/24/19 11:54 AM, Mark Schouten wrote:


Hi,

we want all users to authenticate using 2FA, but we also want to use the API
externally, and 2FA with the API is quite difficult.

In the latest version, you can enable 2FA per user, but you cannot disable GUI
access for e.g. API users. So an API user can just log in without 2FA. Is there a
way to enable 2FA and disable the GUI for users without 2FA? Perhaps by
revoking a role permission?



Hi,

The GUI and TFA are two independent things. The GUI uses the API in the 
same way as any external api client would use it (via ajax calls).
If you want to disable just the gui, simply do not allow access to '/' 
via a reverse proxy or something similar.
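
as a rough illustration (an untested sketch; 127.0.0.1:8006 is the default
pveproxy address, everything else is an assumption), an nginx config could
refuse the web interface while keeping the API reachable:

```
# allow API clients through, refuse the web interface at '/'
location /api2/ {
    proxy_pass https://127.0.0.1:8006;
}
location / {
    return 403;
}
```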


If you want to enforce TFA, you have to enable it on the realm; then it
is enforced for all users of that realm


The per-user TFA is to let individual users enhance the security of
their account, not to enforce its use.

hope this answers your question



Re: [PVE-User] Intel Corporation Gigabit ET2 Quad Port Server Adapter

2019-04-11 Thread Dominik Csapak

hi

On 4/11/19 1:48 PM, David Lawley wrote:
not so fast... borked network settings, so I did another fresh
install. if enpX is the physical location, I am at a loss as to where it
gets enp67?? By this example I would assume I want enp43s0f0 ... etc


hex 43 = dec 67
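
the predictable interface names use the decimal pci bus number, while
lspci prints it in hex, which is easy to check in a shell:

```shell
# the PCI bus 0x43 from lspci becomes 67 in the interface name
printf 'enp%ds0f0\n' 0x43    # prints enp67s0f0
```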


pve gui shows

enp8s0f1

enp8s0f0

enp7s0f1

enp7s0f0

enp67s0f1

enp67s0f0

lspci shows

07:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network 
Connection (rev 01)
07:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network 
Connection (rev 01)
08:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network 
Connection (rev 01)
08:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network 
Connection (rev 01)
43:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network 
Connection (rev 01)
43:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network 
Connection (rev 01)
44:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network 
Connection (rev 01)
44:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network 
Connection (rev 01)






Re: [PVE-User] [PATCH container 1/1] Signed-off-by: Hugo Lecourt

2019-03-26 Thread Dominik Csapak

On 3/26/19 9:43 AM, Hugo Lecourt via pve-user wrote:

I used git-send-email
However, I'm not accustomed to posting to mailing lists, so I failed this post
Would you like me to re-post this on pve-devel?
I believed I was not able to post to that list


sorry about the part about git-send-email; our mailing list
preserves the original mail as eml when dmarc is used (to not mangle the
headers), so that is ok


you have to first subscribe to the devel list (as described in the
link i included)




Yes, the second fix is crappy

I tested again, and there is no bug...
It makes no sense!

Sorry for the waste of time...


no problem, contributions are always welcome :)



Re: [PVE-User] [PATCH container 1/1] Signed-off-by: Hugo Lecourt

2019-03-26 Thread Dominik Csapak

hi,

a few things to your patch:

* please use the developer list instead of the user list
* please send the patch directly as mail (e.g. with git-send-email) 
instead of as eml

* use a meaningful subject

this information is all available in our developer-documentation[0]

to the patch itself:

what exactly is the bug?

i cannot verify that all devices are added as usb3; here they are correctly
added as usb2/usb1 (qm showcmd ID shows the qemu commandline that is
generated)


the first hunk of yours does nothing, since at that point '$d' is not
used anymore, and the second hunk is wrong, since the ehci bus cannot be
used with usb 1.x devices, only usb 2.0


but even if your description were true (that all usb devices would be
put on the xhci bus) this would still be ok, since the xhci controller
can handle all usb 1.x, 2.0, and 3.x devices

0: https://pve.proxmox.com/wiki/Developer_Documentation



Re: [PVE-User] IOMMU and Interrupt Remapping

2019-03-07 Thread Dominik Csapak

On 08/03/2019 03:43, Craig Jones wrote:

Hello,

I'm trying to gauge whether I should use Proxmox on an Intel-based or AMD-based
system. A major factor contributing to this is IOMMU support, but more
specifically, Interrupt Remapping support.

Based on this article, it seems
that both AMD and Intel support IOMMU, but Interrupt Remapping is not as widely
supported. I thought IOMMU and Interrupt Remapping always went hand in hand. Is this
not the case? Does Proxmox have better Interrupt Remapping support on one type of
system than the other?



The article is at this point very old and ought to be updated.
For more concise and current information the reference documentation[0] 
is a better start


in any case, i have successfully used pci passthrough/iommu with both
intel (>= haswell) and current amd systems (ryzen, epyc), so it
depends on the specific hardware (mostly the mainboard)

hope that helps
Dominik

0: https://pve.proxmox.com/pve-docs/chapter-qm.html#qm_pci_passthrough



Re: [PVE-User] MxGPU with AMD S7150

2019-03-01 Thread Dominik Csapak

On 01.03.19 14:13, Mark Adams wrote:

On Fri, 1 Mar 2019 at 12:52, Dominik Csapak  wrote:


On 01.03.19 13:37, Mark Adams wrote:

Hi All,

I'm trying this out, based on the wiki post and the forum posts:



https://forum.proxmox.com/threads/amd-s7150-mxgpu-with-proxmox-ve-5-x.50464/


https://pve.proxmox.com/wiki/MxGPU_with_AMD_S7150_under_Proxmox_VE_5.x

However I'm having issues getting the gim driver working. Was just
wondering if the Proxmox staff member that tested this out came across

this

particular issue, or if anyone else had any insights.


Hi, i am the one that tested this.



Hi Dominik, Thanks for getting back to me so quickly.



Hi, no problem







My hardware is an ASRock EPYCD8-2T motherboard (SR-IOV enabled in bios)

and

an AMD S7150. Proxmox is 5.3-11.

When running the modprobe of gim, it crashes out with the following:

[Fri Mar  1 12:31:49 2019] gim info:(enable_sriov:299) Enable SRIOV
[Fri Mar  1 12:31:49 2019] gim info:(enable_sriov:300) Enable SRIOV vfs
count = 16
[Fri Mar  1 12:31:49 2019] pci :61:02.0: [1002:692f] type 7f class
0xff
[Fri Mar  1 12:31:49 2019] pci :61:02.0: unknown header type 7f,
ignoring device
[Fri Mar  1 12:31:50 2019] gim error:(enable_sriov:311) Fail to enable
sriov, status = fffb
[Fri Mar  1 12:31:50 2019] gim error:(set_new_adapter:668) Failed to
properly enable SRIOV
[Fri Mar  1 12:31:50 2019] gim info:(gim_probe:91) AMD GIM probe:

pf_count

= 1



mhmm i cannot really remember if that exact error message occurred, but
you have to enable several things in the bios

AMD-Vi/VT-d
SR-IOV
ARI
and possibly above-4g-decoding

also make sure you enable the 'legacy' or non uefi oprom for
that card

on our supermicro board we could select the oprom for each pcie port
separately



It's the same with this ASRock Rack board. I've set the oprom to legacy for
PCIE slot 1, but it doesn't seem to make any difference. I've also tried
other slots but that doesn't make a difference either.

The only thing I can't find, is any option relating to ARI. Do you recall
at all what the option was called? I think the supermicro and asrock boards
are pretty similar when it comes to options, but maybe this board is
missing ARI.


With ARI i mean Alternative Routing-ID Interpretation, a PCI extension[1]



Also I have ACS enabled but that doesn't help either.



names may be different in your bios,
or some options may not exist at all

lastly, a different pcie port may be necessary, depending on how
the mainboard is wired (with epyc, all pcie ports should go to the
cpu, but i do not know about your specific board)

if all else fails, i would open an issue on github for the gim project
and ask there if anything is known



Thanks I will do that.



ok, if i remember anything else, i will answer here on the list

1: 
https://pcisig.com/sites/default/files/specification_documents/ECN-alt-rid-interpretation-070604.pdf





Re: [PVE-User] MxGPU with AMD S7150

2019-03-01 Thread Dominik Csapak

On 01.03.19 13:37, Mark Adams wrote:

Hi All,

I'm trying this out, based on the wiki post and the forum posts:

https://forum.proxmox.com/threads/amd-s7150-mxgpu-with-proxmox-ve-5-x.50464/

https://pve.proxmox.com/wiki/MxGPU_with_AMD_S7150_under_Proxmox_VE_5.x

However I'm having issues getting the gim driver working. Was just
wondering if the Proxmox staff member that tested this out came across this
particular issue, or if anyone else had any insights.


Hi, i am the one that tested this.



My hardware is an ASRock EPYCD8-2T motherboard (SR-IOV enabled in bios) and
an AMD S7150. Proxmox is 5.3-11.

When running the modprobe of gim, it crashes out with the following:

[Fri Mar  1 12:31:49 2019] gim info:(enable_sriov:299) Enable SRIOV
[Fri Mar  1 12:31:49 2019] gim info:(enable_sriov:300) Enable SRIOV vfs
count = 16
[Fri Mar  1 12:31:49 2019] pci :61:02.0: [1002:692f] type 7f class
0xff
[Fri Mar  1 12:31:49 2019] pci :61:02.0: unknown header type 7f,
ignoring device
[Fri Mar  1 12:31:50 2019] gim error:(enable_sriov:311) Fail to enable
sriov, status = fffb
[Fri Mar  1 12:31:50 2019] gim error:(set_new_adapter:668) Failed to
properly enable SRIOV
[Fri Mar  1 12:31:50 2019] gim info:(gim_probe:91) AMD GIM probe: pf_count
= 1



mhmm i cannot really remember if that exact error message occurred, but
you have to enable several things in the bios

AMD-Vi/VT-d
SR-IOV
ARI
and possibly above-4g-decoding

also make sure you enable the 'legacy' or non uefi oprom for
that card

on our supermicro board we could select the oprom for each pcie port 
separately


names may be different in your bios,
or some options may not exist at all

lastly, a different pcie port may be necessary, depending on how
the mainboard is wired (with epyc, all pcie ports should go to the
cpu, but i do not know about your specific board)

if all else fails, i would open an issue on github for the gim project
and ask there if anything is known

hope this helps



Re: [PVE-User] sequential node backup

2019-02-22 Thread Dominik Csapak

On 2/21/19 9:14 PM, Roberto Alvarado wrote:

Hi Foks!

If someone can help me with this... I'm looking for a way to do sequential
backups of all my proxmox nodes. For example, with node1, node2 and node3: start
the backup on node1; when that node finishes its backup, start
node2; and when that node finishes, start node3.

When you configure the backups, you can only select the
time/day/vms for each node, but this doesn't prevent two nodes from backing up at
the same time, which is what I want to avoid.
For example, if you set up a backup job for "ALL" nodes, all the nodes start
the backup job at the same time, which is overkill for my backup storage :(

If someone has an idea or workaround for this, that would be great!



what you could do is leverage the possibility of having a backup
hookscript per host, so you could start the backup on machine 2 at the
end of the backup on machine 1, etc.


the disadvantage is that you do not see the secondary backup jobs in the
gui...
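
such a chain could be sketched with a hookscript like the following (the
node name is an example, and the echo stands in for the real ssh call;
vzdump passes the phase name as the first argument):

```shell
#!/bin/sh
# vzdump calls the hookscript with the phase name as its first
# argument; on "job-end" we kick off the next node's backup job
hook() {
    case "$1" in
        job-end)
            # on a real cluster, replace echo with e.g.:
            #   ssh root@node2 'vzdump --all'
            echo "triggering backup on node2"
            ;;
        *)
            :   # ignore the other phases (job-start, backup-end, ...)
            ;;
    esac
}
hook "$@"
```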




Re: [PVE-User] pve cluster error

2019-02-13 Thread Dominik Csapak

On 2/13/19 8:32 PM, Denis Morejon wrote:
I have a 12-member cluster. I had problems with two nodes and it was
necessary to replace memory in them. But when I shut the servers down I
lost the cluster in the web interface.


what do you mean by 'lost the cluster'? what was displayed?
can you provide any errors from the browser's console?

However, when I type "pvecm
status" on one of the working nodes, it appears to be ok (see below, all
nodes vote).


was that after you had already started the 2 servers again? (that would be ok) or
while the 2 were offline? (that would be weird)

Then I decided to restart the pve-cluster service on a
node (one different from the two that I shut down) and the service failed


can you provide the syslog/journal from that time?

kind regards
Dominik



Re: [PVE-User] Ceph pool wrongly displayed in GUI

2019-02-13 Thread Dominik Csapak

On 2/13/19 11:27 AM, Sten Aus wrote:

Hi

I've discovered that the Proxmox 5 GUI shows the rbd (rbd here as a type) pool
wrongly.


When I look at /etc/pve/storage.cfg I see that the pool is "normal" (as
it is in ceph).


But when I look at the GUI I see that the pool is "rbd" (which would be the
default pool when initializing ceph).


See attachments.


the attachments did not come through, but i guess you are running into
https://bugzilla.proxmox.com/show_bug.cgi?id=2058

an update will fix that



Re: [PVE-User] VIOSCSI, WSFC, and S2D woes - Solved! & RFE

2019-02-13 Thread Dominik Csapak

On 2/12/19 5:45 PM, Edwin Pers wrote:

Tried that this morning, no luck. Not having much luck finding anything about 
83h in vioscsi, but I did find a few things:

https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg04999.html
https://marc.info/?l=qemu-devel&m=146296689703152&w=2
https://lists.ovirt.org/archives/list/us...@ovirt.org/message/VOYDNZT5UK3773GW2GU6DFJND4RQPZCO/

Most of that is related to pointing vioscsi at a remote iscsi target though, 
instead of my use case of a local disk image/block device.

Later - I got it working!
I had to specify the wwn and serial parameters on the -device parameter, like 
so:

-drive 
file=/mnt/pve/cc1-sn1/images/5007/vm-5007-disk-1.raw,if=none,id=drive-scsi1,cache=writeback,format=raw,aio=threads,detect-zeroes=on\
-device 
scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,wwn=0x5000c50015ea71ad,serial=yCCfBgH1\
   <- note the wwn= and serial= parameters

Currently testing this out by running the vm manually; looks like I'll have to add
this disk via the args: entry in the .conf, which is acceptable.

I suppose at this point we can call this an RFE to expose the wwn= and serial=
parameters in the api in some future version.
We might be able to randomly generate wwn/serial entries, but I don't know nearly
enough about the sas protocols to say whether that is a good idea or not.


sounds sensible, serial is already exposed (you can set it via api or 
qm), wwn is not afaics


can you open an enhancement request for the wwn? 
https://bugzilla.proxmox.com




Some more references that I found:
https://lists.wpkg.org/pipermail/stgt/2013-May/018875.html
https://ipads.se.sjtu.edu.cn:1312/qiuzhe/qemu-official/commit/fd9307912d0a2ffa0310f9e20935d96d5af0a1ca
https://bugzilla.redhat.com/show_bug.cgi?id=831102 <- this is the one that got 
me on the right track finally.

Full kvm command is here for those interested:
https://gist.github.com/epers/b0340c897c4403ba09b247f2d614b674

-Original Message-
From: pve-user  On Behalf Of Dominik Csapak
Sent: Tuesday, February 12, 2019 3:12 AM
To: pve-user@pve.proxmox.com
Subject: Re: [PVE-User] VIOSCSI, WSFC, and S2D woes

On 2/11/19 9:18 PM, Edwin Pers wrote:

Happy Monday all,
Trying to get storage spaces direct running. WinSvr2016 guests, PVE 5.2-2, NFS 
shared storage for the guest disk images.
I'm getting an error when running the cluster validator in windows: "The required 
inquiry data (SCSI page 83h VPD descriptor) was reported as not being supported."
As a result, I'm unable to run s2d.
It looks like the RHEL guys had to make some changes in vioscsi.sys & qemu:
https://bugzilla.redhat.com/show_bug.cgi?id=1219841
Have these changes made it into pve? Or am I overlooking something?
Any thoughts on this matter are appreciated.



afaics from the bug report, this should be fixed since 2016. If their changes 
made it into the upstream qemu (unknown, since they do not disclose what needs 
to change), our qemu version should include it

you can try to upgrade to a current version (PVE 5.3, with qemu 2.12.1) and use 
the most recent virtio drivers

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VIOSCSI, WSFC, and S2D woes

2019-02-12 Thread Dominik Csapak

On 2/11/19 9:18 PM, Edwin Pers wrote:

Happy Monday all,
Trying to get storage spaces direct running. WinSvr2016 guests, PVE 5.2-2, NFS 
shared storage for the guest disk images.
I'm getting an error when running the cluster validator in windows: "The required 
inquiry data (SCSI page 83h VPD descriptor) was reported as not being supported."
As a result, I'm unable to run s2d.
It looks like the RHEL guys had to make some changes in vioscsi.sys & qemu:
https://bugzilla.redhat.com/show_bug.cgi?id=1219841
Have these changes made it into pve? Or am I overlooking something?
Any thoughts on this matter are appreciated.



afaics from the bug report, this should be fixed since 2016.
If their changes made it into the upstream qemu (unknown, since they do
not disclose what needs to change), our qemu version should include it

you can try to upgrade to a current version (PVE 5.3, with qemu 2.12.1)
and use the most recent virtio drivers

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Removing VM does not remove it from backup

2019-02-10 Thread Dominik Csapak

On 2/10/19 9:40 AM, Sten Aus wrote:

Hi

Is it me or ... :D

Using Proxmox 5.3 and I've removed a VM from the GUI, but this does not remove 
the VMID from the vzdump.cron file. So, the next time the backup runs, I get an 
error that this VM does not exist.




this is expected behaviour at the moment
see https://bugzilla.proxmox.com/show_bug.cgi?id=1291
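Until that bug is resolved, the stale VMID can be dropped from the backup job's ID list by hand. A hedged sketch of the edit (the cron line and VMID below are made-up examples; on a real system the file to change is /etc/pve/vzdump.cron):

```shell
# Sketch: remove a deleted VMID from the comma-separated ID list of a
# vzdump cron line. The sample line is an assumption for illustration.
VMID=100
line="0 2 * * 6 root vzdump 100,101,102 --mode snapshot --quiet 1"
# extract the ID list, filter out the removed VMID, rebuild the line
ids=$(echo "$line" | sed -E 's/.*vzdump ([0-9,]+).*/\1/')
newids=$(echo "$ids" | tr ',' '\n' | grep -vx "$VMID" | paste -sd, -)
cleaned=$(echo "$line" | sed "s/$ids/$newids/")
echo "$cleaned"
```
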

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Error trying to dist-upgrade a fresh installation

2019-01-24 Thread Dominik Csapak

On 1/24/19 3:15 PM, Gilberto Nunes wrote:

Jan 24 12:12:52 pve01 pvedaemon[2766]: Can't load
'/usr/lib/x86_64-linux-gnu/perl5/5.24/auto/PVE/RADOS/RADOS.so' for module
PVE::RADOS


does that file exist?

it should be contained in the package

librados2-perl

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Error trying to dist-upgrade a fresh installation

2019-01-24 Thread Dominik Csapak

On 1/24/19 2:21 PM, Gilberto Nunes wrote:

Hi list

I have a fresh installation here, and when I try to upgrade it I get some
errors:
apt dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n]
Setting up pve-manager (5.3-8) ...
Job for pvedaemon.service failed because the control process exited with
error code.
See "systemctl status pvedaemon.service" and "journalctl -xe" for details.
dpkg: error processing package pve-manager (--configure):
  subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
  pve-manager
E: Sub-process /usr/bin/dpkg returned an error code (1)

systemctl status pvedaemon.service
* pvedaemon.service - PVE API Daemon
Loaded: loaded (/lib/systemd/system/pvedaemon.service; enabled; vendor
preset: enabled)
Active: active (running) (Result: exit-code) since Thu 2019-01-24
11:15:52 -02; 5min ago
   Process: 2634 ExecReload=/usr/bin/pvedaemon restart (code=exited,
status=2)
  Main PID: 1366 (pvedaemon)
 Tasks: 4 (limit: 4915)
Memory: 115.6M
   CPU: 6.251s
CGroup: /system.slice/pvedaemon.service
|-1366 pvedaemon
|-1369 pvedaemon worker
|-1370 pvedaemon worker
`-1371 pvedaemon worker

Jan 24 11:19:38 pve01 pvedaemon[2634]: Compilation failed in require at
/usr/share/perl5/PVE/API2/Cluster.pm line 13,  line 755.
Jan 24 11:19:38 pve01 pvedaemon[2634]: BEGIN failed--compilation aborted at
/usr/share/perl5/PVE/API2/Cluster.pm line 13,  line
Jan 24 11:19:38 pve01 pvedaemon[2634]: Compilation failed in require at
/usr/share/perl5/PVE/API2.pm line 13,  line 755.
Jan 24 11:19:38 pve01 pvedaemon[2634]: BEGIN failed--compilation aborted at
/usr/share/perl5/PVE/API2.pm line 13,  line 755.
Jan 24 11:19:38 pve01 pvedaemon[2634]: Compilation failed in require at
/usr/share/perl5/PVE/Service/pvedaemon.pm line 8,  line
Jan 24 11:19:38 pve01 pvedaemon[2634]: BEGIN failed--compilation aborted at
/usr/share/perl5/PVE/Service/pvedaemon.pm line 8,  l
Jan 24 11:19:38 pve01 pvedaemon[2634]: Compilation failed in require at
/usr/bin/pvedaemon line 11,  line 755.
Jan 24 11:19:38 pve01 pvedaemon[2634]: BEGIN failed--compilation aborted at
/usr/bin/pvedaemon line 11,  line 755.
Jan 24 11:19:38 pve01 systemd[1]: pvedaemon.service: Control process
exited, code=exited status=2
Jan 24 11:19:38 pve01 systemd[1]: Reload failed for PVE API Daemon.




can you post the complete error from the journal?
also the complete output of

apt update
apt dist-upgrade

could be helpful

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox - CT problem

2018-11-26 Thread Dominik Csapak

On 11/26/18 1:57 PM, lord_Niedzwiedz wrote:

         Hi,
I have a debian-9-turnkey-symfony_15.0-1_amd64 container.
Which worked well for half a year.
Now, every now and then, mysql disappears on me.
How is this possible?
I do not touch or change anything.
Are there any auto updates inside?
Any idea what this may be caused by?
After restoring the base version, everything is ok for a day or two, and 
then it breaks again  ;-/


Linux walls 4.15.18-4-pve #1 SMP PVE 4.15.18-23 (Thu, 30 Aug 2018 
13:04:08 +0200) x86_64

You have mail.
root@walls ~# /etc/init.d/mysql restart
[] Restarting mysql (via systemctl): mysql.serviceFailed to restart 
mysql.service: Unit mysql.service not found.

  failed!
root@walls ~# service mysqld restart
Failed to restart mysqld.service: Unit mysqld.service not found.
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


first, please write a new message to the mailing list instead of 
replying to an existing thread with a new topic


second, it seems there was an issue with mysql and turnkeylinux
https://www.turnkeylinux.org/blog/debian-secupdate-breaks-lamp-server

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] deleting a cluster

2018-10-29 Thread Dominik Csapak

On 10/29/18 11:00 AM, Adam Weremczuk wrote:

Hi all,

I'm experimenting with 5.2 and trying to delete a cluster I created on 
one of the nodes using web GUI.


I haven't managed to find any options in web GUI, shell or documentation.


please have a look at
https://pve.proxmox.com/wiki/Cluster_Manager#_remove_a_cluster_node

especially the section 'Separate A Node Without Reinstalling'



The closest thing appears to be:

pvecm delnode node1
Cannot delete myself from cluster!

Can you please advise how to completely purge current cluster's settings?

Thanks,
Adam

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user





Re: [PVE-User] DHCP for non cloudinit VM

2018-08-20 Thread Dominik Csapak
but that cannot happen without an agent/cloud-init-like program in the 
vm, so what (and why, if we already have cloud-init) should we implement there?


On 08/20/2018 03:25 PM, José Manuel Giner wrote:
I mean, when you install the ISO of an operating system on a VM, when 
configuring the network, that the user can choose the DHCP option 
instead of defining the values by hand.




On 20/08/2018 15:21, Ian Coetzee wrote:

Hi José,

Using the Qemu Agent you are able to determine the IP of the VM.

Kind regards

On Mon, 20 Aug 2018 at 13:43, José Manuel Giner  wrote:


The possibility of being able to define IPs directly from the
Proxmox/API interface. Just like with Cloud-init or containers.



On 20/08/2018 11:14, Dominik Csapak wrote:

On 08/20/2018 09:36 AM, José Manuel Giner wrote:

Hello,

there any plan to implement DHCP for non cloudinit VMs?

Thanks!




the question does not really make sense as we did not implement dhcp for
cloudinit, the config there only tells the vm how to configure its
network (same as with containers, where we also don't implement a dhcp
server)

but where is the problem in having a host/vm in your network serving
dhcp?

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user



--
José Manuel Giner
http://ginernet.com

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user









Re: [PVE-User] DHCP for non cloudinit VM

2018-08-20 Thread Dominik Csapak

On 08/20/2018 09:36 AM, José Manuel Giner wrote:

Hello,

there any plan to implement DHCP for non cloudinit VMs?

Thanks!




the question does not really make sense as we did not implement dhcp for 
cloudinit, the config there only tells the vm how to configure its 
network (same as with containers, where we also don't implement a dhcp 
server)


but where is the problem in having a host/vm in your network serving
dhcp?

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Can't set password with qm guest passwd

2018-07-29 Thread Dominik Csapak
yes there was a perl import missing, i already sent a fix on the devel 
list, see:


https://pve.proxmox.com/pipermail/pve-devel/2018-July/033180.html

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Network Interface Card Passthroug

2018-03-05 Thread Dominik Csapak
you should verify that iommu grouping is correct and works, e.g. with 
this script (courtesy of the arch linux wiki)


8<-
#!/bin/bash
shopt -s nullglob
for d in /sys/kernel/iommu_groups/*/devices/*; do
n=${d#*/iommu_groups/*}; n=${n%%/*}
printf 'IOMMU Group %s ' "$n"
lspci -nns "${d##*/}"
done;
>8-

also verify that vt-d is activated in the bios

On 03/02/2018 06:06 PM, Gilberto Nunes wrote:

pve02:~# cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-4.13.13-6-pve root=/dev/mapper/pve-root ro quiet
intel_iommu=on,iommu=pt,igfx_off,pass-through pci-stub.ids=dd01:0003
pve02:~# cat /etc/pve/qemu-server/100.conf
boot: dcn
bootdisk: scsi0
cores: 1
machine: q35
hostpci0: 00:04.0
ide2: local:iso/mini-bionic-net-install.iso,media=cdrom
kvm: 0
memory: 512
name: VMTESTE
net0: virtio=4E:DA:E3:90:AA:AC,bridge=vmbr0
numa: 0
ostype: l26
scsi0: local-lvm:vm-100-disk-1,size=8G
scsihw: virtio-scsi-pci
smbios1: uuid=67dc9730-f0b5-4bc1-a99f-52253ef18903
sockets: 1
pve02:~# qm start 100
Cannot open iommu_group: No such file or directory

Where am I wrong???

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user






Re: [PVE-User] Proxmox SPICE

2018-02-06 Thread Dominik Csapak

Hi,

do you mean this? https://github.com/eyeos/spice-web-client

the last time i tried, i needed a websocket proxy in between.
but i did not see any advantages versus novnc, so i did not test very long

On 02/04/2018 02:33 PM, Gilberto Nunes wrote:

Hi

Is there a way to use SPICE HTML5 client with Proxmox spiceproxy?


Thanks
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user






Re: [PVE-User] Question about zfs-zed

2018-01-31 Thread Dominik Csapak

it seems you do not have our repositories configured correctly?
what does
apt list | grep zfs-zed
and
apt show zfs-zed
say?

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Cluster issue!

2018-01-30 Thread Dominik Csapak

On 01/30/2018 08:09 PM, Gilberto Nunes wrote:

Hi there

After I change the corosync.conf, the cluster is functioning again:

Here's the original corosync.conf, just after I create the cluster:

logging {
   debug: off
   to_syslog: yes
}

nodelist {
   node {
 name: pve01
 nodeid: 1
 quorum_votes: 1
 ring0_addr: 10.10.10.210
   }
   node {
 name: pve02
 nodeid: 2
 quorum_votes: 1
 ring0_addr: 10.10.10.220
   }
}

quorum {
   provider: corosync_votequorum
}

totem {
   cluster_name: HOMECLUSTER
   config_version: 2
   interface {
 bindnetaddr: 10.10.10.120---> this is the IP of "master"
node


the bindnetaddr is not the ip of the master, but is used to determine in 
which network corosync sends/receives (so it should not really matter if 
it is yyy.120 or yyy.0 as long as those are in the same network)
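As a worked illustration of the point above, the network address corosync binds to can be derived from any node IP by masking; a small sketch using the addresses from this thread (the /24 netmask is an assumption):

```shell
# Derive the network address (bindnetaddr) from a node IP and netmask:
# 10.10.10.210/24 -> 10.10.10.0
ip="10.10.10.210"; mask="255.255.255.0"
IFS=. read -r i1 i2 i3 i4 <<EOF
$ip
EOF
IFS=. read -r m1 m2 m3 m4 <<EOF
$mask
EOF
# bitwise AND each octet of the IP with the netmask
net="$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
echo "$net"
```
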



 ringnumber: 0

   }
   ip_version: ipv4
   secauth: on
   version: 2
}
}


And this is the "now working" version:

logging {
   debug: off
   to_syslog: yes
}

nodelist {
   node {
 name: pve01
 nodeid: 1
 quorum_votes: 1
 ring0_addr: 10.10.10.210
   }
   node {
 name: pve02
 nodeid: 2
 quorum_votes: 1
 ring0_addr: 10.10.10.220
   }
}

quorum {
   provider: corosync_votequorum
}

totem {
   cluster_name: HOMECLUSTER
   config_version: 2
   interface {
 bindnetaddr: 10.10.10.0
 ringnumber: 0
 mcastport: 5405
   }
   transport: udpu


i guess this is the thing which made it work, namely that
multicast does not work properly in your network


   ip_version: ipv4
   secauth: on
   version: 2
}
logging {
 fileline: off
 to_logfile: yes
 to_syslog: yes
 debug: off
 logfile: /var/log/cluster/corosync.log
 debug: off
 timestamp: on
 logger_subsys {
 subsys: AMF
 debug: off
 }
}


After reboot, everything is running smoothly

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36




2018-01-30 15:39 GMT-02:00 Gilberto Nunes :


Hi

I have a fresh instalation of Proxmox 5.1.
In the /etc/hosts I have:

127.0.0.1 localhost.localdomain localhost
10.10.10.210 pve01.domain.com pve01 pvelocalhost
10.10.10.220 pve02.domain.com pve02

in both sides, pve01 and pve02

I form the cluster with the command pvecm create HOMECLUSTER
I ssh to pve02 and do pvecm add pve01.
The cluster are formed as expected, but after 2 minutes, I get this error
in /var/log/syslog:

Jan 30 15:23:04 pve01 corosync[1482]: error   [TOTEM ] FAILED TO RECEIVE
Jan 30 15:23:04 pve01 corosync[1482]:  [TOTEM ] FAILED TO RECEIVE
Jan 30 15:23:05 pve01 corosync[1482]: notice  [TOTEM ] A new membership (
10.10.10.210:12) was formed. Members left: 2
Jan 30 15:23:05 pve01 corosync[1482]: notice  [TOTEM ] Failed to receive
the leave message. failed: 2
Jan 30 15:23:05 pve01 corosync[1482]:  [TOTEM ] A new membership (
10.10.10.210:12) was formed. Members left: 2
Jan 30 15:23:05 pve01 corosync[1482]:  [TOTEM ] Failed to receive the
leave message. failed: 2

So, I stop the cluster ( systemctl stop pve-cluster;systemctl stop
corosync) and start pmxcfs -l (locally).
I saw that in /etc/pve/corosync.conf file, the statement line:

 bindnetaddr: 10.10.10.210

So after I change this line to this:

 bindnetaddr: 10.10.10.0

and restart both nodes, the cluster back to normality.

Shouldn't the pvecm script have added this second line?
Why do I need to change it to the network address myself, instead of the
pvecm script doing this automatically??

I cannot understand!

Any advice?

Thanks a lot.





---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user






Re: [PVE-User] Trooble with ceph on the last version of proxmox

2017-09-11 Thread Dominik Csapak

On 09/11/2017 11:47 AM, Jean-mathieu CHANTREIN wrote:

When I add my ceph-vm pool to my proxmox storage, I can not use it, and my 
local storage is no longer visible when I try to create VMs; the ceph pools are 
no longer visible in the GUI but clearly visible in the CLI.


are you sure you have copied the keyring to the correct place?

/etc/pve/priv/ceph/.keyring
?

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] USB Passthrough doesn't work???

2017-09-03 Thread Dominik Csapak

On 09/01/2017 12:27 AM, Gilberto Nunes wrote:

Hi

I am trying to use USB passthrough in Proxmox VE 5.0, but I need to deactivate
and activate it again in order to see the USB device inside the VM.
On the other hand, when I use
qm set  -scsi2 /dev/sdc

I am able to add and remove the device correctly, without taking the VM down!
It seems clear to me that adding a USB pendrive, for example, doesn't
work.
Perhaps I'm doing something wrong!



hi,

as i already wrote on August 14 
(https://pve.proxmox.com/pipermail/pve-user/2017-August/168672.html)


usb hotplug currently does not work because of some restrictions 
regarding live migration


so for now, if you want to add usb devices with our tools,
you have to power the vm off and on again

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Migration error!

2017-08-24 Thread Dominik Csapak

On 08/23/2017 08:50 PM, Gilberto Nunes wrote:

more info:


pvesr status
JobID  EnabledTarget   LastSync
NextSync   Duration  FailCount State
100-0  Yeslocal/prox01-
  2017-08-23_15:55:04   3.151884  1 command 'set -o pipefail &&
pvesm export local-zfs:vm-100-disk-1 zfs - -with-snapshots 1 -snapshot
__replicate_100-0_1503514204__ | /usr/bin/cstream -t 102400 |
/usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=prox01' root@10.1.1.10 --
pvesm import local-zfs:vm-100-disk-1 zfs - -with-snapshots 1' failed: exit
code 255
100-1  Yeslocal/prox02-
  2017-08-23_15:55:01   3.089044  1 command 'set -o pipefail &&
pvesm export local-zfs:vm-100-disk-1 zfs - -with-snapshots 1 -snapshot
__replicate_100-1_1503514201__ | /usr/bin/cstream -t 102400 |
/usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=prox02' root@10.1.1.20 --
pvesm import local-zfs:vm-100-disk-1 zfs - -with-snapshots 1' failed: exit
code 255




according to this output, no lastsync was completed, so i guess the 
replication did never work, so the migration will also not worK?


i would remove all replication jobs (maybe with -force, via commandline),
delete all images of this vm from all nodes where the vm is *not* at the 
moment (afaics from prox01 and prox02, as the vm is currently on prox03)


then add the replication again wait for it to complete (verify with 
pvesr status) and try again to migrate


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] USB Devices hotplug

2017-08-14 Thread Dominik Csapak

On 08/11/2017 03:31 PM, Gilberto Nunes wrote:

Ok!
I performe a fresh installation and just realize that with Virtio SCSI
Single, command qm set doesn't work.
I change to Virtio SCSI and qm set works properly.
But USB Device passthrough doesn't work, unless I turn off the VM. It's not
suppose to work on line??? I meant hotplug???

Thanks any way


Hi,

i just want to chime in here, to clarify a few things:

the 'USB hotplug' option in the gui is sadly a bit mislabeled, as
currently it only controls hotplug of the 'use tablet as pointer 
device' option, not usb in general


with usb hotplug we currently have a few problems namely:

when adding many usb devices, it can happen that qemu
adds a usb-hub without an id, which we then can never remove (because it 
has no id), so no more live migration of this vm, even if you remove all 
usb devices again


when adding the first usb2 device, we hotplug an usb2 controller,
but last time i checked, this could not be hot-unplugged, so
we are again in a situation where we cannot live migrate,
even when removing all usb devices

there are some ideas how to work around those issues, but
we have to be careful not to break old->new live migration completely

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] proxmox 5 - replication fails

2017-07-12 Thread Dominik Csapak

hi,

i reply here, to avoid confusion in the other thread

can you post the content of the two files:

/etc/pve/replication.cfg
/var/lib/pve-manager/pve-replication-state.json (of the source node)

?


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] qm agent

2017-06-01 Thread Dominik Csapak

On 05/31/2017 10:06 PM, Gilberto Nunes wrote:

Hi
I have Windows 7 64-bit installed in PVE 5 (last beta) and after installing
qemu-agent I try to trigger the following command:

qm agent 101 network-get-interfaces

But nothing happen!

Is there something more can I do, in order to make this work??

Thanks





the reason for this is that the qemu guest agent builds for windows are 
still based on qemu 0.12 release from rhel6,

which does not have this functionality

there is already a bug open on our bugtracker 
https://bugzilla.proxmox.com/show_bug.cgi?id=1356


and we also opened one on the redhat bugtracker, where they acknowledged it

i hope they update their build in a future release of the virtio-win iso

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Can not select language/realm at login using Firefox52

2017-03-20 Thread Dominik Csapak

On 03/18/2017 04:34 AM, ribbon wrote:

When using Firefox 52 on Windows 10, proxmox ve 4.4 / 4.3 can not
select the language or realm at login. It can be selected with MSIE 11
and Edge. And Firefox 52 on openSUSE Leap works well.

Is it a bug in Firefox for windows?


is it maybe this bug?

https://bugzilla.proxmox.com/show_bug.cgi?id=1223



___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] qemu write cache mode

2017-01-03 Thread Dominik Csapak

On 01/03/2017 11:33 AM, Dhaussy Alexandre wrote:

Hello,

I just spotted something strange with storage cache mode.

I set cache=none on all my VMs (i believe this is the default), however
qm monitor says "writeback, direct".

Am i missing something ?



from the kvm manpage:

The host page cache can be avoided entirely with cache=none. This will 
attempt to do disk IO directly to the guest's memory. QEMU may still 
perform an internal copy of the data. Note that this is considered a 
writeback mode and the guest OS must handle the disk write cache 
correctly in order to avoid data corruption on host crashes.


so cache=none is considered a writeback mode and this is represented by
writeback, direct

if you select writeback mode for a disk it shows only
writeback

so i think this is correct

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Windows & Gluster 3.8

2016-12-15 Thread Dominik Csapak

On 12/15/2016 09:23 AM, Wolfgang Bumiller wrote:

On December 14, 2016 at 1:25 PM Lindsay Mathieson  
wrote:


When host VM's on gluster you should be setting the following:

(...)

And possibly consider downgrading to 3.8.4?


Unfortunately I'll have to confirm that there are a few bugs in
versions prior and after 3.8.4 which are easily triggered with qemu.

Though I just saw 3.8.7 is available by now which should also contain the
fixes. Seems to work in my local tests. Would be nice if some more people
could test it.



i tested a bit here with

proxmox 4.4-1
glusterfs 3.8.7-1

bricks:

3x1

3 hosts

what i tested and worked:

vm create & install (debian 8) with qcow2
little usage (install some packages, copied some files)
snapshot and rollback
clone offline
linked clones of templates (also across hosts)

what did not work reliably:

online clone (the source vm sometimes simply stops; have to investigate 
whether this is another gluster or qemu bug)


what i did not test:

migration, raw and vmdk
different replicas



___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] vm description for vm #a seems to be used for all new created vms

2016-12-07 Thread Dominik Csapak

On 12/07/2016 01:43 PM, IMMO WETZEL wrote:

Hi,

it looks like, if the description for a vm is set via an API call, this 
description is afterwards used for all subsequently created VMs if they are 
created via the GUI.
Can someone please verify this. If so, I can file the Bugzilla request.



hi, no i cannot reproduce that,

how do you call the api exactly?


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] api call to get the right node name

2016-11-17 Thread Dominik Csapak

On 11/17/2016 02:49 PM, IMMO WETZEL wrote:

HI,

is there any direct api call to get the node name where the vm is currently 
running on ?



not directly no,
but you can call /cluster/resources and parse the output for your vm
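A minimal sketch of that parsing step. The JSON below is a mocked /cluster/resources response and the VMID is an assumption; on a real node the data could be fetched with e.g. `pvesh get /cluster/resources --output-format json` or over the HTTP API:

```shell
# Mocked /cluster/resources payload: one node entry, one qemu guest entry.
resources='[{"type":"node","node":"pve01","status":"online"},
            {"type":"qemu","vmid":100,"name":"testvm","node":"pve02"}]'
# Filter for the qemu entry with the wanted vmid and print its node.
node=$(echo "$resources" | python3 -c '
import json, sys
vmid = 100  # the VM we are looking for (assumption)
for r in json.load(sys.stdin):
    if r.get("type") == "qemu" and r.get("vmid") == vmid:
        print(r["node"])
')
echo "$node"
```
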

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Hierarchical pools

2016-11-08 Thread Dominik Csapak

On 11/08/2016 10:01 AM, Angel Docampo wrote:

Hello,

I would like to suggest a feature, or to know how do you do to (if you
do it), when want to add the VMs to more than a pool.

As an example, if I have several pools of VMs, say "Development",
"Systems" and "Lab", I would like to let some users/groups both from
developer and system groups to control the VMs from pool "Lab" with some
elevated privileges like poweroff, CDROM, etc to install and configure
the VM before entering production.

I cannot see how to acomplish this, because one reource (VM/storage) can
belong only to one Pool. As far as I can see, there are two options, let
the pools be hierarchical or let the resources belong to more than one
pool.


if i understand you correctly, you want to achieve something like this:
vms:

Pool System:
vm 100 - 105

Pool Development:
vm 106 - 110

Pool Lab:
vm 111 - 115


Users:

user  1 - 10 : access to system
user 11 - 20: access to development
user 21 - 30: access to lab


now you want user 5 and 15 grant access to "lab" machines?

i would do it like this:

create for each pool a user group (under datacenter -> Permissions -> 
groups)


grant each group the right permissions (under datacenter -> permissions)

and now add the group for system to user 1 -10 and so on
but also add the lab group to user 5 and 15

is this what you wanted?

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] [pve-devel] Feedback wanted - Cluster Dashboard

2016-10-17 Thread Dominik Csapak

On 10/17/2016 05:04 PM, Dietmar Maurer wrote:



IMHO such lists can be quite long, so how do you plan to display
long lists here?



as it is now, it would simply linewrap and make the boxes bigger,
but yes this is a good point, i have to experiment a little with this

imho offline nodes won't be too many i think, and ha error vms should
also not be that much?

if you have that many vms in error state, i think you have bigger
problems than how the list is displayed, or not?


I would not design such problematic GUI. We can easily display
vms/nodes in error state elsewhere?



hmm the nodes are already listed in the grid on the bottom (including 
online status),

we could add another box with the ha error vms?

or we could display the status in the tree on the left (as an icon)
but this would mean we would have to embed the ha status in the 
/cluster/resources api call


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Feedback wanted - Cluster Dashboard

2016-10-17 Thread Dominik Csapak

On 10/17/2016 04:07 PM, Kevin Lemonnier wrote:

t to be able to get each value separately
for monitoring purposes. Right now we monitor each nodes independently,
I would be happy to add a "cluster check" on top of that though !
Would help knowing when it's time to increase the number of nodes.


do you mean the resources from each node?

i only use the /cluster/resources api call
maybe this is what you want?

there you have all resources for each node and vm

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] [pve-devel] Feedback wanted - Cluster Dashboard

2016-10-17 Thread Dominik Csapak

On 10/17/2016 04:52 PM, Dietmar Maurer wrote:

https://www.pictshare.net/cb2c08d9ca.png


Seems you try to display lists of Nodes/Guests in:

Offline Nodes:
Guest with errors:

IMHO such lists can be quite long, so how do you plan to display
long lists here?



as it is now, it would simply linewrap and make the boxes bigger,
but yes this is a good point, i have to experiment a little with this

imho offline nodes won't be too many i think, and ha error vms should 
also not be that much?


if you have that many vms in error state, i think you have bigger 
problems than how the list is displayed, or not?


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Feedback wanted - Cluster Dashboard

2016-10-17 Thread Dominik Csapak

On 10/17/2016 04:19 PM, Michael Rasmussen wrote:

On Mon, 17 Oct 2016 16:17:48 +0200
Michael Rasmussen  wrote:


I would say 5 out of 6 cores are in use, so 83.33 % CPU usage in the
cluster.


Forgot to mention: For people coming from VmWare this makes sense since
that is how vSphere cluster client displays it.



yes, thinking about it, it does really make more sense, thanks :)


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Feedback wanted - Cluster Dashboard

2016-10-17 Thread Dominik Csapak

On 10/17/2016 04:04 PM, Michael Rasmussen wrote:

On Mon, 17 Oct 2016 15:58:35 +0200
Dominik Csapak  wrote:


so cpu is the node average

Why average and not total? If this is supposed to be a cluster wide
dashboard, average gives no meaning.



i agree, but with cpu usage it is not so easy

for example:

if you have 1 node with 50% usage and 2 cores
and 1 with 100% usage and 4 cores

what would you display here?


my initial idea would be to display: 75% used of 6 CPU(s) (maybe i 
should change this to cores)


but you are right it would probably be better to display:

83% of 6 cores

or do you mean something completely different?
(keep in mind we only get # of cores and total cpu usage in the api call 
i use)
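The core-weighted variant suggested above can be computed from exactly those two values per node (a sketch; the node data is the example from this thread):

```python
# Core-weighted cluster CPU usage: each node's usage fraction is
# weighted by its core count, so 50% of 2 cores plus 100% of 4 cores
# gives 5 busy cores out of 6, i.e. ~83%.
nodes = [
    {"cores": 2, "cpu": 0.50},   # 1 busy core
    {"cores": 4, "cpu": 1.00},   # 4 busy cores
]
total_cores = sum(n["cores"] for n in nodes)
busy_cores = sum(n["cores"] * n["cpu"] for n in nodes)
print(f"{busy_cores / total_cores:.0%} of {total_cores} cores")
```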




[PVE-User] Feedback wanted - Cluster Dashboard

2016-10-17 Thread Dominik Csapak

Hi all,

i am currently working on a cluster dashboard,
and wanted to get feedback from you all.

please be aware that this is a mockup only, no functionality yet,
so no patches for now

i will also post it on the forum (maybe tomorrow) to get
additional feedback

please discuss :)

ps: for clarification:

the values under cluster resources are for the whole cluster,
so cpu is the node average
memory is summed up over all nodes
and storages are also summed up over all nodes (but each distinct 
storage is only counted once)
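The "counted once" part can be sketched by keying on the storage name (illustrative data; this assumes storage IDs are cluster-wide, so a shared storage appears under the same name on every node that mounts it):

```python
# Count each shared storage only once: the same shared storage shows
# up once per node in the resource list, so deduplicate by name.
# (Made-up entries; real ones come from /cluster/resources.)
resources = [
    {"type": "storage", "storage": "ceph-pool", "node": "pve1",
     "maxdisk": 100, "disk": 40},
    {"type": "storage", "storage": "ceph-pool", "node": "pve2",
     "maxdisk": 100, "disk": 40},   # same shared storage, second node
    {"type": "storage", "storage": "local", "node": "pve1",
     "maxdisk": 50, "disk": 10},
]
seen = {}
for r in resources:
    if r["type"] == "storage":
        seen.setdefault(r["storage"], r)   # keep first sighting only
total = sum(s["maxdisk"] for s in seen.values())
used = sum(s["disk"] for s in seen.values())
print(used, "/", total)
```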


and yes i know that the status on top is not consistent with the one
on the bottom :P

https://www.pictshare.net/cb2c08d9ca.png



Re: [PVE-User] Remove node name from title

2016-10-12 Thread Dominik Csapak

On 10/12/2016 10:31 AM, Sten Aus wrote:

Hi

How can I remove node name from the web title?



is there a specific reason why you want to remove the nodename?
imho it is very useful when you have multiple tabs for multiple nodes,
as you can directly see which tab is which node



Re: [PVE-User] Live Migration Problem

2016-07-15 Thread Dominik Csapak

this seems to be a qemu bug

see here:
https://forum.proxmox.com/threads/vm-freezeing-with-vcpu-at-100-when-doing-livemigration.25987/

and here:
http://lists.gnu.org/archive/html/qemu-discuss/2014-02/msg2.html

On 07/15/2016 02:34 PM, Kilian Ries wrote:

Hi,

today i tested different virtual CPU configurations for the KVM, for example:

- qemu64
- kvm64
- 1 core / 1 cpu
- NUMA active / deactivated

but every time i migrate from Host1 -> Host2 the KVM freezes. Migration from Host2 
-> Host1 works without any problem.

The Host-CPU configuration is:

Host 1:
2x AMD Opteron(tm) Processor 4234

Host 2:
2x AMD Opteron(tm) Processor 4184



From the Proxmox Wiki:

"in order to guarantee migration between physical hosts does not result in 
non-functioning virtual machines, QEMU & KVM disable the guest's ability to directly 
access some of the features which may be exclusive to the host CPU."

https://pve.proxmox.com/wiki/Allow_Guests_Access_to_Host_CPU

I did a quick comparison of the CPU settings / feature flags on Host1 and Host2 
and both were the same (inside the KVM; running with kvm64 CPU type).


Does anybody have an idea what the problem is?


Von: pve-user  im Auftrag von Kilian Ries 

Gesendet: Donnerstag, 14. Juli 2016 16:40
An: pve-user@pve.proxmox.com
Betreff: Re: [PVE-User] Live Migration Problem

Thanks for the hint, there was an error with the sources.list. Now i'm really 
on the latest version:

pveversion -v

proxmox-ve: 4.2-56 (running kernel: 4.4.13-1-pve)
pve-manager: 4.2-15 (running version: 4.2-15/6669ad2c)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.13-1-pve: 4.4.13-56
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-42
qemu-server: 4.0-83
pve-firmware: 1.1-8
libpve-common-perl: 4.0-70
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-55
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-19
pve-container: 1.0-70
pve-firewall: 2.0-29
pve-ha-manager: 1.0-32
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5.7-pve10~bpo80




However, after live-migration from host1 to host2 the VM is still frozen:

###
Jul 14 16:37:54 starting migration of VM 104 to node 'proxmox2' 
(192.168.100.253)
Jul 14 16:37:54 copying disk images
Jul 14 16:37:55 starting VM 104 on remote node 'proxmox2'
Jul 14 16:37:57 start remote tunnel
Jul 14 16:37:58 starting online/live migration on 
unix:/run/qemu-server/104.migrate
Jul 14 16:37:58 migrate_set_speed: 8589934592
Jul 14 16:37:58 migrate_set_downtime: 0.1
Jul 14 16:37:58 set migration_caps
Jul 14 16:37:58 set cachesize: 53687091
Jul 14 16:37:58 start migrate command to unix:/run/qemu-server/104.migrate
Jul 14 16:38:00 migration status: active (transferred 119489478, remaining 
113209344), total 546119680)
Jul 14 16:38:00 migration xbzrle cachesize: 33554432 transferred 0 pages 0 
cachemiss 0 overflow 0
Jul 14 16:38:02 migration speed: 128.00 MB/s - downtime 92 ms
Jul 14 16:38:02 migration status: completed
Jul 14 16:38:05 migration finished successfully (duration 00:00:11)
TASK OK
###

From host2 to host1 everything is fine ...

________
Von: pve-user  im Auftrag von Dominik Csapak 

Gesendet: Donnerstag, 14. Juli 2016 15:37
An: pve-user@pve.proxmox.com
Betreff: Re: [PVE-User] Live Migration Problem

On 07/14/2016 02:05 PM, Kilian Ries wrote:

Both systems are up to date (apt-get dist-upgrade doesn't show me any package 
to upgrade).

pveversion -v

proxmox-ve: 4.2-48 (running kernel: 4.4.6-1-pve)
pve-manager: 4.2-2 (running version: 4.2-2/725d76f0)
pve-kernel-4.4.6-1-pve: 4.4.6-48
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-39
qemu-server: 4.0-72
pve-firmware: 1.1-8
libpve-common-perl: 4.0-59
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-50
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-14
pve-container: 1.0-62
pve-firewall: 2.0-25
pve-ha-manager: 1.0-28
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve9~jessie


you probably have an error in your repository configuration,
see https://pve.proxmox.com/wiki/Package_repositories





Von: pve-user  im Auftrag von Thomas Lamprecht 

Gesendet: Donnerstag, 14. Juli 2016 13:51
An: pve-user@pve.proxmox.com
Betreff: Re: [PVE-User] Live Migration Problem

Hi


On 07/14/2016 12:51 PM, Kilian Ries wrote:

Just tested it, ssh works in both directions.

As additional information here is the migration output from proxmox:

###
Jul 14 12:46:03 starting migration of VM 101 to node 'proxmox2' 
(192.168.100.253)
Jul 14 12:46:03 copying disk images
Jul 14 12:46:04 starting VM 101 on remote node 'proxmox2'
Jul 14 12:46:05 starting ssh migration tunnel
Jul 14

Re: [PVE-User] Live Migration Problem

2016-07-14 Thread Dominik Csapak

On 07/14/2016 02:05 PM, Kilian Ries wrote:

Both systems are up to date (apt-get dist-upgrade doesn't show me any package 
to upgrade).

pveversion -v

proxmox-ve: 4.2-48 (running kernel: 4.4.6-1-pve)
pve-manager: 4.2-2 (running version: 4.2-2/725d76f0)
pve-kernel-4.4.6-1-pve: 4.4.6-48
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-39
qemu-server: 4.0-72
pve-firmware: 1.1-8
libpve-common-perl: 4.0-59
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-50
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-14
pve-container: 1.0-62
pve-firewall: 2.0-25
pve-ha-manager: 1.0-28
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve9~jessie


you probably have an error in your repository configuration,
see https://pve.proxmox.com/wiki/Package_repositories





Von: pve-user  im Auftrag von Thomas Lamprecht 

Gesendet: Donnerstag, 14. Juli 2016 13:51
An: pve-user@pve.proxmox.com
Betreff: Re: [PVE-User] Live Migration Problem

Hi


On 07/14/2016 12:51 PM, Kilian Ries wrote:

Just tested it, ssh works in both directions.

As additional information here is the migration output from proxmox:

###
Jul 14 12:46:03 starting migration of VM 101 to node 'proxmox2' 
(192.168.100.253)
Jul 14 12:46:03 copying disk images
Jul 14 12:46:04 starting VM 101 on remote node 'proxmox2'
Jul 14 12:46:05 starting ssh migration tunnel
Jul 14 12:46:06 starting online/live migration on localhost:6


can you do an update, via:

apt-get update
apt-get dist-upgrade

it seems that you use older package versions than available in the repos.
We use UNIX sockets for securely forwarding the migration to the other
node; your log shows that it uses TCP ones.

And ensure that both nodes are on the same versions, else you may get
problems when migrating from new to old..
Old to new works.

cheers


Jul 14 12:46:06 migrate_set_speed: 8589934592
Jul 14 12:46:06 migrate_set_downtime: 0.1
Jul 14 12:46:08 migration status: active (transferred 127103147, remaining 
262205440), total 2156732416)
Jul 14 12:46:08 migration xbzrle cachesize: 134217728 transferred 0 pages 0 
cachemiss 0 overflow 0
Jul 14 12:46:10 migration status: active (transferred 248201655, remaining 
123899904), total 2156732416)
Jul 14 12:46:10 migration xbzrle cachesize: 134217728 transferred 0 pages 0 
cachemiss 0 overflow 0
Jul 14 12:46:12 migration speed: 341.33 MB/s - downtime 54 ms
Jul 14 12:46:12 migration status: completed
Jul 14 12:46:16 migration finished successfully (duration 00:00:13)
TASK OK
###


Von: pve-user  im Auftrag von Jean-Laurent Ivars 

Gesendet: Donnerstag, 14. Juli 2016 12:07
An: PVE User List
Betreff: Re: [PVE-User] Live Migration Problem

hello

maybe just a silly idea, but did you try to ssh from Host1 -> Host2? maybe 
it's just a known_hosts issue…

regards


Jean-Laurent Ivars
Responsable Technique | Technical Manager
22, rue Robert - 13007 Marseille
Tel: 09 84 56 64 30 - Mobile: 06.52.60.86.47
Linkedin | Viadeo | www.ipgenius.fr

Le 14 juil. 2016 à 12:00, Kilian Ries  a écrit :

Hi,


just installed a two-node proxmox 4.2 cluster:



###

proxmox-ve: 4.2-48 (running kernel: 4.4.6-1-pve)
pve-manager: 4.2-2 (running version: 4.2-2/725d76f0)
pve-kernel-4.4.6-1-pve: 4.4.6-48
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-39
qemu-server: 4.0-72
pve-firmware: 1.1-8
libpve-common-perl: 4.0-59
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-50
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-14
pve-container: 1.0-62
pve-firewall: 2.0-25
pve-ha-manager: 1.0-28
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve9~jessie

###



I'm trying a live Migration via NFS-Storage with a KVM. Migration from Host 2 -> 
Host 1 always works, Migration from Host 1 -> Host 2 seems to work (no error in 
live-migration output) but the KVM hangs after migration. I can't ping the VM and VNC 
output is frozen.


Tried it several times, always with the same result. The only difference 
between the two hosts are the CPUs:


Host 1:

AMD Opteron(tm) Processor 4234


Host 2:

AMD Opteron(tm) Processor 4184



However, the KVM is set to the default CPU (kvm64).


How can that happen?


Thanks

Greets,

Kilian




Re: [PVE-User] Proxmox storage -> adding ceph storage monhost issue

2016-07-06 Thread Dominik Csapak

On 07/06/2016 03:03 PM, Wolfgang Bumiller wrote:

On Wed, Jul 06, 2016 at 02:16:41PM +0200, Alwin Antreich wrote:

Hi all,

there is an issue when adding IPs in the storage.cfg file at the line monhost, 
when formatted with commas as opposed to
spaces as separator between IPs.


We expect semicolons (';') (though spaces seem to work, too, and commas
work everywhere except with qemu). We should probably enforce semicolons
via the schema definition and improve the documentation & error messages
there.



i think we should split the list of monhosts (with split_list) and then
use the semicolon in the command, this way no existing config with
spaces is made unusable and everyone can use their preferred style of
separating
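A Python sketch of that normalization (PVE itself uses the Perl split_list helper mentioned above; this just mimics the idea of accepting any separator on input while emitting the semicolon form):

```python
import re

def split_list(text):
    # Accept commas, semicolons, or whitespace as separators,
    # loosely mimicking PVE's split_list helper.
    return [p for p in re.split(r"[,;\s]+", text.strip()) if p]

# Any input style normalizes to the semicolon-separated form:
mons = split_list("10.0.0.1, 10.0.0.2 10.0.0.3;10.0.0.4")
print(";".join(mons))
```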



Re: [PVE-User] Cannot boot on a Windows 7 VM from a full-cloned template

2016-03-19 Thread Dominik Csapak
Hi,

could you maybe send your configuration of the template?

(should be under /etc/pve/qemu-server/106.conf)

I can reproduce the issue, but only when the cache mode of the disk is writeback
or writeback(unsafe)

regards
Dominik
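The MD5 mismatch described in the report below can be confirmed with a chunked hash comparison (a sketch; "a.raw"/"b.raw" are placeholder files, not the actual disk images):

```python
import hashlib

def md5_of(path, chunk=1 << 20):
    # Hash a (possibly large) disk image in fixed-size chunks
    # instead of reading it into memory at once.
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Demo with two small files standing in for the images: same size,
# one differing byte, so the hashes differ.
with open("a.raw", "wb") as f:
    f.write(b"\0" * 4096)
with open("b.raw", "wb") as f:
    f.write(b"\0" * 4095 + b"\1")
print(md5_of("a.raw") == md5_of("b.raw"))  # False
```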

> On March 16, 2016 at 7:15 PM Gaël Jobin  wrote:
> 
> Hi all,
> 
> I'm using Proxmox with the testing repo.
> 
> I successfully installed a Windows 7 VM and made a template of it. Then, I
> tried to create a new VM by cloning the previous template (full clone).
> Unfortunately, the new VM cannot boot Windows. On the other hand, with a
> "linked-clone", it works fine.
> 
> I noticed that the cloning was internally doing a "qemu-img convert". More
> precisely in my case, "/usr/bin/qemu-img convert -p -f raw -O raw
> /var/lib/vz/images/106/base-106-disk-1.raw
> /var/lib/vz/images/109/vm-109-disk-1.raw".
> 
> I did the same command manually and was quite surprised to see that the
> new disk has the exact same size but not the same MD5 hash (md5sum command).
> 
> Any idea why qemu-img corrupts the disk?
> 
> For the moment, I just manually "cp" the base disk to my newly created VM
> directory and it's working. Also, I tried to convert the base disk from raw to
> qcow2 and back qcow2 to raw and the new raw disk is booting fine ! The problem
> seems related to "raw to raw" conversion...
> 
> qemu-img version 2.5.0pve-qemu-kvm_2.5-9
> 
> Thank you for your help,
> 
> Regards,
> Gaël
> 


 
