Re: [pve-devel] [PATCH v3 many 0/7] notifications: add SMTP endpoint

2023-09-25 Thread Lukas Wagner

The notification system overhaul is available in the pvetest repository;
however, it does not yet include the SMTP endpoint type. Only gotify
and sendmail (which uses the system's sendmail command to send mail) are
available so far.


--
- Lukas


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH qemu-server 1/3] fix #2816: restore: remove timeout when allocating disks

2023-09-25 Thread Fiona Ebner
Am 20.09.23 um 13:23 schrieb Dominik Csapak:
> comment inline:

Feel free to cut out irrelevant parts in the reply ;)

> On 9/12/23 11:16, Fiona Ebner wrote:
>> @@ -7483,14 +7483,11 @@ sub restore_vma_archive {
>>   $devinfo->{$devname} = { size => $size, dev_id => $dev_id };
>>   } elsif ($line =~ m/^CTIME: /) {
>>   # we correctly received the vma config, so we can disable
>> -    # the timeout now for disk allocation (set to 10 minutes, so
>> -    # that we always timeout if something goes wrong)
>> -    alarm(600);
>> +    # the timeout now for disk allocation

I would interpret this comment about disabling of the timeout to be
talking about the short 5 second timeout for reading the config.

>> +    alarm($oldtimeout || 0);
>> +    $oldtimeout = undef;
> 
> 
> this part looks wrong to me, because AFAIU you want to disable the timeout
> (by canceling the alarm), but what you do here is to set it to $oldtimeout
> if that was set before?
> 
> i guess what we want to do here is:
> 
> 
> alarm(0);
> <... do stuff ...>
> alarm($oldtimeout || 0);
> $oldtimeout = undef;
> 
> 
> ?

Hmm, I see what you mean. But I'd argue that it's unexpected to disable
the outer timeout for the full duration of the allocation from a
caller's perspective.

sub in_a_hurry {
    alarm(120); # outer/old timeout
    restore_vma_archive(...);
}

With the code before the patch, it could take up to 5 + 600 + 120
seconds to hit the outer timeout, with your suggestion up to 5 +
potentially unlimited + 120 seconds, with patched code up to 5 + 120
seconds. Since there currently are no callers setting an outer timeout,
the patch doesn't make the situation worse.

We could even make the calculation more complicated and have the timeout
always be hit within 120 seconds in the example above, but not sure if
worth it.

AFAICS, we do similar "delay" of the outer timeout in e.g.
run_with_timeout(), where it can also take up to $inner_timeout +
$outer_timeout seconds to hit the outer timeout.
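The timing argument above can be sketched concretely. The following is an illustrative Python script (not project code) using signal.setitimer, which behaves like Perl's alarm but with sub-second resolution; the timeouts are scaled down from 5s/120s to 0.05s/0.5s:

```python
import signal
import time

class Timeout(Exception):
    pass

def _on_alarm(signum, frame):
    raise Timeout()

signal.signal(signal.SIGALRM, _on_alarm)

def restore_vma_archive_sketch(read_config, allocate_disks):
    # short inner timeout for reading the config (5s in the real code);
    # setitimer() returns the previous timer's remaining time, i.e. the
    # caller's outer timeout, if any -- like Perl's $oldtimeout = alarm(5)
    old_remaining, _ = signal.setitimer(signal.ITIMER_REAL, 0.05)
    read_config()
    # config received: re-arm the caller's outer timeout instead of a fixed
    # inner one -- this mirrors the patched alarm($oldtimeout || 0)
    signal.setitimer(signal.ITIMER_REAL, old_remaining or 0)
    allocate_disks()
    signal.setitimer(signal.ITIMER_REAL, 0)  # done, cancel any pending timer

# a caller "in a hurry" sets an outer timeout (120s in the example, 0.5s here)
signal.setitimer(signal.ITIMER_REAL, 0.5)
try:
    restore_vma_archive_sketch(lambda: time.sleep(0.01),
                               lambda: time.sleep(10))  # allocation "hangs"
    hit = False
except Timeout:
    hit = True  # the outer timeout fired during the slow allocation
finally:
    signal.setitimer(signal.ITIMER_REAL, 0)

print(hit)  # True: the outer limit is honored, only delayed by the inner phase
```

With the pre-patch behavior (re-arming a fixed 600s alarm after reading the config), the caller's limit would only fire after inner + 600 + outer seconds, matching the 5 + 600 + 120 calculation above. Requires a Unix platform, since signal.setitimer/SIGALRM are not available on Windows.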




Re: [pve-devel] [PATCH qemu-server 1/3] fix #2816: restore: remove timeout when allocating disks

2023-09-25 Thread Dominik Csapak

On 9/25/23 10:46, Fiona Ebner wrote:
> Am 20.09.23 um 13:23 schrieb Dominik Csapak:
>> comment inline:
>
> Feel free to cut out irrelevant parts in the reply ;)
>
>> On 9/12/23 11:16, Fiona Ebner wrote:
>>> @@ -7483,14 +7483,11 @@ sub restore_vma_archive {
>>>    $devinfo->{$devname} = { size => $size, dev_id => $dev_id };
>>>    } elsif ($line =~ m/^CTIME: /) {
>>>    # we correctly received the vma config, so we can disable
>>> -    # the timeout now for disk allocation (set to 10 minutes, so
>>> -    # that we always timeout if something goes wrong)
>>> -    alarm(600);
>>> +    # the timeout now for disk allocation
>
> I would interpret this comment about disabling of the timeout to be
> talking about the short 5 second timeout for reading the config.

ok, i interpreted it to be disabling *any* timeout to be able
to allocate the disks properly, and since there is only one global
timeout here, selectively disabling one seems strange?

>>> +    alarm($oldtimeout || 0);
>>> +    $oldtimeout = undef;
>>
>> this part looks wrong to me, because AFAIU you want to disable the timeout
>> (by canceling the alarm), but what you do here is to set it to $oldtimeout
>> if that was set before?
>>
>> i guess what we want to do here is:
>>
>> alarm(0);
>> <... do stuff ...>
>> alarm($oldtimeout || 0);
>> $oldtimeout = undef;
>>
>> ?
>
> Hmm, I see what you mean. But I'd argue that it's unexpected to disable
> the outer timeout for the full duration of the allocation from a
> caller's perspective.
>
> sub in_a_hurry {
>     alarm(120); # outer/old timeout
>     restore_vma_archive(...);
> }
>
> With the code before the patch, it could take up to 5 + 600 + 120
> seconds to hit the outer timeout, with your suggestion up to 5 +
> potentially unlimited + 120 seconds, with patched code up to 5 + 120
> seconds. Since there currently are no callers setting an outer timeout,
> the patch doesn't make the situation worse.

i get what you mean, but maybe that would warrant a comment on the function?
or maybe we should be able to clean up half allocated disks in there
in case the outer timeout triggers?

in any case, i'd find it good to improve the comment that speaks of
'disabling the timeout' that it's meant to only disable the inner 5s one.

> We could even make the calculation more complicated and have the timeout
> always be hit within 120 seconds in the example above, but not sure if
> worth it.

meh. imho thats not worth it for the (up to) 5 seconds that the extraction
can take.

> AFAICS, we do similar "delay" of the outer timeout in e.g.
> run_with_timeout(), where it can also take up to $inner_timeout +
> $outer_timeout seconds to hit the outer timeout.

exactly, only our "inner" timeout here is undefined/unlimited because
disk allocation can take forever?




Re: [pve-devel] [PATCH manager] ui: vm create wizard: default to 2 cores for win 11 os type

2023-09-25 Thread Fiona Ebner
Am 12.09.23 um 16:38 schrieb Alexander Zeidler:
>  www/manager6/qemu/ProcessorEdit.js | 16 
>  1 file changed, 16 insertions(+)
> 

I think this should be done in www/manager6/qemu/OSDefaults.js instead
of having this afterrender callback with an explicit check.





Re: [pve-devel] [PATCH qemu] fix #2874: SATA: avoid unsolicited write to sector 0 during reset

2023-09-25 Thread Fiona Ebner
Am 24.08.23 um 15:51 schrieb Fiona Ebner:
> If there is a pending DMA operation during ide_bus_reset(), the fact
> that the IDEstate is already reset before the operation is canceled
> can be problematic. In particular, ide_dma_cb() might be called and
> then use the reset IDEstate which contains the signature after the
> reset. When used to construct the IO operation this leads to
> ide_get_sector() returning 0 and nsector being 1. This is particularly
> bad, because a write command will thus destroy the first sector which
> often contains a partition table or similar.
> 
> Upstream discussion:
> https://lists.nongnu.org/archive/html/qemu-devel/2023-08/msg04239.html
> 
> Signed-off-by: Fiona Ebner 

Ping





Re: [pve-devel] [PATCH qemu-server 1/3] fix #2816: restore: remove timeout when allocating disks

2023-09-25 Thread Fiona Ebner
Am 25.09.23 um 10:57 schrieb Dominik Csapak:
> On 9/25/23 10:46, Fiona Ebner wrote:
>> Am 20.09.23 um 13:23 schrieb Dominik Csapak:
>>> On 9/12/23 11:16, Fiona Ebner wrote:
>>>> @@ -7483,14 +7483,11 @@ sub restore_vma_archive {
>>>>    $devinfo->{$devname} = { size => $size, dev_id => $dev_id };
>>>>    } elsif ($line =~ m/^CTIME: /) {
>>>>    # we correctly received the vma config, so we can disable
>>>> -    # the timeout now for disk allocation (set to 10 minutes, so
>>>> -    # that we always timeout if something goes wrong)
>>>> -    alarm(600);
>>>> +    # the timeout now for disk allocation
>>
>> I would interpret this comment about disabling of the timeout to be
>> talking about the short 5 second timeout for reading the config.
> 
> ok, i interpreted it to be disabling *any* timeout to be able
> to allocate the disks properly, and since there is only one global
> timeout here, selectively disabling one seems strange?

With that interpretation the code would be wrong of course.

> i get what you mean, but maybe that would warrant a comment on the
> function?
> or maybe we should be able to clean up half allocated disks in there
> in case the outer timeout triggers?

AFAICS, my patch didn't change cleanup behavior and what you suggest
already happens? The allocation is within an eval and we call
restore_destroy_volumes() if there was an error during allocation (that
also applies for a timeout error).

> 
> in any case, i'd find it good to improve the comment that speaks of
> 'disabling the timeout' that it's meant to only disable the inner 5s one.
> 

It can be seen from the code, but feel free to send a patch to improve it ;)

>> AFAICS, we do similar "delay" of the outer timeout in e.g.
>> run_with_timeout(), where it can also take up to $inner_timeout +
>> $outer_timeout seconds to hit the outer timeout.
> 
> 
> exactly, only our "inner" timeout here is undefined/unlimited because
> disk allocation can take forever?
> 

Why treat the operation as having unlimited inner timeout? What is the
benefit? I'd expect our caller to have a good reason to set a timeout if
it does, so why not try to honor it?




Re: [pve-devel] [PATCH v2 stable-7+master manager 1/2] ui: vm selector: handle empty string gracefully

2023-09-25 Thread Fiona Ebner
Ping again, because there is another report:

https://forum.proxmox.com/threads/133735/

Am 30.08.23 um 11:15 schrieb Fiona Ebner:
> Ping for the series
> 
> Am 07.07.23 um 10:02 schrieb Fiona Ebner:
>> which is passed by the backup job window when using selection mode
>> 'all', would be converted to [""] and wrongly add an entry with VMID
>> 0 because the item "" could not be found in the store.
>>
>> Reported in the community forum:
>> https://forum.proxmox.com/threads/130164/
>>
>> Fixes: 7a5ca76a ("fix #4239: ui: show selected but non-existing vmids in backup edit")
>> Signed-off-by: Fiona Ebner 





Re: [pve-devel] [PATCH v2 stable-7+master manager 1/2] ui: vm selector: handle empty string gracefully

2023-09-25 Thread Dominik Csapak

On 7/7/23 10:02, Fiona Ebner wrote:
> which is passed by the backup job window when using selection mode
> 'all', would be converted to [""] and wrongly add an entry with VMID
> 0 because the item "" could not be found in the store.
>
> Reported in the community forum:
> https://forum.proxmox.com/threads/130164/
>
> Fixes: 7a5ca76a ("fix #4239: ui: show selected but non-existing vmids in backup edit")
> Signed-off-by: Fiona Ebner 
> ---
>
> Applying either of the patches to stable-7 is enough to fix the issue.
>
> No changes in v2.
>
>  www/manager6/form/VMSelector.js | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/www/manager6/form/VMSelector.js b/www/manager6/form/VMSelector.js
> index 4c0bba13..bf2c8df7 100644
> --- a/www/manager6/form/VMSelector.js
> +++ b/www/manager6/form/VMSelector.js
> @@ -162,7 +162,7 @@ Ext.define('PVE.form.VMSelector', {
>      setValue: function(value) {
> 	let me = this;
> 	if (!Ext.isArray(value)) {
> -	    value = value.split(',');
> +	    value = value === '' ? [] : value.split(',');
> 	}
>
> 	let store = me.getStore();

sorry for the late answer

the patch LGTM, but i would even go a step further and do a

value ??= [];

before the isArray check (that way we'd also handle undefined/null values)
(can ofc be done as a follow-up/later)

an alternative would be to filter out all empty values e.g. like this:

---
value.split(',').filter(v => v !== '')
---

this would then also handle values like: '100,,200'
idk if that's even possible to get here
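For the record, the edge cases discussed here are easy to reproduce; Python's str.split happens to behave like JavaScript's String.prototype.split for these inputs (a quick illustrative snippet, not project code):

```python
# "".split(",") yields [""], not [] -- this is the bogus entry that was
# looked up in the store, not found, and then added with VMID 0
assert "".split(",") == [""]

# the suggested filter also handles embedded empty items like '100,,200'
value = "100,,200"
cleaned = [v for v in value.split(",") if v != ""]
print(cleaned)  # ['100', '200']

# and the `value ??= []` guard corresponds to normalizing None up front
value = None
value = [] if value is None else value
print(value)  # []
```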





Re: [pve-devel] [PATCH v2 manager 2/2] ui: vm selector: don't add invalid not found items

2023-09-25 Thread Dominik Csapak

On 7/7/23 10:02, Fiona Ebner wrote:
> Doing a simple numericity check and warn in the console so developers
> can notice if there is something off.
>
> Signed-off-by: Fiona Ebner 
> ---
>
> New in v2.
>
>  www/manager6/form/VMSelector.js | 6 +-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/www/manager6/form/VMSelector.js b/www/manager6/form/VMSelector.js
> index bf2c8df7..4b04ac30 100644
> --- a/www/manager6/form/VMSelector.js
> +++ b/www/manager6/form/VMSelector.js
> @@ -136,7 +136,11 @@ Ext.define('PVE.form.VMSelector', {
> 	let selection = value.map(item => {
> 	    let found = store.findRecord('vmid', item, 0, false, true, true);
> 	    if (!found) {
> -		notFound.push(item);
> +		if (Ext.isNumeric(item)) {
> +		    notFound.push(item);
> +		} else {
> +		    console.warn(`invalid item in vm selection: ${item}`);
> +		}
> 	    }
> 	    return found;
> 	}).filter(r => r);

LGTM

Reviewed-by: Dominik Csapak 
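The patched selection logic boils down to the following sketch (illustrative Python, not the ExtJS code; the store lookup and Ext.isNumeric are approximated with a dict and a digit check):

```python
def select_vmids(store, items):
    """Mimic the patched selection loop: keep records found in the store,
    remember numeric-looking VMIDs that are missing, and only warn about
    anything else instead of adding it as a bogus 'not found' entry."""
    selection, not_found = [], []
    for item in items:
        found = store.get(str(item))
        if found is None:
            if str(item).isdigit():  # rough stand-in for Ext.isNumeric()
                not_found.append(item)
            else:
                print(f"invalid item in vm selection: {item}")
            continue
        selection.append(found)
    return selection, not_found

store = {"100": {"vmid": 100}, "200": {"vmid": 200}}
print(select_vmids(store, ["100", "101", ""]))
# "100" is selected, "101" is kept as not-found, "" only triggers a warning
```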





Re: [pve-devel] [PATCH qemu-server 1/3] fix #2816: restore: remove timeout when allocating disks

2023-09-25 Thread Dominik Csapak

On 9/25/23 12:30, Fiona Ebner wrote:
> Am 25.09.23 um 10:57 schrieb Dominik Csapak:
>> On 9/25/23 10:46, Fiona Ebner wrote:
>>> Am 20.09.23 um 13:23 schrieb Dominik Csapak:
>>>> On 9/12/23 11:16, Fiona Ebner wrote:
>>>>> @@ -7483,14 +7483,11 @@ sub restore_vma_archive {
>>>>>    $devinfo->{$devname} = { size => $size, dev_id => $dev_id };
>>>>>    } elsif ($line =~ m/^CTIME: /) {
>>>>>    # we correctly received the vma config, so we can disable
>>>>> -    # the timeout now for disk allocation (set to 10 minutes, so
>>>>> -    # that we always timeout if something goes wrong)
>>>>> -    alarm(600);
>>>>> +    # the timeout now for disk allocation
>>>
>>> I would interpret this comment about disabling of the timeout to be
>>> talking about the short 5 second timeout for reading the config.
>>
>> ok, i interpreted it to be disabling *any* timeout to be able
>> to allocate the disks properly, and since there is only one global
>> timeout here, selectively disabling one seems strange?
>
> With that interpretation the code would be wrong of course.
>
>> i get what you mean, but maybe that would warrant a comment on the
>> function?
>> or maybe we should be able to clean up half allocated disks in there
>> in case the outer timeout triggers?
>
> AFAICS, my patch didn't change cleanup behavior and what you suggest
> already happens? The allocation is within an eval and we call
> restore_destroy_volumes() if there was an error during allocation (that
> also applies for a timeout error).

true, did not see that

>> in any case, i'd find it good to improve the comment that speaks of
>> 'disabling the timeout' that it's meant to only disable the inner 5s one.
>
> It can be seen from the code, but feel free to send a patch to improve it ;)

sure, i'll see if i can come up with better wording ;)

>>> AFAICS, we do similar "delay" of the outer timeout in e.g.
>>> run_with_timeout(), where it can also take up to $inner_timeout +
>>> $outer_timeout seconds to hit the outer timeout.
>>
>> exactly, only our "inner" timeout here is undefined/unlimited because
>> disk allocation can take forever?
>
> Why treat the operation as having unlimited inner timeout? What is the
> benefit? I'd expect our caller to have a good reason to set a timeout if
> it does, so why not try to honor it?

sure, having the outer timeout as upper bound is fine for me too,
i was seemingly just confused by the two timeouts and the wording of the
comment

just to note, the function can still take an infinite time if
it hangs in the cleanup for example, but that's then expected i guess.




Re: [pve-devel] [PATCH v2 stable-7+master manager 1/2] ui: vm selector: handle empty string gracefully

2023-09-25 Thread Fiona Ebner
Am 25.09.23 um 13:17 schrieb Dominik Csapak:
> 
> sorry for the late answer
> 
> the patch LGTM, but i would even go a step further and do a
> 
> value ??= [];
> 
> 
> before the isArray check (that way we'd also handle undefined/null values)
> (can ofc be done as a follow-up/later)
> 
> 
> an alternative would be to filter out all empty values e.g. like this:
> 
> ---
> value.split(',').filter(v => v !== '')
> ---
> 
> this would then also handle values like: '100,,200'
> idk if that's even possible to get here
> 

Thank you for the review! I'll send a v3 with the suggested changes.





[pve-devel] [PATCH container v2] setup: fix architecture detection for NixOS containers

2023-09-25 Thread Christoph Heiss
NixOS is special and deviates in many places from a "standard" Linux
system. In this case, /bin/sh does not exist in the filesystem, before
the initial activation (aka. first boot) - which creates a symlink at
/bin/sh.

Due to the currently existing fallback code, only an error message is
logged and the architecture is defaulted to x86_64. Still, this is not
something users might expect.

Thus try a bit harder to detect the architecture for NixOS containers by
inspecting the init script, which contains a shebang-line with the full
path to the system shell.

This moves the architecture detection code to the end of the container
creation lifecycle, so that it can be implemented as a plugin
subroutine. Therefore this mechanism is now generic enough that it can
be adapted to other container OS's in the future if needed. AFAICS
`arch` is only used when writing the actual LXC config, so determining
it later during creation does not change anything.

detect_architecture() has been made a bit more generic; the LXC-specific
error was moved out of this function, as well as the chroot(). Ensuring
that it is executed from the correct rootdir/chroot should be handled by
the caller.

Tested by creating a NixOS and a Debian container (to verify that
nothing regressed) and checking that the warning "Architecture
detection failed: [..]" no longer appears for the NixOS CT and that
`arch` in the CT config is correct. Also tested restoring both
containers from a local and a PBS backup, as well as migrating both
containers.

Signed-off-by: Christoph Heiss 
---
v1: https://lists.proxmox.com/pipermail/pve-devel/2023-February/055949.html

Changes since v1:
  * Moved detect_architecture() to PVE::LXC::Tools to avoid a cyclic
include
  * Properly log/report errors from detect_architecture()

 src/PVE/LXC/Create.pm   | 76 -
 src/PVE/LXC/Setup.pm| 18 +
 src/PVE/LXC/Setup/Base.pm   |  9 +
 src/PVE/LXC/Setup/NixOS.pm  | 17 +
 src/PVE/LXC/Setup/Plugin.pm |  5 +++
 src/PVE/LXC/Tools.pm| 50 
 6 files changed, 99 insertions(+), 76 deletions(-)

diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm
index f4c3220..277c6a9 100644
--- a/src/PVE/LXC/Create.pm
+++ b/src/PVE/LXC/Create.pm
@@ -16,72 +16,6 @@ use PVE::VZDump::ConvertOVZ;
 use PVE::Tools;
 use POSIX;

-sub detect_architecture {
-my ($rootdir) = @_;
-
-# see https://en.wikipedia.org/wiki/Executable_and_Linkable_Format
-
-my $supported_elf_machine = {
-   0x03 => 'i386',
-   0x3e => 'amd64',
-   0x28 => 'armhf',
-   0xb7 => 'arm64',
-   0xf3 => 'riscv',
-};
-
-my $elf_fn = '/bin/sh'; # '/bin/sh' is POSIX mandatory
-my $detect_arch = sub {
-   # chroot avoids a problem where we check the binary of the host system
-   # if $elf_fn is an absolut symlink (e.g. $rootdir/bin/sh -> /bin/bash)
-   chroot($rootdir) or die "chroot '$rootdir' failed: $!\n";
-   chdir('/') or die "failed to change to root directory\n";
-
-   open(my $fh, "<", $elf_fn) or die "open '$elf_fn' failed: $!\n";
-   binmode($fh);
-
-   my $length = read($fh, my $data, 20) or die "read failed: $!\n";
-
-   # 4 bytes ELF magic number and 1 byte ELF class, padding, machine
-   my ($magic, $class, undef, $machine) = unpack("A4CA12n", $data);
-
-   die "'$elf_fn' does not resolve to an ELF!\n"
-   if (!defined($class) || !defined($magic) || $magic ne "\177ELF");
-
-   my $arch = $supported_elf_machine->{$machine};
-   die "'$elf_fn' has unknown ELF machine '$machine'!\n"
-   if !defined($arch);
-
-   if ($arch eq 'riscv') {
-   if ($class eq 1) {
-   $arch = 'riscv32';
-   } elsif ($class eq 2) {
-   $arch = 'riscv64';
-   } else {
-   die "'$elf_fn' has invalid class '$class'!\n";
-   }
-   }
-
-   return $arch;
-};
-
-my $arch = eval { PVE::Tools::run_fork_with_timeout(10, $detect_arch); };
-my $err = $@;
-
-if (!defined($arch) && !defined($err)) {
-   # on timeout
-   die "Architecture detection failed: timeout\n";
-} elsif ($err) {
-   # any other error
-   $arch = 'amd64';
-   print "Architecture detection failed: $err\nFalling back to $arch.\n" .
- "Use `pct set VMID --arch ARCH` to change.\n";
-} else {
-   print "Detected container architecture: $arch\n";
-}
-
-return $arch;
-}
-
 sub restore_archive {
 my ($storage_cfg, $archive, $rootdir, $conf, $no_unpack_error, $bwlimit) = @_;

@@ -122,11 +56,6 @@ sub restore_proxmox_backup_archive {

 PVE::Storage::PBSPlugin::run_raw_client_cmd(
$scfg, $storeid, $cmd, $param, userns_cmd => $userns_cmd);
-
-# if arch is set, we do not try to autodetect it
-return if defined($conf->{arch});
-
-$conf->{arch} = detect_architecture($rootdir);
 }

 sub restore_tar_archive {
@@ -187,11 +116,6 @@ sub rest
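For readers less fluent in Perl, here is a rough Python transcription of the ELF-header check in the moved detect_architecture() (same machine table as the Perl code above; the chroot, the fork-with-timeout wrapper, and the new NixOS shebang lookup are omitted):

```python
import os
import struct
import tempfile

# ELF e_machine values recognized by the detection code (see the Perl above)
SUPPORTED_ELF_MACHINE = {
    0x03: "i386",
    0x3e: "amd64",
    0x28: "armhf",
    0xb7: "arm64",
    0xf3: "riscv",
}

def detect_architecture(elf_path):
    # e_ident (16 bytes) plus e_type/e_machine is enough for the check
    with open(elf_path, "rb") as fh:
        data = fh.read(20)
    if len(data) < 20 or data[:4] != b"\x7fELF":
        raise ValueError(f"'{elf_path}' does not resolve to an ELF!")
    ei_class = data[4]                     # 1 = ELFCLASS32, 2 = ELFCLASS64
    endian = "<" if data[5] == 1 else ">"  # EI_DATA: little vs big endian
    (machine,) = struct.unpack_from(endian + "H", data, 18)  # e_machine
    arch = SUPPORTED_ELF_MACHINE.get(machine)
    if arch is None:
        raise ValueError(f"'{elf_path}' has unknown ELF machine {machine:#x}!")
    if arch == "riscv":
        arch = "riscv32" if ei_class == 1 else "riscv64"
    return arch

# demo on a synthetic little-endian x86-64 header (not a runnable binary)
hdr = b"\x7fELF" + bytes([2, 1, 1, 0]) + b"\x00" * 8 + struct.pack("<HH", 2, 0x3E)
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(hdr)
print(detect_architecture(tmp.name))  # amd64
os.unlink(tmp.name)
```

The real code additionally chroots into the container rootfs first, so an absolute /bin/sh symlink resolves inside the container rather than on the host.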

[pve-devel] [PATCH v3 stable-7+master manager 1/3] ui: vm selector: gracefully handle empty IDs in setValue function

2023-09-25 Thread Fiona Ebner
An empty string is passed by the backup job window when using
selection mode 'all', would be converted to [""] and wrongly add an
entry with VMID 0 because the item "" could not be found in the store.

Reported in the community forum:
https://forum.proxmox.com/threads/130164/

Fixes: 7a5ca76a ("fix #4239: ui: show selected but non-existing vmids in backup edit")
Suggested-by: Dominik Csapak 
Signed-off-by: Fiona Ebner 
---

It is enough to apply this or the second patch to stable-7 to fix the
issue.

Changes in v3:
* use filter function to handle more general cases like "100,,200"
  and not just the empty string.

 www/manager6/form/VMSelector.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/www/manager6/form/VMSelector.js b/www/manager6/form/VMSelector.js
index 4c0bba13..0c884aae 100644
--- a/www/manager6/form/VMSelector.js
+++ b/www/manager6/form/VMSelector.js
@@ -162,7 +162,7 @@ Ext.define('PVE.form.VMSelector', {
 setValue: function(value) {
let me = this;
if (!Ext.isArray(value)) {
-   value = value.split(',');
+   value = value.split(',').filter(v => v !== '');
}
 
let store = me.getStore();
-- 
2.39.2






[pve-devel] [PATCH v3 manager 2/3] ui: vm selector: don't add invalid not found items

2023-09-25 Thread Fiona Ebner
Doing a simple numericity check and warn in the console so developers
can notice if there is something off.

Signed-off-by: Fiona Ebner 
Reviewed-by: Dominik Csapak 
---

No changes in v3.

 www/manager6/form/VMSelector.js | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/www/manager6/form/VMSelector.js b/www/manager6/form/VMSelector.js
index 0c884aae..22f7dd11 100644
--- a/www/manager6/form/VMSelector.js
+++ b/www/manager6/form/VMSelector.js
@@ -136,7 +136,11 @@ Ext.define('PVE.form.VMSelector', {
let selection = value.map(item => {
let found = store.findRecord('vmid', item, 0, false, true, true);
if (!found) {
-   notFound.push(item);
+   if (Ext.isNumeric(item)) {
+   notFound.push(item);
+   } else {
+   console.warn(`invalid item in vm selection: ${item}`);
+   }
}
return found;
}).filter(r => r);
-- 
2.39.2






[pve-devel] [PATCH v3 manager 3/3] ui: vm selector: gracefully handle undefined/null in setValue function

2023-09-25 Thread Fiona Ebner
Suggested-by: Dominik Csapak 
Signed-off-by: Fiona Ebner 
---

New in v3.

 www/manager6/form/VMSelector.js | 1 +
 1 file changed, 1 insertion(+)

diff --git a/www/manager6/form/VMSelector.js b/www/manager6/form/VMSelector.js
index 22f7dd11..d59847f2 100644
--- a/www/manager6/form/VMSelector.js
+++ b/www/manager6/form/VMSelector.js
@@ -165,6 +165,7 @@ Ext.define('PVE.form.VMSelector', {
 
 setValue: function(value) {
let me = this;
+   value ??= [];
if (!Ext.isArray(value)) {
value = value.split(',').filter(v => v !== '');
}
-- 
2.39.2






[pve-devel] [PATCH pve-kernel] cherry-pick fix for new amd64 ucode

2023-09-25 Thread Stoiko Ivanov
The latest amd64-microcode package in sid [0] (which probably will
eventually make it to bookworm-security) has a change that requires
the added patch to work properly.

The changelog-entry refers to stable k.o branches only - but a quick
look through the linux-firmware.git log identifies:
`f2eb058afc57348cde66852272d6bf11da1eef8f` as relevant commit, which
refers (as NOTE in the patch) to:
a32b0f0db3f3 ("x86/microcode/AMD: Load late on both threads too")
which applies cleanly (although I cherry-picked the patch from the
6.1.y stable branch to have the original commit in the commit message).

quickly tested compiling and booting the result in a VM (however w/o
a fitting CPU (Epyc Genoa or Bergamo) it shouldn't cause any observable change)

reported in our Enterprise Support as potential culprit for one
thread from 128 being reported as offline in `lscpu`

[0] https://metadata.ftp-master.debian.org/changelogs//non-free-firmware/a/amd64-microcode/amd64-microcode_3.20230808.1.1_changelog

Signed-off-by: Stoiko Ivanov 
---
 ...de-AMD-Load-late-on-both-threads-too.patch | 32 +++
 1 file changed, 32 insertions(+)
 create mode 100644 patches/kernel/0018-x86-microcode-AMD-Load-late-on-both-threads-too.patch

diff --git a/patches/kernel/0018-x86-microcode-AMD-Load-late-on-both-threads-too.patch b/patches/kernel/0018-x86-microcode-AMD-Load-late-on-both-threads-too.patch
new file mode 100644
index ..7f62eac2efd1
--- /dev/null
+++ b/patches/kernel/0018-x86-microcode-AMD-Load-late-on-both-threads-too.patch
@@ -0,0 +1,32 @@
+From  Mon Sep 17 00:00:00 2001
+From: "Borislav Petkov (AMD)" 
+Date: Tue, 2 May 2023 19:53:50 +0200
+Subject: [PATCH] x86/microcode/AMD: Load late on both threads too
+
+commit a32b0f0db3f396f1c9be2fe621e77c09ec3d8e7d upstream.
+
+Do the same as early loading - load on both threads.
+
+Signed-off-by: Borislav Petkov (AMD) 
+Cc: 
+Link: https://lore.kernel.org/r/20230605141332.25948-1...@alien8.de
+Signed-off-by: Greg Kroah-Hartman 
+(cherry picked from commit 94a69d6999419cd21365111b4493070182712299)
+Signed-off-by: Stoiko Ivanov 
+---
+ arch/x86/kernel/cpu/microcode/amd.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index ac59783e6e9f..53f21fb431c0 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -705,7 +705,7 @@ static enum ucode_state apply_microcode_amd(int cpu)
+   rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
+ 
+   /* need to apply patch? */
+-  if (rev >= mc_amd->hdr.patch_id) {
++  if (rev > mc_amd->hdr.patch_id) {
+   ret = UCODE_OK;
+   goto out;
+   }
-- 
2.39.2
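To spell out the one-character hunk (illustrative Python, not kernel code): as I read the upstream commit, the old early-out skipped the update whenever the running revision was greater than or equal to the patch revision, so a thread already reporting the new revision (because its SMT sibling was updated) was never late-loaded; with the strict comparison, an equal revision is applied again:

```python
def should_apply(current_rev, patch_rev, patched=True):
    """Model the early-out in apply_microcode_amd(): return True when the
    microcode patch would actually be written.  patched=False models the
    old 'rev >= patch_id' check, patched=True the new 'rev > patch_id'."""
    if patched:
        return not (current_rev > patch_rev)
    return not (current_rev >= patch_rev)

# same revision already reported (e.g. on the second SMT thread):
print(should_apply(0x0A601203, 0x0A601203, patched=False))  # False - skipped
print(should_apply(0x0A601203, 0x0A601203, patched=True))   # True  - applied
```

The revision numbers here are made up for illustration; only the comparison matters.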






[pve-devel] Thank you for being part of pve-devel

2023-09-25 Thread Alexandre Aguirre via pve-devel
--- Begin Message ---
Hello, good evening!

My name is Alexandre, I'm Brazilian, and I've been a Proxmox user since
2015. I'm an active collaborator in Brazilian communities as well as in
groups on Telegram and WhatsApp. I'd like to know how I can contribute
to the project; I'm available to be part of the growth and evolution of
the tool. I also have a question: I would like to know how this
discussion list works.

Thank you in advance for your attention!
--- End Message ---