[pve-devel] health check uri for proxmox web front end?

2023-06-22 Thread Wolf Noble

hi all!

I have looked through the API docs and the forums, and haven't found a
solution myself yet.
I'm looking for a lightweight URI to assess the health of a Proxmox node,
for the purpose of putting the Proxmox web UI behind a load-balanced VIP
(haproxy running on OPNsense).

I'm aware of the existing API surface; however, what exists now requires
authentication, and seems a little heavy for my intended use:

 (hey, you alive? yes? cool! i'll check again in a couple seconds)

Ideally, there'd be a super lightweight check endpoint that responds with a
200/OK, perhaps even with some light metadata cached from other normal
operations…

The ideal (from my perspective) would be a target endpoint that requires no
auth, but where the authorized calling hosts must be explicitly whitelisted
for the node to respond to the query.

Ideally, logging of 'good'-state responses would be optional.

Ideally, the data included in the response, and its acceptable freshness,
would also be configurable… but I don't want to overcomplicate things either.
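
In the meantime, the closest thing to a health probe is hitting what pveproxy
already serves unauthenticated on port 8006 (the login page), or simply a TCP
connect, which is haproxy's default check anyway. A minimal sketch of such a
probe, assuming only the standard port 8006; the hostname is a placeholder:

    use std::net::{TcpStream, ToSocketAddrs};
    use std::time::Duration;

    fn main() -> std::io::Result<()> {
        // pveproxy listens on 8006; "pve1.example.com" is a placeholder
        let addr = "pve1.example.com:8006"
            .to_socket_addrs()?
            .next()
            .expect("hostname did not resolve");
        // a successful TCP connect within the timeout counts as "alive";
        // verifying an HTTP 200 on the login page would need a TLS client
        match TcpStream::connect_timeout(&addr, Duration::from_secs(2)) {
            Ok(_) => Ok(()), // exit code 0 = healthy, for the load balancer
            Err(e) => {
                eprintln!("node down: {e}");
                std::process::exit(1)
            }
        }
    }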

does such a mechanism exist already, and I just couldn’t find it?

if not, is there already a feature request, or someplace this was already 
discussed?



—- TANGENT

Another thought I had, which seemed totally tangential at first blush, was
wondering if the web UI for a cluster could additionally (i.e. not
exclusively) be served by a different class of node (I was thinking Pi4s…).
The thought was that an 'administrative function only' cluster node type
could be a way to slowly build arm64 support…
… as I imagine this could be useful, but getting EVERYTHING working on a
different arch is a monumentally complex task not likely to bear much fruit
terribly quickly. But I digress…



tia! 
I've been really happy with my Proxmox experience over the last several
years…

Thanks for all the hard work you've done keeping Proxmox such a stable
abstraction layer… it's greatly appreciated.

❤️W


[= The contents of this message have been written, read, processed, erased, 
sorted, sniffed, compressed, rewritten, misspelled, overcompensated, lost, 
found, and most importantly delivered entirely with recycled electrons =]
___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] kvm kernel bug (kvm_nx_huge_page_recovery_) reported by user (fixed in kernel 6.3)

2023-06-22 Thread DERUMIER, Alexandre
hi,
a forum user has reported a kernel bug with kvm:
https://forum.proxmox.com/threads/kvm_nx_huge_page_recovery_worker-message-in-log.129352/#post-566581



a patch for 6.3 is available here
https://www.spinics.net/lists/stable-commits/msg302488.html


"KVM: x86/mmu: Grab memslot for correct address space in NX
recovery worker

to the 6.3-stable tree which can be found at:
   
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
 kvm-x86-mmu-grab-memslot-for-correct-address-space-in-nx-recovery-
worker.patch
and it can be found in the queue-6.3 subdirectory."
___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH installer] fix space calculation for small disks for pve product

2023-06-22 Thread Thomas Lamprecht
Am 22/06/2023 um 15:57 schrieb Stoiko Ivanov:
> The convoluted calculation logic in case the disk is 8 GB leads to
> datasize becoming 16 EiB further down:
> * after calculating and removing the rootsize from $rest, $rest becomes
>   smaller than $space (which should be the minimal non-used space in the
>   volume-group) - this leads to a negative value, which overflows in
>   the `& ~0xFFF` operation.
> 
> Signed-off-by: Stoiko Ivanov 
> ---
> tested in a VM with an 8GB disk
> 
>  Proxmox/Install.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH installer 1/2] align metadatasize to 4 MiB

2023-06-22 Thread Thomas Lamprecht
Am 22/06/2023 um 16:18 schrieb Fiona Ebner:
> First step towards fixing an issue reported in the community forum [0]
> where using 250.00 hdsize, 250 maxroot and 0 minfree would fail.
> 
> Turns out two extents would be missing because of lvcreate implicitly
> rounding up, one of them for the metadata.
> 
> [0]: https://forum.proxmox.com/threads/129320/post-566375
> 
> Signed-off-by: Fiona Ebner 
> ---
>  Proxmox/Install.pm | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH installer 1/2] align metadatasize to 4 MiB

2023-06-22 Thread Fiona Ebner
First step towards fixing an issue reported in the community forum [0]
where using 250.00 hdsize, 250 maxroot and 0 minfree would fail.

Turns out two extents would be missing because of lvcreate implicitly
rounding up, one of them for the metadata.

[0]: https://forum.proxmox.com/threads/129320/post-566375

Signed-off-by: Fiona Ebner 
---
 Proxmox/Install.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/Proxmox/Install.pm b/Proxmox/Install.pm
index 7970f83..c2c014d 100644
--- a/Proxmox/Install.pm
+++ b/Proxmox/Install.pm
@@ -469,9 +469,10 @@ sub create_lvm_volumes {
die "unable to create root volume\n";
 
 if ($datasize > 4 * 1024 * 1024) {
-   my $metadatasize = $datasize/100; # default 1% of data
+   my $metadatasize = int($datasize/100); # default 1% of data
 	$metadatasize = 1024*1024 if $metadatasize < 1024*1024; # but at least 1G
 	$metadatasize = 16*1024*1024 if $metadatasize > 16*1024*1024; # but at most 16G
+   $metadatasize &= ~0xFFF; # align down to 4 MB boundaries
 
# otherwise the metadata is taken out of $minfree
$datasize -= 2 * $metadatasize;
-- 
2.39.2
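
The sizes in create_lvm_volumes() are in KiB (note 1024*1024 being labelled
"1G" above), so clearing the low twelve bits with ~0xFFF rounds down to a
4096 KiB = 4 MiB boundary. A standalone sanity check of that arithmetic,
with an arbitrary example value:

    fn main() {
        // sizes in KiB, as in Install.pm; the value itself is arbitrary
        let metadatasize_kib: u64 = 1_050_000;
        // clear the low 12 bits -> round down to a multiple of 4096 KiB (4 MiB)
        let aligned = metadatasize_kib & !0xFFF;
        println!("{aligned}"); // 1048576 KiB, i.e. exactly 1 GiB
    }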



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH installer 2/2] always align rootdisk size to 4 MiB

2023-06-22 Thread Fiona Ebner
While this was already done in the $rest < 48 GiB cases, it wasn't yet
done for the else branch, and also not when $maxroot_mb was assigned
because it was smaller.

Second and last step towards fixing an issue reported in the community
forum [0] where using 250.00 hdsize, 250 maxroot and 0 minfree would
fail.

Turns out two extents would be missing because of lvcreate implicitly
rounding up, one of them for the root LV (the one for metadata was
already handled in the previous commit).

[0]: https://forum.proxmox.com/threads/129320/post-566375

Signed-off-by: Fiona Ebner 
---

I think it'd be possible to drop the alignments in the branches now,
but let's fix the issue for now and tackle improving/reworking the
logic more for later.

 Proxmox/Install.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/Proxmox/Install.pm b/Proxmox/Install.pm
index c2c014d..25031f8 100644
--- a/Proxmox/Install.pm
+++ b/Proxmox/Install.pm
@@ -435,6 +435,7 @@ sub create_lvm_volumes {
 
$rootsize_mb = $maxroot_mb if $rootsize_mb > $maxroot_mb;
$rootsize = int($rootsize_mb * 1024);
+   $rootsize &= ~0xFFF; # align down to 4 MB boundaries
 
$rest -= $rootsize; # in KB
 
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH installer] fix space calculation for small disks for pve product

2023-06-22 Thread Stoiko Ivanov
The convoluted calculation logic in case the disk is 8 GB leads to
datasize becoming 16 EiB further down:
* after calculating and removing the rootsize from $rest, $rest becomes
  smaller than $space (which should be the minimal non-used space in the
  volume-group) - this leads to a negative value, which overflows in
  the `& ~0xFFF` operation.

Signed-off-by: Stoiko Ivanov 
---
tested in a VM with an 8GB disk

 Proxmox/Install.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Proxmox/Install.pm b/Proxmox/Install.pm
index 7970f83..28add10 100644
--- a/Proxmox/Install.pm
+++ b/Proxmox/Install.pm
@@ -425,7 +425,7 @@ sub create_lvm_volumes {
my $rootsize_mb;
if ($rest_mb < 12 * 1024) {
 	# no point in wasting space, try to get us actually installed and align down to 4 MB
-   $rootsize_mb = ($rest_mb - 0.1) & ~3;
+   $rootsize_mb = ($rest_mb - 4) & ~3;
} elsif ($rest_mb < 48 * 1024) {
my $masked = int($rest_mb / 2) & ~3; # align down to 4 MB
$rootsize_mb = $masked;
-- 
2.30.2
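
The 16 EiB figure comes from Perl's bitwise operators working on unsigned
64-bit integers: a negative intermediate result wraps around to just below
2^64 bytes, which is 16 EiB. The same reinterpretation, demonstrated in Rust
with an arbitrary negative value:

    fn main() {
        // stand-in for ($rest - $space) having gone negative
        let rest: i64 = -100;
        // Perl's '&' treats its operands as unsigned 64-bit values (UVs);
        // the cast below makes that reinterpretation explicit
        let masked = (rest as u64) & !0xFFF;
        println!("{masked}"); // 18446744073709547520, i.e. roughly 16 EiB
    }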



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH pve-manager] ui: ceph status: add pg warning state

2023-06-22 Thread Aaron Lauterer
The light blue used in .normal is a tad too light IMHO for the pie chart. It
is the light blue used in the light theme, for example to indicate the active
element in the tree view.


What about using a stronger blue? I switched to .info-blue and it was definitely 
nicer. It might even be a good idea to specify a dedicated .working color CSS 
class in the ext6-pmx.css (widget-toolkit).


Another thing I noticed: in the commit message you are talking about PGs in a
critical state, but in the code we use "Error". I would suggest renaming that
to "Critical" to align it more with the nomenclature.


e.g.:

diff --git a/www/manager6/ceph/StatusDetail.js b/www/manager6/ceph/StatusDetail.js
index e1bf425a..11dfb0d2 100644
--- a/www/manager6/ceph/StatusDetail.js
+++ b/www/manager6/ceph/StatusDetail.js
@@ -169,7 +169,7 @@ Ext.define('PVE.ceph.StatusDetail', {
degraded: 3,
undersized: 3,

-   // error
+   // critical
backfill_toofull: 4,
backfill_unfound: 4,
down: 4,
@@ -201,7 +201,7 @@ Ext.define('PVE.ceph.StatusDetail', {
cls: 'warning',
},
{
-   text: gettext('Error'),
+   text: gettext('Critical'),
cls: 'critical',
},
 ],

On 6/22/23 12:54, Alexandre Derumier wrote:

Like ceph mgr dashboard, we need a warning state.

- set degraded && undersized as warning instead of critical

- add "normal" (light blue) color for working state

- use warning (orange) color for warning state

Signed-off-by: Alexandre Derumier 
---
  www/manager6/ceph/StatusDetail.js | 29 ++---
  1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/www/manager6/ceph/StatusDetail.js b/www/manager6/ceph/StatusDetail.js
index d6c0763b..e1bf425a 100644
--- a/www/manager6/ceph/StatusDetail.js
+++ b/www/manager6/ceph/StatusDetail.js
@@ -94,6 +94,7 @@ Ext.define('PVE.ceph.StatusDetail', {
colors: [
'#CFCFCF',
'#21BF4B',
+   '#C2DDF2',
'#FFCC00',
'#FF6C59',
],
@@ -152,7 +153,6 @@ Ext.define('PVE.ceph.StatusDetail', {
backfilling: 2,
creating: 2,
deep: 2,
-   degraded: 2,
forced_backfill: 2,
forced_recovery: 2,
peered: 2,
@@ -165,17 +165,20 @@ Ext.define('PVE.ceph.StatusDetail', {
snaptrim: 2,
snaptrim_wait: 2,
  
-	// error
-   backfill_toofull: 3,
-   backfill_unfound: 3,
-   down: 3,
-   incomplete: 3,
-   inconsistent: 3,
-   recovery_toofull: 3,
-   recovery_unfound: 3,
-   snaptrim_error: 3,
-   stale: 3,
+   //warning
+   degraded: 3,
undersized: 3,
+
+   // error
+   backfill_toofull: 4,
+   backfill_unfound: 4,
+   down: 4,
+   incomplete: 4,
+   inconsistent: 4,
+   recovery_toofull: 4,
+   recovery_unfound: 4,
+   snaptrim_error: 4,
+   stale: 4,
  },
  
  statecategories: [

@@ -191,6 +194,10 @@ Ext.define('PVE.ceph.StatusDetail', {
},
{
text: gettext('Working'),
+   cls: 'normal',
+   },
+   {
+   text: gettext('Warning'),
cls: 'warning',
},
{



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH installer] tui: do not auto reboot on failures

2023-06-22 Thread Thomas Lamprecht
Am 22/06/2023 um 15:08 schrieb Maximiliano Sandoval:
> Otherwise the user only has 5 seconds to see the error message before
> the machine reboots.
> 
> Signed-off-by: Maximiliano Sandoval 
> ---
>  proxmox-tui-installer/src/main.rs | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH installer] tui: do not auto reboot on failures

2023-06-22 Thread Maximiliano Sandoval
Otherwise the user only has 5 seconds to see the error message before
the machine reboots.

Signed-off-by: Maximiliano Sandoval 
---
 proxmox-tui-installer/src/main.rs | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/proxmox-tui-installer/src/main.rs b/proxmox-tui-installer/src/main.rs
index d2a5fcf..8d05292 100644
--- a/proxmox-tui-installer/src/main.rs
+++ b/proxmox-tui-installer/src/main.rs
@@ -820,7 +820,7 @@ fn install_progress_dialog(siv: &mut Cursive) -> InstallerView {
 .map(|state| state.options.autoreboot)
 .unwrap_or_default();
 
-if autoreboot {
+if autoreboot && success {
 let cb_sink = siv.cb_sink();
 thread::spawn({
 let cb_sink = cb_sink.clone();
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH common] systemd: add helper to cleanup transient unit

2023-06-22 Thread Thomas Lamprecht
Am 20/06/2023 um 17:00 schrieb Fiona Ebner:
> which combines the stop+wait logic previously present at the single
> call site of wait_for_unit_removed() in QemuServer.pm. It also does a
> reset-failed call first, to ensure a unit in a failed state is also
> cleaned up properly.
> 
> Signed-off-by: Fiona Ebner 
> ---
>  src/PVE/Systemd.pm | 16 +++-
>  1 file changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/src/PVE/Systemd.pm b/src/PVE/Systemd.pm
> index 2517d31..327106f 100644
> --- a/src/PVE/Systemd.pm
> +++ b/src/PVE/Systemd.pm
> @@ -7,7 +7,7 @@ use Net::DBus qw(dbus_uint32 dbus_uint64 dbus_boolean);
>  use Net::DBus::Callback;
>  use Net::DBus::Reactor;
>  
> -use PVE::Tools qw(file_set_contents file_get_contents trim);
> +use PVE::Tools qw(file_set_contents file_get_contents run_command trim);
>  
>  sub escape_unit {
>  my ($val, $is_path) = @_;
> @@ -167,6 +167,20 @@ sub wait_for_unit_removed($;$) {
>  }, $timeout);
>  }
>  
> +sub cleanup_transient_unit($;$) {
> +my ($unit, $timeout) = @_;
> +
> +eval {
> + my %param = ( outfunc => sub {}, errfunc => sub {} );
> +	# If the unit is in a failed state (e.g. after being OOM-killed), stopping is not enough.
> + run_command(['/bin/systemctl', 'reset-failed', $unit], %param);
> + run_command(['/bin/systemctl', 'stop', $unit], %param);
> +};
> +
> +# Issues with the above not being fully completed are rare, but not impossible, see bug #3733.
> +wait_for_unit_removed($unit, $timeout);
> +}
> +
>  sub read_ini {
>  my ($filename) = @_;
>  

for the record, this went into qemu-server directly for now in the same way,
as talked off-list; I didn't remember that we already got a run_command
there. Still, big thanks for finding this!
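
For reference, the sequence the helper performs boils down to the following
minimal sketch (shelling out to systemctl, without the D-Bus-based wait the
real helper does afterwards; the unit name is purely illustrative):

    use std::process::Command;

    fn cleanup_transient_unit(unit: &str) {
        // a unit stuck in the "failed" state has to be reset first,
        // since stopping alone is not enough
        let _ = Command::new("systemctl").args(["reset-failed", unit]).status();
        let _ = Command::new("systemctl").args(["stop", unit]).status();
        // the real helper then waits until the unit is gone entirely
    }

    fn main() {
        cleanup_transient_unit("100.scope"); // e.g. a VM's transient scope unit
    }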


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH] tui: focus next button by default

2023-06-22 Thread Thomas Lamprecht
Am 21/06/2023 um 16:48 schrieb Dominik Csapak:
> except the password dialog, since the user must provide input
> 
> to do that, we have to set the focus index on all relevant views
> 
> Signed-off-by: Dominik Csapak 
> ---
> 
> not sure if this is the correct approach, also the extra parameter feels
> slightly wrong, but didn't find a nicer way to do this
> 
> any errors from focusing will be ignored, but that shouldn't happen
> anyway until we add/remove buttons and the index changes
> 
> alternatively we could create a second 'new_with_focus_next' (or
> 'without') that gets called respectively, but also seems a bit weird for
> that
> 
>  proxmox-tui-installer/src/main.rs | 101 +++---
>  1 file changed, 52 insertions(+), 49 deletions(-)
> 
>

seemingly forgot to write: I applied this one, but avoided setting the focus
on the install button for the summary step, as that can be a bit unexpected,
or even dangerous, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH manager] ui: migrate: fix disabled migrate button glitch

2023-06-22 Thread Thomas Lamprecht
Am 22/06/2023 um 14:15 schrieb Dominik Csapak:
> under certain circumstances, the migrate button stays disabled, even
> when a valid target node was selected:
> * the first node that gets autoselected (most likely the second)
>   is not a valid migration target
> * the user changes to a migration target that is a valid one
> 
> if that happens, the migration button would stay disabled.
> switching once to a non-valid target and back would enable the button.
> 
> To fix it, we have to do two things here:
> 
> 'checkQemuPreconditions' is actually an async function that awaits an
> api call and uses the result to set the 'migration.allowedNodes'
> property
> 
> 'checkMigratePreconditions' calls 'checkQemuPreconditions' and uses the
> 'migration.allowedNodes' property afterwards.
> 
> but since 'checkMigratePreconditions' is not async, that happens before
> the api call can return the valid data and thus leaves it empty, making
> all nodes valid in the selector. (thus the initial selected node is
> valid)
> 
> instead make 'checkMigratePreconditions' also async and await the result
> of 'checkQemuPreconditions'
> 
> this unearthed another issue, namely that we access an object that is
> possibly undefined (it worked before due to the race conditions), so we
> now fall back to an empty object.
> 
> and lastly, since we want the 'disallowedNodes' set before actually
> checking the qemu preconditions, we move the setting of that on
> the node selector above the qemu preconditions check
> (this is the only place where we set it anyway, and the source does not
> change, we probably could move that out of that function altogether)
> 
> Signed-off-by: Dominik Csapak 
> ---
>  www/manager6/window/Migrate.js | 9 +
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH installer 1/2] tui: wrap multi-disk selection in scrollable view

2023-06-22 Thread Thomas Lamprecht
Am 22/06/2023 um 11:56 schrieb Christoph Heiss:
> If lots of disks are present and the available screen size is rather
> small, it might be impossible for users to properly set all disks as
> they want.
> 
> Fix it by making the view scrollable.
> 

what I forgot to write: It's much better than nothing if one has many disks, 
but the
selection behavior and scrolling interacts a bit weirdly IMO, i.e., something 
like:

- if I first move down the disk selection gets moved down, scrollbar stays as is
- if the selection goes over the view border the scroll bar moves but I see no
  selection any more
- once I then reached the end of the whole view (scrolled to bottom) the 
selection
  starts moving down again, but depending on how many disk entries there are it
  requires quite a few moves until it's in view again.

It would be more like expected if the selection moves along when scrolling down,
and up if scrolling up, respectively.




___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH installer 2/2] tui: disable automatic text wrapping for form labels

2023-06-22 Thread Thomas Lamprecht
Am 22/06/2023 um 11:56 schrieb Christoph Heiss:
> This just causes weird layouts, such that labels and inputs do not line
> up anymore.
> 
> Signed-off-by: Christoph Heiss 
> ---
>  proxmox-tui-installer/src/views/mod.rs | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH installer 1/2] tui: wrap multi-disk selection in scrollable view

2023-06-22 Thread Thomas Lamprecht
Am 22/06/2023 um 11:56 schrieb Christoph Heiss:
> If lots of disks are present and the available screen size is rather
> small, it might be impossible for users to properly set all disks as
> they want.
> 
> Fix it by making the view scrollable.
> 
> Signed-off-by: Christoph Heiss 
> ---
>  proxmox-tui-installer/src/views/bootdisk.rs | 9 ++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH installer] tui: switch to `f64` for disk sizes

2023-06-22 Thread Thomas Lamprecht
Am 22/06/2023 um 11:20 schrieb Stefan Sterz:
> previously the tui used `u64` internally to represent the disk size.
> since the perl-based installer expects GiB as floats and that is also
> what is displayed in the tui that meant a lot of converting back and
> forth. it also led to an error where the disk sizes that were set
> seemed to not have been persisted, even though the sizes were
> correctly set. this commit refactors the installer to convert the size
> once in the beginning and then stick to `f64`.
> 
> Signed-off-by: Stefan Sterz 
> ---
>  proxmox-tui-installer/src/options.rs   | 26 --
>  proxmox-tui-installer/src/setup.rs | 16 
>  proxmox-tui-installer/src/views/mod.rs | 18 +++---
>  3 files changed, 27 insertions(+), 33 deletions(-)
> 
>

applied, with Christophs T-b and R-b, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH manager] ui: migrate: fix disabled migrate button glitch

2023-06-22 Thread Dominik Csapak
under certain circumstances, the migrate button stays disabled, even
when a valid target node was selected:
* the first node that gets autoselected (most likely the second)
  is not a valid migration target
* the user changes to a migration target that is a valid one

if that happens, the migration button would stay disabled.
switching once to a non-valid target and back would enable the button.

To fix it, we have to do two things here:

'checkQemuPreconditions' is actually an async function that awaits an
api call and uses the result to set the 'migration.allowedNodes'
property

'checkMigratePreconditions' calls 'checkQemuPreconditions' and uses the
'migration.allowedNodes' property afterwards.

but since 'checkMigratePreconditions' is not async, that happens before
the api call can return the valid data and thus leaves it empty, making
all nodes valid in the selector. (thus the initial selected node is
valid)

instead make 'checkMigratePreconditions' also async and await the result
of 'checkQemuPreconditions'

this unearthed another issue, namely that we access an object that is
possibly undefined (it worked before due to the race conditions), so we
now fall back to an empty object.

and lastly, since we want the 'disallowedNodes' set before actually
checking the qemu preconditions, we move the setting of that on
the node selector above the qemu preconditions check
(this is the only place where we set it anyway, and the source does not
change, we probably could move that out of that function altogether)

Signed-off-by: Dominik Csapak 
---
 www/manager6/window/Migrate.js | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/www/manager6/window/Migrate.js b/www/manager6/window/Migrate.js
index c310342d..5473821b 100644
--- a/www/manager6/window/Migrate.js
+++ b/www/manager6/window/Migrate.js
@@ -155,7 +155,7 @@ Ext.define('PVE.window.Migrate', {
});
},
 
-   checkMigratePreconditions: function(resetMigrationPossible) {
+   checkMigratePreconditions: async function(resetMigrationPossible) {
var me = this,
vm = me.getViewModel();
 
@@ -165,12 +165,13 @@ Ext.define('PVE.window.Migrate', {
vm.set('running', true);
}
 
+   me.lookup('pveNodeSelector').disallowedNodes = [vm.get('nodename')];
+
if (vm.get('vmtype') === 'qemu') {
-   me.checkQemuPreconditions(resetMigrationPossible);
+   await me.checkQemuPreconditions(resetMigrationPossible);
} else {
me.checkLxcPreconditions(resetMigrationPossible);
}
-   me.lookup('pveNodeSelector').disallowedNodes = [vm.get('nodename')];
 
	// Only allow nodes where the local storage is available in case of offline migration
	// where storage migration is not possible
@@ -218,7 +219,7 @@ Ext.define('PVE.window.Migrate', {
migration.allowedNodes = migrateStats.allowed_nodes;
let target = me.lookup('pveNodeSelector').value;
	    if (target.length && !migrateStats.allowed_nodes.includes(target)) {
-		let disallowed = migrateStats.not_allowed_nodes[target];
+		let disallowed = migrateStats.not_allowed_nodes[target] ?? {};
		if (disallowed.unavailable_storages !== undefined) {
		    let missingStorages = disallowed.unavailable_storages.join(', ');
 
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] new installer: preseed support ?

2023-06-22 Thread DERUMIER, Alexandre
Le mercredi 21 juin 2023 à 18:21 +0200, Thomas Lamprecht a écrit :
> Hi,
> 
> Am 21/06/2023 um 17:56 schrieb DERUMIER, Alexandre:
> > I just see all the patches for the new installer (don't have tested
> > it
> > yet).
> > 
> > Any plan to add support for preseed file like debian for automatic
> > install ?
> > 
> 
> Yes, for the mid/long term we'd definitely like to provide some
> mechanism
> for this, and tbh. with the rework of the installer and how it
> handles the
> config to a single point of "truth", and how the new low-level
> installer
> (used for the Text-UI mode) works it will be possible to add quite a
> few
> ways (from a simple json config living on a separate partition of the
> installer image to some web-based variants with an API and a password
> defined on the image,
> ...); just see the "start-session" command implementation [0] for
> what's
> required between actual installer and UI, it's really not that much
> anymore:

Oh great ! Thanks !

I'll try to have a look at it next week and do some tests.




___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] Proxmox VE 8.0 released!

2023-06-22 Thread Martin Maurer

Hi all!

We're very excited to announce the major release 8.0 of Proxmox Virtual 
Environment! It's based on the great Debian 12 "Bookworm" but using a 
newer Linux kernel 6.2, QEMU 8.0.2, LXC 5.0.2, and OpenZFS 2.1.12.


Here is a selection of the highlights of the Proxmox VE 8.0 final version:

- Debian 12, but using a newer Linux kernel 6.2
- QEMU 8.0.2, LXC 5.0.2, ZFS 2.1.12
- Ceph Server: Ceph Quincy 17.2 is the default and comes with continued 
support. There is now an enterprise repository for Ceph which can be 
accessed via any Proxmox VE subscription, providing the best stability 
for production systems.


- Additional text-based user interface (TUI) for the installer ISO.
- Integrate host network bridge and VNet access into the ACL system of
Proxmox VE when configuring virtual guests.


- Add access realm sync jobs to conveniently synchronize users and groups
from an LDAP/AD server automatically at regular intervals.


- New default CPU type for VMs: x86-64-v2-AES
- Resource mappings: between PCI(e) or USB devices, and nodes in a 
Proxmox VE cluster.


- Countless GUI and API improvements.

As always, we have included countless bugfixes and improvements in many 
places; see the release notes for all details.


Release notes
https://pve.proxmox.com/wiki/Roadmap

Press release
https://www.proxmox.com/en/news/press-releases/

Video tutorial
https://www.proxmox.com/en/training/video-tutorials/item/what-s-new-in-proxmox-ve-8-0

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
https://enterprise.proxmox.com/iso

Documentation
https://pve.proxmox.com/pve-docs

Community Forum
https://forum.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

Source code
https://git.proxmox.com

There has been a lot of feedback from our community members and 
customers, and many of you reported bugs, submitted patches and were 
involved in testing - THANK YOU for your support!


FAQ

Q: Can I upgrade latest Proxmox VE 7 to 8 with apt?
A: Yes, please follow the upgrade instructions on 
https://pve.proxmox.com/wiki/Upgrade_from_7_to_8


Q: Can I upgrade an 8.0 beta installation to the stable 8.0 via apt?
A: Yes, upgrading from beta to stable installation can be done via apt.

Q: Can I install Proxmox VE 8.0 on top of Debian 12 "Bookworm"?
A: Yes, see 
https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm


Q: Can I upgrade my Proxmox VE 7.4 cluster with Ceph Pacific to 8.0?
A: This is a two-step process. First, you have to upgrade Ceph from 
Pacific to Quincy, and afterwards you can then upgrade Proxmox VE from 
7.4 to 8.0. There are a lot of improvements and changes, so please 
follow exactly the upgrade documentation:


https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy
https://pve.proxmox.com/wiki/Upgrade_from_7_to_8

Q: Where can I get more information about feature updates?
A: Check the https://pve.proxmox.com/wiki/Roadmap, 
https://forum.proxmox.com/, the https://lists.proxmox.com/, and/or 
subscribe to our https://www.proxmox.com/en/news.


--
Best Regards,

Martin Maurer

mar...@proxmox.com
https://www.proxmox.com


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH installer] tui: fix FQDN validation

2023-06-22 Thread Christoph Heiss
Add checks to ensure that:
* It actually has a hostname, not just a domain name
* Properly check if the hostname is purely numeric, by verifying each
  FQDN part

Signed-off-by: Christoph Heiss 
---
 proxmox-tui-installer/src/main.rs  |  8 ++--
 proxmox-tui-installer/src/utils.rs | 12 +++-
 2 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/proxmox-tui-installer/src/main.rs b/proxmox-tui-installer/src/main.rs
index d2a5fcf..b49f2be 100644
--- a/proxmox-tui-installer/src/main.rs
+++ b/proxmox-tui-installer/src/main.rs
@@ -587,10 +587,14 @@ fn network_dialog(siv: &mut Cursive) -> InstallerView {
                 Err("host and gateway IP address version must not differ".to_owned())
             } else if address.addr().is_ipv4() != dns_server.is_ipv4() {
                 Err("host and DNS IP address version must not differ".to_owned())
-            } else if fqdn.to_string().chars().all(|c| c.is_ascii_digit()) {
+            } else if fqdn
+                .parts()
+                .iter()
+                .any(|p| p.chars().all(|c| c.is_ascii_digit() || c == '.'))
+            {
                 // Not supported/allowed on Debian
                 Err("hostname cannot be purely numeric".to_owned())
-            } else if fqdn.to_string().ends_with(".invalid") {
+            } else if !fqdn.is_valid() || fqdn.to_string().ends_with(".invalid") {
                 Err("hostname does not look valid".to_owned())
             } else {
                 Ok(NetworkOptions {
diff --git a/proxmox-tui-installer/src/utils.rs b/proxmox-tui-installer/src/utils.rs
index 3245fac..08521f3 100644
--- a/proxmox-tui-installer/src/utils.rs
+++ b/proxmox-tui-installer/src/utils.rs
@@ -122,7 +122,7 @@ impl Fqdn {
             .map(ToOwned::to_owned)
             .collect::<Vec<String>>();

-        if !parts.iter().all(Self::validate_single) {
+        if parts.is_empty() || !parts.iter().all(Self::validate_single) {
             Err(())
         } else {
             Ok(Self { parts })
@@ -143,6 +143,16 @@ impl Fqdn {
         parts.join(".")
     }

+    pub fn parts(&self) -> &[String] {
+        &self.parts
+    }
+
+    pub fn is_valid(&self) -> bool {
+        // It's a valid FQDN if it has at least a hostname and a TLD name, the
+        // latter is ensured by the constructor.
+        self.has_host()
+    }
+
     fn has_host(&self) -> bool {
         self.parts.len() > 1
     }
--
2.40.1
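
To make the new rule concrete: the patch treats any purely numeric label as
invalid (not allowed on Debian, per the comment above), so "123.example.com"
must fail while "pve1.example.com" passes. A standalone illustration of that
check (a hypothetical helper, not the installer's actual API):

    fn has_purely_numeric_label(fqdn: &str) -> bool {
        fqdn.split('.')
            .any(|label| !label.is_empty() && label.chars().all(|c| c.is_ascii_digit()))
    }

    fn main() {
        assert!(has_purely_numeric_label("123.example.com")); // rejected
        assert!(!has_purely_numeric_label("pve1.example.com")); // accepted
    }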



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-manager] ui: ceph status: add pg warning state

2023-06-22 Thread Alexandre Derumier
Like ceph mgr dashboard, we need a warning state.

- set degraded && undersized as warning instead of critical

- add "normal" (light blue) color for working state

- use warning (orange) color for warning state

Signed-off-by: Alexandre Derumier 
---
 www/manager6/ceph/StatusDetail.js | 29 ++---
 1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/www/manager6/ceph/StatusDetail.js b/www/manager6/ceph/StatusDetail.js
index d6c0763b..e1bf425a 100644
--- a/www/manager6/ceph/StatusDetail.js
+++ b/www/manager6/ceph/StatusDetail.js
@@ -94,6 +94,7 @@ Ext.define('PVE.ceph.StatusDetail', {
colors: [
'#CFCFCF',
'#21BF4B',
+   '#C2DDF2',
'#FFCC00',
'#FF6C59',
],
@@ -152,7 +153,6 @@ Ext.define('PVE.ceph.StatusDetail', {
backfilling: 2,
creating: 2,
deep: 2,
-   degraded: 2,
forced_backfill: 2,
forced_recovery: 2,
peered: 2,
@@ -165,17 +165,20 @@ Ext.define('PVE.ceph.StatusDetail', {
snaptrim: 2,
snaptrim_wait: 2,
 
-   // error
-   backfill_toofull: 3,
-   backfill_unfound: 3,
-   down: 3,
-   incomplete: 3,
-   inconsistent: 3,
-   recovery_toofull: 3,
-   recovery_unfound: 3,
-   snaptrim_error: 3,
-   stale: 3,
+   //warning
+   degraded: 3,
undersized: 3,
+
+   // error
+   backfill_toofull: 4,
+   backfill_unfound: 4,
+   down: 4,
+   incomplete: 4,
+   inconsistent: 4,
+   recovery_toofull: 4,
+   recovery_unfound: 4,
+   snaptrim_error: 4,
+   stale: 4,
 },
 
 statecategories: [
@@ -191,6 +194,10 @@ Ext.define('PVE.ceph.StatusDetail', {
},
{
text: gettext('Working'),
+   cls: 'normal',
+   },
+   {
+   text: gettext('Warning'),
cls: 'warning',
},
{
-- 
2.39.2


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH installer] tui: switch to `f64` for disk sizes

2023-06-22 Thread Christoph Heiss
While going over this I had a thought: What about wrapping the disk size
in something like:

struct DiskSize(f64);

and having e.g. `DiskSize::from_mib()`, `.in_gib()`, a `Display` impl etc.?

Just a thought, but this would IMO provide some (valuable) context
everywhere disk sizes are handled and hopefully avoid (more) confusion
and mistakes in the future.
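
Something like the following, as a rough sketch of the suggested newtype
(names as proposed above; nothing here is applied code):

    /// Disk size stored internally as GiB, matching what the installer displays.
    #[derive(Clone, Copy, Debug, PartialEq, PartialOrd)]
    struct DiskSize(f64);

    impl DiskSize {
        fn from_mib(mib: f64) -> Self {
            Self(mib / 1024.)
        }

        fn in_gib(self) -> f64 {
            self.0
        }
    }

    impl std::fmt::Display for DiskSize {
        fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
            write!(f, "{:.2} GiB", self.0)
        }
    }

    fn main() {
        let size = DiskSize::from_mib(512_000.0);
        assert_eq!(size.in_gib(), 500.0);
        println!("{size}"); // "500.00 GiB"
    }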

On Thu, Jun 22, 2023 at 11:20:38AM +0200, Stefan Sterz wrote:
> previously the tui used `u64` internally to represent the disk size.
> since the perl-based installer expects GiB as floats and that is also
> what is displayed in the tui that meant a lot of converting back and
> forth. it also led to an error where the disk sizes that were set
> seemed to not have been persisted, even though the sizes were
> correctly set. this commit refactors the installer to convert the size
> once in the beginning and then stick to `f64`.
>

Pretty straight-forward changes, LGTM.

Reviewed-by: Christoph Heiss 
Tested-by: Christoph Heiss 
> Signed-off-by: Stefan Sterz 
> ---
>  proxmox-tui-installer/src/options.rs   | 26 --
>  proxmox-tui-installer/src/setup.rs | 16 
>  proxmox-tui-installer/src/views/mod.rs | 18 +++---
>  3 files changed, 27 insertions(+), 33 deletions(-)
>
> diff --git a/proxmox-tui-installer/src/options.rs b/proxmox-tui-installer/src/options.rs
> index 5f3d295..e1218df 100644
> --- a/proxmox-tui-installer/src/options.rs
> +++ b/proxmox-tui-installer/src/options.rs
> @@ -86,11 +86,11 @@ pub const FS_TYPES: &[FsType] = {
>
>  #[derive(Clone, Debug)]
>  pub struct LvmBootdiskOptions {
> -pub total_size: u64,
> -pub swap_size: Option<u64>,
> -pub max_root_size: Option<u64>,
> -pub max_data_size: Option<u64>,
> -pub min_lvm_free: Option<u64>,
> +pub total_size: f64,
> +pub swap_size: Option<f64>,
> +pub max_root_size: Option<f64>,
> +pub max_data_size: Option<f64>,
> +pub min_lvm_free: Option<f64>,
>  }
>
>  impl LvmBootdiskOptions {
> @@ -107,7 +107,7 @@ impl LvmBootdiskOptions {
>
>  #[derive(Clone, Debug)]
>  pub struct BtrfsBootdiskOptions {
> -pub disk_size: u64,
> +pub disk_size: f64,
>  }
>
>  impl BtrfsBootdiskOptions {
> @@ -180,7 +180,7 @@ pub struct ZfsBootdiskOptions {
>  pub compress: ZfsCompressOption,
>  pub checksum: ZfsChecksumOption,
>  pub copies: usize,
> -pub disk_size: u64,
> +pub disk_size: f64,
>  }
>
>  impl ZfsBootdiskOptions {
> @@ -202,12 +202,12 @@ pub enum AdvancedBootdiskOptions {
>  Btrfs(BtrfsBootdiskOptions),
>  }
>
> -#[derive(Clone, Debug, Eq, PartialEq)]
> +#[derive(Clone, Debug, PartialEq)]
>  pub struct Disk {
>  pub index: String,
>  pub path: String,
>  pub model: Option<String>,
> -pub size: u64,
> +pub size: f64,
>  }
>
>  impl fmt::Display for Disk {
> @@ -219,11 +219,7 @@ impl fmt::Display for Disk {
>  // FIXME: ellipsize too-long names?
>  write!(f, " ({model})")?;
>  }
> -write!(
> -f,
> -" ({:.2} GiB)",
> -(self.size as f64) / 1024. / 1024. / 1024.
> -)
> +write!(f, " ({:.2} GiB)", self.size)
>  }
>  }
>
> @@ -233,6 +229,8 @@ impl From<&Disk> for String {
>  }
>  }
>
> +impl cmp::Eq for Disk {}
> +
>  impl cmp::PartialOrd for Disk {
> fn partial_cmp(&self, other: &Self) -> Option<cmp::Ordering> {
> self.index.partial_cmp(&other.index)
> diff --git a/proxmox-tui-installer/src/setup.rs b/proxmox-tui-installer/src/setup.rs
> index 43e4b0d..1c5ff3e 100644
> --- a/proxmox-tui-installer/src/setup.rs
> +++ b/proxmox-tui-installer/src/setup.rs
> @@ -109,15 +109,15 @@ pub struct InstallConfig {
>
>  #[serde(serialize_with = "serialize_fstype")]
>  filesys: FsType,
> -hdsize: u64,
> +hdsize: f64,
>  #[serde(skip_serializing_if = "Option::is_none")]
> -swapsize: Option<u64>,
> +swapsize: Option<f64>,
>  #[serde(skip_serializing_if = "Option::is_none")]
> -maxroot: Option<u64>,
> +maxroot: Option<f64>,
>  #[serde(skip_serializing_if = "Option::is_none")]
> -minfree: Option<u64>,
> +minfree: Option<f64>,
>  #[serde(skip_serializing_if = "Option::is_none")]
> -maxvz: Option<u64>,
> +maxvz: Option<f64>,
>
>  #[serde(skip_serializing_if = "Option::is_none")]
>  zfs_opts: Option,
> @@ -153,7 +153,7 @@ impl From for InstallConfig {
>  autoreboot: options.autoreboot as usize,
>
>  filesys: options.bootdisk.fstype,
> -hdsize: 0,
> +hdsize: 0.,
>  swapsize: None,
>  maxroot: None,
>  minfree: None,
> @@ -243,13 +243,13 @@ fn deserialize_disks_map<'de, D>(deserializer: D) -> Result<Vec<Disk>, D::Error>
>  where
>      D: Deserializer<'de>,
>  {
> -    let disks = <Vec<(usize, String, u64, String, u64, String)>>::deserialize(deserializer)?;
> +    let disks = <Vec<(usize, String, f64, String, f64, String)>>::deserialize(deserializer)?;
>  Ok(disks
>  .into_iter()
>  .map(
>  |(index, device, size_mb, model, logical_bsize, _syspath)| Disk {
>  index: index.to_string(),
> - 

[pve-devel] [PATCH installer 0/2] tui: make multi-disk selection view scrollable

2023-06-22 Thread Christoph Heiss
Small improvement; as in the title.

The second patch also does as it says on the tin, as otherwise weird
layouts can happen on small screens/cramped views.

Christoph Heiss (2):
  tui: wrap multi-disk selection in scrollable view
  tui: disable automatic text wrapping for form labels

 proxmox-tui-installer/src/views/bootdisk.rs | 9 ++---
 proxmox-tui-installer/src/views/mod.rs  | 2 +-
 2 files changed, 7 insertions(+), 4 deletions(-)

--
2.40.1



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH installer 2/2] tui: disable automatic text wrapping for form labels

2023-06-22 Thread Christoph Heiss
This just causes weird layouts, such that labels and inputs do not line
up anymore.

Signed-off-by: Christoph Heiss 
---
 proxmox-tui-installer/src/views/mod.rs | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/proxmox-tui-installer/src/views/mod.rs b/proxmox-tui-installer/src/views/mod.rs
index ee96398..14d3209 100644
--- a/proxmox-tui-installer/src/views/mod.rs
+++ b/proxmox-tui-installer/src/views/mod.rs
@@ -305,7 +305,7 @@ impl FormView {
 }
 
     pub fn add_child(&mut self, label: &str, view: impl View) {
-self.add_to_column(0, TextView::new(format!("{label}: ")));
+self.add_to_column(0, TextView::new(format!("{label}: ")).no_wrap());
 self.add_to_column(1, view);
 }
 
-- 
2.40.1



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH installer 1/2] tui: wrap multi-disk selection in scrollable view

2023-06-22 Thread Christoph Heiss
If lots of disks are present and the available screen size is rather
small, it might be impossible for users to properly set all disks as
they want.

Fix it by making the view scrollable.

Signed-off-by: Christoph Heiss 
---
 proxmox-tui-installer/src/views/bootdisk.rs | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/proxmox-tui-installer/src/views/bootdisk.rs b/proxmox-tui-installer/src/views/bootdisk.rs
index 09e6803..775ffa4 100644
--- a/proxmox-tui-installer/src/views/bootdisk.rs
+++ b/proxmox-tui-installer/src/views/bootdisk.rs
@@ -2,7 +2,9 @@ use std::{cell::RefCell, marker::PhantomData, rc::Rc};
 
 use cursive::{
 view::{Nameable, Resizable, ViewWrapper},
-    views::{Button, Dialog, DummyView, LinearLayout, NamedView, Panel, SelectView, TextView},
+    views::{
+        Button, Dialog, DummyView, LinearLayout, NamedView, Panel, ScrollView,
+        SelectView, TextView,
+    },
 Cursive, View,
 };
 
@@ -268,7 +270,7 @@ impl MultiDiskOptionsView {
 let disk_select_view = LinearLayout::vertical()
 .child(TextView::new("Disk setup").center())
 .child(DummyView)
-.child(disk_form);
+.child(ScrollView::new(disk_form));
 
 let options_view = LinearLayout::vertical()
 .child(TextView::new("Advanced options").center())
@@ -306,7 +308,8 @@ impl MultiDiskOptionsView {
             .get_child(0)?
             .downcast_ref::<LinearLayout>()?
             .get_child(2)?
-            .downcast_ref::<FormView>()?;
+            .downcast_ref::<ScrollView<FormView>>()?
+            .get_inner();

         for i in 0..disk_form.len() {
             let disk = disk_form.get_value::<SelectView<Rc<Disk>>, _>(i)?;
-- 
2.40.1



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH installer] tui: switch to `f64` for disk sizes

2023-06-22 Thread Stefan Sterz
previously the tui used `u64` internally to represent the disk size.
since the perl-based installer expects GiB as floats and that is also
what is displayed in the tui that meant a lot of converting back and
forth. it also led to an error where the disk sizes that were set
seemed to not have been persisted, even though the sizes were
correctly set. this commit refactors the installer to convert the size
once in the beginning and then stick to `f64`.

Signed-off-by: Stefan Sterz 
---
 proxmox-tui-installer/src/options.rs   | 26 --
 proxmox-tui-installer/src/setup.rs | 16 
 proxmox-tui-installer/src/views/mod.rs | 18 +++---
 3 files changed, 27 insertions(+), 33 deletions(-)

diff --git a/proxmox-tui-installer/src/options.rs b/proxmox-tui-installer/src/options.rs
index 5f3d295..e1218df 100644
--- a/proxmox-tui-installer/src/options.rs
+++ b/proxmox-tui-installer/src/options.rs
@@ -86,11 +86,11 @@ pub const FS_TYPES: &[FsType] = {
 
 #[derive(Clone, Debug)]
 pub struct LvmBootdiskOptions {
-pub total_size: u64,
-pub swap_size: Option<u64>,
-pub max_root_size: Option<u64>,
-pub max_data_size: Option<u64>,
-pub min_lvm_free: Option<u64>,
+pub total_size: f64,
+pub swap_size: Option<f64>,
+pub max_root_size: Option<f64>,
+pub max_data_size: Option<f64>,
+pub min_lvm_free: Option<f64>,
 }
 
 impl LvmBootdiskOptions {
@@ -107,7 +107,7 @@ impl LvmBootdiskOptions {
 
 #[derive(Clone, Debug)]
 pub struct BtrfsBootdiskOptions {
-pub disk_size: u64,
+pub disk_size: f64,
 }
 
 impl BtrfsBootdiskOptions {
@@ -180,7 +180,7 @@ pub struct ZfsBootdiskOptions {
 pub compress: ZfsCompressOption,
 pub checksum: ZfsChecksumOption,
 pub copies: usize,
-pub disk_size: u64,
+pub disk_size: f64,
 }
 
 impl ZfsBootdiskOptions {
@@ -202,12 +202,12 @@ pub enum AdvancedBootdiskOptions {
 Btrfs(BtrfsBootdiskOptions),
 }
 
-#[derive(Clone, Debug, Eq, PartialEq)]
+#[derive(Clone, Debug, PartialEq)]
 pub struct Disk {
 pub index: String,
 pub path: String,
 pub model: Option<String>,
-pub size: u64,
+pub size: f64,
 }
 
 impl fmt::Display for Disk {
@@ -219,11 +219,7 @@ impl fmt::Display for Disk {
 // FIXME: ellipsize too-long names?
 write!(f, " ({model})")?;
 }
-write!(
-f,
-" ({:.2} GiB)",
-(self.size as f64) / 1024. / 1024. / 1024.
-)
+write!(f, " ({:.2} GiB)", self.size)
 }
 }
 
@@ -233,6 +229,8 @@ impl From<&Disk> for String {
 }
 }
 
+impl cmp::Eq for Disk {}
+
 impl cmp::PartialOrd for Disk {
     fn partial_cmp(&self, other: &Self) -> Option<cmp::Ordering> {
         self.index.partial_cmp(&other.index)
diff --git a/proxmox-tui-installer/src/setup.rs b/proxmox-tui-installer/src/setup.rs
index 43e4b0d..1c5ff3e 100644
--- a/proxmox-tui-installer/src/setup.rs
+++ b/proxmox-tui-installer/src/setup.rs
@@ -109,15 +109,15 @@ pub struct InstallConfig {
 
 #[serde(serialize_with = "serialize_fstype")]
 filesys: FsType,
-hdsize: u64,
+hdsize: f64,
 #[serde(skip_serializing_if = "Option::is_none")]
-swapsize: Option<u64>,
+swapsize: Option<f64>,
 #[serde(skip_serializing_if = "Option::is_none")]
-maxroot: Option<u64>,
+maxroot: Option<f64>,
 #[serde(skip_serializing_if = "Option::is_none")]
-minfree: Option<u64>,
+minfree: Option<f64>,
 #[serde(skip_serializing_if = "Option::is_none")]
-maxvz: Option<u64>,
+maxvz: Option<f64>,
 
 #[serde(skip_serializing_if = "Option::is_none")]
 zfs_opts: Option,
@@ -153,7 +153,7 @@ impl From for InstallConfig {
 autoreboot: options.autoreboot as usize,
 
 filesys: options.bootdisk.fstype,
-hdsize: 0,
+hdsize: 0.,
 swapsize: None,
 maxroot: None,
 minfree: None,
@@ -243,13 +243,13 @@ fn deserialize_disks_map<'de, D>(deserializer: D) -> Result<Vec<Disk>, D::Error>
 where
     D: Deserializer<'de>,
 {
-    let disks = <Vec<(usize, String, u64, String, u64, String)>>::deserialize(deserializer)?;
+    let disks = <Vec<(usize, String, f64, String, f64, String)>>::deserialize(deserializer)?;
 Ok(disks
 .into_iter()
 .map(
 |(index, device, size_mb, model, logical_bsize, _syspath)| Disk {
 index: index.to_string(),
-size: size_mb * logical_bsize,
+size: (size_mb * logical_bsize) / 1024. / 1024. / 1024.,
 path: device,
 model: (!model.is_empty()).then_some(model),
 },
diff --git a/proxmox-tui-installer/src/views/mod.rs b/proxmox-tui-installer/src/views/mod.rs
index ee96398..faa0052 100644
--- a/proxmox-tui-installer/src/views/mod.rs
+++ b/proxmox-tui-installer/src/views/mod.rs
@@ -189,17 +189,15 @@ impl DiskSizeEditView {
 }
 }
 
-pub fn content(mut self, content: u64) -> Self {
-let val = (content as f64) / 1024. / 1024. / 1024.;
-
+pub fn content(mut self, content: f64) -> Self {
     if let Some(view) = self.view.get_child_mut(0).and_then(|v| v.downcast_mut()) {
-*view = 

[pve-devel] [PATCH manager 1/1] use 'pve-eslint' instead of 'eslint'

2023-06-22 Thread Dominik Csapak
since we changed the binary name

Signed-off-by: Dominik Csapak 
---
 www/manager6/Makefile | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 2d884f4a..d19167c2 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -314,13 +314,13 @@ WIDGETKIT=/usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
 all:
 
 .lint-incremental: $(JSSRC)
-   eslint $?
+   pve-eslint $?
touch "$@"
 
 .PHONY: lint
 check: lint
 lint: $(JSSRC)
-   eslint --strict $(JSSRC)
+   pve-eslint --strict $(JSSRC)
touch ".lint-incremental"
 
 pvemanagerlib.js: .lint-incremental OnlineHelpInfo.js $(JSSRC)
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH eslint/manager/wt/pmg-gui/proxmox-backup] change eslint

2023-06-22 Thread Dominik Csapak
from 'eslint' to 'pve-eslint' to avoid a conflict with Debian's 'eslint'
package, which ships the same binary

we have to bump the package and update the dev-dependency in the other
repositories

maybe we can/should also apply this on stable-7/2, so that an upgrade
to 8.x/3.x does not run into issues?

pve-eslint:

Dominik Csapak (1):
  change binary name from 'eslint' to 'pve-eslint'

 debian/links | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

pve-manager:

Dominik Csapak (1):
  use 'pve-eslint' instead of 'eslint'

 www/manager6/Makefile | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

proxmox-widget-toolkit:

Dominik Csapak (1):
  use 'pve-eslint' instead of 'eslint'

 src/Makefile | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

pmg-gui:

Dominik Csapak (1):
  use 'pve-eslint' instead of 'eslint'

 js/Makefile| 4 ++--
 js/mobile/Makefile | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

proxmox-backup:

Dominik Csapak (1):
  use 'pve-eslint' instead of 'eslint'

 www/Makefile | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pmg-gui 1/1] use 'pve-eslint' instead of 'eslint'

2023-06-22 Thread Dominik Csapak
since we changed the binary name

Signed-off-by: Dominik Csapak 
---
 js/Makefile| 4 ++--
 js/mobile/Makefile | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/js/Makefile b/js/Makefile
index fad2bd6..d0f02ff 100644
--- a/js/Makefile
+++ b/js/Makefile
@@ -106,7 +106,7 @@ OnlineHelpInfo.js: /usr/bin/asciidoc-pmg
mv $@.tmp $@
 
 .lint-incremental: ${JSSRC}
-   eslint $?
+   pve-eslint $?
touch "$@"
 
 .PHONY: lint
@@ -114,7 +114,7 @@ lint: .lint-incremental
 
 .PHONY: check
 check: ${JSSRC}
-   eslint --strict ${JSSRC}
+   pve-eslint --strict ${JSSRC}
touch ".lint-incremental"
 
 pmgmanagerlib.js: OnlineHelpInfo.js ${JSSRC}
diff --git a/js/mobile/Makefile b/js/mobile/Makefile
index 3e379d2..e63f179 100644
--- a/js/mobile/Makefile
+++ b/js/mobile/Makefile
@@ -10,7 +10,7 @@ MOBILESRC=\
  app.js\
 
 lint: pmgmanagerlib-mobile.js
-   eslint $^
+   pve-eslint $^
 
 .PHONY: check
 check: lint
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH eslint 1/1] change binary name from 'eslint' to 'pve-eslint'

2023-06-22 Thread Dominik Csapak
so that we don't conflict with 'eslint' package in debian, which ships
the same binary

Signed-off-by: Dominik Csapak 
---
 debian/links | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/debian/links b/debian/links
index 99342ed..0a1546f 100644
--- a/debian/links
+++ b/debian/links
@@ -1 +1 @@
-usr/share/nodejs/pve-eslint/bin/app.js usr/bin/eslint
+usr/share/nodejs/pve-eslint/bin/app.js usr/bin/pve-eslint
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH proxmox-backup 1/1] use 'pve-eslint' instead of 'eslint'

2023-06-22 Thread Dominik Csapak
since we changed the binary name

Signed-off-by: Dominik Csapak 
---
 www/Makefile | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/www/Makefile b/www/Makefile
index 476c80b6..bc1fd6f2 100644
--- a/www/Makefile
+++ b/www/Makefile
js/proxmox-backup-gui.js: .lint-incremental js OnlineHelpInfo.js ${JSSRC}
 
 .PHONY: check
 check:
-   eslint --strict ${JSSRC}
+   pve-eslint --strict ${JSSRC}
touch ".lint-incremental"
 
 .lint-incremental: ${JSSRC}
-   eslint $?
+   pve-eslint $?
touch "$@"
 
 .PHONY: clean
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH widget-toolkit 1/1] use 'pve-eslint' instead of 'eslint'

2023-06-22 Thread Dominik Csapak
since we changed the binary name

Signed-off-by: Dominik Csapak 
---
 src/Makefile | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/Makefile b/src/Makefile
index 7cff5dd..d312308 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -110,14 +110,14 @@ all: $(SUBDIRS)
set -e && for i in $(SUBDIRS); do $(MAKE) -C $$i; done
 
 .lint-incremental: $(JSSRC)
-   eslint $?
+   pve-eslint $?
touch "$@"
 
 .PHONY: lint
 check: lint
-   eslint --strict api-viewer/APIViewer.js
+   pve-eslint --strict api-viewer/APIViewer.js
 lint: $(JSSRC)
-   eslint --strict $(JSSRC)
+   pve-eslint --strict $(JSSRC)
touch ".lint-incremental"
 
 BUILD_TIME=$(or $(SOURCE_DATE_EPOCH),$(shell date '+%s.%N'))
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH installer] tui: multiply the disk size back into bytes

2023-06-22 Thread Thomas Lamprecht
Am 22/06/2023 um 08:56 schrieb Stefan Sterz:
> On 21.06.23 16:36, Thomas Lamprecht wrote:
>> Am 21/06/2023 um 16:00 schrieb Stefan Sterz:
>>> previously the installer correctly divided the value when using them
>>> for the `FloatEditView`, but forgot to multiply the value again when
>>> retrieving it after editing. this commit fixes that
>>>
>>> Signed-off-by: Stefan Sterz 
>>> ---
>>> tested this only locally and didn't build the installer completelly.
>>> i am not sure if the installer handles this value correctly once it
>>> is forwarded to the perl installer. if the perl installer expects
>>> bytes here, it should work just fine, though.
>>
>> no it doesn't it expects Gigabyte in floats, see:
>> https://git.proxmox.com/?p=pve-installer.git;a=commitdiff;h=9a2d64977f73cec225c407ff13765ef02e2ff9e9
>>
> 
> alright, thanks for that, i am not too familiar with this code base ^^'.
> should we then model these sizes as `f64` instead?

tbh. I thought Christoph already switched all of those over to f64.

> 
> i'd go ahead and prepare a patch with that, but it's a bit more churn so
> i want to make sure that's the way to go.

I mean, in the end we need to be able to get a float of GB, so either we
use an f64 here, which ensures all of the code doing the actual installation
needs no change and thus has no regression potential, or we switch the
installation config over to megabytes (which would be enough for all
practicable disk sizes, especially as we expose only two decimal places
anyway).




___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH installer] tui: multiply the disk size back into bytes

2023-06-22 Thread Stefan Sterz
On 21.06.23 16:36, Thomas Lamprecht wrote:
> Am 21/06/2023 um 16:00 schrieb Stefan Sterz:
>> previously the installer correctly divided the value when using them
>> for the `FloatEditView`, but forgot to multiply the value again when
>> retrieving it after editing. this commit fixes that
>>
>> Signed-off-by: Stefan Sterz 
>> ---
>> tested this only locally and didn't build the installer completelly.
>> i am not sure if the installer handles this value correctly once it
>> is forwarded to the perl installer. if the perl installer expects
>> bytes here, it should work just fine, though.
> 
> no it doesn't it expects Gigabyte in floats, see:
> https://git.proxmox.com/?p=pve-installer.git;a=commitdiff;h=9a2d64977f73cec225c407ff13765ef02e2ff9e9
> 

alright, thanks for that, i am not too familiar with this code base ^^'.
should we then model these sizes as `f64` instead?

i'd go ahead and prepare a patch with that, but it's a bit more churn so
i want to make sure that's the way to go.


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel