On 10/20/22 14:55, Thomas Lamprecht wrote:
> Am 20/10/2022 um 09:17 schrieb Stefan Sterz:
>> since Ceph Luminous (Ceph 12), pools need to be associated with at
>> least one application. Expose this information here too so that clients
>> of this endpoint can use it
>>
>> Signed-off-by: Stefan Sterz
Hi,
This is to avoid having an IPv6 link-local address on every generated
tap interface (and fwbr bridges too),
and having bad packets sent to the network.
This, of course, doesn't disable IPv6 support inside the VM/CT.
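As a rough illustration (hypothetical helper, not the actual patch, which operates on tap/fwbr devices from Perl), the per-interface sysctl write this describes could look like:

```python
from pathlib import Path

def disable_ipv6(iface: str, proc_root: str = "/proc") -> bool:
    """Disable IPv6 on a single interface via its per-interface sysctl.

    Sketch only: proc_root is parameterized purely so the logic can be
    exercised outside a live system.
    """
    knob = Path(proc_root) / "sys/net/ipv6/conf" / iface / "disable_ipv6"
    if not knob.exists():
        # as noted later in the thread, conf/<iface>/disable_ipv6 may be missing
        return False
    knob.write_text("1\n")
    return True
```

Guarding on the knob's existence sidesteps the missing-path case discussed further down in the thread.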
On Thursday, 20 October 2022 at 17:07, Mark Schouten via pve-devel wrote:
--- Begin Message ---
Hi,
Sorry, but I always get extremely triggered by functions called
‘disable_ipv6()’.
Can someone hit me with a cluebat as to why that function even exists?
(We deploy Proxmox without IPv4, so anywhere IPv6 is actively
disabled will break things for us.)
—
I'm really unable to reproduce this.
The user is able to reproduce it 100%, depending on the bridge where the
VM is started
(some bridge generated with SDN, for example).
I haven't asked the user to reboot.
ifupdown2 seems to throw a warning too, so I don't know if it's a special
sysctl trigg
If the config doesn't contain the cloud-init disk anymore after the
rollback, we have to clean it up since otherwise no further disk can be
attached unless the one still existing on the storage is deleted.
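A minimal sketch of that cleanup idea (hypothetical names; the real code is Perl and uses PVE's add_unused_volume):

```python
def cleanup_cloudinit_after_rollback(config: dict, volumes_on_storage: list) -> dict:
    """If the rolled-back config no longer references a cloud-init volume
    that still exists on storage, record it as unusedN so it can be
    cleaned up -- otherwise no further cloud-init disk can be attached.

    Illustrative sketch of the idea, not the actual implementation.
    """
    referenced = set(config.values())
    unused_idx = sum(1 for k in config if k.startswith("unused"))
    for vol in volumes_on_storage:
        if vol.endswith("cloudinit") and vol not in referenced:
            config[f"unused{unused_idx}"] = vol
            unused_idx += 1
    return config
```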
Signed-off-by: Mira Limbeck
---
v2:
- chose the add_unused_volume way as @fiona recommended
To get more details for a single OSD, we add two new endpoints:
* nodes/{node}/ceph/osd/{osdid}/metadata
* nodes/{node}/ceph/osd/{osdid}/lv-info
The {osdid} endpoint itself gets a new GET handler to return the index.
The metadata one provides various metadata regarding the OSD, such as:
* process
Render the OSD listening addresses a bit nicer and one per line.
Signed-off-by: Aaron Lauterer
---
changes since v2:
- improve and simplify the first preparation steps
- if regex matching fails, show the raw value
www/manager6/Utils.js | 15 +++
1 file changed, 15 insertions(+)
dif
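The regex-with-fallback approach described above might look like this (sketched in Python rather than the ExtJS code the patch touches; the address format is an assumption based on Ceph's v1/v2 messenger notation):

```python
import re

def render_osd_addresses(raw: str) -> str:
    """Render OSD listening addresses one per line.

    Assumes input like "[v2:10.1.1.1:6800/12345,v1:10.1.1.1:6801/12345]";
    if the pattern doesn't match, show the raw value unchanged.
    """
    addrs = re.findall(r"v[12]:[^,\[\]]+", raw)
    if not addrs:
        return raw  # regex matching failed -> fall back to the raw value
    return "\n".join(addrs)
```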
This patch series adds 2 new API endpoints for OSDs to fetch more
detailed information about a single OSD. One for overall information and
one for a single volume (block, db, wal).
More in the actual patches.
Changes since v2:
- drop widget-toolkit patch
- implement suggestions received on v2
reph
This new window provides more details about an OSD, such as:
* PID
* memory usage
* various metadata that could be of interest
* list of physical disks used for the main disk, DB and WAL, with
additional info about the volumes for each
A new 'Details' button is added to the OSD overview and a d
intended as a replacement for my previous patch: [0]
while we may not want users to login into a non-quorate cluster,
preventing it as a side-effect of locking the tfa config is wrong.
currently there is only one situation where we actually need to lock
the tfa config, namely when using recovery
just above, we check & return if $tfa_challenge is set, so there is no
way that it would be set here. To make it clearer that it must be undef
here, just omit it in the call.
Signed-off-by: Dominik Csapak
---
src/PVE/AccessControl.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --
we currently only need to lock the tfa config when we got a recovery key
as a tfa challenge response, since that's the only thing that can
actually change the tfa config (every other method only reads from
there).
so to do that, factor out the code that was inside the lock, and call it
with/withou
since that is what it really is, not only an OTP
Signed-off-by: Dominik Csapak
---
src/PVE/AccessControl.pm | 26 +-
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/src/PVE/AccessControl.pm b/src/PVE/AccessControl.pm
index c45188c..d83dee2 100644
--- a/src/
Am 20/10/2022 um 09:17 schrieb Stefan Sterz:
> previously the UI would allow adding all pools (even the default
> ceph-mon pools) as storage. This could lead to issues when users
> used these pools as storage (e.g. VMs missing their disks after a
> migration). Hence, restrict the pool selector
Am 20/10/2022 um 09:17 schrieb Stefan Sterz:
> since Ceph Luminous (Ceph 12), pools need to be associated with at
> least one application. Expose this information here too so that clients
> of this endpoint can use it
>
> Signed-off-by: Stefan Sterz
> ---
> v3: add an api viewer entry for the appli
Am 19/10/2022 um 12:35 schrieb Aaron Lauterer:
> It has been possible for quite a while to live-migrate replicated
> guests.
>
> Signed-off-by: Aaron Lauterer
> ---
> pvesr.adoc | 1 -
> 1 file changed, 1 deletion(-)
>
>
applied, thanks!
Am 18/10/2022 um 11:20 schrieb Fabian Grünbichler:
> this series implements filtering based on package section (exact match)
> or package name (glob), and extends mirroring support to source
> packages/deb-src repositories.
>
> technically the first patch in proxmox-apt is a breaking change, but t
Signed-off-by: Markus Frank
---
qm.adoc | 8
1 file changed, 8 insertions(+)
diff --git a/qm.adoc b/qm.adoc
index 4d0c7c4..38bc788 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -693,6 +693,14 @@ Selecting `serialX` as display 'type' disables the VGA
output, and redirects
the Web Console to th
Signed-off-by: Markus Frank
---
.../patches/0019-show-clipboard-button.patch | 31 +++
debian/patches/series | 1 +
2 files changed, 32 insertions(+)
create mode 100644 debian/patches/0019-show-clipboard-button.patch
diff --git a/debian/patches/0019-sho
Signed-off-by: Markus Frank
---
www/manager6/qemu/DisplayEdit.js | 9 +
1 file changed, 9 insertions(+)
diff --git a/www/manager6/qemu/DisplayEdit.js b/www/manager6/qemu/DisplayEdit.js
index 9bb1763e..77434b7e 100644
--- a/www/manager6/qemu/DisplayEdit.js
+++ b/www/manager6/qemu/DisplayE
Adds options to use the QEMU vdagent implementation to enable the noVNC
clipboard.
When enabled with SPICE, the spice-vdagent gets replaced with the QEMU
implementation.
This patch does not solve #1406, but does allow copy and paste with
a running X session, when spice-vdagent is installed.
Sign
With that, noVNC is able to check whether the clipboard is active.
Signed-off-by: Markus Frank
---
PVE/API2/Qemu.pm | 6 ++
1 file changed, 6 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 99b426e..25f3a1d 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2428,6 +2428,11 @@
On Thu, Oct 20, 2022 at 12:24:29AM +0200, Alexandre Derumier wrote:
> It's possible to have a
> /proc/sys/net/ipv6/ directory
>
> but no
> /proc/sys/net/ipv6/conf/$iface/disable_ipv6
Do we know why this happens? That doesn't seem right to me, unless
there's some kind of race somewhere with the interface
On 10/20/22 06:07, Alwin Antreich wrote:
> On October 19, 2022 2:16:44 PM GMT+02:00, Stefan Sterz
> wrote:
>> when using a hyper-converged cluster it was previously possible to add
>> the pool used by the ceph-mgr modules (".mgr" since quincy or
>> "device_health_metrics" previously) as an RBD st
previously the UI would allow adding all pools (even the default
ceph-mon pools) as storage. This could lead to issues when users used
these pools as storage (e.g. VMs missing their disks after a
migration). Hence, restrict the pool selector to RBD pools.
fails gracefully by reverting to the p
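Filtering on the application info exposed by the companion API patch could be sketched like this (hypothetical field names, loosely following Ceph's application_metadata; the real filtering lives in the ExtJS pool selector):

```python
def rbd_pools(pools: list) -> list:
    """Keep only pools tagged with the 'rbd' application, so the storage
    selector can't offer e.g. the ceph-mgr pool (".mgr" /
    "device_health_metrics").

    Sketch only, under the assumption that each pool dict carries an
    'application_metadata' mapping as reported by Ceph.
    """
    return [p for p in pools if "rbd" in p.get("application_metadata", {})]
```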
since Ceph Luminous (Ceph 12), pools need to be associated with at
least one application. Expose this information here too so that clients
of this endpoint can use it
Signed-off-by: Stefan Sterz
---
v3: add an api viewer entry for the applications object
thanks @ alwin antreich for pointing out th