On Tuesday, 18 October 2022 at 08:39 +0200, Thomas Lamprecht wrote:
>
> We plan to have such functionality in the datacenter manager, as that
> should provide a better way to manage such remotes and interfacing; in
> PVE it would be bolted on and would require managing this on every host.
On 17/10/2022 at 20:09, Stoiko Ivanov wrote:
> While discussing whether to push the 2.1.6 packages further we noticed
> that a small glitch happened on debian-upstream.
> Our build should not be affected (we don't ship init-scripts), but syncing
> the changes could help in a few corner cases (user
Hi,
On 17/10/2022 at 16:40, DERUMIER, Alexandre wrote:
>> an example invocation:
>>
>> $ qm remote-migrate 1234 4321
> 'host=123.123.123.123,apitoken=pveapitoken=user@pve!incoming=----,fingerprint=aa:bb:cc:dd:ee:ff:aa:bb:cc:dd:ee:ff:aa:bb:cc:dd:ee:ff:aa:bb:cc:dd:e
I think cleanup of the VM on the remote storage is missing
if the migration is aborted in phase 1 or phase 2.
It is safe to delete the VM config && disks on the remote host until the
end of phase 2, when the disks are switched over.
If the job is aborted, it triggers "mtunnel exited unexpectedly" on the
remote host, without any cleanup
While checking the current state of 2.1.6 we noticed that there were
some changes in debian-upstream [0] resulting from a bug-report in
zfs-upstream [1].
Our packages should be unaffected (they do not ship the
init-scripts in the first place).
Since the issue was fixed by zfs-upstream already on t
following upstream in shipping it as a symlink to /dev/null (to mask it);
this follows commit b18419d7068b7ebcaa6dfbee85263177feffa711 from
debian-upstream:
https://salsa.debian.org/zfsonlinux-team/zfs/
Signed-off-by: Stoiko Ivanov
---
debian/zfsutils-linux.install | 1 +
1 file changed, 1 insertion(+)
While discussing whether to push the 2.1.6 packages further we noticed
that a small glitch happened on debian-upstream.
Our build should not be affected (we don't ship init-scripts), but syncing
the changes could help in a few corner cases (user installed 2.1.6-1 from
debian-upstream and then chang
On 28/09/22 at 14:50, Fabian Grünbichler wrote:
> which wraps the remote_migrate_vm API endpoint, but does the
> precondition checks that can be done up front itself.
>
> this now just leaves the FP retrieval and target node name lookup to the
> sync part of the API endpoint, which should be do-
Hi Fabian,
> an example invocation:
>
> $ qm remote-migrate 1234 4321
'host=123.123.123.123,apitoken=pveapitoken=user@pve!incoming=----,fingerprint=aa:bb:cc:dd:ee:ff:aa:bb:cc:dd:ee:ff:aa:bb:cc:dd:ee:ff:aa:bb:cc:dd:ee:ff:aa:bb:cc:dd:ee:ff:aa:bb'
--target-bridge v
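As an aside, the long endpoint argument above is a PVE-style property string: comma-separated key=value pairs where only the first '=' in each pair separates key from value (the apitoken value itself contains '=' and '!'). A minimal illustrative parser, assuming no quoting is involved (real PVE property strings also support quoted values, which this sketch ignores):

```javascript
// Illustrative parser for a property string such as the
// remote-migrate endpoint spec: split on ',' into pairs, then split
// each pair on the FIRST '=' only, since values may contain '='.
function parsePropertyString(spec) {
    const props = {};
    for (const part of spec.split(',')) {
        const idx = part.indexOf('=');
        if (idx === -1) {
            throw new Error(`malformed property: ${part}`);
        }
        props[part.slice(0, idx)] = part.slice(idx + 1);
    }
    return props;
}

// hypothetical example values, for illustration only
const example =
    'host=123.123.123.123,apitoken=PVEAPIToken=user@pve!incoming=secret,fingerprint=aa:bb:cc';
const parsed = parsePropertyString(example);
console.log(parsed.host);     // → '123.123.123.123'
console.log(parsed.apitoken); // → 'PVEAPIToken=user@pve!incoming=secret'
```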
LGTM, some comments inline (mostly nitpicks/questions)
On 7/6/22 15:01, Aaron Lauterer wrote:
To get more details for a single OSD, we add two new endpoints:
* nodes/{node}/ceph/osd/{osdid}/metadata
* nodes/{node}/ceph/osd/{osdid}/lv-info
The {osdid} endpoint itself gets a new GET handler to ret
high level looks mostly good, a small question:
is there a special reason why we ignore pre-lvm osds here?
AFAICS, we simply error out for osds that don't live on lvm
(though we can add additional types later i guess)
comments in the individual patches
comments inline:
On 7/6/22 15:01, Aaron Lauterer wrote:
This new window provides more details about an OSD, such as:
* PID
* Memory usage
* various metadata that could be of interest
* list of physical disks used for the main disk, DB and WAL, with
  additional info about the volumes for each
overall, i'd like the renderer to be a bit more robust.
a small change in output results in nothing showing at all.
i'd try to parse it as well as possible, but fall back to the 'raw' value
in case it fails. that way the user can at least see what ceph returned
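The suggested fallback can be sketched as follows; `renderOsdMemory` and the `mem_total` field are hypothetical names, used only to illustrate the parse-then-fall-back-to-raw pattern:

```javascript
// Defensive renderer sketch: try to pretty-print the value ceph
// returned, but if the expected shape is missing or parsing fails,
// show the raw value instead of rendering nothing at all.
function renderOsdMemory(value) {
    try {
        const parsed = typeof value === 'string' ? JSON.parse(value) : value;
        if (parsed && typeof parsed.mem_total === 'number') {
            // expected shape -> render human-readable MiB
            return `${(parsed.mem_total / (1024 * 1024)).toFixed(1)} MiB`;
        }
    } catch (err) {
        // parse error -> fall through to the raw value
    }
    // fallback: let the user at least see what ceph returned
    return String(value);
}

console.log(renderOsdMemory('{"mem_total":1048576}')); // → '1.0 MiB'
console.log(renderOsdMemory('unexpected output'));     // → 'unexpected output'
```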
On 7/6/22 15:01, Aaron Lauterer wrote:
as talked off-list, we can omit the whole patch
by using the rstore.load()'s callback directly
(we already have to go into the internals of the objectgrid there,
so we can use 'rstore.load()' directly too instead of 'reload')
On 7/6/22 15:01, Aaron Lauterer wrote:
Signed-off-by: Aaron Lauterer
The option to set the mtu parameter for lxc containers already exists
in the backend. It only had to be exposed in the web UI as well.
Signed-off-by: Daniel Tschlatscher
---
www/manager6/lxc/Network.js | 14 ++
1 file changed, 14 insertions(+)
diff --git a/www/manager6/lxc/Network.j
The new MTU field and the rate limit field are now in the advanced
section of the NetworkInputPanel to parallel the layout of the
NetworkEdit for VMs.
Signed-off-by: Daniel Tschlatscher
---
While the layout now is the same as in the VM NetworkEdit, I just feel
like the advanced column does not c
Changing the read-only status of a disk is not possible through QMP, so
it needs to be exempted from the hotpluggable values in order to notify
the user.
Signed-off-by: Leo Nunner
---
PVE/QemuServer.pm | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServ
The documentation in include/io/channel.h states that -1 or
QIO_CHANNEL_ERR_BLOCK should be returned upon error. Simply passing
along the return value from the blk-functions has the potential to
confuse the call sites. Non-blocking mode is not implemented
currently, so -1 it is.
The "return ret" w
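The fix described boils down to normalizing every backend error to the single documented sentinel at the channel boundary. A language-neutral sketch of that principle (in JavaScript for illustration; in QEMU itself this lives in the C channel implementation):

```javascript
// The io/channel contract: callers understand only a non-negative
// byte count, -1 (generic error), or QIO_CHANNEL_ERR_BLOCK reserved
// for non-blocking mode. Since non-blocking mode is not implemented,
// collapse every negative backend return value to -1 rather than
// passing backend-specific negative codes through to call sites.
function normalizeChannelReturn(ret) {
    return ret < 0 ? -1 : ret;
}
```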