applied
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
I also added a screenshot :-)
This just fixes your first patch, so please send a v2 of the first
patch instead (in the future).
applied
applied (with fixed comma).
applied whole series
applied
Signed-off-by: Dominik Csapak
---
PVE/Diskmanage.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index 3433874..5d498ce 100644
--- a/PVE/Diskmanage.pm
+++ b/PVE/Diskmanage.pm
@@ -307,6 +307,7 @@ sub get_wear_leveling_info {
'samsung' => 177,
In my initial patch series for the regression test, I failed to notice
the missing attributes for the SanDisk SSDs (which had not been parsed).
Signed-off-by: Dominik Csapak
---
changes to v1:
* threshold is now 0 instead of ---
test/disk_tests/ssd_smart/disklist_expected.json | 2 +-
test/dis
SanDisk SSDs have a default threshold of '---' on nearly all fields,
which prevents our parsing.
Signed-off-by: Dominik Csapak
---
changes since v1:
* made the distinction and reason for the --- special case more clear
PVE/Diskmanage.pm | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
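The '---' special case described above can be sketched in Python. This is an illustration only: the real fix lives in the Perl code of PVE/Diskmanage.pm, and the sample smartctl line here is made up.

```python
# Hypothetical sketch: map SanDisk's '---' threshold to 0 so the
# numeric conversion of smartctl attribute columns does not fail.
def parse_threshold(field):
    """Return the numeric threshold, treating '---' as 0."""
    if field == '---':
        return 0
    return int(field)

# Made-up smartctl -A style line; column 5 is the threshold.
line = "173 SSD_Life_Left 0x0031 099 099 --- Old_age Offline - 4"
cols = line.split()
print(parse_threshold(cols[5]))  # 0
```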
as we can now parse the wearout
Signed-off-by: Dominik Csapak
---
test/disk_tests/ssd_smart/disklist_expected.json | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/test/disk_tests/ssd_smart/disklist_expected.json
b/test/disk_tests/ssd_smart/disklist_expected.json
index 2eab67
This makes it easier to execute agent commands and restricts them to the allowed set.
---
PVE/API2/Qemu.pm | 17 +
1 file changed, 17 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 500e2df..5c47de7 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2840,6 +284
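The restriction mentioned above can be illustrated with a small whitelist check. This is a hypothetical Python sketch; the command names are assumptions for illustration, not the actual allowed list in PVE/API2/Qemu.pm.

```python
# Hypothetical whitelist of guest-agent commands (assumed names).
ALLOWED_AGENT_COMMANDS = {'ping', 'get-time', 'info', 'fsfreeze-status'}

def check_agent_command(command):
    """Reject any agent command not on the allowed list."""
    if command not in ALLOWED_AGENT_COMMANDS:
        raise ValueError(f"agent command '{command}' not allowed")
    return command

print(check_agent_command('ping'))  # ping
```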
---
www/manager6/lxc/CreateWizard.js | 2 +-
www/manager6/lxc/Options.js | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/www/manager6/lxc/CreateWizard.js b/www/manager6/lxc/CreateWizard.js
index 7c4a722..bd6090d 100644
--- a/www/manager6/lxc/CreateWizard.js
+++ b/www/
The setting is afterwards displayed as a read-only option in the Options tab.
---
www/manager6/lxc/CreateWizard.js | 6 ++
www/manager6/lxc/Options.js | 7 ++-
2 files changed, 12 insertions(+), 1 deletion(-)
diff --git a/www/manager6/lxc/CreateWizard.js b/www/manager6/lxc/CreateWizar
We will use this to document the first tab of the Create CT wizard.
Also move the privileged/unprivileged explanation here, since
the related checkbox will be placed in this tab.
---
pct.adoc | 70
1 file changed, 44 insertions(+),
sorry, the code below was correct - no need to change.
> > diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
> > index a0e0ba5..0599260 100755
> > --- a/PVE/CLI/qm.pm
> > +++ b/PVE/CLI/qm.pm
> > @@ -529,6 +529,8 @@ our $cmddef = {
> >
> > monitor => [ __PACKAGE__, 'monitor', ['vmid']],
> >
> >
applied, but see comments inline:
> __PACKAGE__->register_method({
> +name => 'agent',
> +path => '{vmid}/agent',
> +method => 'POST',
> +protected => 1,
> +proxyto => 'node',
> +description => "Execute Qemu Guest Agent commands.",
> +permissions => {
> + check =>
we want to set/get the flags in the ceph/osd tab, so we have to
return them there
Signed-off-by: Dominik Csapak
---
PVE/API2/Ceph.pm | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 7b1bbd0..3dd1439 100644
--- a/PVE/API2/Ceph.pm
this patch adds a set/unset noout button (for easy maintenance of your
ceph cluster) and reorders the buttons so that global actions (reload,
add osd, set noout) are on the left and osd-specific actions are on the
right. To reduce confusion, there is now a label left of the osd actions
which displays the
we add GET/POST/DELETE API calls for the ceph flags
Signed-off-by: Dominik Csapak
---
changes to v1:
* new get call
* separated set/unset in a post/delete call
PVE/API2/Ceph.pm | 121 +++
1 file changed, 121 insertions(+)
diff --git a/PVE/API2/
changes to v1:
* added a get api call for the ceph flags
* instead of one set/unset call we now have
a post and a delete call on
/nodes/NODENAME/ceph/flags/FLAGNAME
where we set or unset the flag
* changed the gui code to reflect the api change
Dominik Csapak (5):
also return the cep
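The API shape described in the changelog (GET to list flags, POST to set and DELETE to unset a flag under /nodes/NODENAME/ceph/flags/FLAGNAME) can be modeled with a minimal in-memory sketch. This is purely illustrative Python, not the PVE implementation, and the set of known flag names here is an assumption.

```python
# Minimal model of the routing semantics: one resource per flag name,
# with POST/DELETE toggling it and GET listing the currently set flags.
class CephFlags:
    KNOWN = {'noout', 'nobackfill', 'norecover', 'noscrub'}  # assumed subset

    def __init__(self):
        self._set = set()

    def get(self):
        """GET /nodes/NODE/ceph/flags -> list of set flags."""
        return sorted(self._set)

    def post(self, name):
        """POST /nodes/NODE/ceph/flags/NAME -> set the flag."""
        if name not in self.KNOWN:
            raise KeyError(name)
        self._set.add(name)

    def delete(self, name):
        """DELETE /nodes/NODE/ceph/flags/NAME -> unset the flag."""
        self._set.discard(name)

flags = CephFlags()
flags.post('noout')
print(flags.get())   # ['noout']
flags.delete('noout')
print(flags.get())   # []
```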
Signed-off-by: Dominik Csapak
---
www/manager6/ceph/OSD.js | 13 +++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/www/manager6/ceph/OSD.js b/www/manager6/ceph/OSD.js
index 96d398b..3e1935a 100644
--- a/www/manager6/ceph/OSD.js
+++ b/www/manager6/ceph/OSD.js
@@ -188,8 +
to get from the datacenter dashboard to the ceph dashboard faster
Signed-off-by: Dominik Csapak
---
www/css/ext6-pve.css | 3 +++
www/manager6/Workspace.js | 4 +---
www/manager6/dc/Health.js | 26 +-
3 files changed, 29 insertions(+), 4 deletions(-)
diff --git a/
comments inline:
> On November 3, 2016 at 9:06 AM Wolfgang Link wrote:
>
>
> We use this function in 3 different packages with the same code.
>
> It will be moved to the CLIHandler, because we need it only on the command line.
> ---
> src/PVE/CLIHandler.pm | 25 +
> 1 file
> > Both works:
> > 10.1.2.1/24 == 10.1.2.0/24 == 10.1.2.128/24
> >
> > /24 tells us that the last 8 bit are irrelevant and masked away, at least in
> > this case, ip-tools can handle it just fine :)
>
> After thinking about it a bit I do think using .0 is a more consistent
> and sane choice for
This error message replaces the error:
TASK ERROR: command '/bin/nc6 -l -p 5900 -w 10 -e '/usr/bin/ssh -T -o
BatchMode=yes 192.168.16.75 /usr/sbin/qm vncproxy 401 2>/dev/null'' failed:
exit code 1
as seen in the task list.
It is not currently possible to test if a VM is running or not for the wh
applied (with cleanups).
On Tue, Nov 29, 2016 at 04:44:30PM +0100, Thomas Lamprecht wrote:
> Hi,
>
>
> On 11/29/2016 04:34 PM, Alexandre DERUMIER wrote:
> >Hi,
> >
> >+
> >>>+Here we want to use the 10.1.2.1/24 network as migration network.
> >>>+migration: secure,network=10.1.2.1/24
> >I think the network is:
> >
> >10.