> diff --git a/src/PVE/API2/LXC/Makefile b/src/PVE/API2/LXC/Makefile
> index f372d95..dd1b214 100644
> --- a/src/PVE/API2/LXC/Makefile
> +++ b/src/PVE/API2/LXC/Makefile
> @@ -1,8 +1,12 @@
> -SOURCES=Config.pm Status.pm Snapshot.pm
> +SOURCES=Config.pm Status.pm Snapshot.pm Features.pm
> +
> +LXCPRO
applied
applied
This class provides methods for starting and checking the current
status of a fence job.
When a fence job is started we execute a fence agent command.
If we can fork, this happens in a forked worker; there can also be
multiple worker processes when parallel devices are configured.
When a device fails to s
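For illustration, a minimal sketch of the forked-worker approach in
plain Perl; the device addresses and agent arguments below are made
up, and this is not the actual fence job code:

    use strict;
    use warnings;

    # hypothetical agent invocations for two parallel fence devices
    my @commands = (
        ['fence_ipmilan', '--ip', '192.0.2.10', '--action', 'off'],
        ['fence_ipmilan', '--ip', '192.0.2.11', '--action', 'off'],
    );

    my %workers; # pid => command
    for my $cmd (@commands) {
        my $pid = fork() // die "fork failed: $!\n";
        if (!$pid) {             # child: become the fence agent process
            exec(@$cmd) or exit(127);
        }
        $workers{$pid} = $cmd;
    }

    # status check: reap the workers, non-zero exit means the device failed
    while (%workers) {
        my $pid = waitpid(-1, 0);
        last if $pid < 0;
        my $cmd = delete $workers{$pid} or next;
        warn "agent '$cmd->[0]' failed\n" if $? != 0;
    }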
Signed-off-by: Thomas Lamprecht
---
README | 111 +
1 file changed, 111 insertions(+)
diff --git a/README b/README
index 1c5177f..6cd523d 100644
--- a/README
+++ b/README
@@ -72,6 +72,117 @@ works using reliable HW fence devices.
A node can now be fenced with the use of external hardware fence
devices.
Those devices can be configured in /etc/pve/ha/fence.cfg; also, the
fencing option in the datacenter configuration file must be set to
either 'hardware' or 'both', else configured devices will *not* be
used.
If hardware is sele
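As an illustration, a hypothetical /etc/pve/ha/fence.cfg in the
dlm.conf fencing style ('device' and 'connect' lines; the agent name
and its arguments are made up):

    device  ipmi1 fence_ipmilan lanplus=1 login=admin passwd=secret
    connect ipmi1 node=node1 ipaddr=192.0.2.10
    connect ipmi1 node=node2 ipaddr=192.0.2.11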
Add a FenceConfig class which includes methods to parse a config
file for fence devices in the format specified by dlm.conf, see the
Fencing section in the dlm.conf manpage for more details regarding
this format.
With this we can generate commands for fencing a node from the
parsed config file.
W
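A rough sketch of what such parsing and command generation could look
like, assuming the 'device'/'connect' line format from the dlm.conf
manpage (this is not the actual FenceConfig code):

    use strict;
    use warnings;

    # parse dlm.conf-style fencing lines into a per-device hash
    sub parse_fence_config {
        my ($raw) = @_;
        my $devices = {};
        for my $line (split /\n/, $raw) {
            $line =~ s/^\s+|\s+$//g;
            next if $line eq '' || $line =~ /^#/;  # skip blanks and comments
            my ($type, $name, @args) = split /\s+/, $line;
            if ($type eq 'device') {
                my $agent = shift @args;
                $devices->{$name} = { agent => $agent, args => [@args] };
            } elsif ($type eq 'connect') {
                my %opts = map { split(/=/, $_, 2) } @args;
                my $node = delete $opts{node} or die "connect line without node=\n";
                $devices->{$name}->{connects}->{$node} = \%opts;
            } else {
                die "unknown directive '$type'\n";
            }
        }
        return $devices;
    }

    # generate the agent command for fencing a given node
    sub fence_command {
        my ($devices, $devname, $node) = @_;
        my $dev = $devices->{$devname} or die "no such device '$devname'\n";
        my $con = $dev->{connects}->{$node}
            or die "node '$node' not connected to '$devname'\n";
        return [ $dev->{agent}, @{$dev->{args}},
                 map { "$_=$con->{$_}" } sort keys %$con ];
    }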
This adds three hardware-fencing related functions:
* read_fence_config
* fencing_mode
* exec_fence_agent
'read_fence_config' makes it a bit easier to create a common code
base between the real world and the test/sim world regarding fencing.
In PVE2 it parses the config from /etc/pve/ha/fence.cfg
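Hypothetical glue code showing how the three functions could
interact; the method names come from the patch, but the toy
environment class, its bodies and the 'watchdog' mode value are
invented here:

    use strict;
    use warnings;

    package ToyEnv;
    sub new { return bless {}, shift }
    sub read_fence_config { return {} }      # would parse /etc/pve/ha/fence.cfg
    sub fencing_mode      { return 'both' }  # assumed: 'watchdog' | 'hardware' | 'both'
    sub exec_fence_agent  { my ($self, $cfg, $node) = @_; warn "fencing '$node'\n"; }

    package main;
    my $haenv = ToyEnv->new();
    my $mode  = $haenv->fencing_mode();
    if ($mode eq 'hardware' || $mode eq 'both') {
        my $fence_cfg = $haenv->read_fence_config();
        $haenv->exec_fence_agent($fence_cfg, 'node1');
    }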
This class provides methods for starting and checking the current
status of a fence job.
When a fence job is started we execute a fence agent command.
If we can fork, this happens in a forked worker; there can also be
multiple worker processes when parallel devices are configured.
When a device fails to s
Third iteration of the hardware fencing.
changes from v2:
* throw away the hardware class for the PVE2 env, use an Env method instead
* reworked 'allow hardware fencing' patch (nr. 5) a bit
* fixed small error in config parser and writer (thanks wolfgang)
What still needs to be done:
* API integr
We are only allowed to recover (= steal) a service when we hold its
LRM's lock, as this guarantees that even if said LRM comes up again
during the steal operation, it cannot start the service while the
service config still belongs to it for a short time.
This is important, else we have a poss
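A toy illustration of that rule, using flock as a stand-in for the
cluster-wide LRM lock (the lock file path and helper name are
invented):

    use strict;
    use warnings;
    use Fcntl qw(:flock);

    # stealing is only safe while we hold the fenced node's LRM lock
    sub try_steal_service {
        my ($sid, $fenced_node) = @_;
        open(my $fh, '>>', "/run/lock/lrm-$fenced_node.lck") or die "open: $!\n";
        if (!flock($fh, LOCK_EX | LOCK_NB)) {
            return 0;   # lock still held - the LRM may be alive, do not steal
        }
        # ... move the service config to the new node while holding the lock ...
        close($fh);     # releases the lock
        return 1;
    }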
applied
applied all 4 patches, thanks!
applied
applied, but this patch has some coding style errors - see comments
> www/manager6/dc/Summary.js | 2 +-
> www/manager6/form/ComboGrid.js | 4 ++--
> www/manager6/lxc/Config.js | 2 +-
> www/manager6/node/Config.js | 2 +-
> www/manager6/panel/LogView.js | 4 +
applied
applied
> * usability improvement for enabled buttons:
I am unable to see any effect (please show me tomorrow).
applied
On 03/14/2016 04:33 PM, Dietmar Maurer wrote:
>> To summarize the possible states:
>> * the node is fenced and stays until someone comes and checks it
>> (through switch, power, ... - fencing) - here we can do everything with
>> the lock we want
>> * the node comes back immediately (reset) becaus
> To summarize the possible states:
> * the node is fenced and stays until someone comes and checks it
> (through switch, power, ... - fencing) - here we can do everything with
> the lock we want
> * the node comes back immediately (reset) because someone thought this
> was a good way to setup the
Signed-off-by: Dominik Csapak
---
www/manager6/Toolkit.js | 4 ++--
www/manager6/grid/PoolMembers.js | 6 +++---
www/manager6/storage/ContentView.js | 2 +-
3 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/www/manager6/Toolkit.js b/www/manager6/Toolkit.js
index 0976fb
this removes the two overrides for ExtJS 4
Signed-off-by: Dominik Csapak
---
I tested a lot, but did not find any case where these
two hacks are still necessary; if somebody
finds something that no longer works,
please send a message, and we should check if we can avoid
these hacks, oth
change show event to activate
Signed-off-by: Dominik Csapak
---
www/manager6/grid/PoolMembers.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/www/manager6/grid/PoolMembers.js b/www/manager6/grid/PoolMembers.js
index 932f475..ee50933 100644
--- a/www/manager6/grid/PoolMembe
fix the double loadMask (it exists because ExtJS has a default
load mask for grid panels);
also move static configuration to the class definition
Signed-off-by: Dominik Csapak
---
www/manager6/storage/ContentView.js | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/www/man
create_base() uses '-ky' to prevent base images from being
activated by default, similar to snapshots. This means we
need to activate them like snapshots with the '-K' option.
---
Without this you cannot clone template VMs with disks on LVM-thin.
On a related note: We need to special-case the clon
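For illustration, what the activation call could look like with
PVE::Tools::run_command on a PVE host (the volume names are made up;
this is not the actual storage plugin code):

    use strict;
    use warnings;
    use PVE::Tools qw(run_command);

    # base images carry the activation-skip flag (created with '-ky'),
    # so plain '-ay' is not enough - '-K' overrides the skip flag
    my $vg = 'pve';
    my $lv = 'base-100-disk-1';   # hypothetical base volume
    run_command(['/sbin/lvchange', '-ay', '-K', "$vg/$lv"],
        errmsg => "activating base volume '$vg/$lv' failed");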
On 03/12/2016 01:39 PM, Dietmar Maurer wrote:
>> We are only allowed to recover (=steal) a service when we have its
>> LRMs lock, as this guarantees us that even if said LRM comes up
>> again during the steal operation the LRM cannot start the services
>> when the service config still belongs to
This introduces a 'features' option which currently contains
three AppArmor-related options: netmount, blockmount and
nesting. These change the container's AppArmor profile and
are thus incompatible with a custom 'lxc.aa_profile' option,
and are used to allow mounting of network or block devices
as wel
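A hypothetical container config line using those option names (the
value syntax is assumed; only the option names come from the patch):

    features: nesting=1,netmount=1,blockmount=1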
---
We already disable the quota element, but bind mounts also can't get
their own ACL setting...
www/manager/lxc/ResourceEdit.js | 21 +
www/manager6/lxc/ResourceEdit.js | 21 +
2 files changed, 26 insertions(+), 16 deletions(-)
diff --git a/www/manager/l
If the CRM is dead or not active yet and we add a new service, we do
not see it in the HA status. This can be confusing for the user, as
it is queued for adding but does not show up, so let's show those
services as well.
Signed-off-by: Thomas Lamprecht
---
src/PVE/API2/HA/Status.pm | 12 ++--
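A self-contained sketch of the idea, with invented field names and
sample data (not the actual Status.pm code):

    use strict;
    use warnings;

    my $service_config = { 'vm:100' => {}, 'ct:101' => {} };  # from the HA config
    my $res = [ { sid => 'vm:100', state => 'started' } ];    # from the manager status

    # also list services that are configured but not (yet) picked up
    # by the CRM, so freshly added services show up in the status
    my %seen = map { ($_->{sid} => 1) } @$res;
    for my $sid (sort keys %$service_config) {
        push @$res, { sid => $sid, state => 'queued' } if !$seen{$sid};
    }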
This should avoid confusion if we remove all services from the CRM,
as otherwise we would always see "old timestamp -dead?" in the status.
Signed-off-by: Thomas Lamprecht
---
Also some whitespace cleanup in the diff context.
src/PVE/API2/HA/Status.pm | 23 +++
1 file changed, 15
When we click on a node/container/VM and quickly
click on something else, there might be a race condition
where the store finishes loading and we try to change
DOM elements which are not there anymore.
So we change the store.on to me.mon, which
deletes the handler when the component is gone
in t
In ExtJS 5/6 there is a caching issue where the framework
saves DOM elements for reuse, but the garbage collector
can set them to null.
When the framework then reuses the "cached" element it is null,
and any action on it produces an error, which breaks the site.
For details see the forum link in the comment.
---
www/manager6/lxc/Config.js | 36 +---
1 file changed, 17 insertions(+), 19 deletions(-)
diff --git a/www/manager6/lxc/Config.js b/www/manager6/lxc/Config.js
index 8902c86..491a5f8 100644
--- a/www/manager6/lxc/Config.js
+++ b/www/manager6/lxc/Config.js
@@ -121,
* usability improvement for enabled buttons:
in the default theme, ExtJS uses two different shades of grey to
distinguish enabled from disabled buttons;
the problem is that, compared to the full black of the panel titles, it gives
the impression that everything is disabled (the contrast is not strong
* replace scrollable with autoScroll and move to prototype body
* use 'activate' to load store on F5
* do not set a height on the StatusView component: it hides some rows,
and the framework sets a good working default height
---
www/manager6/lxc/StatusView.js | 1 -
www/manager6/lxc/Summary.js
Refactored and now using PVE::QemuConfig and PVE::LXC::Config
Moved the 'next if ..' statements into the corresponding branches
Signed-off-by: Caspar Smit
---
PVE/API2/Nodes.pm | 12
1 file changed, 12 insertions(+)
diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index 68496d7..f1f
Signed-off-by: Caspar Smit
---
PVE/API2/Nodes.pm | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index f1fb392..b2de907 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@@ -1208,9 +1208,6 @@ my $get_start_stop_list = sub {
applied both patches
remove the HTTP-only limit, since it would only apply to
custom Yubico validation server URLs anyway.
---
If there is a problem with certificate validation, or with proxies
without HTTPS support, the user can simply change the URL to an HTTP one.
PVE/AccessControl.pm | 6 ++
1 file changed, 2 i
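A sketch of the relaxed check; the variable names are made up and
only the scheme handling matters (the default shown is the public
YubiCloud endpoint):

    use strict;
    use warnings;

    my $default_url = 'https://api.yubico.com/wsapi/2.0/verify';
    my $custom_url  = 'http://validator.example.com/wsapi/2.0/verify';

    # accept both schemes for custom validation servers
    for my $url ($default_url, $custom_url) {
        die "unsupported url '$url'\n" if $url !~ m{^https?://};
        print "would query: $url\n";
    }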
---
Note: the last hunk needs the other yubico patch to apply cleanly
PVE/AccessControl.pm | 31 +++
1 file changed, 15 insertions(+), 16 deletions(-)
diff --git a/PVE/AccessControl.pm b/PVE/AccessControl.pm
index 550fa87..6023285 100644
--- a/PVE/AccessControl.pm
+++
This should avoid confusion if we remove all services from the CRM,
as otherwise we would always see "old timestamp -dead?" in the status.
Signed-off-by: Thomas Lamprecht
---
Also made a whitespace cleanup in the diff context.
src/PVE/API2/HA/Status.pm | 6 --
1 file changed, 4 insertions(+), 2
If the selected node has its status changed between stopped &
running, the node was removed and then re-added.
During the remove/add process, the 'selected' status of the node
was lost if it had one.
Instead of deleting/re-adding the node, we now update its content;
this was the default behaviour for s
This patch builds upon the previous patch, so please rebase it as well
when you update the other one, else it won't apply; apart from that it looks good.
On 03/11/2016 12:59 PM, Caspar Smit wrote:
> When using the migrate all button, HA enabled VMs and Containers are now also
> migrated. The start/stop all butt
The idea of this patch looks good;
please address the issues mentioned by Fabian (respin it on current
master & move the "next if .." parts into the respective branches above),
sign the CLA (as mentioned in my other reply to your first patch), and
then this should be good to go.
Thanks for your con
Hi,
first thanks for your contribution!
We use the Harmony CLA for contributions:
http://www.harmonyagreements.org/
See:
https://pve.proxmox.com/wiki/Developer_Documentation#Software_License_and_Copyright
for more information.
Please send a signed CLA to off...@proxmox.com (or by mail/fax if
p
If the cluster config file is missing, 'pvecm status', 'pvecm nodes'
and 'pvecm expected' will probably not work. Add a helpful warning
because the corosync-quorumtool error message is not very
descriptive here.
Add a helper sub in Cluster.pm to actually do the check.
---
Changes compared to v2:
- hardcode msg in h
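A minimal version of such a helper; the sub name and message wording
here are illustrative, not the actual Cluster.pm code:

    use strict;
    use warnings;

    # warn early if the cluster config is missing, instead of letting
    # corosync-quorumtool fail with a non-descriptive error
    sub check_corosync_conf_exists {
        my $conf = '/etc/pve/corosync.conf';
        if (!-f $conf) {
            warn "Error: corosync config '$conf' not found - is this node part of a cluster?\n";
            return 0;
        }
        return 1;
    }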
applied
applied
applied
applied
fstrim is still better;
the discard mount option has too much overhead.
----- Original Mail -----
From: "dietmar"
To: "pve-devel"
Sent: Sunday, 13 March 2016 16:59:06
Subject: [pve-devel] use ext4 discard option for containers?
Hi all,
I wonder if we can/should use the ext4 "discard" option
for con