Re: [pve-devel] [PATCH pve-docs] Close #1623: replace apt-get to apt

2020-07-06 Thread Oguz Bektas


hi,

what about the 'dist-upgrade' occurrences? should these also be replaced
by 'full-upgrade', for completeness' sake?
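fwiw, a quick shell sketch of the substitution (the `to_apt` helper is made up
for illustration; 'full-upgrade' is apt's spelling of apt-get's 'dist-upgrade'):

```shell
# hypothetical helper sketching the doc substitution performed by the patch
to_apt() {
    echo "$1" | sed -e 's/^apt-get dist-upgrade/apt full-upgrade/' \
                    -e 's/^apt-get/apt/'
}
to_apt "apt-get update"         # -> apt update
to_apt "apt-get install zfs-zed"   # -> apt install zfs-zed
to_apt "apt-get dist-upgrade"   # -> apt full-upgrade
```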

On Mon, Jul 06, 2020 at 01:38:29PM +0200, Moayad Almalat wrote:
> Signed-off-by: Moayad Almalat 
> ---
>  api-viewer/apidata.js| 8 
>  local-zfs.adoc   | 4 ++--
>  pve-firewall.adoc| 2 +-
>  pve-installation.adoc| 4 ++--
>  pve-package-repos.adoc   | 2 +-
>  pve-storage-iscsi.adoc   | 2 +-
>  qm-cloud-init.adoc   | 2 +-
>  system-software-updates.adoc | 6 +++---
>  8 files changed, 15 insertions(+), 15 deletions(-)
> 
> diff --git a/api-viewer/apidata.js b/api-viewer/apidata.js
> index a44e50e..818740c 100644
> --- a/api-viewer/apidata.js
> +++ b/api-viewer/apidata.js
> @@ -34214,7 +34214,7 @@ var pveapi = [
> },
> "POST" : {
>"allowtoken" : 1,
> -  "description" : "This is used to resynchronize 
> the package index files from their sources (apt-get update).",
> +  "description" : "This is used to resynchronize 
> the package index files from their sources (apt update).",
>"method" : "POST",
>"name" : "update_database",
>"parameters" : {
> @@ -37010,7 +37010,7 @@ var pveapi = [
>},
>"upgrade" : {
>   "default" : 0,
> - "description" : "Deprecated, use the 'cmd' 
> property instead! Run 'apt-get dist-upgrade' instead of normal shell.",
> + "description" : "Deprecated, use the 'cmd' 
> property instead! Run 'apt dist-upgrade' instead of normal shell.",
>   "optional" : 1,
>   "type" : "boolean",
>   "typetext" : ""
> @@ -37097,7 +37097,7 @@ var pveapi = [
>},
>"upgrade" : {
>   "default" : 0,
> - "description" : "Deprecated, use the 'cmd' 
> property instead! Run 'apt-get dist-upgrade' instead of normal shell.",
> + "description" : "Deprecated, use the 'cmd' 
> property instead! Run 'apt dist-upgrade' instead of normal shell.",
>   "optional" : 1,
>   "type" : "boolean",
>   "typetext" : ""
> @@ -37229,7 +37229,7 @@ var pveapi = [
>},
>"upgrade" : {
>   "default" : 0,
> - "description" : "Deprecated, use the 'cmd' 
> property instead! Run 'apt-get dist-upgrade' instead of normal shell.",
> + "description" : "Deprecated, use the 'cmd' 
> property instead! Run 'apt dist-upgrade' instead of normal shell.",
>   "optional" : 1,
>   "type" : "boolean",
>   "typetext" : ""
> diff --git a/local-zfs.adoc b/local-zfs.adoc
> index fd03e89..0271510 100644
> --- a/local-zfs.adoc
> +++ b/local-zfs.adoc
> @@ -333,10 +333,10 @@ Activate E-Mail Notification
>  ZFS comes with an event daemon, which monitors events generated by the
>  ZFS kernel module. The daemon can also send emails on ZFS events like
>  pool errors. Newer ZFS packages ship the daemon in a separate package,
> -and you can install it using `apt-get`:
> +and you can install it using `apt`:
>  
>  
> -# apt-get install zfs-zed
> +# apt install zfs-zed
>  
>  
>  To activate the daemon it is necessary to edit `/etc/zfs/zed.d/zed.rc` with 
> your
> diff --git a/pve-firewall.adoc b/pve-firewall.adoc
> index 7089778..f286018 100644
> --- a/pve-firewall.adoc
> +++ b/pve-firewall.adoc
> @@ -573,7 +573,7 @@ Rejected/Dropped firewall packets don't go to the IPS.
>  Install suricata on proxmox host:
>  
>  
> -# apt-get install suricata
> +# apt install suricata
>  # modprobe nfnetlink_queue  
>  
>  
> diff --git a/pve-installation.adoc b/pve-installation.adoc
> index 0d416ac..5129c14 100644
> --- a/pve-installation.adoc
> +++ b/pve-installation.adoc
> @@ -291,8 +291,8 @@ xref:sysadmin_package_repositories[After configuring the 
> repositories] you need
>  to run the following commands:
>  
>  
> -# apt-get update
> -# apt-get install proxmox-ve
> +# apt update
> +# apt install proxmox-ve
>  
>  
>  Installing on top of an existing Debian installation looks easy, but it 
> presumes
> diff --git a/pve-package-repos.adoc b/pve-package-repos.adoc
> index 4fcf147..34d1700 100644
> --- a/pve-package-repos.adoc
> +++ b/pve-package-repos.adoc
> @@ -13,7 +13,7 @@ defined in the file 

Re: [pve-devel] [RFC v2 0/3] nvme emulation

2020-07-06 Thread Oguz Bektas
hi,

the email i've sent to the qemu-discuss list, asking about this issue with
reattaching a drive with the same id, has gone unanswered...

how do we want to proceed with this feature?

maybe we can fix this issue or work around it somehow?


On Mon, May 18, 2020 at 05:34:01PM +0200, Oguz Bektas wrote:
> add support for nvme emulation.
> 
> v1->v2:
> * implement thomas' suggestions from mailing list
> 
> 
> i'm sending this as RFC because of the following issue, maybe someone has a 
> tip for me:
> 
> alpine linux vm, with 5 disks. nvme0-4 and sata0.
> hot-unplug nvme4. this works.
> re-hotplug nvme4 without rebooting will cause the following error:
> 
> ---
> TASK ERROR: 400 Parameter verification failed. 
> nvme4: hotplug problem - adding drive failed: Duplicate ID 'drive-nvme4' for 
> drive
> ---
> 
> and the drive cannot be hotplugged with the same name until reboot. changing
> the id from nvme4 to nvme5 works, however...
> 
> i'm not sure why this happens; something in qemu isn't being deleted after
> removing the device/drive, even though `info qtree` and `info qom-tree` do
> not show nvme4 after it has been unplugged.
> 
> 
> 
> 
> 
> qemu-server:
> Oguz Bektas (2):
>   fix #2255: add support for nvme emulation
>   drive: use more compact for-loop expression for initializing drive
> descriptions
> 
>  PVE/QemuServer.pm   | 23 ---
>  PVE/QemuServer/Drive.pm | 38 +++---
>  2 files changed, 43 insertions(+), 18 deletions(-)
> 
> pve-manager:
> Oguz Bektas (1):
>   gui: add nvme as a bus type for creating disks
> 
>  www/manager6/Utils.js   | 3 ++-
>  www/manager6/form/BusTypeSelector.js| 2 ++
>  www/manager6/form/ControllerSelector.js | 4 ++--
>  www/manager6/qemu/CloudInit.js  | 4 ++--
>  www/mobile/QemuSummary.js   | 2 +-
>  5 files changed, 9 insertions(+), 6 deletions(-)
> 
> 
> -- 
> 2.20.1
> 
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH widget-toolkit 2/4] add TimezonePanel for containers

2020-07-06 Thread Oguz Bektas
hi,

> +
> > +   if (deletes.length) {
> > +   values.delete = deletes.join(',');
> > +   }
> > +
> > +   return values;
> > +},
> > +
> > +
> > +initComponent: function() {
> > +   var me = this;
> > +
> > +   var items = [];
> 
> how's that not just a static
> items: [
> { ... },
> ],
> 
> definition?
> 
> And even if it would need to be in initComponent then to me.items = [ ... ]
> not that weird push every item step by step.

honestly it didn't even occur to me that there's any difference.

> 
> But please wait for a v4, I did not looked at the rest or tested this at
> all..


okay, i'll wait for the testing.


> 
> > +   items.push({
> > +  xtype: 'radiofield',
> > +  name: 'tzmode',
> > +  inputValue: '__default__',
> > +  boxLabel: gettext('Container managed'),
> > +  checked: true,
> > +   });
> > +   items.push({
> > +  xtype: 'radiofield',
> > +  name: 'tzmode',
> > +  inputValue: 'host',
> > +  boxLabel: gettext('Use host settings'),
> > +   });
> > +   items.push({
> > +  xtype: 'radiofield',
> > +  name: 'tzmode',
> > +  inputValue: 'select',
> > +  boxLabel: gettext('Select a timezone'),
> > +  listeners: {
> > +  change: function(f, value) {
> > +  if (!this.rendered) {
> > +  return;
> > +  }
> > +  let timezoneSelect = me.down('field[name=timezone]');
> > +  timezoneSelect.setDisabled(!value);
> > +  },
> > +  },
> > +   });
> > +   items.push({
> > +  xtype: 'combobox',
> > +  itemId: 'tzlist',
> > +  fieldLabel: gettext('Time zone'),
> > +  disabled: true,
> > +  name: 'timezone',
> > +  queryMode: 'local',
> > +  store: Ext.create('Proxmox.data.TimezoneStore'),
> > +  displayField: 'zone',
> > +  editable: true,
> > +  anyMatch: true,
> > +  forceSelection: true,
> > +  allowBlank: false,
> > +   });
> > +
> > +   me.items = items;
> > +
> > +   me.callParent();
> > +},
> > +});
> > 
> 



[pve-devel] [PATCH v3 0/4] timezones for containers

2020-07-02 Thread Oguz Bektas
v3 of the timezone series.

since the pve-common patch has already been applied, it's not included.

only the widget-toolkit patch is different from v2.

v2->v3:
* use radiofield in widget-toolkit


proxmox-widget-toolkit:

Oguz Bektas (1):
  add TimezonePanel for containers

 src/Makefile   |  1 +
 src/data/TimezoneStore.js  |  2 +-
 src/panel/TimezonePanel.js | 91 ++
 3 files changed, 93 insertions(+), 1 deletion(-)
 create mode 100644 src/panel/TimezonePanel.js

pve-manager:

Oguz Bektas (2):
  add timezone setting to lxc options
  add timezone option to container creation wizard

 www/manager6/lxc/CreateWizard.js | 16 +---
 www/manager6/lxc/Options.js  | 13 +
 2 files changed, 26 insertions(+), 3 deletions(-)

pve-container:

Oguz Bektas (1):
  fix #1423: add timezone config option

 src/PVE/LXC/Config.pm | 14 ++
 src/PVE/LXC/Setup.pm  | 13 +
 src/PVE/LXC/Setup/Base.pm | 33 ++---
 3 files changed, 57 insertions(+), 3 deletions(-)

-- 
2.20.1



[pve-devel] [PATCH v3 container 1/4] fix #1423: add timezone config option

2020-07-02 Thread Oguz Bektas
optionally enabled.

adds the 'timezone' option to the config, which takes a valid timezone (e.g.
Europe/Vienna) to set in the container.

if nothing is selected, then it will show as 'container managed' in the
GUI, and nothing will be done.

if set to 'host', the /etc/localtime symlink from the host node will be
cached and set in the container rootfs.

Signed-off-by: Oguz Bektas 
---
v2->v3:
no change




 src/PVE/LXC/Config.pm | 14 ++
 src/PVE/LXC/Setup.pm  | 13 +
 src/PVE/LXC/Setup/Base.pm | 33 ++---
 3 files changed, 57 insertions(+), 3 deletions(-)

diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 0a28380..edd587b 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -454,6 +454,11 @@ my $confdesc = {
type => 'string', format => 'address-list',
description => "Sets DNS server IP address for a container. Create will 
automatically use the setting from the host if you neither set searchdomain nor 
nameserver.",
 },
+timezone => {
+   optional => 1,
+   type => 'string', format => 'pve-ct-timezone',
+   description => "Time zone to use in the container. If option isn't set, 
then nothing will be done. Can be set to 'host' to match the host time zone, or 
an arbitrary time zone option from /usr/share/zoneinfo/zone.tab",
+},
 rootfs => get_standard_option('pve-ct-rootfs'),
 parent => {
optional => 1,
@@ -716,6 +721,15 @@ for (my $i = 0; $i < $MAX_LXC_NETWORKS; $i++) {
 };
 }
 
+PVE::JSONSchema::register_format('pve-ct-timezone', \&verify_ct_timezone);
+sub verify_ct_timezone {
+my ($timezone, $noerr) = @_;
+
+return if $timezone eq 'host'; # using host settings
+
+PVE::JSONSchema::pve_verify_timezone($timezone);
+}
+
PVE::JSONSchema::register_format('pve-lxc-mp-string', \&verify_lxc_mp_string);
 sub verify_lxc_mp_string {
 my ($mp, $noerr) = @_;
diff --git a/src/PVE/LXC/Setup.pm b/src/PVE/LXC/Setup.pm
index 6ebd465..d424aaa 100644
--- a/src/PVE/LXC/Setup.pm
+++ b/src/PVE/LXC/Setup.pm
@@ -5,6 +5,8 @@ use warnings;
 use POSIX;
 use PVE::Tools;
 
+use Cwd 'abs_path';
+
 use PVE::LXC::Setup::Debian;
 use PVE::LXC::Setup::Ubuntu;
 use PVE::LXC::Setup::CentOS;
@@ -112,6 +114,7 @@ sub new {
 
 # Cache some host files we need access to:
 $plugin->{host_resolv_conf} = PVE::INotify::read_file('resolvconf');
+$plugin->{host_localtime} = abs_path('/etc/localtime');
 
 # pass on user namespace information:
 my ($id_map, $rootuid, $rootgid) = PVE::LXC::parse_id_maps($conf);
@@ -214,6 +217,16 @@ sub set_dns {
 $self->protected_call($code);
 }
 
+sub set_timezone {
+my ($self) = @_;
+
+return if !$self->{plugin}; # unmanaged
+my $code = sub {
+   $self->{plugin}->set_timezone($self->{conf});
+};
+$self->protected_call($code);
+}
+
 sub setup_init {
 my ($self) = @_;
 
diff --git a/src/PVE/LXC/Setup/Base.pm b/src/PVE/LXC/Setup/Base.pm
index 93dace7..d73335b 100644
--- a/src/PVE/LXC/Setup/Base.pm
+++ b/src/PVE/LXC/Setup/Base.pm
@@ -3,6 +3,7 @@ package PVE::LXC::Setup::Base;
 use strict;
 use warnings;
 
+use Cwd 'abs_path';
 use File::stat;
 use Digest::SHA;
 use IO::File;
@@ -451,6 +452,30 @@ my $randomize_crontab = sub {
}
 };
 
+sub set_timezone {
+my ($self, $conf) = @_;
+
+my $zoneinfo = $conf->{timezone};
+
+return if !defined($zoneinfo);
+
+my $tz_path = "/usr/share/zoneinfo/$zoneinfo";
+
+if ($zoneinfo eq 'host') {
+   $tz_path = $self->{host_localtime};
+}
+
+return if abs_path('/etc/localtime') eq $tz_path;
+
+if ($self->ct_file_exists($tz_path)) {
+   my $tmpfile = "localtime.$$.new.tmpfile";
+   $self->ct_symlink($tz_path, $tmpfile);
+   $self->ct_rename($tmpfile, "/etc/localtime");
+} else {
+   warn "container does not have $tz_path, timezone can not be modified\n";
+}
+}
+
 sub pre_start_hook {
 my ($self, $conf) = @_;
 
@@ -458,6 +483,7 @@ sub pre_start_hook {
 $self->setup_network($conf);
 $self->set_hostname($conf);
 $self->set_dns($conf);
+$self->set_timezone($conf);
 
 # fixme: what else ?
 }
@@ -466,16 +492,17 @@ sub post_create_hook {
 my ($self, $conf, $root_password, $ssh_keys) = @_;
 
 $self->template_fixup($conf);
-
+
 &$randomize_crontab($self, $conf);
-
+
 $self->set_user_password($conf, 'root', $root_password);
 $self->set_user_authorized_ssh_keys($conf, 'root', $ssh_keys) if $ssh_keys;
 $self->setup_init($conf);
 $self->setup_network($conf);
 $self->set_hostname($conf);
 $self->set_dns($conf);
-
+$self->set_timezone($conf);
+
 # fixme: what else ?
 }
 
-- 
2.20.1



[pve-devel] [PATCH manager 3/4] add timezone setting to lxc options

2020-07-02 Thread Oguz Bektas
this allows us to set the timezone of a container in the options menu.

Signed-off-by: Oguz Bektas 
---
v2->v3:
no change



 www/manager6/lxc/Options.js | 13 +
 1 file changed, 13 insertions(+)

diff --git a/www/manager6/lxc/Options.js b/www/manager6/lxc/Options.js
index 40bde35f..f6e457a3 100644
--- a/www/manager6/lxc/Options.js
+++ b/www/manager6/lxc/Options.js
@@ -140,6 +140,19 @@ Ext.define('PVE.lxc.Options', {
editor: Proxmox.UserName === 'root@pam' ?
'PVE.lxc.FeaturesEdit' : undefined
},
+   timezone: {
+   header: gettext('Time zone'),
+   defaultValue: 'CT managed',
+   deleteDefaultValue: true,
+   deleteEmpty: true,
+   editor: caps.vms['VM.Config.Options'] ? {
+   xtype: 'proxmoxWindowEdit',
+   items: {
+   xtype: 'PVETimezonePanel',
+   subject: gettext('Time zone'),
+   }
+   } : undefined
+   },
hookscript: {
header: gettext('Hookscript')
}
-- 
2.20.1



[pve-devel] [PATCH manager 4/4] add timezone option to container creation wizard

2020-07-02 Thread Oguz Bektas
renames the 'DNS' step to 'DNS / Time' and allows one to set the
timezone of the container during setup.

Signed-off-by: Oguz Bektas 
---

v2->v3:
no change

 www/manager6/lxc/CreateWizard.js | 16 +---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/www/manager6/lxc/CreateWizard.js b/www/manager6/lxc/CreateWizard.js
index 87076e0d..3a715669 100644
--- a/www/manager6/lxc/CreateWizard.js
+++ b/www/manager6/lxc/CreateWizard.js
@@ -227,9 +227,19 @@ Ext.define('PVE.lxc.CreateWizard', {
isCreate: true
},
{
-   xtype: 'pveLxcDNSInputPanel',
-   title: gettext('DNS'),
-   insideWizard: true
+   xtype: 'container',
+   layout: 'hbox',
+   title: gettext('DNS / Time'),
+   items: [
+   {
+   xtype: 'pveLxcDNSInputPanel',
+   insideWizard: true
+   },
+   {
+   xtype: 'PVETimezonePanel',
+   insideWizard: true
+   }
+   ]
},
{
title: gettext('Confirm'),
-- 
2.20.1



[pve-devel] [PATCH widget-toolkit 2/4] add TimezonePanel for containers

2020-07-02 Thread Oguz Bektas
with 3 modes:
- CT managed (no action)
- match host (use same timezone as host)
- select from list

also move 'UTC' to the top of the TimezoneStore for convenience

Signed-off-by: Oguz Bektas 
---

v2->v3:
* use radiofields



 src/Makefile   |  1 +
 src/data/TimezoneStore.js  |  2 +-
 src/panel/TimezonePanel.js | 91 ++
 3 files changed, 93 insertions(+), 1 deletion(-)
 create mode 100644 src/panel/TimezonePanel.js

diff --git a/src/Makefile b/src/Makefile
index 12dda30..8bd576f 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -41,6 +41,7 @@ JSSRC=\
panel/JournalView.js\
panel/RRDChart.js   \
panel/GaugeWidget.js\
+   panel/TimezonePanel.js  \
window/Edit.js  \
window/PasswordEdit.js  \
window/TaskViewer.js\
diff --git a/src/data/TimezoneStore.js b/src/data/TimezoneStore.js
index a67ad8b..fcaca3e 100644
--- a/src/data/TimezoneStore.js
+++ b/src/data/TimezoneStore.js
@@ -7,6 +7,7 @@ Ext.define('Proxmox.data.TimezoneStore', {
 extend: 'Ext.data.Store',
 model: 'Timezone',
 data: [
+   ['UTC'],
['Africa/Abidjan'],
['Africa/Accra'],
['Africa/Addis_Ababa'],
@@ -414,6 +415,5 @@ Ext.define('Proxmox.data.TimezoneStore', {
['Pacific/Tongatapu'],
['Pacific/Wake'],
['Pacific/Wallis'],
-   ['UTC'],
],
 });
diff --git a/src/panel/TimezonePanel.js b/src/panel/TimezonePanel.js
new file mode 100644
index 000..25d6423
--- /dev/null
+++ b/src/panel/TimezonePanel.js
@@ -0,0 +1,91 @@
+Ext.define('PVE.panel.TimezonePanel', {
+extend: 'Proxmox.panel.InputPanel',
+alias: 'widget.PVETimezonePanel',
+
+insideWizard: false,
+
+setValues: function(values) {
+   var me = this;
+
+   if (!values.timezone) {
+   delete values.tzmode;
+   } else if (values.timezone === 'host') {
+   values.tzmode = 'host';
+   } else {
+   values.tzmode = 'select';
+   }
+   return me.callParent([values]);
+},
+
+onGetValues: function(values) {
+   var me = this;
+
+   var deletes = [];
+   if (values.tzmode === '__default__') {
+   deletes.push('timezone');
+   } else if (values.tzmode === 'host') {
+   values.timezone = 'host';
+   }
+
+   delete values.tzmode;
+
+   if (deletes.length) {
+   values.delete = deletes.join(',');
+   }
+
+   return values;
+},
+
+
+initComponent: function() {
+   var me = this;
+
+   var items = [];
+   items.push({
+  xtype: 'radiofield',
+  name: 'tzmode',
+  inputValue: '__default__',
+  boxLabel: gettext('Container managed'),
+  checked: true,
+   });
+   items.push({
+  xtype: 'radiofield',
+  name: 'tzmode',
+  inputValue: 'host',
+  boxLabel: gettext('Use host settings'),
+   });
+   items.push({
+  xtype: 'radiofield',
+  name: 'tzmode',
+  inputValue: 'select',
+  boxLabel: gettext('Select a timezone'),
+  listeners: {
+  change: function(f, value) {
+  if (!this.rendered) {
+  return;
+  }
+  let timezoneSelect = me.down('field[name=timezone]');
+  timezoneSelect.setDisabled(!value);
+  },
+  },
+   });
+   items.push({
+  xtype: 'combobox',
+  itemId: 'tzlist',
+  fieldLabel: gettext('Time zone'),
+  disabled: true,
+  name: 'timezone',
+  queryMode: 'local',
+  store: Ext.create('Proxmox.data.TimezoneStore'),
+  displayField: 'zone',
+  editable: true,
+  anyMatch: true,
+  forceSelection: true,
+  allowBlank: false,
+   });
+
+   me.items = items;
+
+   me.callParent();
+},
+});
-- 
2.20.1



[pve-devel] [PATCH v3 container] fix #2820: don't hotplug over existing mpX

2020-07-02 Thread Oguz Bektas
check if the given mpX already exists in the config. if it does, then
skip hotplugging and write the changes to [pve:pending] for the next
reboot of the CT.

after rebooting the CT, the pre-existing mpX will be added as unused and
the new mpX will be mounted.

Signed-off-by: Oguz Bektas 
---
 src/PVE/LXC/Config.pm | 4 
 1 file changed, 4 insertions(+)

diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 0a28380..4981dc5 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -1248,6 +1248,10 @@ sub vmconfig_hotplug_pending {
die "skip\n";
}
 
+   if (exists($conf->{$opt})) {
+   die "skip\n"; # don't try to hotplug over existing mp
+   }
+
$class->apply_pending_mountpoint($vmid, $conf, $opt, $storecfg, 
1);
# apply_pending_mountpoint modifies the value if it creates a 
new disk
$value = $conf->{pending}->{$opt};
-- 
2.20.1



Re: [pve-devel] [PATCH container] fix #2820: block adding new volume with same id if it's pending delete

2020-07-01 Thread Oguz Bektas


fabian's variant can be done like this:
---

diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 0a28380..ba5e548 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -1248,6 +1248,9 @@ sub vmconfig_hotplug_pending {
die "skip\n";
}

+   if (exists($conf->{$opt})) {
+   die "skip\n";
+   }
$class->apply_pending_mountpoint($vmid, $conf, $opt, $storecfg, 
1);
# apply_pending_mountpoint modifies the value if it creates a 
new disk
$value = $conf->{pending}->{$opt};

---

we just check if the mpX is already in the config; if so, the hotplug is
skipped and the change is queued as pending for the next reboot.
the "replaced" disk becomes unused.



On Wed, Jul 01, 2020 at 02:50:06PM +0200, Thomas Lamprecht wrote:
> On 01.07.20 14:43, Fabian Grünbichler wrote:
> > On July 1, 2020 2:05 pm, Thomas Lamprecht wrote:
> >> On 01.07.20 09:11, Fabian Grünbichler wrote:
> >>> - we can actually just put the new mpX into the pending queue, and 
> >>>   remove the entry from the pending deletion queue? (it's hotplugging 
> >>>   that is the problem, not queuing the pending change)
> >>
> >> Even if we could I'm not sure I want to be able to add a new mpX as pending
> >> if the old is still pending its deletion. But, tbh, I did not looked at 
> >> details
> >> so I may missing something..
> > 
> > well, the sequence is
> > 
> > - delete mp0 (queued)
> > - set a new mp0 (queued)
> > 
> > just like a general
> > 
> > - delete foo  (queued)
> > - set foo (queued)
> > 
> > where the set removes the queued deletion. in the case of mp, applying 
> > that pending change should then add the old volume ID as unused, but 
> > that IMHO does not change the semantics of '(queuing a) set overrides 
> > earlier queued delete'.
> 
> IMO the set mpX isn't your general option setting, and I'd just not allow
> re-setting it with a delete still pending, to dangerous IMO.
> Maybe better make it clear for the user that they either need to apply the
> pending change (e.g., CT reboot), revert it or just use another mpX id.


if this is too dangerous, then i'll instead make a v3, changing the match logic 
to work with the parse_pending_delete helper.

which is better?

> 
> > 
> > but this is broken for regular hotplug without deletion as well, setting 
> > mpX with a new volume ID if the slot is already used does not queue it 
> > as pending change, but
> > - mounts the new volume ID in addition to the old one
> > - adds the old volume ID as unused, even though it is still mounted in 
> >   the container
> 
> gosh.. yeah that needs to fail too.
> 
> > 
> > so this is broken in more ways than just what I initially found..
> > 
> 



[pve-devel] [PATCH v2 container] fix #2820: block adding new volume with same id if it's pending delete

2020-07-01 Thread Oguz Bektas
if a user tries to add a mountpoint mpX which is waiting for a pending
delete, hotplugging a new mountpoint with name mpX before the
previous one is detached should not be allowed.

do a simple check to see if the given mpX is already in the pending delete 
section.
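the difference exact matching makes can be shown with grep, where `-w`
roughly corresponds to the `m/$opt\b/` word-boundary check used in this
version:

```shell
# 'mp1' as a plain substring wrongly matches inside 'mp11'; a word-bounded
# match (like the \b in the patch) does not
pending_delete="mp11,net0"
echo "$pending_delete" | grep -q  "mp1" && echo "substring: false positive"
echo "$pending_delete" | grep -qw "mp1" || echo "word match: ok"
```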

Signed-off-by: Oguz Bektas 
---

v1->v2:
* use exact matching
* change full stop to comma
* s/mountpoint/mount point/


 src/PVE/LXC/Config.pm | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 0a28380..f582eb8 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -974,6 +974,9 @@ sub update_pct_config {
my $value = $param->{$opt};
if ($opt =~ m/^mp(\d+)$/ || $opt eq 'rootfs') {
$class->check_protection($conf, "can't update CT $vmid drive 
'$opt'");
+   if ($conf->{pending}->{delete} =~ m/$opt\b/) {
+   die "${opt} is in pending delete queue, please choose another 
mount point ID\n";
+   }
my $mp = $class->parse_volume($opt, $value);
$check_content_type->($mp) if ($mp->{type} eq 'volume');
} elsif ($opt eq 'hookscript') {
-- 
2.20.1



Re: [pve-devel] [PATCH v2 0/5] timezones for containers

2020-06-30 Thread Oguz Bektas
hi,

is anyone reviewing this patch?

On Wed, Jun 17, 2020 at 03:32:28PM +0200, Oguz Bektas wrote:
> this patch series implements the 'timezone' option for containers.
> 
> more detailed info is in the commit messages.
> 
> v1->v2:
> 
> * don't use an array of valid timezones in format verification, instead
> just loop through the zonetab file and compare line by line (still has
> to be split according to \t while doing the comparison)
> 
> * add thomas' suggestions about atomic move on tmpfile before doing
> symlink
> 
> * add 'pve-ct-timezone' format in pve-container to special-case the
> 'host' setting without breaking 'timezone' format in pve-common for more
> general use
> 
> pve-common:
> 
> Oguz Bektas (1):
>   jsonschema: register 'timezone' format and add verification method
> 
>  src/PVE/JSONSchema.pm | 19 +++
>  1 file changed, 19 insertions(+)
> 
> pve-container:
> 
> Oguz Bektas (1):
>   fix #1423: add timezone config option
> 
>  src/PVE/LXC/Config.pm | 14 ++
>  src/PVE/LXC/Setup.pm  | 13 +
>  src/PVE/LXC/Setup/Base.pm | 33 ++++++---
>  3 files changed, 57 insertions(+), 3 deletions(-)
> 
> proxmox-widget-toolkit:
> 
> Oguz Bektas (1):
>   add TimezonePanel for containers
> 
>  src/Makefile   |  1 +
>  src/data/TimezoneStore.js  |  2 +-
>  src/panel/TimezonePanel.js | 73 ++
>  3 files changed, 75 insertions(+), 1 deletion(-)
>  create mode 100644 src/panel/TimezonePanel.js
> 
> 
> 
> pve-manager:
> 
> Oguz Bektas (2):
>   add timezone setting to lxc options
>   add timezone option to container creation wizard
> 
>  www/manager6/lxc/CreateWizard.js | 16 +---
>  www/manager6/lxc/Options.js  | 13 +
>  2 files changed, 26 insertions(+), 3 deletions(-)
> 
> -- 
> 2.20.1
> 
> 



[pve-devel] [PATCH container] fix #2820: block adding new volume with same id if it's pending delete

2020-06-30 Thread Oguz Bektas
do a simple check to see if our $opt is already in the delete section.

Signed-off-by: Oguz Bektas 
---
 src/PVE/LXC/Config.pm | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 0a28380..237e2e5 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -974,6 +974,9 @@ sub update_pct_config {
my $value = $param->{$opt};
if ($opt =~ m/^mp(\d+)$/ || $opt eq 'rootfs') {
$class->check_protection($conf, "can't update CT $vmid drive 
'$opt'");
+   if ($conf->{pending}->{delete} =~ m/$opt/) {
+   die "${opt} is in pending delete queue. please select another 
mountpoint ID\n";
+   }
my $mp = $class->parse_volume($opt, $value);
$check_content_type->($mp) if ($mp->{type} eq 'volume');
} elsif ($opt eq 'hookscript') {
-- 
2.20.1



[pve-devel] [PATCH v2 manager] fix #2810: reset state properly when editing mount features of containers

2020-06-22 Thread Oguz Bektas
initializing the 'mounts' array in the panel scope causes edits on subsequent
containers to inherit the values (mount=nfs) from the previous container. fix
this by initializing the 'mounts' array in 'onGetValues' and 'setValues'
separately.

Signed-off-by: Oguz Bektas 
---
 www/manager6/lxc/FeaturesEdit.js | 10 --
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/www/manager6/lxc/FeaturesEdit.js b/www/manager6/lxc/FeaturesEdit.js
index 1275a2e0..dffd77fd 100644
--- a/www/manager6/lxc/FeaturesEdit.js
+++ b/www/manager6/lxc/FeaturesEdit.js
@@ -2,9 +2,6 @@ Ext.define('PVE.lxc.FeaturesInputPanel', {
 extend: 'Proxmox.panel.InputPanel',
 xtype: 'pveLxcFeaturesInputPanel',
 
-// used to save the mounts fstypes until sending
-mounts: [],
-
 fstypes: ['nfs', 'cifs'],
 
 viewModel: {
@@ -70,7 +67,7 @@ Ext.define('PVE.lxc.FeaturesInputPanel', {
 
 onGetValues: function(values) {
var me = this;
-   var mounts = me.mounts;
+   var mounts = [];
me.fstypes.forEach(function(fs) {
if (values[fs]) {
mounts.push(fs);
@@ -83,6 +80,7 @@ Ext.define('PVE.lxc.FeaturesInputPanel', {
}
 
var featuresstring = PVE.Parser.printPropertyString(values, undefined);
+
if (featuresstring == '') {
return { 'delete': 'features' };
}
@@ -94,13 +92,13 @@ Ext.define('PVE.lxc.FeaturesInputPanel', {
 
me.viewModel.set('unprivileged', values.unprivileged);
 
+   var mounts = [];
if (values.features) {
var res = PVE.Parser.parsePropertyString(values.features);
-   me.mounts = [];
if (res.mount) {
res.mount.split(/[; ]/).forEach(function(item) {
if (me.fstypes.indexOf(item) === -1) {
-   me.mounts.push(item);
+   mounts.push(item);
} else {
res[item] = 1;
}
-- 
2.20.1



[pve-devel] [PATCH manager] fix #2810: don't add options multiple times to features property

2020-06-22 Thread Oguz Bektas
instead of unconditionally pushing to the 'mounts' array, we need to check
whether we already have the option in there. without this, we get config
options like:

features: nfs;nfs;nfs
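the guard can be sketched in shell terms as well (illustrative only, not the
actual ExtJS code):

```shell
# sketch: only append an fstype if it's not already in the list,
# so repeated runs don't yield "nfs;nfs;nfs"
mounts=""
add_mount() {
    case ";$mounts;" in
        *";$1;"*) ;;                         # already present, skip
        *) mounts="${mounts:+$mounts;}$1" ;;
    esac
}
add_mount nfs; add_mount nfs; add_mount cifs
echo "$mounts"   # -> nfs;cifs
```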

Signed-off-by: Oguz Bektas 
---
 www/manager6/lxc/FeaturesEdit.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/www/manager6/lxc/FeaturesEdit.js b/www/manager6/lxc/FeaturesEdit.js
index 1275a2e0..63a9a2a9 100644
--- a/www/manager6/lxc/FeaturesEdit.js
+++ b/www/manager6/lxc/FeaturesEdit.js
@@ -72,7 +72,7 @@ Ext.define('PVE.lxc.FeaturesInputPanel', {
var me = this;
var mounts = me.mounts;
me.fstypes.forEach(function(fs) {
-   if (values[fs]) {
+   if (values[fs] && !mounts.includes(fs)) {
mounts.push(fs);
}
delete values[fs];
-- 
2.20.1



[pve-devel] [PATCH container] fix #2778: use vm_start instead of systemctl to start/restart container

2020-06-18 Thread Oguz Bektas
when a backup task in 'stop' mode is executed, VZDump calls its 'start_vm'
sub instead of 'PVE::LXC::vm_start'.

'start_vm' however does not follow our regular process but instead uses
systemctl to start the container, which results in the guest hookscripts
not being executed in 'pre-start' and 'post-start'.

to call the hooks correctly we can just make use of the
PVE::LXC::vm_start routine which already handles them.

Signed-off-by: Oguz Bektas 
---
 src/PVE/VZDump/LXC.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/PVE/VZDump/LXC.pm b/src/PVE/VZDump/LXC.pm
index 0dc60c4..ca3dc10 100644
--- a/src/PVE/VZDump/LXC.pm
+++ b/src/PVE/VZDump/LXC.pm
@@ -266,7 +266,8 @@ sub stop_vm {
 sub start_vm {
 my ($self, $task, $vmid) = @_;
 
-$self->cmd(['systemctl', 'start', "pve-container\@$vmid"]);
+my $conf = PVE::LXC::Config->load_config($vmid);
+PVE::LXC::vm_start($vmid, $conf);
 }
 
 sub suspend_vm {
-- 
2.20.1



[pve-devel] [PATCH v2 manager 5/5] add timezone option to container creation wizard

2020-06-17 Thread Oguz Bektas
renames the 'DNS' step to 'DNS / Time' and allows one to set the
timezone of the container during setup.

Signed-off-by: Oguz Bektas 
---
v1->v2:
no changes

 www/manager6/lxc/CreateWizard.js | 16 +---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/www/manager6/lxc/CreateWizard.js b/www/manager6/lxc/CreateWizard.js
index 87076e0d..3a715669 100644
--- a/www/manager6/lxc/CreateWizard.js
+++ b/www/manager6/lxc/CreateWizard.js
@@ -227,9 +227,19 @@ Ext.define('PVE.lxc.CreateWizard', {
isCreate: true
},
{
-   xtype: 'pveLxcDNSInputPanel',
-   title: gettext('DNS'),
-   insideWizard: true
+   xtype: 'container',
+   layout: 'hbox',
+   title: gettext('DNS / Time'),
+   items: [
+   {
+   xtype: 'pveLxcDNSInputPanel',
+   insideWizard: true
+   },
+   {
+   xtype: 'PVETimezonePanel',
+   insideWizard: true
+   }
+   ]
},
{
title: gettext('Confirm'),
-- 
2.20.1



[pve-devel] [PATCH v2 container 2/5] fix #1423: add timezone config option

2020-06-17 Thread Oguz Bektas
optionally enabled.

adds the 'timezone' option to config, which takes a valid timezone (i.e.
Europe/Vienna) to set in the container.

if nothing is selected, then it will show as 'container managed' in
GUI, and nothing will be done.

if set to 'host', the /etc/localtime symlink from the host node will be
cached and set in the container rootfs.
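The 'host' handling described above — resolve the host's /etc/localtime symlink once, then reuse its target inside the container rootfs — can be sketched as follows. This is a hypothetical JavaScript illustration, not the PVE Perl code; the function name and the prefix/fallback handling are assumptions:

```javascript
// Hypothetical sketch: derive a zone name from the resolved target of
// /etc/localtime (e.g. the result of fs.realpathSync('/etc/localtime')).
// On typical hosts the link points into /usr/share/zoneinfo.
function zoneFromLocaltimeTarget(resolvedTarget) {
    const prefix = '/usr/share/zoneinfo/';
    return resolvedTarget.startsWith(prefix)
        ? resolvedTarget.slice(prefix.length) // e.g. 'Europe/Vienna'
        : null; // unexpected layout; caller decides how to handle it
}
```

In the patch the cached value is the raw symlink target itself, so this prefix handling is only one possible way of presenting it.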

Signed-off-by: Oguz Bektas 
---

v1->v2:
* s/zone1970/zone/
* use 'pve-ct-timezone' format instead of 'timezone' format directly
* avoid IO if target is already set to correct zone
* create a tmpfile to rename after symlinking, instead of symlinking
/etc/localtime directly

 src/PVE/LXC/Config.pm | 14 ++
 src/PVE/LXC/Setup.pm  | 13 +
 src/PVE/LXC/Setup/Base.pm | 33 ++---
 3 files changed, 57 insertions(+), 3 deletions(-)

diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 8d1854a..29f2cd9 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -444,6 +444,11 @@ my $confdesc = {
type => 'string', format => 'address-list',
description => "Sets DNS server IP address for a container. Create will 
automatically use the setting from the host if you neither set searchdomain nor 
nameserver.",
 },
+timezone => {
+   optional => 1,
+   type => 'string', format => 'pve-ct-timezone',
+   description => "Time zone to use in the container. If option isn't set, 
then nothing will be done. Can be set to 'host' to match the host time zone, or 
an arbitrary time zone option from /usr/share/zoneinfo/zone.tab",
+},
 rootfs => get_standard_option('pve-ct-rootfs'),
 parent => {
optional => 1,
@@ -706,6 +711,15 @@ for (my $i = 0; $i < $MAX_LXC_NETWORKS; $i++) {
 };
 }
 
+PVE::JSONSchema::register_format('pve-ct-timezone', \&verify_ct_timezone);
+sub verify_ct_timezone {
+my ($timezone, $noerr) = @_;
+
+return if $timezone eq 'host'; # using host settings
+
+PVE::JSONSchema::pve_verify_timezone($timezone);
+}
+
 PVE::JSONSchema::register_format('pve-lxc-mp-string', \&verify_lxc_mp_string);
 sub verify_lxc_mp_string {
 my ($mp, $noerr) = @_;
diff --git a/src/PVE/LXC/Setup.pm b/src/PVE/LXC/Setup.pm
index c738e64..0e07796 100644
--- a/src/PVE/LXC/Setup.pm
+++ b/src/PVE/LXC/Setup.pm
@@ -5,6 +5,8 @@ use warnings;
 use POSIX;
 use PVE::Tools;
 
+use Cwd 'abs_path';
+
 use PVE::LXC::Setup::Debian;
 use PVE::LXC::Setup::Ubuntu;
 use PVE::LXC::Setup::CentOS;
@@ -103,6 +105,7 @@ sub new {
 
 # Cache some host files we need access to:
 $plugin->{host_resolv_conf} = PVE::INotify::read_file('resolvconf');
+$plugin->{host_localtime} = abs_path('/etc/localtime');
 
 # pass on user namespace information:
 my ($id_map, $rootuid, $rootgid) = PVE::LXC::parse_id_maps($conf);
@@ -205,6 +208,16 @@ sub set_dns {
 $self->protected_call($code);
 }
 
+sub set_timezone {
+my ($self) = @_;
+
+return if !$self->{plugin}; # unmanaged
+my $code = sub {
+   $self->{plugin}->set_timezone($self->{conf});
+};
+$self->protected_call($code);
+}
+
 sub setup_init {
 my ($self) = @_;
 
diff --git a/src/PVE/LXC/Setup/Base.pm b/src/PVE/LXC/Setup/Base.pm
index 93dace7..d73335b 100644
--- a/src/PVE/LXC/Setup/Base.pm
+++ b/src/PVE/LXC/Setup/Base.pm
@@ -3,6 +3,7 @@ package PVE::LXC::Setup::Base;
 use strict;
 use warnings;
 
+use Cwd 'abs_path';
 use File::stat;
 use Digest::SHA;
 use IO::File;
@@ -451,6 +452,30 @@ my $randomize_crontab = sub {
}
 };
 
+sub set_timezone {
+my ($self, $conf) = @_;
+
+my $zoneinfo = $conf->{timezone};
+
+return if !defined($zoneinfo);
+
+my $tz_path = "/usr/share/zoneinfo/$zoneinfo";
+
+if ($zoneinfo eq 'host') {
+   $tz_path = $self->{host_localtime};
+}
+
+return if abs_path('/etc/localtime') eq $tz_path;
+
+if ($self->ct_file_exists($tz_path)) {
+   my $tmpfile = "localtime.$$.new.tmpfile";
+   $self->ct_symlink($tz_path, $tmpfile);
+   $self->ct_rename($tmpfile, "/etc/localtime");
+} else {
+   warn "container does not have $tz_path, timezone can not be modified\n";
+}
+}
+
 sub pre_start_hook {
 my ($self, $conf) = @_;
 
@@ -458,6 +483,7 @@ sub pre_start_hook {
 $self->setup_network($conf);
 $self->set_hostname($conf);
 $self->set_dns($conf);
+$self->set_timezone($conf);
 
 # fixme: what else ?
 }
@@ -466,16 +492,17 @@ sub post_create_hook {
 my ($self, $conf, $root_password, $ssh_keys) = @_;
 
 $self->template_fixup($conf);
-
+
 &$randomize_crontab($self, $conf);
-
+
 $self->set_user_password($conf, 'root', $root_password);
 $self->set_user_authorized_ssh_keys($conf, 'root', $ssh_keys) if $ssh_keys;
 $self->setup_init($conf);
 $self->setup_network($conf);
 $self->set_hostname($conf);
 $self->set_dns($conf);
-
+$self->set_timezone($conf);
+
 # fixme: what else ?
 }

[pve-devel] [PATCH v2 0/5] timezones for containers

2020-06-17 Thread Oguz Bektas
this patch series implements the 'timezone' option for containers.

more detailed info is in the commit messages.

v1->v2:

* don't use an array of valid timezones in format verification, instead
just loop through the zonetab file and compare line by line (still has
to be split according to \t while doing the comparison)

* add thomas' suggestions about atomic move on tmpfile before doing
symlink

* add 'pve-ct-timezone' format in pve-container to special-case the
'host' setting without breaking 'timezone' format in pve-common for more
general use

pve-common:

Oguz Bektas (1):
  jsonschema: register 'timezone' format and add verification method

 src/PVE/JSONSchema.pm | 19 +++
 1 file changed, 19 insertions(+)

pve-container:

Oguz Bektas (1):
  fix #1423: add timezone config option

 src/PVE/LXC/Config.pm | 14 ++
 src/PVE/LXC/Setup.pm  | 13 +
 src/PVE/LXC/Setup/Base.pm | 33 ++---
 3 files changed, 57 insertions(+), 3 deletions(-)

proxmox-widget-toolkit:

Oguz Bektas (1):
  add TimezonePanel for containers

 src/Makefile   |  1 +
 src/data/TimezoneStore.js  |  2 +-
 src/panel/TimezonePanel.js | 73 ++
 3 files changed, 75 insertions(+), 1 deletion(-)
 create mode 100644 src/panel/TimezonePanel.js



pve-manager:

Oguz Bektas (2):
  add timezone setting to lxc options
  add timezone option to container creation wizard

 www/manager6/lxc/CreateWizard.js | 16 +---
 www/manager6/lxc/Options.js  | 13 +
 2 files changed, 26 insertions(+), 3 deletions(-)

-- 
2.20.1



[pve-devel] [PATCH v2 widget-toolkit 3/5] add TimezonePanel for containers

2020-06-17 Thread Oguz Bektas
with 3 modes;
- CT managed (no action)
- match host (use same timezone as host)
- select from list

also move 'UTC' to the top of the TimezoneStore for convenience
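The three modes map onto submitted form values roughly as sketched below — a standalone JavaScript stand-in for the panel's onGetValues() logic (illustrative only; the real component is the ExtJS panel in this patch):

```javascript
// Hypothetical sketch of the mode -> submitted-values mapping:
// no mode selected => request deletion of the 'timezone' option (CT managed)
// 'host'           => timezone=host (match host settings)
// 'select'         => timezone=<zone chosen from the list>
function timezoneFromMode(tzmode, selectedZone) {
    if (!tzmode || tzmode === '__default__') {
        return { delete: ['timezone'] };
    }
    if (tzmode === 'host') {
        return { timezone: 'host' };
    }
    return { timezone: selectedZone };
}
```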

Signed-off-by: Oguz Bektas 
---

v1->v2:
no changes


 src/Makefile   |  1 +
 src/data/TimezoneStore.js  |  2 +-
 src/panel/TimezonePanel.js | 73 ++
 3 files changed, 75 insertions(+), 1 deletion(-)
 create mode 100644 src/panel/TimezonePanel.js

diff --git a/src/Makefile b/src/Makefile
index 659e876..e1a31e8 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -38,6 +38,7 @@ JSSRC=\
panel/JournalView.js\
panel/RRDChart.js   \
panel/GaugeWidget.js\
+   panel/TimezonePanel.js  \
window/Edit.js  \
window/PasswordEdit.js  \
window/TaskViewer.js\
diff --git a/src/data/TimezoneStore.js b/src/data/TimezoneStore.js
index a67ad8b..fcaca3e 100644
--- a/src/data/TimezoneStore.js
+++ b/src/data/TimezoneStore.js
@@ -7,6 +7,7 @@ Ext.define('Proxmox.data.TimezoneStore', {
 extend: 'Ext.data.Store',
 model: 'Timezone',
 data: [
+   ['UTC'],
['Africa/Abidjan'],
['Africa/Accra'],
['Africa/Addis_Ababa'],
@@ -414,6 +415,5 @@ Ext.define('Proxmox.data.TimezoneStore', {
['Pacific/Tongatapu'],
['Pacific/Wake'],
['Pacific/Wallis'],
-   ['UTC'],
],
 });
diff --git a/src/panel/TimezonePanel.js b/src/panel/TimezonePanel.js
new file mode 100644
index 000..5ebac65
--- /dev/null
+++ b/src/panel/TimezonePanel.js
@@ -0,0 +1,73 @@
+Ext.define('PVE.panel.TimezonePanel', {
+extend: 'Proxmox.panel.InputPanel',
+alias: 'widget.PVETimezonePanel',
+
+insideWizard: false,
+
+setValues: function(values) {
+   var me = this;
+
+   if (!values.timezone) {
+   delete values.tzmode;
+   } else if (values.timezone === 'host') {
+   values.tzmode = 'host';
+   } else {
+   values.tzmode = 'select';
+   }
+   return me.callParent([values]);
+},
+
+onGetValues: function(values) {
+   var me = this;
+   var deletes = [];
+   if (!values.tzmode) {
+   deletes.push('timezone');
+   } else if (values.tzmode === 'host') {
+   values.timezone = 'host';
+   }
+   delete values.tzmode;
+   if (deletes.length > 0) {
+   values.delete = deletes;
+   }
+
+   return values;
+},
+
+items: [
+   {
+   xtype: 'proxmoxKVComboBox',
+   name: 'tzmode',
+   fieldLabel: gettext('Time zone mode'),
+   value: '__default__',
+   comboItems: [
+   ['__default__', 'CT managed'],
+   ['host', 'use host settings'],
+   ['select', 'choose from list'],
+   ],
+   listeners: {
+   change: function(kvcombo, newValue, oldValue, eOpts) {
+   var combo = kvcombo.up('form').down('#tzlistcombo');
+   if (newValue === 'select') {
+   combo.enable();
+   } else if (newValue !== 'select') {
+   combo.disable();
+   }
+   },
+   },
+   },
+   {
+   xtype: 'combobox',
+   itemId: 'tzlistcombo',
+   fieldLabel: gettext('Time zone'),
+   disabled: true,
+   name: 'timezone',
+   queryMode: 'local',
+   store: Ext.create('Proxmox.data.TimezoneStore'),
+   displayField: 'zone',
+   editable: true,
+   anyMatch: true,
+   forceSelection: true,
+   allowBlank: false,
+   },
+],
+});
-- 
2.20.1



[pve-devel] [PATCH v2 common 1/5] jsonschema: register 'timezone' format and add verification method

2020-06-17 Thread Oguz Bektas
/usr/share/zoneinfo/zone.tab has the valid list of time zones.
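zone.tab is tab-separated (country code, coordinates, TZ name, optional comment), with the zone name in the third column; 'UTC' is not listed there and has to be special-cased. A hedged JavaScript sketch of the lookup the verifier performs (an illustration, not the Perl implementation below):

```javascript
// Hypothetical sketch: check a candidate time zone against zone.tab
// content. Comment lines start with '#'; the TZ name is column 3.
function isValidTimezone(zonetabText, tz) {
    if (tz === 'UTC') return true; // valid, but absent from zone.tab
    return zonetabText
        .split('\n')
        .filter(line => line.length > 0 && !line.startsWith('#'))
        .some(line => line.split('\t')[2] === tz);
}
```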

Signed-off-by: Oguz Bektas 
---

v1->v2:
* don't use array for verifying format

 src/PVE/JSONSchema.pm | 19 +++
 1 file changed, 19 insertions(+)

diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
index 84fb694..15a498c 100644
--- a/src/PVE/JSONSchema.pm
+++ b/src/PVE/JSONSchema.pm
@@ -482,6 +482,25 @@ sub pve_verify_dns_name {
 return $name;
 }
 
+register_format('timezone', \&pve_verify_timezone);
+sub pve_verify_timezone {
+my ($timezone, $noerr) = @_;
+
+my $zonetab = "/usr/share/zoneinfo/zone.tab";
+return $timezone if $timezone eq 'UTC';
+open(my $fh, "<", $zonetab);
+while(my $line = <$fh>) {
+   next if $line =~ /^#/;
+   chomp $line;
+   return $timezone if $timezone eq (split /\t/, $line)[2]; # found
+}
+close $fh;
+
+return undef if $noerr;
+die "invalid time zone '$timezone'\n";
+
+}
+
 # network interface name
 register_format('pve-iface', \&pve_verify_iface);
 sub pve_verify_iface {
-- 
2.20.1



[pve-devel] [PATCH v2 manager 4/5] add timezone setting to lxc options

2020-06-17 Thread Oguz Bektas
this allows us to set the timezone of a container in the options menu.

Signed-off-by: Oguz Bektas 
---

v1->v2:
no changes

 www/manager6/lxc/Options.js | 13 +
 1 file changed, 13 insertions(+)

diff --git a/www/manager6/lxc/Options.js b/www/manager6/lxc/Options.js
index 40bde35f..f6e457a3 100644
--- a/www/manager6/lxc/Options.js
+++ b/www/manager6/lxc/Options.js
@@ -140,6 +140,19 @@ Ext.define('PVE.lxc.Options', {
editor: Proxmox.UserName === 'root@pam' ?
'PVE.lxc.FeaturesEdit' : undefined
},
+   timezone: {
+   header: gettext('Time zone'),
+   defaultValue: 'CT managed',
+   deleteDefaultValue: true,
+   deleteEmpty: true,
+   editor: caps.vms['VM.Config.Options'] ? {
+   xtype: 'proxmoxWindowEdit',
+   items: {
+   xtype: 'PVETimezonePanel',
+   subject: gettext('Time zone'),
+   }
+   } : undefined
+   },
hookscript: {
header: gettext('Hookscript')
}
-- 
2.20.1



Re: [pve-devel] [PATCH container 2/5] fix #1423: add timezone config option

2020-06-17 Thread Oguz Bektas
hi,

On Tue, Jun 16, 2020 at 04:45:34PM +0200, Thomas Lamprecht wrote:
> Am 6/16/20 um 3:36 PM schrieb Oguz Bektas:
> > optionally enabled.
> > 
> > adds the 'timezone' option to config, which takes a valid timezone (i.e.
> > Europe/Vienna) to set in the container.
> > 
> > if nothing is selected, then it will show as 'container managed' in
> > GUI, and nothing will be done.
> > 
> > if set to 'host', the /etc/localtime symlink from the host node will be
> > cached and set in the container rootfs.
> > 
> > Signed-off-by: Oguz Bektas 
> > ---
> >  src/PVE/LXC/Config.pm |  5 +
> >  src/PVE/LXC/Setup.pm  | 13 +
> >  src/PVE/LXC/Setup/Base.pm | 28 +---
> >  3 files changed, 43 insertions(+), 3 deletions(-)
> > 
> > diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
> > index 8d1854a..0ebf517 100644
> > --- a/src/PVE/LXC/Config.pm
> > +++ b/src/PVE/LXC/Config.pm
> > @@ -444,6 +444,11 @@ my $confdesc = {
> > type => 'string', format => 'address-list',
> > description => "Sets DNS server IP address for a container. Create will 
> > automatically use the setting from the host if you neither set searchdomain 
> > nor nameserver.",
> >  },
> > +timezone => {
> > +   optional => 1,
> > +   type => 'string', format => 'timezone',
> > +   description => "Time zone to use in the container. If option isn't set, 
> > then nothing will be done. Can be set to 'host' to match the host time 
> > zone, or an arbitrary time zone option from 
> > /usr/share/zoneinfo/zone1970.tab",
> > +},
> >  rootfs => get_standard_option('pve-ct-rootfs'),
> >  parent => {
> > optional => 1,
> > diff --git a/src/PVE/LXC/Setup.pm b/src/PVE/LXC/Setup.pm
> > index c738e64..0e07796 100644
> > --- a/src/PVE/LXC/Setup.pm
> > +++ b/src/PVE/LXC/Setup.pm
> > @@ -5,6 +5,8 @@ use warnings;
> >  use POSIX;
> >  use PVE::Tools;
> >  
> > +use Cwd 'abs_path';
> > +
> >  use PVE::LXC::Setup::Debian;
> >  use PVE::LXC::Setup::Ubuntu;
> >  use PVE::LXC::Setup::CentOS;
> > @@ -103,6 +105,7 @@ sub new {
> >  
> >  # Cache some host files we need access to:
> >  $plugin->{host_resolv_conf} = PVE::INotify::read_file('resolvconf');
> > +$plugin->{host_localtime} = abs_path('/etc/localtime');
> 
> Hmm, I'd not save that in the $plugin, makes no sense to me.
> I mean, I see where this comes from but that also doesn't make much sense to 
> me.
> 
> This isn't both expected to be used often, so just using it directly without 
> cache
> or at least cache only if at least used once would be nicer IMO.


this was the simplest way i found, and it's just caching a ~50 byte
string (/etc/localtime symlink) to be used in the setup.

how else can i access that in the setup routines
which are running in CT rootfs?


> 
> >  
> >  # pass on user namespace information:
> >  my ($id_map, $rootuid, $rootgid) = PVE::LXC::parse_id_maps($conf);
> > @@ -205,6 +208,16 @@ sub set_dns {
> >  $self->protected_call($code);
> >  }
> >  
> > +sub set_timezone {
> > +my ($self) = @_;
> > +
> > +return if !$self->{plugin}; # unmanaged
> > +my $code = sub {
> > +   $self->{plugin}->set_timezone($self->{conf});
> > +};
> > +$self->protected_call($code);
> > +}
> > +
> >  sub setup_init {
> >  my ($self) = @_;
> >  
> > diff --git a/src/PVE/LXC/Setup/Base.pm b/src/PVE/LXC/Setup/Base.pm
> > index 93dace7..4c7c2e6 100644
> > --- a/src/PVE/LXC/Setup/Base.pm
> > +++ b/src/PVE/LXC/Setup/Base.pm
> > @@ -451,6 +451,26 @@ my $randomize_crontab = sub {
> > }
> >  };
> >  
> > +sub set_timezone {
> > +my ($self, $conf) = @_;
> > +
> > +my $zoneinfo = $conf->{timezone};
> > +
> > +return if !defined($zoneinfo);
> > +
> > +my $tz_path = "/usr/share/zoneinfo/$zoneinfo";
> > +if ($zoneinfo eq 'host') {
> > +   $tz_path= $self->{host_localtime};
> 
> whitespace error before =
> 
> > +}
> > +
> > +if ($self->ct_file_exists($tz_path)) {
> > +   $self->ct_unlink("/etc/localtime");
> > +   $self->ct_symlink($tz_path, "/etc/localtime");
> 
> could be nicer to do a atomic move, i.e. symlink to some 
> "localtime.$$.new.tmpfile" and do a
> rename (move) afterwards to "/etc/localtime".

Re: [pve-devel] [PATCH common 1/5] jsonschema: register 'timezone' format and add verification method

2020-06-17 Thread Oguz Bektas
hi,

On Tue, Jun 16, 2020 at 04:28:05PM +0200, Thomas Lamprecht wrote:
> Am 6/16/20 um 3:36 PM schrieb Oguz Bektas:
> > /usr/share/zoneinfo/zone.tab has the valid list of time zones.
> > 
> > Signed-off-by: Oguz Bektas 
> > ---
> >  src/PVE/JSONSchema.pm | 24 
> >  1 file changed, 24 insertions(+)
> > 
> > diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
> > index 84fb694..ff97a3d 100644
> > --- a/src/PVE/JSONSchema.pm
> > +++ b/src/PVE/JSONSchema.pm
> > @@ -482,6 +482,30 @@ sub pve_verify_dns_name {
> >  return $name;
> >  }
> >  
> > +register_format('timezone', \&pve_verify_timezone);
> > +sub pve_verify_timezone {
> > +my ($timezone, $noerr) = @_;
> > +
> > +my $zonetab = "/usr/share/zoneinfo/zone.tab";
> > +my @valid_tzlist;
> > +push @valid_tzlist, 'host'; # host localtime
> 
> do not add that here, this isn't a timezone - filter that value out in 
> pve-container API
> as it's just a special value there.


then in PVE/LXC/Config.pm
something like:

```
PVE::JSONSchema::register_format('pve-lxc-timezone', \&verify_ct_timezone);
sub verify_ct_timezone {
  my ($timezone) = @_;

  return if $timezone eq 'host';

  PVE::JSONSchema::verify_timezone($timezone);
}
```


and keep the 'verify_timezone' in pve-common intact for more general use??


> 
> > +push @valid_tzlist, 'UTC'; # not in $zonetab
> 
> Don push unconditionally on array you just declared, rather:
> 
> my @valid_tzlist = ('UTC');
> 
> But actually that array isn't required at all:
> 
> Rather do:
> 
> return $timezone if $timezone eq 'UTC';
> 
> open(my $fh, "<", $zonetab);
> while(my $line = <$fh>) {
> next if $line =~ /^#/;
> chomp $line;
> 
> return $timezone if $line eq $timezone; # found
> }
> close $fh;
> 
> die "invalid time zone '$timezone'\n";
> 
> Shorter and faster.
> 
> > +open(my $fh, "<", $zonetab);
> > +while(my $line = <$fh>) {
> > +   next if $line =~ /^#/;
> > +   chomp $line;
> > +   push @valid_tzlist, (split /\t/, $line)[2];
> > +}
> > +close $fh;
> > +
> > +if (grep (/^$timezone$/, @valid_tzlist) eq 0) {
> > +   return undef if $noerr;
> > +   die "invalid time zone '$timezone'\n";
> > +}
> > +
> > +return $timezone;
> > +}
> > +
> >  # network interface name
> >  register_format('pve-iface', \&pve_verify_iface);
> >  sub pve_verify_iface {
> > 
> 



[pve-devel] [PATCH common 1/5] jsonschema: register 'timezone' format and add verification method

2020-06-16 Thread Oguz Bektas
/usr/share/zoneinfo/zone.tab has the valid list of time zones.

Signed-off-by: Oguz Bektas 
---
 src/PVE/JSONSchema.pm | 24 
 1 file changed, 24 insertions(+)

diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
index 84fb694..ff97a3d 100644
--- a/src/PVE/JSONSchema.pm
+++ b/src/PVE/JSONSchema.pm
@@ -482,6 +482,30 @@ sub pve_verify_dns_name {
 return $name;
 }
 
+register_format('timezone', \&pve_verify_timezone);
+sub pve_verify_timezone {
+my ($timezone, $noerr) = @_;
+
+my $zonetab = "/usr/share/zoneinfo/zone.tab";
+my @valid_tzlist;
+push @valid_tzlist, 'host'; # host localtime
+push @valid_tzlist, 'UTC'; # not in $zonetab
+open(my $fh, "<", $zonetab);
+while(my $line = <$fh>) {
+   next if $line =~ /^#/;
+   chomp $line;
+   push @valid_tzlist, (split /\t/, $line)[2];
+}
+close $fh;
+
+if (grep (/^$timezone$/, @valid_tzlist) eq 0) {
+   return undef if $noerr;
+   die "invalid time zone '$timezone'\n";
+}
+
+return $timezone;
+}
+
 # network interface name
 register_format('pve-iface', \&pve_verify_iface);
 sub pve_verify_iface {
-- 
2.20.1



[pve-devel] [PATCH widget-toolkit 3/5] add TimezonePanel for containers

2020-06-16 Thread Oguz Bektas
with 3 modes;
- CT managed (no action)
- match host (use same timezone as host)
- select from list

also move 'UTC' to the top of the TimezoneStore for convenience

Signed-off-by: Oguz Bektas 
---
 src/Makefile   |  1 +
 src/data/TimezoneStore.js  |  2 +-
 src/panel/TimezonePanel.js | 73 ++
 3 files changed, 75 insertions(+), 1 deletion(-)
 create mode 100644 src/panel/TimezonePanel.js

diff --git a/src/Makefile b/src/Makefile
index 659e876..e1a31e8 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -38,6 +38,7 @@ JSSRC=\
panel/JournalView.js\
panel/RRDChart.js   \
panel/GaugeWidget.js\
+   panel/TimezonePanel.js  \
window/Edit.js  \
window/PasswordEdit.js  \
window/TaskViewer.js\
diff --git a/src/data/TimezoneStore.js b/src/data/TimezoneStore.js
index a67ad8b..fcaca3e 100644
--- a/src/data/TimezoneStore.js
+++ b/src/data/TimezoneStore.js
@@ -7,6 +7,7 @@ Ext.define('Proxmox.data.TimezoneStore', {
 extend: 'Ext.data.Store',
 model: 'Timezone',
 data: [
+   ['UTC'],
['Africa/Abidjan'],
['Africa/Accra'],
['Africa/Addis_Ababa'],
@@ -414,6 +415,5 @@ Ext.define('Proxmox.data.TimezoneStore', {
['Pacific/Tongatapu'],
['Pacific/Wake'],
['Pacific/Wallis'],
-   ['UTC'],
],
 });
diff --git a/src/panel/TimezonePanel.js b/src/panel/TimezonePanel.js
new file mode 100644
index 000..5ebac65
--- /dev/null
+++ b/src/panel/TimezonePanel.js
@@ -0,0 +1,73 @@
+Ext.define('PVE.panel.TimezonePanel', {
+extend: 'Proxmox.panel.InputPanel',
+alias: 'widget.PVETimezonePanel',
+
+insideWizard: false,
+
+setValues: function(values) {
+   var me = this;
+
+   if (!values.timezone) {
+   delete values.tzmode;
+   } else if (values.timezone === 'host') {
+   values.tzmode = 'host';
+   } else {
+   values.tzmode = 'select';
+   }
+   return me.callParent([values]);
+},
+
+onGetValues: function(values) {
+   var me = this;
+   var deletes = [];
+   if (!values.tzmode) {
+   deletes.push('timezone');
+   } else if (values.tzmode === 'host') {
+   values.timezone = 'host';
+   }
+   delete values.tzmode;
+   if (deletes.length > 0) {
+   values.delete = deletes;
+   }
+
+   return values;
+},
+
+items: [
+   {
+   xtype: 'proxmoxKVComboBox',
+   name: 'tzmode',
+   fieldLabel: gettext('Time zone mode'),
+   value: '__default__',
+   comboItems: [
+   ['__default__', 'CT managed'],
+   ['host', 'use host settings'],
+   ['select', 'choose from list'],
+   ],
+   listeners: {
+   change: function(kvcombo, newValue, oldValue, eOpts) {
+   var combo = kvcombo.up('form').down('#tzlistcombo');
+   if (newValue === 'select') {
+   combo.enable();
+   } else if (newValue !== 'select') {
+   combo.disable();
+   }
+   },
+   },
+   },
+   {
+   xtype: 'combobox',
+   itemId: 'tzlistcombo',
+   fieldLabel: gettext('Time zone'),
+   disabled: true,
+   name: 'timezone',
+   queryMode: 'local',
+   store: Ext.create('Proxmox.data.TimezoneStore'),
+   displayField: 'zone',
+   editable: true,
+   anyMatch: true,
+   forceSelection: true,
+   allowBlank: false,
+   },
+],
+});
-- 
2.20.1



[pve-devel] [PATCH container 2/5] fix #1423: add timezone config option

2020-06-16 Thread Oguz Bektas
optionally enabled.

adds the 'timezone' option to config, which takes a valid timezone (i.e.
Europe/Vienna) to set in the container.

if nothing is selected, then it will show as 'container managed' in
GUI, and nothing will be done.

if set to 'host', the /etc/localtime symlink from the host node will be
cached and set in the container rootfs.

Signed-off-by: Oguz Bektas 
---
 src/PVE/LXC/Config.pm |  5 +
 src/PVE/LXC/Setup.pm  | 13 +
 src/PVE/LXC/Setup/Base.pm | 28 +---
 3 files changed, 43 insertions(+), 3 deletions(-)

diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 8d1854a..0ebf517 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -444,6 +444,11 @@ my $confdesc = {
type => 'string', format => 'address-list',
description => "Sets DNS server IP address for a container. Create will 
automatically use the setting from the host if you neither set searchdomain nor 
nameserver.",
 },
+timezone => {
+   optional => 1,
+   type => 'string', format => 'timezone',
+   description => "Time zone to use in the container. If option isn't set, 
then nothing will be done. Can be set to 'host' to match the host time zone, or 
an arbitrary time zone option from /usr/share/zoneinfo/zone1970.tab",
+},
 rootfs => get_standard_option('pve-ct-rootfs'),
 parent => {
optional => 1,
diff --git a/src/PVE/LXC/Setup.pm b/src/PVE/LXC/Setup.pm
index c738e64..0e07796 100644
--- a/src/PVE/LXC/Setup.pm
+++ b/src/PVE/LXC/Setup.pm
@@ -5,6 +5,8 @@ use warnings;
 use POSIX;
 use PVE::Tools;
 
+use Cwd 'abs_path';
+
 use PVE::LXC::Setup::Debian;
 use PVE::LXC::Setup::Ubuntu;
 use PVE::LXC::Setup::CentOS;
@@ -103,6 +105,7 @@ sub new {
 
 # Cache some host files we need access to:
 $plugin->{host_resolv_conf} = PVE::INotify::read_file('resolvconf');
+$plugin->{host_localtime} = abs_path('/etc/localtime');
 
 # pass on user namespace information:
 my ($id_map, $rootuid, $rootgid) = PVE::LXC::parse_id_maps($conf);
@@ -205,6 +208,16 @@ sub set_dns {
 $self->protected_call($code);
 }
 
+sub set_timezone {
+my ($self) = @_;
+
+return if !$self->{plugin}; # unmanaged
+my $code = sub {
+   $self->{plugin}->set_timezone($self->{conf});
+};
+$self->protected_call($code);
+}
+
 sub setup_init {
 my ($self) = @_;
 
diff --git a/src/PVE/LXC/Setup/Base.pm b/src/PVE/LXC/Setup/Base.pm
index 93dace7..4c7c2e6 100644
--- a/src/PVE/LXC/Setup/Base.pm
+++ b/src/PVE/LXC/Setup/Base.pm
@@ -451,6 +451,26 @@ my $randomize_crontab = sub {
}
 };
 
+sub set_timezone {
+my ($self, $conf) = @_;
+
+my $zoneinfo = $conf->{timezone};
+
+return if !defined($zoneinfo);
+
+my $tz_path = "/usr/share/zoneinfo/$zoneinfo";
+if ($zoneinfo eq 'host') {
+   $tz_path= $self->{host_localtime};
+}
+
+if ($self->ct_file_exists($tz_path)) {
+   $self->ct_unlink("/etc/localtime");
+   $self->ct_symlink($tz_path, "/etc/localtime");
+} else {
+   warn "container does not have $tz_path, timezone can not be modified\n";
+}
+}
+
 sub pre_start_hook {
 my ($self, $conf) = @_;
 
@@ -458,6 +478,7 @@ sub pre_start_hook {
 $self->setup_network($conf);
 $self->set_hostname($conf);
 $self->set_dns($conf);
+$self->set_timezone($conf);
 
 # fixme: what else ?
 }
@@ -466,16 +487,17 @@ sub post_create_hook {
 my ($self, $conf, $root_password, $ssh_keys) = @_;
 
 $self->template_fixup($conf);
-
+
 &$randomize_crontab($self, $conf);
-
+
 $self->set_user_password($conf, 'root', $root_password);
 $self->set_user_authorized_ssh_keys($conf, 'root', $ssh_keys) if $ssh_keys;
 $self->setup_init($conf);
 $self->setup_network($conf);
 $self->set_hostname($conf);
 $self->set_dns($conf);
-
+$self->set_timezone($conf);
+
 # fixme: what else ?
 }
 
-- 
2.20.1



[pve-devel] [PATCH 0/5] timezones for containers

2020-06-16 Thread Oguz Bektas
this patch series implements the 'timezone' option for containers.

more detailed info is in the commit messages.



pve-common:

Oguz Bektas (1):
  jsonschema: register 'timezone' format and add verification method

 src/PVE/JSONSchema.pm | 24 
 1 file changed, 24 insertions(+)


pve-container:
Oguz Bektas (1):
  fix #1423: add timezone config option

 src/PVE/LXC/Config.pm |  5 +
 src/PVE/LXC/Setup.pm  | 13 +
 src/PVE/LXC/Setup/Base.pm | 28 +---
 3 files changed, 43 insertions(+), 3 deletions(-)


proxmox-widget-toolkit:
Oguz Bektas (1):
  add TimezonePanel for containers

 src/Makefile   |  1 +
 src/data/TimezoneStore.js  |  2 +-
 src/panel/TimezonePanel.js | 73 ++
 3 files changed, 75 insertions(+), 1 deletion(-)
 create mode 100644 src/panel/TimezonePanel.js


pve-manager:

Oguz Bektas (2):
  add timezone setting to lxc options
  add timezone option to container creation wizard

 www/manager6/lxc/CreateWizard.js | 16 +---
 www/manager6/lxc/Options.js  | 13 +
 2 files changed, 26 insertions(+), 3 deletions(-)

-- 
2.20.1



[pve-devel] [PATCH manager 4/5] add timezone setting to lxc options

2020-06-16 Thread Oguz Bektas
this allows us to set the timezone of a container in the options menu.

Signed-off-by: Oguz Bektas 
---
 www/manager6/lxc/Options.js | 13 +
 1 file changed, 13 insertions(+)

diff --git a/www/manager6/lxc/Options.js b/www/manager6/lxc/Options.js
index 40bde35f..f6e457a3 100644
--- a/www/manager6/lxc/Options.js
+++ b/www/manager6/lxc/Options.js
@@ -140,6 +140,19 @@ Ext.define('PVE.lxc.Options', {
editor: Proxmox.UserName === 'root@pam' ?
'PVE.lxc.FeaturesEdit' : undefined
},
+   timezone: {
+   header: gettext('Time zone'),
+   defaultValue: 'CT managed',
+   deleteDefaultValue: true,
+   deleteEmpty: true,
+   editor: caps.vms['VM.Config.Options'] ? {
+   xtype: 'proxmoxWindowEdit',
+   items: {
+   xtype: 'PVETimezonePanel',
+   subject: gettext('Time zone'),
+   }
+   } : undefined
+   },
hookscript: {
header: gettext('Hookscript')
}
-- 
2.20.1



[pve-devel] [PATCH manager 5/5] add timezone option to container creation wizard

2020-06-16 Thread Oguz Bektas
renames the 'DNS' step to 'DNS / Time' and allows one to set the
timezone of the container during setup.

Signed-off-by: Oguz Bektas 
---
 www/manager6/lxc/CreateWizard.js | 16 +---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/www/manager6/lxc/CreateWizard.js b/www/manager6/lxc/CreateWizard.js
index 87076e0d..3a715669 100644
--- a/www/manager6/lxc/CreateWizard.js
+++ b/www/manager6/lxc/CreateWizard.js
@@ -227,9 +227,19 @@ Ext.define('PVE.lxc.CreateWizard', {
isCreate: true
},
{
-   xtype: 'pveLxcDNSInputPanel',
-   title: gettext('DNS'),
-   insideWizard: true
+   xtype: 'container',
+   layout: 'hbox',
+   title: gettext('DNS / Time'),
+   items: [
+   {
+   xtype: 'pveLxcDNSInputPanel',
+   insideWizard: true
+   },
+   {
+   xtype: 'PVETimezonePanel',
+   insideWizard: true
+   }
+   ]
},
{
title: gettext('Confirm'),
-- 
2.20.1



[pve-devel] [PATCH container] fix #1423: set container timezone to match host in post-create

2020-05-28 Thread Oguz Bektas
we cache the /etc/localtime symlink path from the node and set it in the
container rootfs if zone file exists in the container

Signed-off-by: Oguz Bektas 
---
 src/PVE/LXC/Setup.pm  |  3 +++
 src/PVE/LXC/Setup/Base.pm | 20 +---
 2 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/src/PVE/LXC/Setup.pm b/src/PVE/LXC/Setup.pm
index c738e64..929f75b 100644
--- a/src/PVE/LXC/Setup.pm
+++ b/src/PVE/LXC/Setup.pm
@@ -5,6 +5,8 @@ use warnings;
 use POSIX;
 use PVE::Tools;
 
+use Cwd 'abs_path';
+
 use PVE::LXC::Setup::Debian;
 use PVE::LXC::Setup::Ubuntu;
 use PVE::LXC::Setup::CentOS;
@@ -103,6 +105,7 @@ sub new {
 
 # Cache some host files we need access to:
 $plugin->{host_resolv_conf} = PVE::INotify::read_file('resolvconf');
+$plugin->{host_localtime} = abs_path('/etc/localtime');
 
 # pass on user namespace information:
 my ($id_map, $rootuid, $rootgid) = PVE::LXC::parse_id_maps($conf);
diff --git a/src/PVE/LXC/Setup/Base.pm b/src/PVE/LXC/Setup/Base.pm
index 93dace7..e94e802 100644
--- a/src/PVE/LXC/Setup/Base.pm
+++ b/src/PVE/LXC/Setup/Base.pm
@@ -451,6 +451,19 @@ my $randomize_crontab = sub {
}
 };
 
+sub set_timezone {
+my ($self) = @_;
+
+my $zoneinfo = $self->{host_localtime};
+
+($zoneinfo) = $zoneinfo =~ m/^(.*)$/; # untaint
+
+if ($self->ct_file_exists($zoneinfo)) {
+   $self->ct_unlink("/etc/localtime");
+   $self->ct_symlink($zoneinfo, "/etc/localtime");
+}
+}
+
 sub pre_start_hook {
 my ($self, $conf) = @_;
 
@@ -466,16 +479,17 @@ sub post_create_hook {
 my ($self, $conf, $root_password, $ssh_keys) = @_;
 
 $self->template_fixup($conf);
-
+
 &$randomize_crontab($self, $conf);
-
+
 $self->set_user_password($conf, 'root', $root_password);
 $self->set_user_authorized_ssh_keys($conf, 'root', $ssh_keys) if $ssh_keys;
 $self->setup_init($conf);
 $self->setup_network($conf);
 $self->set_hostname($conf);
 $self->set_dns($conf);
-
+$self->set_timezone();
+
 # fixme: what else ?
 }
 
-- 
2.20.1



[pve-devel] [PATCH v2 container] fix #2655: don't forget to setup securetty for centos >= 7

2020-05-25 Thread Oguz Bektas
in template_fixup we only call this method for versions < 7, but version 7
and later also need to allow lxc/tty[N] as secure.

Signed-off-by: Oguz Bektas 
---

v1->v2:
* call setup_securetty unconditionally

 src/PVE/LXC/Setup/CentOS.pm | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/src/PVE/LXC/Setup/CentOS.pm b/src/PVE/LXC/Setup/CentOS.pm
index 1e6894b..3721ca7 100644
--- a/src/PVE/LXC/Setup/CentOS.pm
+++ b/src/PVE/LXC/Setup/CentOS.pm
@@ -109,10 +109,9 @@ sub template_fixup {
my $data = $self->ct_file_get_contents($filename);
$data =~ s!^(/sbin/start_udev.*)$!#$1!gm;
$self->ct_file_set_contents($filename, $data);
-   
-   # edit /etc/securetty (enable login on console)
-   $self->setup_securetty($conf);
 }
+# edit /etc/securetty (enable login on console)
+$self->setup_securetty($conf);
 }
 
 sub setup_init {
-- 
2.20.1



Re: [pve-devel] [PATCH container] fix #2655: don't forget to setup securetty for centos >= 7

2020-05-25 Thread Oguz Bektas
On Mon, May 25, 2020 at 02:24:34PM +0200, Thomas Lamprecht wrote:
> On 5/25/20 2:15 PM, Oguz Bektas wrote:
> > in template_fixup we only call this method for version < 7, but greater
> > versions also need to allow lxc/tty[N] as secure.
> > 
> > Signed-off-by: Oguz Bektas 
> > ---
> >  src/PVE/LXC/Setup/CentOS.pm | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/src/PVE/LXC/Setup/CentOS.pm b/src/PVE/LXC/Setup/CentOS.pm
> > index 1e6894b..757bc63 100644
> > --- a/src/PVE/LXC/Setup/CentOS.pm
> > +++ b/src/PVE/LXC/Setup/CentOS.pm
> > @@ -109,9 +109,10 @@ sub template_fixup {
> > my $data = $self->ct_file_get_contents($filename);
> > $data =~ s!^(/sbin/start_udev.*)$!#$1!gm;
> > $self->ct_file_set_contents($filename, $data);
> > -   
> > # edit /etc/securetty (enable login on console)
> > $self->setup_securetty($conf);
> > +} else {
> > +   $self->setup_securetty($conf);
> >  }
> 
> so a if-else both ending in the same statement.. Why not move it out and
> do that unconditionally after the if?
okay
> 
> And it doesn't regress for other CentOS versions and un/privileged combos?
it worked fine after the patch and seems to fix the warnings and the
login problems for privileged containers (centos 7). unprivileged
containers work as before.

centos 8 template doesn't have /etc/securetty at all, so root login is
allowed by default.
> 
> >  }
> >  
> > 
> 



[pve-devel] [PATCH container] fix #2655: don't forget to setup securetty for centos >= 7

2020-05-25 Thread Oguz Bektas
in template_fixup we only call this method for versions < 7, but version 7
and later also need to allow lxc/tty[N] as secure.

Signed-off-by: Oguz Bektas 
---
 src/PVE/LXC/Setup/CentOS.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/PVE/LXC/Setup/CentOS.pm b/src/PVE/LXC/Setup/CentOS.pm
index 1e6894b..757bc63 100644
--- a/src/PVE/LXC/Setup/CentOS.pm
+++ b/src/PVE/LXC/Setup/CentOS.pm
@@ -109,9 +109,10 @@ sub template_fixup {
my $data = $self->ct_file_get_contents($filename);
$data =~ s!^(/sbin/start_udev.*)$!#$1!gm;
$self->ct_file_set_contents($filename, $data);
-   
# edit /etc/securetty (enable login on console)
$self->setup_securetty($conf);
+} else {
+   $self->setup_securetty($conf);
 }
 }
 
-- 
2.20.1



[pve-devel] [PATCH v2 manager 3/3] gui: add nvme as a bus type for creating disks

2020-05-18 Thread Oguz Bektas
Signed-off-by: Oguz Bektas 
---
 www/manager6/Utils.js   | 3 ++-
 www/manager6/form/BusTypeSelector.js| 2 ++
 www/manager6/form/ControllerSelector.js | 4 ++--
 www/manager6/qemu/CloudInit.js  | 4 ++--
 www/mobile/QemuSummary.js   | 2 +-
 5 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 0cce81d4..47b6e5c1 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -26,7 +26,7 @@ Ext.define('PVE.Utils', { utilities: {
 
 toolkit: undefined, // (extjs|touch), set inside Toolkit.js
 
-bus_match: /^(ide|sata|virtio|scsi)\d+$/,
+bus_match: /^(ide|sata|virtio|scsi|nvme)\d+$/,
 
 log_severity_hash: {
0: "panic",
@@ -1286,6 +1286,7 @@ Ext.define('PVE.Utils', { utilities: {
ide: 4,
sata: 6,
scsi: 31,
+   nvme: 8,
virtio: 16,
 },
 
diff --git a/www/manager6/form/BusTypeSelector.js 
b/www/manager6/form/BusTypeSelector.js
index 04643e77..c6820b26 100644
--- a/www/manager6/form/BusTypeSelector.js
+++ b/www/manager6/form/BusTypeSelector.js
@@ -15,6 +15,8 @@ Ext.define('PVE.form.BusTypeSelector', {
 
me.comboItems.push(['scsi', 'SCSI']);
 
+   me.comboItems.push(['nvme', 'NVMe']);
+
me.callParent();
 }
 });
diff --git a/www/manager6/form/ControllerSelector.js 
b/www/manager6/form/ControllerSelector.js
index 89ecdf4a..0cea5fce 100644
--- a/www/manager6/form/ControllerSelector.js
+++ b/www/manager6/form/ControllerSelector.js
@@ -37,7 +37,7 @@ Ext.define('PVE.form.ControllerSelector', {
 
me.vmconfig = Ext.apply({}, vmconfig);
 
-   var clist = ['ide', 'virtio', 'scsi', 'sata'];
+   var clist = ['ide', 'virtio', 'scsi', 'sata', 'nvme'];
var bussel = me.down('field[name=controller]');
var deviceid = me.down('field[name=deviceid]');
 
@@ -47,7 +47,7 @@ Ext.define('PVE.form.ControllerSelector', {
deviceid.setValue(2);
return;
}
-   clist = ['ide', 'scsi', 'sata'];
+   clist = ['ide', 'scsi', 'sata', 'nvme'];
} else  {
// in most cases we want to add a disk to the same controller
// we previously used
diff --git a/www/manager6/qemu/CloudInit.js b/www/manager6/qemu/CloudInit.js
index cbb4af9d..ca00698d 100644
--- a/www/manager6/qemu/CloudInit.js
+++ b/www/manager6/qemu/CloudInit.js
@@ -135,7 +135,7 @@ Ext.define('PVE.qemu.CloudInit', {
var id = record.data.key;
var value = record.data.value;
var ciregex = new RegExp("vm-" + me.pveSelNode.data.vmid + 
"-cloudinit");
-   if (id.match(/^(ide|scsi|sata)\d+$/) && ciregex.test(value)) {
+   if (id.match(/^(ide|scsi|sata|nvme)\d+$/) && 
ciregex.test(value)) {
found = id;
me.ciDriveId = found;
me.ciDrive = value;
@@ -316,7 +316,7 @@ Ext.define('PVE.qemu.CloudInit', {
}
/*jslint confusion: false*/
 
-   PVE.Utils.forEachBus(['ide', 'scsi', 'sata'], function(type, id) {
+   PVE.Utils.forEachBus(['ide', 'scsi', 'sata', 'nvme'], function(type, 
id) {
me.rows[type+id] = {
visible: false
};
diff --git a/www/mobile/QemuSummary.js b/www/mobile/QemuSummary.js
index 6cbaba1b..9b306a45 100644
--- a/www/mobile/QemuSummary.js
+++ b/www/mobile/QemuSummary.js
@@ -12,7 +12,7 @@ Ext.define('PVE.QemuSummary', {
 
 config_keys: [
'name', 'memory', 'sockets', 'cores', 'ostype', 'bootdisk', /^net\d+/,
-   /^ide\d+/, /^virtio\d+/, /^sata\d+/, /^scsi\d+/, /^unused\d+/
+   /^ide\d+/, /^virtio\d+/, /^sata\d+/, /^scsi\d+/, /^nvme\d+/, 
/^unused\d+/
 ],
 
 initialize: function() {
-- 
2.20.1
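
The widened bus_match pattern is easy to check in isolation; a quick sketch in Python (the pattern is identical in Python's `re` syntax, so the behavior matches the JavaScript regex above):

```python
import re

# Same pattern as the patched PVE.Utils.bus_match: a bus name must be
# followed by at least one digit, so a bare "nvme" is not a config key.
BUS_MATCH = re.compile(r'^(ide|sata|virtio|scsi|nvme)\d+$')

def parse_bus_key(key):
    """Return (bus, index) for keys like 'nvme0', or None otherwise."""
    m = BUS_MATCH.match(key)
    if not m:
        return None
    bus = m.group(1)
    return bus, int(key[len(bus):])

print(parse_bus_key('nvme0'))    # ('nvme', 0)
print(parse_bus_key('scsi31'))   # ('scsi', 31)
print(parse_bus_key('nvme'))     # None
print(parse_bus_key('unused0'))  # None
```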



[pve-devel] [RFC v2 qemu-server 1/3] fix #2255: add support for nvme emulation

2020-05-18 Thread Oguz Bektas
now we can add nvme drives:

nvme0: local-lvm:vm-103-disk-0,size=32G

or

qm set VMID --nvme0 local-lvm:32

max number is 8 for now, as most real hardware has 1-3 nvme slots and
can have a few more with pcie. most cases won't go over 8; if there's an
actual use case at some point this can always be increased without
breaking anything (although the same isn't true for decreasing it).

Signed-off-by: Oguz Bektas 
---

v1->v2:
* serial can be configured by the user
* add missing "id=drive-nvme[X]" in the device string; without this we
can't unplug at all

 PVE/QemuServer.pm   | 23 ---
 PVE/QemuServer/Drive.pm | 21 +
 2 files changed, 41 insertions(+), 3 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index dcf05df..3623fdb 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -406,7 +406,7 @@ EODESC
optional => 1,
type => 'string', format => 'pve-qm-bootdisk',
description => "Enable booting from specified disk.",
-   pattern => '(ide|sata|scsi|virtio)\d+',
+   pattern => '(ide|sata|scsi|virtio|nvme)\d+',
 },
 smp => {
optional => 1,
@@ -1424,7 +1424,10 @@ sub print_drivedevice_full {
$device .= ",rotation_rate=1";
}
$device .= ",wwn=$drive->{wwn}" if $drive->{wwn};
-
+} elsif ($drive->{interface} eq 'nvme') {
+   my $path = $drive->{file};
+   $drive->{serial} //= "$drive->{interface}$drive->{index}"; # serial is 
mandatory for nvme
+   $device = 
"nvme,drive=drive-$drive->{interface}$drive->{index},id=$drive->{interface}$drive->{index}";
 } elsif ($drive->{interface} eq 'ide' || $drive->{interface} eq 'sata') {
my $maxdev = ($drive->{interface} eq 'sata') ? 
$PVE::QemuServer::Drive::MAX_SATA_DISKS : 2;
my $controller = int($drive->{index} / $maxdev);
@@ -2157,7 +2160,7 @@ sub parse_vm_config {
} else {
$key = 'ide2' if $key eq 'cdrom';
my $fmt = $confdesc->{$key}->{format};
-   if ($fmt && $fmt =~ /^pve-qm-(?:ide|scsi|virtio|sata)$/) {
+   if ($fmt && $fmt =~ /^pve-qm-(?:ide|scsi|virtio|sata|nvme)$/) {
my $v = parse_drive($key, $value);
if (my $volid = filename_to_volume_id($vmid, $v->{file}, 
$v->{media})) {
$v->{file} = $volid;
@@ -3784,7 +3787,17 @@ sub vm_deviceplug {
warn $@ if $@;
die $err;
 }
+} elsif ($deviceid =~ m/^(nvme)(\d+)$/) {
+
+   qemu_driveadd($storecfg, $vmid, $device);
 
+   my $devicefull = print_drivedevice_full($storecfg, $conf, $vmid, 
$device, $arch, $machine_type);
+   eval { qemu_deviceadd($vmid, $devicefull); };
+   if (my $err = $@) {
+   eval { qemu_drivedel($vmid, $deviceid); };
+   warn $@ if $@;
+   die $err;
+}
 } elsif ($deviceid =~ m/^(net)(\d+)$/) {
 
return undef if !qemu_netdevadd($vmid, $conf, $arch, $device, 
$deviceid);
@@ -3862,6 +3875,10 @@ sub vm_deviceunplug {
 qemu_drivedel($vmid, $deviceid);
qemu_deletescsihw($conf, $vmid, $deviceid);
 
+} elsif ($deviceid =~ m/^(nvme)(\d+)$/) {
+   qemu_devicedel($vmid, $deviceid);
+   qemu_drivedel($vmid, $deviceid);
+
 } elsif ($deviceid =~ m/^(net)(\d+)$/) {
 
 qemu_devicedel($vmid, $deviceid);
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index f84333f..b8a553a 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -27,6 +27,7 @@ 
PVE::JSONSchema::register_standard_option('pve-qm-image-format', {
 
 my $MAX_IDE_DISKS = 4;
 my $MAX_SCSI_DISKS = 31;
+my $MAX_NVME_DISKS = 8;
 my $MAX_VIRTIO_DISKS = 16;
 our $MAX_SATA_DISKS = 6;
 our $MAX_UNUSED_DISKS = 256;
@@ -275,6 +276,20 @@ my $scsidesc = {
 };
 PVE::JSONSchema::register_standard_option("pve-qm-scsi", $scsidesc);
 
+my $nvme_fmt = {
+%drivedesc_base,
+%ssd_fmt,
+%wwn_fmt,
+};
+
+my $nvmedesc = {
+optional => 1,
+type => 'string', format => $nvme_fmt,
+description => "Use volume as NVME disk (n is 0 to " . ($MAX_NVME_DISKS 
-1) . ").",
+};
+
+PVE::JSONSchema::register_standard_option("pve-qm-nvme", $nvmedesc);
+
 my $sata_fmt = {
 %drivedesc_base,
 %ssd_fmt,
@@ -364,6 +379,11 @@ for (my $i = 0; $i < $MAX_SCSI_DISKS; $i++)  {
 $drivedesc_hash->{"scsi$i"} = $scsidesc;
 }
 
+for (my $i = 0; $i < $MAX_NVME_DISKS; $i++)  {
+$drivedesc_hash->{"nvme$i"} = $nvmedesc;
+}
+
+
 for (my $i = 0; $i < $MAX_VIRTIO_DISKS; $i++)  {
 $drivedesc_hash->{"virtio$i"} = $virtiodesc;
 }
@@ -380,6 +400,7 @@ sub valid_drive_names {
 (map { "scsi$_" } (0 .. ($MAX_SCSI_DISKS - 1))),
 (map { "virt

[pve-devel] [PATCH v2 qemu-server 2/3] drive: use more compact for-loop expression for initializing drive descriptions

2020-05-18 Thread Oguz Bektas
Signed-off-by: Oguz Bektas 
---
 PVE/QemuServer/Drive.pm | 25 ++---
 1 file changed, 6 insertions(+), 19 deletions(-)

diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index b8a553a..5dc5508 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -367,32 +367,19 @@ my $unuseddesc = {
 description => "Reference to unused volumes. This is used internally, and 
should not be modified manually.",
 };
 
-for (my $i = 0; $i < $MAX_IDE_DISKS; $i++)  {
-$drivedesc_hash->{"ide$i"} = $idedesc;
-}
+$drivedesc_hash->{"ide$_"} = $idedesc for (0..$MAX_IDE_DISKS);
 
-for (my $i = 0; $i < $MAX_SATA_DISKS; $i++)  {
-$drivedesc_hash->{"sata$i"} = $satadesc;
-}
+$drivedesc_hash->{"sata$_"} = $satadesc for (0..$MAX_SATA_DISKS);
 
-for (my $i = 0; $i < $MAX_SCSI_DISKS; $i++)  {
-$drivedesc_hash->{"scsi$i"} = $scsidesc;
-}
+$drivedesc_hash->{"scsi$_"} = $scsidesc for (0..$MAX_SCSI_DISKS);
 
-for (my $i = 0; $i < $MAX_NVME_DISKS; $i++)  {
-$drivedesc_hash->{"nvme$i"} = $nvmedesc;
-}
+$drivedesc_hash->{"nvme$_"} = $nvmedesc for (0..$MAX_NVME_DISKS);
 
-
-for (my $i = 0; $i < $MAX_VIRTIO_DISKS; $i++)  {
-$drivedesc_hash->{"virtio$i"} = $virtiodesc;
-}
+$drivedesc_hash->{"virtio$_"} = $virtiodesc for (0..$MAX_VIRTIO_DISKS);
 
 $drivedesc_hash->{efidisk0} = $efidisk_desc;
 
-for (my $i = 0; $i < $MAX_UNUSED_DISKS; $i++) {
-$drivedesc_hash->{"unused$i"} = $unuseddesc;
-}
+$drivedesc_hash->{"unused$_"} = $unuseddesc for (0..$MAX_UNUSED_DISKS);
 
 sub valid_drive_names {
 # order is important - used to autoselect boot disk
-- 
2.20.1
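
One subtlety with Perl's range operator: `0..$N` is inclusive of both endpoints, so `0..$MAX_IDE_DISKS` yields one more index than the C-style `$i < $MAX_IDE_DISKS` loop it replaces; the half-open equivalent is `0..$MAX_IDE_DISKS - 1`. Illustrated in Python, whose `range` is half-open like the original loop:

```python
MAX_IDE_DISKS = 4

# Half-open iteration, matching the original C-style loop:
half_open = [f"ide{i}" for i in range(MAX_IDE_DISKS)]

# Inclusive iteration, matching Perl's 0..$MAX_IDE_DISKS:
inclusive = [f"ide{i}" for i in range(MAX_IDE_DISKS + 1)]

print(half_open)  # ['ide0', 'ide1', 'ide2', 'ide3']
print(inclusive)  # ['ide0', 'ide1', 'ide2', 'ide3', 'ide4'] -- one key extra
```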



[pve-devel] [RFC v2 0/3] nvme emulation

2020-05-18 Thread Oguz Bektas
add support for nvme emulation.

v1->v2:
* implement thomas' suggestions from mailing list


i'm sending this as RFC because of the following issue; maybe someone has a tip
for me:

alpine linux vm with 6 disks: nvme0-4 and sata0.
hot-unplugging nvme4 works.
re-hotplugging nvme4 without rebooting causes the following error:

---
TASK ERROR: 400 Parameter verification failed. 
nvme4: hotplug problem - adding drive failed: Duplicate ID 'drive-nvme4' for 
drive
---

the drive cannot be hotplugged with the same name until reboot; changing the id 
from nvme4 to nvme5 works, however...

i'm not sure why this happens; something in qemu isn't being deleted after 
removing the device/drive, but `info qtree` and `info qom-tree` do
not show nvme4 after it is unplugged.





qemu-server:
Oguz Bektas (2):
  fix #2255: add support for nvme emulation
  drive: use more compact for-loop expression for initializing drive
descriptions

 PVE/QemuServer.pm   | 23 ---
 PVE/QemuServer/Drive.pm | 38 +++---
 2 files changed, 43 insertions(+), 18 deletions(-)

pve-manager:
Oguz Bektas (1):
  gui: add nvme as a bus type for creating disks

 www/manager6/Utils.js   | 3 ++-
 www/manager6/form/BusTypeSelector.js| 2 ++
 www/manager6/form/ControllerSelector.js | 4 ++--
 www/manager6/qemu/CloudInit.js  | 4 ++--
 www/mobile/QemuSummary.js   | 2 +-
 5 files changed, 9 insertions(+), 6 deletions(-)


-- 
2.20.1



Re: [pve-devel] [PATCH qemu-server 1/2] add support for nvme emulation

2020-05-14 Thread Oguz Bektas
hi,

On Thu, May 14, 2020 at 09:27:31AM +0200, Thomas Lamprecht wrote:
> 
> please always include the bug/feature # somewhere as reference, e.g. a
> "fix #2255: ..." ideally in the subject, or at least in the commit message
> would be good.


will do for v2
> 
> On 5/13/20 5:36 PM, Oguz Bektas wrote:
> > now we can add nvme drives;
> > 
> > nvme0: local-lvm:vm-103-disk-0,size=32G
> 
> An example I can use end to end is nicer for a reviewer, e.g., something like:
> qm set 100 --nvme0 local-lvm:32
> 
> 
> > max number is 8
> 
> The "hows" and especially the "whys" are missing a bit here, I know them as I
> answered question to you directly during development of this, but in a month
> most is forgot, git commit messages are eternal ;)
> 
> Nicer would be something like:
> "allow maximal 8 drives, most real hardware provides normally 1 to 3 slots and
> PCIe can host possibly a few more. For the default case 8 should be enough to
> mirror common HW, and more can be added easily if a real usecase comes up.
>  
> Also, decreasing it later is impossible without breaking setups"

roger
> 
> 
> > 
> > Signed-off-by: Oguz Bektas 
> > ---
> >  PVE/QemuServer.pm   | 20 +---
> >  PVE/QemuServer/Drive.pm | 21 +
> >  2 files changed, 38 insertions(+), 3 deletions(-)
> > 
> > diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> > index dcf05df..441d209 100644
> > --- a/PVE/QemuServer.pm
> > +++ b/PVE/QemuServer.pm
> > @@ -406,7 +406,7 @@ EODESC
> > optional => 1,
> > type => 'string', format => 'pve-qm-bootdisk',
> > description => "Enable booting from specified disk.",
> > -   pattern => '(ide|sata|scsi|virtio)\d+',
> > +   pattern => '(ide|sata|scsi|virtio|nvme)\d+',
> >  },
> >  smp => {
> > optional => 1,
> > @@ -1424,7 +1424,11 @@ sub print_drivedevice_full {
> > $device .= ",rotation_rate=1";
> > }
> > $device .= ",wwn=$drive->{wwn}" if $drive->{wwn};
> > -
> > +} elsif ($drive->{interface} eq 'nvme') {
> > +   my $maxdev = $PVE::QemuServer::Drive::MAX_NVME_DISKS;
> 
> not used here
right
> 
> > +   my $path = $drive->{file};
> > +   $drive->{serial} = "$drive->{interface}$drive->{index}"; # serial is 
> > mandatory for nvme
> 
> hmm, but this doesn't allow users to set their own serial...
right, changed to your suggestion
> Maybe:
> 
> $drive->{serial} //= "$drive->{interface}$drive->{index}";
> 
> 
> > +   $device = "nvme,drive=drive-$drive->{interface}$drive->{index}";
> >  } elsif ($drive->{interface} eq 'ide' || $drive->{interface} eq 
> > 'sata') {
> > my $maxdev = ($drive->{interface} eq 'sata') ? 
> > $PVE::QemuServer::Drive::MAX_SATA_DISKS : 2;
> > my $controller = int($drive->{index} / $maxdev);
> > @@ -2157,7 +2161,7 @@ sub parse_vm_config {
> > } else {
> > $key = 'ide2' if $key eq 'cdrom';
> > my $fmt = $confdesc->{$key}->{format};
> > -   if ($fmt && $fmt =~ /^pve-qm-(?:ide|scsi|virtio|sata)$/) {
> > +   if ($fmt && $fmt =~ /^pve-qm-(?:ide|scsi|virtio|sata|nvme)$/) {
> > my $v = parse_drive($key, $value);
> > if (my $volid = filename_to_volume_id($vmid, $v->{file}, 
> > $v->{media})) {
> > $v->{file} = $volid;
> > @@ -3784,7 +3788,17 @@ sub vm_deviceplug {
> > warn $@ if $@;
> > die $err;
> >  }
> > +} elsif ($deviceid =~ m/^(nvme)(\d+)$/) {
> > +
> > +qemu_driveadd($storecfg, $vmid, $device);
> >  
> > +   my $devicefull = print_drivedevice_full($storecfg, $conf, $vmid, 
> > $device, $arch, $machine_type);
> > +   eval { qemu_deviceadd($vmid, $devicefull); };
> > +   if (my $err = $@) {
> > +   eval { qemu_drivedel($vmid, $deviceid); };
> > +   warn $@ if $@;
> > +   die $err;
> > +}
> >  } elsif ($deviceid =~ m/^(net)(\d+)$/) {
> >  
> > return undef if !qemu_netdevadd($vmid, $conf, $arch, $device, 
> > $deviceid);
> > diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
> > index f84333f..b8a553a 100644
> > --- a/PVE/QemuServer/Drive.pm
> > +++ b/PVE/QemuServer/Drive.pm
> > @@ -27,6 +27,7 @@ 
> > PVE::JSONSchema::register_standard_option('pve-qm-image-format', {
> >  
&

[pve-devel] [PATCH 0/2] nvme emulation

2020-05-13 Thread Oguz Bektas
add nvme emulation support for disks.

qemu-server:

Oguz Bektas (1):
  add support for nvme emulation

 PVE/QemuServer.pm   | 20 +---
 PVE/QemuServer/Drive.pm | 21 +
 2 files changed, 38 insertions(+), 3 deletions(-)

pve-manager:

Oguz Bektas (1):
  gui: add nvme as a bus type for creating disks

 www/manager6/Utils.js   |   3 ++-
 www/manager6/form/BusTypeSelector.js|   2 ++
 www/manager6/form/ControllerSelector.js |   4 ++--
 www/manager6/qemu/.Snapshot.js.swp  | Bin 0 -> 12288 bytes
 www/manager6/qemu/CloudInit.js  |   4 ++--
 www/mobile/QemuSummary.js   |   2 +-
 6 files changed, 9 insertions(+), 6 deletions(-)
 create mode 100644 www/manager6/qemu/.Snapshot.js.swp


-- 
2.20.1



[pve-devel] [PATCH manager 2/2] gui: add nvme as a bus type for creating disks

2020-05-13 Thread Oguz Bektas
add nvme to the bus list and relevant spots in gui

Signed-off-by: Oguz Bektas 
---
 www/manager6/Utils.js   |   3 ++-
 www/manager6/form/BusTypeSelector.js|   2 ++
 www/manager6/form/ControllerSelector.js |   4 ++--
 www/manager6/qemu/.Snapshot.js.swp  | Bin 0 -> 12288 bytes
 www/manager6/qemu/CloudInit.js  |   4 ++--
 www/mobile/QemuSummary.js   |   2 +-
 6 files changed, 9 insertions(+), 6 deletions(-)
 create mode 100644 www/manager6/qemu/.Snapshot.js.swp

diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 0cce81d4..47b6e5c1 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -26,7 +26,7 @@ Ext.define('PVE.Utils', { utilities: {
 
 toolkit: undefined, // (extjs|touch), set inside Toolkit.js
 
-bus_match: /^(ide|sata|virtio|scsi)\d+$/,
+bus_match: /^(ide|sata|virtio|scsi|nvme)\d+$/,
 
 log_severity_hash: {
0: "panic",
@@ -1286,6 +1286,7 @@ Ext.define('PVE.Utils', { utilities: {
ide: 4,
sata: 6,
scsi: 31,
+   nvme: 8,
virtio: 16,
 },
 
diff --git a/www/manager6/form/BusTypeSelector.js 
b/www/manager6/form/BusTypeSelector.js
index 04643e77..c65eba79 100644
--- a/www/manager6/form/BusTypeSelector.js
+++ b/www/manager6/form/BusTypeSelector.js
@@ -15,6 +15,8 @@ Ext.define('PVE.form.BusTypeSelector', {
 
me.comboItems.push(['scsi', 'SCSI']);
 
+   me.comboItems.push(['nvme', 'NVME']);
+
me.callParent();
 }
 });
diff --git a/www/manager6/form/ControllerSelector.js 
b/www/manager6/form/ControllerSelector.js
index 89ecdf4a..0cea5fce 100644
--- a/www/manager6/form/ControllerSelector.js
+++ b/www/manager6/form/ControllerSelector.js
@@ -37,7 +37,7 @@ Ext.define('PVE.form.ControllerSelector', {
 
me.vmconfig = Ext.apply({}, vmconfig);
 
-   var clist = ['ide', 'virtio', 'scsi', 'sata'];
+   var clist = ['ide', 'virtio', 'scsi', 'sata', 'nvme'];
var bussel = me.down('field[name=controller]');
var deviceid = me.down('field[name=deviceid]');
 
@@ -47,7 +47,7 @@ Ext.define('PVE.form.ControllerSelector', {
deviceid.setValue(2);
return;
}
-   clist = ['ide', 'scsi', 'sata'];
+   clist = ['ide', 'scsi', 'sata', 'nvme'];
} else  {
// in most cases we want to add a disk to the same controller
// we previously used
diff --git a/www/manager6/qemu/.Snapshot.js.swp 
b/www/manager6/qemu/.Snapshot.js.swp
new file mode 100644
index 
..bcfd26a5a863605108667b951d6d8f3c9b3afa10
GIT binary patch
literal 12288
[base85 binary patch data truncated in archive]


[pve-devel] [PATCH qemu-server 1/2] add support for nvme emulation

2020-05-13 Thread Oguz Bektas
now we can add nvme drives:

nvme0: local-lvm:vm-103-disk-0,size=32G

max number is 8

Signed-off-by: Oguz Bektas 
---
 PVE/QemuServer.pm   | 20 +---
 PVE/QemuServer/Drive.pm | 21 +
 2 files changed, 38 insertions(+), 3 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index dcf05df..441d209 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -406,7 +406,7 @@ EODESC
optional => 1,
type => 'string', format => 'pve-qm-bootdisk',
description => "Enable booting from specified disk.",
-   pattern => '(ide|sata|scsi|virtio)\d+',
+   pattern => '(ide|sata|scsi|virtio|nvme)\d+',
 },
 smp => {
optional => 1,
@@ -1424,7 +1424,11 @@ sub print_drivedevice_full {
$device .= ",rotation_rate=1";
}
$device .= ",wwn=$drive->{wwn}" if $drive->{wwn};
-
+} elsif ($drive->{interface} eq 'nvme') {
+   my $maxdev = $PVE::QemuServer::Drive::MAX_NVME_DISKS;
+   my $path = $drive->{file};
+   $drive->{serial} = "$drive->{interface}$drive->{index}"; # serial is 
mandatory for nvme
+   $device = "nvme,drive=drive-$drive->{interface}$drive->{index}";
 } elsif ($drive->{interface} eq 'ide' || $drive->{interface} eq 'sata') {
my $maxdev = ($drive->{interface} eq 'sata') ? 
$PVE::QemuServer::Drive::MAX_SATA_DISKS : 2;
my $controller = int($drive->{index} / $maxdev);
@@ -2157,7 +2161,7 @@ sub parse_vm_config {
} else {
$key = 'ide2' if $key eq 'cdrom';
my $fmt = $confdesc->{$key}->{format};
-   if ($fmt && $fmt =~ /^pve-qm-(?:ide|scsi|virtio|sata)$/) {
+   if ($fmt && $fmt =~ /^pve-qm-(?:ide|scsi|virtio|sata|nvme)$/) {
my $v = parse_drive($key, $value);
if (my $volid = filename_to_volume_id($vmid, $v->{file}, 
$v->{media})) {
$v->{file} = $volid;
@@ -3784,7 +3788,17 @@ sub vm_deviceplug {
warn $@ if $@;
die $err;
 }
+} elsif ($deviceid =~ m/^(nvme)(\d+)$/) {
+
+qemu_driveadd($storecfg, $vmid, $device);
 
+   my $devicefull = print_drivedevice_full($storecfg, $conf, $vmid, 
$device, $arch, $machine_type);
+   eval { qemu_deviceadd($vmid, $devicefull); };
+   if (my $err = $@) {
+   eval { qemu_drivedel($vmid, $deviceid); };
+   warn $@ if $@;
+   die $err;
+}
 } elsif ($deviceid =~ m/^(net)(\d+)$/) {
 
return undef if !qemu_netdevadd($vmid, $conf, $arch, $device, 
$deviceid);
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index f84333f..b8a553a 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -27,6 +27,7 @@ 
PVE::JSONSchema::register_standard_option('pve-qm-image-format', {
 
 my $MAX_IDE_DISKS = 4;
 my $MAX_SCSI_DISKS = 31;
+my $MAX_NVME_DISKS = 8;
 my $MAX_VIRTIO_DISKS = 16;
 our $MAX_SATA_DISKS = 6;
 our $MAX_UNUSED_DISKS = 256;
@@ -275,6 +276,20 @@ my $scsidesc = {
 };
 PVE::JSONSchema::register_standard_option("pve-qm-scsi", $scsidesc);
 
+my $nvme_fmt = {
+%drivedesc_base,
+%ssd_fmt,
+%wwn_fmt,
+};
+
+my $nvmedesc = {
+optional => 1,
+type => 'string', format => $nvme_fmt,
+description => "Use volume as NVME disk (n is 0 to " . ($MAX_NVME_DISKS 
-1) . ").",
+};
+
+PVE::JSONSchema::register_standard_option("pve-qm-nvme", $nvmedesc);
+
 my $sata_fmt = {
 %drivedesc_base,
 %ssd_fmt,
@@ -364,6 +379,11 @@ for (my $i = 0; $i < $MAX_SCSI_DISKS; $i++)  {
 $drivedesc_hash->{"scsi$i"} = $scsidesc;
 }
 
+for (my $i = 0; $i < $MAX_NVME_DISKS; $i++)  {
+$drivedesc_hash->{"nvme$i"} = $nvmedesc;
+}
+
+
 for (my $i = 0; $i < $MAX_VIRTIO_DISKS; $i++)  {
 $drivedesc_hash->{"virtio$i"} = $virtiodesc;
 }
@@ -380,6 +400,7 @@ sub valid_drive_names {
 (map { "scsi$_" } (0 .. ($MAX_SCSI_DISKS - 1))),
 (map { "virtio$_" } (0 .. ($MAX_VIRTIO_DISKS - 1))),
 (map { "sata$_" } (0 .. ($MAX_SATA_DISKS - 1))),
+(map { "nvme$_" } (0 .. ($MAX_NVME_DISKS - 1))),
 'efidisk0');
 }
 
-- 
2.20.1
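
As the review of this patch (above) suggests, unconditionally overwriting the serial clobbers a user-supplied value; Perl's defined-or assignment (`//=`) only fills it in when missing, and the v2 also adds the `id=` needed for unplugging. A simplified Python sketch of that device-string construction (an illustration, not the actual print_drivedevice_full code):

```python
def nvme_device_string(drive):
    """Build a simplified -device argument for an emulated NVMe drive.

    drive is a dict with 'interface', 'index' and optionally 'serial'.
    QEMU's nvme device requires a serial, so default it to e.g. 'nvme0'
    only when the user did not set one (the //= behaviour).
    """
    name = f"{drive['interface']}{drive['index']}"
    drive.setdefault('serial', name)  # Perl: $drive->{serial} //= ...
    return f"nvme,drive=drive-{name},id={name},serial={drive['serial']}"

print(nvme_device_string({'interface': 'nvme', 'index': 0}))
# nvme,drive=drive-nvme0,id=nvme0,serial=nvme0
print(nvme_device_string({'interface': 'nvme', 'index': 1, 'serial': 'SN123'}))
# nvme,drive=drive-nvme1,id=nvme1,serial=SN123
```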



[pve-devel] [PATCH aab] add vi and nano to base template

2020-05-08 Thread Oguz Bektas
Signed-off-by: Oguz Bektas 
---
 PVE/AAB.pm | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/PVE/AAB.pm b/PVE/AAB.pm
index 405841b..6620ac8 100644
--- a/PVE/AAB.pm
+++ b/PVE/AAB.pm
@@ -11,8 +11,7 @@ use IPC::Open2;
 use IPC::Open3;
 use UUID;
 use Cwd;
-
-my @BASE_PACKAGES = qw(base openssh);
+my @BASE_PACKAGES = qw(base openssh vi nano);
 my @BASE_EXCLUDES = qw(e2fsprogs
jfsutils
linux
-- 
2.20.1



[pve-devel] [PATCH container] implement CT reinstall option for restore

2020-04-01 Thread Oguz Bektas
this adds the 'reinstall' flag, which is a special forced restore
(overwrites an existing container with the chosen template)

testing command example:

pct restore  /path/to/template/file --reinstall --storage local-lvm 
--unprivileged 1 --password 123456

should reinstall the CT with the given template (different distros can
be chosen as well, e.g. an existing alpine container reinstalled as
archlinux).
a password or ssh key is required, since this calls setup
routines like post_create_hook (otherwise we won't be able to log in to
the CT).

Signed-off-by: Oguz Bektas 
---
 src/PVE/API2/LXC.pm   | 40 
 src/PVE/LXC/Create.pm | 11 ---
 2 files changed, 44 insertions(+), 7 deletions(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index f4c1a49..f1cd67e 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -147,6 +147,11 @@ __PACKAGE__->register_method({
type => 'boolean',
description => "Mark this as restore task.",
},
+   reinstall => {
+   optional => 1,
+   type => 'boolean',
+   description => "Reinstall container using a template.",
+   },
unique => {
optional => 1,
type => 'boolean',
@@ -208,26 +213,39 @@ __PACKAGE__->register_method({
my $unprivileged = extract_param($param, 'unprivileged');
my $restore = extract_param($param, 'restore');
my $unique = extract_param($param, 'unique');
+   my $reinstall = extract_param($param, 'reinstall');
 
# used to skip firewall config restore if user lacks permission
my $skip_fw_config_restore = 0;
+   my $force = extract_param($param, 'force');
+
+   if ($reinstall) {
+   # reinstall is handled as a special force restore (possibly w/
+   # extra config opts given in the call)
+   $restore = 1;
+   $force = 1;
+   }
 
	if ($restore) {
	# fixme: limit allowed parameters
	}
 
-   my $force = extract_param($param, 'force');
 
+   my $reinstall_conf;
	if (!($same_container_exists && $restore && $force)) {
	PVE::Cluster::check_vmid_unused($vmid);
	} else {
	die "can't overwrite running container\n" if PVE::LXC::check_running($vmid);
	my $conf = PVE::LXC::Config->load_config($vmid);
+   $reinstall_conf = $conf;
	PVE::LXC::Config->check_protection($conf, "unable to restore CT $vmid");
	}
 
	my $password = extract_param($param, 'password');
	my $ssh_keys = extract_param($param, 'ssh-public-keys');
+   if ($reinstall && !defined($password) && !defined($ssh_keys)) {
+   die "password/ssh key required during reinstall. aborting...\n";
+   }
	PVE::Tools::validate_ssh_public_keys($ssh_keys) if defined($ssh_keys);
 
	my $pool = extract_param($param, 'pool');
@@ -354,7 +372,15 @@ __PACKAGE__->register_method({
	die "can't overwrite running container\n" if PVE::LXC::check_running($vmid);
	if ($is_root && $archive ne '-') {
	my $orig_conf;
-   ($orig_conf, $orig_mp_param) = PVE::LXC::Create::recover_config($storage_cfg, $archive);
+   if ($reinstall) {
+   PVE::LXC::Config->foreach_mountpoint($reinstall_conf, sub {
+   my ($ms, $mountpoint) = @_;
+   $orig_mp_param->{$ms} = $reinstall_conf->{$ms};
+   });
+   }
+   else {
+   ($orig_conf, $orig_mp_param) = PVE::LXC::Create::recover_config($storage_cfg, $archive);
+   }
	$was_template = delete $orig_conf->{template};
	# When we're root call 'restore_configuration' with restricted=0,
	# causing it to restore the raw lxc entries, among which there may be
@@ -365,7 +391,12 @@ __PACKAGE__->register_method({
	}
	if ($storage_only_mode) {
	if ($restore) {
-   if (!defined($orig_mp_param)) {
+   if ($reinstall) {
+   PVE::LXC::Config->foreach_mountpoint($reinstall_conf, sub {
+   my ($ms, $mountpoint) = @_;
+   $orig_mp_param->{$ms} = $reinstall_conf->{$ms};
+   });
+   } elsif (!defined($orig_mp_param)) {
	(undef, $orig_mp_param) = PVE::LXC::Create::recover_config($storage_cfg, $archive);
	}
$mp_para

Re: [pve-devel] [PATCH v2 qemu-server 5/5] hotplug_pending: allow partial fast plugging

2020-03-12 Thread Oguz Bektas
On Thu, Mar 12, 2020 at 02:18:07PM +0100, Thomas Lamprecht wrote:
> On 3/12/20 2:11 PM, Oguz Bektas wrote:
> > On Tue, Mar 10, 2020 at 11:25:23AM +0100, Thomas Lamprecht wrote:
> >> On 2/19/20 5:07 PM, Oguz Bektas wrote:
> >>> adds a loop after the main fastplug loop, to check if any of the options
> >>> are partially fast pluggable.
> >>>
> >>> these are defined in $partial_fast_plug_option
> >>>
> >>> Signed-off-by: Oguz Bektas 
> >>> ---
> >>>
> >>> v1->v2:
> >>> * set $changes according to partial_fast_plug result as well as fast_plug 
> >>> result
> >>> * do cleanup_pending before writing fastplug changes
> >>>
> >>>
> >>>
> >>>  PVE/QemuConfig.pm |  7 +++
> >>>  PVE/QemuServer.pm | 19 +++
> >>>  2 files changed, 26 insertions(+)
> >>>
> >>> diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
> >>> index 1ba728a..d1727b2 100644
> >>> --- a/PVE/QemuConfig.pm
> >>> +++ b/PVE/QemuConfig.pm
> >>> @@ -399,6 +399,13 @@ sub __snapshot_foreach_volume {
> >>>  
> >>>  PVE::QemuServer::foreach_drive($conf, $func);
> >>>  }
> >>> +
> >>> +sub get_partial_fast_plug_option {
> >>> +my ($class) = @_;
> >>> +
> >>> +return $PVE::QemuServer::partial_fast_plug_option;
> >>> +}
> >>> +
> >>>  # END implemented abstract methods from PVE::AbstractConfig
> >>>  
> >>>  1;
> >>> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> >>> index 44d0dee..8a689a0 100644
> >>> --- a/PVE/QemuServer.pm
> >>> +++ b/PVE/QemuServer.pm
> >>> @@ -4732,6 +4732,18 @@ my $fast_plug_option = {
> >>>  'tags' => 1,
> >>>  };
> >>>  
> >>> +# name of opt
> >>> +# -> fmt -> format variable
> >>> +# -> properties -> fastpluggable options hash
> >>> +our $partial_fast_plug_option = {
> >>> +agent => {
> >>> + fmt => $agent_fmt,
> >>> + properties => {
> >>> + fstrim_cloned_disks => 1
> >>> + },
> >>> +},
> >>> +};
> >> this belongs solely in the get_partial_fast_plug_option method, I do not 
> >> want
> >> to tighten the cyclic use of both modules and it's not required here.
> > when we move the hash to QemuConfig it complains about $agent_fmt which
> > resides in QemuServer (as well as other formats which could be used
> > later on for partial fastplugging). that was the initial reason i added
> > it there because i didn't want to move the formats. should we change the
> > definitions from 'my' to 'our' to use them in QemuConfig? or how to
> > handle this better?
> > 
> 
> Ah yeah, in the longer run the schemas should move to QemuConfig,
> IIRC from some discussions.

starting to work on QemuSchema rebase then, since it seems to be the
better option in the long run.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v2 qemu-server 5/5] hotplug_pending: allow partial fast plugging

2020-03-12 Thread Oguz Bektas
hi,


On Tue, Mar 10, 2020 at 11:25:23AM +0100, Thomas Lamprecht wrote:
> On 2/19/20 5:07 PM, Oguz Bektas wrote:
> > adds a loop after the main fastplug loop, to check if any of the options
> > are partially fast pluggable.
> > 
> > these are defined in $partial_fast_plug_option
> > 
> > Signed-off-by: Oguz Bektas 
> > ---
> > 
> > v1->v2:
> > * set $changes according to partial_fast_plug result as well as fast_plug 
> > result
> > * do cleanup_pending before writing fastplug changes
> > 
> > 
> > 
> >  PVE/QemuConfig.pm |  7 +++
> >  PVE/QemuServer.pm | 19 +++
> >  2 files changed, 26 insertions(+)
> > 
> > diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
> > index 1ba728a..d1727b2 100644
> > --- a/PVE/QemuConfig.pm
> > +++ b/PVE/QemuConfig.pm
> > @@ -399,6 +399,13 @@ sub __snapshot_foreach_volume {
> >  
> >  PVE::QemuServer::foreach_drive($conf, $func);
> >  }
> > +
> > +sub get_partial_fast_plug_option {
> > +my ($class) = @_;
> > +
> > +return $PVE::QemuServer::partial_fast_plug_option;
> > +}
> > +
> >  # END implemented abstract methods from PVE::AbstractConfig
> >  
> >  1;
> > diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> > index 44d0dee..8a689a0 100644
> > --- a/PVE/QemuServer.pm
> > +++ b/PVE/QemuServer.pm
> > @@ -4732,6 +4732,18 @@ my $fast_plug_option = {
> >  'tags' => 1,
> >  };
> >  
> > +# name of opt
> > +# -> fmt -> format variable
> > +# -> properties -> fastpluggable options hash
> > +our $partial_fast_plug_option = {
> > +agent => {
> > +   fmt => $agent_fmt,
> > +   properties => {
> > +   fstrim_cloned_disks => 1
> > +   },
> > +},
> > +};
> 
> this belongs solely in the get_partial_fast_plug_option method, I do not want
> to tighten the cyclic use of both modules and it's not required here.

when we move the hash to QemuConfig it complains about $agent_fmt which
resides in QemuServer (as well as other formats which could be used
later on for partial fastplugging). that was the initial reason i added
it there because i didn't want to move the formats. should we change the
definitions from 'my' to 'our' to use them in QemuConfig? or how to
handle this better?

> 
> > +
> >  # hotplug changes in [PENDING]
> >  # $selection hash can be used to only apply specified options, for
> >  # example: { cores => 1 } (only apply changed 'cores')
> > @@ -4761,7 +4773,14 @@ sub vmconfig_hotplug_pending {
> > }
> >  }
> >  
> > +foreach my $opt (keys %{$conf->{pending}}) {
> > +   if ($partial_fast_plug_option->{$opt}) {
> > +   $changes ||= PVE::QemuConfig->partial_fast_plug($conf, $opt);
> > +   }
> > +}
> 
> with my followup for the guest-common series where I early return with 0 (no
> change) if no partial_fast_plug_option is set for $opt you could just do:
> 
> for my $opt (keys %{$conf->{pending}}) {
> $changes ||= PVE::QemuConfig->partial_fast_plug($conf, $opt);
> }
> 
> This avoids a split of logic between here and the guest common method, i.e.,
> untangles spaghetti code a bit :)

okay i'll do that
> 
> > +
> >  if ($changes) {
> > +   PVE::QemuConfig->cleanup_pending($conf);
> > PVE::QemuConfig->write_config($vmid, $conf);
> >  }
> >  
> > 
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] slirp: re-add security patches

2020-03-12 Thread Oguz Bektas
the first two patches were mistakenly left out during the 4.2 qemu
rebase.

also adds another patch for issue CVE-2019-14378 (heap-based BOF)

Signed-off-by: Oguz Bektas 
---
 .../0001-util-add-slirp_fmt-helpers.patch | 126 
 ...2-tcp_emu-fix-unsafe-snprintf-usages.patch | 135 ++
 .../0003-ip_reass-Fix-use-after-free.patch|  46 ++
 debian/patches/series |   3 +
 4 files changed, 310 insertions(+)
 create mode 100644 
debian/patches/security/0001-util-add-slirp_fmt-helpers.patch
 create mode 100644 
debian/patches/security/0002-tcp_emu-fix-unsafe-snprintf-usages.patch
 create mode 100644 
debian/patches/security/0003-ip_reass-Fix-use-after-free.patch

diff --git a/debian/patches/security/0001-util-add-slirp_fmt-helpers.patch 
b/debian/patches/security/0001-util-add-slirp_fmt-helpers.patch
new file mode 100644
index 000..af944f8
--- /dev/null
+++ b/debian/patches/security/0001-util-add-slirp_fmt-helpers.patch
@@ -0,0 +1,126 @@
+From  Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= 
+Date: Mon, 27 Jan 2020 10:24:09 +0100
+Subject: [PATCH 1/2] util: add slirp_fmt() helpers
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Various calls to snprintf() in libslirp assume that snprintf() returns
+"only" the number of bytes written (excluding terminating NUL).
+
+https://pubs.opengroup.org/onlinepubs/9699919799/functions/snprintf.html#tag_16_159_04
+
+"Upon successful completion, the snprintf() function shall return the
+number of bytes that would be written to s had n been sufficiently
+large excluding the terminating null byte."
+
+Introduce slirp_fmt() that handles several pathological cases the
+way libslirp usually expect:
+
+- treat error as fatal (instead of silently returning -1)
+
+- fmt0() will always \0 end
+
+- return the number of bytes actually written (instead of what would
+have been written, which would usually result in OOB later), including
+the ending \0 for fmt0()
+
+- warn if truncation happened (instead of ignoring)
+
+Other less common cases can still be handled with strcpy/snprintf() etc.
+
+Signed-off-by: Marc-André Lureau 
+Reviewed-by: Samuel Thibault 
+Message-Id: <20200127092414.169796-2-marcandre.lur...@redhat.com>
+Signed-off-by: Oguz Bektas 
+---
+ slirp/src/util.c | 62 ++
+ slirp/src/util.h |  3 +++
+ 2 files changed, 65 insertions(+)
+
+diff --git a/slirp/src/util.c b/slirp/src/util.c
+index e596087..e3b6257 100644
+--- a/slirp/src/util.c
 b/slirp/src/util.c
+@@ -364,3 +364,65 @@ void slirp_pstrcpy(char *buf, int buf_size, const char 
*str)
+ }
+ *q = '\0';
+ }
++
++static int slirp_vsnprintf(char *str, size_t size,
++   const char *format, va_list args)
++{
++int rv = vsnprintf(str, size, format, args);
++
++if (rv < 0) {
++g_error("vsnprintf() failed: %s", g_strerror(errno));
++}
++
++return rv;
++}
++
++/*
++ * A snprintf()-like function that:
++ * - returns the number of bytes written (excluding optional \0-ending)
++ * - dies on error
++ * - warn on truncation
++ */
++int slirp_fmt(char *str, size_t size, const char *format, ...)
++{
++va_list args;
++int rv;
++
++va_start(args, format);
++rv = slirp_vsnprintf(str, size, format, args);
++va_end(args);
++
++if (rv > size) {
++g_critical("vsnprintf() truncation");
++}
++
++return MIN(rv, size);
++}
++
++/*
++ * A snprintf()-like function that:
++ * - always \0-end (unless size == 0)
++ * - returns the number of bytes actually written, including \0 ending
++ * - dies on error
++ * - warn on truncation
++ */
++int slirp_fmt0(char *str, size_t size, const char *format, ...)
++{
++va_list args;
++int rv;
++
++va_start(args, format);
++rv = slirp_vsnprintf(str, size, format, args);
++va_end(args);
++
++if (rv >= size) {
++g_critical("vsnprintf() truncation");
++if (size > 0)
++str[size - 1] = '\0';
++rv = size;
++} else {
++rv += 1; /* include \0 */
++}
++
++return rv;
++}
+diff --git a/slirp/src/util.h b/slirp/src/util.h
+index 3c6223c..0558dfc 100644
+--- a/slirp/src/util.h
 b/slirp/src/util.h
+@@ -177,4 +177,7 @@ static inline int slirp_socket_set_fast_reuse(int fd)
+ 
+ void slirp_pstrcpy(char *buf, int buf_size, const char *str);
+ 
++int slirp_fmt(char *str, size_t size, const char *format, ...);
++int slirp_fmt0(char *str, size_t size, const char *format, ...);
++
+ #endif
+-- 
+2.20.1
+
diff --git 
a/debian/patches/security/0002-tcp_emu-fix-unsafe-snprintf-usages.patch 
b/debian/patches/security/0002-tcp_emu-fix-unsafe-snprintf-usages.patch
new file mode 100644
index 000..099fecd
--- /dev/null
+++ b/debian/patches/security/0

Re: [pve-devel] [PATCH pve-manager 1/1] fix #2634: if hook-script without permission, prints message, that the script not executable

2020-03-12 Thread Oguz Bektas
hi,

works fine. also tested with symlinks.
i'd remove the extra whitespace near the end though.

Tested-by: Oguz Bektas 

On Thu, Mar 12, 2020 at 12:18:18PM +0100, Moayad Almalat wrote:
> Signed-off-by: Moayad Almalat 
> ---
>  PVE/VZDump.pm | 6 ++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
> index ada3681e..5f7f7725 100644
> --- a/PVE/VZDump.pm
> +++ b/PVE/VZDump.pm
> @@ -571,6 +571,12 @@ sub run_hook_script {
>  
>  return if !$script;
>  
> +if (!-x $script) {
> + die "The hook-script '$script' is not executable.\n"
> +}
> +
> +
> +
>  my $cmd = "$script $phase";
>  
>  $cmd .= " $task->{mode} $task->{vmid}" if ($task);
> -- 
> 2.20.1
> 
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH pve-qemu] slirp: re-add security patches

2020-03-11 Thread Oguz Bektas
On Wed, Mar 11, 2020 at 04:05:22PM +0100, Dominik Csapak wrote:
> [...]
> > diff --git a/debian/patches/series b/debian/patches/series
> > index 651c609..63598ab 100644
> > --- a/debian/patches/series
> > +++ b/debian/patches/series
> > @@ -30,3 +30,7 @@ pve/0029-PVE-Backup-add-vma-backup-format-code.patch
> >   pve/0030-PVE-Backup-add-backup-dump-block-driver.patch
> >   pve/0031-PVE-Backup-proxmox-backup-patches-for-qemu.patch
> >   pve/0032-PVE-Backup-pbs-restore-new-command-to-restore-from-p.patch
> > +security/0001-util-add-slirp_fmt-helpers.patch
> > +security/0002-tcp_emu-fix-unsafe-snprintf-usages.patch
> > +security/0003-ip_reass-Fix-use-after-free.patch
> > +security/0004-tcp_emu-Fix-oob-access.patch
> > 
> 
> seems that 0004 patch is missing? or mistakenly in the series file?
> 

sorry, mistakenly added in the series file

that was the 1st part of a patch series which didn't apply anymore. safe
to remove from series file as well

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH pve-qemu] slirp: re-add security patches

2020-03-11 Thread Oguz Bektas
the first two patches were mistakenly left out during the 4.2 qemu
rebase.

also adds another patch for issue CVE-2019-14378 (heap-based BOF)

Signed-off-by: Oguz Bektas 
---
 .../0001-util-add-slirp_fmt-helpers.patch | 126 
 ...2-tcp_emu-fix-unsafe-snprintf-usages.patch | 135 ++
 .../0003-ip_reass-Fix-use-after-free.patch|  46 ++
 debian/patches/series |   4 +
 4 files changed, 311 insertions(+)
 create mode 100644 
debian/patches/security/0001-util-add-slirp_fmt-helpers.patch
 create mode 100644 
debian/patches/security/0002-tcp_emu-fix-unsafe-snprintf-usages.patch
 create mode 100644 
debian/patches/security/0003-ip_reass-Fix-use-after-free.patch

diff --git a/debian/patches/security/0001-util-add-slirp_fmt-helpers.patch 
b/debian/patches/security/0001-util-add-slirp_fmt-helpers.patch
new file mode 100644
index 000..af944f8
--- /dev/null
+++ b/debian/patches/security/0001-util-add-slirp_fmt-helpers.patch
@@ -0,0 +1,126 @@
+From  Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= 
+Date: Mon, 27 Jan 2020 10:24:09 +0100
+Subject: [PATCH 1/2] util: add slirp_fmt() helpers
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Various calls to snprintf() in libslirp assume that snprintf() returns
+"only" the number of bytes written (excluding terminating NUL).
+
+https://pubs.opengroup.org/onlinepubs/9699919799/functions/snprintf.html#tag_16_159_04
+
+"Upon successful completion, the snprintf() function shall return the
+number of bytes that would be written to s had n been sufficiently
+large excluding the terminating null byte."
+
+Introduce slirp_fmt() that handles several pathological cases the
+way libslirp usually expect:
+
+- treat error as fatal (instead of silently returning -1)
+
+- fmt0() will always \0 end
+
+- return the number of bytes actually written (instead of what would
+have been written, which would usually result in OOB later), including
+the ending \0 for fmt0()
+
+- warn if truncation happened (instead of ignoring)
+
+Other less common cases can still be handled with strcpy/snprintf() etc.
+
+Signed-off-by: Marc-André Lureau 
+Reviewed-by: Samuel Thibault 
+Message-Id: <20200127092414.169796-2-marcandre.lur...@redhat.com>
+Signed-off-by: Oguz Bektas 
+---
+ slirp/src/util.c | 62 ++
+ slirp/src/util.h |  3 +++
+ 2 files changed, 65 insertions(+)
+
+diff --git a/slirp/src/util.c b/slirp/src/util.c
+index e596087..e3b6257 100644
+--- a/slirp/src/util.c
 b/slirp/src/util.c
+@@ -364,3 +364,65 @@ void slirp_pstrcpy(char *buf, int buf_size, const char 
*str)
+ }
+ *q = '\0';
+ }
++
++static int slirp_vsnprintf(char *str, size_t size,
++   const char *format, va_list args)
++{
++int rv = vsnprintf(str, size, format, args);
++
++if (rv < 0) {
++g_error("vsnprintf() failed: %s", g_strerror(errno));
++}
++
++return rv;
++}
++
++/*
++ * A snprintf()-like function that:
++ * - returns the number of bytes written (excluding optional \0-ending)
++ * - dies on error
++ * - warn on truncation
++ */
++int slirp_fmt(char *str, size_t size, const char *format, ...)
++{
++va_list args;
++int rv;
++
++va_start(args, format);
++rv = slirp_vsnprintf(str, size, format, args);
++va_end(args);
++
++if (rv > size) {
++g_critical("vsnprintf() truncation");
++}
++
++return MIN(rv, size);
++}
++
++/*
++ * A snprintf()-like function that:
++ * - always \0-end (unless size == 0)
++ * - returns the number of bytes actually written, including \0 ending
++ * - dies on error
++ * - warn on truncation
++ */
++int slirp_fmt0(char *str, size_t size, const char *format, ...)
++{
++va_list args;
++int rv;
++
++va_start(args, format);
++rv = slirp_vsnprintf(str, size, format, args);
++va_end(args);
++
++if (rv >= size) {
++g_critical("vsnprintf() truncation");
++if (size > 0)
++str[size - 1] = '\0';
++rv = size;
++} else {
++rv += 1; /* include \0 */
++}
++
++return rv;
++}
+diff --git a/slirp/src/util.h b/slirp/src/util.h
+index 3c6223c..0558dfc 100644
+--- a/slirp/src/util.h
 b/slirp/src/util.h
+@@ -177,4 +177,7 @@ static inline int slirp_socket_set_fast_reuse(int fd)
+ 
+ void slirp_pstrcpy(char *buf, int buf_size, const char *str);
+ 
++int slirp_fmt(char *str, size_t size, const char *format, ...);
++int slirp_fmt0(char *str, size_t size, const char *format, ...);
++
+ #endif
+-- 
+2.20.1
+
diff --git 
a/debian/patches/security/0002-tcp_emu-fix-unsafe-snprintf-usages.patch 
b/debian/patches/security/0002-tcp_emu-fix-unsafe-snprintf-usages.patch
new file mode 100644
index 000..099fecd
--- /dev/null
+++ b/debian/patches/security/0

Re: [pve-devel] [PATCH storage] zfs: set cachefile=none when creating pool

2020-03-05 Thread Oguz Bektas
hi,

On Thu, Mar 05, 2020 at 03:21:11PM +0100, Thomas Lamprecht wrote:
> On 3/5/20 2:16 PM, Oguz Bektas wrote:
> > the first rpool we create during setup with our installer has
> > cachefile=none set.
> > 
> > when this isn't specified, zfs defaults to /etc/zfs/zpool.cache
> > which later can cause problems
> > 
> > [0]: 
> > https://forum.proxmox.com/threads/zfs-mirror-werden-nach-reboot-nicht-mehr-automatisch-gemountet.66222/#post-298984
> 
> can you please elaborate a bit why this is OK, why and which problems are
> being caused without this. I'm not saying it's wrong, I just want some
> real rationale in the commit message, thanks!

of course.

i started looking into this after many users in the forum reported that
their zvols are not mounted after reboot, which seems to get fixed after
updating the cachefile.
except we don't use a cachefile by default when creating the first
'rpool' during setup (see pve-installer code).

however in the storage code pools are created without the
'cachefile=none' option, which results in zfs using the default
cachefile. (cachefile doesn't exist before that)

after creating a pool without the explicit option (using our GUI for
example where this bit of code gets called), /etc/zfs/zpool.cache
cachefile is created which results in systemd deciding to use the
import-cache service instead.

that's also most likely the reason the cachefile gets outdated (which
results in the problem in the forum post)
> 
> > 
> > Signed-off-by: Oguz Bektas 
> > ---
> >  PVE/API2/Disks/ZFS.pm | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
> > index 551f21a..78484eb 100644
> > --- a/PVE/API2/Disks/ZFS.pm
> > +++ b/PVE/API2/Disks/ZFS.pm
> > @@ -371,7 +371,7 @@ __PACKAGE__->register_method ({
> > PVE::Diskmanage::locked_disk_action(sub {
> > # create zpool with desired raidlevel
> >  
> > -   my $cmd = [$ZPOOL, 'create', '-o', "ashift=$ashift", $name];
> > +   my $cmd = [$ZPOOL, 'create', '-o', "cachefile=none", 
> > "ashift=$ashift", $name];
> >  
> > if ($raidlevel eq 'raid10') {
> > for (my $i = 0; $i < @$devs; $i+=2) {
> > 
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage] zfs: set cachefile=none when creating pool

2020-03-05 Thread Oguz Bektas
the first rpool we create during setup with our installer has
cachefile=none set.

when this isn't specified, zfs defaults to /etc/zfs/zpool.cache
which later can cause problems

[0]: 
https://forum.proxmox.com/threads/zfs-mirror-werden-nach-reboot-nicht-mehr-automatisch-gemountet.66222/#post-298984

Signed-off-by: Oguz Bektas 
---
 PVE/API2/Disks/ZFS.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
index 551f21a..78484eb 100644
--- a/PVE/API2/Disks/ZFS.pm
+++ b/PVE/API2/Disks/ZFS.pm
@@ -371,7 +371,7 @@ __PACKAGE__->register_method ({
PVE::Diskmanage::locked_disk_action(sub {
# create zpool with desired raidlevel
 
-   my $cmd = [$ZPOOL, 'create', '-o', "ashift=$ashift", $name];
+   my $cmd = [$ZPOOL, 'create', '-o', "cachefile=none", 
"ashift=$ashift", $name];
 
if ($raidlevel eq 'raid10') {
for (my $i = 0; $i < @$devs; $i+=2) {
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH pve-qemu] add patch for CVE-2019-20382 (vnc disconnect memory leak)

2020-03-05 Thread Oguz Bektas
oss-security email can be found here[0]

upstream commit here[1]

this affects our vncproxy. dominik and i tested whether the issue is
present on our branch, and it appears that it is.
in essence, when we disconnect from a vnc connection the memory isn't
freed afterwards, which causes the qemu process to use more and more
memory with each disconnect; this could lead to a DoS scenario.

we tested the patch and it seems to mitigate the problem.

[0]: https://seclists.org/oss-sec/2020/q1/105
[1]: 
https://git.qemu.org/?p=qemu.git;a=commitdiff;h=6bf21f3d83e95bcc4ba35a7a07cc6655e8b010b0

Tested-by: Dominik Csapak 
Signed-off-by: Oguz Bektas 
---
 ...-fix-memory-leak-when-vnc-disconnect.patch | 1016 +
 debian/patches/series |1 +
 2 files changed, 1017 insertions(+)
 create mode 100644 
debian/patches/extra/0003-vnc-fix-memory-leak-when-vnc-disconnect.patch

diff --git 
a/debian/patches/extra/0003-vnc-fix-memory-leak-when-vnc-disconnect.patch 
b/debian/patches/extra/0003-vnc-fix-memory-leak-when-vnc-disconnect.patch
new file mode 100644
index 000..3c95274
--- /dev/null
+++ b/debian/patches/extra/0003-vnc-fix-memory-leak-when-vnc-disconnect.patch
@@ -0,0 +1,1016 @@
+From  Mon Sep 17 00:00:00 2001
+From: Li Qiang 
+Date: Sat, 31 Aug 2019 08:39:22 -0700
+Subject: [PATCH] vnc: fix memory leak when vnc disconnect
+
+Currently when qemu receives a vnc connect, it creates a 'VncState' to
+represent this connection. In 'vnc_worker_thread_loop' it creates a
+local 'VncState'. The connection 'VcnState' and local 'VncState' exchange
+data in 'vnc_async_encoding_start' and 'vnc_async_encoding_end'.
+In 'zrle_compress_data' it calls 'deflateInit2' to allocate the libz library
+opaque data. The 'VncState' used in 'zrle_compress_data' is the local
+'VncState'. In 'vnc_zrle_clear' it calls 'deflateEnd' to free the libz
+library opaque data. The 'VncState' used in 'vnc_zrle_clear' is the connection
+'VncState'. In currently implementation there will be a memory leak when the
+vnc disconnect. Following is the asan output backtrack:
+
+Direct leak of 29760 byte(s) in 5 object(s) allocated from:
+0 0xa67ef3c3 in __interceptor_calloc (/lib64/libasan.so.4+0xd33c3)
+1 0xa65071cb in g_malloc0 (/lib64/libglib-2.0.so.0+0x571cb)
+2 0xa5e968f7 in deflateInit2_ (/lib64/libz.so.1+0x78f7)
+3 0xcec58613 in zrle_compress_data ui/vnc-enc-zrle.c:87
+4 0xcec58613 in zrle_send_framebuffer_update ui/vnc-enc-zrle.c:344
+5 0xcec34e77 in vnc_send_framebuffer_update ui/vnc.c:919
+6 0xcec5e023 in vnc_worker_thread_loop ui/vnc-jobs.c:271
+7 0xcec5e5e7 in vnc_worker_thread ui/vnc-jobs.c:340
+8 0xcee4d3c3 in qemu_thread_start util/qemu-thread-posix.c:502
+9 0xa544e8bb in start_thread (/lib64/libpthread.so.0+0x78bb)
+10 0xa53965cb in thread_start (/lib64/libc.so.6+0xd55cb)
+
+This is because the opaque allocated in 'deflateInit2' is not freed in
+'deflateEnd'. The reason is that the 'deflateEnd' calls 'deflateStateCheck'
+and in the latter will check whether 's->strm != strm'(libz's data structure).
+This check will be true so in 'deflateEnd' it just return 'Z_STREAM_ERROR' and
+not free the data allocated in 'deflateInit2'.
+
+The reason this happens is that the 'VncState' contains the whole 'VncZrle',
+so when calling 'deflateInit2', the 's->strm' will be the local address.
+So 's->strm != strm' will be true.
+
+To fix this issue, we need to make 'zrle' of 'VncState' to be a pointer.
+Then the connection 'VncState' and local 'VncState' exchange mechanism will
+work as expection. The 'tight' of 'VncState' has the same issue, let's also turn
+it to a pointer.
+
+Reported-by: Ying Fang 
+Signed-off-by: Li Qiang 
+Message-id: 20190831153922.121308-1-liq...@163.com
+Signed-off-by: Gerd Hoffmann 
+(cherry picked from commit 6bf21f3d83e95bcc4ba35a7a07cc6655e8b010b0)
+Signed-off-by: Oguz Bektas 
+---
+ ui/vnc-enc-tight.c| 219 +-
+ ui/vnc-enc-zlib.c |  11 ++-
+ ui/vnc-enc-zrle.c |  68 ++---
+ ui/vnc-enc-zrle.inc.c |   2 +-
+ ui/vnc.c  |  28 +++---
+ ui/vnc.h  |   4 +-
+ 6 files changed, 170 insertions(+), 162 deletions(-)
+
+diff --git a/ui/vnc-enc-tight.c b/ui/vnc-enc-tight.c
+index 9084c2201b..1e0851826a 100644
+--- a/ui/vnc-enc-tight.c
 b/ui/vnc-enc-tight.c
+@@ -116,7 +116,7 @@ static int send_png_rect(VncState *vs, int x, int y, int 
w, int h,
+ 
+ static bool tight_can_send_png_rect(VncState *vs, int w, int h)
+ {
+-if (vs->tight.type != VNC_ENCODING_TIGHT_PNG) {
++if (vs->tight->type != VNC_ENCODING_TIGHT_PNG) {
+ return false;
+ }
+ 
+@@ -144,7 +144,7 @@ tight_detect_smooth_image24(VncState *vs, int w, int h)
+ int pixels = 0;
+ int pix, left[3];
+ unsigned int errors;
+-unsigned char *buf = vs->tight.tight.buffer;
++unsigned char *buf = vs

[pve-devel] [PATCH docs] local-zfs: add troubleshooting section

2020-03-04 Thread Oguz Bektas
and an entry for the corrupted cachefile problem which keeps showing up
in forum posts.

Signed-off-by: Oguz Bektas 
---
 local-zfs.adoc | 29 +
 1 file changed, 29 insertions(+)

diff --git a/local-zfs.adoc b/local-zfs.adoc
index 5cce677..c516db8 100644
--- a/local-zfs.adoc
+++ b/local-zfs.adoc
@@ -551,3 +551,32 @@ in the pool will opt in for small file blocks).
 
 # zfs set special_small_blocks=0 /
 
+
+[[zfs_troubleshooting]]
+Troubleshooting
+~~~
+
+.Corrupted cachefile
+
+In case of a corrupted ZFS cachefile, some volumes may not be mounted during
+boot until mounted manually later.
+
+For each pool, run:
+
+
+# zpool set cachefile=/etc/zfs/zpool.cache POOLNAME
+
+
+and afterwards update the `initramfs` by running:
+
+
+# update-initramfs -u -k all
+
+
+and finally reboot your node.
+
+Sometimes the ZFS cachefile can get corrupted, and `zfs-import-cache.service`
+doesn't import the pools that aren't present in the cachefile.
+
+Another workaround for this problem is to enable `zfs-import-scan.service`,
+which searches for and imports pools via device scanning (usually slower).
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH Fix: CentOS 8 lxc start After update & upgrade PVE 5.4 1/1] Fix: CentOS 8 After Update & Upgrade PVE-5.4

2020-03-02 Thread Oguz Bektas
On Mon, Mar 02, 2020 at 03:20:50PM +0100, Moayad Almalat wrote:
> Hi,
> 
> Yeah it's in PVE 5.x, i already added in the Subject PVE 5.4.

yes i see. you should still add the name of the
branch i.e. 'stable-5' so the subject can be something like:

[PATCH pve-container stable-5] fix ...

that makes it easier to realize when sorting through emails for review

> 
> Should i edit the PATCH?

yes, please make it '<= 9' to match the current branch, so we don't
have to change it again later :)

> 
> > On March 2, 2020 2:51 PM Oguz Bektas  wrote:
> > 
> >  
> > hi,
> > 
> > i assume this is for the older 5.x branch. you should mention that in
> > your message.
> > 
> > in the 6.x branch this is already set to '<= 9' so you can do that here
> > as well.
> > 
> > On Mon, Mar 02, 2020 at 02:33:32PM +0100, Moayad Almalat wrote:
> > > Signed-off-by: Moayad Almalat 
> > > ---
> > >  src/PVE/LXC/Setup/CentOS.pm | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > 
> > > diff --git a/src/PVE/LXC/Setup/CentOS.pm b/src/PVE/LXC/Setup/CentOS.pm
> > > index cc4c5bb..34430ff 100644
> > > --- a/src/PVE/LXC/Setup/CentOS.pm
> > > +++ b/src/PVE/LXC/Setup/CentOS.pm
> > > @@ -20,7 +20,7 @@ sub new {
> > >  my $version;
> > >  
> > >  if ($release =~ m/release\s+(\d+\.\d+)(\.\d+)?/) {
> > > - if ($1 >= 5 && $1 <= 8) {
> > > + if ($1 >= 5 && $1 <= 8.1) {
> > >   $version = $1;
> > >   }
> > >  }
> > > -- 
> > > 2.20.1
> > > 
> > > ___
> > > pve-devel mailing list
> > > pve-devel@pve.proxmox.com
> > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> > > 
> > 
> > ___
> > pve-devel mailing list
> > pve-devel@pve.proxmox.com
> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> best regards,
>  
> Moayad Almalat
> 
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH Fix: CentOS 8 lxc start After update & upgrade PVE 5.4 1/1] Fix: CentOS 8 After Update & Upgrade PVE-5.4

2020-03-02 Thread Oguz Bektas
hi,

i assume this is for the older 5.x branch. you should mention that in
your message.

in the 6.x branch this is already set to '<= 9' so you can do that here
as well.

On Mon, Mar 02, 2020 at 02:33:32PM +0100, Moayad Almalat wrote:
> Signed-off-by: Moayad Almalat 
> ---
>  src/PVE/LXC/Setup/CentOS.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/src/PVE/LXC/Setup/CentOS.pm b/src/PVE/LXC/Setup/CentOS.pm
> index cc4c5bb..34430ff 100644
> --- a/src/PVE/LXC/Setup/CentOS.pm
> +++ b/src/PVE/LXC/Setup/CentOS.pm
> @@ -20,7 +20,7 @@ sub new {
>  my $version;
>  
>  if ($release =~ m/release\s+(\d+\.\d+)(\.\d+)?/) {
> - if ($1 >= 5 && $1 <= 8) {
> + if ($1 >= 5 && $1 <= 8.1) {
>   $version = $1;
>   }
>  }
> -- 
> 2.20.1
> 
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
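As a side note, the effect of the version bound discussed in this thread is easy to demonstrate. The following sketch (Python for illustration only — the real check is the Perl code in `CentOS.pm` quoted above) shows why `<= 8` rejects a CentOS 8.1 container while the suggested `<= 9` accepts it:

```python
import re

# Same pattern as the one used in CentOS.pm's new():
RELEASE_RE = re.compile(r"release\s+(\d+\.\d+)(\.\d+)?")

def detect_version(release_line, upper_bound):
    """Return the detected major.minor version, or None if out of range."""
    m = RELEASE_RE.search(release_line)
    if not m:
        return None
    version = float(m.group(1))  # e.g. "8.1" -> 8.1
    return version if 5 <= version <= upper_bound else None

release = "CentOS Linux release 8.1.1911 (Core)"
assert detect_version(release, 8) is None   # old bound: 8.1 > 8, setup fails
assert detect_version(release, 9) == 8.1    # '<= 9' accepts CentOS 8.x
```

Using `<= 9` rather than `<= 8.1` avoids having to bump the bound again for 8.2, 8.3, and so on, which is the point raised in the review.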


Re: [pve-devel] [PATCH i18n 0/1] Update Hungarian translation

2020-02-28 Thread Oguz Bektas
hi,

i only received the cover letter here and no patch. perhaps you forgot
to send it?

On Thu, Feb 27, 2020 at 11:01:43PM +0100, Csanádi Norbert wrote:
> From: Norbert Csanadi 
> 
> 
> Norbert Csanadi (1):
>   Update Hungarian translation, focused on Proxmox Mail Gateway webUI
> 
>  hu.po | 1108 +
>  1 file changed, 398 insertions(+), 710 deletions(-)
> 
> -- 
> 2.23.0
> 
> 
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v2 container] fix #2598: activate volumes before mounting in stop mode backup

2020-02-19 Thread Oguz Bektas
On Wed, Feb 19, 2020 at 05:10:54PM +0100, Wolfgang Bumiller wrote:
> On Tue, Feb 18, 2020 at 02:38:52PM +0100, Oguz Bektas wrote:
> > 'stop' mode deactivates the volumes (relevant for LVM backend), and
> > they're not reactivated before trying to mount them for backup.
> > 
> > reactivating the volumes before the mount in 'stop' mode backup solves
> > the issue.
> > 
> > Signed-off-by: Oguz Bektas 
> 
> Acked-by: Wolfgang Bumiller 
> 
> > ---
> >  src/PVE/VZDump/LXC.pm | 1 +
> >  1 file changed, 1 insertion(+)
> > 
> > diff --git a/src/PVE/VZDump/LXC.pm b/src/PVE/VZDump/LXC.pm
> > index 0260184..ed6daa2 100644
> > --- a/src/PVE/VZDump/LXC.pm
> > +++ b/src/PVE/VZDump/LXC.pm
> > @@ -310,6 +310,7 @@ sub archive {
> >  if ($task->{mode} eq 'stop') {
> > my $rootdir = $default_mount_point;
> > my $storage_cfg = $self->{storecfg};
> > +   PVE::Storage::activate_volumes($storage_cfg, $task->{volids});
> 
> This we definitely need. Additionally, we can consider removing this
> from prepare(). Do we maybe also want a 'skip-deactivate' flag for the
> vm-stop call made from vzdump?

i thought that's what the $keepActive flag was for, but it's not used
anymore? we could utilize that again possibly?

> 
> > foreach my $disk (@$disks) {
> > $disk->{dir} = "${rootdir}$disk->{mp}";
> > PVE::LXC::mountpoint_mount($disk, $rootdir, $storage_cfg, undef, $task->{rootuid}, $task->{rootgid});
> > -- 
> > 2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 qemu-server 5/5] hotplug_pending: allow partial fast plugging

2020-02-19 Thread Oguz Bektas
adds a loop after the main fastplug loop to check whether any of the options
are partially fast-pluggable.

these are defined in $partial_fast_plug_option

Signed-off-by: Oguz Bektas 
---

v1->v2:
* set $changes according to partial_fast_plug result as well as fast_plug result
* do cleanup_pending before writing fastplug changes



 PVE/QemuConfig.pm |  7 +++
 PVE/QemuServer.pm | 19 +++
 2 files changed, 26 insertions(+)

diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index 1ba728a..d1727b2 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -399,6 +399,13 @@ sub __snapshot_foreach_volume {
 
 PVE::QemuServer::foreach_drive($conf, $func);
 }
+
+sub get_partial_fast_plug_option {
+my ($class) = @_;
+
+return $PVE::QemuServer::partial_fast_plug_option;
+}
+
 # END implemented abstract methods from PVE::AbstractConfig
 
 1;
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 44d0dee..8a689a0 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4732,6 +4732,18 @@ my $fast_plug_option = {
 'tags' => 1,
 };
 
+# name of opt
+# -> fmt -> format variable
+# -> properties -> fastpluggable options hash
+our $partial_fast_plug_option = {
+agent => {
+   fmt => $agent_fmt,
+   properties => {
+   fstrim_cloned_disks => 1
+   },
+},
+};
+
 # hotplug changes in [PENDING]
 # $selection hash can be used to only apply specified options, for
 # example: { cores => 1 } (only apply changed 'cores')
@@ -4761,7 +4773,14 @@ sub vmconfig_hotplug_pending {
}
 }
 
+foreach my $opt (keys %{$conf->{pending}}) {
+   if ($partial_fast_plug_option->{$opt}) {
+   $changes ||= PVE::QemuConfig->partial_fast_plug($conf, $opt);
+   }
+}
+
 if ($changes) {
+   PVE::QemuConfig->cleanup_pending($conf);
PVE::QemuConfig->write_config($vmid, $conf);
 }
 
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
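The control flow this patch adds can be modeled roughly as follows — a Python sketch with plain dicts standing in for the config and property strings (all names here are illustrative, not the real qemu-server API):

```python
FAST_PLUG = {"tags"}                                     # fully fast-pluggable
PARTIAL_FAST_PLUG = {"agent": {"fstrim_cloned_disks"}}   # per-sub-option

def hotplug_pending(conf):
    changes = False
    # First pass: fully fast-pluggable options are applied directly.
    for opt in list(conf["pending"]):
        if opt in FAST_PLUG:
            conf[opt] = conf["pending"].pop(opt)
            changes = True
    # Second pass (what this patch adds): partially fast-pluggable options,
    # where only selected sub-options may be applied without a restart.
    for opt, fast_subopts in PARTIAL_FAST_PLUG.items():
        pend = conf["pending"].get(opt, {})
        cur = conf.setdefault(opt, {})
        for sub in fast_subopts & set(pend):
            if cur.get(sub) != pend[sub]:
                cur[sub] = pend[sub]
                changes = True
    return changes  # the caller writes the config only if this is True

conf = {"pending": {"tags": "a;b",
                    "agent": {"enabled": 0, "fstrim_cloned_disks": 1}}}
assert hotplug_pending(conf)
assert conf["tags"] == "a;b"
assert conf["agent"] == {"fstrim_cloned_disks": 1}  # 'enabled' stays pending
```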


[pve-devel] [PATCH v2 guest-common 4/5] abstractconfig: add partial_fast_plug

2020-02-19 Thread Oguz Bektas
allows partial fast plugging of functions as defined in the
$partial_fast_plug_option in qemuserver (and possibly lxc later on)

Signed-off-by: Oguz Bektas 
---

v1->v2:
* rename get_partial_fast_plug_map -> get_partial_fast_plug_option
* rename variables in partial_fast_plug for readability/understandability
* return $changes instead of $changes_left
* more verbose comment

 PVE/AbstractConfig.pm | 43 +++
 1 file changed, 43 insertions(+)

diff --git a/PVE/AbstractConfig.pm b/PVE/AbstractConfig.pm
index 782714f..c517c61 100644
--- a/PVE/AbstractConfig.pm
+++ b/PVE/AbstractConfig.pm
@@ -168,6 +168,49 @@ sub cleanup_pending {
 return $changes;
 }
 
+sub get_partial_fast_plug_option {
+my ($class) = @_;
+
+die "abstract method - implement me ";
+}
+
+sub partial_fast_plug {
+my ($class, $conf, $opt) = @_;
+
+my $partial_fast_plug_option = $class->get_partial_fast_plug_option();
+my $format = $partial_fast_plug_option->{$opt}->{fmt};
+my $fast_pluggable = $partial_fast_plug_option->{$opt}->{properties};
+
+my $configured = {};
+if (exists($conf->{$opt})) {
+   $configured = PVE::JSONSchema::parse_property_string($format, $conf->{$opt});
+}
+my $pending = PVE::JSONSchema::parse_property_string($format, $conf->{pending}->{$opt});
+
+my $changes = 0;
+
+# merge configured and pending opts to iterate
+my @all_keys = keys %{{ %$pending, %$configured }};
+
+foreach my $subopt (@all_keys) {
+   my $type = $format->{$subopt}->{type};
+   if (PVE::GuestHelpers::typesafe_ne($configured->{$subopt}, 
$pending->{$subopt}, $type)) {
+   if ($fast_pluggable->{$subopt}) {
+   $configured->{$subopt} = $pending->{$subopt};
+   $changes = 1
+   }
+   }
+}
+
+# if there're no keys in $configured (after merge) there shouldn't be anything to change
+if (keys %$configured) {
+   $conf->{$opt} = PVE::JSONSchema::print_property_string($configured, $format);
+}
+
+return $changes;
+}
+
+
 sub load_snapshot_config {
 my ($class, $vmid, $snapname) = @_;
 
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
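The merge logic of `partial_fast_plug` above can be summarized with a small sketch (an illustrative Python port working on plain dicts; the Perl original parses and re-serializes property strings via `PVE::JSONSchema`):

```python
def partial_fast_plug(configured, pending, fast_pluggable):
    """Apply only the fast-pluggable sub-options from pending to configured.

    Mirrors the helper's approach: iterate over the union of both key sets,
    and copy a sub-option only if its value differs and it is whitelisted.
    Returns True if anything changed.
    """
    changes = False
    for subopt in set(configured) | set(pending):
        if configured.get(subopt) != pending.get(subopt):
            if subopt in fast_pluggable:
                configured[subopt] = pending.get(subopt)
                changes = True
    return changes

configured = {"enabled": 1, "fstrim_cloned_disks": 0}
pending = {"enabled": 0, "fstrim_cloned_disks": 1}
# Only fstrim_cloned_disks is fast-pluggable (as in $partial_fast_plug_option):
assert partial_fast_plug(configured, pending, {"fstrim_cloned_disks"})
assert configured == {"enabled": 1, "fstrim_cloned_disks": 1}
```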


[pve-devel] [PATCH v2 container 3/5] use helper functions from GuestHelpers

2020-02-19 Thread Oguz Bektas
remove safe_string_ne and safe_num_ne code which is now shared in
GuestHelpers. also change all the calls.

Signed-off-by: Oguz Bektas 
---

v1->v2:
* use helpers from @EXPORT_OK


 src/PVE/LXC.pm | 38 +-
 1 file changed, 9 insertions(+), 29 deletions(-)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 34ca2a3..bd990e4 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -29,7 +29,7 @@ use PVE::AccessControl;
 use PVE::ProcFSTools;
 use PVE::Syscall qw(:fsmount);
 use PVE::LXC::Config;
-use PVE::GuestHelpers;
+use PVE::GuestHelpers qw(safe_string_ne safe_num_ne safe_boolean_ne typesafe_ne);
 use PVE::LXC::Tools;
 
 use Time::HiRes qw (gettimeofday);
@@ -876,26 +876,6 @@ sub vm_stop_cleanup {
 warn $@ if $@; # avoid errors - just warn
 }
 
-my $safe_num_ne = sub {
-my ($a, $b) = @_;
-
-return 0 if !defined($a) && !defined($b);
-return 1 if !defined($a);
-return 1 if !defined($b);
-
-return $a != $b;
-};
-
-my $safe_string_ne = sub {
-my ($a, $b) = @_;
-
-return 0 if !defined($a) && !defined($b);
-return 1 if !defined($a);
-return 1 if !defined($b);
-
-return $a ne $b;
-};
-
 sub update_net {
 my ($vmid, $conf, $opt, $newnet, $netid, $rootdir) = @_;
 
@@ -910,8 +890,8 @@ sub update_net {
 if (my $oldnetcfg = $conf->{$opt}) {
my $oldnet = PVE::LXC::Config->parse_lxc_network($oldnetcfg);
 
-   if (&$safe_string_ne($oldnet->{hwaddr}, $newnet->{hwaddr}) ||
-   &$safe_string_ne($oldnet->{name}, $newnet->{name})) {
+   if (safe_string_ne($oldnet->{hwaddr}, $newnet->{hwaddr}) ||
+   safe_string_ne($oldnet->{name}, $newnet->{name})) {
 
PVE::Network::veth_delete($veth);
delete $conf->{$opt};
@@ -920,9 +900,9 @@ sub update_net {
hotplug_net($vmid, $conf, $opt, $newnet, $netid);
 
} else {
-   if (&$safe_string_ne($oldnet->{bridge}, $newnet->{bridge}) ||
-   &$safe_num_ne($oldnet->{tag}, $newnet->{tag}) ||
-   &$safe_num_ne($oldnet->{firewall}, $newnet->{firewall})) {
+   if (safe_string_ne($oldnet->{bridge}, $newnet->{bridge}) ||
+   safe_num_ne($oldnet->{tag}, $newnet->{tag}) ||
+   safe_num_ne($oldnet->{firewall}, $newnet->{firewall})) {
 
if ($oldnet->{bridge}) {
PVE::Network::tap_unplug($veth);
@@ -938,7 +918,7 @@ sub update_net {
foreach (qw(bridge tag firewall rate)) {
$oldnet->{$_} = $newnet->{$_} if $newnet->{$_};
}
-   } elsif (&$safe_string_ne($oldnet->{rate}, $newnet->{rate})) {
+   } elsif (safe_string_ne($oldnet->{rate}, $newnet->{rate})) {
# Rate can be applied on its own but any change above needs to
# include the rate in tap_plug since OVS resets everything.
PVE::Network::tap_rate_limit($veth, $newnet->{rate});
@@ -1008,8 +988,8 @@ sub update_ipconfig {
my $oldip = $optdata->{$ip};
my $oldgw = $optdata->{$gw};
 
-   my $change_ip = &$safe_string_ne($oldip, $newip);
-   my $change_gw = &$safe_string_ne($oldgw, $newgw);
+   my $change_ip = safe_string_ne($oldip, $newip);
+   my $change_gw = safe_string_ne($oldgw, $newgw);
 
return if !$change_ip && !$change_gw;
 
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 guest-common 1/5] guesthelpers: move/add safe comparison functions from lxc and qemu

2020-02-19 Thread Oguz Bektas
move the safe_string_ne and safe_num_ne functions to guesthelpers to
remove duplicate code.

add the new safe_boolean_ne and typesafe_ne helper functions

also add them in @EXPORT_OK

Signed-off-by: Oguz Bektas 
---

v1->v2:
* add new helpers to @EXPORT_OK for easy use
* add a die to typesafe_ne for safe abort

 PVE/GuestHelpers.pm | 53 +
 1 file changed, 53 insertions(+)

diff --git a/PVE/GuestHelpers.pm b/PVE/GuestHelpers.pm
index 07a62ce..916f19f 100644
--- a/PVE/GuestHelpers.pm
+++ b/PVE/GuestHelpers.pm
@@ -9,11 +9,64 @@ use PVE::Storage;
 use POSIX qw(strftime);
 use Scalar::Util qw(weaken);
 
+our @EXPORT_OK = qw(safe_string_ne safe_boolean_ne safe_num_ne typesafe_ne);
+
 # We use a separate lock to block migration while a replication job
 # is running.
 
 our $lockdir = '/var/lock/pve-manager';
 
+# safe variable comparison functions
+
+sub safe_num_ne {
+my ($a, $b) = @_;
+
+return 0 if !defined($a) && !defined($b);
+return 1 if !defined($a);
+return 1 if !defined($b);
+
+return $a != $b;
+}
+
+sub safe_string_ne  {
+my ($a, $b) = @_;
+
+return 0 if !defined($a) && !defined($b);
+return 1 if !defined($a);
+return 1 if !defined($b);
+
+return $a ne $b;
+}
+
+sub safe_boolean_ne {
+my ($a, $b) = @_;
+
+# we don't check if value is defined, since undefined
+# is false (so it's a valid boolean)
+
+# negate both values to normalize and compare
+return !$a != !$b;
+}
+
+sub typesafe_ne {
+my ($a, $b, $type) = @_;
+
+return 0 if !defined($a) && !defined($b);
+return 1 if !defined($a);
+return 1 if !defined($b);
+
+if ($type eq 'string') {
+   return safe_string_ne($a, $b);
+} elsif ($type eq 'number' || $type eq 'integer') {
+   return safe_num_ne($a, $b);
+} elsif ($type eq 'boolean') {
+   return safe_boolean_ne($a, $b);
+}
+
+die "internal error: can't compare $a and $b with type $type";
+}
+
+
 sub guest_migration_lock {
 my ($vmid, $timeout, $func, @param) = @_;
 
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
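For readers skimming the thread, the null-safety these helpers provide can be condensed into a short sketch (Python used purely to illustrate the semantics; the actual helpers are the Perl subs above):

```python
def safe_ne(a, b):
    """Shared shape of safe_string_ne/safe_num_ne: two undefined values are
    equal, one undefined and one defined differ, else compare normally."""
    if a is None and b is None:
        return False
    if a is None or b is None:
        return True
    return a != b

def safe_boolean_ne(a, b):
    # Undefined counts as false, so both sides are normalized via truthiness.
    return bool(a) != bool(b)

assert not safe_ne(None, None)       # both unset -> not different
assert safe_ne(None, "x")            # unset vs. set -> different
assert not safe_ne(3, 3)
assert not safe_boolean_ne(None, 0)  # undef and 0 are both "false"
assert safe_boolean_ne(0, 1)
```

This is also why `typesafe_ne` can dispatch on the schema type: strings, numbers and booleans each need their own notion of inequality.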


[pve-devel] [PATCH v2 qemu-server 2/5] QemuServer: use helper functions from GuestHelpers

2020-02-19 Thread Oguz Bektas
removes safe_string_ne and safe_num_ne code which is now shared in
GuestHelpers. also change all the calls to use the shared definitions.

Signed-off-by: Oguz Bektas 
---

v1->v2:
* use helpers from @EXPORT_OK



 PVE/QemuServer.pm | 86 ++-
 1 file changed, 33 insertions(+), 53 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 23176dd..44d0dee 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -29,7 +29,7 @@ use UUID;
 use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_write_file cfs_lock_file);
 use PVE::DataCenterConfig;
 use PVE::Exception qw(raise raise_param_exc);
-use PVE::GuestHelpers;
+use PVE::GuestHelpers qw(safe_string_ne safe_num_ne safe_boolean_ne typesafe_ne);
 use PVE::INotify;
 use PVE::JSONSchema qw(get_standard_option);
 use PVE::ProcFSTools;
@@ -5010,26 +5010,6 @@ sub vmconfig_apply_pending {
 PVE::QemuConfig->write_config($vmid, $conf);
 }
 
-my $safe_num_ne = sub {
-my ($a, $b) = @_;
-
-return 0 if !defined($a) && !defined($b);
-return 1 if !defined($a);
-return 1 if !defined($b);
-
-return $a != $b;
-};
-
-my $safe_string_ne = sub {
-my ($a, $b) = @_;
-
-return 0 if !defined($a) && !defined($b);
-return 1 if !defined($a);
-return 1 if !defined($b);
-
-return $a ne $b;
-};
-
 sub vmconfig_update_net {
 my ($storecfg, $conf, $hotplug, $vmid, $opt, $value, $arch, $machine_type) = @_;
 
@@ -5038,9 +5018,9 @@ sub vmconfig_update_net {
 if ($conf->{$opt}) {
my $oldnet = parse_net($conf->{$opt});
 
-   if (&$safe_string_ne($oldnet->{model}, $newnet->{model}) ||
-   &$safe_string_ne($oldnet->{macaddr}, $newnet->{macaddr}) ||
-   &$safe_num_ne($oldnet->{queues}, $newnet->{queues}) ||
+   if (safe_string_ne($oldnet->{model}, $newnet->{model}) ||
+   safe_string_ne($oldnet->{macaddr}, $newnet->{macaddr}) ||
+   safe_num_ne($oldnet->{queues}, $newnet->{queues}) ||
 !($newnet->{bridge} && $oldnet->{bridge})) { # bridge/nat mode change
 
 # for non online change, we try to hot-unplug
@@ -5051,19 +5031,19 @@ sub vmconfig_update_net {
die "internal error" if $opt !~ m/net(\d+)/;
my $iface = "tap${vmid}i$1";
 
-   if (&$safe_string_ne($oldnet->{bridge}, $newnet->{bridge}) ||
-   &$safe_num_ne($oldnet->{tag}, $newnet->{tag}) ||
-   &$safe_string_ne($oldnet->{trunks}, $newnet->{trunks}) ||
-   &$safe_num_ne($oldnet->{firewall}, $newnet->{firewall})) {
+   if (safe_string_ne($oldnet->{bridge}, $newnet->{bridge}) ||
+   safe_num_ne($oldnet->{tag}, $newnet->{tag}) ||
+   safe_string_ne($oldnet->{trunks}, $newnet->{trunks}) ||
+   safe_num_ne($oldnet->{firewall}, $newnet->{firewall})) {
PVE::Network::tap_unplug($iface);
 PVE::Network::tap_plug($iface, $newnet->{bridge}, $newnet->{tag}, $newnet->{firewall}, $newnet->{trunks}, $newnet->{rate});
-   } elsif (&$safe_num_ne($oldnet->{rate}, $newnet->{rate})) {
+   } elsif (safe_num_ne($oldnet->{rate}, $newnet->{rate})) {
# Rate can be applied on its own but any change above needs to
# include the rate in tap_plug since OVS resets everything.
PVE::Network::tap_rate_limit($iface, $newnet->{rate});
}
 
-   if (&$safe_string_ne($oldnet->{link_down}, $newnet->{link_down})) {
+   if (safe_string_ne($oldnet->{link_down}, $newnet->{link_down})) {
qemu_set_link_status($vmid, $opt, !$newnet->{link_down});
}
 
@@ -5105,33 +5085,33 @@ sub vmconfig_update_disk {
# update existing disk
 
# skip non hotpluggable value
-   if (&$safe_string_ne($drive->{discard}, $old_drive->{discard}) ||
-   &$safe_string_ne($drive->{iothread}, $old_drive->{iothread}) ||
-   &$safe_string_ne($drive->{queues}, $old_drive->{queues}) ||
-   &$safe_string_ne($drive->{cache}, $old_drive->{cache}) ||
-   &$safe_string_ne($drive->{ssd}, $old_drive->{ssd})) {
+   if (safe_string_ne($drive->{discard}, $old_drive->{discard}) ||
+   safe_string_ne($drive->{iothread}, $old_drive->{iothread}) ||
+   safe_string_ne($drive->{queues}, $old_drive->{queues}) ||
+   safe_string_ne($drive->{cache}, $old_drive->{cache}) ||
+   safe_string_ne($drive->{ssd}, $old_drive->{ssd})) {
die &q

[pve-devel] [PATCH v2 container] fix #2598: activate volumes before mounting in stop mode backup

2020-02-18 Thread Oguz Bektas
'stop' mode deactivates the volumes (relevant for LVM backend), and
they're not reactivated before trying to mount them for backup.

reactivating the volumes before the mount in 'stop' mode backup solves
the issue.

Signed-off-by: Oguz Bektas 
---
 src/PVE/VZDump/LXC.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/PVE/VZDump/LXC.pm b/src/PVE/VZDump/LXC.pm
index 0260184..ed6daa2 100644
--- a/src/PVE/VZDump/LXC.pm
+++ b/src/PVE/VZDump/LXC.pm
@@ -310,6 +310,7 @@ sub archive {
 if ($task->{mode} eq 'stop') {
my $rootdir = $default_mount_point;
my $storage_cfg = $self->{storecfg};
+   PVE::Storage::activate_volumes($storage_cfg, $task->{volids});
foreach my $disk (@$disks) {
$disk->{dir} = "${rootdir}$disk->{mp}";
 PVE::LXC::mountpoint_mount($disk, $rootdir, $storage_cfg, undef, $task->{rootuid}, $task->{rootgid});
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH manager] fix #2598: prepare/activate volumes after stopping container for backup

2020-02-18 Thread Oguz Bektas
this might not be working correctly, please wait before applying

On Tue, Feb 18, 2020 at 01:35:21PM +0100, Oguz Bektas wrote:
> when doing a 'stop' backup with an LVM backend, volumes are deactivated
> by the stop operation. they're not activated before the backup, which
> causes it to fail because of mount/unmount problems.
> 
> call prepare() after stop_vm() instead to activate volumes beforehand.
> 
> [0]: https://forum.proxmox.com/threads/problem-backups-proxmox-6-1.65317/
> 
> Signed-off-by: Oguz Bektas 
> ---
>  PVE/VZDump.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
> index 87d4b699..514f432b 100644
> --- a/PVE/VZDump.pm
> +++ b/PVE/VZDump.pm
> @@ -757,7 +757,6 @@ sub exec_backup_task {
>  
>   if ($mode eq 'stop') {
>  
> - $plugin->prepare ($task, $vmid, $mode);
>  
>   $self->run_hook_script ('backup-start', $task, $logfd);
>  
> @@ -766,6 +765,7 @@ sub exec_backup_task {
>   $task->{vmstoptime} = time();
>   $self->run_hook_script ('pre-stop', $task, $logfd);
>   $plugin->stop_vm ($task, $vmid);
> + $plugin->prepare ($task, $vmid, $mode);
>   $cleanup->{restart} = 1;
>   }
>  
> -- 
> 2.20.1
> 
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager] fix #2598: prepare/activate volumes after stopping container for backup

2020-02-18 Thread Oguz Bektas
when doing a 'stop' backup with an LVM backend, volumes are deactivated
by the stop operation. they're not activated before the backup, which
causes it to fail because of mount/unmount problems.

call prepare() after stop_vm() instead to activate volumes beforehand.

[0]: https://forum.proxmox.com/threads/problem-backups-proxmox-6-1.65317/

Signed-off-by: Oguz Bektas 
---
 PVE/VZDump.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index 87d4b699..514f432b 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -757,7 +757,6 @@ sub exec_backup_task {
 
if ($mode eq 'stop') {
 
-   $plugin->prepare ($task, $vmid, $mode);
 
$self->run_hook_script ('backup-start', $task, $logfd);
 
@@ -766,6 +765,7 @@ sub exec_backup_task {
$task->{vmstoptime} = time();
$self->run_hook_script ('pre-stop', $task, $logfd);
$plugin->stop_vm ($task, $vmid);
+   $plugin->prepare ($task, $vmid, $mode);
$cleanup->{restart} = 1;
}
 
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
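The essence of this fix (and of the v2 container patch above that superseded it) is an ordering constraint: with LVM, stopping the guest deactivates its volumes, so activation must happen after `stop_vm` and before mounting. A minimal sketch of that sequencing (stub objects and method names are hypothetical, for illustration only):

```python
calls = []

class StubPlugin:
    # Records the call order instead of doing real work.
    def stop_vm(self, task, vmid):
        calls.append("stop_vm")       # LVM volumes get deactivated here
    def prepare(self, task, vmid, mode):
        calls.append("prepare")       # ...and reactivated here
    def mount_disks(self, task):
        calls.append("mount")         # mounting now succeeds

def backup_stop_mode(plugin, task, vmid):
    plugin.stop_vm(task, vmid)
    plugin.prepare(task, vmid, "stop")  # moved after stop_vm by this patch
    plugin.mount_disks(task)

backup_stop_mode(StubPlugin(), {}, 100)
assert calls == ["stop_vm", "prepare", "mount"]
```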


Re: [pve-devel] [PATCH guest-common 1/5] guesthelpers: move/add safe comparison functions from lxc and qemu

2020-02-13 Thread Oguz Bektas
hi,

On Thu, Feb 13, 2020 at 11:16:31AM +0100, Stefan Reiter wrote:
> On 1/28/20 4:03 PM, Oguz Bektas wrote:
> > move the safe_string_ne and safe_num_ne functions to guesthelpers to
> > remove duplicate code.
> > 
> > add the new safe_boolean_ne and typesafe_ne helper functions
> > 
> > Signed-off-by: Oguz Bektas 
> > ---
> > 
> > these will be used in the partial fast plug function in this series
> > 
> >   PVE/GuestHelpers.pm | 49 +
> >   1 file changed, 49 insertions(+)
> > 
> > diff --git a/PVE/GuestHelpers.pm b/PVE/GuestHelpers.pm
> > index 07a62ce..b7133d0 100644
> > --- a/PVE/GuestHelpers.pm
> > +++ b/PVE/GuestHelpers.pm
> > @@ -14,6 +14,55 @@ use Scalar::Util qw(weaken);
> >   our $lockdir = '/var/lock/pve-manager';
> > +# safe variable comparison functions
> > +
> > +sub safe_num_ne {
> > +my ($a, $b) = @_;
> > +
> > +return 0 if !defined($a) && !defined($b);
> > +return 1 if !defined($a);
> > +return 1 if !defined($b);
> > +
> > +return $a != $b;
> > +}
> > +
> > +sub safe_string_ne  {
> > +my ($a, $b) = @_;
> > +
> > +return 0 if !defined($a) && !defined($b);
> > +return 1 if !defined($a);
> > +return 1 if !defined($b);
> > +
> > +return $a ne $b;
> > +}
> > +
> > +sub safe_boolean_ne {
> > +my ($a, $b) = @_;
> > +
> > +# we don't check if value is defined, since undefined
> > +# is false (so it's a valid boolean)
> > +
> > +# negate both values to normalize and compare
> > +return !$a != !$b;
> > +}
> > +
> > +sub typesafe_ne {
> > +my ($a, $b, $type) = @_;
> > +
> > +return 0 if !defined($a) && !defined($b);
> > +return 1 if !defined($a);
> > +return 1 if !defined($b);
> > +
> > +if ($type eq 'string') {
> > +   return safe_string_ne($a, $b);
> > +} elsif ($type eq 'number') {
> 
> This should include 'integer' too, since it's pretty much just a limited
> 'number'.

right, i'll add that
> 
> > +   return safe_num_ne($a, $b);
> > +} elsif ($type eq 'boolean') {
> > +   return safe_boolean_ne($a, $b);
> > +}
> 
> I'd add a 'die "internal error bla bla ..."' part just to be safe. Otherwise
> you just end the function without a return if $type is not matched, and I
> don't trust Perl to do the right thing™ in that case at all.

yeah, makes sense.

> 
> > +}
> > +
> > +
> >   sub guest_migration_lock {
> >   my ($vmid, $timeout, $func, @param) = @_;
> > 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v3 docs] rewrite and extend pct documentation

2020-02-13 Thread Oguz Bektas
* rephrase some parts.
* update old information
* add info about pending changes and other "new" features

Co-Authored-by: Aaron Lauterer 
Co-Authored-by: Thomas Lamprecht 
Signed-off-by: Oguz Bektas 
---

v2->v3:
* move info about disabling apparmor further down
* add suggested changes from thomas (with slightly different wording in
some cases)

 pct.adoc | 439 ---
 1 file changed, 253 insertions(+), 186 deletions(-)

diff --git a/pct.adoc b/pct.adoc
index 2f1d329..8a41964 100644
--- a/pct.adoc
+++ b/pct.adoc
@@ -28,41 +28,35 @@ ifdef::wiki[]
 :title: Linux Container
 endif::wiki[]
 
-Containers are a lightweight alternative to fully virtualized
-VMs. Instead of emulating a complete Operating System (OS), containers
-simply use the OS of the host they run on. This implies that all
-containers use the same kernel, and that they can access resources
-from the host directly.
+Containers are a lightweight alternative to fully virtualized machines (VMs).
+They use the kernel of the host system that they run on, instead of emulating a
+full operating system (OS). This means that containers can access resources on
+the host system directly.
 
-This is great because containers do not waste CPU power nor memory due
-to kernel emulation. Container run-time costs are close to zero and
-usually negligible. But there are also some drawbacks you need to
-consider:
+The runtime cost of containers is low, usually negligible. However, there
+are some drawbacks that need to be considered:
 
-* You can only run Linux based OS inside containers, i.e. it is not
-  possible to run FreeBSD or MS Windows inside.
+* Only Linux distributions can be run in containers. (It is not
+  possible to run FreeBSD or MS Windows inside a container.)
 
-* For security reasons, access to host resources needs to be
-  restricted. This is done with AppArmor, SecComp filters and other
-  kernel features. Be prepared that some syscalls are not allowed
-  inside containers.
+* For security reasons, access to host resources needs to be restricted. Containers
+  run in their own separate namespaces. Additionally some syscalls are not
+  allowed within containers.
 
 {pve} uses https://linuxcontainers.org/[LXC] as underlying container
-technology. We consider LXC as low-level library, which provides
-countless options. It would be too difficult to use those tools
-directly. Instead, we provide a small wrapper called `pct`, the
-"Proxmox Container Toolkit".
+technology. The ``Proxmox Container Toolkit'' (`pct`) simplifies the usage of LXC
+containers.
 
-The toolkit is tightly coupled with {pve}. That means that it is aware
-of the cluster setup, and it can use the same network and storage
-resources as fully virtualized VMs. You can even use the {pve}
-firewall, or manage containers using the HA framework.
+Containers are tightly integrated with {pve}. This means that they are aware of
+the cluster setup, and they can use the same network and storage resources as
+virtual machines. You can also use the {pve} firewall, or manage containers
+using the HA framework.
 
 Our primary goal is to offer an environment as one would get from a
 VM, but without the additional overhead. We call this "System
 Containers".
 
-NOTE: If you want to run micro-containers (with docker, rkt, ...), it
+NOTE: If you want to run micro-containers (with docker, rkt, etc.) it
 is best to run them inside a VM.
 
 
@@ -79,38 +73,43 @@ Technology Overview
 
 * lxcfs to provide containerized /proc file system
 
-* AppArmor/Seccomp to improve security
+* CGroups (control groups) for resource allocation
 
-* CRIU: for live migration (planned)
+* AppArmor/Seccomp to improve security
 
-* Runs on modern Linux kernels
+* Modern Linux kernels
 
 * Image based deployment (templates)
 
-* Use {pve} storage library
-
-* Container setup from host (network, DNS, storage, ...)
+* Uses {pve} storage library
 
+* Container setup from host (network, DNS, storage, etc.)
 
 Security Considerations
 ---
 
-Containers use the same kernel as the host, so there is a big attack
-surface for malicious users. You should consider this fact if you
-provide containers to totally untrusted people. In general, fully
-virtualized VMs provide better isolation.
+Containers use the kernel of the host system. This creates a big attack
+surface for malicious users. This should be considered if containers
+are provided to untrustworthy people. In general, full
+virtual machines provide better isolation.
+
+However, LXC uses many security features like AppArmor, CGroups and kernel
+namespaces to reduce the attack surface.
+
+AppArmor profiles are used to restrict access to possibly dangerous actions.
+Some system calls, e.g. `mount`, are prohibited from execution.
 
-The good news is that LXC uses many kernel security features like
-AppArmor, CGroups and PID and user namespaces, which makes containers
-usage

[pve-devel] [PATCH ha-manager] fix service name for pve-ha-crm

2020-02-11 Thread Oguz Bektas
"PVE Cluster Resource Manager Daemon" should be "PVE Cluster HA Resource
Manager Daemon"

[0]: https://forum.proxmox.com/threads/typo-omission.65107/

Signed-off-by: Oguz Bektas 
---
 debian/pve-ha-crm.service | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/debian/pve-ha-crm.service b/debian/pve-ha-crm.service
index b54992f..6b57e9f 100644
--- a/debian/pve-ha-crm.service
+++ b/debian/pve-ha-crm.service
@@ -1,5 +1,5 @@
 [Unit]
-Description=PVE Cluster Resource Manager Daemon
+Description=PVE Cluster HA Resource Manager Daemon
 ConditionPathExists=/usr/sbin/pve-ha-crm
 Wants=pve-cluster.service
 Wants=watchdog-mux.service
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH pve-qemu] security patches for libslirp CVE-2020-8608

2020-02-10 Thread Oguz Bektas
hi,

On Fri, Feb 07, 2020 at 09:03:50AM +0100, Thomas Lamprecht wrote:
> On 2/6/20 3:25 PM, Oguz Bektas wrote:
> > original commits and email can be found here[0]
> > 
> > A out-of-bounds heap buffer access issue was found in the SLiRP
> > networking implementation of the QEMU emulator. It occurs in tcp_emu()
> > routine while emulating IRC and other protocols due to unsafe usage of
> > snprintf(3) function.
> > 
> > A user/process could use this flaw to crash the Qemu process on the host
> > resulting in DoS or potentially execute arbitrary code with privileges
> > of the QEMU process on the host.
> > 
> 
> for the record before anybody starts another offlist discussion :)
> looks OK, but AFAICT the bridged mode we use is not affected by this,
> the NAT (settable through CLI only) may be, but not to sure.
> 
> It really would help to reason why we need this, what is affected and how
> one can check if it actual helped against an issue - the latter part is often
> non-trivial, be it because an issue is theoretical or no reproducer is
> available and not easy to think of oneself.
> 
> That said, the fmt sanitizing looks OK and won't hurt, so taking this in
> is OK nonetheless, IMO
> 

yes, it seems the slirp networking is only used in qemu user
networking[0], so our regular bridged mode shouldn't be affected.

also we pass the `-nodefaults` option, which avoids the default (slirp)
networking, so i think we should be relatively safe.
nevertheless it doesn't hurt to apply it.

for now as far as i'm aware there's no public PoC for this.

also i noticed they disabled the vulnerable `tcp_emu` in libslirp 4.1.0 [1]


[0]: https://wiki.qemu.org/Documentation/Networking#User_Networking_.28SLIRP.29
[1]: https://gitlab.freedesktop.org/slirp/libslirp/-/tags/v4.1.0
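
as a quick sanity check for the above, one can scan a guest's QEMU command
line for the user-networking flags. a small sketch (the helper name and the
sample command lines are made up for illustration; `-netdev user` / `-net user`
are the standard QEMU flags for SLIRP user-mode networking):

```python
import re

# Hypothetical helper (not part of PVE): report whether a QEMU command line
# enables SLIRP user-mode networking, the code path affected by CVE-2020-8608.
def uses_user_networking(cmdline: str) -> bool:
    return re.search(r"-(netdev|net)\s+user\b", cmdline) is not None

# Bridged PVE guests attach tap devices instead, so they do not match:
bridged = "kvm -id 100 -nodefaults -netdev type=tap,id=net0,ifname=tap100i0"
nat = "qemu-system-x86_64 -netdev user,id=n0 -device e1000,netdev=n0"
print(uses_user_networking(bridged), uses_user_networking(nat))  # False True
```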



> > [0]: https://seclists.org/oss-sec/2020/q1/64
> > 
> > Signed-off-by: Oguz Bektas 
> > ---
> >  .../0003-util-add-slirp_fmt-helpers.patch | 126 
> >  ...4-tcp_emu-fix-unsafe-snprintf-usages.patch | 135 ++
> >  debian/patches/series |   2 +
> >  3 files changed, 263 insertions(+)
> >  create mode 100644 
> > debian/patches/extra/0003-util-add-slirp_fmt-helpers.patch
> >  create mode 100644 
> > debian/patches/extra/0004-tcp_emu-fix-unsafe-snprintf-usages.patch
> > 
> > diff --git a/debian/patches/extra/0003-util-add-slirp_fmt-helpers.patch 
> > b/debian/patches/extra/0003-util-add-slirp_fmt-helpers.patch
> > new file mode 100644
> > index 000..af944f8
> > --- /dev/null
> > +++ b/debian/patches/extra/0003-util-add-slirp_fmt-helpers.patch
> > @@ -0,0 +1,126 @@
> > +From  Mon Sep 17 00:00:00 2001
> > +From: =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= 
> > +Date: Mon, 27 Jan 2020 10:24:09 +0100
> > +Subject: [PATCH 1/2] util: add slirp_fmt() helpers
> > +MIME-Version: 1.0
> > +Content-Type: text/plain; charset=UTF-8
> > +Content-Transfer-Encoding: 8bit
> > +
> > +Various calls to snprintf() in libslirp assume that snprintf() returns
> > +"only" the number of bytes written (excluding terminating NUL).
> > +
> > +https://pubs.opengroup.org/onlinepubs/9699919799/functions/snprintf.html#tag_16_159_04
> > +
> > +"Upon successful completion, the snprintf() function shall return the
> > +number of bytes that would be written to s had n been sufficiently
> > +large excluding the terminating null byte."
> > +
> > +Introduce slirp_fmt() that handles several pathological cases the
> > +way libslirp usually expect:
> > +
> > +- treat error as fatal (instead of silently returning -1)
> > +
> > +- fmt0() will always \0 end
> > +
> > +- return the number of bytes actually written (instead of what would
> > +have been written, which would usually result in OOB later), including
> > +the ending \0 for fmt0()
> > +
> > +- warn if truncation happened (instead of ignoring)
> > +
> > +Other less common cases can still be handled with strcpy/snprintf() etc.
> > +
> > +Signed-off-by: Marc-André Lureau 
> > +Reviewed-by: Samuel Thibault 
> > +Message-Id: <20200127092414.169796-2-marcandre.lur...@redhat.com>
> > +Signed-off-by: Oguz Bektas 
> > +---
> > + slirp/src/util.c | 62 
> > ++
> > + slirp/src/util.h |  3 +++
> > + 2 files changed, 65 insertions(+)
> > +
> > +diff --git a/slirp/src/util.c b/slirp/src/util.c
> > +index e596087..e3b6257 100644
> > +--- a/slirp/src/util.c
> >  b/slirp/src/ut

[pve-devel] [PATCH v2 docs] network: add note for possible fix/workaround in NAT setup

2020-02-10 Thread Oguz Bektas
apparently users sometimes have problems reaching the outside internet with
some network setups. this is the workaround a user suggested that
we should add to the wiki.

Signed-off-by: Oguz Bektas 
---

v1->v2:
* add more rationale as suggested by stoiko
* fix indent on one line in the example config
* add links stoiko posted in mailing list for reference

 pve-network.adoc | 20 +++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/pve-network.adoc b/pve-network.adoc
index c61cd42..1913498 100644
--- a/pve-network.adoc
+++ b/pve-network.adoc
@@ -243,11 +243,29 @@ iface vmbr0 inet static
 bridge_stp off
 bridge_fd 0
 
-post-up echo 1 > /proc/sys/net/ipv4/ip_forward
+post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
 post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j 
MASQUERADE
 post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j 
MASQUERADE
 
 
+NOTE: In some masquerade setups with firewall enabled, conntrack zones might be
+needed for outgoing connections. Otherwise the firewall could block outgoing
+connections since they will prefer the `POSTROUTING` of the VM bridge (and not
+`MASQUERADE`).
+
+Adding these lines to `/etc/network/interfaces` can fix this problem:
+
+
+post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
+post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
+
+
+For more information about this, refer to the following links:
+https://commons.wikimedia.org/wiki/File:Netfilter-packet-flow.svg[Netfilter 
Packet Flow]
+https://lwn.net/Articles/370152/[Patch on netdev-list introducing conntrack 
zones]
+https://blog.lobraun.de/2019/05/19/prox/[Blog post with a good explanation by 
using TRACE in the raw table]
+
+
 
 Linux Bond
 ~~
-- 
2.20.1



Re: [pve-devel] applied: [PATCH container] apply_pending: call cleanup_pending between change/delete loops

2020-02-06 Thread Oguz Bektas
On Thu, Feb 06, 2020 at 05:15:18PM +0100, Thomas Lamprecht wrote:
> On 2/6/20 5:13 PM, Oguz Bektas wrote:
> >> Further, while this resolves the issue of a broken config in general the
> >> underlying "when are config property values equal" is not solved. I can
> >> still trigger a fake pending change. For example, assume the following
> >> config property present and applied already:
> >>
> >> mp0: tom-nasi:110/vm-110-disk-0.raw,mp=/foo,backup=1,size=102M
> >>
> >> Now a API or CLI client (in this case simulated through pct) sets it to the
> >> same semantic value, but switched order of property strings:
> >> # pct set 110 --mp0 
> >> mp=/foo,tom-nasi:110/vm-110-disk-0.raw,backup=1,size=102M
> >>
> >> I get a pending change, but there'd be none. Same issue if I do not switch
> >> order of properties in the string but one time a default_key is present
> >> verbose "enabled=1", and one time in it's short form "1".
> > indeed. but i think this isn't that tragic since it doesn't break any
> > functionality (i think?).
> 
> No, it isn't tragic at all, but it is confusing and IMO not nice API behavior.

agreed.
> 
> > 
> >> The correct solution would be parsing the properties and doing a 
> >> deterministic
> >> (deep) compare.
> >> A heuristic could be ensuring that at least our webinterface and backend 
> >> always
> >> print property strings the same way (i.e., sorted) - that would be possible
> >> cheaper but not solve that effect for all clients using the API.
> >>
> > but still if you want i can take a look at implementing that soon.
> > 
> 
> Yes, but I'd treat it rather lower priority.

alright, i'll take a look next week.



Re: [pve-devel] applied: [PATCH container] apply_pending: call cleanup_pending between change/delete loops

2020-02-06 Thread Oguz Bektas
On Thu, Feb 06, 2020 at 04:53:04PM +0100, Thomas Lamprecht wrote:
> On 2/6/20 3:48 PM, Dominik Csapak wrote:
> > lgtm, did not break anything obvious, and fixed my problem i reported 
> > yesterday[0]
> > 
> > Tested-By: Dominik Csapak 
> > 
> 
> Thanks, with your T-b applied. Oguz, the commit message lacked the total
> "why it happens" and "why does it get solved" parts. Those are important.
> Linking to outside info can be good but the core part of the explanation
> should always be inline, links tend to go dead sometimes and it's easier
> to read a few sentences inline than to click on one, or even multiple links,
> to find out what's and why actually something is being done they way it is.
ah, i thought the link was enough. i'll write in more detail next time.
thanks for adding the details in the commit message.
> 
> Further, while this resolves the issue of a broken config in general the
> underlying "when are config property values equal" is not solved. I can
> still trigger a fake pending change. For example, assume the following
> config property present and applied already:
> 
> mp0: tom-nasi:110/vm-110-disk-0.raw,mp=/foo,backup=1,size=102M
> 
> Now a API or CLI client (in this case simulated through pct) sets it to the
> same semantic value, but switched order of property strings:
> # pct set 110 --mp0 mp=/foo,tom-nasi:110/vm-110-disk-0.raw,backup=1,size=102M
> 
> I get a pending change, but there'd be none. Same issue if I do not switch
> order of properties in the string but one time a default_key is present
> verbose "enabled=1", and one time in it's short form "1".
indeed. but i think this isn't that tragic since it doesn't break any
functionality (i think?).

> 
> The correct solution would be parsing the properties and doing a deterministic
> (deep) compare.
> A heuristic could be ensuring that at least our webinterface and backend 
> always
> print property strings the same way (i.e., sorted) - that would be possible
> cheaper but not solve that effect for all clients using the API.
> 

but still if you want i can take a look at implementing that soon.
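
for reference, such a canonicalization could look roughly like this (python
sketch only, not the actual perl implementation; treating a bare value as the
option's default key, named "volume" here, is an assumption for illustration):

```python
# Sketch: canonicalize a Proxmox-style property string such as
# "mp=/foo,backup=1,size=102M" into a dict, so that two semantically equal
# strings compare equal regardless of key order or default-key shorthand.
def parse_propstring(s: str, default_key: str = "volume") -> dict:
    props = {}
    for part in s.split(","):
        if "=" in part:
            key, value = part.split("=", 1)
        else:
            key, value = default_key, part  # bare value -> default key
        props[key] = value
    return props

a = "tom-nasi:110/vm-110-disk-0.raw,mp=/foo,backup=1,size=102M"
b = "mp=/foo,tom-nasi:110/vm-110-disk-0.raw,backup=1,size=102M"
print(parse_propstring(a) == parse_propstring(b))  # True -> no fake pending change
```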

> > 0: https://pve.proxmox.com/pipermail/pve-devel/2020-February/041548.html
> > 
> > On 2/5/20 3:03 PM, Oguz Bektas wrote:
> >> instead of calling it while iterating, inbetween the loops is a better
> >> place in terms of similarity with qemu side (also this should fix the bug 
> >> that
> >> dominik found[0])
> >>
> >> [0]: https://pve.proxmox.com/pipermail/pve-devel/2020-February/041573.html
> >>
> >> Signed-off-by: Oguz Bektas 
> >> ---
> >>   src/PVE/LXC/Config.pm | 4 ++--
> >>   1 file changed, 2 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
> >> index 310aba6..e88ba0b 100644
> >> --- a/src/PVE/LXC/Config.pm
> >> +++ b/src/PVE/LXC/Config.pm
> >> @@ -1268,7 +1268,6 @@ sub vmconfig_apply_pending {
> >>   # FIXME: $force deletion is not implemented for CTs
> >>   foreach my $opt (sort keys %$pending_delete_hash) {
> >>   next if $selection && !$selection->{$opt};
> >> -    $class->cleanup_pending($conf);
> >>   eval {
> >>   if ($opt =~ m/^mp(\d+)$/) {
> >>   my $mp = $class->parse_ct_mountpoint($conf->{$opt});
> >> @@ -1289,6 +1288,8 @@ sub vmconfig_apply_pending {
> >>   }
> >>   }
> >>   +    $class->cleanup_pending($conf);
> >> +
> >>   foreach my $opt (sort keys %{$conf->{pending}}) { # add/change
> >>   next if $opt eq 'delete'; # just to be sure
> >>   next if $selection && !$selection->{$opt};
> >> @@ -1304,7 +1305,6 @@ sub vmconfig_apply_pending {
> >>   if (my $err = $@) {
> >>   $add_apply_error->($opt, $err);
> >>   } else {
> >> -    $class->cleanup_pending($conf);
> >>   $conf->{$opt} = delete $conf->{pending}->{$opt};
> >>   }
> >>   }
> >>
> 
> 
> 



[pve-devel] [PATCH pve-qemu] security patches for libslirp CVE-2020-8608

2020-02-06 Thread Oguz Bektas
original commits and email can be found here[0]

An out-of-bounds heap buffer access issue was found in the SLiRP
networking implementation of the QEMU emulator. It occurs in tcp_emu()
routine while emulating IRC and other protocols due to unsafe usage of
snprintf(3) function.

A user/process could use this flaw to crash the Qemu process on the host
resulting in DoS or potentially execute arbitrary code with privileges
of the QEMU process on the host.

[0]: https://seclists.org/oss-sec/2020/q1/64

Signed-off-by: Oguz Bektas 
---
 .../0003-util-add-slirp_fmt-helpers.patch | 126 
 ...4-tcp_emu-fix-unsafe-snprintf-usages.patch | 135 ++
 debian/patches/series |   2 +
 3 files changed, 263 insertions(+)
 create mode 100644 debian/patches/extra/0003-util-add-slirp_fmt-helpers.patch
 create mode 100644 
debian/patches/extra/0004-tcp_emu-fix-unsafe-snprintf-usages.patch

diff --git a/debian/patches/extra/0003-util-add-slirp_fmt-helpers.patch 
b/debian/patches/extra/0003-util-add-slirp_fmt-helpers.patch
new file mode 100644
index 000..af944f8
--- /dev/null
+++ b/debian/patches/extra/0003-util-add-slirp_fmt-helpers.patch
@@ -0,0 +1,126 @@
+From  Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= 
+Date: Mon, 27 Jan 2020 10:24:09 +0100
+Subject: [PATCH 1/2] util: add slirp_fmt() helpers
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Various calls to snprintf() in libslirp assume that snprintf() returns
+"only" the number of bytes written (excluding terminating NUL).
+
+https://pubs.opengroup.org/onlinepubs/9699919799/functions/snprintf.html#tag_16_159_04
+
+"Upon successful completion, the snprintf() function shall return the
+number of bytes that would be written to s had n been sufficiently
+large excluding the terminating null byte."
+
+Introduce slirp_fmt() that handles several pathological cases the
+way libslirp usually expect:
+
+- treat error as fatal (instead of silently returning -1)
+
+- fmt0() will always \0 end
+
+- return the number of bytes actually written (instead of what would
+have been written, which would usually result in OOB later), including
+the ending \0 for fmt0()
+
+- warn if truncation happened (instead of ignoring)
+
+Other less common cases can still be handled with strcpy/snprintf() etc.
+
+Signed-off-by: Marc-André Lureau 
+Reviewed-by: Samuel Thibault 
+Message-Id: <20200127092414.169796-2-marcandre.lur...@redhat.com>
+Signed-off-by: Oguz Bektas 
+---
+ slirp/src/util.c | 62 ++
+ slirp/src/util.h |  3 +++
+ 2 files changed, 65 insertions(+)
+
+diff --git a/slirp/src/util.c b/slirp/src/util.c
+index e596087..e3b6257 100644
+--- a/slirp/src/util.c
 b/slirp/src/util.c
+@@ -364,3 +364,65 @@ void slirp_pstrcpy(char *buf, int buf_size, const char 
*str)
+ }
+ *q = '\0';
+ }
++
++static int slirp_vsnprintf(char *str, size_t size,
++   const char *format, va_list args)
++{
++int rv = vsnprintf(str, size, format, args);
++
++if (rv < 0) {
++g_error("vsnprintf() failed: %s", g_strerror(errno));
++}
++
++return rv;
++}
++
++/*
++ * A snprintf()-like function that:
++ * - returns the number of bytes written (excluding optional \0-ending)
++ * - dies on error
++ * - warn on truncation
++ */
++int slirp_fmt(char *str, size_t size, const char *format, ...)
++{
++va_list args;
++int rv;
++
++va_start(args, format);
++rv = slirp_vsnprintf(str, size, format, args);
++va_end(args);
++
++if (rv > size) {
++g_critical("vsnprintf() truncation");
++}
++
++return MIN(rv, size);
++}
++
++/*
++ * A snprintf()-like function that:
++ * - always \0-end (unless size == 0)
++ * - returns the number of bytes actually written, including \0 ending
++ * - dies on error
++ * - warn on truncation
++ */
++int slirp_fmt0(char *str, size_t size, const char *format, ...)
++{
++va_list args;
++int rv;
++
++va_start(args, format);
++rv = slirp_vsnprintf(str, size, format, args);
++va_end(args);
++
++if (rv >= size) {
++g_critical("vsnprintf() truncation");
++if (size > 0)
++str[size - 1] = '\0';
++rv = size;
++} else {
++rv += 1; /* include \0 */
++}
++
++return rv;
++}
+diff --git a/slirp/src/util.h b/slirp/src/util.h
+index 3c6223c..0558dfc 100644
+--- a/slirp/src/util.h
 b/slirp/src/util.h
+@@ -177,4 +177,7 @@ static inline int slirp_socket_set_fast_reuse(int fd)
+ 
+ void slirp_pstrcpy(char *buf, int buf_size, const char *str);
+ 
++int slirp_fmt(char *str, size_t size, const char *format, ...);
++int slirp_fmt0(char *str, size_t size, const char *format, ...);
++
+ #endif
+-- 
+2.20.1
+
diff --git a/debian/patches/extra/0004-tcp_emu-fix-unsafe-snprintf-u

Re: [pve-devel] [PATCH v2 docs] rewrite and extend pct documentation

2020-02-06 Thread Oguz Bektas
hi,

any update here?

On Tue, Jan 14, 2020 at 05:47:01PM +0100, Oguz Bektas wrote:
> * rephrase some parts.
> * update old information
> * add info about pending changes and other "new" features
> 
> Co-Authored-by: Aaron Lauterer 
> Signed-off-by: Oguz Bektas 
> ---
> 
> v1->v2:
> changed some of the writing in terms of phrasing and style, with
> feedback from aaron. thanks!
> 
>  pct.adoc | 442 ---
>  1 file changed, 259 insertions(+), 183 deletions(-)
> 
> diff --git a/pct.adoc b/pct.adoc
> index 2f1d329..f8804e6 100644
> --- a/pct.adoc
> +++ b/pct.adoc
> @@ -28,32 +28,27 @@ ifdef::wiki[]
>  :title: Linux Container
>  endif::wiki[]
>  
> -Containers are a lightweight alternative to fully virtualized
> -VMs. Instead of emulating a complete Operating System (OS), containers
> -simply use the OS of the host they run on. This implies that all
> -containers use the same kernel, and that they can access resources
> -from the host directly.
> +Containers are a lightweight alternative to fully virtualized VMs.  They use
> +the kernel of the host system that they run on, instead of emulating a full
> +operating system (OS). This means that containers can access resources on the
> +host system directly.
>  
> -This is great because containers do not waste CPU power nor memory due
> -to kernel emulation. Container run-time costs are close to zero and
> -usually negligible. But there are also some drawbacks you need to
> -consider:
> +The runtime costs for containers is low, usually negligible, because of the 
> low
> +overhead in terms of CPU and memory resources. However, there are some 
> drawbacks
> +that need be considered:
>  
> -* You can only run Linux based OS inside containers, i.e. it is not
> -  possible to run FreeBSD or MS Windows inside.
> +* Only Linux distributions can be run in containers, i.e. It is not
> +  possible to run FreeBSD or MS Windows inside a container.
>  
> -* For security reasons, access to host resources needs to be
> -  restricted. This is done with AppArmor, SecComp filters and other
> -  kernel features. Be prepared that some syscalls are not allowed
> -  inside containers.
> +* For security reasons, access to host resources needs to be restricted. Some
> +  syscalls are not allowed within containers. This is done with AppArmor, 
> SecComp
> +  filters, and other kernel features.
>  
>  {pve} uses https://linuxcontainers.org/[LXC] as underlying container
> -technology. We consider LXC as low-level library, which provides
> -countless options. It would be too difficult to use those tools
> -directly. Instead, we provide a small wrapper called `pct`, the
> -"Proxmox Container Toolkit".
> +technology. The "Proxmox Container Toolkit" (`pct`) simplifies the usage of 
> LXC
> +containers.
>  
> -The toolkit is tightly coupled with {pve}. That means that it is aware
> +The `pct` is tightly coupled with {pve}. This means that it is aware
>  of the cluster setup, and it can use the same network and storage
>  resources as fully virtualized VMs. You can even use the {pve}
>  firewall, or manage containers using the HA framework.
> @@ -62,7 +57,7 @@ Our primary goal is to offer an environment as one would 
> get from a
>  VM, but without the additional overhead. We call this "System
>  Containers".
>  
> -NOTE: If you want to run micro-containers (with docker, rkt, ...), it
> +NOTE: If you want to run micro-containers (with docker, rkt, etc.) it
>  is best to run them inside a VM.
>  
>  
> @@ -79,38 +74,66 @@ Technology Overview
>  
>  * lxcfs to provide containerized /proc file system
>  
> -* AppArmor/Seccomp to improve security
> +* CGroups (control groups) for resource allocation
>  
> -* CRIU: for live migration (planned)
> +* AppArmor/Seccomp to improve security
>  
> -* Runs on modern Linux kernels
> +* Modern Linux kernels
>  
>  * Image based deployment (templates)
>  
> -* Use {pve} storage library
> +* Uses {pve} storage library
>  
> -* Container setup from host (network, DNS, storage, ...)
> +* Container setup from host (network, DNS, storage, etc.)
>  
>  
>  Security Considerations
>  ---
>  
> -Containers use the same kernel as the host, so there is a big attack
> -surface for malicious users. You should consider this fact if you
> -provide containers to totally untrusted people. In general, fully
> -virtualized VMs provide better isolation.
> +Containers use the kernel of the host system. This creates a big attack
> +surface for malicious users. This should be considered if containers
> +are provided to untrustworthy peop

[pve-devel] [PATCH docs] network: add note for possible fix/workaround in NAT setup

2020-02-05 Thread Oguz Bektas
apparently users sometimes have problems reaching the outside internet with
some network setups. this is the workaround a user suggested that
we should add to the wiki.

Signed-off-by: Oguz Bektas 
---
 pve-network.adoc | 9 +
 1 file changed, 9 insertions(+)

diff --git a/pve-network.adoc b/pve-network.adoc
index c61cd42..471edb4 100644
--- a/pve-network.adoc
+++ b/pve-network.adoc
@@ -248,6 +248,15 @@ iface vmbr0 inet static
 post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j 
MASQUERADE
 
 
+NOTE: If you have the firewall enabled for your CT/VM and you're having
+connectivity problems with outgoing connections, you can add the following
+lines to the interfaces config:
+
+
+post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
+post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
+
+
 
 Linux Bond
 ~~
-- 
2.20.1



[pve-devel] [PATCH container] apply_pending: call cleanup_pending between change/delete loops

2020-02-05 Thread Oguz Bektas
instead of calling it while iterating, in between the loops is a better
place in terms of similarity with the qemu side (this should also fix the bug
that dominik found[0])

[0]: https://pve.proxmox.com/pipermail/pve-devel/2020-February/041573.html

Signed-off-by: Oguz Bektas 
---
 src/PVE/LXC/Config.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 310aba6..e88ba0b 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -1268,7 +1268,6 @@ sub vmconfig_apply_pending {
 # FIXME: $force deletion is not implemented for CTs
 foreach my $opt (sort keys %$pending_delete_hash) {
next if $selection && !$selection->{$opt};
-   $class->cleanup_pending($conf);
eval {
if ($opt =~ m/^mp(\d+)$/) {
my $mp = $class->parse_ct_mountpoint($conf->{$opt});
@@ -1289,6 +1288,8 @@ sub vmconfig_apply_pending {
}
 }
 
+$class->cleanup_pending($conf);
+
 foreach my $opt (sort keys %{$conf->{pending}}) { # add/change
next if $opt eq 'delete'; # just to be sure
next if $selection && !$selection->{$opt};
@@ -1304,7 +1305,6 @@ sub vmconfig_apply_pending {
if (my $err = $@) {
$add_apply_error->($opt, $err);
} else {
-   $class->cleanup_pending($conf);
$conf->{$opt} = delete $conf->{pending}->{$opt};
}
 }
-- 
2.20.1



[pve-devel] [PATCH qemu-server] fix #2578: check if $target is provided in clone

2020-02-03 Thread Oguz Bektas
regression introduced with commit a85ff91b

previously we set $target to undef if it was localnode or localhost, and
only then checked whether the node exists.

with the regression commit the behaviour changes: the node check moved into
the else branch, but $target may still be undef there. this causes an error:

no such cluster node ''
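
in python pseudocode, the difference between the two control flows looks like
this (function and node names are made up for illustration, this is not the
perl code from the patch):

```python
def check_node_exists(node, known_nodes=("node1", "node2")):
    if node not in known_nodes:
        raise ValueError(f"no such cluster node '{node or ''}'")

# buggy flow after a85ff91b: the check runs even when no target was given
def resolve_target_buggy(target, localnode="node1"):
    if target and target in (localnode, "localhost"):
        return None
    check_node_exists(target)  # also reached with target=None -> error above
    return target

# fixed flow: only check the node when a target was actually provided
def resolve_target_fixed(target, localnode="node1"):
    if target:
        if target in (localnode, "localhost"):
            return None
        check_node_exists(target)
    return target

print(resolve_target_fixed(None))     # None, no error
print(resolve_target_fixed("node2"))  # node2
```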

Signed-off-by: Oguz Bektas 
---
 PVE/API2/Qemu.pm | 10 ++
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index e15c0c3..fe68e87 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2749,10 +2749,12 @@ __PACKAGE__->register_method({
 
 my $localnode = PVE::INotify::nodename();
 
-if ($target && ($target eq $localnode || $target eq 'localhost')) {
-   undef $target;
-   } else {
-   PVE::Cluster::check_node_exists($target);
+   if ($target) {
+   if ($target eq $localnode || $target eq 'localhost') {
+   undef $target;
+   } else {
+   PVE::Cluster::check_node_exists($target);
+   }
}
 
my $storecfg = PVE::Storage::config();
-- 
2.20.1



[pve-devel] [PATCH qemu-server 5/5] hotplug_pending: allow partial fast plugging

2020-01-28 Thread Oguz Bektas
adds a loop after the main fastplug loop, to check if any of the options
are partially fast pluggable.

these are defined in $partial_fast_plug_option

our first use case is the fstrim_cloned_disks option of the
qemu-guest-agent
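
the idea in python pseudocode (names and data shapes are illustrative only —
in qemu-server the pending value is a property string, not a dict):

```python
# Sketch: an option like "agent" is only *partially* fast-pluggable; of its
# sub-properties, only fstrim_cloned_disks may be applied to a running VM.
partial_fast_plug_option = {"agent": {"fstrim_cloned_disks"}}

def partial_fast_plug(conf, pending, opt):
    """Move only the fast-pluggable sub-properties of opt from pending to conf."""
    allowed = partial_fast_plug_option.get(opt, set())
    current = dict(conf.get(opt, {}))
    changed = False
    for key in list(pending.get(opt, {})):
        if key in allowed:
            current[key] = pending[opt].pop(key)
            changed = True
    if changed:
        conf[opt] = current
    return changed

conf = {"agent": {"enabled": "1"}}
pending = {"agent": {"fstrim_cloned_disks": "1", "type": "isa"}}
partial_fast_plug(conf, pending, "agent")
print(conf["agent"], pending["agent"])
```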

Signed-off-by: Oguz Bektas 
---
 PVE/QemuConfig.pm |  7 +++
 PVE/QemuServer.pm | 20 
 2 files changed, 27 insertions(+)

diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index 1ba728a..b9fdfbb 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -399,6 +399,13 @@ sub __snapshot_foreach_volume {
 
 PVE::QemuServer::foreach_drive($conf, $func);
 }
+
+sub get_partial_fast_plug_map {
+my ($class) = @_;
+
+return $PVE::QemuServer::partial_fast_plug_option;
+}
+
 # END implemented abstract methods from PVE::AbstractConfig
 
 1;
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 6a8dc16..72d81ff 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4691,6 +4691,20 @@ my $fast_plug_option = {
 'tags' => 1,
 };
 
+# name
+# -> fmt -> format variable
+# -> properties -> fastpluggable options hash
+our $partial_fast_plug_option = {
+agent => {
+   fmt => $agent_fmt,
+   properties => {
+   fstrim_cloned_disks => 1
+   },
+},
+};
+
+
+
 # hotplug changes in [PENDING]
 # $selection hash can be used to only apply specified options, for
 # example: { cores => 1 } (only apply changed 'cores')
@@ -4720,6 +4734,12 @@ sub vmconfig_hotplug_pending {
}
 }
 
+foreach my $opt (keys %{$conf->{pending}}) {
+   if ($partial_fast_plug_option->{$opt}) {
+   PVE::QemuConfig->partial_fast_plug($conf, $opt);
+   }
+}
+
 if ($changes) {
PVE::QemuConfig->write_config($vmid, $conf);
 }
-- 
2.20.1



[pve-devel] [PATCH qemu-server 2/5] use helper functions from GuestHelpers

2020-01-28 Thread Oguz Bektas
removes safe_string_ne and safe_num_ne code which is now shared in
GuestHelpers. also changes all the calls

Signed-off-by: Oguz Bektas 
---
 PVE/QemuServer.pm | 82 ++-
 1 file changed, 31 insertions(+), 51 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 7374bf1..6a8dc16 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4969,26 +4969,6 @@ sub vmconfig_apply_pending {
 PVE::QemuConfig->write_config($vmid, $conf);
 }
 
-my $safe_num_ne = sub {
-my ($a, $b) = @_;
-
-return 0 if !defined($a) && !defined($b);
-return 1 if !defined($a);
-return 1 if !defined($b);
-
-return $a != $b;
-};
-
-my $safe_string_ne = sub {
-my ($a, $b) = @_;
-
-return 0 if !defined($a) && !defined($b);
-return 1 if !defined($a);
-return 1 if !defined($b);
-
-return $a ne $b;
-};
-
 sub vmconfig_update_net {
 my ($storecfg, $conf, $hotplug, $vmid, $opt, $value, $arch, $machine_type) 
= @_;
 
@@ -4997,9 +4977,9 @@ sub vmconfig_update_net {
 if ($conf->{$opt}) {
my $oldnet = parse_net($conf->{$opt});
 
-   if (&$safe_string_ne($oldnet->{model}, $newnet->{model}) ||
-   &$safe_string_ne($oldnet->{macaddr}, $newnet->{macaddr}) ||
-   &$safe_num_ne($oldnet->{queues}, $newnet->{queues}) ||
+   if (PVE::GuestHelpers::safe_string_ne($oldnet->{model}, 
$newnet->{model}) ||
+   PVE::GuestHelpers::safe_string_ne($oldnet->{macaddr}, 
$newnet->{macaddr}) ||
+   PVE::GuestHelpers::safe_num_ne($oldnet->{queues}, 
$newnet->{queues}) ||
!($newnet->{bridge} && $oldnet->{bridge})) { # bridge/nat mode 
change
 
 # for non online change, we try to hot-unplug
@@ -5010,19 +4990,19 @@ sub vmconfig_update_net {
die "internal error" if $opt !~ m/net(\d+)/;
my $iface = "tap${vmid}i$1";
 
-   if (&$safe_string_ne($oldnet->{bridge}, $newnet->{bridge}) ||
-   &$safe_num_ne($oldnet->{tag}, $newnet->{tag}) ||
-   &$safe_string_ne($oldnet->{trunks}, $newnet->{trunks}) ||
-   &$safe_num_ne($oldnet->{firewall}, $newnet->{firewall})) {
+   if (PVE::GuestHelpers::safe_string_ne($oldnet->{bridge}, 
$newnet->{bridge}) ||
+   PVE::GuestHelpers::safe_num_ne($oldnet->{tag}, $newnet->{tag}) 
||
+   PVE::GuestHelpers::safe_string_ne($oldnet->{trunks}, 
$newnet->{trunks}) ||
+   PVE::GuestHelpers::safe_num_ne($oldnet->{firewall}, 
$newnet->{firewall})) {
PVE::Network::tap_unplug($iface);
PVE::Network::tap_plug($iface, $newnet->{bridge}, 
$newnet->{tag}, $newnet->{firewall}, $newnet->{trunks}, $newnet->{rate});
-   } elsif (&$safe_num_ne($oldnet->{rate}, $newnet->{rate})) {
+   } elsif (PVE::GuestHelpers::safe_num_ne($oldnet->{rate}, 
$newnet->{rate})) {
# Rate can be applied on its own but any change above needs to
# include the rate in tap_plug since OVS resets everything.
PVE::Network::tap_rate_limit($iface, $newnet->{rate});
}
 
-   if (&$safe_string_ne($oldnet->{link_down}, $newnet->{link_down})) {
+   if (PVE::GuestHelpers::safe_string_ne($oldnet->{link_down}, 
$newnet->{link_down})) {
qemu_set_link_status($vmid, $opt, !$newnet->{link_down});
}
 
@@ -5066,32 +5046,32 @@ sub vmconfig_update_disk {
# update existing disk
 
# skip non hotpluggable value
-   if (&$safe_string_ne($drive->{discard}, 
$old_drive->{discard}) ||
-   &$safe_string_ne($drive->{iothread}, 
$old_drive->{iothread}) ||
-   &$safe_string_ne($drive->{queues}, 
$old_drive->{queues}) ||
-   &$safe_string_ne($drive->{cache}, $old_drive->{cache})) 
{
+   if (PVE::GuestHelpers::safe_string_ne($drive->{discard}, 
$old_drive->{discard}) ||
+   PVE::GuestHelpers::safe_string_ne($drive->{iothread}, 
$old_drive->{iothread}) ||
+   PVE::GuestHelpers::safe_string_ne($drive->{queues}, 
$old_drive->{queues}) ||
+   PVE::GuestHelpers::safe_string_ne($drive->{cache}, 
$old_drive->{cache})) {
die "skip\n";
}
 
# apply throttle
-   if (&$safe_num_ne($drive->{mbps}, $old_drive->{mbps}) ||
-   &$safe_num_ne($drive->{mbps_rd}, $old_drive->{mbps_rd}) 
||
-   &$safe_num_ne($drive->{mbps_wr}, $old_drive->{mbps_wr}) 

[pve-devel] [PATCH container 3/5] use helper functions from GuestHelpers

2020-01-28 Thread Oguz Bektas
remove safe_string_ne and safe_num_ne code which is now shared in
GuestHelpers. also change all the calls.

Signed-off-by: Oguz Bektas 
---
 src/PVE/LXC.pm | 36 
 1 file changed, 8 insertions(+), 28 deletions(-)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 34949c6..0051c5c 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -814,26 +814,6 @@ sub vm_stop_cleanup {
 warn $@ if $@; # avoid errors - just warn
 }
 
-my $safe_num_ne = sub {
-my ($a, $b) = @_;
-
-return 0 if !defined($a) && !defined($b);
-return 1 if !defined($a);
-return 1 if !defined($b);
-
-return $a != $b;
-};
-
-my $safe_string_ne = sub {
-my ($a, $b) = @_;
-
-return 0 if !defined($a) && !defined($b);
-return 1 if !defined($a);
-return 1 if !defined($b);
-
-return $a ne $b;
-};
-
 sub update_net {
 my ($vmid, $conf, $opt, $newnet, $netid, $rootdir) = @_;
 
@@ -848,8 +828,8 @@ sub update_net {
 if (my $oldnetcfg = $conf->{$opt}) {
my $oldnet = PVE::LXC::Config->parse_lxc_network($oldnetcfg);
 
-   if (&$safe_string_ne($oldnet->{hwaddr}, $newnet->{hwaddr}) ||
-   &$safe_string_ne($oldnet->{name}, $newnet->{name})) {
+   if (PVE::GuestHelpers::safe_string_ne($oldnet->{hwaddr}, $newnet->{hwaddr}) ||
+   PVE::GuestHelpers::safe_string_ne($oldnet->{name}, $newnet->{name})) {
 
PVE::Network::veth_delete($veth);
delete $conf->{$opt};
@@ -858,9 +838,9 @@ sub update_net {
hotplug_net($vmid, $conf, $opt, $newnet, $netid);
 
} else {
-   if (&$safe_string_ne($oldnet->{bridge}, $newnet->{bridge}) ||
-   &$safe_num_ne($oldnet->{tag}, $newnet->{tag}) ||
-   &$safe_num_ne($oldnet->{firewall}, $newnet->{firewall})) {
+   if (PVE::GuestHelpers::safe_string_ne($oldnet->{bridge}, $newnet->{bridge}) ||
+   PVE::GuestHelpers::safe_num_ne($oldnet->{tag}, $newnet->{tag}) ||
+   PVE::GuestHelpers::safe_num_ne($oldnet->{firewall}, $newnet->{firewall})) {
 
if ($oldnet->{bridge}) {
PVE::Network::tap_unplug($veth);
@@ -876,7 +856,7 @@ sub update_net {
foreach (qw(bridge tag firewall rate)) {
$oldnet->{$_} = $newnet->{$_} if $newnet->{$_};
}
-   } elsif (&$safe_string_ne($oldnet->{rate}, $newnet->{rate})) {
+   } elsif (PVE::GuestHelpers::safe_string_ne($oldnet->{rate}, $newnet->{rate})) {
# Rate can be applied on its own but any change above needs to
# include the rate in tap_plug since OVS resets everything.
PVE::Network::tap_rate_limit($veth, $newnet->{rate});
@@ -946,8 +926,8 @@ sub update_ipconfig {
my $oldip = $optdata->{$ip};
my $oldgw = $optdata->{$gw};
 
-   my $change_ip = &$safe_string_ne($oldip, $newip);
-   my $change_gw = &$safe_string_ne($oldgw, $newgw);
+   my $change_ip = PVE::GuestHelpers::safe_string_ne($oldip, $newip);
+   my $change_gw = PVE::GuestHelpers::safe_string_ne($oldgw, $newgw);
 
return if !$change_ip && !$change_gw;
 
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH guest-common 1/5] guesthelpers: move/add safe comparison functions from lxc and qemu

2020-01-28 Thread Oguz Bektas
move the safe_string_ne and safe_num_ne functions to guesthelpers to
remove duplicate code.

add the new safe_boolean_ne and typesafe_ne helper functions.

Signed-off-by: Oguz Bektas 
---

these will be used in the partial fast plug function in this series

 PVE/GuestHelpers.pm | 49 +
 1 file changed, 49 insertions(+)

diff --git a/PVE/GuestHelpers.pm b/PVE/GuestHelpers.pm
index 07a62ce..b7133d0 100644
--- a/PVE/GuestHelpers.pm
+++ b/PVE/GuestHelpers.pm
@@ -14,6 +14,55 @@ use Scalar::Util qw(weaken);
 
 our $lockdir = '/var/lock/pve-manager';
 
+# safe variable comparison functions
+
+sub safe_num_ne {
+my ($a, $b) = @_;
+
+return 0 if !defined($a) && !defined($b);
+return 1 if !defined($a);
+return 1 if !defined($b);
+
+return $a != $b;
+}
+
+sub safe_string_ne {
+my ($a, $b) = @_;
+
+return 0 if !defined($a) && !defined($b);
+return 1 if !defined($a);
+return 1 if !defined($b);
+
+return $a ne $b;
+}
+
+sub safe_boolean_ne {
+my ($a, $b) = @_;
+
+# we don't check if value is defined, since undefined
+# is false (so it's a valid boolean)
+
+# negate both values to normalize and compare
+return !$a != !$b;
+}
+
+sub typesafe_ne {
+my ($a, $b, $type) = @_;
+
+return 0 if !defined($a) && !defined($b);
+return 1 if !defined($a);
+return 1 if !defined($b);
+
+if ($type eq 'string') {
+   return safe_string_ne($a, $b);
+} elsif ($type eq 'number') {
+   return safe_num_ne($a, $b);
+} elsif ($type eq 'boolean') {
+   return safe_boolean_ne($a, $b);
+}
+}
+
+
 sub guest_migration_lock {
 my ($vmid, $timeout, $func, @param) = @_;
 
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
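
The semantics of the four helpers above are easy to miss at a glance (two undefined values compare equal, a one-sided undef always counts as a change, and booleans are normalized before comparing). A rough Python port for illustration, with None standing in for undef; these are sketches of the Perl above, not an importable API:

```python
def safe_num_ne(a, b):
    # two undefined values compare equal; one-sided undef is a change
    if a is None and b is None:
        return False
    if a is None or b is None:
        return True
    return float(a) != float(b)

def safe_string_ne(a, b):
    if a is None and b is None:
        return False
    if a is None or b is None:
        return True
    return str(a) != str(b)

def safe_boolean_ne(a, b):
    # undef already normalizes to false, so no definedness checks needed
    return bool(a) != bool(b)

def typesafe_ne(a, b, type_):
    # dispatch on the declared schema type of the value
    if a is None and b is None:
        return False
    if a is None or b is None:
        return True
    if type_ == 'string':
        return safe_string_ne(a, b)
    if type_ == 'number':
        return safe_num_ne(a, b)
    if type_ == 'boolean':
        return safe_boolean_ne(a, b)
    raise ValueError(f"unknown type {type_!r}")

print(safe_num_ne('1.0', '1'))       # False: numerically equal
print(safe_string_ne('1.0', '1'))    # True: different strings
print(safe_boolean_ne(0, None))      # False: both falsy
print(typesafe_ne(None, '1', 'number'))  # True: one side undefined
```

The string/number split matters because values parsed from a property string are all strings; which comparison is correct depends on the option's schema type, which is exactly what typesafe_ne encodes.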


[pve-devel] [PATCH guest-common 4/5] abstractconfig: add partial_fast_plug

2020-01-28 Thread Oguz Bektas
allows partial fast plugging of options as defined in the
$partial_fast_plug_option map in qemu-server (and possibly lxc later on)

Signed-off-by: Oguz Bektas 
---
 PVE/AbstractConfig.pm | 44 +++
 1 file changed, 44 insertions(+)

diff --git a/PVE/AbstractConfig.pm b/PVE/AbstractConfig.pm
index b63a744..102b12d 100644
--- a/PVE/AbstractConfig.pm
+++ b/PVE/AbstractConfig.pm
@@ -168,6 +168,50 @@ sub cleanup_pending {
 return $changes;
 }
 
+sub get_partial_fast_plug_map {
+my ($class) = @_;
+
+die "abstract method - implement me ";
+}
+
+sub partial_fast_plug {
+my ($class, $conf, $opt) = @_;
+
+my $partial_fast_plug_option = $class->get_partial_fast_plug_map();
+my $format = $partial_fast_plug_option->{$opt}->{fmt};
+my $fast_pluggable = $partial_fast_plug_option->{$opt}->{properties};
+
+my $old = {};
+if (exists($conf->{$opt})) {
+   $old = PVE::JSONSchema::parse_property_string($format, $conf->{$opt});
+}
+my $new = PVE::JSONSchema::parse_property_string($format, $conf->{pending}->{$opt});
+
+my $changes_left = 0;
+
+# merge old and new opts to iterate
+my @all_keys = keys %{{ %$new, %$old }};
+
+foreach my $subopt (@all_keys) {
+   my $type = $format->{$subopt}->{type};
+   if (PVE::GuestHelpers::typesafe_ne($old->{$subopt}, $new->{$subopt}, $type)) {
+   if ($fast_pluggable->{$subopt}) {
+   $old->{$subopt} = $new->{$subopt};
+   } else {
+   $changes_left = 1;
+   }
+   }
+}
+
+# fastplug
+if (keys %$old) {
+   $conf->{$opt} = PVE::JSONSchema::print_property_string($old, $format);
+}
+
+return $changes_left;
+}
+
+
 sub load_snapshot_config {
 my ($class, $vmid, $snapname) = @_;
 
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH container] fix #2568: hotplug: fix typo 'cpu.shares'

2020-01-27 Thread Oguz Bektas
Signed-off-by: Oguz Bektas 
---
 src/PVE/LXC/Config.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 760ec23..eec6b38 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -1188,7 +1188,7 @@ sub vmconfig_hotplug_pending {
PVE::LXC::write_cgroup_value("cpu", $vmid, "cpu.cfs_period_us", -1);
PVE::LXC::write_cgroup_value("cpu", $vmid, "cpu.cfs_quota_us", -1);
} elsif ($opt eq 'cpuunits') {
-   PVE::LXC::write_cgroup_value("cpu", $vmid, "cpu.shared", $confdesc->{cpuunits}->{default});
+   PVE::LXC::write_cgroup_value("cpu", $vmid, "cpu.shares", $confdesc->{cpuunits}->{default});
} elsif ($opt =~ m/^net(\d)$/) {
my $netid = $1;
PVE::Network::veth_delete("veth${vmid}i$netid");
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [RFC qemu-server] qemu: allow partial fastplugging of property string options

2020-01-24 Thread Oguz Bektas
hi,

thanks for the review!

On Thu, Jan 23, 2020 at 07:41:11AM +0100, Thomas Lamprecht wrote:
> On 1/22/20 5:37 PM, Oguz Bektas wrote:
> > this patch adds the partial_fast_plug function, which allows to partially
> > fastplug an option with a property string.
> > 
> > this is done by having a map $partial_fast_plug_option, the format is 
> > commented.
> > 
> > other helper functions:
> > 
> > * safe_boolean_ne (!$a != !$b)
> > * typesafe_ne (combines safe_string_ne, safe_num_ne and safe_boolean_ne by
> > taking $type as an argument)
> > 
> > the qemu-guest-agent is our first use case for this (although i am sure
> > there are more, this is more of a proof of concept. it should be trivial
> > to add other things via the map), specifically the fstrim_cloned_disks
> > option.
> > 
> > Co-Authored-by: Stefan Reiter 
> > Signed-off-by: Oguz Bektas 
> > ---
> > 
> > i added stefan as a co-author for this, to thank for his help debugging and 
> > testing it with me
> 
> no thanks for me providing the initial hunk of code doing (almost) all
> of this ? ;P
> 
> > 
> > also please note that i'm sending this as RFC for review only, and it
> > shouldn't be applied yet since i'm working on generalizing it a bit more
> > for code reuse via pve-guest-common
> 
> As you only call that method in the agent case this is also not ready to
> be taken in here. Some other issues below.
> 
> 
> Testing changing the agent from a deactivated to a "change every setting" 
> leads
> to an error with your series:
> > agent: hotplug problem - format error enabled: property is missing and it 
> > is not optional
> 
> That happens because $old is empty in that case but you try to parse it as 
> format
> nonetheless.
will fix that, thanks

> 
> > 
> >  PVE/QemuServer.pm | 78 +++
> >  1 file changed, 78 insertions(+)
> > 
> > diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> > index bcdadca..74cbba0 100644
> > --- a/PVE/QemuServer.pm
> > +++ b/PVE/QemuServer.pm
> > @@ -5031,6 +5031,10 @@ sub vmconfig_hotplug_pending {
> > }
> > vmconfig_update_disk($storecfg, $conf, 
> > $hotplug_features->{disk},
> >  $vmid, $opt, $value, 1, $arch, 
> > $machine_type);
> > +   } elsif ($opt eq 'agent') {
> > +   # partially fastpluggable
> 
> useless comment
okay
> 
> > +   # skip if all options were fastpluggable
> 
> No! You do the contrary, you skip if *not* all options were fastpluggable!
right. i'll change the comment
> 
> You could *always* skip if you deleted $cond->{pending}->{$opt} in 
> partial_fast_plug
> if $no_changes there is false. If you want to do it like this and let the 
> deletion
> from pending happen here, OK, but get the comments right - else this is 
> confusing
> as hell.
i'd like to let the hotplug_pending path handle the deletion from
pending. i'll fix the comments in the new version

also based on our offline discussion this belongs in the else branch at
the end, i'll also take a look at that

> 
> > +   die "skip\n" if partial_fast_plug($conf, $opt);
> > } elsif ($opt =~ m/^memory$/) { #dimms
> > die "skip\n" if !$hotplug_features->{memory};
> > $value = PVE::QemuServer::Memory::qemu_memory_hotplug($vmid, 
> > $conf, $defaults, $opt, $value);
> > @@ -5165,6 +5169,80 @@ my $safe_string_ne = sub {
> >  return $a ne $b;
> >  };
> >  
> > +my $safe_boolean_ne = sub {
> > +my ($a, $b) = @_;
> > +
> > +# we don't check if value is defined, since undefined
> > +# is false (so it's a valid boolean)
> 
> The comment below is enough, above adds just noise - it's a common enough
> pattern, IMO.
okay
> 
> > +# negate both values to normalize and compare
> > +return !$a != !$b;
> > +};
> > +
> > +my $typesafe_ne = sub {
> > +my ($a, $b, $type) = @_;
> > +
> > +return 0 if !defined($a) && !defined($b);
> > +return 1 if !defined($a);
> > +return 1 if !defined($b);
> > +
> > +if ($type eq 'string') {
> > +   $safe_string_ne->($a, $b);
> 
> 1. The returns are still missing, as told off-list. missing those works by 
> chance only
>and is very prone for subtle breakage.
right, i forgot about those, i'll change that as well
> 
> 2. if you move this to guest-common do the check directly here, no point in 
> calling (and
> 

[pve-devel] [RFC qemu-server] qemu: allow partial fastplugging of property string options

2020-01-22 Thread Oguz Bektas
this patch adds the partial_fast_plug function, which allows partially
fast-plugging an option with a property string.

this is done via a map, $partial_fast_plug_option; the format is commented.

other helper functions:

* safe_boolean_ne (!$a != !$b)
* typesafe_ne (combines safe_string_ne, safe_num_ne and safe_boolean_ne by
taking $type as an argument)

the qemu-guest-agent is our first use case for this (although i am sure
there are more, this is more of a proof of concept. it should be trivial
to add other things via the map), specifically the fstrim_cloned_disks
option.

Co-Authored-by: Stefan Reiter 
Signed-off-by: Oguz Bektas 
---

i added stefan as a co-author for this, to thank for his help debugging and 
testing it with me

also please note that i'm sending this as RFC for review only, and it
shouldn't be applied yet since i'm working on generalizing it a bit more
for code reuse via pve-guest-common

 PVE/QemuServer.pm | 78 +++
 1 file changed, 78 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index bcdadca..74cbba0 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5031,6 +5031,10 @@ sub vmconfig_hotplug_pending {
}
vmconfig_update_disk($storecfg, $conf, $hotplug_features->{disk}, $vmid, $opt, $value, 1, $arch, $machine_type);
+   } elsif ($opt eq 'agent') {
+   # partially fastpluggable
+   # skip if all options were fastpluggable
+   die "skip\n" if partial_fast_plug($conf, $opt);
} elsif ($opt =~ m/^memory$/) { #dimms
die "skip\n" if !$hotplug_features->{memory};
$value = PVE::QemuServer::Memory::qemu_memory_hotplug($vmid, $conf, $defaults, $opt, $value);
@@ -5165,6 +5169,80 @@ my $safe_string_ne = sub {
 return $a ne $b;
 };
 
+my $safe_boolean_ne = sub {
+my ($a, $b) = @_;
+
+# we don't check if value is defined, since undefined
+# is false (so it's a valid boolean)
+
+# negate both values to normalize and compare
+return !$a != !$b;
+};
+
+my $typesafe_ne = sub {
+my ($a, $b, $type) = @_;
+
+return 0 if !defined($a) && !defined($b);
+return 1 if !defined($a);
+return 1 if !defined($b);
+
+if ($type eq 'string') {
+   $safe_string_ne->($a, $b);
+} elsif ($type eq 'number') {
+   $safe_num_ne->($a, $b);
+} elsif ($type eq 'boolean') {
+   $safe_boolean_ne->($a, $b);
+}
+};
+
+my $partial_fast_plug_option =
+# name
+# -> fmt -> format variable
+# -> properties -> fastpluggable options hash
+{
+agent => {
+   fmt => $agent_fmt,
+   properties => {
+   fstrim_cloned_disks => 1
+   },
+},
+};
+
+sub partial_fast_plug {
+my ($conf, $opt) = @_;
+
+my $format = $partial_fast_plug_option->{$opt}->{fmt};
+my $properties = $partial_fast_plug_option->{$opt}->{properties};
+
+my $old = PVE::JSONSchema::parse_property_string($format, $conf->{$opt});
+my $new = PVE::JSONSchema::parse_property_string($format, $conf->{pending}->{$opt});
+
+my $changes_left = 0;
+
+# merge old and new opts to iterate
+my $all_opts = dclone($old);
+foreach my $opt1 (keys %$new) {
+   $all_opts->{$opt1} = $new->{$opt1} if !defined($all_opts->{$opt1});
+}
+
+foreach my $opt2 (keys %$all_opts) {
+   my $is_fastpluggable = $properties->{$opt2};
+   my $type = $format->{$opt2}->{type};
+   if ($typesafe_ne->($old->{$opt2}, $new->{$opt2}, $type)) {
+   if (defined($is_fastpluggable)) {
+   $old->{$opt2} = $new->{$opt2};
+   } else {
+   $changes_left = 1;
+   }
+   }
+}
+
+$conf->{$opt} = PVE::JSONSchema::print_property_string($old, $format);
+
+return $changes_left;
+}
+
+
 sub vmconfig_update_net {
my ($storecfg, $conf, $hotplug, $vmid, $opt, $value, $arch, $machine_type) = @_;
 
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
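
The merge loop at the heart of partial_fast_plug can be sketched in a few lines. This is a hypothetical Python port for illustration only (names mirror the Perl above; the compact typesafe_ne here is a simplified stand-in): sub-options that differ and are marked fast-pluggable are copied into the live config immediately, while any other difference leaves the change pending:

```python
def typesafe_ne(a, b, type_):
    # simplified dispatcher mirroring the Perl typesafe_ne
    if a is None and b is None:
        return False
    if a is None or b is None:
        return True
    if type_ == 'number':
        return float(a) != float(b)
    if type_ == 'boolean':
        return bool(a) != bool(b)
    return str(a) != str(b)

def partial_fast_plug(old, new, schema, fast_pluggable):
    """old/new: parsed property strings; schema maps sub-option -> type."""
    changes_left = False
    for key in set(old) | set(new):      # merge old and new opts to iterate
        if typesafe_ne(old.get(key), new.get(key), schema[key]):
            if key in fast_pluggable:
                old[key] = new.get(key)  # fast-plug immediately
            else:
                changes_left = True      # stays pending (needs real hotplug)
    return changes_left

schema = {'enabled': 'boolean', 'type': 'string',
          'fstrim_cloned_disks': 'boolean'}
old = {'enabled': 1, 'type': 'virtio'}
new = {'enabled': 1, 'type': 'isa', 'fstrim_cloned_disks': 1}
left = partial_fast_plug(old, new, schema, {'fstrim_cloned_disks'})
print(old['fstrim_cloned_disks'])  # 1: applied immediately
print(left)                        # True: the 'type' change stays pending
```

This is also why the caller only does `die "skip"` when no changes are left: a mixed update applies the fast-pluggable parts and keeps the rest in pending, matching the behavior of other partially hotpluggable options.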


[pve-devel] [PATCH container] setup: allow centos to version 9

2020-01-20 Thread Oguz Bektas
so that we handle all the point releases between 8 and 9

Signed-off-by: Oguz Bektas 
---
 src/PVE/LXC/Setup/CentOS.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/PVE/LXC/Setup/CentOS.pm b/src/PVE/LXC/Setup/CentOS.pm
index 34430ff..d73c0cf 100644
--- a/src/PVE/LXC/Setup/CentOS.pm
+++ b/src/PVE/LXC/Setup/CentOS.pm
@@ -20,7 +20,7 @@ sub new {
 my $version;
 
 if ($release =~ m/release\s+(\d+\.\d+)(\.\d+)?/) {
-   if ($1 >= 5 && $1 <= 8.1) {
+   if ($1 >= 5 && $1 <= 9) {
$version = $1;
}
 }
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
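
The version check these two patches touch compares the captured "major.minor" string numerically, which is why raising the upper bound to 9 covers all point releases at once. A rough Python equivalent for illustration (function name and behavior are an assumption, sketched from the Perl regex and range check above):

```python
import re

def centos_version(release_line):
    """Extract and validate the CentOS version from an os-release line."""
    m = re.search(r'release\s+(\d+\.\d+)(\.\d+)?', release_line)
    if not m:
        return None
    # the captured "major.minor" is compared numerically, as in the Perl code
    major_minor = float(m.group(1))
    return m.group(1) if 5 <= major_minor <= 9 else None

print(centos_version("CentOS Linux release 8.1.1911 (Core)"))  # 8.1
print(centos_version("CentOS Linux release 4.9 (Final)"))      # None
```

Note the numeric comparison treats a hypothetical "8.10" as 8.1, which is fine for a range check but would conflate it with "8.1" if the exact value mattered.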


[pve-devel] [PATCH container] setup: allow centos 8.1

2020-01-17 Thread Oguz Bektas
[0]: 
https://forum.proxmox.com/threads/centos-8-1-lxc-unsupported-centos-release.63530/

Signed-off-by: Oguz Bektas 
---
 src/PVE/LXC/Setup/CentOS.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/PVE/LXC/Setup/CentOS.pm b/src/PVE/LXC/Setup/CentOS.pm
index cc4c5bb..34430ff 100644
--- a/src/PVE/LXC/Setup/CentOS.pm
+++ b/src/PVE/LXC/Setup/CentOS.pm
@@ -20,7 +20,7 @@ sub new {
 my $version;
 
 if ($release =~ m/release\s+(\d+\.\d+)(\.\d+)?/) {
-   if ($1 >= 5 && $1 <= 8) {
+   if ($1 >= 5 && $1 <= 8.1) {
$version = $1;
}
 }
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH qemu-server] hotplug_pending: make 'ssd' option non-hotpluggable

2020-01-16 Thread Oguz Bektas
from hotplug_pending we go into 'vmconfig_update_disk', where we check the
hotpluggability of options.

add 'ssd' there as a non-hotpluggable option (since we'd have to unplug/plug to
change the drive type)

Signed-off-by: Oguz Bektas 
---
 PVE/QemuServer.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index c2547d6..1d01a68 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5252,7 +5252,8 @@ sub vmconfig_update_disk {
if (&$safe_string_ne($drive->{discard}, $old_drive->{discard}) ||
&$safe_string_ne($drive->{iothread}, $old_drive->{iothread}) ||
&$safe_string_ne($drive->{queues}, $old_drive->{queues}) ||
-   &$safe_string_ne($drive->{cache}, $old_drive->{cache})) {
+   &$safe_string_ne($drive->{cache}, $old_drive->{cache}) ||
+   &$safe_string_ne($drive->{ssd}, $old_drive->{ssd})) {
die "skip\n";
}
 
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH qemu-server] hotplug_pending: allow hotplugging of 'fstrim_cloned_disks'

2020-01-16 Thread Oguz Bektas
Signed-off-by: Oguz Bektas 
---

some thoughts about this, based on our offline discussion with dominik.

this patch currently allows us to hotplug the fstrim option for agent,
when it's the only change.
however, if the user changes both the type and this option at the same
time, the option will not be hotplugged. this matches the semantics we
have for other options, e.g. disks (where you can hotplug some options
like backup, but they won't be hotplugged if you change non-hotpluggable
things at the same time).

one way of handling this in a generic way (for reuse) would be to have a
schema for each property string, to define the hotpluggability (or other
traits) of each substring/option.

so for now i decided to leave it like this since it behaves similar to
our other options, but we can discuss some solutions


 PVE/QemuServer.pm | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 9ef3b71..c2547d6 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5031,6 +5031,13 @@ sub vmconfig_hotplug_pending {
}
vmconfig_update_disk($storecfg, $conf, $hotplug_features->{disk}, $vmid, $opt, $value, 1, $arch, $machine_type);
+   } elsif ($opt eq 'agent') {
+   my $old_agent = parse_guest_agent($conf);
+   my $new_agent = parse_guest_agent($conf->{pending});
+   if ($old_agent->{enabled} ne $new_agent->{enabled} ||
+   $old_agent->{type} ne $new_agent->{type}) {
+   die "skip\n"; # can't hotplug-enable agent or change type
+   }
} elsif ($opt =~ m/^memory$/) { #dimms
die "skip\n" if !$hotplug_features->{memory};
$value = PVE::QemuServer::Memory::qemu_memory_hotplug($vmid, $conf, $defaults, $opt, $value);
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v6 qemu-server] vmconfig_apply_pending: remove redundant write/load config calls

2020-01-15 Thread Oguz Bektas
since we handle errors gracefully now, we don't need to write & save
config every time we change a setting.

Signed-off-by: Oguz Bektas 
---

v5 -> v6:
* style nit from fabian
* combine two elsif statements into one, get rid of $new_val and
$old_val from last version (not needed because of $cleanup_pending)

 PVE/QemuServer.pm | 36 +---
 1 file changed, 9 insertions(+), 27 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index f7d99e3..56dc6a0 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5108,20 +5108,10 @@ sub vmconfig_apply_pending {
 foreach my $opt (sort keys %$pending_delete_hash) {
my $force = $pending_delete_hash->{$opt}->{force};
eval {
-   die "internal error" if $opt =~ m/^unused/;
-   $conf = PVE::QemuConfig->load_config($vmid); # update/reload
-   if (!defined($conf->{$opt})) {
-   PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
-   PVE::QemuConfig->write_config($vmid, $conf);
-   } elsif (is_valid_drivename($opt)) {
+   if ($opt =~ m/^unused/) {
+   die "internal error";
+   } elsif (defined($conf->{$opt}) && is_valid_drivename($opt)) {
vmconfig_delete_or_detach_drive($vmid, $storecfg, $conf, $opt, $force);
-   PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
-   delete $conf->{$opt};
-   PVE::QemuConfig->write_config($vmid, $conf);
-   } else {
-   PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
-   delete $conf->{$opt};
-   PVE::QemuConfig->write_config($vmid, $conf);
}
};
if (my $err = $@) {
@@ -5129,35 +5119,27 @@ sub vmconfig_apply_pending {
} else {
PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
delete $conf->{$opt};
-   PVE::QemuConfig->write_config($vmid, $conf);
}
 }
 
-$conf = PVE::QemuConfig->load_config($vmid); # update/reload
+PVE::QemuConfig->cleanup_pending($conf);
 
 foreach my $opt (keys %{$conf->{pending}}) { # add/change
-   $conf = PVE::QemuConfig->load_config($vmid); # update/reload
-
+   next if $opt eq 'delete'; # just to be sure
eval {
-   if (defined($conf->{$opt}) && ($conf->{$opt} eq $conf->{pending}->{$opt})) {
-   # skip if nothing changed
-   } elsif (is_valid_drivename($opt)) {
+   if (defined($conf->{$opt}) && is_valid_drivename($opt)) {
vmconfig_register_unused_drive($storecfg, $vmid, $conf, parse_drive($opt, $conf->{$opt}))
-   if defined($conf->{$opt});
-   $conf->{$opt} = $conf->{pending}->{$opt};
-   } else {
-   $conf->{$opt} = $conf->{pending}->{$opt};
}
};
if (my $err = $@) {
$add_apply_error->($opt, $err);
} else {
$conf->{$opt} = delete $conf->{pending}->{$opt};
-   PVE::QemuConfig->cleanup_pending($conf);
}
-
-   PVE::QemuConfig->write_config($vmid, $conf);
 }
+
+# write all changes at once to avoid unnecessary i/o
+PVE::QemuConfig->write_config($vmid, $conf);
 }
 
 my $safe_num_ne = sub {
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
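
The reworked flow in vmconfig_apply_pending is: process pending deletions, then pending additions/changes, collect per-option errors instead of dying, and write the config once at the end. A rough, hypothetical Python sketch of that control flow (none of these names are the real qemu-server API; storage handling is stubbed out):

```python
def apply_pending(conf):
    """Apply conf['pending_delete'] and conf['pending'] in place."""
    errors = {}
    # deletions first
    for opt in sorted(conf.get('pending_delete', [])):
        try:
            if opt.startswith('unused'):
                raise RuntimeError('internal error')
            conf.pop(opt, None)   # detach/delete of drives would go here
        except Exception as err:
            errors[opt] = f'unable to apply pending change {opt} : {err}'
    conf.pop('pending_delete', None)
    # then additions and changes
    pending = conf.get('pending', {})
    for opt in list(pending):
        conf[opt] = pending.pop(opt)
    # a single write_config() would happen here, instead of one per option
    return errors

conf = {'memory': '512', 'onboot': '1',
        'pending': {'memory': '1024', 'cores': '2'},
        'pending_delete': ['onboot', 'unused0']}
errors = apply_pending(conf)
print(conf['memory'], conf['cores'])  # 1024 2
print('onboot' in conf)               # False
print(list(errors))                   # ['unused0']
```

The point of the patch is visible in the structure: an error on one option (here the bogus 'unused0' deletion) is recorded and reported, while all other pending changes are still applied and persisted in one write.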


[pve-devel] [PATCH v2 docs] rewrite and extend pct documentation

2020-01-14 Thread Oguz Bektas
* rephrase some parts.
* update old information
* add info about pending changes and other "new" features

Co-Authored-by: Aaron Lauterer 
Signed-off-by: Oguz Bektas 
---

v1->v2:
changed some of the writing in terms of phrasing and style, with
feedback from aaron. thanks!

 pct.adoc | 442 ---
 1 file changed, 259 insertions(+), 183 deletions(-)

diff --git a/pct.adoc b/pct.adoc
index 2f1d329..f8804e6 100644
--- a/pct.adoc
+++ b/pct.adoc
@@ -28,32 +28,27 @@ ifdef::wiki[]
 :title: Linux Container
 endif::wiki[]
 
-Containers are a lightweight alternative to fully virtualized
-VMs. Instead of emulating a complete Operating System (OS), containers
-simply use the OS of the host they run on. This implies that all
-containers use the same kernel, and that they can access resources
-from the host directly.
+Containers are a lightweight alternative to fully virtualized VMs.  They use
+the kernel of the host system that they run on, instead of emulating a full
+operating system (OS). This means that containers can access resources on the
+host system directly.
 
-This is great because containers do not waste CPU power nor memory due
-to kernel emulation. Container run-time costs are close to zero and
-usually negligible. But there are also some drawbacks you need to
-consider:
+The runtime costs for containers is low, usually negligible, because of the low
+overhead in terms of CPU and memory resources. However, there are some drawbacks
+that need be considered:
 
-* You can only run Linux based OS inside containers, i.e. it is not
-  possible to run FreeBSD or MS Windows inside.
+* Only Linux distributions can be run in containers, i.e. It is not
+  possible to run FreeBSD or MS Windows inside a container.
 
-* For security reasons, access to host resources needs to be
-  restricted. This is done with AppArmor, SecComp filters and other
-  kernel features. Be prepared that some syscalls are not allowed
-  inside containers.
+* For security reasons, access to host resources needs to be restricted. Some
+  syscalls are not allowed within containers. This is done with AppArmor, SecComp
+  filters, and other kernel features.
 
 {pve} uses https://linuxcontainers.org/[LXC] as underlying container
-technology. We consider LXC as low-level library, which provides
-countless options. It would be too difficult to use those tools
-directly. Instead, we provide a small wrapper called `pct`, the
-"Proxmox Container Toolkit".
+technology. The "Proxmox Container Toolkit" (`pct`) simplifies the usage of LXC
+containers.
 
-The toolkit is tightly coupled with {pve}. That means that it is aware
+The `pct` is tightly coupled with {pve}. This means that it is aware
 of the cluster setup, and it can use the same network and storage
 resources as fully virtualized VMs. You can even use the {pve}
 firewall, or manage containers using the HA framework.
@@ -62,7 +57,7 @@ Our primary goal is to offer an environment as one would get from a
 VM, but without the additional overhead. We call this "System
 Containers".
 
-NOTE: If you want to run micro-containers (with docker, rkt, ...), it
+NOTE: If you want to run micro-containers (with docker, rkt, etc.) it
 is best to run them inside a VM.
 
 
@@ -79,38 +74,66 @@ Technology Overview
 
 * lxcfs to provide containerized /proc file system
 
-* AppArmor/Seccomp to improve security
+* CGroups (control groups) for resource allocation
 
-* CRIU: for live migration (planned)
+* AppArmor/Seccomp to improve security
 
-* Runs on modern Linux kernels
+* Modern Linux kernels
 
 * Image based deployment (templates)
 
-* Use {pve} storage library
+* Uses {pve} storage library
 
-* Container setup from host (network, DNS, storage, ...)
+* Container setup from host (network, DNS, storage, etc.)
 
 
 Security Considerations
 ---
 
-Containers use the same kernel as the host, so there is a big attack
-surface for malicious users. You should consider this fact if you
-provide containers to totally untrusted people. In general, fully
-virtualized VMs provide better isolation.
+Containers use the kernel of the host system. This creates a big attack
+surface for malicious users. This should be considered if containers
+are provided to untrustworthy people. In general, full
+virtual machines provide better isolation.
+
+However, LXC uses many security features like AppArmor, CGroups and kernel
+namespaces to reduce the attack surface.
+
+AppArmor profiles are used to restrict access to possibly dangerous actions.
+Some system calls, i.e. `mount`, are prohibited from execution.
+
+To trace AppArmor activity, use:
+
+
+# dmesg | grep apparmor
+
+
+WARNING: Although it is not recommended, AppArmor can be disabled for
+a container. This brings some security risks, for example being able
+to execute some syscalls in containers can lead to privilege
+escalation in some cases if the system 

[pve-devel] [PATCH v5 qemu-server] vmconfig_apply_pending: remove redundant write/load config calls

2020-01-14 Thread Oguz Bektas
since we handle errors gracefully now, we don't need to write & save
config every time we change a setting.

Signed-off-by: Oguz Bektas 
---
v4 -> v5:

changed some stuff according to the feedback from fabian and thomas,
thanks a lot!

* remove forgotten load_config call
* some style changes (if block vs. postfix if) for consistency
* do not run cleanup_pending in every iteration, but instead between the
delete and add/change loops (for optimization)
* remove unnecessary remove_from_pending_delete in the eval, since
that's already handled afterwards in the else block

 PVE/QemuServer.pm | 41 ++---
 1 file changed, 14 insertions(+), 27 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index f7d99e3..eb4ef85 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5108,20 +5108,13 @@ sub vmconfig_apply_pending {
 foreach my $opt (sort keys %$pending_delete_hash) {
my $force = $pending_delete_hash->{$opt}->{force};
eval {
-   die "internal error" if $opt =~ m/^unused/;
-   $conf = PVE::QemuConfig->load_config($vmid); # update/reload
-   if (!defined($conf->{$opt})) {
-   PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
-   PVE::QemuConfig->write_config($vmid, $conf);
+   if ($opt =~ m/^unused/) {
+   die "internal error";
+   }
+   elsif (!defined($conf->{$opt})) {
+   # pass
} elsif (is_valid_drivename($opt)) {
vmconfig_delete_or_detach_drive($vmid, $storecfg, $conf, $opt, $force);
-   PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
-   delete $conf->{$opt};
-   PVE::QemuConfig->write_config($vmid, $conf);
-   } else {
-   PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
-   delete $conf->{$opt};
-   PVE::QemuConfig->write_config($vmid, $conf);
}
};
if (my $err = $@) {
@@ -5129,35 +5122,29 @@ sub vmconfig_apply_pending {
} else {
PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
delete $conf->{$opt};
-   PVE::QemuConfig->write_config($vmid, $conf);
}
 }
 
-$conf = PVE::QemuConfig->load_config($vmid); # update/reload
+PVE::QemuConfig->cleanup_pending($conf);
 
 foreach my $opt (keys %{$conf->{pending}}) { # add/change
-   $conf = PVE::QemuConfig->load_config($vmid); # update/reload
-
+   next if $opt eq 'delete'; # just to be sure
eval {
-   if (defined($conf->{$opt}) && ($conf->{$opt} eq $conf->{pending}->{$opt})) {
-   # skip if nothing changed
-   } elsif (is_valid_drivename($opt)) {
-   vmconfig_register_unused_drive($storecfg, $vmid, $conf, parse_drive($opt, $conf->{$opt}))
-   if defined($conf->{$opt});
-   $conf->{$opt} = $conf->{pending}->{$opt};
-   } else {
-   $conf->{$opt} = $conf->{pending}->{$opt};
+   my $old_val = $conf->{$opt};
+   my $new_val = $conf->{pending}->{$opt};
+   if (defined($old_val) && is_valid_drivename($opt) && $old_val ne $new_val) {
+   vmconfig_register_unused_drive($storecfg, $vmid, $conf, parse_drive($opt, $old_val))
}
};
if (my $err = $@) {
$add_apply_error->($opt, $err);
} else {
$conf->{$opt} = delete $conf->{pending}->{$opt};
-   PVE::QemuConfig->cleanup_pending($conf);
}
-
-   PVE::QemuConfig->write_config($vmid, $conf);
 }
+
+# write all changes at once to avoid unnecessary i/o
+PVE::QemuConfig->write_config($vmid, $conf);
 }
 
 my $safe_num_ne = sub {
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH i18n] tr: fix whitespace for string

2020-01-08 Thread Oguz Bektas
"ACPI" instead of "ACPI " with trailing whitespace.

Signed-off-by: Oguz Bektas 
---
 tr.po | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tr.po b/tr.po
index e94b0fa..b2db822 100644
--- a/tr.po
+++ b/tr.po
@@ -8,7 +8,7 @@ msgstr ""
 "Project-Id-Version: proxmox translations\n"
 "Report-Msgid-Bugs-To: \n"
 "POT-Creation-Date: Fri Dec 13 12:52:08 2019\n"
-"PO-Revision-Date: 2020-01-02 11:37+0100\n"
+"PO-Revision-Date: 2020-01-08 12:17+0100\n"
 "Last-Translator: Oguz Bektas \n"
 "Language-Team: Turkish\n"
 "Language: tr\n"
@@ -36,7 +36,7 @@ msgstr "ACME Dizini"
 #: pve-manager/www/manager6/qemu/Options.js:160
 #: pve-manager/www/manager6/qemu/Options.js:165
 msgid "ACPI support"
-msgstr "ACPI "
+msgstr "ACPI"
 
 #: pve-manager/www/manager6/storage/ContentView.js:259
 msgid "Abort"
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v4 qemu-server 1/2] vmconfig_apply_pending: add error handling

2020-01-07 Thread Oguz Bektas
wrap code that can fail in eval blocks, so failures are handled
gracefully and logged as errors.
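
The pattern this patch applies can be sketched as a small standalone script. It reuses the names from the diff ($errors, $add_apply_error); the option names and the failing step are invented for illustration, assuming nothing beyond core Perl eval/$@ semantics:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# collected errors, keyed by option name, as in the patch
my $errors = {};

# same error collector the patch adds to vmconfig_apply_pending
my $add_apply_error = sub {
    my ($opt, $msg) = @_;
    my $err_msg = "unable to apply pending change $opt : $msg";
    $errors->{$opt} = $err_msg;
    warn $err_msg;
};

# hypothetical pending changes; 'ide2' is made to fail on purpose
my %pending = (net0 => 'ok', ide2 => 'bad');

foreach my $opt (sort keys %pending) {
    eval {
        # a step that may die, like parse_drive() or a config update
        die "cannot parse value\n" if $pending{$opt} eq 'bad';
    };
    if (my $err = $@) {
        # record and warn, but keep processing the remaining options
        $add_apply_error->($opt, $err);
    } else {
        print "applied $opt\n";
    }
}
```

The caller then raises a single parameter exception from %$errors after the loop, as the API2/Qemu.pm hunk does with raise_param_exc, instead of aborting on the first failure.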

Signed-off-by: Oguz Bektas 
---

v3 -> v4:
* use $errors parameter while calling vmconfig_apply_pending

 PVE/API2/Qemu.pm  |  6 ++---
 PVE/QemuServer.pm | 61 ---
 2 files changed, 45 insertions(+), 22 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 5bae513..e853a83 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1242,13 +1242,13 @@ my $update_vm_api  = sub {
 
$conf = PVE::QemuConfig->load_config($vmid); # update/reload
 
+   my $errors = {};
if ($running) {
-   my $errors = {};
PVE::QemuServer::vmconfig_hotplug_pending($vmid, $conf, 
$storecfg, $modified, $errors);
-   raise_param_exc($errors) if scalar(keys %$errors);
} else {
-   PVE::QemuServer::vmconfig_apply_pending($vmid, $conf, 
$storecfg, $running);
+   PVE::QemuServer::vmconfig_apply_pending($vmid, $conf, 
$storecfg, $running, $errors);
}
+   raise_param_exc($errors) if scalar(keys %$errors);
 
return;
};
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 2b68d81..2de8376 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4977,23 +4977,39 @@ sub vmconfig_delete_or_detach_drive {
 
 
 sub vmconfig_apply_pending {
-my ($vmid, $conf, $storecfg) = @_;
+my ($vmid, $conf, $storecfg, $errors) = @_;
+
+my $add_apply_error = sub {
+   my ($opt, $msg) = @_;
+   my $err_msg = "unable to apply pending change $opt : $msg";
+   $errors->{$opt} = $err_msg;
+   warn $err_msg;
+};
 
 # cold plug
 
 my $pending_delete_hash = 
PVE::QemuConfig->parse_pending_delete($conf->{pending}->{delete});
 foreach my $opt (sort keys %$pending_delete_hash) {
-   die "internal error" if $opt =~ m/^unused/;
my $force = $pending_delete_hash->{$opt}->{force};
-   $conf = PVE::QemuConfig->load_config($vmid); # update/reload
-   if (!defined($conf->{$opt})) {
-   PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
-   PVE::QemuConfig->write_config($vmid, $conf);
-   } elsif (is_valid_drivename($opt)) {
-   vmconfig_delete_or_detach_drive($vmid, $storecfg, $conf, $opt, 
$force);
-   PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
-   delete $conf->{$opt};
-   PVE::QemuConfig->write_config($vmid, $conf);
+   eval {
+   die "internal error" if $opt =~ m/^unused/;
+   $conf = PVE::QemuConfig->load_config($vmid); # update/reload
+   if (!defined($conf->{$opt})) {
+   PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
+   PVE::QemuConfig->write_config($vmid, $conf);
+   } elsif (is_valid_drivename($opt)) {
+   vmconfig_delete_or_detach_drive($vmid, $storecfg, $conf, $opt, 
$force);
+   PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
+   delete $conf->{$opt};
+   PVE::QemuConfig->write_config($vmid, $conf);
+   } else {
+   PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
+   delete $conf->{$opt};
+   PVE::QemuConfig->write_config($vmid, $conf);
+   }
+   };
+   if (my $err = $@) {
+   $add_apply_error->($opt, $err);
} else {
PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
delete $conf->{$opt};
@@ -5006,17 +5022,24 @@ sub vmconfig_apply_pending {
 foreach my $opt (keys %{$conf->{pending}}) { # add/change
$conf = PVE::QemuConfig->load_config($vmid); # update/reload
 
-   if (defined($conf->{$opt}) && ($conf->{$opt} eq 
$conf->{pending}->{$opt})) {
-   # skip if nothing changed
-   } elsif (is_valid_drivename($opt)) {
-   vmconfig_register_unused_drive($storecfg, $vmid, $conf, 
parse_drive($opt, $conf->{$opt}))
-   if defined($conf->{$opt});
-   $conf->{$opt} = $conf->{pending}->{$opt};
+   eval {
+   if (defined($conf->{$opt}) && ($conf->{$opt} eq 
$conf->{pending}->{$opt})) {
+   # skip if nothing changed
+   } elsif (is_valid_drivename($opt)) {
+   vmconfig_register_unused_drive($storecfg, $vmid, $conf, 
parse_drive($opt, $conf->{$opt}))
+   if defined($conf->{$opt});
+   $conf->{$opt} = $conf->{pending}->{$opt};
+   } else {
+   $conf->{$opt} = $conf->{pending}->{$opt};
+   }
+   };
+   if (my $err = $@) {
+   $add_apply_error->($opt, $err);
} else {
-   $conf->{$opt} = $conf->{pending}->{$opt};
+ 

[pve-devel] [PATCH v4 qemu-server 2/2] vmconfig_apply_pending: remove redundant write/load config calls

2020-01-07 Thread Oguz Bektas
since errors are now handled gracefully, we no longer need to write
and reload the config after every changed setting.

note: this results in a change of behavior in the API. since errors are
now handled gracefully instead of "die"ing, a pending change that
cannot be applied for some reason is logged in the task log, but the VM
continues booting regardless. the non-applied change stays in the
pending section of the configuration.
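
The i/o saving can be illustrated with a minimal sketch: apply all pending options first, then write once. write_config here is a mock that only counts invocations; the config contents are invented:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# mock of PVE::QemuConfig->write_config that counts how often it runs
my $write_count = 0;
sub write_config { $write_count++ }

# hypothetical VM config with two pending changes
my $conf = { pending => { net0 => 'virtio', cores => 4 } };

foreach my $opt (keys %{ $conf->{pending} }) {
    # move the value from the pending section into the live config;
    # before this patch, write_config() ran here, once per option
    $conf->{$opt} = delete $conf->{pending}->{$opt};
}

# write all changes at once to avoid unnecessary i/o
write_config();

print "writes: $write_count\n";    # one write regardless of option count
```

With per-option writes the mock would have run once per pending key; batching keeps a single write at the end, which is what the removed write_config/load_config calls in the diff achieve.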

Signed-off-by: Oguz Bektas 
---

v3 -> v4:
* add explanation in commit message about behavior change in API

 PVE/QemuServer.pm | 21 +
 1 file changed, 5 insertions(+), 16 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 2de8376..9c86bee 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4993,19 +4993,11 @@ sub vmconfig_apply_pending {
my $force = $pending_delete_hash->{$opt}->{force};
eval {
die "internal error" if $opt =~ m/^unused/;
-   $conf = PVE::QemuConfig->load_config($vmid); # update/reload
if (!defined($conf->{$opt})) {
PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
-   PVE::QemuConfig->write_config($vmid, $conf);
} elsif (is_valid_drivename($opt)) {
vmconfig_delete_or_detach_drive($vmid, $storecfg, $conf, $opt, 
$force);
PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
-   delete $conf->{$opt};
-   PVE::QemuConfig->write_config($vmid, $conf);
-   } else {
-   PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
-   delete $conf->{$opt};
-   PVE::QemuConfig->write_config($vmid, $conf);
}
};
if (my $err = $@) {
@@ -5013,24 +5005,20 @@ sub vmconfig_apply_pending {
} else {
PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
delete $conf->{$opt};
-   PVE::QemuConfig->write_config($vmid, $conf);
}
+
 }
 
 $conf = PVE::QemuConfig->load_config($vmid); # update/reload
 
 foreach my $opt (keys %{$conf->{pending}}) { # add/change
-   $conf = PVE::QemuConfig->load_config($vmid); # update/reload
-
+   next if $opt eq 'delete'; # just to be sure
eval {
if (defined($conf->{$opt}) && ($conf->{$opt} eq 
$conf->{pending}->{$opt})) {
# skip if nothing changed
} elsif (is_valid_drivename($opt)) {
vmconfig_register_unused_drive($storecfg, $vmid, $conf, 
parse_drive($opt, $conf->{$opt}))
if defined($conf->{$opt});
-   $conf->{$opt} = $conf->{pending}->{$opt};
-   } else {
-   $conf->{$opt} = $conf->{pending}->{$opt};
}
};
if (my $err = $@) {
@@ -5039,9 +5027,10 @@ sub vmconfig_apply_pending {
$conf->{$opt} = delete $conf->{pending}->{$opt};
PVE::QemuConfig->cleanup_pending($conf);
}
-
-   PVE::QemuConfig->write_config($vmid, $conf);
 }
+
+# write all changes at once to avoid unnecessary i/o
+PVE::QemuConfig->write_config($vmid, $conf);
 }
 
 my $safe_num_ne = sub {
-- 
2.20.1



[pve-devel] [PATCH v3 qemu-server 2/2] vmconfig_apply_pending: remove redundant write/load config calls

2020-01-02 Thread Oguz Bektas
since errors are now handled gracefully, we no longer need to write
and reload the config after every changed setting.

Signed-off-by: Oguz Bektas 
---
 PVE/QemuServer.pm | 21 +
 1 file changed, 5 insertions(+), 16 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 2de8376..9c86bee 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4993,19 +4993,11 @@ sub vmconfig_apply_pending {
my $force = $pending_delete_hash->{$opt}->{force};
eval {
die "internal error" if $opt =~ m/^unused/;
-   $conf = PVE::QemuConfig->load_config($vmid); # update/reload
if (!defined($conf->{$opt})) {
PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
-   PVE::QemuConfig->write_config($vmid, $conf);
} elsif (is_valid_drivename($opt)) {
vmconfig_delete_or_detach_drive($vmid, $storecfg, $conf, $opt, 
$force);
PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
-   delete $conf->{$opt};
-   PVE::QemuConfig->write_config($vmid, $conf);
-   } else {
-   PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
-   delete $conf->{$opt};
-   PVE::QemuConfig->write_config($vmid, $conf);
}
};
if (my $err = $@) {
@@ -5013,24 +5005,20 @@ sub vmconfig_apply_pending {
} else {
PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
delete $conf->{$opt};
-   PVE::QemuConfig->write_config($vmid, $conf);
}
+
 }
 
 $conf = PVE::QemuConfig->load_config($vmid); # update/reload
 
 foreach my $opt (keys %{$conf->{pending}}) { # add/change
-   $conf = PVE::QemuConfig->load_config($vmid); # update/reload
-
+   next if $opt eq 'delete'; # just to be sure
eval {
if (defined($conf->{$opt}) && ($conf->{$opt} eq 
$conf->{pending}->{$opt})) {
# skip if nothing changed
} elsif (is_valid_drivename($opt)) {
vmconfig_register_unused_drive($storecfg, $vmid, $conf, 
parse_drive($opt, $conf->{$opt}))
if defined($conf->{$opt});
-   $conf->{$opt} = $conf->{pending}->{$opt};
-   } else {
-   $conf->{$opt} = $conf->{pending}->{$opt};
}
};
if (my $err = $@) {
@@ -5039,9 +5027,10 @@ sub vmconfig_apply_pending {
$conf->{$opt} = delete $conf->{pending}->{$opt};
PVE::QemuConfig->cleanup_pending($conf);
}
-
-   PVE::QemuConfig->write_config($vmid, $conf);
 }
+
+# write all changes at once to avoid unnecessary i/o
+PVE::QemuConfig->write_config($vmid, $conf);
 }
 
 my $safe_num_ne = sub {
-- 
2.20.1



[pve-devel] [PATCH v3 qemu-server 1/2] vmconfig_apply_pending: add error handling

2020-01-02 Thread Oguz Bektas
wrap code that can fail in eval blocks, so failures are handled
gracefully and logged as errors.

Signed-off-by: Oguz Bektas 
---
 PVE/API2/Qemu.pm  |  4 ++--
 PVE/QemuServer.pm | 61 ---
 2 files changed, 44 insertions(+), 21 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 5bae513..3bea68b 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1242,13 +1242,13 @@ my $update_vm_api  = sub {
 
$conf = PVE::QemuConfig->load_config($vmid); # update/reload
 
+   my $errors = {};
if ($running) {
-   my $errors = {};
PVE::QemuServer::vmconfig_hotplug_pending($vmid, $conf, 
$storecfg, $modified, $errors);
-   raise_param_exc($errors) if scalar(keys %$errors);
} else {
PVE::QemuServer::vmconfig_apply_pending($vmid, $conf, 
$storecfg, $running);
}
+   raise_param_exc($errors) if scalar(keys %$errors);
 
return;
};
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 2b68d81..2de8376 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4977,23 +4977,39 @@ sub vmconfig_delete_or_detach_drive {
 
 
 sub vmconfig_apply_pending {
-my ($vmid, $conf, $storecfg) = @_;
+my ($vmid, $conf, $storecfg, $errors) = @_;
+
+my $add_apply_error = sub {
+   my ($opt, $msg) = @_;
+   my $err_msg = "unable to apply pending change $opt : $msg";
+   $errors->{$opt} = $err_msg;
+   warn $err_msg;
+};
 
 # cold plug
 
 my $pending_delete_hash = 
PVE::QemuConfig->parse_pending_delete($conf->{pending}->{delete});
 foreach my $opt (sort keys %$pending_delete_hash) {
-   die "internal error" if $opt =~ m/^unused/;
my $force = $pending_delete_hash->{$opt}->{force};
-   $conf = PVE::QemuConfig->load_config($vmid); # update/reload
-   if (!defined($conf->{$opt})) {
-   PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
-   PVE::QemuConfig->write_config($vmid, $conf);
-   } elsif (is_valid_drivename($opt)) {
-   vmconfig_delete_or_detach_drive($vmid, $storecfg, $conf, $opt, 
$force);
-   PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
-   delete $conf->{$opt};
-   PVE::QemuConfig->write_config($vmid, $conf);
+   eval {
+   die "internal error" if $opt =~ m/^unused/;
+   $conf = PVE::QemuConfig->load_config($vmid); # update/reload
+   if (!defined($conf->{$opt})) {
+   PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
+   PVE::QemuConfig->write_config($vmid, $conf);
+   } elsif (is_valid_drivename($opt)) {
+   vmconfig_delete_or_detach_drive($vmid, $storecfg, $conf, $opt, 
$force);
+   PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
+   delete $conf->{$opt};
+   PVE::QemuConfig->write_config($vmid, $conf);
+   } else {
+   PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
+   delete $conf->{$opt};
+   PVE::QemuConfig->write_config($vmid, $conf);
+   }
+   };
+   if (my $err = $@) {
+   $add_apply_error->($opt, $err);
} else {
PVE::QemuConfig->remove_from_pending_delete($conf, $opt);
delete $conf->{$opt};
@@ -5006,17 +5022,24 @@ sub vmconfig_apply_pending {
 foreach my $opt (keys %{$conf->{pending}}) { # add/change
$conf = PVE::QemuConfig->load_config($vmid); # update/reload
 
-   if (defined($conf->{$opt}) && ($conf->{$opt} eq 
$conf->{pending}->{$opt})) {
-   # skip if nothing changed
-   } elsif (is_valid_drivename($opt)) {
-   vmconfig_register_unused_drive($storecfg, $vmid, $conf, 
parse_drive($opt, $conf->{$opt}))
-   if defined($conf->{$opt});
-   $conf->{$opt} = $conf->{pending}->{$opt};
+   eval {
+   if (defined($conf->{$opt}) && ($conf->{$opt} eq 
$conf->{pending}->{$opt})) {
+   # skip if nothing changed
+   } elsif (is_valid_drivename($opt)) {
+   vmconfig_register_unused_drive($storecfg, $vmid, $conf, 
parse_drive($opt, $conf->{$opt}))
+   if defined($conf->{$opt});
+   $conf->{$opt} = $conf->{pending}->{$opt};
+   } else {
+   $conf->{$opt} = $conf->{pending}->{$opt};
+   }
+   };
+   if (my $err = $@) {
+   $add_apply_error->($opt, $err);
} else {
-   $conf->{$opt} = $conf->{pending}->{$opt};
+   $conf->{$opt} = delete $conf->{pending}->{$opt};
+   PVE::QemuConfig->cleanup_pending($conf);
}
 
-   delete $conf->{pending}->{$opt}

[pve-devel] [PATCH i18n] update turkish translation

2020-01-02 Thread Oguz Bektas
Signed-off-by: Oguz Bektas 
---
 tr.po | 110 +++---
 1 file changed, 44 insertions(+), 66 deletions(-)

diff --git a/tr.po b/tr.po
index 1d67201..e94b0fa 100644
--- a/tr.po
+++ b/tr.po
@@ -8,7 +8,7 @@ msgstr ""
 "Project-Id-Version: proxmox translations\n"
 "Report-Msgid-Bugs-To: \n"
 "POT-Creation-Date: Fri Dec 13 12:52:08 2019\n"
-"PO-Revision-Date: 2019-07-11 16:18+0200\n"
+"PO-Revision-Date: 2020-01-02 11:37+0100\n"
 "Last-Translator: Oguz Bektas \n"
 "Language-Team: Turkish\n"
 "Language: tr\n"
@@ -26,6 +26,8 @@ msgstr "/bir/dizin"
 msgid ""
 "A newer version was installed but old version still running, please restart"
 msgstr ""
+"Yeni bir versiyon yüklü, ama eski versiyon çalışıyor. Lütfen yeniden "
+"başlatın."
 
 #: pve-manager/www/manager6/node/ACME.js:65
 msgid "ACME Directory"
@@ -33,9 +35,8 @@ msgstr "ACME Dizini"
 
 #: pve-manager/www/manager6/qemu/Options.js:160
 #: pve-manager/www/manager6/qemu/Options.js:165
-#, fuzzy
 msgid "ACPI support"
-msgstr "Destek"
+msgstr "ACPI "
 
 #: pve-manager/www/manager6/storage/ContentView.js:259
 msgid "Abort"
@@ -127,19 +128,16 @@ msgstr ""
 
 #: pve-manager/www/manager6/ceph/FS.js:49
 #: pve-manager/www/manager6/ceph/Pool.js:53
-#, fuzzy
 msgid "Add as Storage"
-msgstr "Depolama ekle"
+msgstr "Depolama olarak ekle"
 
 #: pve-manager/www/manager6/ceph/FS.js:54
-#, fuzzy
 msgid "Add the new CephFS to the cluster storage configuration."
-msgstr "Ceph Cluster Yapılandırması"
+msgstr "Yeni CephFS'i cluster depolama ayarlarına ekle."
 
 #: pve-manager/www/manager6/ceph/Pool.js:58
-#, fuzzy
 msgid "Add the new pool to the cluster storage configuration."
-msgstr "Ceph Cluster Yapılandırması"
+msgstr "Yeni pool'u cluster depolama ayarlarına ekle."
 
 #: pve-manager/www/manager6/ceph/CephInstallWizard.js:224
 msgid ""
@@ -210,7 +208,7 @@ msgstr "HREF'lere izin ver"
 
 #: pve-manager/www/manager6/window/BulkAction.js:86
 msgid "Allow local disk migration"
-msgstr ""
+msgstr "Yerel disk göçüne izin ver"
 
 #: proxmox-widget-toolkit/Toolkit.js:91 proxmox-widget-toolkit/Toolkit.js:99
 #: proxmox-widget-toolkit/Toolkit.js:107
@@ -227,14 +225,12 @@ msgid "Apply"
 msgstr "Uygula"
 
 #: proxmox-widget-toolkit/node/NetworkView.js:130
-#, fuzzy
 msgid "Apply Configuration"
-msgstr "Yapılandırma"
+msgstr "Ayarları uygula"
 
 #: pmg-gui/js/SpamDetectorCustom.js:84 pmg-gui/js/SpamDetectorCustom.js:251
-#, fuzzy
 msgid "Apply Custom Scores"
-msgstr "Spam skorları"
+msgstr "Özelleştirilmiş spam skorlarını uygula"
 
 #: pve-manager/www/manager6/lxc/Options.js:56
 msgid "Architecture"
@@ -297,9 +293,8 @@ msgstr ""
 #: pve-manager/www/manager6/qemu/AudioEdit.js:44
 #: pve-manager/www/manager6/qemu/HardwareView.js:292
 #: pve-manager/www/manager6/qemu/HardwareView.js:702
-#, fuzzy
 msgid "Audio Device"
-msgstr "Cihaz"
+msgstr "Ses cihazı"
 
 #: pmg-gui/js/Utils.js:21
 msgid "Auditor"
@@ -320,9 +315,8 @@ msgstr "Özgün özellikleri tut, MAC adresi vb."
 #: pve-manager/www/manager6/ceph/OSD.js:71
 #: pve-manager/www/manager6/ceph/OSD.js:107
 #: pve-manager/www/manager6/qemu/Options.js:302
-#, fuzzy
 msgid "Automatic"
-msgstr "Otomatik Başlat"
+msgstr "Otomatik"
 
 #: pve-manager/www/manager6/qemu/Options.js:312
 msgid "Automatic (Storage used by the VM, or 'local')"
@@ -383,9 +377,8 @@ msgid "Backup Job"
 msgstr "Yedekleme İşi"
 
 #: pve-manager/www/manager6/dc/OptionView.js:182
-#, fuzzy
 msgid "Backup Restore"
-msgstr "Yedekleme İşi"
+msgstr "Yedekleme/Geri yükleme"
 
 #: pve-manager/www/manager6/grid/BackupView.js:120
 msgid "Backup now"
@@ -445,9 +438,8 @@ msgid "Block Size"
 msgstr "Blok boyutu"
 
 #: pmg-gui/js/VirusDetectorOptions.js:11
-#, fuzzy
 msgid "Block encrypted archives and documents"
-msgstr "Blok şifrelenmiş arşivler"
+msgstr "Şifrelenmiş arşivleri ve dökümanları engelle"
 
 #: pmg-gui/js/Utils.js:515
 msgid "Body"
@@ -646,9 +638,8 @@ msgid "Check"
 msgstr "Kontrol"
 
 #: pve-manager/www/manager6/qemu/USBEdit.js:80
-#, fuzzy
 msgid "Choose Device"
-msgstr "PCI cihazı"
+msgstr "Cihaz seçin"
 
 #: pve-manager/www/manager6/qemu/USBEdit.js:99
 msgid "Choose Port"
@@ -834,9 +825,8 @@ msgid "Configuration"
 msgstr "Yapılandırma"
 
 #: pve-manager/www/manager6/ceph/Config.js:140
-#, fuzzy
