Re: [pve-devel] Integration of FreeNAS iSCSI target initiator in Proxmox Enterprise repo

2020-06-03 Thread Michael Rasmussen
On Wed, 3 Jun 2020 18:16:11 +0200
"b...@todoo.biz"  wrote:

> 
> I'll try to see what can be done… 
> But I am more involved in hardcore firewall development than Proxmox at
> this stage! 
> 
This is the file you should copy and adapt to ctld:
https://git.proxmox.com/?p=pve-storage.git;a=blob;f=PVE/Storage/LunCmd/Istgt.pm;h=2f758f908aafa7fa4e19b5a82b7244d77d949fb6;hb=HEAD

Very little adaptation needs to be done in this file:
https://git.proxmox.com/?p=pve-storage.git;a=blob;f=PVE/Storage/ZFSPlugin.pm;h=383f0a0cde932da5ce34d792aafb7206517b74ee;hb=HEAD

Apart from the above, a few places in the GUI need to be
aware of the new plugin, such as in this file:
https://git.proxmox.com/?p=pve-storage.git;a=blob;f=PVE/Storage/ZFSPlugin.pm;h=383f0a0cde932da5ce34d792aafb7206517b74ee;hb=HEAD
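
As an illustration only - the module name and the run_lun_command/get_base
entry points below mirror how the existing LunCmd modules are wired into
ZFSPlugin.pm; everything else is an assumed stub, not a working implementation:

    package PVE::Storage::LunCmd::Ctld;

    use strict;
    use warnings;

    # base path under which ZFS zvols appear on the FreeBSD target host
    sub get_base {
        return '/dev/zvol';
    }

    sub run_lun_command {
        my ($scfg, $timeout, $method, @params) = @_;

        # A real module would dispatch on $method (create_lu, delete_lu,
        # list_lu, add_view, modify_lu, ...), rewrite /etc/ctl.conf on the
        # target host over SSH and reload ctld so the changes take effect.
        die "method '$method' not implemented in this sketch\n";
    }

    1;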

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
https://pgp.key-server.io/pks/lookup?search=0xD3C9A00E
mir  datanom  net
https://pgp.key-server.io/pks/lookup?search=0xE501F51C
mir  miras  org
https://pgp.key-server.io/pks/lookup?search=0xE3E80917
--
/usr/games/fortune -es says:
The idea is to die young as late as possible.
-- Ashley Montague




Re: [pve-devel] Integration of FreeNAS iSCSI target initiator in Proxmox Enterprise repo

2020-06-03 Thread b...@todoo.biz
On 3 Jun 2020, at 18:25, Michael Rasmussen wrote:
> 
> PGP signed part
> On Wed, 3 Jun 2020 18:16:11 +0200
> "b...@todoo.biz"  wrote:
> 
>> 
>> What is your upper limit for this? 
>> 
> I prefer a world without limitations ;-)
> BTW. Some years ago I asked phk (Poul Henning Kamp) about this and
> according to him there was no kernel reason behind the limitation and
> to the best of his knowledge the number of LUNs, within reason, should
> be nearly endless. Of course the number of CPU cores and the amount of
> memory should be scaled accordingly.
> 
>> I have a large project with Proxmox which might be popping up; if
>> this is the case, I'll try to sponsor this ;-) 
> 
> I am convinced that the Proxmox team will welcome such a plugin since
> supporting more storage solutions can only benefit Proxmox as a product.

I didn't understand in the first place what the real limitation of the 
FreeNAS plugin provided by TheGrandWazoo was (besides not being in line with 
Proxmox coding practices). 

Maybe in this case the Proxmox core team could take a little of their 
precious time and modify the submitted patches to "style" them accordingly. 

Then we could do the testing part of the job… 

> 
> -- 
> Hilsen/Regards
> Michael Rasmussen
> 
> Get my public GnuPG keys:
> michael  rasmussen  cc
> https://pgp.key-server.io/pks/lookup?search=0xD3C9A00E
> mir  datanom  net
> https://pgp.key-server.io/pks/lookup?search=0xE501F51C
> mir  miras  org
> https://pgp.key-server.io/pks/lookup?search=0xE3E80917
> --
> /usr/games/fortune -es says:
> Delta: We never make the same mistake three times.   -- David Letterman
> 
> 

---
ToDoo - osnet.eu - DynFi.com 
6 rue Montmartre - 75001 Paris
b...@todoo.biz
web: https://www.dynfi.com
PGP_ID: 0x1BA3C2FD




Re: [pve-devel] Integration of FreeNAS iSCSI target initiator in Proxmox Enterprise repo

2020-06-03 Thread Michael Rasmussen
On Wed, 3 Jun 2020 18:16:11 +0200
"b...@todoo.biz"  wrote:

> 
> What is your upper limit for this? 
> 
I prefer a world without limitations ;-)
BTW. Some years ago I asked phk (Poul Henning Kamp) about this and
according to him there was no kernel reason behind the limitation and
to the best of his knowledge the number of LUNs, within reason, should
be nearly endless. Of course the number of CPU cores and the amount of
memory should be scaled accordingly.

> I have a large project with Proxmox which might be popping up; if
> this is the case, I'll try to sponsor this ;-) 

I am convinced that the Proxmox team will welcome such a plugin since
supporting more storage solutions can only benefit Proxmox as a product.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
https://pgp.key-server.io/pks/lookup?search=0xD3C9A00E
mir  datanom  net
https://pgp.key-server.io/pks/lookup?search=0xE501F51C
mir  miras  org
https://pgp.key-server.io/pks/lookup?search=0xE3E80917
--
/usr/games/fortune -es says:
Delta: We never make the same mistake three times.   -- David Letterman




Re: [pve-devel] Integration of FreeNAS iSCSI target initiator in Proxmox Enterprise repo

2020-06-03 Thread b...@todoo.biz


> On 3 Jun 2020, at 18:06, Michael Rasmussen wrote:
> 
> PGP signed part
> On Wed, 3 Jun 2020 16:54:17 +0200
> "b...@todoo.biz"  wrote:
> 
>> 
>> A lot of time has passed since version 9 of FreeBSD / FreeNAS. Six
>> years to be precise. 
>> 
> The version numbers were for explanation purposes only.
> BTW. When FreeNAS soon vanishes and is consumed by TrueNAS, what
> happens then? Will it be a closed and commercial product only?
> 
>> iSCSI on FreeBSD is used by very large corporations (Gandi, to name
>> one: https://www.gandi.net/en is hosting all its VMs using FreeBSD,
>> with Linux on the initiator side) with an excellent level of
>> stability. 
>> 
> I have nothing against FreeBSD; on the contrary, I have very much fun
> using FreeBSD myself (for one use case I am addicted to pfSense). The only
> shortcoming in the Proxmox realm in the past was the very non-enterprise
> iSCSI implementation (istgt), which was replaced by ctld, a very fine
> iSCSI implementation, but one limited to 1024 LUNs per target in the past,
> which according to your links has since been increased. (At the time I was
> looking into this I had numerous discussions with the developer
> about this issue and he promised to inform me personally when he removed
> the limitation - I haven't heard from him.)
> 
>> Solutions derived from Illumos are either overpriced, closed source and
>> scarcely used, or abandoned (if not all at the same time). 
>> 
> This is where you are misinformed ;-) Illumos is truly FOSS and lives
> well. Commercially with SmartOS, but not closed source, since all
> patches are sent upstream. The non-commercial branch of Illumos also
> lives well as OmniOSce and is well maintained by Andy and Tobias, to
> mention a few (I have a small part as well ;-). I think you are
> thinking of Solaris, as in Oracle Solaris, which is definitely on the
> usual way to disintegration, as is all other stuff Larry gets his hands on.
> 
> You should also remember that all ZFS development, apart from the black
> caves near Redwood City, is done in OpenZFS, and all current
> implementations apart from Oracle's use the same source tree.
> 
>> I think that it might be time to reconsider FreeBSD as a
>> potential stable solution to host iSCSI targets, considering the very
>> important efforts FreeBSD has made to implement ZFS and stick to the
>> OpenZFS standards. 
>> 
> This has never been my attitude. FreeBSD is rock solid and has been for
> decades.
> 
>> 
>> This is not true. 
>> 
>> The limit that you are mentioning here can be overridden by a
>> tunable parameter.
>> https://www.freebsd.org/cgi/man.cgi?query=ctl&sektion=4
>> 
>> 
>> It turns out that this patch was developed by one of my fellow
>> developers a couple of years ago to bypass the 1024-LUN limitation
>> that you mentioned. 
>> 
> Do you have any personal experience with raising the number of LUNs?
> E.g. increasing it by a factor of 10?

What Manu (a FreeBSD kernel developer on my team) has experience with is 
going past the limit of 1024. 
He is the one who patched this in order to turn it into a tunable 
variable. 

What is your upper limit for this? 

I am putting him in CC. 

> 
>> So it should qualify to be selected as an enterprise-grade solution
>> to host iSCSI targets. 
>> 
>> The fact that some developers are patching Proxmox code to allow *BSD
>> / FreeNAS to perform is a sign that there might be a need for such a
>> tool. 
>> 
>> So besides this max-LUNs problem, what else seems to be causing
>> problems with the BSD target? 
>> 
> The LUN limitation was my only objection when I stopped developing the
> storage plugin. It should be rather easy to write such a plugin using
> the istgt code and replace the iSCSI part with ctld. For a motivated
> developer this should not take more than a weekend's work :-)

I'll try to see what can be done… 
But I am more involved in hardcore firewall development than Proxmox at this stage! 

I have a large project with Proxmox which might be popping up; if this is the 
case, I'll try to sponsor this ;-) 

> 
> -- 
> Hilsen/Regards
> Michael Rasmussen
> 
> Get my public GnuPG keys:
> michael  rasmussen  cc
> https://pgp.key-server.io/pks/lookup?search=0xD3C9A00E
> mir  datanom  net
> https://pgp.key-server.io/pks/lookup?search=0xE501F51C
> mir  miras  org
> https://pgp.key-server.io/pks/lookup?search=0xE3E80917
> --
> /usr/games/fortune -es says:
> Quid me anxius sum?
> 
> [ What? Me, worry? ]
> 
> 

---
ToDoo - osnet.eu - DynFi.com 
6 rue Montmartre - 75001 Paris
b...@todoo.biz
web: https://www.dynfi.com
PGP_ID: 0x1BA3C2FD




Re: [pve-devel] Integration of FreeNAS iSCSI target initiator in Proxmox Enterprise repo

2020-06-03 Thread Michael Rasmussen
On Wed, 3 Jun 2020 16:54:17 +0200
"b...@todoo.biz"  wrote:

> 
> A lot of time has passed since version 9 of FreeBSD / FreeNAS. Six
> years to be precise. 
>
The version numbers were for explanation purposes only.
BTW. When FreeNAS soon vanishes and is consumed by TrueNAS, what
happens then? Will it be a closed and commercial product only?

> iSCSI on FreeBSD is used by very large corporations (Gandi, to name
> one: https://www.gandi.net/en is hosting all its VMs using FreeBSD,
> with Linux on the initiator side) with an excellent level of
> stability. 
> 
I have nothing against FreeBSD; on the contrary, I have very much fun
using FreeBSD myself (for one use case I am addicted to pfSense). The only
shortcoming in the Proxmox realm in the past was the very non-enterprise
iSCSI implementation (istgt), which was replaced by ctld, a very fine
iSCSI implementation, but one limited to 1024 LUNs per target in the past,
which according to your links has since been increased. (At the time I was
looking into this I had numerous discussions with the developer
about this issue and he promised to inform me personally when he removed
the limitation - I haven't heard from him.)

> Solutions derived from Illumos are either overpriced, closed source and
> scarcely used, or abandoned (if not all at the same time). 
> 
This is where you are misinformed ;-) Illumos is truly FOSS and lives
well. Commercially with SmartOS, but not closed source, since all
patches are sent upstream. The non-commercial branch of Illumos also
lives well as OmniOSce and is well maintained by Andy and Tobias, to
mention a few (I have a small part as well ;-). I think you are
thinking of Solaris, as in Oracle Solaris, which is definitely on the
usual way to disintegration, as is all other stuff Larry gets his hands on.

You should also remember that all ZFS development, apart from the black
caves near Redwood City, is done in OpenZFS, and all current
implementations apart from Oracle's use the same source tree.

> I think that it might be time to reconsider FreeBSD as a
> potential stable solution to host iSCSI targets, considering the very
> important efforts FreeBSD has made to implement ZFS and stick to the
> OpenZFS standards. 
> 
This has never been my attitude. FreeBSD is rock solid and has been for
decades.

> 
> This is not true. 
> 
> The limit that you are mentioning here can be overridden by a
> tunable parameter.
> https://www.freebsd.org/cgi/man.cgi?query=ctl&sektion=4
> 
> 
> It turns out that this patch was developed by one of my fellow
> developers a couple of years ago to bypass the 1024-LUN limitation
> that you mentioned. 
> 
Do you have any personal experience with raising the number of LUNs?
E.g. increasing it by a factor of 10?

> So it should qualify to be selected as an enterprise-grade solution
> to host iSCSI targets. 
> 
> The fact that some developers are patching Proxmox code to allow *BSD
> / FreeNAS to perform is a sign that there might be a need for such a
> tool. 
> 
> So besides this max-LUNs problem, what else seems to be causing
> problems with the BSD target? 
> 
The LUN limitation was my only objection when I stopped developing the
storage plugin. It should be rather easy to write such a plugin using
the istgt code and replace the iSCSI part with ctld. For a motivated
developer this should not take more than a weekend's work :-)

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
https://pgp.key-server.io/pks/lookup?search=0xD3C9A00E
mir  datanom  net
https://pgp.key-server.io/pks/lookup?search=0xE501F51C
mir  miras  org
https://pgp.key-server.io/pks/lookup?search=0xE3E80917
--
/usr/games/fortune -es says:
Quid me anxius sum?

[ What? Me, worry? ]




Re: [pve-devel] Integration of FreeNAS iSCSI target initiator in Proxmox Enterprise repo

2020-06-03 Thread b...@todoo.biz


> On 3 Jun 2020, at 12:03, Michael Rasmussen wrote:
> 
> PGP signed part
> On Wed, 3 Jun 2020 11:34:35 +0200
> Andreas Steinel  wrote:
> 
>> 
>> If I remember correctly, the problem with the ZFS-over-iSCSI stuff is
>> that the backend provider in FreeNAS/FreeBSD has changed numerous
>> times and at least one version did not support online
>> reconfiguration. That may be solved right now, but I haven't looked at
>> the plugin for a long time. It just works for Debian as Mario
>> (@fireon) described in [2].
>> 
> 
> A few years ago the FreeNAS API was fluctuating with every new minor
> release (e.g., 9.3 -> 9.4) and was as such called 'not production ready'
> by iXsystems and therefore labelled beta, experimental, etc. (I know that
> no product developed by Alphabet (Google) has ever transitioned from
> beta to production state ;-) This means that maintaining such a plugin
> is very time consuming, and since this is done in people's free time,
> it is unlikely that the required resources will be found.

A lot of time has passed since version 9 of FreeBSD / FreeNAS. Six years to be 
precise. 

iSCSI on FreeBSD is used by very large corporations (Gandi, to name one: 
https://www.gandi.net/en is hosting all its VMs using FreeBSD, with Linux on 
the initiator side) with an excellent level of 
stability. 

Solutions derived from Illumos are either overpriced, closed source and 
scarcely used, or abandoned (if not all at the same time). 

I think that it might be time to reconsider FreeBSD as a potential stable 
solution to host iSCSI targets, considering the very important efforts FreeBSD 
has made to implement ZFS and stick to the OpenZFS standards. 

> On the
> other hand, those plugins already present in Proxmox use production-stamped
> solutions which rarely, if ever, make breaking changes, and so are more
> likely to be maintainable in people's free time. As a side note: the
> current iSCSI target daemon (ctld) in FreeBSD still has an upper limit of
> 1024 LUNs per target so therefore qualifies as a plugin alternative to
> Proxmox
> 
> #define MAX_LUNS 1024 [0]
> 
> [0] https://github.com/freebsd/freebsd/blob/master/usr.sbin/ctld/ctld.h

This is not true. 

The limit that you are mentioning here can be overridden by a tunable 
parameter. 
https://www.freebsd.org/cgi/man.cgi?query=ctl&sektion=4 


> TUNABLE VARIABLES
> 
>  The following variables are available as loader(8) tunables:
> 
>  kern.cam.ctl.max_luns
>    Specifies the maximum number of LUNs we support, must be a power
>    of 2.  The default value is 1024.
It turns out that this patch was developed by one of my fellow developers a 
couple of years ago to bypass the 1024-LUN limitation that you mentioned. 
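
For reference, such a loader(8) tunable is set in /boot/loader.conf on the
target host and takes effect after a reboot; the value below is purely
illustrative:

    kern.cam.ctl.max_luns="4096"   # must be a power of 2; the default is 1024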

So it should qualify to be selected as an enterprise-grade solution to host 
iSCSI targets. 

The fact that some developers are patching Proxmox code to allow *BSD / FreeNAS 
to perform is a sign that there might be a need for such a tool. 

So besides this max-LUNs problem, what else seems to be causing problems with 
the BSD target? 


Sincerely yours. 

> 
> -- 
> Hilsen/Regards
> Michael Rasmussen
> 
> Get my public GnuPG keys:
> michael  rasmussen  cc
> https://pgp.key-server.io/pks/lookup?search=0xD3C9A00E
> mir  datanom  net
> https://pgp.key-server.io/pks/lookup?search=0xE501F51C
> mir  miras  org
> https://pgp.key-server.io/pks/lookup?search=0xE3E80917
> --
> /usr/games/fortune -es says:
> Youth doesn't excuse everything.
>   -- Dr. Janice Lester (in Kirk's body), "Turnabout
> Intruder", stardate 5928.5.
> 
> 

---
ToDoo - osnet.eu - DynFi.com 
6 rue Montmartre - 75001 Paris
b...@todoo.biz
web: https://www.dynfi.com
PGP_ID: 0x1BA3C2FD




Re: [pve-devel] Integration of FreeNAS iSCSI target initiator in Proxmox Enterprise repo

2020-06-03 Thread Thomas Lamprecht
On 6/3/20 6:32 PM, b...@todoo.biz wrote:
> On 3 Jun 2020, at 18:25, Michael Rasmussen wrote:
>>
>> PGP signed part
>> On Wed, 3 Jun 2020 18:16:11 +0200
>> "b...@todoo.biz"  wrote:
>>
>>>
>>> What is your upper limit for this? 
>>>
>> I prefer a world without limitations ;-)
>> BTW. Some years ago I asked phk (Poul Henning Kamp) about this and
>> according to him there was no kernel reason behind the limitation and
>> to the best of his knowledge the number of LUNs, within reason, should
>> be nearly endless. Of course the number of CPU cores and the amount of
>> memory should be scaled accordingly.
>>
>>> I have a large project with Proxmox which might be popping up; if
>>> this is the case, I'll try to sponsor this ;-) 
>>
>> I am convinced that the Proxmox team will welcome such a plugin since
>> supporting more storage solutions can only benefit Proxmox as a product.
> 
> I didn't understand in the first place what the real limitation of the 
> FreeNAS plugin provided by TheGrandWazoo was (besides not being in line with 
> Proxmox coding practices). 
> 
> Maybe in this case the Proxmox core team could take a little of their 
> precious time and modify the submitted patches to "style" them accordingly. 

If it is already in decent shape and only a few nits have to be adapted,
sure. But if a mess comes along, even if its technical principles would be OK,
we treat it as such, as we then have to maintain it for sure, which comes with
a high cost. And not following a project's style is just a mess. I do not agree
with every project's style - not even always ours (*cough* indentation) - but I
still try to adhere to it for any project I'm contributing to, at least if I
would like them to take it in ;)

Also, the last FreeNAS patches were far from just having a few style issues,
IIRC Fabian's in-depth review back then...

https://pve.proxmox.com/wiki/Perl_Style_Guide
https://pve.proxmox.com/wiki/Developer_Documentation




Re: [pve-devel] [PATCH v2 manager 1/2] vzdump: make guest include logic testable

2020-06-03 Thread Thomas Lamprecht
On 5/4/20 4:08 PM, Aaron Lauterer wrote:
> As a first step to make the whole guest include logic more testable, the
> part from the API endpoint has been moved to its own method with as
> few changes as possible.
> 
> Everything concerning `all` and `exclude` logic is still in the
> PVE::VZDump->exec_backup() method.
> 
> Signed-off-by: Aaron Lauterer 
> ---
> 
> v1 -> v2:
> * fixed return value. Array refs inside an array lead to nested
>   arrays not working with `my ($foo, $bar) = method();`
> 
> 
> As discussed with Thomas on [0] and off list, this patch series is meant to
> have more confidence in the ongoing changes.
> 
> My other ongoing patch series [1] will move all the logic, even the
> part in the `exec_backup()` method, into one single method.
> 
> [0] https://pve.proxmox.com/pipermail/pve-devel/2020-April/042795.html
> [1] https://pve.proxmox.com/pipermail/pve-devel/2020-April/042753.html
> 
>  PVE/API2/VZDump.pm | 36 ++--
>  PVE/VZDump.pm  | 36 
>  2 files changed, 42 insertions(+), 30 deletions(-)
> 
> diff --git a/PVE/API2/VZDump.pm b/PVE/API2/VZDump.pm
> index f01e4de0..68a3de89 100644
> --- a/PVE/API2/VZDump.pm
> +++ b/PVE/API2/VZDump.pm
> @@ -69,39 +69,15 @@ __PACKAGE__->register_method ({
>   return 'OK' if $param->{node} && $param->{node} ne $nodename;
>  
>   my $cmdline = PVE::VZDump::Common::command_line($param);
> - my @vmids;
> - # convert string lists to arrays
> - if ($param->{pool}) {
> -	@vmids = @{PVE::API2Tools::get_resource_pool_guest_members($param->{pool})};
> - } else {
> - @vmids = PVE::Tools::split_list(extract_param($param, 'vmid'));
> - }
> + my ($vmids, $skiplist) = PVE::VZDump->get_included_guests($param);
>  
>   if($param->{stop}){
>   PVE::VZDump::stop_running_backups();
> - return 'OK' if !scalar(@vmids);
> + return 'OK' if !scalar(@{$vmids});
>   }
>  
> - my $skiplist = [];
> - if (!$param->{all}) {
> - if (!$param->{node} || $param->{node} eq $nodename) {
> - my $vmlist = PVE::Cluster::get_vmlist();
> - my @localvmids = ();
> - foreach my $vmid (@vmids) {
> - my $d = $vmlist->{ids}->{$vmid};
> - if ($d && ($d->{node} ne $nodename)) {
> - push @$skiplist, $vmid;
> - } else {
> - push @localvmids, $vmid;
> - }
> - }
> - @vmids = @localvmids;
> - # silent exit if specified VMs run on other nodes
> - return "OK" if !scalar(@vmids);
> - }
> -
> - $param->{vmids} = PVE::VZDump::check_vmids(@vmids)
> - }
> + # silent exit if specified VMs run on other nodes
> + return "OK" if !scalar(@{$vmids});
>  
>   my @exclude = PVE::Tools::split_list(extract_param($param, 'exclude'));
>   $param->{exclude} = PVE::VZDump::check_vmids(@exclude);
> @@ -118,7 +94,7 @@ __PACKAGE__->register_method ({
>   }
>  
>   die "you can only backup a single VM with option --stdout\n"
> - if $param->{stdout} && scalar(@vmids) != 1;
> + if $param->{stdout} && scalar(@{$vmids}) != 1;
>  
> 	$rpcenv->check($user, "/storage/$param->{storage}", [ 'Datastore.AllocateSpace' ])
>   if $param->{storage};
> @@ -167,7 +143,7 @@ __PACKAGE__->register_method ({
>   }
>  
>   my $taskid;
> - $taskid = $vmids[0] if scalar(@vmids) == 1;
> + $taskid = ${$vmids}[0] if scalar(@{$vmids}) == 1;
>  
>   return $rpcenv->fork_worker('vzdump', $taskid, $user, $worker);
> }});
> diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
> index f3274196..73ad9088 100644
> --- a/PVE/VZDump.pm
> +++ b/PVE/VZDump.pm
> @@ -21,6 +21,7 @@ use PVE::RPCEnvironment;
>  use PVE::Storage;
>  use PVE::VZDump::Common;
>  use PVE::VZDump::Plugin;
> +use PVE::Tools qw(extract_param);
>  
>  my @posix_filesystems = qw(ext3 ext4 nfs nfs4 reiserfs xfs);
>  
> @@ -1156,4 +1157,39 @@ sub stop_running_backups {
>  }
>  }
>  
> +sub get_included_guests {
> +my ($self, $job) = @_;

do we need $self here? Why not call it like:
PVE::VZDump::get_included_guests($params)?
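
(Plain-Perl illustration of the difference, nothing Proxmox-specific assumed:

    PVE::VZDump->get_included_guests($job);   # method call: 'PVE::VZDump' arrives as $self, $job second
    PVE::VZDump::get_included_guests($job);   # plain function call: $job is the first argument

)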

> +
> +my $nodename = PVE::INotify::nodename();
> +my $vmids = [];
> +
> +# convert string lists to arrays
> +if ($job->{pool}) {
> + $vmids = PVE::API2Tools::get_resource_pool_guest_members($job->{pool});

You use API2Tools here but do not add it as a module use statement.


> +} else {
> + $vmids = [ PVE::Tools::split_list(extract_param($job, 'vmid')) ];
> +}
> +
> +my $skiplist = [];
> +if (!$job->{all}) {
> + if (!$job->{node} || $job->{node} eq $nodename) {
> + my $vmlist = PVE::Cluster::get_vmlist();
> + my $localvmids = [];
> + foreach my $vmid (@{$vmids}) {
> + my $d = $vmlist->{ids}->{$vmid};
> + if ($d && ($d->{node} ne 

[pve-devel] applied: Re: [PATCH manager] ui: migration: add maxHeight to migration window

2020-06-03 Thread Thomas Lamprecht
On 5/19/20 12:54 PM, Tim Marx wrote:
> to prevent indefinite growth in case of e.g. many local disks
> 
> Signed-off-by: Tim Marx 
> ---
>  www/manager6/window/Migrate.js | 1 +
>  1 file changed, 1 insertion(+)
> 
>

applied, thanks!



[pve-devel] applied: Re: [PATCH manager] ui: add checkbox for vmid filter for backupview

2020-06-03 Thread Thomas Lamprecht
On 5/29/20 2:28 PM, Dominik Csapak wrote:
> instead of hardcoding the text 'type-id-' into the searchbar.
> To accommodate the additional size, add an overflowHandler
> to the toolbar (for very small display sizes)
> 
> Signed-off-by: Dominik Csapak 
> ---
>  www/manager6/grid/BackupView.js | 49 ++---
>  1 file changed, 39 insertions(+), 10 deletions(-)
> 
>

applied, thanks! Using now a boxLabel though.



[pve-devel] applied: Re: [PATCH manager v2] Make PVE6 compatible with supported ceph versions

2020-06-03 Thread Thomas Lamprecht
On 6/3/20 1:39 PM, Alwin Antreich wrote:
> Luminous, Nautilus and Octopus. In Octopus the mon_status was dropped.
> Also the ceph status was cleaned up and doesn't provide the mgrmap and
> monmap.
> 
> The rados queries used in the ceph status API endpoints (cluster / node)
> were factored out and merged to one place.
> 
> Signed-off-by: Alwin Antreich 
> ---
> v1 -> v2: make mon/mgr dump optional for Ceph versions prior to Octopus
> 
>  PVE/API2/Ceph.pm  |  5 +
>  PVE/API2/Ceph/MON.pm  |  6 +++---
>  PVE/API2/Ceph/OSD.pm  |  2 +-
>  PVE/API2/Cluster/Ceph.pm  |  5 +
>  PVE/Ceph/Tools.pm | 17 +
>  www/manager6/ceph/StatusDetail.js | 12 
>  6 files changed, 31 insertions(+), 16 deletions(-)
> 
>

applied, thanks!



[pve-devel] applied-series: Re: [PATCH manager v2 1/2] ceph: extend the pool view

2020-06-03 Thread Thomas Lamprecht
On 6/3/20 3:28 PM, Alwin Antreich wrote:
> to add the pg_autoscale_mode, since it is activated in Ceph Octopus by
> default and emits a warning (in ceph status) if a pool has too many PGs.
> 
> Signed-off-by: Alwin Antreich 
> ---
> v1 -> v2: split addition of pg_autoscale_mode and pveceph pool
>   output format
> 
>  PVE/API2/Ceph.pm  | 13 -
>  www/manager6/ceph/Pool.js | 19 +++
>  2 files changed, 27 insertions(+), 5 deletions(-)
> 
>

applied series, thanks!



Re: [pve-devel] Integration of FreeNAS iSCSI target initiator in Proxmox Enterprise repo

2020-06-03 Thread Michael Rasmussen
On Wed, 3 Jun 2020 12:03:05 +0200
Michael Rasmussen  wrote:

> so therefore qualifies
Should have read 'so therefore not qualifies' ;-)

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
https://pgp.key-server.io/pks/lookup?search=0xD3C9A00E
mir  datanom  net
https://pgp.key-server.io/pks/lookup?search=0xE501F51C
mir  miras  org
https://pgp.key-server.io/pks/lookup?search=0xE3E80917
--
/usr/games/fortune -es says:
Toto, I don't think we're in Kansas anymore.
-- Judy Garland, "Wizard of Oz"




Re: [pve-devel] Integration of FreeNAS iSCSI target initiator in Proxmox Enterprise repo

2020-06-03 Thread Michael Rasmussen
On Wed, 3 Jun 2020 11:34:35 +0200
Andreas Steinel  wrote:

> 
> If I remember correctly, the problem with the ZFS-over-iSCSI stuff is
> that the backend provider in FreeNAS/FreeBSD has changed numerous
> times and at least one version did not support online
> reconfiguration. That may be solved right now, but I haven't looked at
> the plugin for a long time. It just works for Debian as Mario
> (@fireon) described in [2].
> 

A few years ago the FreeNAS API was fluctuating with every new minor
release (e.g., 9.3 -> 9.4) and was as such called 'not production ready'
by iXsystems and therefore labelled beta, experimental, etc. (I know that
no product developed by Alphabet (Google) has ever transitioned from
beta to production state ;-) This means that maintaining such a plugin
is very time consuming, and since this is done in people's free time,
it is unlikely that the required resources will be found. On the
other hand, those plugins already present in Proxmox use production-stamped
solutions which rarely, if ever, make breaking changes, and so are more
likely to be maintainable in people's free time. As a side note: the
current iSCSI target daemon (ctld) in FreeBSD still has an upper limit of
1024 LUNs per target so therefore qualifies as a plugin alternative to
Proxmox

#define MAX_LUNS 1024 [0]

[0] https://github.com/freebsd/freebsd/blob/master/usr.sbin/ctld/ctld.h
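
For context, ctld is configured through /etc/ctl.conf on the FreeBSD host, so
a storage plugin would essentially rewrite blocks like the following (target
name and zvol path purely illustrative) and reload the daemon:

    target iqn.2020-06.org.example:vm-disks {
        auth-group no-authentication
        portal-group default
        lun 0 {
            path /dev/zvol/tank/vm-100-disk-0
        }
    }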

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
https://pgp.key-server.io/pks/lookup?search=0xD3C9A00E
mir  datanom  net
https://pgp.key-server.io/pks/lookup?search=0xE501F51C
mir  miras  org
https://pgp.key-server.io/pks/lookup?search=0xE3E80917
--
/usr/games/fortune -es says:
Youth doesn't excuse everything.
-- Dr. Janice Lester (in Kirk's body), "Turnabout
Intruder", stardate 5928.5.




[pve-devel] [RFC qemu-server] close #2741: check for VM.Config.Cloudinit permission

2020-06-03 Thread Mira Limbeck
This allows setting ciuser, cipassword and all other cloudinit settings that
are not part of the network without VM.Config.Network permissions.

Signed-off-by: Mira Limbeck 
---
 PVE/API2/Qemu.pm | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 974ee3b..23a569e 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -357,8 +357,11 @@ my $check_vm_modify_config_perm = sub {
$rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.PowerMgmt']);
} elsif ($diskoptions->{$opt}) {
$rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']);
-	} elsif ($cloudinitoptions->{$opt} || ($opt =~ m/^(?:net|ipconfig)\d+$/)) {
+	} elsif ($opt =~ m/^(?:net|ipconfig)\d+$/) {
	    $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Network']);
+	} elsif ($cloudinitoptions->{$opt}) {
+	    print "checking VM.Config.Cloudinit\n";
+	    $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Cloudinit']);
} elsif ($opt eq 'vmstate') {
# the user needs Disk and PowerMgmt privileges to change the vmstate
# also needs privileges on the storage, that will be checked later
-- 
2.20.1
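
If such a privilege is introduced, it would be granted through the usual ACL
tooling; for example (path, user and role purely illustrative), a role
containing VM.Config.Cloudinit could be assigned like:

    pveum aclmod /vms/100 -user alice@pve -role PVEVMUser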




[pve-devel] [RFC access-control] close #2741: introduce VM.Config.Cloudinit permission

2020-06-03 Thread Mira Limbeck
It is added to PVEVMUser by default.

Signed-off-by: Mira Limbeck 
---
 PVE/AccessControl.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/PVE/AccessControl.pm b/PVE/AccessControl.pm
index f50a510..ae8eaae 100644
--- a/PVE/AccessControl.pm
+++ b/PVE/AccessControl.pm
@@ -741,6 +741,7 @@ my $privgroups = {
],
user => [
'VM.Config.CDROM', # change CDROM media
+   'VM.Config.Cloudinit',
'VM.Console',
'VM.Backup',
'VM.PowerMgmt',
-- 
2.20.1




[pve-devel] [RFC manager] change permissions for non-network cloudinit settings

2020-06-03 Thread Mira Limbeck
With the introduction of VM.Config.Cloudinit we can set the user,
password and an SSH key without VM.Config.Network permission and instead
use VM.Config.Cloudinit.

Signed-off-by: Mira Limbeck 
---
 www/manager6/qemu/CloudInit.js | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/www/manager6/qemu/CloudInit.js b/www/manager6/qemu/CloudInit.js
index cbb4af9d..efcbb668 100644
--- a/www/manager6/qemu/CloudInit.js
+++ b/www/manager6/qemu/CloudInit.js
@@ -200,7 +200,7 @@ Ext.define('PVE.qemu.CloudInit', {
iconCls: 'fa fa-user',
never_delete: true,
defaultValue: '',
-   editor: caps.vms['VM.Config.Options'] ? {
+   editor: caps.vms['VM.Config.Cloudinit'] ? {
xtype: 'proxmoxWindowEdit',
subject: gettext('User'),
items: [
@@ -221,7 +221,7 @@ Ext.define('PVE.qemu.CloudInit', {
header: gettext('Password'),
iconCls: 'fa fa-unlock',
defaultValue: '',
-   editor: caps.vms['VM.Config.Options'] ? {
+   editor: caps.vms['VM.Config.Cloudinit'] ? {
xtype: 'proxmoxWindowEdit',
subject: gettext('Password'),
items: [
@@ -256,7 +256,7 @@ Ext.define('PVE.qemu.CloudInit', {
sshkeys: {
header: gettext('SSH public key'),
iconCls: 'fa fa-key',
-	editor: caps.vms['VM.Config.Network'] ? 'PVE.qemu.SSHKeyEdit' : undefined,
+	editor: caps.vms['VM.Config.Cloudinit'] ? 'PVE.qemu.SSHKeyEdit' : undefined,
never_delete: true,
renderer: function(value) {
value = decodeURIComponent(value);
-- 
2.20.1




Re: [pve-devel] [PATCH qemu-server] add virtio host_mtu feature.

2020-06-03 Thread Alexandre DERUMIER
Hi,

any comments about this patch?

forum users still need it:

https://forum.proxmox.com/threads/set-mtu-on-guest.45078/page-2

(and it could also help with vxlan and other tunneling, where the mtu needs to
be reduced in the guest)
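
For illustration, with the patch below applied, a NIC MTU would be set like
this (VM ID and values are hypothetical):

    qm set 100 -net0 virtio,bridge=vmbr1,mtu=1     # inherit the bridge MTU
    qm set 100 -net0 virtio,bridge=vmbr1,mtu=1450  # explicit value, must not exceed the bridge MTU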

- Original Message -
From: "aderumier" 
To: "pve-devel" 
Cc: "aderumier" 
Sent: Friday, 17 April 2020 07:47:20
Subject: [PATCH qemu-server] add virtio host_mtu feature.

This adds a new "mtu" param to the VM NIC, 
and forces the mtu in the guest for virtio NICs only. 

Special value: 1 = set the same value as the bridge 
--- 
PVE/QemuServer.pm | 19 +++ 
1 file changed, 19 insertions(+) 

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm 
index 6445508..9baa6ff 100644 
--- a/PVE/QemuServer.pm 
+++ b/PVE/QemuServer.pm 
@@ -884,6 +884,12 @@ my $net_fmt = { 
description => 'Whether this interface should be disconnected (like pulling the 
plug).', 
optional => 1, 
}, 
+ mtu => { 
+ type => 'integer', 
+ minimum => 1, maximum => 65520, 
+ description => 'Force mtu (virtio only). 1 = bridge mtu value', 
+ optional => 1, 
+ }, 
}; 

my $netdesc = { 
@@ -1593,6 +1599,19 @@ sub print_netdevice_full { 
} 
$tmpstr .= ",bootindex=$net->{bootindex}" if $net->{bootindex} ; 

+ if($net->{model} eq 'virtio' && $net->{mtu} && $net->{bridge}) { 
+ 
+ my $mtu = $net->{mtu}; 
+ my $bridge_mtu = PVE::Network::read_bridge_mtu($net->{bridge}); 
+ 
+ if($mtu == 1) { 
+ $mtu = $bridge_mtu; 
+ } else { 
+ die "mtu $mtu is bigger than bridge mtu $bridge_mtu" if $mtu > $bridge_mtu; 
+ } 
+ $tmpstr .= ",host_mtu=$mtu"; 
+ } 
+ 
if ($use_old_bios_files) { 
my $romfile; 
if ($device eq 'virtio-net-pci') { 
-- 
2.20.1 



[pve-devel] [PATCH manager v2 1/2] ceph: extend the pool view

2020-06-03 Thread Alwin Antreich
to add the pg_autoscale_mode, since it is activated in Ceph Octopus by
default and emits a warning (in ceph status) if a pool has too many PGs.

Signed-off-by: Alwin Antreich 
---
v1 -> v2: split addition of pg_autoscale_mode and pveceph pool
  output format

 PVE/API2/Ceph.pm  | 13 -
 www/manager6/ceph/Pool.js | 19 +++
 2 files changed, 27 insertions(+), 5 deletions(-)

diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index afc1bdbd..d872c7c0 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -607,6 +607,7 @@ __PACKAGE__->register_method ({
pool => { type => 'integer' },
pool_name => { type => 'string' },
size => { type => 'integer' },
+   pg_autoscale_mode => { type => 'string', optional => 1 },
},
},
links => [ { rel => 'child', href => "{pool_name}" } ],
@@ -636,9 +637,19 @@ __PACKAGE__->register_method ({
}
 
my $data = [];
+   my $attr_list = [
+   'pool',
+   'pool_name',
+   'size',
+   'min_size',
+   'pg_num',
+   'crush_rule',
+   'pg_autoscale_mode',
+   ];
+
foreach my $e (@{$res->{pools}}) {
my $d = {};
-	foreach my $attr (qw(pool pool_name size min_size pg_num crush_rule)) {
+   foreach my $attr (@$attr_list) {
$d->{$attr} = $e->{$attr} if defined($e->{$attr});
}
 
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index e81b5974..db1828a6 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -107,10 +107,21 @@ Ext.define('PVE.node.CephPoolList', {
dataIndex: 'size'
},
{
-   text: '# Placement Groups', // pg_num',
-   width: 180,
-   align: 'right',
-   dataIndex: 'pg_num'
+   text: 'Placement Groups',
+   columns: [
+   {
+   text: '# of PGs', // pg_num',
+   width: 100,
+   align: 'right',
+   dataIndex: 'pg_num'
+   },
+   {
+   text: 'Autoscale Mode',
+   width: 140,
+   align: 'right',
+   dataIndex: 'pg_autoscale_mode'
+   },
+   ]
},
{
text: 'CRUSH Rule',
-- 
2.26.2
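
Side note: the mode shown in the new column can be switched per pool with
standard Ceph tooling, e.g.:

    ceph osd pool set <pool> pg_autoscale_mode on    # or 'warn' / 'off'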




[pve-devel] [PATCH manager v2 2/2] ceph: extend pveceph pool ls

2020-06-03 Thread Alwin Antreich
to present more data on pools and nicer formatted output on the command
line.

Signed-off-by: Alwin Antreich 
---
 PVE/API2/Ceph.pm   | 14 ++
 PVE/CLI/pveceph.pm | 24 ++--
 2 files changed, 24 insertions(+), 14 deletions(-)

diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index d872c7c0..d7e5892c 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -604,10 +604,16 @@ __PACKAGE__->register_method ({
items => {
type => "object",
properties => {
-   pool => { type => 'integer' },
-   pool_name => { type => 'string' },
-   size => { type => 'integer' },
-   pg_autoscale_mode => { type => 'string', optional => 1 },
+   pool => { type => 'integer', title => 'ID' },
+   pool_name => { type => 'string', title => 'Name' },
+   size => { type => 'integer', title => 'Size' },
+   min_size => { type => 'integer', title => 'Min Size' },
+   pg_num => { type => 'integer', title => 'PG Num' },
+	pg_autoscale_mode => { type => 'string', optional => 1, title => 'PG Autoscale Mode' },
+	crush_rule => { type => 'integer', title => 'Crush Rule' },
+	crush_rule_name => { type => 'string', title => 'Crush Rule Name' },
+   percent_used => { type => 'number', title => '%-Used' },
+   bytes_used => { type => 'integer', title => 'Used' },
},
},
links => [ { rel => 'child', href => "{pool_name}" } ],
diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
index 92500253..b4c8b79c 100755
--- a/PVE/CLI/pveceph.pm
+++ b/PVE/CLI/pveceph.pm
@@ -182,16 +182,20 @@ our $cmddef = {
 init => [ 'PVE::API2::Ceph', 'init', [], { node => $nodename } ],
 pool => {
ls => [ 'PVE::API2::Ceph', 'lspools', [], { node => $nodename }, sub {
-   my $res = shift;
-
-	printf("%-20s %10s %10s %10s %10s %20s\n", "Name", "size", "min_size",
-	    "pg_num", "%-used", "used");
-   foreach my $p (sort {$a->{pool_name} cmp $b->{pool_name}} @$res) {
-   printf("%-20s %10d %10d %10d %10.2f %20d\n", $p->{pool_name},
-   $p->{size}, $p->{min_size}, $p->{pg_num},
-   $p->{percent_used}, $p->{bytes_used});
-   }
-   }],
+   my ($data, $schema, $options) = @_;
+   PVE::CLIFormatter::print_api_result($data, $schema,
+   [
+   'pool_name',
+   'size',
+   'min_size',
+   'pg_num',
+   'pg_autoscale_mode',
+   'crush_rule_name',
+   'percent_used',
+   'bytes_used',
+   ],
+   $options);
+   }, $PVE::RESTHandler::standard_output_options],
	create => [ 'PVE::API2::Ceph', 'createpool', ['name'], { node => $nodename }],
	destroy => [ 'PVE::API2::Ceph', 'destroypool', ['name'], { node => $nodename } ],
 },
-- 
2.26.2
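
Since the command is now wired up with $PVE::RESTHandler::standard_output_options,
the usual output switches become available as well, e.g.:

    pveceph pool ls --output-format json-pretty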




[pve-devel] Integration of FreeNAS iSCSI target initiator in Proxmox Enterprise repo

2020-06-03 Thread b...@todoo.biz
Hello, 

I was wondering if there are any plans to integrate the FreeNAS iSCSI 
initiator patches for Proxmox developed by TheGrandWazoo, available in the 
GitHub repository here: https://github.com/TheGrandWazoo/freenas-proxmox, into 
the main Proxmox Enterprise repo. 

I am not a huge fan of "custom patches" and will clearly avoid touching source 
scripts from Proxmox… That being said, the patches are looking nice and I am not 
sure what the reason would be not to upstream this into the Proxmox business repo. 


Can the Proxmox team discuss this a bit, since FreeNAS is widely used, stable 
and a very nice solution to implement as an iSCSI target node? 

Benefiting from the Proxmox team's integration efforts would mean that 
FreeNAS / TrueNAS could be used as a target solution with long-term support. 

Relates to this thread: 
https://forum.proxmox.com/threads/guide-setup-zfs-over-iscsi-with-pve-5x-and-freenas-11.54611/


Thanks for your answer.

Sincerely yours. 


---
ToDoo - osnet.eu - DynFi.com 
6 rue Montmartre - 75001 Paris
b...@todoo.biz
web: https://www.dynfi.com
PGP_ID: 0x1BA3C2FD


[pve-devel] applied-series: Re: [PATCH pve-docs 0/3] sdn: improvement

2020-06-03 Thread Thomas Lamprecht
On 5/26/20 5:46 PM, Alexandre Derumier wrote:
> Some fixes, and add a description for the new vnet vlan-aware option
> 
> Alexandre Derumier (3):
>   sdn: add a note to add "source /etc/network/interfaces.d/*"
>   sdn: add vnet vlan-aware option
>   sdn: fix qinq zone2 example
> 
>  pvesdn.adoc | 11 ++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 


applied series, thanks!



[pve-devel] applied: Re: [PATCH pve-common] network: vlan-aware bridge: fix pvid when trunks is defined

2020-06-03 Thread Thomas Lamprecht
On 5/25/20 1:05 PM, Alexandre Derumier wrote:
> Currently, when trunks are defined, the vlan tag is not used
> as pvid with a vlan-aware bridge. (It's ok with an ovs switch)
> 
> example:
> 
> net0: e1000=BA:90:68:B8:CF:F5,bridge=vmbr1,tag=2,trunks=2-11
> 
> before
> --
> tap100i0   2-11
> 
> after
> -
> tap100i0   2 PVID Egress Untagged
>3-11
> 
> No regression for other configurations:
> 
> net0: e1000=BA:90:68:B8:CF:F5,bridge=vmbr1
> 
> before
> --
> tap100i0   1 PVID Egress Untagged
>2-4094
> 
> after
> -
> tap100i0   1 PVID Egress Untagged
>2-4094
> 
> net0: e1000=BA:90:68:B8:CF:F5,bridge=vmbr1,tag=2
> 
> before
> --
> tap100i0   2 PVID Egress Untagged
> 
> after
> -
> tap100i0   2 PVID Egress Untagged
> 
> net0: e1000=BA:90:68:B8:CF:F5,bridge=vmbr1,trunks=2-11
> 
> before
> --
> tap100i0   1 PVID Egress Untagged
>2-11
> 
> after
> -
> tap100i0   1 PVID Egress Untagged
>2-11
> 
> Signed-off-by: Alexandre Derumier 
> ---
>  src/PVE/Network.pm | 36 +---
>  1 file changed, 17 insertions(+), 19 deletions(-)
> 
>

applied, thanks!



[pve-devel] applied-series: Re: [PATCH pve-manager 0/2] sdn: vlanaware + vlan mtu

2020-06-03 Thread Thomas Lamprecht
On 6/2/20 11:48 AM, Alexandre Derumier wrote:
> Patch 1 is a resend with a fix
> 
> Patch 2 adds the missing mtu option to the vlan plugin
> 
> Alexandre Derumier (2):
>   sdn: add vlan aware option to vnet
>   sdn: vlan : add mtu field
> 
>  www/manager6/sdn/VnetEdit.js   |  7 +++
>  www/manager6/sdn/VnetView.js   |  5 +
>  www/manager6/sdn/zones/VlanEdit.js | 10 ++
>  3 files changed, 22 insertions(+)
> 



applied series, thanks!



[pve-devel] [PATCH manager v2] Make PVE6 compatible with supported ceph versions

2020-06-03 Thread Alwin Antreich
Luminous, Nautilus and Octopus. In Octopus the mon_status command was dropped.
Also the ceph status output was cleaned up and no longer provides the mgrmap
and monmap.

The rados queries used in the ceph status API endpoints (cluster / node)
were factored out and merged to one place.

Signed-off-by: Alwin Antreich 
---
v1 -> v2: make mon/mgr dump optional for Ceph versions prior to Octopus

 PVE/API2/Ceph.pm  |  5 +
 PVE/API2/Ceph/MON.pm  |  6 +++---
 PVE/API2/Ceph/OSD.pm  |  2 +-
 PVE/API2/Cluster/Ceph.pm  |  5 +
 PVE/Ceph/Tools.pm | 17 +
 www/manager6/ceph/StatusDetail.js | 12 
 6 files changed, 31 insertions(+), 16 deletions(-)

diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 85a04101..afc1bdbd 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -580,10 +580,7 @@ __PACKAGE__->register_method ({
 
PVE::Ceph::Tools::check_ceph_inited();
 
-   my $rados = PVE::RADOS->new();
-   my $status = $rados->mon_command({ prefix => 'status' });
-   $status->{health} = $rados->mon_command({ prefix => 'health', detail => 'detail' });
-   return $status;
+   return PVE::Ceph::Tools::ceph_cluster_status();
 }});
 
 __PACKAGE__->register_method ({
diff --git a/PVE/API2/Ceph/MON.pm b/PVE/API2/Ceph/MON.pm
index 3baeac52..b33b8700 100644
--- a/PVE/API2/Ceph/MON.pm
+++ b/PVE/API2/Ceph/MON.pm
@@ -130,7 +130,7 @@ __PACKAGE__->register_method ({
	my $monhash = PVE::Ceph::Services::get_services_info("mon", $cfg, $rados);
 
if ($rados) {
-   my $monstat = $rados->mon_command({ prefix => 'mon_status' });
+   my $monstat = $rados->mon_command({ prefix => 'quorum_status' });
 
my $mons = $monstat->{monmap}->{mons};
foreach my $d (@$mons) {
@@ -338,7 +338,7 @@ __PACKAGE__->register_method ({
my $monsection = "mon.$monid";
 
my $rados = PVE::RADOS->new();
-   my $monstat = $rados->mon_command({ prefix => 'mon_status' });
+   my $monstat = $rados->mon_command({ prefix => 'quorum_status' });
my $monlist = $monstat->{monmap}->{mons};
	my $monhash = PVE::Ceph::Services::get_services_info('mon', $cfg, $rados);
 
@@ -356,7 +356,7 @@ __PACKAGE__->register_method ({
# reopen with longer timeout
	$rados = PVE::RADOS->new(timeout => PVE::Ceph::Tools::get_config('long_rados_timeout'));
	$monhash = PVE::Ceph::Services::get_services_info('mon', $cfg, $rados);
-   $monstat = $rados->mon_command({ prefix => 'mon_status' });
+   $monstat = $rados->mon_command({ prefix => 'quorum_status' });
$monlist = $monstat->{monmap}->{mons};
 
my $addr;
diff --git a/PVE/API2/Ceph/OSD.pm b/PVE/API2/Ceph/OSD.pm
index a514c502..ceaed129 100644
--- a/PVE/API2/Ceph/OSD.pm
+++ b/PVE/API2/Ceph/OSD.pm
@@ -344,7 +344,7 @@ __PACKAGE__->register_method ({
 
# get necessary ceph infos
my $rados = PVE::RADOS->new();
-   my $monstat = $rados->mon_command({ prefix => 'mon_status' });
+   my $monstat = $rados->mon_command({ prefix => 'quorum_status' });
 
die "unable to get fsid\n" if !$monstat->{monmap} || 
!$monstat->{monmap}->{fsid};
my $fsid = $monstat->{monmap}->{fsid};
diff --git a/PVE/API2/Cluster/Ceph.pm b/PVE/API2/Cluster/Ceph.pm
index e18d421e..c0277221 100644
--- a/PVE/API2/Cluster/Ceph.pm
+++ b/PVE/API2/Cluster/Ceph.pm
@@ -142,10 +142,7 @@ __PACKAGE__->register_method ({
 
PVE::Ceph::Tools::check_ceph_inited();
 
-   my $rados = PVE::RADOS->new();
-   my $status = $rados->mon_command({ prefix => 'status' });
-   $status->{health} = $rados->mon_command({ prefix => 'health', detail => 'detail' });
-   return $status;
+   return PVE::Ceph::Tools::ceph_cluster_status();
 }
 });
 
diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index 3273c7d1..a73b791b 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -468,4 +468,21 @@ sub get_real_flag_name {
 return $flagmap->{$flag} // $flag;
 }
 
+sub ceph_cluster_status {
+my ($rados) = @_;
+$rados = PVE::RADOS->new() if !$rados;
+
+my $ceph_version = get_local_version(1);
+my $status = $rados->mon_command({ prefix => 'status' });
+
+    $status->{health} = $rados->mon_command({ prefix => 'health', detail => 'detail' });
+
+    if ($ceph_version >= 15) {
+   $status->{monmap} = $rados->mon_command({ prefix => 'mon dump' });
+   $status->{mgrmap} = $rados->mon_command({ prefix => 'mgr dump' });
+}
+
+return $status;
+}
+
 1;
diff --git a/www/manager6/ceph/StatusDetail.js b/www/manager6/ceph/StatusDetail.js
index 8185e3bb..211b0d6f 100644
--- a/www/manager6/ceph/StatusDetail.js
+++ b/www/manager6/ceph/StatusDetail.js
@@ -214,8 +214,11 @@ Ext.define('PVE.ceph.StatusDetail', {
 
var pgmap = status.pgmap || {};
var health = status.health || {};
-   var osdmap = status.osdmap || { 

[pve-devel] applied-series: Re: [PATCH V2 pve-network 0/7] vlanaware vnets

2020-06-03 Thread Thomas Lamprecht
On 6/2/20 11:20 AM, Alexandre Derumier wrote:
> This adds support for vlan-aware vnets.
> Patches 1 && 2 were already submitted on the mailing list.
> 
> Patch 3 is a small fix to avoid packet loss on reload
> with ovs + qinq|vlan plugins
> 
> changelog v2:
> add more fixes for ovs
> 
> Alexandre Derumier (7):
>   add vnet vlan-aware option
>   vlan: ovs: use dot1q-tunnel when vlanaware is enabled
>   qinq|vlan: ovs: add ovsint interfaces to ovs-ports list
>   catch errors on sdn config generation
>   vlan|qinq: add mtu to ovsint link port
>   vlan: ovs: remove twice defined ovsbridge ports
>   vlan: ovs : vlanaware: use 802.1q for tunnel
> 
>  PVE/Network/SDN/VnetPlugin.pm|  5 +
>  PVE/Network/SDN/Zones.pm | 22 +++-
>  PVE/Network/SDN/Zones/EvpnPlugin.pm  |  1 +
>  PVE/Network/SDN/Zones/Plugin.pm  | 31 +---
>  PVE/Network/SDN/Zones/QinQPlugin.pm  | 10 +
>  PVE/Network/SDN/Zones/VlanPlugin.pm  | 17 ---
>  PVE/Network/SDN/Zones/VxlanPlugin.pm |  4 
>  7 files changed, 47 insertions(+), 43 deletions(-)
> 



applied series, thanks!



Re: [pve-devel] Integration of FreeNAS iSCSI target initiator in Proxmox Enterprise repo

2020-06-03 Thread Andreas Steinel
Hello,

On Wed, Jun 3, 2020 at 11:23 AM b...@todoo.biz  wrote:

> I was wondering if there are any plans to integrate the Proxmox FreeNAS
> iSCSI initiator patches developped by the TheGrandWazoo and available in
> github repository here : https://github.com/TheGrandWazoo/freenas-proxmox
> into the main FreeNAS Enterprise repo.
>

The general workflow is that people present patches in the way the Proxmox
staff want it, as outlined in the development guidelines in [1].
You cannot legally just pull in stuff from other people; therefore the
development guidelines exist and you have to follow their protocol.

If I remember correctly, the problem with the ZFS-over-iSCSI stuff is that
the backend provider in FreeNAS/FreeBSD has changed numerous times and at
least one version did not support online reconfiguration. That may be
solved right now, but I haven't looked at the plugin for a long time. It just
works for Debian as Mario (@fireon) described in [2].

Hopefully, we'll see the storage plugin framework happening in the future,
so that only additive changes are required in order to support new plugins.
This would hugely improve third-party support and solve the problem of
integrating yet another Storage-Plugin.

[1] https://pve.proxmox.com/wiki/Developer_Documentation
[2]
https://deepdoc.at/dokuwiki/doku.php?id=virtualisierung:proxmox_kvm_und_lxc:proxmox_debian_als_zfs-over-iscsi_server_verwenden

-- 
With kind regards / Mit freundlichen Grüßen

Andreas Steinel
M.Sc. Visual Computing
M.Sc. Informatik


Re: [pve-devel] applied-series: Re: [PATCH V2 ifupdown2 00/10] 3.0.0-1 version

2020-06-03 Thread Alexandre DERUMIER
applied series, thanks! 
>>pushed out the 3.0.0-1 tag but then decided to update
>>to current master as it allows to drop all extra patches and master had
>>just one extra commit besides that

Ok, no problem. Thanks!

- Mail original -
De: "Thomas Lamprecht" 
À: "pve-devel" , "aderumier" 
Envoyé: Mercredi 3 Juin 2020 09:47:35
Objet: applied-series: Re: [pve-devel] [PATCH V2 ifupdown2 00/10] 3.0.0-1 
version

On 6/2/20 10:31 AM, Alexandre Derumier wrote: 
> Hi, 
> 
> This patch series update ifupdown2 to 3.0.0-1. 
> 
> Please bump the proxmox git mirror to 3.0.0-1 tag. 
> 

applied series, thanks! pushed out the 3.0.0-1 tag but then decided to update 
to current master as it allows to drop all extra patches and master had 
just one extra commit besides that. 



[pve-devel] applied-series: Re: [PATCH V2 ifupdown2 00/10] 3.0.0-1 version

2020-06-03 Thread Thomas Lamprecht


On 6/2/20 10:31 AM, Alexandre Derumier wrote:
> Hi,
> 
> This patch series update ifupdown2 to 3.0.0-1.
> 
> Please bump the proxmox git mirror to 3.0.0-1 tag.
> 

applied series, thanks! pushed out the 3.0.0-1 tag but then decided to update
to current master as it allows to drop all extra patches and master had
just one extra commit besides that.
