Re: [pve-devel] API-Failures (PHP-Script) IMPORTANT

2016-07-14 Thread Daniel Hunsaker
This client isn't maintained by the Proxmox team - it's a third-party
GitHub project. Please file this report on
https://github.com/CpuID/pve2-api-php-client/issues instead.

Note, I do have push access to the repo in question, so feel free to submit
changes directly via pull request and I'll get them merged in.
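
For anyone hitting the same SSL failure described in the report below: the two
curl options take different kinds of values, which is why both lines matter. A
minimal, stand-alone sketch (the hostname and the $verify_ssl flag are
placeholders, not part of the client; older libcurl versions reject a value of
1 for CURLOPT_SSL_VERIFYHOST, so 0 or 2 are the portable choices):

    <?php
    // CURLOPT_SSL_VERIFYPEER takes a boolean; CURLOPT_SSL_VERIFYHOST expects
    // 0 (skip the host-name check) or 2 (check that the certificate's name
    // matches the host we connect to). The URL only illustrates the TLS setup.
    $verify_ssl = false;  // e.g. a self-signed certificate on the PVE host
    $prox_ch = curl_init('https://pve.example.com:8006/api2/json/');
    curl_setopt($prox_ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($prox_ch, CURLOPT_SSL_VERIFYPEER, $verify_ssl);
    curl_setopt($prox_ch, CURLOPT_SSL_VERIFYHOST, $verify_ssl ? 2 : 0);
    $response = curl_exec($prox_ch);
    curl_close($prox_ch);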

On Thu, Jul 14, 2016, 12:59 Detlef Bracker  wrote:

> Dear,
>
> This important information may save others a lot of hours:
>
> A) The following lines must be added to the PHP script pve2_api.class.php,
> otherwise it will not work correctly for many users:
>
> + Line 88: curl_setopt($prox_ch, CURLOPT_SSL_VERIFYHOST,
> $this->verify_ssl);
> + Line 220: curl_setopt($prox_ch, CURLOPT_SSL_VERIFYHOST, false);
>
> Without them, no connection can be made when the server does not have a
> valid SSL certificate!
>
> B) The README.md contains incorrect information about the debug mode -
> those lines must be deleted, because debugging as described is not
> possible!
>
> C) Verbose debugging of curl is also possible with the following edited
> lines (as an example):
>
> public function login () {
> // Prepare login variables.
> $login_postfields = array();
> $login_postfields['username'] = $this->username;
> $login_postfields['password'] = $this->password;
> $login_postfields['realm'] = $this->realm;
>
> $login_postfields_string = http_build_query($login_postfields);
> unset($login_postfields);
>
> // Perform login request.
> $prox_ch = curl_init();
> curl_setopt($prox_ch, CURLOPT_URL,
> "https://{$this->hostname}:{$this->port}/api2/json/access/ticket");
> curl_setopt($prox_ch, CURLOPT_POST, true);
> curl_setopt($prox_ch, CURLOPT_RETURNTRANSFER, true);
> curl_setopt($prox_ch, CURLOPT_POSTFIELDS,
> $login_postfields_string);
> curl_setopt($prox_ch, CURLOPT_SSL_VERIFYPEER, $this->verify_ssl);
> curl_setopt($prox_ch, CURLOPT_SSL_VERIFYHOST, $this->verify_ssl);
>
> +   curl_setopt($prox_ch, CURLOPT_VERBOSE, true);
>
> +   $verbose = fopen('php://temp', 'w+');
> +   curl_setopt($prox_ch, CURLOPT_STDERR, $verbose);
>
> $login_ticket = curl_exec($prox_ch);
>
> +  if ($login_ticket === FALSE) {
> +   printf("cUrl error (#%d): %s\n", curl_errno($prox_ch),
> +   htmlspecialchars(curl_error($prox_ch)));
> +  }
>
> +   rewind($verbose);
> +$verboseLog = stream_get_contents($verbose);
>
> +   echo "Verbose information:\n",
> htmlspecialchars($verboseLog), "\n";
>
>
>
> $login_request_info = curl_getinfo($prox_ch);
>
> //echo '<pre>';
> //echo print_r($login_request_info);
> //echo print_r($login_ticket);
> //echo '</pre>';
>
>
> curl_close($prox_ch);
> unset($prox_ch);
> unset($login_postfields_string);
>
> if (!$login_ticket) {
> // SSL negotiation failed or connection timed out
> $this->login_ticket_timestamp = null;
> return false;
> }
>
> $login_ticket_data = json_decode($login_ticket, true);
>
> //echo '<pre>';
> //echo print_r($login_ticket_data);
> //echo '</pre>';
> //exit;
>
> if ($login_ticket_data == null || $login_ticket_data['data'] ==
> null) {
> // Login failed.
> // Just to be safe, set this to null again.
> $this->login_ticket_timestamp = null;
> if ($login_request_info['ssl_verify_result'] == 1) {
> throw new PVE2_Exception("Invalid SSL cert on
> {$this->hostname} - check that the hostname is correct, and that it
> appears in the server certificate's SAN list. Alternatively set the
> verify_ssl flag to false if you are using internal self-signed certs
> (ensure you are aware of the security risks before doing so).", 4);
> }
> return false;
> } else {
> // Login success.
> $this->login_ticket = $login_ticket_data['data'];
> // We store a UNIX timestamp of when the ticket was
> generated here,
> // so we can identify when we need a new one expiration-wise
> later
> // on...
> $this->login_ticket_timestamp = time();
> $this->reload_node_list();
> return true;
> }
> }
>
> Regards
>
> Detlef Bracker
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Graphite.pm needs a correction

2016-04-27 Thread Daniel Hunsaker
Yes, it's necessary for every patch, no matter the size or complexity.  It
protects Proxmox as well as contributors, not just from each other, but
also from other outside forces.  International law gets sticky sometimes,
and having these contributor agreements on file keeps things as smooth as
possible for everyone.

On Wed, Apr 27, 2016, 06:35 Alexey Kuzmin  wrote:

> Hello Thomas,
>
> Is it really necessary for such a minor fix?
> I'm not going to claim any legal rights for a one-character patch (:
>
>
> 2016-04-27 15:15 GMT+03:00 Thomas Lamprecht :
>
> > Hi,
> >
> > On 04/27/2016 01:55 PM, Alexey Kuzmin wrote:
> > > Hello,
> > >
> > > Carbon (particularly carbon-c-relay) expects one metric per line.
> Current
> > > PVE implementation breaks this rule. The following patch corrects this
> > bug.
> > > I also created a PR: https://github.com/proxmox/pve-manager/pull/6
> >
> > First of all: Thank you for your contribution!
> >
> > This is not a clone/fork run by any Proxmox official, so we cannot act on
> > the pull request there.
> > Further, you'll need to sign our open CLA to allow a contribution, if
> > you haven't done that yet;
> > that must be done to protect both you and us legally - see
> > https://pve.proxmox.com/wiki/Developer_Documentation at the end of the
> > page.
> >
> > It would be really great if you could sign that and resend the fix in a
> > patch to the list.
> >
> > cheers & regards,
> > Thomas
> >
> > >
> > > Code:
> > >
> > > --- PVE/Status/Graphite.pm.orig  2016-04-26 20:03:02.961141497 +
> > > +++ PVE/Status/Graphite.pm  2016-04-26 20:03:14.541705841 +
> > > @@ -102,7 +102,7 @@
> > >   if ( ref $value eq 'HASH' ) {
> > >   write_graphite($carbon_socket, $value, $ctime, $path);
> > >   }else {
> > > -  $carbon_socket->send( "$path $value $ctime" );
> > > +  $carbon_socket->send( "$path $value $ctime\n" );
> > >   }
> > >   }
> > >   $path = $oldpath;
> > > ___
> > > pve-devel mailing list
> > > pve-devel@pve.proxmox.com
> > > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> > >
> >
> >
> > ___
> > pve-devel mailing list
> > pve-devel@pve.proxmox.com
> > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> >
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] add Windows 10 and 2012r2 to OS selection

2016-04-19 Thread Daniel Hunsaker
On Tue, Apr 19, 2016, 11:16 Martin Maurer  wrote:

> > Dietmar Maurer  hat am 19. April 2016 um 18:33
> > geschrieben:
> ...
> > Why is it worth to mention? I would just use:
> >
> >  win8: 'Microsoft Windows 8/10/2012',
> >
> > ?
>
> as a server virtualization platform, we should list current microsoft
> server OS
> (which is 2012r2).
>
> MS users like to see their OS listed as supported.
>
> Martin
>

Since 2012 is offered as a completely separate product from 2012r2 in the
same way 8 and 10 are, not seeing 2012r2 in the list will look like it
isn't (yet) supported.  This feels counterintuitive to those of us in the
Linux world because we don't version and distribute this way, but since MS
does, we have to use their approach for their users.  In fact, the same
logic applies to 8 versus 8.1.  Hopefully the fact that 10 is listed will be
enough to indicate that 8.1 will work as well.

That said, 2012/r2 *is* enough to communicate that both 2012 and 2012r2 are
supported at that level, even within the MS world.

>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [RFC PATCH container] setup: ability to ignore files

2016-04-12 Thread Daniel Hunsaker
Alternatively, there's the .gitignore approach...  Pretty sure there's a Perl
library for handling that format (or a similar one) which we could drop
in.  One downside is having to parse .pve-ignore files all the way up the
path, but an existing library would take care of that.  The main upside is
we need far fewer ignore files to mask out large collections.

Either way, I agree that using dotfiles is best.

On Tue, Apr 12, 2016, 07:12 Fabian Grünbichler 
wrote:

> > Wolfgang Bumiller  hat am 12. April 2016 um
> 14:49
> > geschrieben:
> >
> >
> > Make the ct_* file wrapper functions ignore files for which
> > a file named .pve-ignore.$name exists.
> > ---
> >
> > This uses a .pve-ignore prefix. Another option would be a suffix. I'm
> > not sure which is better, but personally I like to keep annoying files
> > like that "hidden" from my standard view.
>
> +1 for the prefix variant. patch LGTM
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] proxmox 3->4 cluster upgrade : corosync2 wheezy transition package ?

2015-10-03 Thread Daniel Hunsaker
Cross-cluster migration could actually be really handy to have as a full
feature.  Then you could, for example, have a development/testing cluster
(possibly a single node) in your dev lab, and a production cluster in your
hosting center(s), and when a given VM moves from dev/test into production,
you simply migrate it to the production cluster.  It would also be handy
for situations like this one, obviously, where you're migrating between
otherwise-incompatible environments.

It would, by its nature, only support offline migration, so there would be
no consideration for HA or any of the numerous checks involved in live
migrations.  The most complexity I can see is in specifying when both
clusters (source and destination) have access to the storage(s) used to
hold the VM/CT images/directories.  You'd have to be able to pass something
like `--map-storage=<source storage>:<destination storage>` for every storage
that is accessible from both clusters.  You'd also have to specify a target
storage for anything that isn't mapped on both clusters, so the migration
could rsync (or whatever) anything that isn't on a shared storage.  But the
rest should be fairly straightforward, as far as I can tell.  So long as
the source can ssh to the destination, it should be a fairly smooth process.

On Sat, Oct 3, 2015, 07:34 Alexandre DERUMIER  wrote:

> >>You need to update at least 3 packages for that:
> >>
> >>- libqb
> >>- corosync
> >>- pve-cluster
> >>
> >>and that update will have bad side effects for all other cluster related
> >>packages
> >>
> >>- redhat-cluster-pve
> >>- openais
> >>- clvmd
> >>- anything else?
>
> Need to disable HA before upgrade.
>
> For clvmd, I really don't know what the impact is, as I don't use it.
>
> >>
> >>This looks really complex to me?
>
>
> Yes, I don't say it's easy, but the only other way currently, if we want
> no interruption (for qemu of course),
> and be able to do live migration is :
>
> 1) keep a node empty without any vm
> 2) upgrade all hosts in the cluster to jessie  + proxmox 4.0 (in place,
> with vms running during the upgrade)
> 3) reboot the empty node
> 4) migrate all vm  from 1 node to the empty node
> 5) reboot the new empty node
>   ...
>   ..
>
>
>
> Another way, could be to build a new cluster,
> and be allowed to do live migration between clusters.
> (Need a little work, but technically it's possible).
> Only with special command line, not exposed in gui.
>
>
>
>
>
> - Mail original -
> De: "dietmar" 
> À: "aderumier" , "pve-devel" <
> pve-devel@pve.proxmox.com>
> Envoyé: Samedi 3 Octobre 2015 10:05:49
> Objet: Re: [pve-devel] proxmox 3->4 cluster upgrade : corosync2 wheezy
> transition package ?
>
> > On October 3, 2015 at 9:42 AM Dietmar Maurer 
> wrote:
> >
> >
> > > > I wonder if it could be great (and possible?) to make a corosync2
> > > > transition
> > > > package for wheezy.
> > > >
> > > > Like this we could mix (proxmox3-wheezy-corosync2 and
> > > > proxmox4-jessie-corosync2),
> > > > and do live migration as usual.
> > > >
> > > >
> > > > What do you think about it ?
> > >
> > > How should that work? corosync2 is not compatible with corosync 1.4, so
> > > what is the idea?
> >
> > Oh, you want to backport corosync2 to wheezy? Need to think about that.
>
> You need to update at least 3 packages for that:
>
> - libqb
> - corosync
> - pve-cluster
>
> and that update will have bad side effects for all other cluster related
> packages
>
> - redhat-cluster-pve
> - openais
> - clvmd
> - anything else?
>
> This looks really complex to me?
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] read|write network interfaces : add support for vlan bridge|interface

2015-09-21 Thread Daniel Hunsaker
Doesn't affect functionality, but curious if there's a reason these weren't
added in the same order both times...

On Mon, Sep 21, 2015, 20:52 Alexandre Derumier  wrote:

> Signed-off-by: Alexandre Derumier 
> ---
>  src/PVE/INotify.pm | 10 ++
>  1 file changed, 10 insertions(+)
>
> diff --git a/src/PVE/INotify.pm b/src/PVE/INotify.pm
> index 22f01d1..61faa70 100644
> --- a/src/PVE/INotify.pm
> +++ b/src/PVE/INotify.pm
> @@ -971,6 +971,10 @@ sub __read_etc_network_interfaces {
> } else {
> $d->{type} = 'unknown';
> }
> +   } elsif ($iface =~ m/^(eth|bond)\d+.\d+$/) {
> +   $d->{type} = 'interface_vlan';
> +   } elsif ($iface =~ m/^vmbr\d+.\d+$/) {
> +   $d->{type} = 'bridge_vlan';
> } elsif ($iface =~ m/^lo$/) {
> $d->{type} = 'loopback';
> } else {
> @@ -1285,7 +1289,9 @@ NETWORKDOC
> loopback => 10,
> eth => 20,
> bond => 30,
> +   interface_vlan => 35,
> bridge => 40,
> +   bridge_vlan => 50,
> };
>
>  my $lookup_type_prio = sub {
> @@ -1306,6 +1312,10 @@ NETWORKDOC
> $pri = $if_type_hash->{bond} + $alias;
> } elsif ($iface =~ m/^vmbr\d+$/) {
> $pri = $if_type_hash->{bridge} + $alias;
> +   } elsif ($iface =~ m/^vmbr\d+.\d+$/) {
> +   $pri = $if_type_hash->{bridge_vlan} + $alias;
> +   } elsif ($iface =~ m/^(eth|bond)\d+.\d+$/) {
> +   $pri = $if_type_hash->{interface_vlan} + $alias;
> }
>
> return $pri || ($if_type_hash->{unknown} + $alias);
> --
> 2.1.4
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] disabling transparent hugepage by default like kernel 2.6.32 ?

2015-09-09 Thread Daniel Hunsaker
What about setting CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS = N?  As I understand
it, "not set" means "use the default setting the kernel devs think is the
most sane for normal use", which in this case would be Y...
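
For completeness, the behaviour can also be forced at boot time without
touching the kernel config at all; a sketch, assuming a Debian-style GRUB
setup (transparent_hugepage= is the upstream-documented kernel parameter):

    # /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet transparent_hugepage=never"
    # then run update-grub and reboot

That sidesteps the question of which CONFIG_TRANSPARENT_HUGEPAGE_* default is
compiled in, at the cost of requiring the parameter on every host.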

On Wed, Sep 9, 2015, 06:58 Alexandre DERUMIER  wrote:

> >>I could set it in kernel config, for example:
> >>
> >>CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y
> >>
> >>would that help?
>
> Well, this is like  "echo madvise >
> /sys/kernel/mm/transparent_hugepage/enabled"
>
> But I don't see how to set never.
>
> maybe
>
> # CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS is not set
> # CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
>
>
> But I don't know how to unset CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS in the
> new pve-kernel package makefile?
>
>
> I have tried to add
>
> PVE_CONFIG_OPTS= \
> ..
> -d CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS \
> ..
>
> but it doesn't seem to work; I still have
> CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=Y in .config
>
> Any idea ?
>
>
>
> - Mail original -
> De: "dietmar" 
> À: "aderumier" , "pve-devel" <
> pve-devel@pve.proxmox.com>
> Envoyé: Mercredi 9 Septembre 2015 06:33:05
> Objet: Re: [pve-devel] disabling transparent hugepage by default like
> kernel 2.6.32 ?
>
> > I think it's better to disable it by default.
> >
> > echo never > /sys/kernel/mm/transparent_hugepage/enabled.
> >
> > (I don't know the best way to disable it by default)
>
> I could set it in kernel config, for example:
>
> CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y
>
> would that help?
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] HA parse_sid changed to accept CT

2015-08-31 Thread Daniel Hunsaker
Also note that "colon" (:) only has one "l"...

On Mon, Aug 31, 2015, 07:48 Alen Grizonic  wrote:

> ---
>  src/PVE/HA/Tools.pm | 15 +++
>  1 file changed, 11 insertions(+), 4 deletions(-)
>
> diff --git a/src/PVE/HA/Tools.pm b/src/PVE/HA/Tools.pm
> index 94613de..2b9ca6c 100644
> --- a/src/PVE/HA/Tools.pm
> +++ b/src/PVE/HA/Tools.pm
> @@ -5,6 +5,7 @@ use warnings;
>  use JSON;
>  use PVE::JSONSchema;
>  use PVE::Tools;
> +use PVE::Cluster;
>
>  PVE::JSONSchema::register_format('pve-ha-resource-id',
> \&pve_verify_ha_resource_id);
>  sub pve_verify_ha_resource_id {
> @@ -18,7 +19,7 @@ sub pve_verify_ha_resource_id {
>  }
>
>  PVE::JSONSchema::register_standard_option('pve-ha-resource-id', {
> -description => "HA resource ID. This consists of a resource type
> followed by a resource specific name, separated with collon (example:
> vm:100).",
> +description => "HA resource ID. This consists of a resource type
> followed by a resource specific name, separated with collon (example:
> vm:100 / ct:100).",
>  typetext => "<type>:<name>",
>  type => 'string', format => 'pve-ha-resource-id',
>  });
> @@ -35,7 +36,7 @@ sub pve_verify_ha_resource_or_vm_id {
>  }
>
>  PVE::JSONSchema::register_standard_option('pve-ha-resource-or-vm-id', {
> -description => "HA resource ID. This consists of a resource type
> followed by a resource specific name, separated with collon (example:
> vm:100). For virtual machines, you can simply use the VM id as shortcut
> (example: 100).",
> +description => "HA resource ID. This consists of a resource type
> followed by a resource specific name, separated with collon (example:
> vm:100 / ct:100). For virtual machines and containers, you can simply use
> the VM or CT id as shortcut (example: 100).",
>  typetext => "<type>:<name>",
>  type => 'string', format => 'pve-ha-resource-or-vm-id',
>  });
> @@ -69,8 +70,14 @@ sub parse_sid {
>
>  if ($sid =~ m/^(\d+)$/) {
> $name = $1;
> -   $type ='vm';
> -   $sid = "vm:$name";
> +   my $vmlist = PVE::Cluster::get_vmlist();
> +   my $type = $vmlist->{ids}->{$name}->{type};
> +   if ($type eq 'lxc') {
> +   $type = 'ct';
> +   } elsif ($type eq 'qemu') {
> +   $type = 'vm';
> +   }
> +   $sid = "$type:$name";
>  } elsif  ($sid =~m/^(\S+):(\S+)$/) {
> $name = $2;
> $type = $1;
> --
> 2.1.4
>
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] Usability: enhance password label in LXC wizzard

2015-07-31 Thread Daniel Hunsaker
> And what if the language is read from right to left, eg. Chinese?

That's why the %s is part of the string to be translated, you can always
add a formatting convention that would flip the inserted string ;-)



Readers in rtl languages (which rarely use Latin characters to begin with)
tend to expect untranslated words will remain "unflipped" as well - so
"drowssap root" would be expected, while "drowssap toor" would just look
odd.  It's also worth noting, here, that we have a plethora of other
untranslated terms scattered throughout the existing translations, so
whatever mechanism those are using is probably the (currently) preferred
one.  I'd suggest taking a peek at a few to see what's already being done
in this area.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PVE-User] backup : add pigz compressor

2015-07-10 Thread Daniel Hunsaker
On Thu, Jul 9, 2015, 23:56 Alexandre DERUMIER  wrote:

About backup,

I would like to add a new feature :

As I'm always afraid of problems which can occur with network backup over
NFS (network saturation, latencies, ...),

I would like to add an option, to backup to a temp storage (local ssd for
example), then send the backup async (rsync for example),
to the destination storage. (big slow storage)

What do you think about it ?



 I would love this to be native.  I've already hacked together something
rather like this using the vzdump hook system, but it isn't terribly
portable.  It currently rsyncs the backup files from a local temp storage
to a remote temp storage, then relies on a script existing on the remote
storage that will verify the transfer (I'm comparing SHA1 hashes at the
moment, as I've found that sometimes even a successful rsync will transfer
improperly for various reasons), then move them from the remote temp to the
remote long-term storage, from where they can be restored if needed.

This might be overly complicated for most setups, but as I also retain
backups differently than most setups - taking daily backups, then copying
out the dailies into weekly, monthly, quarterly, and yearly folders, each
of which retains a configurable number of "snapshots" at each interval, so
I can roll back more flexibly and further as needed - it serves my needs
quite nicely.  It also smooths out a large number of the potential issues
faced when the backup storage servers are located in different data centers
than the virtualization servers.

At any rate, something like this being natively supported would drastically
simplify my setup, so I would love to see this, too.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] novc : rebase on last master + fix sendkey menu display

2015-06-26 Thread Daniel Hunsaker
Should be easy enough to only fire up the noVNC session when the user
switches to that tab...  That way only one console is up at a time unless
they pop it out into a separate window.

On Fri, Jun 26, 2015, 01:31 Dietmar Maurer  wrote:

> > I think also if we could add an extra tab "console" in vm
>
> I have implemented that some years ago with the java applet,
> but disabled it because of all those security popups.
>
> But we can retry that with noVNC, although I am not really sure it
> is a good idea to run several noVNC consoles within a single
> browser window (performance, bugs, ...?).
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] novc : rebase on last master + fix sendkey menu display

2015-06-26 Thread Daniel Hunsaker
+1 embedded console!

On Fri, Jun 26, 2015, 00:11 Alexandre DERUMIER  wrote:

> >>I would like to disable it by default, but allow to enable
> >>it with some checkbox on the GUI.
> ok, great.
>
>
> I also think we could add an extra "console" tab in the vm view,
> like on xenserver or vmware; now with resize enabled it's possible
> http://cdn.ttgtmedia.com/digitalguide/images/Misc/082410console1.png
>
> http://www.cisco.com/c/dam/en/us/td/i/21-30/270001-28/279001-28/279806.tif/_jcr_content/renditions/279806.jpg
>
> From personal usage, I always have a lot of popups open because I'm too
> lazy to close them ;)
>
>
> (and also keep the current popup console button)
>
> - Mail original -
> De: "dietmar" 
> À: "aderumier" 
> Cc: "pve-devel" 
> Envoyé: Vendredi 26 Juin 2015 08:00:27
> Objet: Re: [pve-devel] novc : rebase on last master + fix sendkey menu
> display
>
> > values can be: scale,downscale,remote
> >
> > remote should disable it
> >
> >
> >
> > Maybe add a configurable option somewhere ?
>
> I would like to disable it by default, but allow to enable
> it with some checkbox on the GUI.
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Preview of the GUI with ExtJS5

2015-06-22 Thread Daniel Hunsaker
Might be good to take the opportunity to build a new, Proxmox-specific
ExtJS5 theme...  Something that goes well with the logo, for example.
Would take some time to develop, and would probably need some maintenance,
but would add a bit of extra polish.

On Tue, Jun 23, 2015, 00:12 Alexandre DERUMIER  wrote:

> The classic ext-theme-gray is not too bad either (same as the current proxmox
> classic theme, but grey instead of blue)
>
> http://odisoweb1.odiso.net/gray.png
>
>
>
>
> - Mail original -
> De: "aderumier" 
> À: "dietmar" 
> Cc: "pve-devel" 
> Envoyé: Mardi 23 Juin 2015 06:34:50
> Objet: Re: [pve-devel] Preview of the GUI with ExtJS5
>
> something like this could be wonderful
>
> http://dox.codaxy.com/ext5-themes/Basic
>
> (too bad, it's a commercial theme).
>
> light blue for headers, grey for forms background
>
>
> - Mail original -
> De: "aderumier" 
> À: "dietmar" 
> Cc: "pve-devel" 
> Envoyé: Mardi 23 Juin 2015 05:52:47
> Objet: Re: [pve-devel] Preview of the GUI with ExtJS5
>
> >>The contrast is much too high for my eyes (dark blue vs. white)...
>
> I think that the extjs5 crisp theme is less aggressive
>
> http://odisoweb1.odiso.net/crisp.png
>
>
>
>
>
> - Mail original -
> De: "dietmar" 
> À: "Emmanuel Kasper" , "pve-devel" <
> pve-devel@pve.proxmox.com>
> Envoyé: Lundi 22 Juin 2015 17:08:46
> Objet: Re: [pve-devel] Preview of the GUI with ExtJS5
>
> > * I hope you like blue (Default Theme of ExtJS5 is, well, blue)
>
> The contrast is much too high for my eyes (dark blue vs. white)...
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] External noVNC without proxmox panel login

2015-05-08 Thread Daniel Hunsaker
The Proxmox API needs a way to know which user is attempting to access it
so it can handle permissions appropriately.  The login process (via web UI
or otherwise) provides a ticket that can be used in subsequent requests to
provide this information to the API.  You still need to obtain this ticket
- which expires after (I believe) a couple of weeks - in order to pass it
in your API requests.

Probably the best approach here is to provide a login prompt to your users,
passing their input to the login API endpoint, and using the ticket the
server returns in subsequent requests for that user.  If, however, you're
setting this up so that authentication is never required, the API is
probably not your best option.  At that point, you'll want a VNC server
installed inside the VM, and configured appropriately for noVNC.
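
For reference, obtaining that ticket is a single POST against the
access/ticket endpoint; a minimal sketch (host, realm and credentials are
placeholders, and -k merely mirrors a self-signed-certificate setup):

    curl -k -d "username=root@pam" -d "password=secret" \
        https://pve.example.com:8006/api2/json/access/ticket
    # The response's data object carries "ticket" (sent back as the
    # PVEAuthCookie cookie on later requests) and "CSRFPreventionToken"
    # (required as a header for anything that modifies state).

The vncproxy call then has to be made with that cookie, and the VNC ticket it
returns is what gets handed to noVNC/vncwebsocket.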

On Fri, May 8, 2015, 04:29   wrote:

> Hey,
>
> Currently I'm trying to create an external VNC connection using the HTML
> VNC application 'noVNC'.
> I'm using the ProxMox API commands 'vncproxy' and 'vncwebsocket' which
> almost work perfectly.
>
> One problem tho. The connection only works when I'm logged in into the
> proxmox web panel, while this is exactly what I don't want.
> I found this topic;
> http://forum.proxmox.com/threads/19903-access-noVNC-html5-console-from-external-site-vncwebsocket-via-api
> Which basically describes the same problem I'm facing.
>
> The solution was to let noVNC login into the ProxMox web panel, providing
> the login into the javascript code (which is no option for me).
> Or to alter the proxmox perl file to allow external VNC connections with
> ticket authorization, but without the proxmox login problem. Unfortunality
> this person did’t respond any more, after someone else asked if he could
> share his solution.
>
> Is there anyone that has a fix for the problems I’m facing described above?
>
> Thanks in advance!
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] criu for lxc news on criu.org

2015-03-29 Thread Daniel Hunsaker
Good point.  I'd forgotten they'd moved to that.

On Sun, Mar 29, 2015, 12:21 Waschbüsch IT-Services GmbH <
serv...@waschbuesch.it> wrote:

>
> > Am 29.03.2015 um 20:18 schrieb Daniel Hunsaker :
> >
> > There's Gentoo, which seemed pretty solid and stable while I was using
> it, but I haven't looked at their kernels lately to see how they are
> faring...
>
> But being a rolling-release OS, would that be at all suitable?
>
> Martin
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] criu for lxc news on criu.org

2015-03-29 Thread Daniel Hunsaker
There's Gentoo, which seemed pretty solid and stable while I was using it,
but I haven't looked at their kernels lately to see how they are faring...

On Sun, Mar 29, 2015, 12:08 Dietmar Maurer  wrote:

> > I guess that is not really the problem, but Docker is intended to run
> > applications not full systems like the way it works for OpenVZ now.
>
> It is even more limited. The idea is to run single binaries inside a docker
> container.
> But in the real world, most applications need to run many different binaries.
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] criu for lxc news on criu.org

2015-03-29 Thread Daniel Hunsaker
If RedHat isn't going to support the technologies we need, and OpenVZ is
evaporating anyway, maybe we should consider a different kernel?  One where
KVM and LXC, at least, will be fully supported?  It seems that creating our
own team to manage backporting kernel patches ourselves would be difficult
at best, and consume a lot of time in dev and test cycles, so finding a
different group already doing those things the way we need them done would
be preferred.

On Sun, Mar 29, 2015, 10:10 Dietmar Maurer  wrote:

> > > Yes - I started to remove the OpenVZ code from the jessie branch for
> that
> > > reason
> > > now.
> > > LXC seems to slowly develop - the bad news is that RH just dropped
> support
> > > for
> > > it :-/
> > > (at least the announced that in a press release).
> > >
> > > Any ideas are highly appreciated ...
> > If it needs to be fully Redhat supported docker seems the only option.
>
> Docker is best run inside a VM. It makes no sense to run such limited
> containers
> on the host.
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Live migration should check for host compatibility?

2015-03-21 Thread Daniel Hunsaker
True, host *is* really only useful in single-node clusters, or homogenous
clusters at worst.  It's not intended for production use.  I have found it
increases performance quite a bit when running Windows 7 in a VM, though.
(A highly nonstandard setup for a highly unusual use case.  Not recommended
for general use.)

On Sat, Mar 21, 2015, 02:26 Alexandre DERUMIER  wrote:

> Hi,
> for proxmox 4.0, we have added a qemu enforce option, which checks if the cpu
> flags exist on the host machine when the vm is starting.
>
> Of course, this works only if you choose a specific cpu model.
>
> (In my opinion, "host" should never be used. Only specific models are safe
> and well tested)
>
>
>
> - Mail original -
> De: "Waschbüsch IT-Services GmbH" 
> À: "pve-devel" 
> Envoyé: Samedi 21 Mars 2015 07:03:06
> Objet: [pve-devel] Live migration should check for host compatibility?
>
> Hi all,
>
> Martin has kindly redirected me to the list as the appropriate place to
> ask / discuss this:
>
> I noticed that, even though a kvm guest is set to CPU type 'host', the
> live migration feature does not check for compatibility with the
> destination host.
> E.g. Moving from a Opteron 6366 to an Intel Xeon E5420 is going to cause
> the migrated VM to choke and die due to lots of CPU features no longer
> being there.
>
> That in itself is not surprising, but my suggestion would be to at least
> have a warning popup when choosing to migrate a kvm which is set to CPU
> type 'host'?
>
> Best,
>
> Martin
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Live migration should check for host compatibility?

2015-03-20 Thread Daniel Hunsaker
There's a discussion floating around on the list of a new CPU type which
includes all the CPU flags supported on all nodes in a cluster.  It's
similar to 'host', obviously, but would ensure that migrations would always
succeed by ensuring that only the flags available on *all* nodes are active
in the VCPU.

This won't solve the problem you're describing, and the values would need
to be renegotiated when new nodes join a cluster, potentially meaning
migrations to nodes added after VM start would fail similarly.  My reason
for mentioning this proposed option is to point out that it would also need
a way to handle this same scenario, and this approach makes as much sense
as any other.

On Sat, Mar 21, 2015, 00:03 Waschbüsch IT-Services GmbH <
serv...@waschbuesch.it> wrote:

> Hi all,
>
> Martin has kindly redirected me to the list as the appropriate place to
> ask / discuss this:
>
> I noticed that, even though a kvm guest is set to CPU type 'host', the
> live migration feature does not check for compatibility with the
> destination host.
> E.g. Moving from a Opteron 6366 to an Intel Xeon E5420 is going to cause
> the migrated VM to choke and die due to lots of CPU features no longer
> being there.
>
> That in itself is not surprising, but my suggestion would be to at least
> have a warning popup when choosing to migrate a kvm which is set to CPU
> type 'host'?
>
> Best,
>
> Martin
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] DRBD9 test packages for jessie

2015-03-20 Thread Daniel Hunsaker
I'd rather it run the resync before continuing the migration, than simply
abort.  It'll take a bit longer to migrate, but I can't think of any reason
it would make sense *not* to run the resync at that point and simply keep
going once that's done.

On Fri, Mar 20, 2015, 13:30 Cesar Peschiera  wrote:

> It is fantastic !!!
>
> Talking about DRBD: for now, and if it is possible, I would like to request
> some features:
>
> 1- While a VM is running with LVM on top of DRBD, the replicate storages be
> in secondary mode, ie, only to have a primary storage for each VM that is
> running (that always was the recommendation of LINBIT for security
> reasons).
>
> 2- And when i apply a live migration of a VM, the DRBD plugin first see if
> the replicated storage is perfectly synchronized (in terms of drbd "oos"
> that mean: "out of sync"), and if really are perfectly synchronized, the
> plugin converts the secondary storage in primary for after apply the live
> migration, and after of a live migration successfully, finally convert the
> old primary storage in secondary. Of this mode, always we have a primary
> storage in execution for each VM.
>
> 3- But if we have the case of "oos", the plugin don't accept the live
> migration, and a error message appear in the screen.
>
> About of the verification of replicated storages in DRBD:
> 4- As DRBD has a command that do a verification of data and metadata of the
> replicated storages, will be fantastic have it  in the schedule of the
> PVE-GUI.
>
> About of do a resynchronization if a storage is in "oos":
> 5- As DRBD has a set of commands for do a resynchronization (applied only
> to
> the blocks of disk that are different and not for the complet storage),
> also
> will be good have it in the PVE-GUI.
>
>
> - Original Message -
> From: "Dietmar Maurer" 
> To: "Cesar Peschiera" ; "PVE Development List"
> 
> Sent: Friday, March 20, 2015 2:00 PM
> Subject: Re: [pve-devel] DRBD9 test packages for jessie
>
>
> >I just want to note that the new DRBD9 has some cool feature
> >
> > - support more that 2 nodes
> > - support snapshots
> > - as fast as DRBD 8.X
> >
> > I am now writing a storage plugin, so that we can do further tests...
> >
> >> Oh, OK, sorry
> >>
> >> >> DRBD9 isn't ready for his use in a production environment, now is in
> >> >> pre-release phase...
> >> >> http://www.linbit.com/en/component/phocadownload/category/5-drbd
> >> >>
> >> >> Respectfully I mean that I believe that will be good wait to that
> >> >> DRBD9
> >> >> be
> >> >> in a stable version before of release it in the pve repositories.
> >> >
> >> > That is why I compiled it for jessie, and uploaded to the jessie test
> >> > repository.
> >> >
> >>
> >>
> >
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] the vmid.fw can't delete issue

2015-03-04 Thread Daniel Hunsaker
Cloning all settings seems like expected behavior to me.  If I'm cloning
something, I usually want (or at least expect) as exact a duplicate as I
can get, and any differences that I need, I'll expect to apply manually.
If nothing else, modifying an existing firewall config would usually be far
easier than (re)creating a new one from nothing.

Said another way, you're already going to have to modify other settings in
that case, so why not clone firewall settings as well?

On Wed, Mar 4, 2015, 00:35 lyt_yudi  wrote:

>
> On 4 March 2015, at 15:20, Alexandre DERUMIER  wrote:
>
>
> Also, I think it could be great when cloning a template|vm, to clone the
> fw config too.
>
>
> oh, I don’t think so, maybe the clone vm doesn’t  belong to the same
> user(customer).
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] proxmox 3.4, some visual glitch refresh on qemu hardware vew

2015-02-23 Thread Daniel Hunsaker
That's what I intended to say there, yes.

On Mon, Feb 23, 2015, 01:44 Dietmar Maurer  wrote:

> > see, but generally there will be a "loading" indicator until all the
> > elements required to display accurate detail have come back from the
> > server, so that other users, with less technical exposure to web dev,
> > understand that the data they're looking at may not be accurate.
> > Implementing something like that (I've seen it elsewhere in the PVE UI,
> so
> > probably we just need to adjust the conditions required to clear the
> > "loading" state) is probably all we can do, here.
>
> We continuously update those GUIs, so we 'always' load (more or less). But
> maybe
> we should show a 'loading indicator' for the first load.
>
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] proxmox 3.4, some visual glitch refresh on qemu hardware vew

2015-02-22 Thread Daniel Hunsaker
I've seen this latency issue on all of my single-node clusters, especially
when network conditions are bad, or load on the node in question is high.
Having done web dev for the past several years, it's something I expect to
see, but generally there will be a "loading" indicator until all the
elements required to display accurate detail have come back from the
server, so that other users, with less technical exposure to web dev,
understand that the data they're looking at may not be accurate.
Implementing something like that (I've seen it elsewhere in the PVE UI, so
probably we just need to adjust the conditions required to clear the
"loading" state) is probably all we can do, here.

On Mon, Feb 23, 2015, 00:40 Alexandre DERUMIER  wrote:

> >>Maybe it's related to latency (maybe for other api call, like
> task,backup,firewall between /current && /pending api call).
>
> I have tried to remove some panels (like summary, tasks, ...)
> and it reduces the refresh time.
> So it seems to be related to latency between /config and /pending.
>
> Note that I also see this visual bug for the vm "resume" button hiding,
> in Config.js
>   if (qmpstatus === 'prelaunch' || qmpstatus === 'paused') {
> startBtn.setVisible(false);
> resumeBtn.setVisible(true);
> } else {
> startBtn.setVisible(true);
> resumeBtn.setVisible(false);
> }
>
> It takes around 0.3s to be hidden, the same as refreshing from the default
> hardware config to the pending hardware config.
>
>
>
>
>
>
>
>
>
> - Mail original -
> De: "aderumier" 
> À: "dietmar" 
> Cc: "pve-devel" 
> Envoyé: Lundi 23 Février 2015 07:53:16
> Objet: Re: [pve-devel] proxmox 3.4, some visual glitch refresh on qemu
> hardware vew
>
> >>Sorry, I am still unable to see it. Please can you also post the VM
> configs for
> >>2 VMs, so that I
> >>can test switching views between those VM configs.
>
> Well, I don't see it on my small test cluster,
>
> but I see it on any vm on my production cluster (with a lot of vms).
>
> Maybe it's related to latency (maybe for other api call, like
> task,backup,firewall between /current && /pending api call).
>
>
>
>
> I have also noticed another bug:
> after browsing some vms' config,
>
> I see a lot of api calls to /current && /pending, for different vms,
> even if I'm not on those vms.
>
> (I think it's related to
> me.on('show', me.rstore.startUpdate);
> me.on('hide', me.rstore.stopUpdate);
> me.on('destroy', me.rstore.stopUpdate);
>
> )
>
> The stopUpdate doesn't work sometimes.
>
>
>
>
> - Mail original -
> De: "dietmar" 
> À: "aderumier" 
> Cc: "pve-devel" 
> Envoyé: Lundi 23 Février 2015 05:52:50
> Objet: Re: [pve-devel] proxmox 3.4, some visual glitch refresh on qemu
> hardware vew
>
> > I see this mainly if,
> >
> > I'm on a hardware tab of a qemu vm,
> >
> > then I click on another qemu vm in left tree.
> >
> > In this case, the hardware tab of the new qemu is displayed directly,
> and I
> > see this glitch.
>
> Sorry, I am still unable to see it. Please can you also post the VM
> configs for
> 2 VMs, so that I
> can test switching views between those VM configs.
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PVE-User] Other journal FS

2015-02-21 Thread Daniel Hunsaker
Memory.  ZFS *requires* at least 4GiB of RAM.  You'll want a lot more than
that.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] GUI for DHCP

2015-02-13 Thread Daniel Hunsaker
> I don't know if it's technically possible to use the same ip on each
> host per vlan (for the dhcp response to the vms of the hosts),

Sounds like AnyCast to me...

On Fri Feb 13 2015 at 3:16:30 AM Dietmar Maurer  wrote:

> > >>It also seems that IPv6 SLAAC solves the problem. With RDNSS you can get
> > >>IPv6 address, gateway, and DNS server. And this does not have any
> problems
> > >>with VLANs, because it is handled by the router.
> > >>
> > >>Is that correct?
> >
> > Yes, with ipv6 it should work fine without dhcp server.
> >
> > (But I need it for ipv4 ;)
> >
> > Another idea: I don't know if it's possible to configure the network
> > through the qemu-agent?
>
> AFAIK that is currently not possible.
>
> I also wonder if we can use zeroconf somehow? (IPV4 link local addresses:
> 169.254.0.0/16)
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pvetest updates and RC ISO installer

2015-02-01 Thread Daniel Hunsaker
OpenVZ would probably work just fine, but KVM would be slow, at best.  The
rest should function normally.

On Sun, Feb 1, 2015, 19:09 Lindsay Mathieson 
wrote:

>
> On 31 January 2015 at 02:59, Martin Maurer  wrote:
>
>> We just updated the pvetest repository and uploaded a lot of latest
>> packages required to support ZFS on Linux.
>>
>> Also note that we have downgraded pve-qemu-kvm from 2.2 to 2.1, because
>> live migration was unstable on some hosts. So please downgrade that package
>> manually (using and wget .. and dpkg -i ..) if you already use the 2.2
>> version from pvetest.
>>
>> The repository includes latest grub packages (required for zfs), so the
>> upgrade will ask you to re-install the grub boot sector to the disks.
>> Please report any problems with that.
>>
>> And we have a brand new release candidate of the ISO installer,
>> supporting ZFS RAID with just a few clicks.
>> http://download.proxmox.com/iso/
>>
>
>
>
> Brilliant, thanks Martin.
>
> I don't currently have test hardware available, but I can install it as a
> VM in my proxmox cluster? :)
>
> I presume virtualisation wouldn't work, but I should be able to test the
> zfs storage with it?
>
>
> --
> Lindsay
>  ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] memory hotplug patch v6

2015-01-20 Thread Daniel Hunsaker
It's a bit more work internally, but changing the RAM size seems like a
task that could be handled without thinking about the add/remove process.
Present the user with a button/dialog to resize RAM, then hotplug and/or
unplug to arrive at the target.  If they provide a larger value than what's
currently plugged in, hotplug the appropriate size and number of DIMMs.  If
it's smaller, and the size difference matches one of the currently inserted
DIMMs, unplug it; otherwise find a DIMM that's larger than the size
difference, plug a new module that will provide the difference between that
larger DIMM and the reduced target, then unplug the larger one.  If all the
existing DIMMs are smaller than the size difference, you can follow a
similar logic path by treating groups of two or more DIMMs as the unplug
target mentioned above - if a given DIMM group matches the size difference,
unplug that group; if not, find the smallest group of DIMMs you can unplug
to reach the target, plug a new DIMM that will provide the amount of RAM
you don't want to remove, and then unplug the group.

Some scenarios to illustrate what I'm suggesting, since even I understand
what I mean better that way than what I just said above.  Each scenario
builds from the previous one; the first starts with a current RAM size of
512M, using a DIMM configuration of 1 x 512M modules.

Scenario 1:
Target RAM - 1024M
Action - plug 1 x 512M
Result - 1024M (2 x 512M)

Scenario 2:
Target RAM - 512M
Action - unplug 1 x 512M
Result - 512M (1 x 512M)

Scenario 3:
Target RAM - 256M
Action - plug 1 x 256M; unplug 1 x 512M
Result - 256M (1 x 256M)

After some more mucking about, adding memory at 256M increments, the user
arrives at a configuration of 1024M (4 x 256M) for the following scenarios.

Scenario 4:
Target RAM - 512M
Action - unplug 2 x 256M
Result - 512M (2 x 256M)

Scenario 5:
Target RAM - 128M
Action - plug 1 x 128M; unplug 2x256M
Result - 128M (1 x 128M)

A bit contrived, but illustrates the process I'm proposing.  I strongly
suspect that's what the other virtualization platforms are doing internally
as well.  It certainly simplifies the process for users who don't feel
comfortable managing NUMA and similar settings directly, though it does
increase the complexity of our code, and I'm certain I've simplified things
here immensely.

Just another option to consider.  Either way, I like the idea of a simple
process for admins in a hurry, and an advanced interface for those who are
comfortable with NUMA and want complete control over their memory map.

On Tue, Jan 20, 2015, 10:21 Gilberto Nunes 
wrote:

> BTW, how can I test this Alexandre??
>
> 2015-01-20 15:19 GMT-02:00 Gilberto Nunes :
>
> +1
>>
>> I am the one that want to see this feature in production environment
>> It will be a plus...
>>
>>
>>
>> 2015-01-20 15:13 GMT-02:00 Alexandre DERUMIER :
>>
>> >>I wonder if memory unplug is a good idea? I assume that will be the
>>> fastest way
>>> >>to crash most OS?
>>> I personally need this feature. (I have customers who want to
>>> add/remove memory, because we bill on that).
>>>
>>> For linux, it should work without any problem, if you have memory use
>>> for buffer.
>>> (Of course if your process need memory, you'll get OOM for sure)
>>>
>>>
>>>
>>> - Mail original -
>>> De: "dietmar" 
>>> À: "aderumier" 
>>> Cc: "pve-devel" 
>>> Envoyé: Mardi 20 Janvier 2015 18:09:13
>>> Objet: Re: [pve-devel] [PATCH] memory hotplug patch v6
>>>
>>> > what we can do
>>> > -
>>> >
>>> > I think we can add a button "add-> memory hotplug", with a field with
>>> memory
>>> > value wanted.
>>> > We display the total memory in current hardware grid memory field.
>>> (memory +
>>> > total of dimms memories).
>>> >
>>> > For unplug, create a button "delete->memory unplug",
>>> > and in the form, display a combobox with list of current memory modules
>>>
>>> I wonder if memory unplug is a good idea? I assume that will be the
>>> fastest way
>>> to crash most OS?
>>> ___
>>> pve-devel mailing list
>>> pve-devel@pve.proxmox.com
>>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>>>
>>
>>
>>
>> --
>> --
>> "A única forma de chegar ao impossível, é acreditar que é possível."
>> Lewis Carroll - Alice no País das Maravilhas
>>
>> "“*The only way* to achieve *the impossible is to believe* it is
>> *possible*.”
>> Lewis Carroll - Alice in Wonderland
>>
>> Gilberto Ferreira
>> (47) 9676-7530
>>
>>
>
>
> --
> --
> "A única forma de chegar ao impossível, é acreditar que é possível."
> Lewis Carroll - Alice no País das Maravilhas
>
> "“*The only way* to achieve *the impossible is to believe* it is
> *possible*.”
> Lewis Carroll - Alice in Wonderland
>
> Gilberto Ferreira
> (47) 9676-7530
>
>  ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

Re: [pve-devel] Finding a VM's Node via API

2015-01-15 Thread Daniel Hunsaker
There are a number of command line tools for just that; Lindsay appears to
be using jq.
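
For example, the node can be pulled out of the /cluster/resources output with
a one-liner along these lines (a sketch: it assumes pvesh prints the result as
JSON on stdout and that each resource entry carries "vmid" and "node" fields):

    vmid=100
    node=$(pvesh get /cluster/resources -type vm | \
        jq -r ".[] | select(.vmid == $vmid) | .node")
    echo "$node"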

On Thu, Jan 15, 2015, 01:50 Dietmar Maurer  wrote:

> # pvesh get /cluster/resources -type vm
>
> But I have no idea how to extract the result inside a shell script?
>
> > I'm adapting spice-example.sh to just require a VMID, so we can launch a
> > spice client from the cmd line without needing to know which node it is
> on.
> >
> > Is there an API call for finding a VM's node from its id? or an
> alternative
> > way of handling this?
> >
> > Thanks,
> >
> > --
> > Lindsay
> > ___
> > pve-devel mailing list
> > pve-devel@pve.proxmox.com
> > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] FYI: news from the OpenVZ project

2014-12-27 Thread Daniel Hunsaker
Will that mean an update to the API and so forth, to acknowledge and make
use of the new name?

On Sat Dec 27 2014 at 12:58:28 AM Dietmar Maurer 
wrote:

> see: http://openvz.livejournal.com/49158.html
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] BIG PROBLEM - Possible Bug - Log-Rotation

2014-12-10 Thread Daniel Hunsaker
See the "copytruncate" option in the logrotate man page:

http://linux.die.net/man/8/logrotate
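
As a sketch, a stanza using it would look something like this (path and
retention values are illustrative only; the trade-off is that entries written
between the copy and the truncate can be lost):

    /var/log/syslog {
        daily
        rotate 7
        missingok
        notifempty
        compress
        copytruncate
    }

With copytruncate, logrotate copies the live file aside and truncates the
original in place, so the daemon keeps writing to the same inode and no
SIGHUP is needed.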

On Wed Dec 10 2014 at 6:25:50 PM Detlef Bracker  wrote:

>  Dear,
>
> more information from a logrotate debug run. But why do new log entries go
> into the .1 logs?
>
> ..
> empty log files are not rotated, old logs are removed
> considering log /var/log/mail.info
>   log does not need rotating
> considering log /var/log/mail.warn
>   log does not need rotating
> considering log /var/log/mail.err
>   log does not need rotating
> considering log /var/log/mail.log
>   log does not need rotating
> considering log /var/log/daemon.log
>   log does not need rotating
> considering log /var/log/kern.log
>   log does not need rotating
> considering log /var/log/auth.log
>   log does not need rotating
> considering log /var/log/user.log
>   log does not need rotating
> considering log /var/log/lpr.log
>   log does not need rotating
> considering log /var/log/cron.log
>   log /var/log/cron.log does not exist -- skipping
> considering log /var/log/debug
>   log does not need rotating
> considering log /var/log/messages
>   log does not need rotating
> not running postrotate script, since no logs were rotated
> .
>
>
> Am 11.12.2014 02:05, schrieb Detlef Bracker:
>
> Dear,
>
> I have the same problem on 2 different hosts! Logrotate is not working
> correctly and the log entries
> are not going into the normal log files; they go into the log file with the
> suffix .1 !!!
> Example: not logged in auth - but logged in auth.1
> or not logged in syslog - but logged in syslog.1
>
> *This is a security risk* when you check the hosts with fail2ban for
> break-in attempts via console,
> ssh and so on! - and I don't know whether only I have this problem or others
> do too!
>
> uname -a
>
> Linux pm3-host 2.6.32-31-pve #1 SMP Thu Jul 24 06:44:16 CEST 2014 x86_64
> GNU/Linux
>
> The logrotate job in cron.daily is not working, but I get no error! The
> rotation stopped on
> 23 October on one host and on 30 September on the other!
>
> Does somebody have a solution without a reboot?
>
> Dear
>
> Detlef
>
>
> ___
> pve-devel mailing 
> listpve-devel@pve.proxmox.comhttp://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
>
> --
>
> Mit freundlichen Gruessen
> 1awww.com - Internet-Service-Provider
>
> Detlef Bracker
>  Velilla, Calle Club s/n, E 18690 Almunecar, Tel.: +34.6 343 232 61 *
> EU-VAT-ID: ESX4516542D
>
>  ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] BIG PROBLEM - Possible Bug - Log-Rotation

2014-12-10 Thread Daniel Hunsaker
Send a SIGHUP to syslogd.  That should fix the rotation.  The issue is that
syslogd is writing to an inode, not a filename (it still has the handle
open when logrotate renames the file, so it keeps writing to it until told
not to via SIGHUP).

killall -HUP syslogd

Best guess is the SIGHUP was never received by syslogd on one of
logrotate's passes.  Probably changed PIDs due to a restart of some kind
after logrotate had started, but before it finished.  You can configure
logrotate to copy/truncate instead of move/SIGHUP to prevent this in
future, but you risk losing any entries logged between the copy and the
truncate.  So it's up to you which is more important - getting every log
entry (which you are now, at risk of rotation breaking down) or keeping
rotation stable (at the cost of maybe losing a few log entries in the few
milliseconds between copy and truncate).
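
For anyone hitting this, a quick way to confirm and fix it from a shell (a sketch;
the daemon may be rsyslogd rather than syslogd depending on the setup):

    # list the files the syslog daemon still holds open - if auth.log.1 or
    # syslog.1 show up here, it is still writing to the rotated inode
    ls -l /proc/$(pidof syslogd)/fd | grep -i log

    # tell the daemon to close and reopen its log files
    killall -HUP syslogd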

On Wed, Dec 10, 2014, 18:06 Detlef Bracker  wrote:

>  Dear,
>
> I have the same problem on 2 different hosts! Logrotate is not working
> correctly and the logs
> are not going into the normal log files; they go into the log file with the
> suffix .1 !!!
> Example: entries are not logged in auth - but they appear in auth.1,
> and not in syslog - but in syslog.1
>
> *This is a security risk* when you use fail2ban to check the host's log files
> for break-ins via console,
> SSH and so on! - and I don't know whether only I have this problem or others do
> too!
>
> uname -a
>
> Linux pm3-host 2.6.32-31-pve #1 SMP Thu Jul 24 06:44:16 CEST 2014 x86_64
> GNU/Linux
>
> The logrotate job in cron.daily is not working, but I don't get an error! The
> log rotation stopped on
> 23 October on one host and on 30 September on the other!
>
> Does somebody have a resolution w/o a reboot?
>
> Dear
>
> Detlef
>  ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM

2014-11-29 Thread Daniel Hunsaker
Sounds like a boot time RAM test... Though whether by the OS or the BIOS is
hard to say. I tend not to let Windows have near that much RAM to play with.

On Sat, Nov 29, 2014, 22:36 Cesar Peschiera  wrote:

>  Inside the VM (Win Server 2008R2), the only process that is consuming
> lots of CPU is:
>
> Image Name: System
> User Name: System
> CPU: 99%
> memory (Private workspace): 52 KB
> Description: NT Kernel & System
>
> All other processes are consuming 0% of CPU
>
> - Original Message -
> *From:* Daniel Hunsaker 
> *To:* Cesar Peschiera  ; Lindsay Mathieson
>  ; pve-devel@pve.proxmox.com
>
> *Sent:* Sunday, November 30, 2014 1:30 AM
> *Subject:* Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM
>
> Any OS is composed of various processes.  You'll still have one process or
> another (or perhaps a small handful) responsible for the usage unless the
> boot process has not yet reached the OS.  In the case where the OS is not
> yet running, the virtualized hardware is to blame, most likely the BIOS.
> Either way, the resolution requires knowledge of the cause, and that
> requires data about what is running inside the VM at the time.
>
> On Sat, Nov 29, 2014, 21:06 Cesar Peschiera  wrote:
>
>> Hi Daniel
>>
>> Many thanks for answer me.
>>
>> The VM is newly installed and has no other programs installed. But it will
>> soon have MS-SQL Server 2008 Standard installed.
>>
>> What can i do to fix my problem?
>>
>>
>> - Original Message -
>> From: Daniel Hunsaker
>> To: Cesar Peschiera ; Lindsay Mathieson ; pve-devel@pve.proxmox.com
>> Sent: Sunday, November 30, 2014 12:56 AM
>> Subject: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM
>>
>>
>> What about inside the VM? It's expected that the VM itself will consume
>> all
>> CPU since that's the reported issue, and `/usr/bin/kvm` is the process the
>> host runs the VM in.
>>
>>
>> On Sat, Nov 29, 2014, 19:53 Cesar Peschiera  wrote:
>>
>> Hi Lindsay
>>
>> Many thanks for answer me :-)
>>
>> >What processes in the VM are hogging the CPU?
>> htop is showing me that the process consuming all available processor time
>> is:
>> /usr/bin/kvm -id 109 -chardev socket, id=qmp,
>> path=/var/run/qemu-server/109.qmp, server,nowait etc (a very long line)
>>
>> In my case, one such "/usr/bin/kvm -id 109 ..." line sits at 100% processor
>> usage for each core that I had configured before starting the VM.
>>
>> >Is the VM memory fixed or balloon allocated?
>> Fixed; the VM has the balloon driver installed, but PVE doesn't have this
>> option enabled, and only this one VM is installed on PVE, nothing more.
>>
>>
>>
>> - Original Message -
>> From: "Lindsay Mathieson" 
>> To: 
>> Sent: Saturday, November 29, 2014 11:12 PM
>> Subject: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM
>>
>>
>> > ___
>> > pve-devel mailing list
>> > pve-devel@pve.proxmox.com
>> > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>> >
>>
>> ___
>> pve-devel mailing list
>> pve-devel@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>>
>>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM

2014-11-29 Thread Daniel Hunsaker
Any OS is composed of various processes.  You'll still have one process or
another (or perhaps a small handful) responsible for the usage unless the
boot process has not yet reached the OS.  In the case where the OS is not
yet running, the virtualized hardware is to blame, most likely the BIOS.
Either way, the resolution requires knowledge of the cause, and that
requires data about what is running inside the VM at the time.

On Sat, Nov 29, 2014, 21:06 Cesar Peschiera  wrote:

> Hi Daniel
>
> Many thanks for answer me.
>
> The VM is newly installed and has no other programs installed. But it will
> soon have MS-SQL Server 2008 Standard installed.
>
> What can i do to fix my problem?
>
>
> - Original Message -
> From: Daniel Hunsaker
> To: Cesar Peschiera ; Lindsay Mathieson ; pve-devel@pve.proxmox.com
> Sent: Sunday, November 30, 2014 12:56 AM
> Subject: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM
>
>
> What about inside the VM? It's expected that the VM itself will consume all
> CPU since that's the reported issue, and `/usr/bin/kvm` is the process the
> host runs the VM in.
>
>
> On Sat, Nov 29, 2014, 19:53 Cesar Peschiera  wrote:
>
> Hi Lindsay
>
> Many thanks for answer me :-)
>
> >What processes in the VM are hogging the CPU?
> htop is showing me that the process consuming all available processor time
> is:
> /usr/bin/kvm -id 109 -chardev socket, id=qmp,
> path=/var/run/qemu-server/109.qmp, server,nowait etc (a very long line)
>
> In my case, one such "/usr/bin/kvm -id 109 ..." line sits at 100% processor
> usage for each core that I had configured before starting the VM.
>
> >Is the VM memory fixed or balloon allocated?
> Fixed; the VM has the balloon driver installed, but PVE doesn't have this
> option enabled, and only this one VM is installed on PVE, nothing more.
>
>
>
> - Original Message -
> From: "Lindsay Mathieson" 
> To: 
> Sent: Saturday, November 29, 2014 11:12 PM
> Subject: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM
>
>
> > ___
> > pve-devel mailing list
> > pve-devel@pve.proxmox.com
> > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> >
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM

2014-11-29 Thread Daniel Hunsaker
What about inside the VM? It's expected that the VM itself will consume all
CPU since that's the reported issue, and `/usr/bin/kvm` is the process the
host runs the VM in.
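
For what it's worth, on the host side you can at least tie each kvm process back to
its VMID from its command line - a small sketch, with 109 as the example VMID from
this thread:

    # host-side CPU usage of the kvm process for VMID 109
    ps -eo pid,pcpu,args | grep '[k]vm -id 109'

That only confirms the guest as a whole is busy, though; it says nothing about which
process inside the guest is responsible.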

On Sat, Nov 29, 2014, 19:53 Cesar Peschiera  wrote:

> Hi Lindsay
>
> Many thanks for answer me :-)
>
> >What processes in the VM are hogging the CPU?
> htop is showing me that the process consuming all available processor time
> is:
> /usr/bin/kvm -id 109 -chardev socket, id=qmp,
> path=/var/run/qemu-server/109.qmp, server,nowait etc (a very long line)
>
> In my case, one such "/usr/bin/kvm -id 109 ..." line sits at 100% processor
> usage for each core that I had configured before starting the VM.
>
> >Is the VM memory fixed or balloon allocated?
> Fixed; the VM has the balloon driver installed, but PVE doesn't have this
> option enabled, and only this one VM is installed on PVE, nothing more.
>
>
>
> - Original Message -
> From: "Lindsay Mathieson" 
> To: 
> Sent: Saturday, November 29, 2014 11:12 PM
> Subject: Re: [pve-devel] Error in PVE with win2008r2 and 256GB RAM
>
>
> > ___
> > pve-devel mailing list
> > pve-devel@pve.proxmox.com
> > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> >
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Error between PVE and LVM

2014-11-29 Thread Daniel Hunsaker
It'll be a while. At best.

On Sat, Nov 29, 2014, 19:32 Cesar Peschiera  wrote:

>  Hi Daniel
>
> Many thanks for your reply.
>
> Now that the problem is understood, as well as the need, I would like to ask
> whether anybody can add to the PVE GUI the option I most need (an option any
> good storage administrator would also want).
>
> Best regards
> Cesar
>
> - Original Message -
> *From:* Daniel Hunsaker 
> *To:* Cesar Peschiera  ; Alexandre DERUMIER
>  ; pve-devel@pve.proxmox.com
>
> *Sent:* Saturday, November 29, 2014 3:38 PM
> *Subject:* Re: [pve-devel] Error between PVE and LVM
>
> OK, I think I see what you were saying there (having a bit of trouble with
> the grammar; my German is so rusty as to be useless, but learning it while
> I did gave me an appreciation for how hard it is to write/speak in a
> language I didn't grow up with, so I'm not complaining about it; I just
> mention it by way of apology for any remaining misunderstandings). You keep
> the VGs themselves small, to minimize administrative/maintenance task
> times. The situations in which you're looking to resize an LV are right
> after increasing the size of the VG the LV belongs to. This makes complete
> sense, and I feel a bit silly for not thinking of such a setup myself.
>
> On Sat, Nov 29, 2014, 11:20 Cesar Peschiera  wrote:
>
>>  Hi Daniel
>>
>> Thanks for your clarification, it is good to know, and please let me make
>> some comments:
>>
>> About growing the VM's virtual disk to the maximum: as an administration
>> strategy, if my hard disks don't have all of their space occupied by a
>> Physical Volume, it is very easy to expand anything in LVM online as
>> desired, and the free space can also be useful if you want to use it for
>> other things outside of LVM.
>>
>> Moreover, as an administration strategy, if my DRBD resource is kept very
>> small, the verification of the DRBD storage will take less time to
>> complete (we are talking hours here), and if later I want to grow the
>> DRBD resource for some need (which I can always do), it just means that
>> afterwards I will need to grow the Physical Volume and the Volume Group
>> that live inside the DRBD resource; in the end I save time on the DRBD
>> verification of its storage, which runs automatically and periodically,
>> as long as the space of these resources is kept limited.
>>
>> Unfortunately, the time it takes to finish the verification of replicated
>> or shared storage has always been an important topic of study and
>> strategy for its administrators, because the VMs are using these same
>> hard disks at the same time a verification is in progress, and obviously
>> this becomes more critical in terms of speed when a database depends on
>> these hard drives.
>>
>> And finally, I know that all hard drive manufacturers tell us about the
>> write error rate of their drives, and that is why (among other reasons)
>> any decent RAID controller has the option to verify these units.
>>
>> So in conclusion, and speaking in general terms, the systems need to
>> perform these tasks:
>> 1) Backup of the VMs.
>> 2) Verification of the replicated storage.
>> 3) Verification of the HDDs.
>> 4) All these tasks must be done online, but none at the same time, to
>> avoid degrading performance.
>>
>> That is why I need to reduce the time of these verifications as much as
>> possible, and also why I need to keep the data and storage as small as
>> possible, whether in a RAID controller, in LVM, DRBD, Gluster, Ceph, a
>> hardware storage appliance, or any other kind of storage.
>>
>> Best regards
>>  Cesar
>>
>>
>>
>> - Original Message -
>> *From:* Daniel Hunsaker 
>>
>>  *To:* Cesar Peschiera  ; Alexandre DERUMIER
>>  ; pve-devel@pve.proxmox.com
>> *Sent:* Saturday, November 29, 2014 8:43 AM
>> *Subject:* Re: [pve-devel] Error between PVE and LVM
>>
>> It'll be some time before such options can be added to the underlying
>> Proxmox API.
>>
>> As to your comments about the resize not showing in the GUI even with qm
>> resize, that isn't quite true. Let's say you'd resize your LV normally
>> using `lvextend -L 100G`. I understand you'd prefer to use `100%` here, but
>> just bear with my example for the moment and pretend you'd use `100G`.
>> Changing that to `qm resize 100G` would not only resize the LV to 100 GB,
>> but also 

Re: [pve-devel] Error between PVE and LVM

2014-11-29 Thread Daniel Hunsaker
OK, I think I see what you were saying there (having a bit of trouble with
the grammar; my German is so rusty as to be useless, but learning it while
I did gave me an appreciation for how hard it is to write/speak in a
language I didn't grow up with, so I'm not complaining about it; I just
mention it by way of apology for any remaining misunderstandings). You keep
the VGs themselves small, to minimize administrative/maintenance task
times. The situations in which you're looking to resize an LV are right
after increasing the size of the VG the LV belongs to. This makes complete
sense, and I feel a bit silly for not thinking of such a setup myself.

On Sat, Nov 29, 2014, 11:20 Cesar Peschiera  wrote:

>  Hi Daniel
>
> Thanks for your clarification, it is good to know, and please let me make
> some comments:
>
> About growing the VM's virtual disk to the maximum: as an administration
> strategy, if my hard disks don't have all of their space occupied by a
> Physical Volume, it is very easy to expand anything in LVM online as
> desired, and the free space can also be useful if you want to use it for
> other things outside of LVM.
>
> Moreover, as an administration strategy, if my DRBD resource is kept very
> small, the verification of the DRBD storage will take less time to
> complete (we are talking hours here), and if later I want to grow the
> DRBD resource for some need (which I can always do), it just means that
> afterwards I will need to grow the Physical Volume and the Volume Group
> that live inside the DRBD resource; in the end I save time on the DRBD
> verification of its storage, which runs automatically and periodically,
> as long as the space of these resources is kept limited.
>
> Unfortunately, the time it takes to finish the verification of replicated
> or shared storage has always been an important topic of study and
> strategy for its administrators, because the VMs are using these same
> hard disks at the same time a verification is in progress, and obviously
> this becomes more critical in terms of speed when a database depends on
> these hard drives.
>
> And finally, I know that all hard drive manufacturers tell us about the
> write error rate of their drives, and that is why (among other reasons)
> any decent RAID controller has the option to verify these units.
>
> So in conclusion, and speaking in general terms, the systems need to
> perform these tasks:
> 1) Backup of the VMs.
> 2) Verification of the replicated storage.
> 3) Verification of the HDDs.
> 4) All these tasks must be done online, but none at the same time, to
> avoid degrading performance.
>
> That is why I need to reduce the time of these verifications as much as
> possible, and also why I need to keep the data and storage as small as
> possible, whether in a RAID controller, in LVM, DRBD, Gluster, Ceph, a
> hardware storage appliance, or any other kind of storage.
>
> Best regards
> Cesar
>
>
>
> - Original Message -
> *From:* Daniel Hunsaker 
>
> *To:* Cesar Peschiera  ; Alexandre DERUMIER
>  ; pve-devel@pve.proxmox.com
> *Sent:* Saturday, November 29, 2014 8:43 AM
> *Subject:* Re: [pve-devel] Error between PVE and LVM
>
> It'll be some time before such options can be added to the underlying
> Proxmox API.
>
> As to your comments about the resize not showing in the GUI even with qm
> resize, that isn't quite true. Let's say you'd resize your LV normally
> using `lvextend -L 100G`. I understand you'd prefer to use `100%` here, but
> just bear with my example for the moment and pretend you'd use `100G`.
> Changing that to `qm resize 100G` would not only resize the LV to 100 GB,
> but also tell QEMU about the new size, which would in turn update the GUI.
> That's because the qm command hooks into the same code the GUI uses to make
> the changes it makes. So switching to `qm resize` will give you the VM/GUI
> update at the cost of using % sizes (for now).
>
> I still don't understand what benefit you'd get from increasing a single
> VM's LV to the full VG size. No other LV would then be able to grow within
> that VG. The only setup I can imagine that would avoid that problem is one
> where each LV has a dedicated VG, which is to say, each VG only has a
> single LV. That setup seems pretty inefficient to me, not really taking
> advantage of LVM's design principles. But then, maybe I'm just missing
> something.
>
> On Sat, Nov 29, 2014, 00:37 Cesar Peschiera  wrote:
>
>>  Hi Daniel
>>
>> As the moderns Windows systems have the options of grow or shrin

Re: [pve-devel] Error between PVE and LVM

2014-11-29 Thread Daniel Hunsaker
It'll be some time before such options can be added to the underlying
Proxmox API.

As to your comments about the resize not showing in the GUI even with qm
resize, that isn't quite true. Let's say you'd resize your LV normally
using `lvextend -L 100G`. I understand you'd prefer to use `100%` here, but
just bear with my example for the moment and pretend you'd use `100G`.
Changing that to `qm resize 100G` would not only resize the LV to 100 GB,
but also tell QEMU about the new size, which would in turn update the GUI.
That's because the qm command hooks into the same code the GUI uses to make
the changes it makes. So switching to `qm resize` will give you the VM/GUI
update at the cost of using % sizes (for now).
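
As a concrete sketch (the VMID, disk, and sizes here are just examples):

    # grow the disk to an absolute 100 GB - resizes the LV and tells QEMU,
    # so the new size also shows up in the GUI
    qm resize 109 virtio0 100G

    # or grow it by 10 GB relative to the current size
    qm resize 109 virtio0 +10G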

I still don't understand what benefit you'd get from increasing a single
VM's LV to the full VG size. No other LV would then be able to grow within
that VG. The only setup I can imagine that would avoid that problem is one
where each LV has a dedicated VG, which is to say, each VG only has a
single LV. That setup seems pretty inefficient to me, not really taking
advantage of LVM's design principles. But then, maybe I'm just missing
something.

On Sat, Nov 29, 2014, 00:37 Cesar Peschiera  wrote:

>  Hi Daniel
>
> As modern Windows systems can grow or shrink the partitions of their HDDs
> online, adding such features to the PVE GUI would be very pleasant.
>
> And if it is possible, also adding an option to "resize to the max allowed"
> (the space available in its Volume Group) would be fantastic.
>
> Best regards
> Cesar
>
> - Original Message -
> *From:* Daniel Hunsaker 
> *To:* Alexandre DERUMIER 
>
> *Cc:* pve-devel@pve.proxmox.com ; Cesar Peschiera 
> *Sent:* Friday, November 28, 2014 3:55 AM
> *Subject:* Re: [pve-devel] Error between PVE and LVM
>
> The only difference between lvresize and lvextend is that lvresize
> supports shrinking the volume as well as growing it. My comments aren't
> questions so much as observations on a patch series I'm planning to create
> and submit that will allow shrinking volumes as well as expanding them.
> Probably roll the max-free option in there as well if it hasn't been added
> already by then, though I wonder at the utility of such an operation, since
> you could only use it on one lv per vg.
>
> On Thu, Nov 27, 2014, 23:49 Alexandre DERUMIER 
> wrote:
>
>> Sorry, but I'm a bit lost in all the discussion.
>>
>> are your questions (both daniel and cesar) about shrinking ?  or extend ?
>>
>> (I haven't used lvm in a long time; I don't know the difference
>> between lvresize and lvextend).
>>
>> @cesar, I don't understand why, since the beginning of this discussion, you
>> resize the lvm disk manually.
>> (If the need is to do it command line, use qm resize, it'll extend the
>> lvm volume and tell to qemu the new size)
>>
>>
>>
>>
>> - Mail original -
>>
>> De: "Daniel Hunsaker" 
>> À: "Alexandre DERUMIER" , "Cesar Peschiera" <
>> br...@click.com.py>
>> Cc: pve-devel@pve.proxmox.com
>> Envoyé: Vendredi 28 Novembre 2014 06:32:15
>> Objet: Re: [pve-devel] Error between PVE and LVM
>>
>> Ah, good, it does support +size. In that case, simply swapping `lvresize`
>> into the code in place of `lvextend` (along with properly handling -size to
>> convert it to an absolute size for the command) would add shrinking support
>> to LVM storage. Just need to further explore the other storage plugins'
>> resize options to get shrinking for them as well. Makes the patches a bit
>> simpler.
>>
>> And yes, I always forget about `qm`. It's quite a bit simpler than
>> `pvesh`.
>>
>>
>> On Thu, Nov 27, 2014, 22:11 Alexandre DERUMIER < aderum...@odiso.com >
>> wrote:
>>
>>
>> >>But for this moment, i have two questions:
>> >>1) Do I have any simpler option to grow my LV(that is the HDD of the
>> VM) by
>> >>CLI?
>> >>2) If the answer is correct, what exactly should i execute?
>>
>> all the gui feature are available with cli, with "qm" command.
>>
>>
>>
>> #qm resize   
>>
>> #man qm
>> qm resize[OPTIONS]
>>
>> Extend volume size.
>>
>>  integer (1 - N)
>>
>> The (unique) ID of the VM.
>>
>>  (ide0 | ide1 | ide2 | ide3 | sata0 | sata1 | sata2 | sata3 |
>> sata4 | sata5 | scsi0 | scsi1 | scsi10 | scsi11 | scsi12 |
>> scsi13 | scsi2 | scsi3 | scsi4 | scsi5 | scsi6 | scsi7 | scsi8
>> | scs

Re: [pve-devel] Error between PVE and LVM

2014-11-27 Thread Daniel Hunsaker
The only difference between lvresize and lvextend is that lvresize supports
shrinking the volume as well as growing it. My comments aren't questions so
much as observations on a patch series I'm planning to create and submit
that will allow shrinking volumes as well as expanding them. Probably roll
the max-free option in there as well if it hasn't been added already by
then, though I wonder at the utility of such an operation, since you could
only use it on one lv per vg.
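
In shell terms the difference is simply this (device path and sizes are
illustrative):

    # lvextend can only grow a volume
    lvextend -L +10G /dev/drbdvg2/vm-100-disk-1

    # lvresize accepts the same syntax but can also shrink, which is only safe
    # if the filesystem inside the guest has been shrunk first
    lvresize -L 20G /dev/drbdvg2/vm-100-disk-1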

On Thu, Nov 27, 2014, 23:49 Alexandre DERUMIER  wrote:

> Sorry, but I'm a bit lost in all the discussion.
>
> are your questions (both daniel and cesar) about shrinking ?  or extend ?
>
> (I haven't used lvm in a long time; I don't know the difference
> between lvresize and lvextend).
>
> @cesar, I don't understand why, since the beginning of this discussion, you
> resize the lvm disk manually.
> (If the need is to do it command line, use qm resize, it'll extend the lvm
> volume and tell to qemu the new size)
>
>
>
>
> - Mail original -
>
> De: "Daniel Hunsaker" 
> À: "Alexandre DERUMIER" , "Cesar Peschiera" <
> br...@click.com.py>
> Cc: pve-devel@pve.proxmox.com
> Envoyé: Vendredi 28 Novembre 2014 06:32:15
> Objet: Re: [pve-devel] Error between PVE and LVM
>
> Ah, good, it does support +size. In that case, simply swapping `lvresize`
> into the code in place of `lvextend` (along with properly handling -size to
> convert it to an absolute size for the command) would add shrinking support
> to LVM storage. Just need to further explore the other storage plugins'
> resize options to get shrinking for them as well. Makes the patches a bit
> simpler.
>
> And yes, I always forget about `qm`. It's quite a bit simpler than `pvesh`.
>
>
> On Thu, Nov 27, 2014, 22:11 Alexandre DERUMIER < aderum...@odiso.com >
> wrote:
>
>
> >>But for this moment, i have two questions:
> >>1) Do I have any simpler option to grow my LV(that is the HDD of the VM)
> by
> >>CLI?
> >>2) If the answer is correct, what exactly should i execute?
>
> all the gui feature are available with cli, with "qm" command.
>
>
>
> #qm resize   
>
> #man qm
> qm resize[OPTIONS]
>
> Extend volume size.
>
>  integer (1 - N)
>
> The (unique) ID of the VM.
>
>  (ide0 | ide1 | ide2 | ide3 | sata0 | sata1 | sata2 | sata3 |
> sata4 | sata5 | scsi0 | scsi1 | scsi10 | scsi11 | scsi12 |
> scsi13 | scsi2 | scsi3 | scsi4 | scsi5 | scsi6 | scsi7 | scsi8
> | scsi9 | virtio0 | virtio1 | virtio10 | virtio11 | virtio12 |
> virtio13 | virtio14 | virtio15 | virtio2 | virtio3 | virtio4 |
> virtio5 | virtio6 | virtio7 | virtio8 | virtio9)
>
> The disk you want to resize.
>
>  \+?\d+(\.\d+)?[KMGT]?
>
> The new size. With the '+' sign the value is added to the
> actual size of the volume and without it, the value is taken
> as an absolute one. Shrinking disk size is not supported.
>
> -digest string
>
> Prevent changes if current configuration file has different
> SHA1 digest. This can be used to prevent concurrent
> modifications.
>
> -skiplock boolean
>
> Ignore locks - only root is allowed to use this option.
>
>
> - Mail original -
>
> De: "Cesar Peschiera" < br...@click.com.py >
> À: "Daniel Hunsaker" < danhunsa...@gmail.com >, "Alexandre DERUMIER" <
> aderum...@odiso.com >
> Cc: pve-devel@pve.proxmox.com
> Envoyé: Jeudi 27 Novembre 2014 21:01:28
> Objet: Re: [pve-devel] Error between PVE and LVM
>
> Thanks Daniel, your words are encouraging for the future of PVE and for me.
>
> But for this moment, i have two questions:
> 1) Do I have any simpler option to grow my LV(that is the HDD of the VM) by
> CLI?
> 2) If the answer is correct, what exactly should i execute?
>
> Best regards
> Cesar
>
>
> - Original Message -
> From: Daniel Hunsaker
> To: Alexandre DERUMIER ; Cesar Peschiera
> Cc: pve-devel@pve.proxmox.com
> Sent: Thursday, November 27, 2014 3:37 PM
> Subject: Re: [pve-devel] Error between PVE and LVM
>
>
> If the GUI is resizing volumes, the API supports it, which means you should
> be able to use `pvesh` to do the operation in one command, instead of using
> the LVM commands and QEMU monitor directly. It does only support specifying
> the new size in bytes (which it seems to convert to MiB before actually
> using), but it's still an option.
>
> As for the "max available" option, I'd personally find it more useful to
> upgrade the API itself support the full range of `lvresize -L` values (it
> currently uses 

Re: [pve-devel] Error between PVE and LVM

2014-11-27 Thread Daniel Hunsaker
Ah, good, it does support +size. In that case, simply swapping `lvresize`
into the code in place of `lvextend` (along with properly handling -size to
convert it to an absolute size for the command) would add shrinking support
to LVM storage. Just need to further explore the other storage plugins'
resize options to get shrinking for them as well. Makes the patches a bit
simpler.

And yes, I always forget about `qm`. It's quite a bit simpler than `pvesh`.

On Thu, Nov 27, 2014, 22:11 Alexandre DERUMIER  wrote:

> >>But for this moment, i have two questions:
> >>1) Do I have any simpler option to grow my LV(that is the HDD of the VM)
> by
> >>CLI?
> >>2) If the answer is correct, what exactly should i execute?
>
> all the gui feature are available with cli, with "qm" command.
>
>
>
> #qm resize   
>
> #man qm
>   qm resize[OPTIONS]
>
>   Extend volume size.
>
>integer (1 - N)
>
> The (unique) ID of the VM.
>
>(ide0 | ide1 | ide2 | ide3 | sata0 | sata1 | sata2 |
> sata3 |
> sata4 | sata5 | scsi0 | scsi1 | scsi10 | scsi11 |
> scsi12 |
> scsi13 | scsi2 | scsi3 | scsi4 | scsi5 | scsi6 | scsi7
> | scsi8
> | scsi9 | virtio0 | virtio1 | virtio10 | virtio11 |
> virtio12 |
> virtio13 | virtio14 | virtio15 | virtio2 | virtio3 |
> virtio4 |
> virtio5 | virtio6 | virtio7 | virtio8 | virtio9)
>
> The disk you want to resize.
>
>\+?\d+(\.\d+)?[KMGT]?
>
> The new size. With the '+' sign the value is added to
> the
> actual size of the volume and without it, the value is
> taken
> as an absolute one. Shrinking disk size is not
> supported.
>
>   -digeststring
>
> Prevent changes if current configuration file has
> different
> SHA1 digest. This can be used to prevent concurrent
>     modifications.
>
>   -skiplock  boolean
>
> Ignore locks - only root is allowed to use this option.
>
>
> - Mail original -
>
> De: "Cesar Peschiera" 
> À: "Daniel Hunsaker" , "Alexandre DERUMIER" <
> aderum...@odiso.com>
> Cc: pve-devel@pve.proxmox.com
> Envoyé: Jeudi 27 Novembre 2014 21:01:28
> Objet: Re: [pve-devel] Error between PVE and LVM
>
> Thanks Daniel, your words are encouraging for the future of PVE and for me.
>
> But for this moment, i have two questions:
> 1) Do I have any simpler option to grow my LV(that is the HDD of the VM) by
> CLI?
> 2) If the answer is correct, what exactly should i execute?
>
> Best regards
> Cesar
>
>
> - Original Message -
> From: Daniel Hunsaker
> To: Alexandre DERUMIER ; Cesar Peschiera
> Cc: pve-devel@pve.proxmox.com
> Sent: Thursday, November 27, 2014 3:37 PM
> Subject: Re: [pve-devel] Error between PVE and LVM
>
>
> If the GUI is resizing volumes, the API supports it, which means you should
> be able to use `pvesh` to do the operation in one command, instead of using
> the LVM commands and QEMU monitor directly. It does only support specifying
> the new size in bytes (which it seems to convert to MiB before actually
> using), but it's still an option.
>
> As for the "max available" option, I'd personally find it more useful to
> upgrade the API itself to support the full range of `lvresize -L` values (it
> currently uses `lvextend`, which means volumes cannot be reduced in size -
> a
> fairly safe approach in case the filesystem inside the VM hasn't been
> reduced in advance, but also a bit restrictive), or at least the largest
> subset we could also support for other storage plugins. I'll see about
> implementing that if nobody else gets to it first.
>
>
> On Thu, Nov 27, 2014, 09:15 Alexandre DERUMIER 
> wrote:
>
> >>This process is correct when you use the GUI, but when you have space
> >>limited in the hard disk, and you want to change some partitions by CLI,
> >>where finally will be working with the logical volumes, is when starting
> >>the
> >>problem due that the VM not see the change applied.
>
> ah ok.
>
> This is normal, you need to tell to qemu what is the new size.
> (This is what we are doing in the code : vm_mon_cmd($vmid, "block_resize",
> device => $deviceid, size => int($size)); )
>
> if you manually upgrade the disk size,
> you need to use the monitor :
>
> #block_resize device size
>
> ex:

Re: [pve-devel] Error between PVE and LVM

2014-11-27 Thread Daniel Hunsaker
The actual resize is done via `lvresize` or `lvextend` either way, so for
now, that's still your best bet, like always.  However, you'll also need to
access the QEMU monitor and issue the command Alexandre recommended in it.
As I understand it, there isn't currently a way to access the monitor
outside the Web UI while pve-manager is already connected to it, but I
haven't had much need to access it directly outside the UI myself.
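
Roughly, the manual path would look like this (names and sizes are examples, and
`qm monitor` is assumed to get you a monitor prompt on your version):

    # grow the LV backing the virtual disk
    lvextend -L +10G /dev/drbdvg2/vm-100-disk-1

    # then, at the prompt opened by `qm monitor 100`, tell QEMU the new size
    # in bytes, per Alexandre's note:
    #   block_resize drive-virtio0 34359738368
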
On Nov 27, 2014 1:02 PM, "Cesar Peschiera"  wrote:

> Thanks Daniel, your words are encouraging for the future of PVE and for me.
>
> But for this moment, i have two questions:
> 1) Do I have any simpler option to grow my LV(that is the HDD of the VM)
> by CLI?
> 2) If the answer is correct, what exactly should i execute?
>
> Best regards
> Cesar
>
>
> - Original Message - From: Daniel Hunsaker
> To: Alexandre DERUMIER ; Cesar Peschiera
> Cc: pve-devel@pve.proxmox.com
> Sent: Thursday, November 27, 2014 3:37 PM
> Subject: Re: [pve-devel] Error between PVE and LVM
>
>
> If the GUI is resizing volumes, the API supports it, which means you
> should be able to use `pvesh` to do the operation in one command, instead
> of using the LVM commands and QEMU monitor directly. It does only support
> specifying the new size in bytes (which it seems to convert to MiB before
> actually using), but it's still an option.
>
> As for the "max available" option, I'd personally find it more useful to
> upgrade the API itself to support the full range of `lvresize -L` values (it
> currently uses `lvextend`, which means volumes cannot be reduced in size -
> a fairly safe approach in case the filesystem inside the VM hasn't been
> reduced in advance, but also a bit restrictive), or at least the largest
> subset we could also support for other storage plugins. I'll see about
> implementing that if nobody else gets to it first.
>
>
> On Thu, Nov 27, 2014, 09:15 Alexandre DERUMIER 
> wrote:
>
>  This process is correct when you use the GUI, but when you have space
>>> limited in the hard disk, and you want to change some partitions by CLI,
>>> where finally will be working with the logical volumes, is when starting
>>> the
>>> problem due that the VM not see the change applied.
>>>
>>
> ah ok.
>
> This is normal, you need to tell to qemu what is the new size.
> (This is what we are doing in the code : vm_mon_cmd($vmid, "block_resize",
> device => $deviceid, size => int($size)); )
>
> if you manually upgrade the disk size,
> you need to use the monitor :
>
> #block_resize device size
>
> ex:
>
> #block_resize drive-virtio0 sizeinbytes
>
>
>
>
>
>
>
> - Mail original -
>
> De: "Cesar Peschiera" 
> À: "Alexandre DERUMIER" 
> Cc: pve-devel@pve.proxmox.com
> Envoyé: Jeudi 27 Novembre 2014 17:00:37
> Objet: Re: [pve-devel] Error between PVE and LVM
>
> Hi Alexandre
>
> > This value correctly change after resize ?
> Yes - before the change, the logical volume had a smaller size.
>
> > We first extend the lvm disk, then we tell to qemu the new disk.
> This process is correct when you use the GUI, but when space on the hard
> disk is limited and you want to change some partitions via the CLI,
> which in the end means working directly with the logical volumes, that is
> when the problem starts, because the VM does not see the change that was
> applied.
>
> Please let me make two suggestions:
> - Maybe it would be better if the PVE GUI had an option that says: "resize
> to max available", or something similar.
> I guess this first option would be very good because the user would not
> need to calculate the available space while accounting for the space used
> by the metadata of LVM.
>
> - Moreover, in previous versions of PVE, although I could see the changes
> reflected inside the VM, the PVE GUI still showed the old size of its hard
> disk, so I had to remove the disk and re-add it; only in this way could I
> see its new size.
>
> > What kind of disk do you use in your guest ? virtio ? scsi ? ide ?
> Virtio-block; moreover, I have heard good things about virtio-scsi - do
> you know anything about using virtio-scsi with Windows systems?
>
> Many thanks again for your attention
> Best regards
> Cesar
>
>
> - Original Message -
> From: "Alexandre DERUMIER" 
> To: "Cesar Peschiera" 
> Cc: 
> Sent: Thursday, November 27, 2014 6:35 AM
> Subject: Re: [pve-devel] Error between PVE and LVM
>
>
> So,
>
>  shell# lvs
>>> LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert

Re: [pve-devel] Error between PVE and LVM

2014-11-27 Thread Daniel Hunsaker
If the GUI is resizing volumes, the API supports it, which means you should
be able to use `pvesh` to do the operation in one command, instead of using
the LVM commands and QEMU monitor directly. It does only support specifying
the new size in bytes (which it seems to convert to MiB before actually
using), but it's still an option.

As for the "max available" option, I'd personally find it more useful to
upgrade the API itself to support the full range of `lvresize -L` values (it
currently uses `lvextend`, which means volumes cannot be reduced in size -
a fairly safe approach in case the filesystem inside the VM hasn't been
reduced in advance, but also a bit restrictive), or at least the largest
subset we could also support for other storage plugins. I'll see about
implementing that if nobody else gets to it first.
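
For reference, a one-liner along those lines might be (node name, VMID, and disk are
placeholders, and the size is in bytes as noted above - treat the exact option
spelling as a sketch rather than gospel):

    pvesh set /nodes/<nodename>/qemu/109/resize -disk virtio0 -size 34359738368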

On Thu, Nov 27, 2014, 09:15 Alexandre DERUMIER  wrote:

> >>This process is correct when you use the GUI, but when you have space
> >>limited in the hard disk, and you want to change some partitions by CLI,
> >>where finally will be working with the logical volumes, is when starting
> the
> >>problem due that the VM not see the change applied.
>
> ah ok.
>
> This is normal, you need to tell to qemu what is the new size.
> (This is what we are doing in the code : vm_mon_cmd($vmid, "block_resize",
> device => $deviceid, size => int($size)); )
>
> if you manually upgrade the disk size,
> you need to use the monitor :
>
> #block_resize device size
>
> ex:
>
> #block_resize drive-virtio0 sizeinbytes
>
>
>
>
>
>
>
> - Mail original -
>
> De: "Cesar Peschiera" 
> À: "Alexandre DERUMIER" 
> Cc: pve-devel@pve.proxmox.com
> Envoyé: Jeudi 27 Novembre 2014 17:00:37
> Objet: Re: [pve-devel] Error between PVE and LVM
>
> Hi Alexandre
>
> >This value correctly change after resize ?
> Yes - before the change, the logical volume had a smaller size.
>
> >We first extend the lvm disk, then we tell to qemu the new disk.
> This process is correct when you use the GUI, but when space on the hard
> disk is limited and you want to change some partitions via the CLI,
> which in the end means working directly with the logical volumes, that is
> when the problem starts, because the VM does not see the change that was
> applied.
>
> Please let me make two suggestions:
> - Maybe it would be better if the PVE GUI had an option that says: "resize
> to max available", or something similar.
> I guess this first option would be very good because the user would not
> need to calculate the available space while accounting for the space used
> by the metadata of LVM.
>
> - Moreover, in previous versions of PVE, although I could see the changes
> reflected inside the VM, the PVE GUI still showed the old size of its hard
> disk, so I had to remove the disk and re-add it; only in this way could I
> see its new size.
>
> >What kind of disk do you use in your guest ? virtio ? scsi ? ide ?
> Virtio-block; moreover, I have heard good things about virtio-scsi - do
> you know anything about using virtio-scsi with Windows systems?
>
> Many thanks again for your attention
> Best regards
> Cesar
>
>
> - Original Message -
> From: "Alexandre DERUMIER" 
> To: "Cesar Peschiera" 
> Cc: 
> Sent: Thursday, November 27, 2014 6:35 AM
> Subject: Re: [pve-devel] Error between PVE and LVM
>
>
> So,
>
> >>shell# lvs
> >>LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
> >>vm-100-disk-1 drbdvg2 -wi-- 30.00g
>
> This value correctly change after resize ?
>
>
>
> the resize code is here:
>
> we first extend the lvm disk, then we tell to qemu the new disk.
>
> (What kind of disk do you use in your guest ? virtio ? scsi ? ide ?)
>
>
>
>
> /usr/share/perl5/PVE/QemuServer.pm
>
>
> sub qemu_block_resize {
>     my ($vmid, $deviceid, $storecfg, $volid, $size) = @_;
>
>     my $running = check_running($vmid);
>
>     return if !PVE::Storage::volume_resize($storecfg, $volid, $size, $running);
>
>     return if !$running;
>
>     vm_mon_cmd($vmid, "block_resize", device => $deviceid, size => int($size));
> }
>
>
> /usr/share/perl5/PVE/Storage/LVMPlugin.pm
>
> sub volume_resize {
>     my ($class, $scfg, $storeid, $volname, $size, $running) = @_;
>
>     $size = ($size/1024/1024) . "M";
>
>     my $path = $class->path($scfg, $volname);
>     my $cmd = ['/sbin/lvextend', '-L', $size, $path];
>     run_command($cmd, errmsg => "error resizing volume '$path'");
>
>     return 1;
> }
>
> - Mail original -
>
> De: "Cesar Peschiera" 
> À: "Alexandre DERUMIER" 
> Cc: pve-devel@pve.proxmox.com
> Envoyé: Jeudi 27 Novembre 2014 09:11:29
> Objet: Re: [pve-devel] Error between PVE and LVM
>
> Hi Alexandre
>
> Thanks for your attention, here my answers and suggestions about of the
> problem of your customer:
>
> >We have made no change since resize feature has been implemented.
> >Can you describe a little bit more the problem on the guest side ?
> >do you see disk size increase with parted/fdisk ?
> I see the new size (vm-100-disk-1) of the logical volume by CLI, but it
> isn't reflected into the VM.

Re: [pve-devel] bug discovered

2014-11-11 Thread Daniel Hunsaker
We *have* operated largely under the understanding that not all Proxmox
sysadmins will fully understand the process that each command goes through
before they invoke it.  A sanity check to ensure the directory can be
manipulated before starting the cluster join would be in keeping with that
approach.
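
Purely for illustration (this is not the actual pvecm code), the check being
suggested could be as small as:

    # refuse to continue if the working directory is inside /etc/pve, since the
    # pve filesystem cannot be remounted while a process holds its cwd there
    case "$(pwd)" in
        /etc/pve|/etc/pve/*)
            echo "error: run this from outside /etc/pve (e.g. 'cd ~' first)" >&2
            exit 1
            ;;
    esac
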
On Nov 11, 2014 11:52 PM, "Michael Rasmussen"  wrote:

> On Wed, 12 Nov 2014 05:36:40 +
> Dietmar Maurer  wrote:
>
> > > 5) initial cluster joining starts which at some point involves
> restarting pve-
> > > cluster. when pve-cluster restarts the running initial cluster joining
> process halts
> > > since pve-cluster is not able to remount pve filesystem since we block
> remount
> > > as our current directory is the mount point for pve filesystem.
> >
> > Sure, you cannot do that if you current directory is inside /etc/pve.
> The solution is
> > to change to another directory, for example
> >
> > # cd
> > # pvecm add ...
> >
> >
> >
> I know this, but when the chance of failure is 100% the command should
> warn the user and refuse to continue.
>
> --
> Hilsen/Regards
> Michael Rasmussen
>
> Get my public GnuPG keys:
> michael  rasmussen  cc
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
> mir  datanom  net
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
> mir  miras  org
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
> --
> /usr/games/fortune -es says:
> The IBM 2250 is impressive ...
> if you compare it with a system selling for a tenth its price.
> -- D. Cohen
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Maintenance mode

2014-11-11 Thread Daniel Hunsaker
I'd still like to see settings for default migration targets, possibly a
list instead of a single entry (so there are fallbacks in case the first
node is inaccessible), on a per-VM basis.  Per-node defaults would probably
also be useful, too - if nothing else, they might pre-select the node in
the dialog you're already building anyway, though if it's a list of
defaults instead of a single entry, the logic obviously gets a bit more
complex.  HA settings would, of course, need to be honored regardless.
On Nov 11, 2014 7:32 PM, "Michael Rasmussen"  wrote:

> On Tue, 11 Nov 2014 21:22:51 -0500
> Eric Blevins  wrote:
>
> > I use DRBD on many nodes. The DRBD storage is marked as shared and
> > accessible to only the two nodes that have access to the volume.
> >
> > If maintenance mode does not take the storage accessibility into account
> > migrations would fail.
> >
> If a node is not accessible from the node you wish to put into
> maintenance mode, no VMs will be migrated to that node.
>
> Alternatively you could simply choose the correct target node manually.
> The implementation will give you these options for target:
> 1) Evenly distributed to all accessible nodes
> 2) Manually selected node(s)
>
> --
> Hilsen/Regards
> Michael Rasmussen
>
> Get my public GnuPG keys:
> michael  rasmussen  cc
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
> mir  datanom  net
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
> mir  miras  org
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
> --
> /usr/games/fortune -es says:
> It is often easier to ask for forgiveness than to ask for permission.
> -- Grace Murray Hopper
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Maintenance mode

2014-11-10 Thread Daniel Hunsaker
> Would a "shutdown" option for VM's make sense?

I think there would be situations where that would make sense.  We have
suspend now, too, so it would make sense to add that to the list as well.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Maintenance mode

2014-11-10 Thread Daniel Hunsaker
Agree with Andrew on this one.  I could also see default migration paths
being defined somewhere, such that when a node goes into maintenance mode,
VMs are migrated to the preconfigured target node, with a fallback to the
user-selected mode (that is, distribute evenly, or distribute to selected
nodes) if the target node is unavailable or not specified in the VM
config.  Haven't looked at HA in a while to remember if its configuration
already implements something like this, but if so, honoring that would be a
good idea.  I imagine there will be clusters with disparate hardware (mine
tend to be, anyway), and as such some nodes will be capable of running some
VMs that other nodes will not (the key example here being nodes without CPU
virtualization support, but other examples exist).  Being able to define
where VMs will migrate by default could help mitigate that.
On Nov 10, 2014 5:42 PM, "Andrew Thrift"  wrote:

> Option 4, You could have a drop down box with a default selection of
> "Distribute Evenly" then a list of all available nodes.
> This would allow the user to just click through and have them distribute
> evenly,  OR they could select a specific node.
>
>
>
> On Tue, Nov 11, 2014 at 12:32 PM, Michael Rasmussen 
> wrote:
>
>> On Tue, 11 Nov 2014 00:27:46 +0100
>> Michael Rasmussen  wrote:
>>
>> > Hi all,
>> >
>> > Just to let you know. I have started implementing the feature for
>> > applying maintenance mode to a node. Right-click on any node will open
>> > a context menu which, at the moment, only contains one item:
>> > Maintenance. See attached screenshot.
>> >
>> A quick poll.
>>
>> 1) Should the user select a specific target node?
>> 2) Should the user select a list of target nodes?
>> 3) Should the implementation migrate running VM's evenly across all
>> other nodes?
>> 4) Should the user be able to select any one of the methods above?
>>
>> --
>> Hilsen/Regards
>> Michael Rasmussen
>>
>> Get my public GnuPG keys:
>> michael  rasmussen  cc
>> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
>> mir  datanom  net
>> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
>> mir  miras  org
>> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
>> --
>> /usr/games/fortune -es says:
>> "How should I know if it works?  That's what beta testers are for.  I
>> only coded it."
>> (Attributed to Linus Torvalds, somewhere in a posting)
>>
>> ___
>> pve-devel mailing list
>> pve-devel@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>>
>>
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] Use block storage migration for migration of KVM machines with local based storages

2014-11-04 Thread Daniel Hunsaker
On Nov 3, 2014 4:13 AM, "Kamil Trzcinski"  wrote:
>
> - for stopped VM start it for migration

This may not always be desired.  For example, I often create VMs on one
node, but don't want them to spin up until they're on another node.  I
create/maintain templates on an internal node which connects via OpenVPN to
the public nodes, so my workflow makes more sense with create locally,
migrate, then start, since my internal system isn't configured anywhere
close to how my external systems are.  Some of these systems wouldn't even
start internally, since they rely on the host being set up how the external
nodes are, and the internal node isn't set up that way.

I guess what I'm saying is it would be good to at least have a node-level
option to disable this step (starting VMs before migration is a sane
default, just let me opt out).  The more granularity the better, but there
does come a point where options become excessive, so per-VM or
per-migration disable might be overkill.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] qemu-server : implement pending changes v2

2014-10-30 Thread Daniel Hunsaker
The pending changes are added to the configuration file, so the changes
would likely be migrated, but they wouldn't be applied until the normal
process for applying them is performed.  The benefit here is that they
aren't immediately applied to the VM's settings, the way they are without
this patch.  Basically, this change makes it so you can change anything you
want, and the VM will live-migrate cleanly between systems.  The changes
will then be committed at whatever point you apply them, be it via
restarting the VM or otherwise.  Suddenly it's safe to change any setting
you like on a VM under HA, which is very useful.
On Oct 30, 2014 5:00 PM, "Cesar Peschiera"  wrote:

> Hi
>
> I have a question, please see below
>
> - Original Message - From: "Alexandre DERUMIER" <
> aderum...@odiso.com>
> To: "Stanislav German-Evtushenko" 
> Cc: 
> Sent: Thursday, October 30, 2014 1:32 PM
> Subject: Re: [pve-devel] qemu-server : implement pending changes v2
>
>
>  Some questions regarding this patch:
 - What happens when we do online migration with pending changes?

>>>
>> pending changes not applied.
>> (Currently if you change a non-hotpluggable value and you do a live
>> migration, you have a good chance of a crash)
>>
>>   :-(
> Is it possible for a change to a non-hotpluggable value to also be migrated with
> the VM?
>
> Best regards
> Cesar Peschiera
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] Make qm clone working with devices.

2014-10-25 Thread Daniel Hunsaker
Wouldn't this cause issues with multiple systems attempting to control the
same device simultaneously?
On Oct 25, 2014 12:07 PM, "Jasmin Jessich"  wrote:

> Signed-off-by: Jasmin Jessich 
> ---
>  PVE/API2/Qemu.pm  | 5 -
>  PVE/QemuServer.pm | 8 
>  2 files changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index a0fcd28..32ee6de 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -71,6 +71,7 @@ my $check_storage_access_clone = sub {
> my ($ds, $drive) = @_;
>
> my $isCDROM = PVE::QemuServer::drive_is_cdrom($drive);
> +   my $isDEVICE = PVE::QemuServer::drive_is_device($drive);
>
> my $volid = $drive->{file};
>
> @@ -86,7 +87,7 @@ my $check_storage_access_clone = sub {
> $sharedvm = 0 if !$scfg->{shared};
>
> }
> -   } else {
> +   } elsif (!$isDEVICE) {
> my ($sid, $volname) = PVE::Storage::parse_volume_id($volid);
> my $scfg = PVE::Storage::storage_config($storecfg, $sid);
> $sharedvm = 0 if !$scfg->{shared};
> @@ -2260,6 +2261,8 @@ __PACKAGE__->register_method({
> die "unable to parse drive options for '$opt'\n" if
> !$drive;
> if (PVE::QemuServer::drive_is_cdrom($drive)) {
> $newconf->{$opt} = $value; # simply copy
> configuration
> +   } elsif (PVE::QemuServer::drive_is_device($drive)) {
> +   $newconf->{$opt} = $value; # simply copy
> configuration
> } else {
> if ($param->{full}) {
> die "Full clone feature is not available"
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 98264d1..49fbffa 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -1272,6 +1272,14 @@ sub drive_is_cdrom {
>
>  }
>
> +sub drive_is_device {
> +my ($drive) = @_;
> +
> +my $volid = $drive->{file};
> +
> +return $volid && $volid =~ m/^\/dev\//;
> +}
> +
>  sub parse_hostpci {
>  my ($value) = @_;
>
> --
> 1.8.3.2
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 4/4] Add suspend/resume options to the mobile web UI menus

2014-10-08 Thread Daniel Hunsaker
From: Dan Hunsaker 

With the new mobile interface, we need to implement UI changes in two
places.  This lets us simplify our mobile interface so it isn't cluttered
with options that mobile browsers can't easily handle, usually due to size.
This patch implements Suspend and Resume of VMs and CTs via the mobile
web UI.

Signed-off-by: Dan Hunsaker 
---
 www/mobile/OpenVzSummary.js | 12 
 www/mobile/QemuSummary.js   | 12 
 2 files changed, 24 insertions(+)

diff --git a/www/mobile/OpenVzSummary.js b/www/mobile/OpenVzSummary.js
index f71fbec..6450bca 100644
--- a/www/mobile/OpenVzSummary.js
+++ b/www/mobile/OpenVzSummary.js
@@ -159,6 +159,18 @@ Ext.define('PVE.OpenVzSummary', {
}
},
{ 
+   text: gettext('Suspend'),
+   handler: function() {
+   me.vm_command("suspend", {});
+   }
+   },
+   { 
+   text: gettext('Resume'),
+   handler: function() {
+   me.vm_command("resume", {});
+   }
+   },
+   { 
text: gettext('Shutdown'),
handler: function() {
me.vm_command("shutdown", {});
diff --git a/www/mobile/QemuSummary.js b/www/mobile/QemuSummary.js
index eb33222..a0b3ef0 100644
--- a/www/mobile/QemuSummary.js
+++ b/www/mobile/QemuSummary.js
@@ -162,6 +162,18 @@ Ext.define('PVE.QemuSummary', {
}
},
{ 
+   text: gettext('Suspend'),
+   handler: function() {
+   me.vm_command("suspend", {});
+   }
+   },
+   { 
+   text: gettext('Resume'),
+   handler: function() {
+   me.vm_command("resume", {});
+   }
+   },
+   { 
text: gettext('Shutdown'),
handler: function() {
me.vm_command("shutdown", {});
-- 
1.9.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 3/4] Add suspend/resume options to web UI CmdMenus

2014-10-08 Thread Daniel Hunsaker
From: Dan Hunsaker 

The PVE2 API supports suspend/resume of VMs (and now CTs), but the web UI
doesn't make these options available.  This patch adds Suspend and Resume
items to the CmdMenus of OpenVZ and QEMU guests.  I considered adding the
options to the toolbar, but since it is already pretty full, I opted
against doing so for the moment.  Perhaps the various startup options can
be combined into a dropdown menu similar to how the console options are
set up, and the various shutdown opitons combined into another.  That
would provide the necesarry space to add the Suspend and Resume options
there.

This patch also provides descriptions for Suspend and Resume tasks in the
task logs, bringing full suspend/resume support to the web GUI.

Signed-off-by: Dan Hunsaker 
---
 www/manager/Utils.js  |  2 ++
 www/manager/openvz/CmdMenu.js | 27 ---
 www/manager/qemu/CmdMenu.js   | 20 
 3 files changed, 46 insertions(+), 3 deletions(-)

diff --git a/www/manager/Utils.js b/www/manager/Utils.js
index f95c180..93bd90b 100644
--- a/www/manager/Utils.js
+++ b/www/manager/Utils.js
@@ -510,6 +510,8 @@ Ext.define('PVE.Utils', { statics: {
vzmount: ['CT', gettext('Mount') ],
vzumount: ['CT', gettext('Unmount') ],
vzshutdown: ['CT', gettext('Shutdown') ],
+   vzsuspend: [ 'CT', gettext('Suspend') ],
+   vzresume: [ 'CT', gettext('Resume') ],
hamigrate: [ 'HA', gettext('Migrate') ],
hastart: [ 'HA', gettext('Start') ],
hastop: [ 'HA', gettext('Stop') ],
diff --git a/www/manager/openvz/CmdMenu.js b/www/manager/openvz/CmdMenu.js
index 85589ed..0c6f5bb 100644
--- a/www/manager/openvz/CmdMenu.js
+++ b/www/manager/openvz/CmdMenu.js
@@ -11,7 +11,7 @@ Ext.define('PVE.openvz.CmdMenu', {
 
var vmid = me.pveSelNode.data.vmid;
if (!vmid) {
-   throw "no VM ID specified";
+   throw "no CT ID specified";
}
 
var vmname = me.pveSelNode.data.name;
@@ -50,10 +50,31 @@ Ext.define('PVE.openvz.CmdMenu', {
}
},
{
+   text: gettext('Suspend'),
+   icon: '/pve2/images/forward.png',
+   handler: function() {
+   var msg = Ext.String.format(gettext("Do you really want to 
suspend CT {0}?"), vmid);
+   Ext.Msg.confirm(gettext('Confirm'), msg, function(btn) {
+   if (btn !== 'yes') {
+   return;
+   }
+   
+   vm_command('suspend');
+   });
+   }
+   },
+   {
+   text: gettext('Resume'),
+   icon: '/pve2/images/forward.png',
+   handler: function() {
+   vm_command('resume');
+   }
+   },
+   {
text: gettext('Shutdown'),
icon: '/pve2/images/stop.png',
handler: function() {
-   var msg = Ext.String.format(gettext("Do you really want to 
shutdown VM {0}?"), vmid);
+   var msg = Ext.String.format(gettext("Do you really want to 
shutdown CT {0}?"), vmid);
Ext.Msg.confirm(gettext('Confirm'), msg, function(btn) {
if (btn !== 'yes') {
return;
@@ -67,7 +88,7 @@ Ext.define('PVE.openvz.CmdMenu', {
text: gettext('Stop'),
icon: '/pve2/images/gtk-stop.png',
handler: function() {
-   var msg = Ext.String.format(gettext("Do you really want to 
stop VM {0}?"), vmid);
+   var msg = Ext.String.format(gettext("Do you really want to 
stop CT {0}?"), vmid);
Ext.Msg.confirm(gettext('Confirm'), msg, function(btn) {
if (btn !== 'yes') {
return;
diff --git a/www/manager/qemu/CmdMenu.js b/www/manager/qemu/CmdMenu.js
index 853f57b..a9a8ce4 100644
--- a/www/manager/qemu/CmdMenu.js
+++ b/www/manager/qemu/CmdMenu.js
@@ -50,6 +50,26 @@ Ext.define('PVE.qemu.CmdMenu', {
}
},
{
+   text: gettext('Suspend'),
+   icon: '/pve2/images/forward.png',
+   handler: function() {
+   var msg = Ext.String.format(gettext("Do you really want to 
suspend VM {0}?"), vmid);
+   Ext.Msg.confirm(gettext('Confirm'), msg, function(btn) {
+   if (btn !== 'yes') {
+   return;
+   }
+   vm_command('suspend');
+   });
+   }
+   },
+   {
+   text: gettext('Resume'),
+   icon: '/pve2/images/forward.png',
+   handler: function() {
+   vm_command('resume');
+   }
+   },
+   {
text: gettext('Shutdown'),
 

[pve-devel] Resubmit PVE API suspend/resume patch (conflicts resolved)

2014-10-08 Thread Daniel Hunsaker
The previous patchset was generated by ignoring whitespace differences rather
than removing the whitespace changes from the local commits themselves, and
therefore would require the `--ignore-whitespace` switch to `git apply`.  This
patchset removes the whitespace from the source commits, eliminating the issue
altogether.  So, one more time:

[PATCH 1/4] Add CT suspend/resume support via PVE2 API
[PATCH 2/4] Add suspend/resume support to pvectl
[PATCH 3/4] Add suspend/resume options to web UI CmdMenus
[PATCH 4/4] Add suspend/resume options to the mobile web UI menus
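(Concretely: the earlier series needed something like
`git apply --ignore-whitespace 0001-Add-CT-suspend-resume-support-via-PVE2-API.patch`
- file name illustrative - while this one should apply with a plain
`git am` / `git apply`.)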
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 1/4] Add CT suspend/resume support via PVE2 API

2014-10-08 Thread Daniel Hunsaker
From: Dan Hunsaker 

Suspend/resume support for VMs has been in the PVE2 API for some time,
but even though vzctl supports suspend/resume (what they call checkpoint/
restore), the API doesn't yet support suspend/resume for CTs.  This patch
adds that support.

Signed-off-by: Dan Hunsaker 
---
 PVE/API2/OpenVZ.pm | 96 ++
 PVE/OpenVZ.pm  | 26 ++-
 2 files changed, 121 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/OpenVZ.pm b/PVE/API2/OpenVZ.pm
index 184ebdf..955611d 100644
--- a/PVE/API2/OpenVZ.pm
+++ b/PVE/API2/OpenVZ.pm
@@ -1459,6 +1459,102 @@ __PACKAGE__->register_method({
 }});
 
 __PACKAGE__->register_method({
+name => 'vm_suspend',
+path => '{vmid}/status/suspend',
+method => 'POST',
+protected => 1,
+proxyto => 'node',
+description => "Suspend the container.",
+permissions => {
+check => ['perm', '/vms/{vmid}', [ 'VM.PowerMgmt' ]],
+},
+parameters => {
+additionalProperties => 0,
+properties => {
+node => get_standard_option('pve-node'),
+vmid => get_standard_option('pve-vmid'),
+},
+},
+returns => {
+type => 'string',
+},
+code => sub {
+my ($param) = @_;
+
+my $rpcenv = PVE::RPCEnvironment::get();
+
+my $authuser = $rpcenv->get_user();
+
+my $node = extract_param($param, 'node');
+
+my $vmid = extract_param($param, 'vmid');
+
+die "CT $vmid not running\n" if !PVE::OpenVZ::check_running($vmid);
+
+my $realcmd = sub {
+my $upid = shift;
+
+syslog('info', "suspend CT $vmid: $upid\n");
+
+PVE::OpenVZ::vm_suspend($vmid);
+
+return;
+};
+
+my $upid = $rpcenv->fork_worker('vzsuspend', $vmid, $authuser, 
$realcmd);
+
+return $upid;
+}});
+
+__PACKAGE__->register_method({
+name => 'vm_resume',
+path => '{vmid}/status/resume',
+method => 'POST',
+protected => 1,
+proxyto => 'node',
+description => "Resume the container.",
+permissions => {
+check => ['perm', '/vms/{vmid}', [ 'VM.PowerMgmt' ]],
+},
+parameters => {
+additionalProperties => 0,
+properties => {
+node => get_standard_option('pve-node'),
+vmid => get_standard_option('pve-vmid'),
+},
+},
+returns => {
+type => 'string',
+},
+code => sub {
+my ($param) = @_;
+
+my $rpcenv = PVE::RPCEnvironment::get();
+
+my $authuser = $rpcenv->get_user();
+
+my $node = extract_param($param, 'node');
+
+my $vmid = extract_param($param, 'vmid');
+
+die "CT $vmid already running\n" if PVE::OpenVZ::check_running($vmid);
+
+my $realcmd = sub {
+my $upid = shift;
+
+syslog('info', "resume CT $vmid: $upid\n");
+
+PVE::OpenVZ::vm_resume($vmid);
+
+return;
+};
+
+my $upid = $rpcenv->fork_worker('vzresume', $vmid, $authuser, 
$realcmd);
+
+return $upid;
+}});
+
+__PACKAGE__->register_method({
 name => 'migrate_vm', 
 path => '{vmid}/migrate',
 method => 'POST',
diff --git a/PVE/OpenVZ.pm b/PVE/OpenVZ.pm
index aa6f502..2577561 100644
--- a/PVE/OpenVZ.pm
+++ b/PVE/OpenVZ.pm
@@ -6,7 +6,7 @@ use File::stat qw();
 use POSIX qw (LONG_MAX);
 use IO::Dir;
 use IO::File;
-use PVE::Tools qw(extract_param $IPV6RE $IPV4RE);
+use PVE::Tools qw(run_command extract_param $IPV6RE $IPV4RE);
 use PVE::ProcFSTools;
 use PVE::Cluster qw(cfs_register_file cfs_read_file);
 use PVE::SafeSyslog;
@@ -1220,6 +1220,30 @@ sub lock_container {
 return $res;
 }
 
+sub vm_suspend {
+my ($vmid) = @_;
+
+my $cmd = ['vzctl', 'chkpnt', $vmid];
+
+eval { run_command($cmd); };
+if (my $err = $@) {
+syslog("err", "CT $vmid suspend failed - $err");
+die $err;
+}
+}
+
+sub vm_resume {
+my ($vmid) = @_;
+
+my $cmd = ['vzctl', 'restore', $vmid];
+
+eval { run_command($cmd); };
+if (my $err = $@) {
+syslog("err", "CT $vmid resume failed - $err");
+die $err;
+}
+}
+
 sub replacepw {
 my ($file, $epw) = @_;
 
-- 
1.9.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 2/4] Add suspend/resume support to pvectl

2014-10-08 Thread Daniel Hunsaker
From: Dan Hunsaker 

Now that the API supports CT suspend/resume, it makes sense to have pvectl
support it, too.  It *does* use different names than vzctl does, but it
seems to make sense to be consistent with the API naming in a PVE utility.

Signed-off-by: Dan Hunsaker 
---
 bin/pvectl | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/bin/pvectl b/bin/pvectl
index f8ae3ad..8f2643d 100755
--- a/bin/pvectl
+++ b/bin/pvectl
@@ -74,6 +74,8 @@ my $cmddef = {
}],
 
 start => [ 'PVE::API2::OpenVZ', 'vm_start', ['vmid'], { node => $nodename 
}, $upid_exit],
+suspend => [ 'PVE::API2::OpenVZ', 'vm_suspend', ['vmid'], { node => 
$nodename }, $upid_exit],
+resume => [ 'PVE::API2::OpenVZ', 'vm_resume', ['vmid'], { node => 
$nodename }, $upid_exit],
 shutdown => [ 'PVE::API2::OpenVZ', 'vm_shutdown', ['vmid'], { node => 
$nodename }, $upid_exit],
 stop => [ 'PVE::API2::OpenVZ', 'vm_stop', ['vmid'], { node => $nodename }, 
$upid_exit],
 mount => [ 'PVE::API2::OpenVZ', 'vm_mount', ['vmid'], { node => $nodename 
}, $upid_exit],
-- 
1.9.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH 1/4] Add CT suspend/resume support via PVE2 API

2014-10-06 Thread Daniel Hunsaker
Strange.   I created the commit directly off of the most recent master.
I'll have a look and see if I need to rebase or something...
On Oct 6, 2014 3:13 AM, "Dietmar Maurer"  wrote:

> Applying: Add CT suspend/resume support via PVE2 API
> error: patch failed: PVE/API2/OpenVZ.pm:1459
> error: PVE/API2/OpenVZ.pm: patch does not apply
> Patch failed at 0001 Add CT suspend/resume support via PVE2 API
>
> ??
>
> > -Original Message-
> > From: pve-devel [mailto:pve-devel-boun...@pve.proxmox.com] On Behalf Of
> > Daniel Hunsaker
> > Sent: Friday, 03 October 2014 20:59
> > To: pve-devel@pve.proxmox.com
> > Subject: [pve-devel] [PATCH 1/4] Add CT suspend/resume support via PVE2
> API
> >
> > From: Dan Hunsaker 
> >
> > Suspend/resume support for VMs has been in the PVE2 API for some time,
> but
> > even though vzctl supports suspend/resume (what they call checkpoint/
> restore),
> > the API doesn't yet support suspend/resume for CTs.  This patch adds that
> > support.
> >
> > Signed-off-by: Dan Hunsaker 
> > ---
> >  PVE/API2/OpenVZ.pm | 96
> > ++
> >  PVE/OpenVZ.pm  | 26 ++-
> >  2 files changed, 121 insertions(+), 1 deletion(-)
>
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH 1/4] Add CT suspend/resume support via PVE2 API

2014-10-05 Thread Daniel Hunsaker
It would simply fail, and not necessarily in a way that makes the reason
clear to the user.  But that answers my question on how to handle such
scenarios.

KVM doesn't have the same issue, since the entire system is virtualized in
that case.  So it may not be obvious why CT migrations are failing, even
while VM migrations succeed.  That's all I was concerned about, really.
On Oct 5, 2014 6:23 AM, "Dietmar Maurer"  wrote:

> I don't really understand what you talk about. If you suspend to disk, you
> also need/should transfer that state when you migrate the VM. This is
> unrelated to online migration.
> Sure, this can fails, but that should be handled by vzctl chkpnt/restore
> internally.
>
> > I know I'd like to try it.  The potential issue is when migrating
> between nodes
> > with different kernel versions/modules/configurations.  It would
> probably be
> > useful to detect those cases (as best we can), and either issue a
> warning, or
> > automatically switch to an offline migration, shutting down and starting
> up
> > before and after migration, respectively.  If such detection isn't
> feasible, perhaps
> > online migrations of CTs are something we shouldn't worry about.  We can
> only
> > control the environment to a certain extent.
> > On the other hand, a failed online migration ought to lead sysadmins to
> attempt
> > offline migration before giving up, so maybe we just let the migration
> fail and let
> > the sysadmin adapt their course of action accordingly.  Just some
> thoughts on it.
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH 1/4] Add CT suspend/resume support via PVE2 API

2014-10-05 Thread Daniel Hunsaker
I know I'd like to try it.  The potential issue is when migrating between
nodes with different kernel versions/modules/configurations.  It would
probably be useful to detect those cases (as best we can), and either issue
a warning, or automatically switch to an offline migration, shutting down
and starting up before and after migration, respectively.  If such
detection isn't feasible, perhaps online migrations of CTs are something we
shouldn't worry about.  We can only control the environment to a certain
extent.

On the other hand, a failed online migration ought to lead sysadmins to
attempt offline migration before giving up, so maybe we just let the
migration fail and let the sysadmin adapt their course of action
accordingly.  Just some thoughts on it.
On Oct 4, 2014 1:11 AM, "Dietmar Maurer"  wrote:

> Thank for that patch, looks good.
>
> I wonder if we should migrate the saved state when we migrate the
> container?
>
> > -Original Message-
> > From: pve-devel [mailto:pve-devel-boun...@pve.proxmox.com] On Behalf Of
> > Daniel Hunsaker
> > Sent: Friday, 03 October 2014 20:59
> > To: pve-devel@pve.proxmox.com
> > Subject: [pve-devel] [PATCH 1/4] Add CT suspend/resume support via PVE2
> API
> >
> > From: Dan Hunsaker 
> >
> > Suspend/resume support for VMs has been in the PVE2 API for some time,
> but
> > even though vzctl supports suspend/resume (what they call checkpoint/
> restore),
> > the API doesn't yet support suspend/resume for CTs.  This patch adds that
> > support.
> >
> > Signed-off-by: Dan Hunsaker 
> > ---
> >  PVE/API2/OpenVZ.pm | 96
> > ++
> >  PVE/OpenVZ.pm  | 26 ++-
> >  2 files changed, 121 insertions(+), 1 deletion(-)
> >
> > diff --git a/PVE/API2/OpenVZ.pm b/PVE/API2/OpenVZ.pm index
> > 184ebdf..5d8c0c6 100644
> > --- a/PVE/API2/OpenVZ.pm
> > +++ b/PVE/API2/OpenVZ.pm
> > @@ -1459,6 +1459,102 @@ __PACKAGE__->register_method({
> >  }});
> >
> >  __PACKAGE__->register_method({
> > +   name => 'vm_suspend',
> > +   path => '{vmid}/status/suspend',
> > +   method => 'POST',
> > +   protected => 1,
> > +   proxyto => 'node',
> > +   description => "Suspend the container.",
> > +   permissions => {
> > +   check => ['perm', '/vms/{vmid}', [ 'VM.PowerMgmt' ]],
> > +   },
> > +   parameters => {
> > +   additionalProperties => 0,
> > +   properties => {
> > +   node => get_standard_option('pve-node'),
> > +   vmid => get_standard_option('pve-vmid'),
> > +   },
> > +   },
> > +   returns => {
> > +   type => 'string',
> > +   },
> > +   code => sub {
> > +   my ($param) = @_;
> > +
> > +   my $rpcenv = PVE::RPCEnvironment::get();
> > +
> > +   my $authuser = $rpcenv->get_user();
> > +
> > +   my $node = extract_param($param, 'node');
> > +
> > +   my $vmid = extract_param($param, 'vmid');
> > +
> > +   die "CT $vmid not running\n" if
> > + !PVE::OpenVZ::check_running($vmid);
> > +
> > +   my $realcmd = sub {
> > +   my $upid = shift;
> > +
> > +   syslog('info', "suspend CT $vmid: $upid\n");
> > +
> > +   PVE::OpenVZ::vm_suspend($vmid);
> > +
> > +   return;
> > +   };
> > +
> > +   my $upid = $rpcenv->fork_worker('vzsuspend', $vmid,
> > + $authuser, $realcmd);
> > +
> > +   return $upid;
> > +   }});
> > +
> > +__PACKAGE__->register_method({
> > +   name => 'vm_resume',
> > +   path => '{vmid}/status/resume',
> > +   method => 'POST',
> > +   protected => 1,
> > +   proxyto => 'node',
> > +   description => "Resume the container.",
> > +   permissions => {
> > +   check => ['perm', '/vms/{vmid}', [ 'VM.PowerMgmt' ]],
> > +   },
> > +   parameters => {
> > +   additionalProperties =

Re: [pve-devel] Whislist

2014-10-03 Thread Daniel Hunsaker
There's some RRD stuff already in the API/web UI, if that's what you're
after.  As far as anything more complex, it would be nice to have, but
there are a number of features it's nice to leave up to sysadmins to select
and implement.
On Oct 3, 2014 8:28 AM, "Gilberto Nunes"  wrote:

> Hi guys
> It would be nice if Proxmox came with some monitoring tool, like Centreon,
> Nagios, Zabbix or Zenoss, to monitor some resources...
> I remember that OpenQRM has Nagios build in to monitoring VM's and others
> stuff...
>
>
> Gilberto Ferreira
>
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] roadmap for proxmox 3.4 ?

2014-10-03 Thread Daniel Hunsaker
In a pinch, even rsync would work, though it would take a while.

The trouble is with HA migrations.  There's a decent chance your VM is down
because your node is, and you can't migrate from a storage which is offline.

Still, HA is handled separately, so it shouldn't cause too many issues to
support manual migrations to/from local storages.
On Oct 3, 2014 2:54 PM, "Gilberto Nunes"  wrote:

> Nice... Over DRBD, I suppose... That is clearly possible... Even over
> glusterfs ou DRBD+OCFS
>
> 2014-10-03 17:47 GMT-03:00 Kamil Trzciński :
>
>> It's possible, because I were doing it already, but only from command
>> line. Qemu basically transfers all disks and memory state over
>> network.
>>
>>
>> On Fri, Oct 3, 2014 at 8:47 PM, Gilberto Nunes
>>  wrote:
>> > I think that is not possible... Or may I wrong?
>> >
>> > 2014-10-03 15:12 GMT-03:00 Kamil Trzciński :
>> >
>> >> I would like to see migration between non-shared storage. I can even
>> >> prepare patches if anyone will help me with where to start.
>> >>
>> >>
>> >> On Fri, Oct 3, 2014 at 6:50 PM, Dietmar Maurer 
>> >> wrote:
>> >> >> about dataplane, blockjobs are coming for qemu 2.2. (first patches
>> >> >> already sent
>> >> >> to the mailing some days ago)
>> >> >>
>> >> >> I have talked with paolo, and the roadmap seem to implement all
>> >> >> features to
>> >> >> dataplane.
>> >> >> (and make it the default in the future)
>> >> >
>> >> >
>> >> > Great! Thanks for the update.
>> >> > ___
>> >> > pve-devel mailing list
>> >> > pve-devel@pve.proxmox.com
>> >> > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>> >>
>> >>
>> >>
>> >> --
>> >> Kamil Trzciński
>> >>
>> >> ayu...@ayufan.eu
>> >> www.ayufan.eu
>> >> ___
>> >> pve-devel mailing list
>> >> pve-devel@pve.proxmox.com
>> >> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>> >
>> >
>> >
>> >
>> > --
>> > --
>> >
>> > Gilberto Ferreira
>> >
>>
>>
>>
>> --
>> Kamil Trzciński
>>
>> ayu...@ayufan.eu
>> www.ayufan.eu
>>
>
>
>
> --
> --
>
> Gilberto Ferreira
>
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 4/4] Add suspend/resume options to the mobile web UI menus

2014-10-03 Thread Daniel Hunsaker
From: Dan Hunsaker 

With the new mobile interface, we need to implement UI changes in two
places.  This lets us simplify our mobile interface so it isn't cluttered
with options that mobile browsers can't easily handle, usually due to size.
This patch implements Suspend and Resume of VMs and CTs via the mobile
web UI.

Signed-off-by: Dan Hunsaker 
---
 www/mobile/OpenVzSummary.js | 12 
 www/mobile/QemuSummary.js   | 12 
 2 files changed, 24 insertions(+)

diff --git a/www/mobile/OpenVzSummary.js b/www/mobile/OpenVzSummary.js
index f71fbec..4c27e93 100644
--- a/www/mobile/OpenVzSummary.js
+++ b/www/mobile/OpenVzSummary.js
@@ -159,6 +159,18 @@ Ext.define('PVE.OpenVzSummary', {
}
},
{
+   text: gettext('Suspend'),
+   handler: function() {
+   me.vm_command("suspend", {});
+   }
+   },
+   {
+   text: gettext('Resume'),
+   handler: function() {
+   me.vm_command("resume", {});
+   }
+   },
+   {
text: gettext('Shutdown'),
handler: function() {
me.vm_command("shutdown", {});
diff --git a/www/mobile/QemuSummary.js b/www/mobile/QemuSummary.js
index eb33222..b392e1e 100644
--- a/www/mobile/QemuSummary.js
+++ b/www/mobile/QemuSummary.js
@@ -162,6 +162,18 @@ Ext.define('PVE.QemuSummary', {
}
},
{
+   text: gettext('Suspend'),
+   handler: function() {
+   me.vm_command("suspend", {});
+   }
+   },
+   {
+   text: gettext('Resume'),
+   handler: function() {
+   me.vm_command("resume", {});
+   }
+   },
+   {
text: gettext('Shutdown'),
handler: function() {
me.vm_command("shutdown", {});
-- 
1.9.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 3/4] Add suspend/resume options to web UI CmdMenus

2014-10-03 Thread Daniel Hunsaker
From: Dan Hunsaker 

The PVE2 API supports suspend/resume of VMs (and now CTs), but the web UI
doesn't make these options available.  This patch adds Suspend and Resume
items to the CmdMenus of OpenVZ and QEMU guests.  I considered adding the
options to the toolbar, but since it is already pretty full, I opted
against doing so for the moment.  Perhaps the various startup options can
be combined into a dropdown menu similar to how the console options are
set up, and the various shutdown options combined into another.  That
would provide the necessary space to add the Suspend and Resume options
there.

This patch also provides descriptions for Suspend and Resume tasks in the
task logs, bringing full suspend/resume support to the web GUI.

Signed-off-by: Dan Hunsaker 
---
 www/manager/Utils.js  |  2 ++
 www/manager/openvz/CmdMenu.js | 24 ++--
 www/manager/qemu/CmdMenu.js   | 20 
 3 files changed, 44 insertions(+), 2 deletions(-)

diff --git a/www/manager/Utils.js b/www/manager/Utils.js
index f95c180..151df32 100644
--- a/www/manager/Utils.js
+++ b/www/manager/Utils.js
@@ -510,6 +510,8 @@ Ext.define('PVE.Utils', { statics: {
vzmount: ['CT', gettext('Mount') ],
vzumount: ['CT', gettext('Unmount') ],
vzshutdown: ['CT', gettext('Shutdown') ],
+   vzsuspend: [ 'CT', gettext('Suspend') ],
+   vzresume: [ 'CT', gettext('Resume') ],
hamigrate: [ 'HA', gettext('Migrate') ],
hastart: [ 'HA', gettext('Start') ],
hastop: [ 'HA', gettext('Stop') ],
diff --git a/www/manager/openvz/CmdMenu.js b/www/manager/openvz/CmdMenu.js
index 85589ed..6bb5326 100644
--- a/www/manager/openvz/CmdMenu.js
+++ b/www/manager/openvz/CmdMenu.js
@@ -50,10 +50,30 @@ Ext.define('PVE.openvz.CmdMenu', {
}
},
{
+   text: gettext('Suspend'),
+   icon: '/pve2/images/forward.png',
+   handler: function() {
+   var msg = Ext.String.format(gettext("Do you really want to 
suspend CT {0}?"), vmid);
+   Ext.Msg.confirm(gettext('Confirm'), msg, function(btn) {
+   if (btn !== 'yes') {
+   return;
+   }
+   vm_command('suspend');
+   });
+   }
+   },
+   {
+   text: gettext('Resume'),
+   icon: '/pve2/images/forward.png',
+   handler: function() {
+   vm_command('resume');
+   }
+   },
+   {
text: gettext('Shutdown'),
icon: '/pve2/images/stop.png',
handler: function() {
-   var msg = Ext.String.format(gettext("Do you really want to 
shutdown VM {0}?"), vmid);
+   var msg = Ext.String.format(gettext("Do you really want to 
shutdown CT {0}?"), vmid);
Ext.Msg.confirm(gettext('Confirm'), msg, function(btn) {
if (btn !== 'yes') {
return;
@@ -67,7 +87,7 @@ Ext.define('PVE.openvz.CmdMenu', {
text: gettext('Stop'),
icon: '/pve2/images/gtk-stop.png',
handler: function() {
-   var msg = Ext.String.format(gettext("Do you really want to 
stop VM {0}?"), vmid);
+   var msg = Ext.String.format(gettext("Do you really want to 
stop CT {0}?"), vmid);
Ext.Msg.confirm(gettext('Confirm'), msg, function(btn) {
if (btn !== 'yes') {
return;
diff --git a/www/manager/qemu/CmdMenu.js b/www/manager/qemu/CmdMenu.js
index 853f57b..25591e9 100644
--- a/www/manager/qemu/CmdMenu.js
+++ b/www/manager/qemu/CmdMenu.js
@@ -50,6 +50,26 @@ Ext.define('PVE.qemu.CmdMenu', {
}
},
{
+text: gettext('Suspend'),
+icon: '/pve2/images/forward.png',
+handler: function() {
+var msg = Ext.String.format(gettext("Do you really want to suspend 
VM {0}?"), vmid);
+Ext.Msg.confirm(gettext('Confirm'), msg, function(btn) {
+if (btn !== 'yes') {
+return;
+}
+vm_command('suspend');
+});
+}
+},
+{
+text: gettext('Resume'),
+icon: '/pve2/images/forward.png',
+handler: function() {
+vm_command('resume');
+}
+},
+{
text: gettext('Shutdown'),
icon: '/pve2/images/stop.png',
handler: function() {
-- 
1.9.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 2/4] Add suspend/resume support to pvectl

2014-10-03 Thread Daniel Hunsaker
From: Dan Hunsaker 

Now that the API supports CT suspend/resume, it makes sense to have pvectl
support it, too.  It *does* use different names than vzctl does, but it
seems to make sense to be consistent with the API naming in a PVE utility.

Signed-off-by: Dan Hunsaker 
---
 bin/pvectl | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/bin/pvectl b/bin/pvectl
index f8ae3ad..9e9a797 100755
--- a/bin/pvectl
+++ b/bin/pvectl
@@ -74,6 +74,8 @@ my $cmddef = {
}],
 
 start => [ 'PVE::API2::OpenVZ', 'vm_start', ['vmid'], { node => $nodename 
}, $upid_exit],
+suspend => [ 'PVE::API2::OpenVZ', 'vm_suspend', ['vmid'], { node => 
$nodename }, $upid_exit],
+resume => [ 'PVE::API2::OpenVZ', 'vm_resume', ['vmid'], { node => 
$nodename }, $upid_exit],
 shutdown => [ 'PVE::API2::OpenVZ', 'vm_shutdown', ['vmid'], { node => 
$nodename }, $upid_exit],
 stop => [ 'PVE::API2::OpenVZ', 'vm_stop', ['vmid'], { node => $nodename }, 
$upid_exit],
 mount => [ 'PVE::API2::OpenVZ', 'vm_mount', ['vmid'], { node => $nodename 
}, $upid_exit],
-- 
1.9.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] Resubmit PVE API suspend/resume patch

2014-10-03 Thread Daniel Hunsaker
One more time, as separate patches.  This will let us suspend and resume all
guests, VM and CT alike, via the API.  That in turn will let us do fancy
things like suspend guests before node restart, then resume them after
(though first we need to get QEMU suspend to save the state to disk).

[PATCH 1/4] Add CT suspend/resume support via PVE2 API
[PATCH 2/4] Add suspend/resume support to pvectl
[PATCH 3/4] Add suspend/resume options to web UI CmdMenus
[PATCH 4/4] Add suspend/resume options to the mobile web UI menus
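
For anyone who wants to try the series, a rough usage sketch once all four
patches are applied (node name and CT ID are just examples; the API paths
come from patch 1/4 and the pvectl verbs from patch 2/4):

    # via the new pvectl commands
    pvectl suspend 101
    pvectl resume 101

    # or straight against the API (POST maps to 'create' in pvesh)
    pvesh create /nodes/node1/openvz/101/status/suspend
    pvesh create /nodes/node1/openvz/101/status/resume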
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 1/4] Add CT suspend/resume support via PVE2 API

2014-10-03 Thread Daniel Hunsaker
From: Dan Hunsaker 

Suspend/resume support for VMs has been in the PVE2 API for some time,
but even though vzctl supports suspend/resume (what they call checkpoint/
restore), the API doesn't yet support suspend/resume for CTs.  This patch
adds that support.

Signed-off-by: Dan Hunsaker 
---
 PVE/API2/OpenVZ.pm | 96 ++
 PVE/OpenVZ.pm  | 26 ++-
 2 files changed, 121 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/OpenVZ.pm b/PVE/API2/OpenVZ.pm
index 184ebdf..5d8c0c6 100644
--- a/PVE/API2/OpenVZ.pm
+++ b/PVE/API2/OpenVZ.pm
@@ -1459,6 +1459,102 @@ __PACKAGE__->register_method({
 }});
 
 __PACKAGE__->register_method({
+   name => 'vm_suspend',
+   path => '{vmid}/status/suspend',
+   method => 'POST',
+   protected => 1,
+   proxyto => 'node',
+   description => "Suspend the container.",
+   permissions => {
+   check => ['perm', '/vms/{vmid}', [ 'VM.PowerMgmt' ]],
+   },
+   parameters => {
+   additionalProperties => 0,
+   properties => {
+   node => get_standard_option('pve-node'),
+   vmid => get_standard_option('pve-vmid'),
+   },
+   },
+   returns => {
+   type => 'string',
+   },
+   code => sub {
+   my ($param) = @_;
+
+   my $rpcenv = PVE::RPCEnvironment::get();
+
+   my $authuser = $rpcenv->get_user();
+
+   my $node = extract_param($param, 'node');
+
+   my $vmid = extract_param($param, 'vmid');
+
+   die "CT $vmid not running\n" if 
!PVE::OpenVZ::check_running($vmid);
+
+   my $realcmd = sub {
+   my $upid = shift;
+
+   syslog('info', "suspend CT $vmid: $upid\n");
+
+   PVE::OpenVZ::vm_suspend($vmid);
+
+   return;
+   };
+
+   my $upid = $rpcenv->fork_worker('vzsuspend', $vmid, $authuser, 
$realcmd);
+
+   return $upid;
+   }});
+
+__PACKAGE__->register_method({
+   name => 'vm_resume',
+   path => '{vmid}/status/resume',
+   method => 'POST',
+   protected => 1,
+   proxyto => 'node',
+   description => "Resume the container.",
+   permissions => {
+   check => ['perm', '/vms/{vmid}', [ 'VM.PowerMgmt' ]],
+   },
+   parameters => {
+   additionalProperties => 0,
+   properties => {
+   node => get_standard_option('pve-node'),
+   vmid => get_standard_option('pve-vmid'),
+   },
+   },
+   returns => {
+   type => 'string',
+   },
+   code => sub {
+   my ($param) = @_;
+
+   my $rpcenv = PVE::RPCEnvironment::get();
+
+   my $authuser = $rpcenv->get_user();
+
+   my $node = extract_param($param, 'node');
+
+   my $vmid = extract_param($param, 'vmid');
+
+   die "CT $vmid already running\n" if 
PVE::OpenVZ::check_running($vmid);
+
+   my $realcmd = sub {
+   my $upid = shift;
+
+   syslog('info', "resume CT $vmid: $upid\n");
+
+   PVE::OpenVZ::vm_resume($vmid);
+
+   return;
+   };
+
+   my $upid = $rpcenv->fork_worker('vzresume', $vmid, $authuser, 
$realcmd);
+
+   return $upid;
+   }});
+
+__PACKAGE__->register_method({
 name => 'migrate_vm',
 path => '{vmid}/migrate',
 method => 'POST',
diff --git a/PVE/OpenVZ.pm b/PVE/OpenVZ.pm
index aa6f502..fcfb0c2 100644
--- a/PVE/OpenVZ.pm
+++ b/PVE/OpenVZ.pm
@@ -6,7 +6,7 @@ use File::stat qw();
 use POSIX qw (LONG_MAX);
 use IO::Dir;
 use IO::File;
-use PVE::Tools qw(extract_param $IPV6RE $IPV4RE);
+use PVE::Tools qw(run_command extract_param $IPV6RE $IPV4RE);
 use PVE::ProcFSTools;
 use PVE::Cluster qw(cfs_register_file cfs_read_file);
 use PVE::SafeSyslog;
@@ -1220,6 +1220,30 @@ sub lock_container {
 return $res;
 }
 
+sub vm_suspend {
+my ($vmid) = @_;
+
+my $cmd = ['vzctl', 'chkpnt', $vmid];
+
+eval { run_command($cmd); };
+if (my $err = $@) {
+syslog("err", "CT $vmid suspend failed - $err");
+die $err;
+}
+}
+
+sub vm_resume {
+my ($vmid) = @_;
+
+my $cmd = ['vzctl', 'restore', $vmid];
+
+eval { run_command($cmd); };
+if (my $err = $@) {
+syslog("err", "CT $vmid resume failed - $err");
+die $err;
+}
+}
+
 sub replacepw {
 my ($file, $epw) = @_;
 
-- 
1.9.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] Add CT suspend/resume to PVE API

2014-10-03 Thread Daniel Hunsaker
Actually, that part wasn't me, but since the answer is yes, I'll look into
getting QEMU to save state to disk so we can do the rest.  :-)

And now to sleep, finally...
On Oct 3, 2014 2:53 AM, "Dietmar Maurer"  wrote:

> > > 1.) Implement suspend/resume API
> > > 2.) add it to pvectl
> > > 3.) Implement suspend/resume GUI (extjs)
> > > 4.) Implement suspend/resume GUI (mobile)
> > Alright, I'll make that happen tomorrow.  Currently just after 02:00
> here.  :-)
>
> Thanks!
>
> > > I also have some further ideas. Currently qemu suspend/resume does not
> > > save state to disk. It would be great to implement that also.
> > I'll have to research that some, but I should be able to write a patch
> for that as
> > well.
> > > Then implement an option in datacenter.cfg like:
> > >
> > > reboot: stop|suspend
> > >
> > > So that VMs are suspended while we reboot a host. What do you think?
> > That would probably save a *lot* of time bringing servers back up after
> > reboot.  I'll look into that as well, probably next week.
>
> OK
>
> > To go another step with that logic, I wonder if there might be a benefit
> to
> > modifying QEMU migrations so they suspend with state, transfer the
> suspended
> > VM, and resume on the destination node.
>
> This is how migrate works (basically). Or what is the difference?
>
> > I could see an advanced implementation where VM snapshots are taken
> > periodically, and if the node experiences a power failure, the VM could
> resume
> > from the snapshot.  HA failover could take advantage of the same
> snapshots in
> > the same way, thereby (hopefully) losing less data, and possibly
> resulting in less
> > downtime.  This would definitely need to be an option enabled on VMs that
> > would benefit from such an approach, rather than enabled universally,
> and is
> > advanced enough it might remain in the realm of third-party scripts or
> packages,
> > but it still might be useful.
> > Before I get too far into the QEMU suspend-with-state patch, I want to
> ask -
> > does OpenVZ support suspend-with-state?  Might be nice to support that
> in the
> > patch, too, if it does.
>
> You already implemented that!
>
> chkpnt CTID [--dumpfile name]
>This  command  saves  a  complete state of a running container
> to a
>dump file, and stops the container. If an option --dumpfile is
> not
>set, default dump file name /vz/dump/Dump.CTID is used.
>
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] Add CT suspend/resume to PVE API

2014-10-03 Thread Daniel Hunsaker
> > How would you recommend I split the changes?  They're all related
directly to
> > providing suspend/resume support.
>
> 1.) Implement suspend/resume API
> 2.) add it to pvectl
> 3.) Implement suspend/resume GUI (extjs)
> 4.) Implement suspend/resume GUI (mobile)

Alright, I'll make that happen tomorrow.  Currently just after 02:00 here.
:-)

> I also have some further ideas. Currently qemu suspend/resume does not
> save state to disk. It would be great to implement that also.

I'll have to research that some, but I should be able to write a patch for
that as well.

> Then implement an option in datacenter.cfg like:
>
> reboot: stop|suspend
>
> So that VMs are suspended while we reboot a host. What do you think?

That would probably save a *lot* of time bringing servers back up after
reboot.  I'll look into that as well, probably next week.

To go another step with that logic, I wonder if there might be a benefit to
modifying QEMU migrations so they suspend with state, transfer the
suspended VM, and resume on the destination node.

I could see an advanced implementation where VM snapshots are taken
periodically, and if the node experiences a power failure, the VM could
resume from the snapshot.  HA failover could take advantage of the same
snapshots in the same way, thereby (hopefully) losing less data, and
possibly resulting in less downtime.  This would definitely need to be an
option enabled on VMs that would benefit from such an approach, rather than
enabled universally, and is advanced enough it might remain in the realm of
third-party scripts or packages, but it still might be useful.

Before I get too far into the QEMU suspend-with-state patch, I want to ask
- does OpenVZ support suspend-with-state?  Might be nice to support that in
the patch, too, if it does.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] Add CT suspend/resume to PVE API

2014-10-03 Thread Daniel Hunsaker
I noticed the whitespace changes after I sent this one, and resent it
without them shortly after.  Sent a reply that I was going to resend, but
it seems not all my emails get through to the list?

How would you recommend I split the changes?  They're all related directly
to providing suspend/resume support.
On Oct 3, 2014 12:25 AM, "Dietmar Maurer"  wrote:

> First, thanks for the patch. But please can you split the patch into
> smaller ones?
>
> > Signed-off-by: Dan Hunsaker 
> > ---
> >  PVE/API2/OpenVZ.pm| 308
> +++---
> >  PVE/OpenVZ.pm |  92 -
> >  bin/pvectl|  16 ++-
> >  www/manager/Utils.js  |  80 +--
> >  www/manager/openvz/CmdMenu.js |  28 +++-
> >  www/manager/qemu/CmdMenu.js   |  26 +++-
> >  www/mobile/OpenVzSummary.js   |  30 ++--
> >  www/mobile/QemuSummary.js |  34 +++--
> >  8 files changed, 401 insertions(+), 213 deletions(-)
> >
> > diff --git a/PVE/API2/OpenVZ.pm b/PVE/API2/OpenVZ.pm
> > index 184ebdf..5d8c0c6 100644
> > --- a/PVE/API2/OpenVZ.pm
> > +++ b/PVE/API2/OpenVZ.pm
> > @@ -71,7 +71,7 @@ my $get_container_storage = sub {
> >
> >  my $check_ct_modify_config_perm = sub {
> >  my ($rpcenv, $authuser, $vmid, $pool, $key_list) = @_;
> > -
> > +
>
> And there are tons of white-space changes. Please remove them first.
>
> - Dietmar
>
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] Add CT suspend/resume to PVE API (resubmit without whitespace changes)

2014-10-02 Thread Daniel Hunsaker
From: Dan Hunsaker 

As discussed in a previous thread, following is a patch to support container
suspend (via vzctl chkpnt) and resume (via vzctl restore).

- Added /nodes/{node}/openvz/{vmid}/status/suspend to API
- Added /nodes/{node}/openvz/{vmid}/status/resume to API
- Adapted vm_suspend/vm_resume from PVE/QemuServer.pm into PVE/OpenVZ.pm
  - Removed locking since vzctl already does this for us, and the locks
conflict with each other (container already locked)
  - Changed monitor commands to run_command(vzctl) calls
  - Refuse to suspend if CT is offline
  - Refuse to resume if CT is online
  - vzctl does these checks as well, but it doesn't really hurt to have them

This was great, but there were artifacts in the web UI - specifically, the
task descriptions were unformatted.  So, I moved over to www/manager/Utils.js:

- Added descriptions for vzsuspend and vzresume tasks in web UI

And while I was working with the web UI anyway:

- Added suspend/resume options to CmdMenu for both OpenVZ and QEMU guests
  - Confirm suspend before proceeding
  - No confirm on resume, since it's a startup action
- Fixed OpenVZ CmdMenu shutdown and stop confirmation prompts to refer to CTs

I considered adding these options to the toolbar, but there are enough options
there already that it can get crowded quick in smaller browser windows (such
as the ones I tend to use, for screen real estate purposes), so I opted
against that.

REVISION: Between the original version of this patch and the present, mobile
support was added, so I went into www/mobile/(OpenVZ|QEMU)Summary.js and added
the suspend and resume options there as well.  No confirmation this time, since
stop and shutdown don't bother with it either in the mobile interface.

I also did a cursory search for other places where suspend/resume commands
might be useful, and added them to bin/pvectl.  If I've missed any other spots,
I'll gladly add the commands to them, as well.

Signed-off-by: Dan Hunsaker 
---
 PVE/API2/OpenVZ.pm| 96 +++
 PVE/OpenVZ.pm | 26 +++-
 bin/pvectl|  2 +
 www/manager/Utils.js  |  2 +
 www/manager/openvz/CmdMenu.js | 24 ++-
 www/manager/qemu/CmdMenu.js   | 20 +
 www/mobile/OpenVzSummary.js   | 12 ++
 www/mobile/QemuSummary.js | 12 ++
 8 files changed, 191 insertions(+), 3 deletions(-)

diff --git a/PVE/API2/OpenVZ.pm b/PVE/API2/OpenVZ.pm
index 184ebdf..5d8c0c6 100644
--- a/PVE/API2/OpenVZ.pm
+++ b/PVE/API2/OpenVZ.pm
@@ -1459,6 +1459,102 @@ __PACKAGE__->register_method({
 }});
 
 __PACKAGE__->register_method({
+   name => 'vm_suspend',
+   path => '{vmid}/status/suspend',
+   method => 'POST',
+   protected => 1,
+   proxyto => 'node',
+   description => "Suspend the container.",
+   permissions => {
+   check => ['perm', '/vms/{vmid}', [ 'VM.PowerMgmt' ]],
+   },
+   parameters => {
+   additionalProperties => 0,
+   properties => {
+   node => get_standard_option('pve-node'),
+   vmid => get_standard_option('pve-vmid'),
+   },
+   },
+   returns => {
+   type => 'string',
+   },
+   code => sub {
+   my ($param) = @_;
+
+   my $rpcenv = PVE::RPCEnvironment::get();
+
+   my $authuser = $rpcenv->get_user();
+
+   my $node = extract_param($param, 'node');
+
+   my $vmid = extract_param($param, 'vmid');
+
+   die "CT $vmid not running\n" if 
!PVE::OpenVZ::check_running($vmid);
+
+   my $realcmd = sub {
+   my $upid = shift;
+
+   syslog('info', "suspend CT $vmid: $upid\n");
+
+   PVE::OpenVZ::vm_suspend($vmid);
+
+   return;
+   };
+
+   my $upid = $rpcenv->fork_worker('vzsuspend', $vmid, $authuser, 
$realcmd);
+
+   return $upid;
+   }});
+
+__PACKAGE__->register_method({
+   name => 'vm_resume',
+   path => '{vmid}/status/resume',
+   method => 'POST',
+   protected => 1,
+   proxyto => 'node',
+   description => "Resume the container.",
+   permissions => {
+   check => ['perm', '/vms/{vmid}', [ 'VM.PowerMgmt' ]],
+   },
+   parameters => {
+   additionalProperties => 0,
+   properties => {
+   node => get_standard_option('pve-node'),
+   vmid => get_standard_option('pve-vmid'),
+   },
+   },
+   returns => {
+   type => 'string',
+   },
+   code => sub {
+   my ($param) = @_;
+
+   my $rpcenv = PVE::RPCEnvironment::get();
+
+   my $authuser = $rpcenv->get_user();
+
+   my $node = extract_param($param, 'node');
+
+   my $vmid = extract_param($param, 

Re: [pve-devel] [PATCH] Add CT suspend/resume to PVE API

2014-10-02 Thread Daniel Hunsaker
Oops.  That's a lot of whitespace changes.  Let me resubmit again without
them...
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] Add CT suspend/resume to PVE API

2014-10-02 Thread Daniel Hunsaker
From: Dan Hunsaker 

As discussed in a previous thread, following is a patch to support container
suspend (via vzctl chkpnt) and resume (via vzctl restore).

- Added /nodes/{node}/openvz/{vmid}/status/suspend to API
- Added /nodes/{node}/openvz/{vmid}/status/resume to API
- Adapted vm_suspend/vm_resume from PVE/QemuServer.pm into PVE/OpenVZ.pm
  - Removed locking since vzctl already does this for us, and the locks
conflict with each other (container already locked)
  - Changed monitor commands to run_command(vzctl) calls
  - Refuse to suspend if CT is offline
  - Refuse to resume if CT is online
  - vzctl does these checks as well, but it doesn't really hurt to have them

This was great, but there were artifacts in the web UI - specifically, the
task descriptions were unformatted.  So, I moved over to www/manager/Utils.js:

- Added descriptions for vzsuspend and vzresume tasks in web UI

And while I was working with the web UI anyway:

- Added suspend/resume options to CmdMenu for both OpenVZ and QEMU guests
  - Confirm suspend before proceeding
  - No confirm on resume, since it's a startup action
- Fixed OpenVZ CmdMenu shutdown and stop confirmation prompts to refer to CTs

I considered adding these options to the toolbar, but there are enough options
there already that it can get crowded quick in smaller browser windows (such
as the ones I tend to use, for screen real estate purposes), so I opted
against that.

REVISION: Between the original version of this patch and the present, mobile
support was added, so I went into www/mobile/(OpenVZ|QEMU)Summary.js and added
the suspend and resume options there as well.  No confirmation this time, since
stop and shutdown don't bother with it either in the mobile interface.

I also did a cursory search for other places where suspend/resume commands
might be useful, and added them to bin/pvectl.  If I've missed any other spots,
I'll gladly add the commands to them, as well.

Signed-off-by: Dan Hunsaker 
---
 PVE/API2/OpenVZ.pm| 308 +++---
 PVE/OpenVZ.pm |  92 -
 bin/pvectl|  16 ++-
 www/manager/Utils.js  |  80 +--
 www/manager/openvz/CmdMenu.js |  28 +++-
 www/manager/qemu/CmdMenu.js   |  26 +++-
 www/mobile/OpenVzSummary.js   |  30 ++--
 www/mobile/QemuSummary.js |  34 +++--
 8 files changed, 401 insertions(+), 213 deletions(-)

diff --git a/PVE/API2/OpenVZ.pm b/PVE/API2/OpenVZ.pm
index 184ebdf..5d8c0c6 100644
--- a/PVE/API2/OpenVZ.pm
+++ b/PVE/API2/OpenVZ.pm
@@ -71,7 +71,7 @@ my $get_container_storage = sub {
 
 my $check_ct_modify_config_perm = sub {
 my ($rpcenv, $authuser, $vmid, $pool, $key_list) = @_;
-
+
 return 1 if $authuser ne 'root@pam';
 
 foreach my $opt (@$key_list) {
@@ -82,7 +82,7 @@ my $check_ct_modify_config_perm = sub {
$rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']);
} elsif ($opt eq 'memory' || $opt eq 'swap') {
$rpcenv->check_vm_perm($authuser, $vmid, $pool, 
['VM.Config.Memory']);
-   } elsif ($opt eq 'netif' || $opt eq 'ip_address' || $opt eq 
'nameserver' || 
+   } elsif ($opt eq 'netif' || $opt eq 'ip_address' || $opt eq 
'nameserver' ||
 $opt eq 'searchdomain' || $opt eq 'hostname') {
$rpcenv->check_vm_perm($authuser, $vmid, $pool, 
['VM.Config.Network']);
} else {
@@ -94,8 +94,8 @@ my $check_ct_modify_config_perm = sub {
 };
 
 __PACKAGE__->register_method({
-name => 'vmlist', 
-path => '', 
+name => 'vmlist',
+path => '',
 method => 'GET',
 description => "OpenVZ container index (per node).",
 permissions => {
@@ -136,7 +136,7 @@ __PACKAGE__->register_method({
}
 
return $res;
-  
+
 }});
 
 my $restore_openvz = sub {
@@ -153,10 +153,10 @@ my $restore_openvz = sub {
 
 die "unable to create CT $vmid - container already exists\n"
if !$force && -f $conffile;
- 
+
 die "unable to create CT $vmid - directory '$private' already exists\n"
if !$force && -d $private;
-   
+
 die "unable to create CT $vmid - directory '$root' already exists\n"
if !$force && -d $root;
 
@@ -168,14 +168,14 @@ my $restore_openvz = sub {
 
my $oldprivate = PVE::OpenVZ::get_privatedir($conf, $vmid);
rmtree $oldprivate if -d $oldprivate;
-  
+
my $oldroot = $conf->{ve_root} ? $conf->{ve_root}->{value} : $root;
rmtree $oldroot if -d $oldroot;
};
 
mkpath $private || die "unable to create private dir '$private'";
mkpath $root || die "unable to create root dir '$root'";
-   
+
my $cmd = ['tar', 'xpf', $archive, '--totals', '--sparse', '-C', 
$private];
 
if ($archive eq '-') {
@@ -197,7 +197,7 @@ my $restore_openvz = sub {
$conf =~ s/host_ifname=veth[0-9]+\./host_ifname=veth${vmid}\./g;
 
PVE::Tools::file_set_contents($conffile, $conf);
- 

Re: [pve-devel] Task List Migration

2014-10-02 Thread Daniel Hunsaker
We *could* put them in /etc/pve someplace, so they automatically sync
throughout the cluster, but that would probably cause more issues than it
solves, especially as regards disk space and change frequency.  The fact
that VMs themselves are sometimes node local, and sometimes on shared
storage, means there's some inconsistency involved with not storing the
logs on the same storage as the VMs (they're currently under /var/log,
which generally makes sense).  However, rsync on migrate would cause the
fewest compatibility issues, and the fewest code changes.

As to deleted VMs, we could easily just delete the task logs when we delete
the VM.  However, there might be some benefit from keeping the old logs for
later review.  So, perhaps rename them (they're stored one directory to a
VM, one file to a task, so rename the directory) to include the VM deletion
date/time (or some other unique identifier, if we have one I've forgotten),
so they don't persist through to any new VM(s) using the same ID, but they
also stick around on disk, even if multiple VMs with the same ID get
deleted over time.  We'd probably want a mechanism in the API/web UI for
reviewing/removing such logs.
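
As a sketch of the rename idea - purely illustrative, assuming the
one-directory-per-VM layout described above; the path and timestamp format
are made up:

    mv /var/log/pve/tasks/101 /var/log/pve/tasks/101-deleted-20141002T1731

A later VM reusing ID 101 would then start with a clean history, while the
old logs stay on disk for review until explicitly removed.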
On Oct 2, 2014 11:31 AM, "Dietmar Maurer"  wrote:

> > Any ideas how to achieve this.
>
> Task logs and syslog are node local.
>
> But I guess it would be possible to:
>
> a.) rsync logs on migration.
>
> b.) store logs on shared storage
>
> other ideas?
>
>
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Task List Migration

2014-10-02 Thread Daniel Hunsaker
+1
On Oct 2, 2014 6:46 AM, "Stefan Priebe - Profihost AG" <
s.pri...@profihost.ag> wrote:

> Hi,
>
> I have asked this question already in the past but didn't have the time
> to proceed.
>
> I'm still missing that in case of a migration the task history migrates
> too.
>
> I really like to know the history of a VM and would like to be able to
> see it after migration too.
>
> Another problem to me is, that the task history is kept even after
> deleting the VM. So after recreating the VM I see the history of an old VM.
>
> Any ideas how to achieve this? Is this a feature somebody else likes?
>
> Greets,
> Stefan
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] Suspend/Resume VM/CT via API/Web GUIs

2014-09-30 Thread Daniel Hunsaker
If there is still interest in having support for suspend/resume via API/web
UI, I can rebase (and, presumably, rework, for the mobile UI) this patch
from just before the 3.2 release back in March.

http://pve.proxmox.com/pipermail/pve-devel/2014-March/010331.html
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] Support Debian Jessie in DAB

2014-09-29 Thread Daniel Hunsaker
DAB doesn't support most of the *current* versions of Debian/Ubuntu,
either.  It's badly in need of an update to make these new releases
available, and support for the testing versions would be nice to have as
well...  There's a bit of trickery you can pull to trick DAB into thinking
it's creating a template for one version when it's really another (by
specifying your own repos in the config), but some of the internal logic
DAB uses to resolve conflicts with using OpenVZ instead of a dedicated
kernel doesn't work with these newer versions, as the way they work wasn't
known when DAB was last being maintained.  So even the trickery doesn't
really work all that well.

This is a good change, and probably a decent starting point for getting the
other releases supported.
On Sep 29, 2014 4:40 PM, "Tom Dobes"  wrote:

> I needed to put together a container running Debian Jessie, so I added
> support for it to DAB.  I've attached the patch.
>
> I'm not really a Perl programmer, so there might be a cleaner/better
> way to make some of these changes.  However, I'm submitting this in
> hopes that it might be useful, possibly as the base of a cleaner
> patch.
>
> Tom
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] qemu-server : allow only hotpluggable|dynamics options to be change online

2014-09-05 Thread Daniel Hunsaker
> But I really don't know how to display that in pve-manager.
> (for devices grid for example and forms)

As to forms, maybe highlight the pending values in a manner similar to how
invalid values are highlighted?  You'd want a different color (yellow?
blue?) and a different icon, of course, but that's a simple way that
doesn't require too much reshuffle of the interface elements.  Might be
able to do something similar (or identical) with the grids, too.

My question is how we'd indicate the pending and current values in the API
response, as well as whether we'd want, then, to add a "revert pending
changes" option as well.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] firewall : cluster.fw [rules] section ?

2014-07-05 Thread Daniel Hunsaker
> Yes it's already in proxmox. If you set a vlan tag inside the gui for a
network card - exactly this happens. Traffic gets untagged at the bridge.

Perfect, thanks.
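
(For reference, that corresponds to a guest NIC line along these lines -
the MAC and tag are only examples:

    net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,tag=42

with the bridge doing the tagging/untagging, so the guest itself only ever
sees untagged frames.)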
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] firewall : cluster.fw [rules] section ?

2014-07-05 Thread Daniel Hunsaker
Is 802_1Q required for VLAN traffic?  Or do we have a mechanism for
adding/removing VLAN tags outside the VMs?  Something where inbound traffic
has tags removed before forwarding to the VM, and outbound has it added
after receipt from the VM, so that the host and the physical network use
tagged traffic, but the VMs have it untagged?
On Jul 5, 2014 7:37 AM, "Alexandre DERUMIER"  wrote:

> >>What about ICMP? among other things ICMP is used to optimize network
> >>traffic and QoS.
>
> yes, sure ;)  icmp and icmpv6 are included in IPV4 and IPV6
>
> available ebtables protocol are :
>
> cat /etc/ethertypes
>
> IPv40800ip ip4  # Internet IP (IPv4)
> X25 0805
> ARP 0806ether-arp   #
> FR_ARP  0808# Frame Relay ARP[RFC1701]
> BPQ 08FF# G8BPQ AX.25 Ethernet Packet
> DEC 6000# DEC Assigned proto
> DNA_DL  6001# DEC DNA Dump/Load
> DNA_RC  6002# DEC DNA Remote Console
> DNA_RT  6003# DEC DNA Routing
> LAT 6004# DEC LAT
> DIAG6005# DEC Diagnostics
> CUST6006# DEC Customer use
> SCA 6007# DEC Systems Comms Arch
> TEB 6558# Trans Ether Bridging   [RFC1701]
> RAW_FR  6559# Raw Frame Relay[RFC1701]
> AARP80F3# Appletalk AARP
> ATALK   809B# Appletalk
> 802_1Q  81008021q 1q 802.1q dot1q # 802.1Q Virtual LAN tagged
> frame
> IPX 8137# Novell IPX
> NetBEUI 8191# NetBEUI
> IPv686DDip6 # IP version 6
> PPP 880B# PPP
> ATMMPOA 884C# MultiProtocol over ATM
> PPP_DISC8863# PPPoE discovery messages
> PPP_SES 8864# PPPoE session messages
> ATMFATE 8884# Frame-based ATM Transport over
> Ethernet
> LOOP9000loopback# loop proto
>
>
> - Original Message -
>
> From: "Michael Rasmussen" 
> To: pve-devel@pve.proxmox.com
> Sent: Saturday, 5 July 2014 14:52:04
> Subject: Re: [pve-devel] firewall : cluster.fw [rules] section ?
>
> On Sat, 05 Jul 2014 14:18:01 +0200 (CEST)
> Alexandre DERUMIER  wrote:
>
> > >>Maybe simply:
> > >>
> > >>protocols: ARP, IPV4, IPV6
> >
> > No objection for me.
> >
> > @Stefan, do you think we need other protocols inside a vm ?
> >
> What about ICMP? among other things ICMP is used to optimize network
> traffic and QoS.
>
> --
> Hilsen/Regards
> Michael Rasmussen
>
> Get my public GnuPG keys:
> michael  rasmussen  cc
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
> mir  datanom  net
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
> mir  miras  org
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
> --
> /usr/games/fortune -es says:
> Q: What's the difference between USL and the Titanic?
> A: The Titanic had a band.
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] configure qemu vm args with api interface

2014-06-26 Thread Daniel Hunsaker
The command line tool calls the same code as the HTTP(S) API, so if one can
do it, so can the other.  The docs won't be updated to reflect that unless
you regenerate your docs, which is currently nontrivial as the current
doc-regeneration code assumes apache2, which is no longer installed by
default.

At any rate, you'll need to make sure you've installed the latest version
of the qemu-server, to get the code Dietmar just added.  Then, you can run
a config request with smbios1 instead of args, and that should handle the
request quite cleanly.
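For example, something along these lines should work - VMID, host name,
and the smbios1 value are placeholders, and the exact option syntax may
differ slightly between pvesh versions:

    pvesh set /nodes/node1/qemu/100/config \
        -smbios1 "uuid=11111111-2222-3333-4444-555555555555"

    # or over HTTPS, after obtaining a ticket and CSRF token from /access/ticket:
    curl -k -X PUT https://node1:8006/api2/json/nodes/node1/qemu/100/config \
        -b "PVEAuthCookie=<ticket>" -H "CSRFPreventionToken: <token>" \
        --data-urlencode "smbios1=uuid=11111111-2222-3333-4444-555555555555"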
On Jun 26, 2014 3:39 AM, "Christian Fröstl" 
wrote:

> Um, I don't see where I can use this option over http within
> http://pve.proxmox.com/pve2-api-doc/.
> I can make the script itself accessible via http or login via ssh in pve
> host and use the script.
> Or how do you mean, that I can change this option?
>
> On 26.06.14 11:31, "Dietmar Maurer" wrote under :
>
> >> I see. It's not possible to run this option directly via the https api,
> >>isn't it?
> >
> >why not?
> >
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager: novnc preview V2

2014-06-13 Thread Daniel Hunsaker
> BTW, /usr/share/novnc/utils/wsproxy.py is a symlink to
/usr/share/novnc/utils/websockify,
>
> so maybe it's better to use it directly.

That depends on whether websockify operates differently when started with
different names.  I doubt it in this case, but it's pretty common in *NIX
utilities.  Busybox, for example.  So it might be good to check that, first.
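
A quick sanity check along these lines would settle it (paths as in the
quoted message):

    ls -l /usr/share/novnc/utils/wsproxy.py        # confirm it is just a symlink
    grep -n 'argv\[0\]' /usr/share/novnc/utils/websockify \
        || echo "does not appear to inspect its own name"

If argv[0] is never looked at, calling websockify directly should behave
identically.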
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] mass deployment for testing ipcc_send_rec failed

2014-05-28 Thread Daniel Hunsaker
> Disks are on external ceph system - so compute node is not under load. 15
VMs for 48 cores are not much.
>
At that point you're waiting for the network I/O, so there's still that
(which is sometimes actually a bit slower).  Plus, the VM's devices will
still be loaded from the compute node itself, unless it's a completely
diskless server (which has its own potential problems).  So you still have
some load to consider.  Add that to the amount of other operations
happening at the same time (file locking/unlocking, waiting for files to be
unlocked, virtual hardware initialization, resource allocation, etc) and
starting a VM is computationally expensive, even with 48 cores.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] mass deployment for testing ipcc_send_rec failed

2014-05-28 Thread Daniel Hunsaker
> > seems like that - yes. But do you know why this happens? it happens
already
> > starting 15 vms in parallel while everything is idle.
>
> That is quite normal behavior.
>
There's a lot that goes on when starting a VM, and it consumes resources
like crazy.  Each VM (once the KVM process starts up) has to create all of
its virtual hardware, initialize it, and start the boot process.  That
involves a lot of disk activity, since the virtual devices need to be
loaded individually, and the boot data is also not only stored on disk, but
frequently inside a virtual disk file.  There's a lot of other stuff
happening, too, but these are the main resource hogs.

The process tends to be pretty quick most of the time, so it may not
register on load averages (and I/O wait doesn't always show in load
averages anyway), but it gets pretty intense.  So it's perfectly normal
behavior that makes a lot of sense.  Containers will be somewhat less
demanding, due to their nature, so you might be able to start more of them
at once, but there will still be limits even there.  This is why the
on-boot VM startup process works the way it does.  I forget where that
logic is stored, but that approach would probably be your best bet, at
least until the changes Dietmar mentioned earlier can be merged and
released.
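
As a stopgap, the per-VM startup ordering and delay can be used to stagger
boots instead of firing everything off at once.  A rough sketch, with
VMIDs, order, and delay values as placeholders (check your qm version for
the exact option syntax):

qm set 101 -onboot 1 -startup order=1,up=30
qm set 102 -onboot 1 -startup order=2,up=30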
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Better translate to Spanish language

2014-05-19 Thread Daniel Hunsaker
> Today I wanted to compile the translation to the Spanish unsuccessfully,
i did run this command (in the root directory):
>
> root@kvmtest:/# make dinstall es.po
>
> make: *** No rule to make target `dinstall'.  Stop.

You need to be in the root directory of the git repository, not the root
directory of your entire system.  So whatever directory you cloned the repo
into with `git clone`.  That should fix the problem, since `make` will then
know where to look for instructions on what to build and how.
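
In other words, something along these lines, assuming the translations
live in the public pve-manager repository (adjust the URL if not), with
the same make invocation as above re-run from the repository root:

git clone git://git.proxmox.com/git/pve-manager.git
cd pve-manager
make dinstall es.po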
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] loading nf_conntrack_ftp module by default ?

2014-05-19 Thread Daniel Hunsaker
It's probably a negligible difference in overhead and so forth, but it
might be nice to only load the module if FTP rules actually exist.  I, for
one, never plan to support FTP in particular on my servers.  Maybe a future
optimization, at least?
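
A rough sketch of what that conditional load could look like, assuming the
firewall rule files live under /etc/pve/firewall/ as discussed elsewhere
on this list:

grep -qiw ftp /etc/pve/firewall/*.fw 2>/dev/null && modprobe nf_conntrack_ftp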
On May 19, 2014 3:52 AM, "Alexandre DERUMIER"  wrote:

> ok, I'll send a patch this afternoon
> - Original message -
>
> From: "Dietmar Maurer" 
> To: "Alexandre DERUMIER" 
> Cc: "pve-devel" 
> Sent: Monday 19 May 2014 11:15:38
> Subject: RE: [pve-devel] loading nf_conntrack_ftp module by default ?
>
> > maybe in Firewall.pm, sub update() (which is called in run_server) ?
>
> I just added an init() function - please use that:
>
>
> https://git.proxmox.com/?p=pve-firewall.git;a=commitdiff;h=8b453a09f302dd91db5c02c92da144df37503d79
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] review of dietmar patches

2014-05-12 Thread Daniel Hunsaker
> Ok, all seem to works fine now.
>
> tap->tap
> tap->host
> host->tap
> tap->vnet0
> vnet0->tap
> vnet0->host
> host->vnet0
>

Maybe it's just me, but shouldn't there also have been a vnet0->vnet0
test?  You tested tap->tap, and I suspect host->host won't be an issue, but
after the discussion over vnet0->vnet0 yesterday, it seems odd that it's
missing here.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-firewall : masquerade results (+veth vlan tag bug)

2014-05-05 Thread Daniel Hunsaker
Just a side note that it might be a good idea to hack in the other script
types as well while you're in there anyway.  That way if/when something
should end up in, say, a premount script, you only need to write the script
itself.  Something to consider, anyway.
On May 5, 2014 11:12 PM, "Dietmar Maurer"  wrote:

> >
> > ++snprintf(buf, sizeof(buf), "%sproxmox.%s",
> VPS_CONF_DIR,
> > ++POST_UMOUNT_PREFIX);
> > }
> > }
> >
> >
> > should call /etc/vz/conf/proxmox.postumount
> >
> > (maybe putting the script is /usr/sbin/  is better ?)
>
> Please use SCRIPTDIR (see include/types.h)
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-common : linux bridge and ovs new model implementation v2

2014-05-01 Thread Daniel Hunsaker
Multicast doesn't have destination IPs to filter by, so multicast traffic
leaving a node can't be filtered that way by the node.  As to filtering it
coming in, it might be possible to prevent VMs/CTs from seeing the Proxmox
multicast data by simply preventing it from being forwarded to those
interfaces.  But you'll still have the multicast across your LAN either
way, because that's how multicast works.
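
For reference, the rules Cesar quotes below lost their interface names
somewhere along the way; with hypothetical names filled back in they look
roughly like this:

iptables -A OUTPUT -o eth1 -p udp -m multiport --ports 5404,5405 -j ACCEPT
iptables -A OUTPUT -o eth2 -p udp -m multiport --ports 5404,5405 -j ACCEPT
iptables -A OUTPUT -p udp -m multiport --ports 5404,5405 -j DROP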
On May 1, 2014 1:52 PM, "Cesar Peschiera"  wrote:

> It's not possible with a firewall to say only send multicast traffic to a
>> specific host.
>> (or it's not multicast anymore ;)
>>
>
> But, i think that is possible, while PVE is transmitting in mode multicast
> by ports UDP 5404 and 5405, the firewall can drop the packets for all
> except for the IP addresses that are the PVE Nodes.
>
> A example in iptables (we know that the order of the rules is important
> for get this target):
>
> iptables -A OUTPUT -o  -p udp -m multiport
> --ports 5404,5405 -j ACCEPT
> iptables -A OUTPUT -o  -p udp -m
> multiport --ports 5404,5405 -j ACCEPT
> #And finally the magic rule:
> iptables -A OUTPUT -p udp -m multiport --ports 5404,5405 -j DROP
>
> i see it very simple, or i am missing of something?
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] OpenVZ for Upstream

2014-04-29 Thread Daniel Hunsaker
Three things I'd like to point out here.  First, the 2.6.32 kernel is
patched by RedHat with many of the later (upstream) features of, and
improvements to, the 3.x kernel.  Second, the cgroup stuff came from
OpenVZ almost exclusively
anyway - LXC is just a reimplementation.  And third, the OpenVZ team have
realized that sticking with 2.6.32 as RedHat has done in their current
stable (7 should be out soon, as I understand it, and will then become the
current stable) is not the best decision they've made, and efforts have
been underway for some time now to bring OpenVZ into the 3.x line.  Indeed,
the lead Proxmox devs have been working with the 3.10 kernel, prepping it
for a near-future release, for several months now.  So we will be seeing
this soon.

That said, this is a good proof of concept to see.  Hopefully the
maintainers of vzctl are open to the patches you've made and will be
willing to merge them, though I wouldn't be surprised if there were some
tweaks to the specifics.  I believe that LXC will be added to the Proxmox
virtualization options when it is mature and stable enough to do so without
interfering with OpenVZ or any of the other moving parts that make Proxmox
go.  (And I'm sure I'll be corrected on that if I'm wrong. :-) )

In short, much of this is coming, but thanks for sharing a glimpse at what
it might look like for today.
On Apr 29, 2014 2:12 AM, "Kamil Trzciński"  wrote:

> As proof of concept written in my broken english:
> http://ayufan.eu/projects/proxmox-for-upstream/
>
> Kamil Trzciński
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-firewall : enable|disable firewall at interface level

2014-04-28 Thread Daniel Hunsaker
Just pointing out that a full reboot wouldn't necessarily be required even
then.  `service networking restart` would do the trick as easily.

But yes, not the intended implementation path.
On Apr 28, 2014 9:47 AM, "Cesar Peschiera"  wrote:

> Ah, it is Ok! :-) ,
> please forget my silly comment
>
> - Original Message - From: "Dietmar Maurer" 
> To: "Alexandre DERUMIER" ; "Cesar Peschiera" <
> br...@click.com.py>
> Cc: "pve-devel" 
> Sent: Monday, April 28, 2014 11:24 AM
> Subject: RE: [pve-devel] pve-firewall : enable|disable firewall
> at interface level
>
>
>
>>> I was talking about network interface of the vm configuration, not
>>> /etc/network/interfaces of the host ;)
>>>
>>> net0 : net0: virtio=1E:0B:85:27:8D:65,bridge=vmbr0,fw=0|1
>>>
>>
>> yes - i talk about the same ;-)
>>
>>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] first recompiling from source

2014-04-14 Thread Daniel Hunsaker
> > I'm trying to compile master on a wheezy install, but I'm encountering
> > problems with dependencies.
>
>
> I found it easier to build using an actual proxmox install.

Mostly because Proxmox ships with the Proxmox package repos already in the
sources list.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] enforce cpu check

2014-04-02 Thread Daniel Hunsaker
Looks like the default "kvm64" CPU isn't based on the Xeon feature set
(probably based on an AMD processor).  That is, I don't see a processor
type in your config...
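
If you want to pin an explicit CPU type, a sketch of how that might be
done (VMID and model are placeholders, and whether it avoids the enforce
error depends on the host):

qm set 101 -cpu host

which just puts a line like "cpu: host" into 101.conf.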
On Apr 2, 2014 10:17 PM, "Dietmar Maurer"  wrote:

> > But I think that enforce is enough for now,  the vm will stop when
> starting or
> > the targetvm when we do live migration if flags are not supproted
>
> I am unable to start my VMs now:
>
> warning: host doesn't support requested feature:
> CPUID.4001H:EAX.kvm_asyncpf [bit 4]
> kvm: Host's CPU doesn't support requested features
>
> # cat /etc/pve/local/qemu-server/101.conf
> bootdisk: virtio0
> cores: 1
> ide2: none,media=cdrom
> memory: 1024
> name: win7-32-spice
> net0: virtio=AA:8D:D8:DB:6C:82,bridge=vmbr0
> ostype: win7
> sockets: 1
> tablet: 0
> vga: qxl2
> virtio0:
> local:101/vm-101-disk-1.qcow2,format=qcow2,cache=writeback,size=32G
>
> # cat /proc/cpuinfo
> ..
> processor   : 7
> vendor_id   : GenuineIntel
> cpu family  : 6
> model   : 30
> model name  : Intel(R) Xeon(R) CPU   X3470  @ 2.93GHz
> stepping: 5
> cpu MHz : 2933.171
> cache size  : 8192 KB
> physical id : 0
> siblings: 8
> core id : 3
> cpu cores   : 4
> apicid  : 7
> initial apicid  : 7
> fpu : yes
> fpu_exception   : yes
> cpuid level : 11
> wp  : yes
> flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
> cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
> rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc
> aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm
> sse4_1 sse4_2 popcnt lahf_lm ida dts tpr_shadow vnmi flexpriority ept vpid
> bogomips: 5866.34
> clflush size: 64
> cache_alignment : 64
> address sizes   : 36 bits physical, 48 bits virtual
> power management:
>
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] win8r2 virtio net problems

2014-03-29 Thread Daniel Hunsaker
Ultimately, the Windows network stack is pretty terrible.  You'll get some
improvements from tweaking things (and it would be nice to have those
tweaks applied automatically wherever possible), but it'll never be near as
efficient as a Linux or BSD guest.  Microsoft hasn't done much with it
since they adapted it (poorly) from BSD's back around the time of NT4 (or
earlier), aside from fixing a few of the more major bugs.  Personally, I'd
avoid using Windows servers for anything network-heavy, whenever possible.
On Mar 29, 2014 9:36 PM, "Alexandre DERUMIER"  wrote:

> I have done test with win2012 + virtio-net, no tuning, last virtio driver.
>
> on a small 2core 5110  @ 1.60GHz, I can achieve around 500mbit/s
>
> But the cpu is limiting the speed.  (so 500mbits vm<->host, or 250mbits
> vm<->vm)
>
> (seem that virtio-net windows use a lot more cpu than linux)
>
> I try to play with offloading to see if I can improve performance.
>
>
> - Original message -
>
> From: "Alexandre DERUMIER" 
> To: "Dietmar Maurer" 
> Cc: pve-devel@pve.proxmox.com
> Sent: Saturday 29 March 2014 13:22:21
> Subject: Re: [pve-devel] win8r2 virtio net problems
>
> Hi, I never reach myself more than 900Mbits with windows guest. (2003 >
> 2012, any virtio driver
>
> But 30mbit/s seem very slow.
>
> Maybe trying to play with disabling offloading options (in windows guest
> driver options) ? gro,gso,...
>
>
> - Original message -
>
> From: "Dietmar Maurer" 
> To: pve-devel@pve.proxmox.com
> Sent: Friday 28 March 2014 15:07:23
> Subject: [pve-devel] win8r2 virtio net problems
>
>
>
> Hi all,
>
> we just observed bad performance values with virtio network on win8r2
> guests.
>
> A simple iperf from a VM to the host showed about 700Mbit/s.
>
> Even worse, iperf between 2 win8r2 guests on the same bridge showed below
> 30Mbit/s.
>
> Using linux VMs, I am able to get > 9Gbit/s.
>
> I already made all optimizations suggested at:
>
> http://pve.proxmox.com/wiki/Paravirtualized_Network_Drivers_for_Windows
>
> Any ideas?
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] enforce cpu check

2014-03-28 Thread Daniel Hunsaker
> I think that this is impossible to implement, because users may have
different hosts
> inside one cluster. So It is hard to tell what selection we should allow.

We should have the info on the host they're creating it on, which is what I
was thinking about, but it would certainly be tricky to do a lowest common
denominator of the whole cluster.  Though since the CPU features aren't
likely to change often, if at all, we *could* have each host report what it
supports when it joins the cluster, and go from there.  That data could
then be used to prevent creating a VM with a virtual CPU the selected host
doesn't support, migrating a VM to a host that won't be able to run it, and
anything else that might come up as useful to use such data for.

The information itself, namely which instruction set(s) and features a CPU
supports (rather than the make/model of the CPU, which would be
ridiculously overcomplicated to maintain a list for) is readily available
through the proc filesystem (I'll have to look up exactly what the path
is).  So getting that data will be pretty straightforward.  We'd then have
to compile/acquire a list of features needed by each of the virtual CPU
types KVM supports, and it would be a relatively simple matter, from there,
to filter out the ones the current host isn't compatible with.  So long as
it's kept on a strictly per-host basis, this approach should work fairly
well.  Though cluster-wide statistics on what people are actually using
might be nice to have available, too...
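
For what it's worth, the flag list is exposed in /proc/cpuinfo, so
gathering it per host is a one-liner (sketch only):

grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2 | tr ' ' '\n' | sed '/^$/d' | sort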
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] enforce cpu check

2014-03-27 Thread Daniel Hunsaker
> this avoid some bad setup like Opteron vcpu on a intel host for example,
> and avoid some bad live migrations

I wonder if there's a reliable way to prevent selecting a mismatching CPU
in the GUI/API...  Won't stop bad choices in direct config edits, nor bad
choices made prior to such an update (not to mention the bad migrations you
mentioned above), so this patch would be important either way, but it would
be nice to prevent selecting a non-functioning option in the first place.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] implement ipset ip/net groups

2014-03-27 Thread Daniel Hunsaker
> [ipgroup ipgroup1]
> ...
> [ipgroup ipgroup2]
> ...
> [netgroup netgroup1]

This looks a bit redundant...  Is the goal to allow sysadmins to write
things like

[ipgroup production]
...
[ipgroup development]
...
[netgroup internal]

and so forth?  If so, perhaps that would be a better example?  If not, why
not something closer to

[ipgroup 1]
...
[ipgroup 2]
...
[netgroup 1]

and automatically add the appropriate prefix?
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH 1/2] This patch refactors ZFS's LUN management code, removing the existing LunCmds implementations, in favor of a single lun management infraestructure which just invokes a 'man

2014-03-20 Thread Daniel Hunsaker
> This are my findings regarding maximun command line string which can be
passed to ssh sucessfully.
>
> CentOS-6.x/x86-64 => 256k
> CentOS-5.x/x86-64 => 128k
> Debian-Wheezy/x86-64 => 128k
> Old-Fedora/Sparc64 => 256k
> Solaris 10/Sparc64 => 256k
> Linux-dd-wrt/arm => 128k

Oh.  Wow.  Those are each around 512 times longer than I thought they
were.  We'd have to be actively trying before we'd run out of space...  So
never mind that concern.

I agree, though, that we really don't want to pass *everything* along to
the remote server.
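
An easy sanity check on both ends is getconf (hostname is a placeholder);
keep in mind the per-argument limit on Linux is typically lower than the
total reported here:

getconf ARG_MAX
ssh root@storage-host getconf ARG_MAX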
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH 1/2] This patch refactors ZFS's LUN management code, removing the existing LunCmds implementations, in favor of a single lun management infraestructure which just invokes a 'man

2014-03-18 Thread Daniel Hunsaker
> Just curious - how long are the command lines this patch generates?

Well, it looks like that depends on a number of things.  The entire
contents of %$scfg are added to the environment as PMXCFG_*, the full path
to the ssh key is added to the environment as PMXVAR_SSHKEY (and then added
again as an argument to ssh, if ssh is used), and depending on the command
being issued, you either get PMXVAR_LUNDEV or PMXVAR_LUNUUID, set to the
appropriate value.  So potentially quite long - certainly longer than I
expected on my first glance through the code.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH 1/2] This patch refactors ZFS's LUN management code, removing the existing LunCmds implementations, in favor of a single lun management infraestructure which just invokes a 'man

2014-03-18 Thread Daniel Hunsaker
> > > AFAIK ssh does not allow to pass environment variables to remote
hosts?
> > So I guess
> > > it is better to pass all arguments on the command line?
> > Yes and no.  While ssh doesn't pass the local environment to the remote
> > system, you can include the env var assignments as part of the ssh
command
> > line, just as you would when executing such a command locally.  Indeed,
> > that's what this patch does.
>
> Ah, OK. Thanks for explaining that.

Any time.

I do have a related concern, though.  We might run into a problem (using
either the env-var approach *or* the args-only approach) with command line
lengths under certain LUN drivers.  These limits vary from shell to shell,
and aren't even guaranteed to be the same for a given shell on different
host systems (Linux vs BSD vs Solaris vs Cygwin, etc).  If we do encounter
such troubles, an alternate approach will be required.

All I can propose at the moment is the construction of a temporary wrapper
script, which would set up the environment before calling the actual driver
script/binary, and would be copied (via scp, sftp, rsync+ssh, or so) to the
remote host, then executed via ssh.  This wrapper could then be deleted
from the remote system, or perhaps preserved for later reuse as long as the
various driver parameters remain the same (we already make use of hashes
for determining these kinds of things elsewhere).
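
A minimal sketch of that wrapper idea, with a hypothetical helper path and
values (only the PMXVAR_* naming is taken from the patch):

cat > /tmp/pmx-lun-wrapper.sh <<'EOF'
#!/bin/sh
PMXVAR_LUNDEV=/dev/zvol/tank/vm-101-disk-1
export PMXVAR_LUNDEV
exec /usr/local/sbin/lun-helper "$@"
EOF
scp /tmp/pmx-lun-wrapper.sh root@storage-host:/tmp/
ssh root@storage-host 'sh /tmp/pmx-lun-wrapper.sh create; rm -f /tmp/pmx-lun-wrapper.sh'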

Of course, it's entirely possible (and even likely) that command line
lengths will never become an issue, so my concern may be unwarranted.  But
in case the concern is valid, I thought I'd propose a potential solution,
if for no other reason than to find out if anyone else can think of
something better.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH 1/2] This patch refactors ZFS's LUN management code, removing the existing LunCmds implementations, in favor of a single lun management infraestructure which just invokes a 'man

2014-03-18 Thread Daniel Hunsaker
> AFAIK ssh does not allow to pass environment variables to remote hosts?
So I guess
> it is better to pass all arguments on the command line?

Yes and no.  While ssh doesn't pass the local environment to the remote
system, you can include the env var assignments as part of the ssh command
line, just as you would when executing such a command locally.  Indeed,
that's what this patch does.
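
So the invocation ends up looking something like this (host, helper path,
and UUID are made-up placeholders, not values from the patch):

ssh root@storage-host 'PMXVAR_LUNUUID=6589cfc0000000000000000000000001 /usr/local/sbin/lun-helper delete'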
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Create template from CT

2014-03-06 Thread Daniel Hunsaker
The backup/restore method isn't intended for distribution-worthy templates,
so this tends to be a non-issue.  If you have access to do backup/restore,
you have access to get at sensitive files within the CTs and backups
already anyway.

Ultimately, since the vast majority of template creation is necessarily
manual, the tar step being manual as well is a minimal amount of overhead
which preserves the principle of least surprise as a side effect.  Unless
we *can* make a magic method for removing all the sensitive stuff, we
should avoid letting people screw the whole process up through ignorance
by keeping the option out of the web interface.

Out of curiosity, how exactly are you setting up the CTs for conversion if
you're not using SSH?
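
For the record, the manual route being discussed boils down to something
like this on the host, with the CT ID and template name as placeholders:

vzctl stop 101
tar czf /var/lib/vz/template/cache/debian-7.0-custom_1.0_amd64.tar.gz -C /var/lib/vz/private/101 .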
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Create template from CT

2014-03-05 Thread Daniel Hunsaker
The officially-supported way of doing this would be to backup the CT, then
restore it under a new ID and tweak any settings which should be
different.  No SSH required; just a different way of looking at it.  You
have to do the tweaking steps with a new CT-from-template anyway, so there
aren't really any extra steps involved.  Unless you're planning on
distributing your templates to non-Proxmox users of OpenVZ, that tends to
be enough. (And even then, vzrestore is available outside of Proxmox either
way, so still no real issue.)
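
Concretely, that path is just the following (IDs and the dump file name
are placeholders; pick compression and storage to taste):

vzdump 101 -compress gzip
vzrestore /var/lib/vz/dump/vzdump-openvz-101-2014_03_05-10_00_00.tar.gz 102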
On Mar 5, 2014 3:48 AM, "James A. Coyle"  wrote:

> >> >why not?
> >> Well where does the proxmox config come from in a vzdump? I assume it
> has
> >> to be in there somewhere?
>
> >The config is stored in /etc/vzdump/
>
> Exactly, so we can add this to an exclusion list, along with a few other
> things.
>
> >> Looking further into it, other items need
> >> removing too which could be added to an excludes list in the tar
> command.
> >>
> >> > and I suspect that other things may need removing too - such as
> >> > networks, /tmp, etc. I need some more information on what the exact
> >> > differences are and we may need to consider the CT OS distribution
> >> > type. In addition, it lands in the cache dir of the storage device so
> >> > that no manual intervention is required.
> >>
> >> >Sure. But there is currently no real standard which defines what a
> >> >template should contain or not. So automatic creation is out of scope
> unless
> >> there is a well-defined standard.
> >> For now, we use DAB to create templates.
> >>
> >> DAB does not meet my needs - I spend a lot of time installing software
> which
> >> is not in a repo and has to be manually installed.
>
> >Installing software that way is a bad idea. I always create debian
> packages
> >before installing something.
>
> That is not correct. One example is any software from Oracle.com - it
> needs to be installed using the Oracle installer because it makes many
> config files which are specific to the target environment. In addition,
> what if the target CT is not Debian? The same issue exists with RPMs btw.
> Using a Debian package would not be supported by the vendor as a method of
> install and be a huge undertaking to do in the first place.
>
> >> So are you saying that
> >> because there is nothing written down on OpenVZ, Proxmox will not
> support
> >> this feature? I basically want to create a template from an existing CT.
>
> >IMHO creating a OpenVZ template is always a manual process, because you
> need to carefully
> >remove unwanted files/data/daemons.
>
> If it's created from an existing template, what is there to remove? I'm
> not talking about creating a template from scratch here - as that's not
> really possible anyway using just a CT. I'm talking about creating a new
> template from an existing CT.
>
> I'd really like to get this feature available in Proxmox as every time I
> create a new template I have to SSH to the box and tar the CT folder. It's
> such a simple process and it drives me crazy every time I have to SSH to
> the box.
>
> Is there any way of getting this feature into Proxmox - even if it means
> completely changing how it's implemented, or is this just a no-go from the
> start?
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] Add CT suspend/resume to PVE API

2014-03-04 Thread Daniel Hunsaker
Alright.  If I need to rebase again by then, let me know.
On Mar 4, 2014 1:02 AM, "Dietmar Maurer"  wrote:

> Thanks.
>
> I will try to add that after the 3.2 release, which is planned next week.
>
> > As discussed in a previous thread, following is a patch to support
> container
> > suspend (via vzctl chkpnt) and resume (via vzctl restore).
>
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


  1   2   >