I also tried that, but depending on the order in which the datasets are mounted
and the PVE daemons start, the files get recreated and then the ZFS dataset
cannot be mounted again.
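A possible workaround for the non-empty mountpoint part of that (just a sketch,
assuming a ZoL version that supports the overlay property and a dataset named
rpool/var-lib-vz; adjust the names to your pool):

    # let the dataset mount even if the daemons already recreated
    # files/directories underneath the mountpoint
    zfs set overlay=on rpool/var-lib-vz
    zfs mount rpool/var-lib-vz

This only hides whatever got recreated underneath; it does not fix the startup
ordering itself.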
While we are on the subject of /var/lib/vz…
Does anyone see a problem with having the “local” storage be a ZFS dataset? I
don’t really need it, but I can’t delete it from the UI, and I need to exclude
the dir from my znapzend auto-snapshot plan.
I succeeded by doing:
zfs create -o mountpoint=/var/
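(For completeness, a sketch of the kind of command I mean — the pool and
dataset names here are just placeholders, adjust to your setup:)

    # dedicated dataset mounted where the "local" dir storage expects its files
    zfs create -o mountpoint=/var/lib/vz rpool/var-lib-vz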
> On Jun 16, 2016, at 21:40, Michael Rasmussen wrote:
>
> I guess root is for OpenVZ
>
Oh, I hinted at “zfs” thinking it would be obvious, but I guess it’s not, since
ZFS support appeared prior to Proxmox 4.x. So I am running 4.2-5 (no OpenVZ).
Thanks for the quick answer.
On Thu, 16 Jun 2016 21:29:05 -0400
Jean-Francois Dagenais wrote:
> Hi all,
>
> Accidentally deleted one (I think, don’t know which) directory in
> /var/lib/vz… Ironically, I was actually in the process of setting up znapzend
> to periodically snapshot my zfs datasets to prevent exactly this kind of
> accident!
Hi all,
Accidentally deleted one (I think, don’t know which) directory in /var/lib/vz…
Ironically, I was actually in the process of setting up znapzend to
periodically snapshot my zfs datasets to prevent exactly this kind of accident!
Super odd.
Could someone show me their “ls /var/lib/vz” so I can recreate what is missing?
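(For context, a rough sketch of the stock layout — this is an assumption based
on the default “local” dir storage on PVE 4.x, so please compare against a
healthy node before relying on it:)

    # typical subdirectories used by the default "local" storage
    mkdir -p /var/lib/vz/images \
             /var/lib/vz/template/iso \
             /var/lib/vz/template/cache \
             /var/lib/vz/dump \
             /var/lib/vz/private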
I have opened a new tracker issue:
http://tracker.ceph.com/issues/16351
It is a bug with udev on jessie; an old workaround udev rules file that is
still needed was removed from the jewel package.
It should happen on fresh jewel installs, not on upgraded Ceph (because the
upgrade doesn't remove this file).
applied.
It would be great if we could also view the backup log...
applied
this adds a button to show the configuration stored in a backup.
this helps if someone wants to know the details of the
vm in the backup (e.g. name, storages, etc.)
Signed-off-by: Dominik Csapak
---
www/manager6/grid/BackupView.js | 51 -
1 file changed, 50 insert
> Lee Lists hat am 16. Juni 2016 um 07:57 geschrieben:
>
> Hi,
>
> Did you notice that zfs 0.6.5.7 was released ?
>
> https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.6.5.7
>
> Is there a way to upgrade to this version?
>
> Regards,
> Lee
>
Yes I did, but did not have time so far to r
this patch series introduces three features, which make
the whole gui a little better
the split view sizes are now saved (with sane maxima)
the viewselector is also saved
the bottom log panel can be collapsed
i think these changes improve the usability a lot while
not adding much overhead
Dominik
this allows collapsing the bottom log panel to save space when it is not
needed, while keeping it available with one click
(even temporarily, when you click the title instead of the expand tool)
Signed-off-by: Dominik Csapak
---
www/manager6/Workspace.js | 3 +++
1 file changed, 3 insertions(+)
diff --git a/www/manager6/Workspace.js b/www/manager6/Wor
with this patch, the split view saves its state
in local storage, so that users don't lose it
after a refresh or even a new browser session
(they only lose it when they change browser/workstation)
if the window resizes (or refreshes),
the left/bottom panel gets resized to a sane width/height
in case this
to save the view across browser refresh/sessions
Signed-off-by: Dominik Csapak
---
www/manager6/form/ViewSelector.js | 1 +
1 file changed, 1 insertion(+)
diff --git a/www/manager6/form/ViewSelector.js
b/www/manager6/form/ViewSelector.js
index 0256b74..e075be1 100644
--- a/www/manager6/form/Vi
applied
Add a parameter array to foreach_volid so it can be used in the functions.
Correct typos.
---
PVE/QemuMigrate.pm | 12 +---
PVE/QemuServer.pm | 4 ++--
2 files changed, 7 insertions(+), 9 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index f0734cb..3e90a46 100644
--- a/PVE/Qem
applied
applied
On Thu, Jun 16, 2016 at 10:25:55AM +0200, Wolfgang Link wrote:
> Sorry, you are right about online disk moves.
> This is not optimal.
> I will check whether it is a bug or only a special case we haven't handled.
Probably a case where we forgot to try to use the zeroinit filter?
>
>
> On 06/16/2016 10
> So - with the hope of a quick answer - did we do something wrong, is
> this a bug, or is there a workaround?
This works if you stop the VM (offline). Could you please file a bug
at bugzilla.proxmox.com for this issue?
Sorry, you are right about online disk moves.
This is not optimal.
I will check whether it is a bug or only a special case we haven't handled.
On 06/16/2016 10:01 AM, Wolfgang Link wrote:
>
> Hello,
>
> I tested it, and moving a KVM disk from LVMThin to LVMThin works well; the
> new disk has 80% usage, the same as the source.
as with vm templates, restyle the summary panel
for lxc templates
Signed-off-by: Dominik Csapak
---
www/manager6/lxc/StatusView.js | 41 +++-
www/manager6/lxc/Summary.js | 140 +
2 files changed, 112 insertions(+), 69 deletions(-)
diff --git a
use the shortcut for padding,
same as everywhere else
Signed-off-by: Dominik Csapak
---
www/manager6/lxc/Summary.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/www/manager6/lxc/Summary.js b/www/manager6/lxc/Summary.js
index 8df54e9..b675956 100644
--- a/www/manager6/lxc/S
Hello,
I tested it, and moving a KVM disk from LVMThin to LVMThin works well; the new
disk has 80% usage, the same as the source.
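For reference, the thin usage can be compared on source and target with lvs
(a sketch; pve/vm-100-disk-1 is a placeholder for the real VG/LV name):

    # Data% shows how much of the thin volume is actually allocated
    lvs -o lv_name,vg_name,lv_size,data_percent pve/vm-100-disk-1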
On 06/16/2016 09:13 AM, Dennis Busch wrote:
> Good morning!
>
> We're right now holding a training in Verona. It is the first one with
> LVMthin. In our hands-on parts we recognized that if
Good morning!
We're right now holding a training in Verona. It is the first one with
LVMthin. In our hands-on parts we noticed that if you're moving a
disk from a thin-provisioned LVM storage to another one, the target
volume is filled to 100%, and so the thin provisioning is completely gone.
M