wn the 1 or 2 VMs on the same host.
>
Please describe what you mean by "not able to start"
Cheers,
Daniel
--
[ https://www.firewall-services.com/ ]
Daniel Berteaud
FIREWALL-SERVICES SAS, La sécurité des réseaux
Société de Services en Logiciels Libres
Tél : +33.5 56 64 15 32
ues I'm using for some AD members containers. Note however that
native PVE restore code might refuse to work with those UID (I recall the 65535
max UID hardcoded somewhere in the restore path, but can't remember exactly
where)
++
- On 24 Jan 20 at 17:52, Daniel Berteaud dan...@firewall-services.com
wrote:
> And vxlan interfaces aren't created. ifreload -a complains :
>
> error: /etc/network/interfaces: failed to render template (Undefined).
> Continue
> without template rendering ...
> war
ce name
error: /etc/network/interfaces: line41: iface vxlan${v}: invalid syntax
'%endfor'
error: vxlan${v}: invalid vxlan-id '${v}'
I do have python-mako installed (which wasn't pulled in as a dependency of
ifupdown2 BTW; maybe it should be).
Any idea?
--
[ https://www.firewall-services.co
>vxlan_remoteip 192.168.0.2
>vxlan_remoteip 192.168.0.3
> %endfor
>
>
> auto vmbr2
> iface vmbr2 inet manual
>bridge_ports glob vxlan1010-1020
>bridge_stp off
>bridge_fd 0
>bridge-vlan-aware yes
> bridge-vids 2-4094
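For reference, a complete version of such a template might look like the sketch below. ifupdown2 renders /etc/network/interfaces through python-mako when template syntax is present; the VXLAN ID range, remote IPs, and attribute names here are taken from the fragments above, everything else is an assumption:

```
%for v in range(1010,1021):
auto vxlan${v}
iface vxlan${v}
    vxlan-id ${v}
    vxlan_remoteip 192.168.0.2
    vxlan_remoteip 192.168.0.3
%endfor

auto vmbr2
iface vmbr2 inet manual
    bridge_ports glob vxlan1010-1020
    bridge_stp off
    bridge_fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```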
- On 24 Jan 20 at 8:20, Daniel Berteaud dan...@firewall-services.com
wrote:
> - On 23 Jan 20 at 20:53, Alexandre DERUMIER aderum...@odiso.com wrote:
>>
>> I think if you want to do something like a simple vxlan tunnel, with multiple
>> vlan, something like t
pt get all the cluster members, and create one gre tunnel with each
other node, like:
ovs-vsctl add-port vmbr0 gre0 -- set interface gre0 type=gre
options:remote_ip=10.22.5.2
ovs-vsctl add-port vmbr0 gre1 -- set interface gre1 type=gre
options:remote_ip=10.22.5.3
etc.
Not perfect, but working.
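The per-peer commands above could be generated in a loop. A dry-run sketch (it only prints the ovs-vsctl calls; the peer IPs are the example addresses from above, and in practice you'd list every cluster member except the local node):

```shell
#!/bin/sh
# Build one ovs-vsctl GRE port command per cluster peer (dry run: print only).
BRIDGE=vmbr0
i=0
CMDS=""
for peer in 10.22.5.2 10.22.5.3; do
    CMDS="$CMDS
ovs-vsctl add-port $BRIDGE gre$i -- set interface gre$i type=gre options:remote_ip=$peer"
    i=$((i + 1))
done
echo "$CMDS"
```

Piping the output to `sh` (after review) would create the full mesh on one node; the same script would be run on each member with its own peer list.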
- On 22 Jan 20 at 16:27, Chris Hofstaedtler | Deduktiva
chris.hofstaedt...@deduktiva.com wrote:
> * Daniel Berteaud [200122 09:50]:
>> I used to rely on being able to load nf_conntrack_proto_gre in PVE 5 days.
>> It's
>> still present in kernel 5.0 for PVE 6, but
Hi.
I used to rely on being able to load nf_conntrack_proto_gre back in the PVE 5
days. It's still present in kernel 5.0 for PVE 6, but missing in kernel 5.3. Is
that expected?
Cheers,
Daniel
plugins available somewhere as deb so I can play a bit
with it ? (couldn't find it in pve-test or no-subscription)
Cheers,
Daniel
enced (using a software watchdog) to prevent any corruption and allow
services to be recovered on the quorate part of the cluster. In your case,
there was no quorate part, as there was no network at all.
Cheers
Daniel
- On 19 Sep 19 at 7:57, Daniel Berteaud
wrote:
> Forgot to mention. When moving a disk offline, from ZFS over iSCSI to
> something
> else (in my case to an NFS storage), I do have warnings like this :
> create full clone of drive scsi0 (zfs-test:vm-132-disk-0)
> Forma
- On 17 Sep 19 at 18:27, Daniel Berteaud
wrote:
> Hi there.
> I'm working on moving my NFS setup to ZFS over iSCSI. I'm using a CentOS 7.6
> box
> with ZoL 0.8.1, with the LIO backend (but this shouldn't be relevant, see
> further). For the PVE side, I'm running PVE6 w
,
Daniel
On 9/12/2019 2:18 AM, Mike O'Connor wrote:
> HI All
>
> I just finished upgrading from V5 to V6 of Proxmox and have an issue
> with LXC 's not starting.
>
> The issue seems to be that the LXC is being started without first
> mounting the ZFS subvolume.
> This re
es on root except folders. Symbolic
> links from "/usr" merged have also been removed. You have to recreate
> them by hand. Maybe by starting the machine on a live or rescue system.
>
> bin -> usr/bin
> lib -> usr/lib
> lib64 -> usr/lib64
> sbin -> usr/sbin
>
ost (in VM config).
You need to enable discard for guest filesystems on mount (usually in
/etc/fstab), or just add an fstrim job to crontab.
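Concretely, the two options look roughly like this (a sketch; the device name, mount point, and weekly schedule are made-up examples):

```
# /etc/fstab -- mount the guest filesystem with the discard option:
/dev/sda1  /  ext4  defaults,discard  0  1

# ...or keep fstab as-is and trim periodically via the guest's crontab:
# m h dom mon dow  command
0 3 * * 0  /sbin/fstrim -av
```

The `discard` mount option trims continuously as blocks are freed; the periodic `fstrim -av` batch-trims all supported mounted filesystems, which tends to be gentler on the storage layer.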
26.01.2019 23:56, Daniel wrote:
> But when this is not enabled by default in Proxmox it sounds like a "bug"
inside
system in guest (via mount option of FS, or
regularly run fstrim or its analog in your guest OS) to free space at the
host's LVM level.
26.01.2019 23:17, Daniel wrote:
> Hi there,
>
>
>
> i have a question. Proxmox is telling me that my THIN-LVM getting
this.
Cheers
Daniel
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
Does anyone set up Proxmox as the cloud infrastructure for using
Canonical Juju? I am currently using it with VMware vSphere and would
like to try it with Proxmox.
--
Daniel BIdwell
On 15/11/2018 at 14:24, Marco Gaiarin wrote:
> Mandi! Daniel Berteaud
> In chel di` si favelave...
>
>> If at one time, the storage pool went out of space, then the FS is most
>> likely corrupted. Fixing the space issue will prevent further
>> corruption, but won'
On 15/11/2018 at 13:10, Gerald Brandt wrote:
> I've only had filesystem corruption when using XFS in a VM.
In my experience, XFS has been more reliable and robust. But anyway,
99.9% of the time, FS corruption is caused by one of the underlying layers
++
On 15/11/2018 at 12:49, Marco Gaiarin wrote:
> Mandi! Daniel Berteaud
> In chel di` si favelave...
>
>> Not that strange. It's expected to have FS corruption if they reside on
>> a thin-provisioned volume, which itself has no space left. Lucky you
>>
in provisioned volume, which itself has no space left. Lucky you
only had one FS corrupted.
++
--
Logo FWS
*Daniel Berteaud*
FIREWALL-SERVICES SAS.
Société de Services en Logiciels Libres
Tel : 05 56 64 15 32
Matrix: @dani:fws.fr
/www.firewall-services.com/
is that it could work with any storage
That'd be even better indeed!
I wasn't aware of that
++
cryptsetup open --type=luks /dev/sdc clear
Now you can use /dev/mapper/clear as LVM (pvcreate && vgcreate on one
node before using it).
Now, when you reboot one of your nodes, you just have to unlock the
device with
cryptsetup open --type=luks /dev/sdc clear
before you can access the data.
Access (VDA) in order to
access a Windows VDI desktop. Windows VDA is also applicable to
third party devices, such as contractor or employee-owned PCs.
HTH,
Daniel
- Original Message -
From: "Gilberto Nunes"
To: "PVE User List"
Sent: Friday, September 28, 2018 1:38:09
Cheers
Daniel
You only need the CAL if you are connecting to a Windows Server. You need
Windows Licenses for each copy of Windows you run however. If you subscribe to
Microsoft Volume Licensing you can use the same key for each copy you run.
--
Daniel Bayerdorffer, VP dani...@numberall.com
Numberall Stamp
ngful on different platforms, as
there can be too many differences in how this is measured (and how the
CPU governor is handled when nearly idle)
++
. This will
> then compute such values correctly.
I'll check this tool when it's available :-)
++
> regarding this: you ran into our forum's spam alert,
> now you should be able to post :)
Thanks, I can indeed now :-)
On 01/06/2018 at 16:55, René Jochum wrote:
> Get it from "pvesh get /cluster/resources" :)
Indeed, here it has the correct value. Still, isn't it strange that on
/nodes//qemu//status/current we don't get the same value?
Thanks,
Daniel
the number of cores assigned to the VM, and cpu
the % used), but the value I get for cpu is always 0, no matter the
activity of the VM.
Am I missing something, or is this a bug?
BTW, I'd have asked this on the forum, but I get "You have insufficient
privileges to post here"
Regar
Hi there,
can someone give me a hint where and how to increase ulimits?
I can't find a correct and working solution for it ☹
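In case it helps a later reader, on a Debian-based host limits are usually raised in two places; this is a sketch, and the nofile values are arbitrary examples:

```
# /etc/security/limits.conf -- applies to login sessions via pam_limits:
*    soft  nofile  65535
*    hard  nofile  65535

# Services started by systemd ignore limits.conf; set the limit in the
# unit instead (e.g. via 'systemctl edit <service>'):
[Service]
LimitNOFILE=65535
```

After editing, a re-login (for the PAM path) or `systemctl daemon-reload` plus a service restart (for the systemd path) is needed for the new limits to take effect.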
Cheers
Daniel
I would be interested in this too.
--
Daniel B
- Original Message -
From: "Johns, Daniel (GPK)" <daniel.jo...@fecrwy.com>
To: pve-user@pve.proxmox.com
Sent: Thursday, March 8, 2018 9:55:21 AM
Subject: [PVE-User] pve-zsync log
Hello,
Does anyone know a decent way of l
Hello,
Does anyone know a decent way of logging pve-zsync status? For failures, or how
long it took to run the sync?
Thanks
-Daniel J
Hi!
I have a server with Proxmox installed. I set up the daily auto snapshot.
Today I tried to roll back to a snapshot, but it failed because my disk was
full. I removed a few snapshots, but my CT doesn't start. I tried the rollback
again, but it failed as well:
lvremove 'pve/vm-100-disk-1' error: Failed to find
Hi,
but actually not in the GUI, right?
On 09.11.17 at 07:38, "pve-user on behalf of Alexandre DERUMIER"
wrote:
already available in proxmox 5.1 :)
#qm importovf
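A usage sketch for that subcommand, printed as a dry run; the VM ID, OVF path, and target storage below are made-up examples:

```shell
#!/bin/sh
# Build the qm importovf invocation (dry run: print only, do not execute).
VMID=200                            # hypothetical new VM ID
OVF=/mnt/export/appliance.ovf       # hypothetical OVF manifest path
STORAGE=local-lvm                   # hypothetical target storage
CMD="qm importovf $VMID $OVF $STORAGE"
echo "$CMD"
```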
- Original Mail -
From:
and test if the issue goes away.
On 03.11.17 at 18:05, "pve-user on behalf of Silvestre Figueroa"
<pve-user-boun...@pve.proxmox.com on behalf of silvestrefigue...@gmail.com> wrote:
Hi Daniel,
2017-11-03 13:31 GMT-03:00 Daniel <dan...@linux-nerd
s to things that
some MAC addresses can't be learned in the switch FDB.
03.11.2017 11:20, Daniel wrote:
> Hi there,
>
>
>
> i have some strange issues. I have a couple of Proxmox Server in a
Cluster.
>
> First of all I see on all Serve
Found it on my own.
pve-manager had been removed since I installed „vlan“.
So all OK after reinstalling pve-manager.
On 18.10.17 at 12:30, "pve-user on behalf of Daniel"
<pve-user-boun...@pve.proxmox.com on behalf of dan...@linux-nerd.de> wrote:
Hi there,
If I remember correctly, you need a minimum of 3 nodes to have a quorum.
On 18.10.17 at 12:13, "pve-user on behalf of Жюль Верн"
wrote:
I read the documentation and know how to create a cluster. The problem now is
that there is no quorum
per VM.
Any clue?
--
Regards
Daniel
Sounds perfect for me ;)
--
Regards
Daniel
On 25.09.17 at 08:18, "pve-user on behalf of Fabian Grünbichler"
<pve-user-boun...@pve.proxmox.com on behalf of f.gruenbich...@proxmox.com> wrote:
On Sun, Sep 24, 2017 at 05:11:11PM +, Daniel wrote:
> Does it have
Does it have any network trouble, or is it working fine and you just see that
error in the kernel log?
--
Regards
Daniel
From: dORSY <dors...@yahoo.com>
Date: Sunday, 24 September 2017, 18:24
To: PVE User List <pve-user@pve.proxmox.com>, Daniel <dan...@linux-nerd.de>
Subject: Re:
: 000188de2956 R11: 0293 R12: 7f5bed2a6cf8
[ 4675.285432] R13: 7f5bed2b701e R14: 7f5bed284010 R15: 000c
[ 4675.285446] ---[ end trace 5d57510b90b28d5f ]---
--
Regards
Daniel
Hey,
as i said: LACP is configured on my switches only.
2x HP Switches are connected to each other with 4x 1Gbe in a LACP Trunk.
My Proxmox hosts are connected with 1 Gbe to each Switch and the Bonding
interface has mode 6 (balance-alb)
--
Regards
Daniel
On 06.09.17 at 13:41, "pve
/sec
run 4: 35.9 Mbits/sec
run 5: 36.1 Mbits/sec
--
average ... 219.86 Mbits/sec
--
Regards
Daniel
On 04.09.17 at 15:31, "pve-user on behalf of Mark Schouten"
<pve-user-boun...@pve.p
Hey,
after changing to mode 5 it seems to work for me.
But I've run into other small problems; not important right now.
--
Regards
Daniel
On 01.09.17 at 22:22, "pve-user on behalf of Daniel"
<pve-user-boun...@pve.proxmox.com on behalf of dan...@linux-nerd
bridge_fd 0
bridge_maxage 0
bridge_ageing 0
bridge_maxwait 0
I am absolutely without any clue ☹ I've tested a lot and nothing really helps
to solve this problem.
--
Regards
Daniel
Hi,
yes, it's LVM-thin, but the container which I'm trying to recover seems
broken.
I will check another container in a couple of hours. Let's see what happens
then.
--
Regards
Daniel
On 25.08.17 at 10:46, "pve-user on behalf of Philip Abernethy"
<
(in theory yes it is,
but not really used ;))
So is there any way to "overbook" the system? In Proxmox 4 there was no
problem with it.
--
Regards
Daniel
On 25.08.17 at 10:01, "pve-user on behalf of Daniel"
<pve-user-boun...@pve.proxmox.com on behalf of dan...@linux-nerd
Hi there,
is there any way to shrink the disks of a container?
I see that I can increase a disk, but I want to decrease it,
for example from 500GB to 50GB.
--
Regards
Daniel
Seems the picture was removed.
I got this error:
Command „chroot /target dpkg --force-confold --configure -a“ failed
with exit code 1 at /usr/bin/proxmoxinstall line 385
--
Regards
Daniel
On 25.07.17 at 10:53, "pve-user on behalf of Daniel"
<pve-user-boun...@pve.proxmox.com i
Hi there,
I got 4 new servers on which I want to install Proxmox 5.
I downloaded the ISO and tried the install via CD.
After the installation I see the following error (attached as a picture).
Any idea what I can do?
--
Regards
Daniel
Problem solved by myself.
The problem was that the container was already running on another host.
--
Regards
Daniel
On 21.06.17 at 13:21, "pve-user on behalf of Daniel"
<pve-user-boun...@pve.proxmox.com on behalf of dan...@linux-nerd.de> wrote:
Hi there,
.
Does anyone have an idea what I can do here?
--
Regards
Daniel
Found it by myself.
It's located here: /dev/rbd/ceph/
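Building on that, mounting such a volume on the host could look like the dry-run sketch below (it only prints the commands). The volume name is the one from the question; the mount point and the read-only flag (to avoid touching a volume that might be in use) are my assumptions:

```shell
#!/bin/sh
# Print the commands to mount a CT's Ceph RBD volume on the PVE host (dry run).
DEV=/dev/rbd/ceph/vm-171-disk-1   # volume name from the question above
MNT=/mnt/vm-171                   # hypothetical mount point
STEPS="mkdir -p $MNT
mount -o ro $DEV $MNT"            # read-only, in case the CT still uses it
echo "$STEPS"
```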
--
Regards
Daniel
On 05.05.17 at 22:52, "pve-user on behalf of Daniel"
<pve-user-boun...@pve.proxmox.com on behalf of dan...@linux-nerd.de> wrote:
Hi all,
I have a VM which is on a Ceph storage: rootfs:
Hi all,
I have a VM which is on a Ceph storage: rootfs: ceph:vm-171-disk-1,size=20G
Is there any way to mount this image on the local PMX host? I need it to copy
some data ;)
--
Regards
Daniel
only the
half of 2700GB, which also means 3 HDDs can fail.
But what does Min mean? I also can't find anything in the documentation.
--
Regards
Daniel
Hi There,
just a short, simple question: is it possible to have the node name in the
backup instead of the node ID only?
--
Regards
Daniel
(10.0.2.110:127160) was formed. Members
Mar 10 15:52:27 host01 corosync[14350]: [QUORUM] Members[12]: 1 2 3 4 5 6 7 8
9 10 11 12
Mar 10 15:52:27 host01 corosync[14350]: [MAIN ] Completed service
synchronization, ready to provide service.
--
Regards
Daniel
On 08.03.17 at 13:16, "pve
Hi,
I was able to resolve this by myself. After I restarted the network interface
(bonding), it was working again.
So maybe the problem was the bonding in that case.
--
Regards
Daniel
On 08.03.17 at 12:51, "pve-user on behalf of Daniel"
<pve-user-boun...@pve.proxmox.c
And then I got the error: omping: Can't get addr info for omping: Name or
service not known
I absolutely can't understand what is happening here.
All servers have the same network config.
--
Regards
Daniel
On 08.03.17 at 12:39, "pve-user on behalf of Thomas Lamprecht"
<
Hi,
ok it seems that Multicast is not working anymore. But how can this happen? It
was working before without any trouble.
--
Regards
Daniel
On 08.03.17 at 11:15, "pve-user on behalf of Thomas Lamprecht"
<pve-user-boun...@pve.proxmox.com on behalf of t.lampre..
And I got a new error.
When I ran the omping command I got this:
omping -c 10 -i 1 -q 10.0.2.111
omping: Can't find local address in arguments
Maybe this is expected?
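That message makes sense: omping expects the local node's own address to be among the arguments, so every cluster member, including the host you run it on, should be listed. A dry-run sketch (the first address is the one seen earlier in this thread; the peers are assumed examples):

```shell
#!/bin/sh
# Build an omping command whose node list includes the local address (dry run).
NODES="10.0.2.110 10.0.2.111 10.0.2.112"   # must include this host's own IP
CMD="omping -c 10 -i 1 -q $NODES"
echo "$CMD"
```

The same command would then be run on every listed node at roughly the same time, since omping needs all members answering to measure multicast.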
--
Regards
Daniel
On 08.03.17 at 11:15, "pve-user on behalf of Thomas Lamprecht"
<pve-user-boun...@pv
ing after 13 was shutdown.
--
Regards
Daniel
On 08.03.17 at 11:15, "pve-user on behalf of Thomas Lamprecht"
<pve-user-boun...@pve.proxmox.com on behalf of t.lampre...@proxmox.com> wrote:
Hi,
On 03/08/2017 11:02 AM, Daniel wrote:
> HI,
>
>
message. failed: 13 was the
problem.
--
Regards
Daniel
On 08.03.17 at 10:53, "pve-user on behalf of Thomas Lamprecht"
<pve-user-boun...@pve.proxmox.com on behalf of t.lampre...@proxmox.com> wrote:
On 03/08/2017 10:40 AM, Daniel wrote:
> Hi there,
failed: Connection refused
Mar 8 10:35:10 host01 pvestatd[2090]: ipcc_send_rec failed: Connection refused
So /etc/pve/ is not mounted anymore and I can't restart anything.
Does anyone have an idea what could have happened?
--
Regards
Daniel
sizing of Harddisk?
--
Regards
Daniel
Hi,
looks perfect. I think I can adapt this to bond interfaces as well.
--
Regards
Daniel
On 28.02.17 at 17:36, "pve-user on behalf of Uwe Sauter"
<pve-user-boun...@pve.proxmox.com on behalf of uwe.sauter...@gmail.com> wrote:
I have a setup where I don't use
Hi,
my last test was
10 = high
0 = low
So a higher value has higher priority.
Cheers
Daniel
> On 18.02.2017 at 00:23, Gilberto Nunes <gilberto.nune...@gmail.com> wrote:
>
> Hi List
>
> Let's suppose we have 3 nodes HA Proxmox.
> There is a screen where we can define cer
this at the logs:
Feb 10 23:22:08 host07 corosync[2599]: [QUORUM] Members[12]: 1 2 3 4 5 6 7 8 9
10 11 12
Feb 10 23:22:08 host07 corosync[2599]: [MAIN ] Completed service
synchronization, ready to provide service.
So there are no errors from my side i would say.
Cheers
Daniel
Hi,
don't know anymore. With a forced re-add it was possible to re-add the node to
the cluster.
> On 09.02.2017 at 22:41, Thomas Lamprecht <t.lampre...@proxmox.com> wrote:
>
> Hi,
>
> On 09.02.2017 at 18:27, Daniel wrote:
>> Hi there,
>>
>> aft
[2026]: [dcdb] crit: cpg_initialize failed: 2
Feb 9 18:25:33 host04 pmxcfs[2026]: [status] crit: cpg_initialize failed: 2
Is there any way to add the Server or to fix this?
Cheers
Daniel
"uid" : "KUqfYKj7gMFyS8IDbXhSfw"
},
"ct:201798" : {
"node" : "host08",
"running" : 1,
"state" : "started",
"uid" : "
to configure the auto failover after one host fails or goes
down?
Cheers
Daniel
Hi there,
I'm asking just to be sure:
the traffic graph in the host overview is based on bytes, so this means 100M
is 1Gbit, right?
Cheers
Daniel
Hi there,
is there any way to set a different counter for VMIDs?
For example, I want to start like this: 2017001
So 2017 is fixed and the last 3 digits are an incrementing counter.
Cheers
Daniel
Hi,
I would say it's more of a softraid problem.
What do you get when you run „cat /proc/mdstat“?
Maybe it's still rebuilding.
> On 23.01.2017 at 16:35, Miguel González wrote:
>
> Dear all,
>
> I´m running Proxmox 4.2 on a software RAID of 2 TB SATA disks at
> 7200 RPM.
>
2 pg
> are needed to operate correct.
>
> This means if you write 1GB you lose 3GB of free storage.
>
>
> On 12/14/2016 12:14 PM, Daniel wrote:
>> Hi there,
>>
>> i created a Ceph File-System with 3x 400GB
>> In my config i said 3/2 that means that one of that d
shows me:
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
ceph 2 0 0 441G 0
But if I understand it correctly, MAX AVAIL should be around 800GB
Cheers
Daniel
Hi,
I didn't test it yet, but is LXC live migration implemented now?
If not, does someone know if there are plans to implement it?
Cheers
Daniel
Actually I have the same problem.
That's the reason why I started to develop my own app.
But this app will take several months until I'm able to show something.
> On 16.11.2016 at 12:40, Nicola Ferrari (#554252)
> wrote:
>
> Hi everybody.
>
> I'm running a 3-nodes
> On 14.11.2016 at 09:53, Fabian Grünbichler
> <f.gruenbich...@proxmox.com> wrote:
>
> On Mon, Nov 14, 2016 at 09:43:40AM +0100, Daniel wrote:
>>>
>>> but I would advise you to use vzdump to backup containers - you get a
>>> (compressed) tar arc
>
> but I would advise you to use vzdump to backup containers - you get a
> (compressed) tar archive, the config is backed up as well and you get
> consistency "for free" (or almost free ;)). normally, you want to
> restore individual containers anyway.
The problem is that there is no way to
Hi there,
before we used LVM-thin, we were able to back up all containers directly from
the host system.
Now everything is LVM. Is there any known and easy way to back up all hosts
including all VMs?
For example with rsync or BackupPC or whatever?
Cheers
Daniel
./aquota.
after killing that task by hand, it begins to boot as expected.
Does anyone know if this is normal and just takes some time to finish?
Cheers
Daniel
on
the container.
Cheers
Daniel
Hi,
it would be really cool if it were also usable with Logstash/Kibana (ELK stack).
Maybe there is also a way to monitor it?
Cheers
Daniel
> On 07.11.2016 at 18:44, Thomas Lamprecht <t.lampre...@proxmox.com> wrote:
>
> Hi,
>
> On 07.11.2016 18:26, lists wrote:
>> Hi,
You don't need to move anything, because /etc/pve is shared and all cluster
nodes know all configs from the whole cluster.
The magic word here is shared storage.
> On 07.11.2016 at 11:37, Guy wrote:
>
> Yes with shared storage it's simple. Move the Conf file to a new node
Do you have a shared Storage?
If not, how could it be migrated without having all data ;)
> On 07.11.2016 at 11:16, Szabolcs F. wrote:
>
> Hello All,
>
> I've got a Proxmox VE 4.3 cluster (no subscription) of 12 Dell C6220 nodes.
>
> My question is: how do I move a VM
>>>
>>> You also can enable HA for the ct and select "online"…
>>
>> Only with a Shared Storage.
> Yes, of course.
> But i guess, for live migration that`s allways the case, isn`t it?
Nope, a few releases back live migration was done via rsync ;)
> On 22.10.2016 at 20:38, Markus Dellermann <li-...@gmx.net> wrote:
>
> On Saturday, 22 October 2016 at 19:11:19 CEST, Daniel wrote:
>> You have to turn off the Container then you can migrate it.
>>
>> Cheers
>>
>> Daniel
>>
> You als
You have to turn off the container, then you can migrate it.
Cheers
Daniel
> On 21.10.2016 at 18:19, Marco Gaiarin <g...@sv.lnf.it> wrote:
>
>
> If i try to move/migrate a (running!) container:
>
> a) if i select 'online', i get:
>
> lxc live migratio
Hi there,
is it possible to easily change the LVs in LXC?
Someone created an LXC node on our backup space and I want to migrate it back
to our storage system.
In KVM this seems easy by just clicking around, but in LXC it seems not
supported yet :-(
Cheers
Daniel
ant ?
>
I am not completely sure, but at install time you can choose which FS you prefer.
Cheers
Daniel
Good Morning Karel,
how's it going today?
What kind of errors do you get?
> On 10.08.2016 at 18:36, karel.gonzalez wrote:
>
> after a Proxmox cluster update, corosync doesn't start and I can't start
> lxc
>
> help pleaseee
>
>
Maybe your switch has problems with the MAC switching?
> On 25.07.2016 at 13:35, Tonči Stipičević wrote:
>
> Hello to all,
>
> after I migrated to the latest version (enterprise - repos), have tested live
> migration.
>
> So , vm-win7 cannot survive more than 2
Very cool ;)
I can wait a couple of days ;)
Is live migration implemented as well?
> On 16.06.2016 at 15:06, Wolfgang Link <w.l...@proxmox.com> wrote:
>
> This will come in some days.
> The code is available in git.
>
>
> On 06/16/2016 03:04 PM, Daniel