6G IB NIC
for the backend network. For some reason the installation is aborting
because there is no DHCP lease. See the screenshot:
Any idea or workaround?
Cheers,
Vadim
--
Vadim Bulst
Universität Leipzig / URZ
04109 Leipzig, Augustusplatz 10
phone: +49-341-97-33380
mail: vadim.bu...@uni-leipzig.de
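A workaround sometimes used when an installer insists on a DHCP lease (not from this thread; the interface name and address range below are placeholders) is a throwaway dnsmasq on the deployment network:

# DHCP-only dnsmasq: --port=0 disables its DNS function
dnsmasq --no-daemon --port=0 \
  --interface=eth0 \
  --dhcp-range=192.168.0.100,192.168.0.150,12h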
Thanks guys - great help! All up and running :-)
On 08.08.2018 09:22, Alwin Antreich wrote:
Hi,
On Wed, Aug 08, 2018 at 07:54:45AM +0200, Vadim Bulst wrote:
Hi Alwin,
thanks for your advice. But no success. Still the same error.
MDS section:
[mds.1]
host = scvirt03
keyring = /var/lib/ceph/mds/ceph-scvirt03/keyring
Vadim
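For what it's worth, Ceph derives the default MDS keyring path from the daemon id (/var/lib/ceph/mds/<cluster>-<id>/keyring), so an id of "1" next to a keyring directory named after the host does not line up. A sketch of the more conventional section (an assumption, not necessarily the fix discussed here):

[mds.scvirt03]
host = scvirt03
# with this id the explicit path matches Ceph's default location
keyring = /var/lib/ceph/mds/ceph-scvirt03/keyring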
On 07.08.2018 15:30, Alwin Antreich wrote:
Hello Vadim,
On Tue, Aug 7, 2018, 12:13 Vadim Bulst wrote:
Dear [...]
[...] srl
-rw--- 1 root www-data 36 May 23 13:03 urzbackup.cred
What could be the reason for this failure?
Cheers,
Vadim
--
Vadim Bulst
Universität Leipzig / URZ
04109 Leipzig, Augustusplatz 10
phone: +49-341-97-33380
mail: vadim.bu...@uni-leipzig.de
Sorry - my fault. Everything is as it should be. My Puppet module was
delivering the wrong file.
On 24.01.2017 at 07:50, Vadim Bulst wrote:
Hi there,
I'd like to use DHCP address configuration on my bridged network interfaces.
The /etc/network/interfaces I'd like to use:
auto lo
iface lo inet loopback
iface eth4 inet manual
iface eth5 inet manual
auto bond0
iface bond0 inet manual
slaves eth4 eth5
bond_miimon 100
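The quoted config is cut off here; a sketch of how it typically continues, assuming the goal is a vmbr0 bridge on top of bond0 taking its address via DHCP (the bond mode is a placeholder):

bond_mode active-backup

auto vmbr0
iface vmbr0 inet dhcp
bridge_ports bond0
bridge_stp off
bridge_fd 0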
Hi Jatani,
did you forget to attach your error messages?
Cheers,
Vadim
On 18.01.2017 at 14:01, jatani halake wrote:
Hi all,
how are you doing? On my Proxmox node I destroyed a VM permanently, and then I
found a disk image under /etc/lvm/archive/VG. I tried to restore it by
creating a new VM and copy
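A note for anyone searching this later: /etc/lvm/archive holds VG metadata backups (text files), not the disk data itself. If the LV was only removed and its blocks not yet reused, the usual approach is a metadata rollback - a sketch (the VG, archive file, and LV names are placeholders):

vgcfgrestore --list VG                    # list archived metadata versions
vgcfgrestore -f /etc/lvm/archive/VG_00042-1234.vg VG
lvchange -ay VG/vm-100-disk-1             # reactivate the restored LV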
Best,
Vadim
On 09.01.2017 at 16:19, Marco Gaiarin wrote:
Greetings, Vadim Bulst!
On that day it was said...
Setting up pve-cluster (4.0-48) ...
Job for pve-cluster.service failed. See 'systemctl status pve-cluster.service'
and 'journalctl -xn' for details.
invoke-rc.d: in
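The usual first diagnostic steps here are the ones the message itself names; the hostname check below reflects a common cause of pve-cluster start failures, not necessarily this one:

systemctl status pve-cluster.service
journalctl -xn -u pve-cluster
# pmxcfs frequently refuses to start when the node's hostname does not
# resolve; /etc/hosts should map it to the node's address:
getent hosts "$(hostname)"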
Hi Emmanuel,
thanks for your playbook. I'll give it a shot and get back to you.
Cheers,
Vadim
On 10.01.2017 at 09:20, Emmanuel Kasper wrote:
On 01/09/2017 07:04 PM, Vadim Bulst wrote:
Hi Jeff,
I totally agree! I also tried to deploy PVE via Puppet:
class urzpvesrv (
) inh
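The class body is cut off above. A minimal sketch of what such a module might look like (an assumption, not Vadim's actual code; it presumes the puppetlabs-apt module and the jessie-era repo layout):

class urzpvesrv {
  # PVE no-subscription repository
  apt::source { 'pve-no-subscription':
    location => 'http://download.proxmox.com/debian',
    release  => 'jessie',
    repos    => 'pve-no-subscription',
  }
  # packages named in the wiki howto
  package { ['proxmox-ve', 'open-iscsi', 'ksm-control-daemon']:
    ensure  => installed,
    require => Apt::Source['pve-no-subscription'],
  }
}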
Proxmox software. I personally would recommend
doing this as two distinctly separate things.
Use your normal unattended install stuff for Debian using
Foreman/Puppet, and assign the host into whatever Proxmox group you've
configured in Foreman. On the next Puppet run, your normal Proxmox role
On Mon, Jan 09, 2017 at 01:26:03PM +0100, Fabian Grünbichler wrote:
see comment inline
On Mon, Jan 09, 2017 at 12:03:02PM +0100, Vadim Bulst wrote:
Sorry, my fault. It was the Debian Jessie howto and not the Wheezy one.
Attached are some logs. Let me know if you need some more.
As you may see in syslog
Well, I have to apologize. Yeah, this is true. I'll change it and give it
another try.
Cheers,
Vadim
On 09.01.2017 at 13:30, Fabian Grünbichler wrote:
On Mon, Jan 09, 2017 at 01:26:03PM +0100, Fabian Grünbichler wrote:
see comment inline
On Mon, Jan 09, 2017 at 12:03:02PM +0100,
ackages.rb]/ensure) defined content as '{md5
Cheers,
Vadim
On 09.01.2017 at 12:03, Vadim Bulst wrote:
Sorry, my fault. It was the Debian Jessie howto and not the Wheezy one.
Attached are some logs. Let me know if you need some more.
As you may see in syslog I'm calling a finish.sh script w
f --onetime --tags no_such_tag --server urzlxdeploy.rz.uni-leipzig.de --no-daemonize"
Cheers,
Vadim
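For context: that is the stock Foreman finish-template agent call, cut off at the front; it typically reads roughly like this (a sketch, not the exact line from this setup):

puppet agent --config /etc/puppet/puppet.conf --onetime \
  --tags no_such_tag --server urzlxdeploy.rz.uni-leipzig.de --no-daemonize
# --tags no_such_tag matches no resources, so the run applies nothing
# but still requests the certificate and submits facts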
On 09.01.2017 at 08:20, Fabian Grünbichler wrote:
On Sun, Jan 08, 2017 at 09:48:00PM +0100, Vadim Bulst wrote:
Dear all,
I'm trying to automate the PVE server installation with Foreman and
Puppet based on Debian stable. Well - I don't have any luck
installing the packages. I use
"apt-get --force-yes -y install proxmox-ve ssh postfix
ksm-control-daemon open-iscsi systemd-sysv"
seen on https://pve
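That command assumes the PVE repository and its key are already set up; the preceding steps of the Jessie howto look roughly like this (a sketch from memory of the wiki, not quoted from this thread):

echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-install-repo.list
wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -
apt-get update && apt-get dist-upgrade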
Thanks for your response. Using the debian-installer with ZFS - I think
that is an additional problem.
Cheers,
Vadim
On 04.01.2017 at 14:15, Emmanuel Kasper wrote:
Hi Chance,
On 01/04/2017 01:56 PM, Chance Ellis wrote:
Hi Emmanuel,
ZFS RAID 1 is supported for installation, correct?
Yes, ZFS r
On 04.01.2017 at 13:24, Emmanuel Kasper wrote:
On 01/04/2017 12:27 PM, Vadim Bulst wrote:
Dear list,
I'd like to preseed / automate the PVE installation with Foreman. I'm
following the howto to install PVE on Debian Jessie (
https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Deb
[asse]mbled to a software RAID and used to hold the regular PVE setup. I'm
able to create this layout, but I don't see any option to assign a
specific volume group name. Is it really necessary to have "pve" as the
VG name? Am I free to create my own partition layout?
Cheers,
Vadim
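On the VG-name question: PVE locates LVM storage through /etc/pve/storage.cfg, so a custom name should work as long as the entry matches it - a sketch, assuming plain LVM (not thin) and a placeholder VG name:

lvm: local-lvm
        vgname myvg
        content rootdir,images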
Hi Markus,
you are right! It was just some silly quotes around the ip address.
Thanks for your help.
Cheers,
Vadim
On 02/28/2016 01:46 PM, Markus Dellermann wrote:
On Friday, 26 February 2016, 23:28:17 CET, Vadim Bulst wrote:
Dear List,
after 2 years running a PVE cluster of four nodes with Ceph RBD, it was
time to move to PVE major release 4. We added a fifth node with the same
version of PVE (3.4) and a local ZFS volume, and migrated all important VMs
to this node and its ZFS storage. After this was done, we removed all ol