On 30/08/2017 08:32, Eneko Lacunza wrote:
> Hi,
>
> On 29/08/17 at 19:41, Petric Frank wrote:
>>
>>> Is it possible to configure a proxmox cluster behind a single public IP
>>> address ? If possible, how do I configure my nodes at the time of
>>> installation ? I don't see this configura
On 04/05/2017 09:26, Fabian Grünbichler wrote:
> because without HA/fencing you have no guarantee that the other node
> is actually off, and not just not reachable. the same applies for any
> guests potentially running there. "stealing" the guest (configuration)
> is therefore potentially dangerous
I have had a problem in a cluster with a configured GlusterFS.
I manually migrated the VMs to another server, though when I started one
of them it gave me the following:
kvm: -drive
file=gluster://srvpve2g/datastore2/images/201/vm-201-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,cache=none,aio=nati
Hi all,
I have had some troubles with 1 server in a cluster.
As the VMs all have their disks on shared storage I thought it would be
possible to migrate them from the current out-of-order server to the
others in the cluster.
HA is not enabled (I prefer doing it manually).
Though the GUI gave errors 'c
On 12/04/2017 07:50, Sten Aus wrote:
> I can confirm that we've successfully moved from iSCSI 10G (HP EVA
> storage) to Ceph (10G) by doing move disk from Proxmox GUI.
>
> Haven't encountered any problems yet. :)
>
> On 08.03.17 1:36, Kevin Lemonnier wrote:
>>> Has anyone used PVE "move disk" w
On 11/04/2017 21:51, Jeff Palmer wrote:
> By default, in a 4 node cluster, 3 members would need to agree for quorum,
> not 2. In your example, if 2 hosts split from the other 2 hosts, neither
> side will have quorum. No split brain in that scenario.
>
> Think of quorum as a 'majority wins' v
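The 'majority wins' rule above can be sketched in one line: a partition is quorate only with a strict majority of the total votes, i.e. floor(n/2)+1 (this sketch assumes the corosync default of one vote per node, no qdevice and no modified expected_votes).

```shell
# Quorum sketch, assuming one vote per node (corosync default):
# a partition needs a strict majority of all votes to be quorate.
quorum_needed() {
    echo $(( $1 / 2 + 1 ))
}

quorum_needed 4   # -> 3: in a 2/2 split, neither side reaches 3 votes
quorum_needed 3   # -> 2: one node can fail, the other two keep quorum
```

This is why an even node count buys no extra failure tolerance over the next-lower odd count: 4 nodes and 3 nodes both survive exactly one failure.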
Hi all,
I have a question about cluster quorum (Proxmox 4).
I currently have a 3-host cluster with shared storage (Gluster).
I have an old machine which I could use as a backup/service in case of
some host failure, though adding this server to the cluster would make
the cluster composed of 4 hosts
On 05/04/2017 12:07, Guillaume wrote:
> On 05/04/2017 at 11:49, Alessandro Briosi wrote:
>> On 05/04/2017 11:19, Guillaume wrote:
>>> On 05/04/2017 at 11:01, Guillaume wrote:
>>>> On 04/04/2017 at 23:28, Michael Rasmussen wrote:
>>&
On 05/04/2017 11:19, Guillaume wrote:
> On 05/04/2017 at 11:01, Guillaume wrote:
>>
>> On 04/04/2017 at 23:28, Michael Rasmussen wrote:
>>> On Tue, 4 Apr 2017 22:48:54 +0200
>>> Guillaume wrote:
>>>
The vRack system already took care of that; that's why I didn't
mention it.
On 24/03/2017 20:19, Yannis Milios wrote:
> You just need to edit the config file of the vm and edit the line(s)
> related to the vm disk from scsiX to ideX.
>
> Posting vm config would help...
> Generally speaking, Windows is a bit painful to virtualise.
>
>
> On Fri, 24 Mar 2017 at 18:48, w
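The scsiX-to-ideX rename suggested above can be sketched with `sed`. A minimal demo on a sample line follows; the real file lives at /etc/pve/qemu-server/&lt;vmid&gt;.conf, and the VM ID, storage and disk names below are made up for illustration. Edit with the VM shut down, and note IDE supports at most four disks (ide0-ide3).

```shell
# Demo of the scsiX -> ideX rename on a sample config line.
# Real config path: /etc/pve/qemu-server/<vmid>.conf (edit a copy first).
printf 'scsi0: local-lvm:vm-100-disk-0,size=32G\n' > sample.conf
sed -i 's/^scsi\([0-9][0-9]*\):/ide\1:/' sample.conf
cat sample.conf   # ide0: local-lvm:vm-100-disk-0,size=32G
```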
On 24/03/2017 13:57, Hexis wrote:
> I recently did a P2V onto LVM thin from a Windows 2003 server running
> on an old HP Proliant with a HP 6400 U-SCSI RAID. I used dd | ssh to
> accomplish it, and then of=the actual lvm volume. This completed
> successfully, however, I cannot seem to get SCSI
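The dd-over-ssh copy described above has the shape sketched in the comment below; the hostname, source device and target LV are made up. The runnable part is a local stand-in that verifies the byte-for-byte copy without real hosts or devices.

```shell
# Shape of the P2V copy described above (names are illustrative only):
#   dd if=/dev/sda bs=4M | ssh root@pve-host 'dd of=/dev/vg0/vm-100-disk-0 bs=4M'
# Local stand-in: copy a small "disk image" through a pipe and verify it.
dd if=/dev/zero of=source.img bs=1024 count=64 2>/dev/null
dd if=source.img bs=1024 2>/dev/null | dd of=target.img bs=1024 2>/dev/null
cmp source.img target.img && echo "images identical"
```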
On 06/03/2017 16:17, Marco Gaiarin wrote:
> Interesting. But how can I read this data:
>
> root@magneto:~# lvs
>   LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   data pve twi-aotz-- 783.23g             50.38  25.36
>
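In that output, Data% is how full the thin pool's data area is and Meta% how full its metadata area is; either hitting 100% is trouble. A quick awk over the pasted line shows how to pull the two columns out (the `lvs.out` file name here is just for the demo; on a live system `lvs -o lv_name,data_percent,metadata_percent` gives the same fields directly).

```shell
# Parse Data% and Meta% from a captured `lvs` line (demo input below).
printf 'data pve twi-aotz-- 783.23g 50.38 25.36\n' > lvs.out
awk '{printf "pool %s: data %s%% full, metadata %s%% full\n", $1, $5, $6}' lvs.out
# -> pool data: data 50.38% full, metadata 25.36% full
```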
Hi all,
I have had a strange behavior yesterday on a new cluster.
A Windows 2008 Guest suddenly was off, but I could not find any clue in
the logs on why it was off.
It's a VM which was migrated from a physical one over the weekend. It had
been working fine the whole time.
I then thought it had som
On 17/01/2017 15:00, lists wrote:
> Hi,
>
> On 10-1-2017 12:56, Alexandre DERUMIER wrote:
>> maybe as workaround, create a small boot drive with grub as
>> bootloader, to boot the windows system ?
>
> Didn't work out. :-(
>
> My source machine has a raid1 dynamic disk configuration, and
> appe
On 01/04/2016 11:20, Michael Rasmussen wrote:
I would try setting the disk controller in Proxmox to SATA;
SATA needs the AHCI driver.
Or set it to IDE, whose driver should already be included in the kernel.
Then change /dev/sda1 into /dev/hda1 in grub.
It will probably drop you into the rescue shell. Then
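The grub change above can be sketched with a one-line `sed`; this demo runs on a sample kernel line (the real edit happens inside the guest's grub config, e.g. menu.lst or grub.cfg, and the exact line varies per distro).

```shell
# Demo of the /dev/sda1 -> /dev/hda1 change on a sample grub kernel line.
printf 'kernel /vmlinuz root=/dev/sda1 ro quiet\n' > grub.sample
sed -i 's|/dev/sda1|/dev/hda1|g' grub.sample
cat grub.sample   # kernel /vmlinuz root=/dev/hda1 ro quiet
```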
On 31/03/2016 13:43, Emmanuel Kasper wrote:
Hi again
As Alessandro said this is probably a mismatch of the root device.
Do you have your old VM running on VmWare ?
If yes please note there the value of the root device in /proc/cmdline
it should be something like root=/dev/mapper/susa--vg-
On 30/03/2016 11:42, Edgardo Ghibaudo wrote:
I solved the previous problem.
Now the Linux guest (RHEL 4) in Proxmox environment starts, but after
a while the VM panics reporting the following message:
mount: error 6 mounting ext3
mount: error 2 mounting none
switchroot:
On 10/03/2016 11:11, Florent B wrote:
> Hi everyone,
>
> I think there's a little problem with ceph.conf permissions on Proxmox.
>
> With Infernalis release, all ceph processes are running under "ceph" user.
>
> root user starts processes, then changes user to ceph. All is fine.
>
> But proble
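A minimal sketch of the permission fix implied above: make ceph.conf readable by the "ceph" group while keeping it root-owned. This runs on a scratch file; the real file is /etc/pve/ceph.conf, which sits on the pmxcfs FUSE filesystem with its own permission handling, so treat this as illustrative only, and the `chgrp ceph` step assumes the ceph user/group exists.

```shell
# Illustrative only: group-readable config so processes running as "ceph"
# can read it. On a real node: chgrp ceph /etc/pve/ceph.conf (if supported).
touch ceph.conf.sample
chmod 640 ceph.conf.sample
stat -c '%a' ceph.conf.sample   # -> 640
```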
Hello all,
it would be nice to be able to assign a name (or a comment) to a backup.
This would simplify life when you save a VM into a certain "state" which
then can be restored later in another VM/CT.
I always have to look up which VM ID it's using. And if the VM gets
removed I lose this information
On 22/10/2014 19:04, Alessandro Briosi wrote:
...
Though I'm surprised that, cloning the VM through Proxmox, the disk is not
sparse and is using the whole 33G. So the question was whether it's a known
behaviour (maybe it's because of NFS and other filesystems which might
crea
On 22/10/2014 18:45, Paul Gray wrote:
Your definition of "sparse" and my definition of "cruft" are colliding
here.
"Sparse" == hardly used filesystem.
"cruft" == non-zeroed, *unused* sectors on the disk
Your sparse filesystem likely has a lot of cruft. The two facets aren't
mutually exclusive.
On 22/10/2014 17:41, Paul Gray wrote:
A clone of cruft should contain cruft, so your report of filesizes above
doesn't come as a surprise.
If you want a smaller footprint, I suggest using zerofree (linux) or
SDelete (windows) to zero out unused sectors on the disk to make the
compression of
Hi all,
don't know if this has been raised before.
I have created a KVM guest with a qcow2 file (32G per default).
Now if I look at the file on the filesystem, the size is reported as 32GB,
but at a closer look it's a sparse file, so with a system inside using only
5GB it's actually using 5GB on the server (ls
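The apparent-vs-actual size distinction above is easy to see with standard tools: `ls -l` (and `stat %s`) report the apparent size, while `du` reports the blocks actually allocated. A minimal demo with an empty sparse file:

```shell
# A sparse file looks big to ls but occupies (almost) no blocks per du.
truncate -s 32M sparse.img
ls -l sparse.img | awk '{print $5}'   # 33554432 apparent bytes
du -k sparse.img | cut -f1            # 0 (or close to it) allocated KiB
```

The same check on a qcow2 image tells you how much space it really consumes; note that copying without sparse-aware tools (`cp --sparse=always`, `rsync -S`) inflates it to the apparent size.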
On 15/09/2014 19:05, Joerg Hanebuth wrote:
Yes - dpkg --remove-architecture i386
but
dpkg: error: cannot remove architecture 'i386' currently in use by the database
but I guess uninstalling all i386 packages will mess up my system - I'm afraid ;)
So I'll have to wait until I have the machine on my
On 15/09/2014 17:34, Joerg Hanebuth wrote:
But what's that?
Ver. 3.2-4
Subscription is active.
At a customer's system I got this from apt-get update:
W: Failed to fetch https://enterprise.proxmox.com/debian/dists/wheezy/Release
Unable to find expected entry 'pve-enterprise/binary-i386/Packages
On 25/03/2014 08:40, Laurent Caron (Mobile) wrote:
Hi,
1/ You should fix your NFS host to be reliable.
2/ Another option is making backups on local disks and then pulling them;
no NFS dependency then.
3/ If you can SSH into the box, you can reboot it. No need to go on site.
I changed the op
On 21/01/2014 21:52, Alain Péan wrote:
> Yes, I already migrated VMs from VMWare to Proxmox. The vmdk image file
> format is equivalent to raw disk format in Proxmox. I think this is this
> one that you should use. qcow2 is for dynamic disks growing when adding
> more data, as are dynamic disk
On 21/01/2014 20:22, Tonči Stipičević wrote:
> Hello to everybody,
>
> till recently I hoped that I had 100% reliable solution for doing p2v
> this way:
>
> http://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE#Physical_.28running.29_Windows_server_to_Proxmox_VE_.28KVM.29_using_VMwa
On 12/09/2013 09:33, lyt_y...@126.com wrote:
> It's PERC H200I, FW Revision:7.15.08.00-IR
OK, I think the H200 does not have a BBU, so it can't be that one.
First check the firmware.
I'd do some tests with some other distro which has a more updated kernel
(there has been activity on the driv
On 12/09/2013 04:11, lyt_y...@126.com wrote:
> many thanks for your reply!
>
>>>It obviously is caused by the raid (either hardware or software)
> no RAID, each disk is used alone
>
Hmm, mpt2sas seems to handle PERC (LSI) controllers, so I'd guess
you have a RAID controller.
>>>Does it say somet
On 11/09/2013 03:30, lyt_y...@126.com wrote:
> hi,
> This device configuration is Dell R510:
> 2TB SAS Disk x 12
> 64G Mem
> Intel Xeon CPU E5620 x 2
> 6Gbps SAS Controller(MPT2BIOS-7.11.10.00(2011.06.02))
>
> Recently, the kernel of the device has been crashing; it happens once every two days.
>
>
Hi all,
just to know if anybody has ever tried doing this.
We have a physical Windows 2008R2 machine which uses the
WindowsImageBackup to create a bunch of xml and VHD files.
It's slowly dying and I'd like to know if it would be possible to
create a KVM guest from that VHD file?
Thanks,
Alessandro