Hello,
have you seen the Proxmox API documentation?
https://pve.proxmox.com/wiki/Proxmox_VE_API
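For example (rough sketch, the node name and VMID below are made up), a snapshot for a VM sitting on another node can be created through the API from any cluster member with pvesh:

# create a snapshot named 'ready' for VM 123 running on node r620-2
pvesh create /nodes/r620-2/qemu/123/snapshot --snapname ready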
On 11/10/20 4:10 PM, Alejandro Bonilla wrote:
> Hello,
>
> Whenever I run commands from within one of the nodes, they appear to be
> targeted to the local system only.
>
> root@r620-1:~# for i in {123..128}; do qm snapshot $i ready ; done
Yes, this is how it works.
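qm only acts on guests that live on the node you run it on. If the loop should cover the whole cluster, one option (untested sketch, assumes jq is installed) is to ask the API where each VM currently runs:

# snapshot VMs 123-128 wherever they run in the cluster
for vmid in {123..128}; do
  node=$(pvesh get /cluster/resources --type vm --output-format json \
    | jq -r ".[] | select(.vmid==$vmid) | .node")
  pvesh create /nodes/$node/qemu/$vmid/snapshot --snapname ready
done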
___
pve-user mailing list
pve-user@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
Hello,
Whenever I run commands from within one of the nodes, they appear to be
targeted to the local system only.
root@r620-1:~# for i in {123..128}; do qm snapshot $i ready ; done
2020-11-10 09:15:21.882 7f81d2ffd700 -1 set_mon_vals failed to set
cluster_network = 10.0.0.0/24: Configuration op
--- Begin Message ---
Hi Chris,
On 10/11/20 at 10:53, Chris Hofstaedtler | Deduktiva wrote:
Hi,
* Eneko Lacunza via pve-user [201110 09:03]:
I have hit a simple problem. Let's say we have a VM with 3 disks, with this .conf extract:
scsi0: ceph-proxmox:vm-100-disk-1,cache=writeback,size=6G
scsi1: ceph-proxmox:vm-100-disk-0,cache=writeback,size=400G
Hi,
* Eneko Lacunza via pve-user [201110 09:03]:
> I have hit a simple problem. Let's say we have a VM with 3 disks, with this .conf extract:
>
> scsi0: ceph-proxmox:vm-100-disk-1,cache=writeback,size=6G
> scsi1: ceph-proxmox:vm-100-disk-0,cache=writeback,size=400G
> scsi2: ceph-proxmox:vm-100-disk-3,cache=writeback,size=400G
--- Begin Message ---
Hi Dominik,
On 10/11/20 at 10:26, Dominik Csapak wrote:
hi,
you can check out `lsblk -o +serial`; it should output something like:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT SERIAL
sda 8:0 0 50G 0 disk drive-scsi0
└─sda1 8:1 0 50G 0 part
hi,
you can check out `lsblk -o +serial`; it should output something like:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT SERIAL
sda 8:0 0 50G 0 disk drive-scsi0
└─sda1 8:1 0 50G 0 part /
sdb 8:16 0 100G 0 disk dri
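As a side note (assuming the serial really shows up as drive-scsiN like above), a compact device-to-serial list can be had with:

# one line per whole disk: kernel name and serial
lsblk -dno NAME,SERIAL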
--- Begin Message ---
Hi Arjen,
On 10/11/20 at 9:12, Arjen via pve-user wrote:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 6G 0 disk
├─sda1 8:1 0 5.7G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5
--- Begin Message ---
On Tuesday, November 10, 2020 9:03 AM, Eneko Lacunza via pve-user
wrote:
> Hi all,
>
> I have hit a simple problem. Let's say we have a VM with 3 disks, with this .conf extract:
>
> scsi0: ceph-proxmox:vm-100-disk-1,cache=writeback,size=6G
> scsi1: ceph-proxmox:vm-100-disk-0,cache=writeback,size=400G
--- Begin Message ---
Hi again,
Ok, just clicking the "send mail" button revealed the solution ;)
# lsscsi
[1:0:0:0] cd/dvd QEMU QEMU DVD-ROM 2.5+ /dev/sr0
[2:0:0:0] disk QEMU QEMU HARDDISK 2.5+ /dev/sda
[2:0:0:1] disk QEMU QEMU HARDDISK 2.5+ /dev/sdc
[2:
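If I read the output right, the last number in the [host:channel:target:lun] tuple is the LUN, and it seems to line up with the scsiN index in the VM config, so [2:0:0:1] should be scsi1 (here /dev/sdc). For just the tuple and device node there is also the brief mode:

# brief output: only the H:C:T:L tuple and the device node
lsscsi -b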
--- Begin Message ---
Hi all,
I have hit a simple problem. Let's say we have a VM with 3 disks, with this .conf extract:
scsi0: ceph-proxmox:vm-100-disk-1,cache=writeback,size=6G
scsi1: ceph-proxmox:vm-100-disk-0,cache=writeback,size=400G
scsi2: ceph-proxmox:vm-100-disk-3,cache=writeback,size=400G
We have two